Data Center Knowledge | News and analysis for the data center industry
Tuesday, November 5th, 2013
| 2:22p |
Preparing for Recovery: Four Strategies for Disaster Proofing Data
Jarrett Potts is director of strategic marketing for STORServer, a provider of data backup solutions for the mid-market. Before joining the STORServer team, Potts spent 15 years working in various capacities for IBM, including Tivoli Storage Manager marketing and technical sales. He has been the evangelist for the TSM family of products since 2000.
According to International Data Corporation's Digital Universe Study, less than a fifth of the world's data was protected in 2012, despite 35 percent of it requiring such protection. Levels of data protection are lagging significantly behind the growth in data volume.
Over the years, I’ve read and heard many different ways to get data and data centers ready for recovery. While leveraging cloud services, using “hardened” storage devices and synchronizing NAS devices are all good ideas, they are all missing one very important fact: the prep work is the most important part of recovery.
This may come as a shock, but backup is completely worthless if you cannot recover your data. Likewise, archiving is useless if the data cannot be retrieved within the time frame you need it.
Sounds like a stupid statement, right? You’d be surprised. I have talked with far too many organizations that buy data protection products and are never quite sure if their data will be recovered or not. All the bells and whistles are not worth the money if you cannot get your data back quickly and easily.
More than ever, robust data protection is imperative to recovery in the event of data loss. In fact, failure to safeguard company data can result in business disruption, devastating losses, and in some cases, catastrophic consequences to the business. Numerous reports and studies show that businesses that go through critical data loss often never recover.
Below are the four steps organizations should take to disaster proof their data.
Tiering comes first
The first thing every business needs to do is plan. That's right, plan. The classification of data is the most important part of all disaster recovery planning procedures. Planning for the worst means that you have to decide which data is important and which data needs to be recovered first.
Tiering your data helps align the value of the data with the cost of protecting it. It helps stretch your backup budget and makes data protection and recovery more efficient. The recovery point objective (RPO) and recovery time objective (RTO) should vary for each application and its data, and not all data should be treated in the same way during backup and recovery procedures. Not all data is created equal.
Tiering data is not simple, as it requires many different parties to agree on which data is most important. For example, any data that is historical (not used on a daily or monthly basis) should be the lowest tier. This data needs to come back after a disaster, but not until everything else is up and running. Perhaps tier one data, by contrast, stays on disk for fast restore.
Having three or four tiers of data breaks up the recovery into manageable parts, allowing the recovery process to be more focused. Here is an example of how to prioritize the tiers of data (a sketch of how these tiers might map to recovery objectives follows the list):
- Tier 1: Data that’s essential to your daily operations and/or highly confidential.
- Tier 2: Data that only needs to be accessed from time to time.
- Tier 3: Information you rarely access and that is being stored until your data retention date is met and it can be destroyed.
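As an illustration of how such a classification might be captured in practice, the sketch below maps the three tiers to recovery objectives and backup targets and sorts datasets into restore order. The tier policies, RPO/RTO values and storage targets are assumptions chosen for illustration, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class TierPolicy:
    rpo_hours: int       # maximum tolerable data loss, in hours
    rto_hours: int       # maximum tolerable time to restore, in hours
    backup_target: str   # where the protected copies live

# Hypothetical policies -- the values and targets are illustrative only.
TIER_POLICIES = {
    1: TierPolicy(rpo_hours=1,   rto_hours=4,   backup_target="onsite disk + offsite replica"),
    2: TierPolicy(rpo_hours=24,  rto_hours=48,  backup_target="deduplicated backup disk"),
    3: TierPolicy(rpo_hours=168, rto_hours=336, backup_target="offsite tape"),
}

def recovery_order(datasets):
    """Sort datasets so the lowest-numbered (most critical) tier is restored first."""
    return sorted(datasets, key=lambda d: d["tier"])

if __name__ == "__main__":
    datasets = [
        {"name": "hr-archive",  "tier": 3},
        {"name": "erp-db",      "tier": 1},
        {"name": "file-shares", "tier": 2},
    ]
    for d in recovery_order(datasets):
        policy = TIER_POLICIES[d["tier"]]
        print(f'{d["name"]}: tier {d["tier"]}, RPO {policy.rpo_hours}h, '
              f'RTO {policy.rto_hours}h, target: {policy.backup_target}')
```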
Replication
Replication is a key component of disaster recovery. The technology often works in combination with deduplication, virtual servers or the cloud to carry out its role in recovery. Copying data from a host computer to another computer (at a remote location) establishes redundant copies and ensures business continuity in the event of a disaster. When data replication is done over a computer network, changed data is copied to the remote location as soon as it changes.
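As a minimal sketch of that change-based copying (assuming a local source directory, a remote target reachable as a mounted path, and simple modification-time polling rather than a replication product), the idea looks roughly like this:

```python
import shutil
import time
from pathlib import Path

# Hypothetical paths: local production data and a remote DR target mounted locally.
SOURCE = Path("/data/production")
REPLICA = Path("/mnt/dr-site/production")
POLL_SECONDS = 30

def replicate_changes(last_seen: dict) -> dict:
    """Copy every file whose modification time changed since the previous pass."""
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        mtime = src.stat().st_mtime
        if last_seen.get(src) != mtime:
            dest = REPLICA / src.relative_to(SOURCE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # copy data and timestamps to the replica
            last_seen[src] = mtime
    return last_seen

if __name__ == "__main__":
    seen = {}
    while True:
        seen = replicate_changes(seen)
        time.sleep(POLL_SECONDS)
```

A real deployment would use array-, appliance- or software-based replication with change tracking and consistency guarantees; the point of the sketch is simply that only changed data moves, and that the remote copy is kept continuously current.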
Data needs to exist in at least two places at all times: onsite for near-line recovery and offsite for disaster recovery. The cloud is limited in the amount of data it can ingest on a daily basis and is even more limited in the amount it can restore. Placing data offsite at another raised-floor facility or location, or even shipping tapes to a secondary site, is the only way to recover massive amounts of data.
In a disaster, the data center or the backup technology may be gone or no longer functioning. Replication enables IT staff to restore the data that was stored in the data center so business operations can continue. Without that copy, the organization has no historical data to work from. Having a copy of the data enables the business to pick up close to where it left off when the disaster struck.
Timing
Timing is important. It is necessary to understand how much data you have and how long you have to recover it. Once these two items are known, planning the mechanics of the recovery can really begin. If 10 terabytes (TB) of data have to be recovered but your connections can only push 5 TB per day, recovery will take two days. If two days is too long, then increased bandwidth must be purchased.
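To make that arithmetic concrete, a quick back-of-the-envelope helper might look like the following. The 10 TB data set and 5 TB-per-day throughput are the figures from the example above; the one-day RTO is an added assumption for illustration.

```python
def recovery_days(total_tb: float, throughput_tb_per_day: float) -> float:
    """Days needed to restore a data set at a given sustained throughput."""
    return total_tb / throughput_tb_per_day

def required_throughput(total_tb: float, rto_days: float) -> float:
    """Sustained TB per day needed to hit a recovery time objective."""
    return total_tb / rto_days

if __name__ == "__main__":
    print(recovery_days(10, 5))           # 2.0 days, as in the example above
    print(required_throughput(10, 1.0))   # 10 TB/day needed for a one-day recovery
```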
Test, test and test again
Disaster recovery is no exception to the old adage, “practice makes perfect.” Testing is the only way to verify that all of your plans will work in the event of an actual disaster. If you do not test at least once a year, then you have no idea what your timing looks like or if you can even recover your data. You have to have an understanding of the weak spots or failure points.
It may seem simple, but just try it. Declare a disaster at your organization and see how fast you can line up the right resources to pull off the recovery. See how fast all parties could get to your recovery site. You do not even have to do the recovery. Perform this simple assessment, and you will start to see the need for thorough testing.
Truth be told, you cannot truly disaster proof your systems; however, you can plan for the worst. Planning and testing are the most important parts of "disaster proofing," but the testing can never end.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| 3:20p |
Pushing New Limits: HGST Launches 6TB Helium Hard Drive
Graphic shows the difference between an air-filled hard drive and a helium-filled drive. (Photo by HGST.)
HGST (formerly Hitachi Global Storage Technologies), a Western Digital (WDC) company, has launched a hermetically sealed helium-filled 6TB hard drive. After working closely with companies and organizations such as HP, Netflix, CERN, Green Revolution Cooling and other large social media and search companies, HGST developed the HelioSeal platform as a path for higher capacity storage. The new Ultrastar He6 drive is aimed at use in cloud storage, massive scale-out environments, disk-to-disk backup, and replicated or RAID environments.
HGST has previously described the benefits of using helium in a sealed drive. Because the density of helium is one-seventh that of air, there is dramatically less drag force acting on the spinning disk stack, so the mechanical power delivered to the motor is substantially reduced. The lower density also reduces the fluid-flow forces buffeting the disks and the arms that position the heads over the data tracks, allowing the disks, and the data tracks on them, to be placed closer together. The lower shear forces and more efficient thermal conduction of helium also mean the drive runs cooler and emits less acoustic noise.
Leveraging the inherent benefits of helium, the new Ultrastar He6 features a seven-disk 7Stac design holding 6TB, making it the highest-capacity hard drive on the market at its launch. The helium platform will serve as the main platform for new technologies such as shingled magnetic recording (SMR) and heat-assisted magnetic recording (HAMR), where HGST will continue to push the HDD areal density envelope. The new 6TB drive is 50g lighter than a standard 3.5-inch drive, has low power consumption with 49 percent better watts-per-TB, and fits in a standard 3.5-inch form factor.
“With ever-increasing pressures on corporate and cloud data centers to improve storage efficiencies and reduce costs, HGST is at the forefront delivering a revolutionary new solution that significantly improves data center TCO on virtually every level – capacity, power, cooling and storage density – all in the same 3.5-inch form factor,” said Brendan Collins, vice president of product marketing, HGST.
Large Scale Storage Success Stories
“The Netflix Open Connect delivery platform is a highly optimized video content delivery network. We serve billions of hours of streaming video per quarter to over 40 million subscribers,” said David Fullagar, director of Content Delivery Architecture, Netflix. “As part of our efforts to optimize the delivery ecosystem for Netflix and our Internet Service Provider partners, we strive to build better and better streaming appliances. The high storage density and lower power usage of the Ultrastar He6 hard drives allow us to continue with that goal, and create a great customer experience.”
Another large requirement for data storage comes from CERN, which operates the world’s largest particle physics laboratory. Olof Bärring, IT Department section leader responsible for facility planning and procurement at CERN, said, “Over the past 20 years, we’ve recorded more than 100 petabytes of physics data, and our projected data growth rate is accelerating. To scale efficiently, we must deploy vast amounts of cost-effective storage with the best TCO. We have tested the helium drive and it looks very promising: it surpassed our expectations on power, cooling and storage density requirements. We’re excited about the opportunity to qualify the HGST Ultrastar He6 hard drive in our environment.”
The 6TB HGST Ultrastar He6 hard drives are now generally available.
| 3:34p |
Rackspace's New Powerhouse Cloud Rolls Out
Rackspace has re-engineered its cloud for greater speed, throughput and reliability. A major part of the upgrade is a move to all-solid-state drives (SSDs) in a RAID 10 configuration, for a big boost in storage performance and I/O throughput. Also included are redundant 10-Gigabit networking, Intel Xeon E5 processors and larger RAM allotments, which together deliver 2.6 times more overall performance.
As for pricing, Rackspace has dropped prices by about 33 percent, in addition to upgrading its cloud architecture. Clients residing on the current cloud infrastructure will not be forced to migrate – though they have ample reason to do so – and the newly re-architected cloud now becomes the default.
The new cloud rolls out today in Northern Virginia, quickly followed by Dallas, Chicago and London. The company has set the first half of 2014 as the target for rollouts in Hong Kong and Sydney.
More Power, Less Cost
The company says the price drop and performance boost have been made possible by its scale and its ability to drive down costs. The performance enhancements, however, mean this is not just another shot across the bow in the cloud price war but a genuine change to the product offering, delivering a significant increase in power.
"Rackspace's new Cloud Server offerings provide enhanced performance at every level, with greatly expanded network capacity allowing customers to benefit from greater speed, scale and high disk I/O," said Scott Sanchez, Director of Strategy at Rackspace. "We spent a lot of time out on the road talking to customers, speaking with folks actually building on the platform. The theme we hear is around performance. Many grew up around the dedicated hosting product, and when they look at cloud, there seems to be a much smaller number of applications that are suitable. It's meant for a smaller subset."
Sanchez says the current cloud is most suitable for about 30 percent of applications, whereas the new enhancements move the mark to over 60 percent. The goal is to make the cloud fit for everything. “Over time, the role of dedicated hardware will be for custom use cases,” said Sanchez.
The company has tuned its public cloud offering for a wider variety of workloads. There will be two levels of service: Performance Cloud 1 and Performance Cloud 2. "We've engineered several classes that are much more powerful. We think it's the most powerful in the market," said Erik Carlin, Director, Cloud Compute Product Line.
According to Sanchez, Performance 1 is suited to worker-node, web server and batch-processing jobs, while Performance 2 is optimized for database workloads, including NoSQL stores such as MongoDB and Cassandra. "We've done a test of MongoDB, same config, on standard and on this, and we see a 10x increase in performance," Sanchez said.
“The first thing we’ve done is eliminated spinning disk and replaced it with high quality grade SSD in raid 10 configuration,” he added. “There is an extreme amount of disk I/O, enough to go around. We’re now going up to four times the RAM. We’re now going up to 120 gigs. We’re going up to 32 virtual CPUs, up from 8. We’ve switched to Intel Xeon processors. We’re embracing 10 gigabit networking, 4 separate 10 gig connections. It allows a high availability in the network. That allows us to deliver high performance, even in the event of a failure.”
These servers deliver more total performance than the existing cloud servers, as follows:
- 4X more total RAM
- 2X more total CPU performance
- 132X more total disk I/O (input/output)
- 8.3X more total network bandwidth
- 2.6X more total overall performance
The new Performance Cloud Servers’ high throughput network has been specifically designed to work with Cloud Block Storage, delivering up to 1.5X more disk I/O performance for Standard volumes and 2.5X more disk I/O performance for SSD volumes.
"Our existing cloud service offering is a generic compute platform, designed to run a variety of workloads but never designed for specific types of workloads," Sanchez said. "We took a workload-oriented view: how do we build applications? What are the typical workloads that we run? We sought to build an optimized cloud."
Rick Jackson, chief marketing officer at Rackspace, said, “In today’s world of instant demand, applications must be capable of scaling fast, and performing at scale without compromise. As a cloud provider, our role is to enable that without customers having to over-provision and constantly re-architect their applications. Our mission is to provide our customers with the best-fit infrastructure to optimize the performance of their applications, and today, we are redefining the benchmark for performance in a public cloud offering as part of our hybrid cloud portfolio.”
Performance Cloud Servers are powered by OpenStack. Customers can connect the new cloud servers to dedicated bare metal as part of the Rackspace Hybrid cloud, and the company continues to offer its brand of “fanatical” support.
| 3:35p |
Successfully Planning and Executing a Data Center Migration
Data center migration: three words that can cause sleepless nights for even the most experienced professionals in enterprise IT and facilities departments. While it can be a daunting challenge, it also offers a great opportunity to improve and rethink your IT architecture and to examine how well it meshes with your organization's long-term business strategies.
Join Data Center Knowledge contributor Julius Neudorfer on Thursday, November 21 for a special webinar in which Julius will discuss planning and executing a successful Data Center Migration.
This webinar will examine some of the major strategic issues that should be an integral part of your evaluation when forming a data center migration strategy.
Title: Data Center Migration
Date: Thursday, November 21, 2013
Time: 2 pm Eastern/ 11 am Pacific (Duration 60 minutes, including time for Q&A)
Register: Sign up for the webinar.
In this webinar, you’ll learn:
- Migration of Necessity vs Strategic Migration
- Platform Alternatives – Data Center, Cloud or Hybrid
- Major Decisions – Relocate Existing IT Equipment or Purchase New
- Mitigating Risk – Dollars vs. Downtime
Following the presentation, there will be a Q&A session with your peers and Julius. Sign up today and you will receive further instructions via e-mail about the webinar. We invite you to join the conversation.
| 4:26p |
GI Partners Acquires New Jersey Data Center Operated By Telx
Telx is the long-term tenant and operator of this facility in Clifton, New Jersey, and will remain so after a change of ownership of the property. (Photo by Rich Miller.)
Mountain Development Corp. has sold a recently completed Clifton, New Jersey, data center for $53.9 million to private-equity firm GI Partners and the California Public Employees’ Retirement System (CalPERS). Despite the sale, Telx, the long-term tenant and operator, will maintain operations at the 215,000-square-foot facility, dubbed NJR3.
The facility is part of a two-building campus. One facility, which was completed in June, is located at 2 Peekay Drive and the other adjacent data center is at 100 Delawanna. The sale closed in early October.
The deal reflects a trend in which real estate funds are buying fully leased data centers as investments.
“GI Partners, through one of its investment vehicles, recently finalized its purchase of 2 Peekay Drive in Clifton, New Jersey, essentially our NJR3 data center’s underlying building,” said Ron Sterbenz, Telx’s Senior Vice President, Product & Marketing. “For Telx, this represents a financing transaction, and in no way changes the operational aspects of NJR3, or any operations taking place on the Telx Clifton Campus. Telx still owns the 100 Delawanna building. Although GI Partners’ investment vehicle is 2 Peekay’s owner and landlord, Telx is the long-term exclusive tenant and in complete control of the day-to-day operations of the building. This is a financial decision that allows each party to focus on what we do well, and for Telx that is operating both data centers and its encompassing campus.”
“It’s really just a change-out of the people that own the bricks and mortar,” Mountain Development President Michael Seeve said on Wednesday to NorthJersey.com.
In June, Telx laid out its plans for the Clifton campus, which employs about 15 to 20 Telx staff.
Growing Investments in Data Center Real Estate
CalPERS is the largest U.S. pension fund, and this is not the fund’s first data center property acquisition. On behalf of CalPERS, GI Partners manages a $500 million discretionary core real estate fund, called TechCore, which targets data centers, internet gateways, corporate technology campuses and life science properties. In December 2012, GI Partners acquired two fully leased data centers.
The Clifton facility falls in line with the TechCore fund's mission. The acquisition is a good example of the healthy state of the data center industry. The smart money goes where the potential is, and right now, the industry is healthy enough that these fully leased properties are extremely attractive.
| 4:50p |
Arista 7000 X Series Takes On Cisco Catalyst
Adding to its software-defined cloud network portfolio, Arista Networks launched the 7000 X Series with the Arista 7300 and Arista 7250, which deliver a resilient architecture, enhanced programmability, control of virtualized networks, improved power efficiency, and price/performance optimized for universal cloud and data center deployments.
“Newer small form-factor core devices can enable network managers to reduce capital costs by 30 percent to 70 percent, and save 30 percent or more on operations expenses, compared with chassis-based switches,” said Mark Fabbi, vice president and distinguished analyst at Gartner. “Enterprises should not only compare the capital cost differences between fixed form factor switches and chassis-based switches, but also should look at operating expenditures (Opex) such as power consumption and maintenance, which can cut costs by at least 30 percent.”
The Arista 7304, 7308 and 7316, with 4, 8 and 16 line-card slots respectively, all share a common resilient architecture that scales up to 512 ports of 40GbE or 2,048 ports of 10GbE, with wirespeed throughput of 40Tbps (terabits per second). With power consumption under 3W per 10GbE port and latency under 2 microseconds, a pair of 7300 series switches replaces two Catalyst 6509Es with more than ten times the scale, throughput, latency improvement and power efficiency. Complementing the 7300X series, the Arista 7250X Series is a high-density solution delivering 64 ports of wire-speed 40GbE or up to 256 ports of 10GbE.
Both the 7300X and 7250X series feature a unified forwarding table, new duplex fiber 40GbE optics, real-time visibility into network congestion, and a physical-virtual-cloud network via Arista's VMtracer provisioning and native VXLAN support. The 7250QX-64 is available immediately, and the 7300X series will be available in the first quarter of 2014.
| 6:03p |
Data Center Jobs: CBRE
At the Data Center Jobs Board, we have a new job listing from CBRE, which is seeking a Building Engineer – Data Center Critical Facility in Birmingham, Alabama.
The Building Engineer – Data Center Critical Facility utilizes advanced skills to perform complex preventive maintenance and corrective repair of buildings, industrial systems, vehicles, equipment and grounds, working under limited supervision. The role monitors building system operations and performance, drawing on several trade skills such as carpentry, plumbing, electrical, painting, roofing, heating and cooling. It requires complying with all applicable codes, regulations, governmental agency and company directives related to building operations and work safety, and inspecting building systems, including fire alarms, HVAC and plumbing, to ensure equipment operates within design capabilities and achieves the environmental conditions prescribed by the client. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company's job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
| 6:30p |
Five Steps to Preparing Your Data Center for VDI
Virtualization at the desktop level has reached maturity and is being used in all types of organizations.
Virtualization, at least at the server level, has been in use for some time. Since then, the concept has expanded to user, application, network, security, storage and, of course, desktop virtualization (VDI). This new approach took the market by storm, and many thought it was the direct answer to their enterprise's desktop problems. Initially, there were some challenges – serious challenges. Data center teams never really understood the demands that early VDI technologies placed on the infrastructure, so during the onset of the VDI push there were some very flawed deployments.
Now, the maturity is certainly here. Many companies understand the direct fit for VDI. Labs, kiosks, call centers, educational institutions and healthcare are all finding powerful uses for VDI. Of course, as with any technological deployment, each organization is unique and will have its own set of business requirements. The requirements can depend on the type of organization, its vertical, how the user interacts with his or her desktop, and much more. Still, there are five key steps that should always be followed when creating a truly powerful VDI solution.
The Big 5 VDI Considerations
Virtualizing a desktop requires a few data center components to be present for the solution to be successful. Furthermore, planning and designing the infrastructure goes well beyond just sizing and scoping out the actual virtual desktop.
- Consider the end-point. Many organizations are not just trying to go with better computing options; they're simultaneously aiming for greener computing technology. The option to replace a thick terminal with a very thin or even zero client is very enticing. With VDI, you're able to deploy highly efficient end-points which look to the central boot server to pull their images. Here's the point – hardware manufacturers are seeing the end of the PC days. New types of endpoints like those from nComputing are creating powerful platforms while pulling less than 5 watts. Furthermore, mobile technologies like the Chromebook and Chrome OS allow for complete application and desktop delivery to truly mobile devices. These are web-enabled platforms capable of streaming very rich content directly to the end-user. The important part here is that these SoC designs will only continue to proliferate as we redefine the end-point, the data, and how this data is delivered.
- QoS, LAN and WAN optimization. VDI can be very resource intensive – this includes traffic over the wire. Having a good core switching infrastructure will help alleviate this pain by allowing administrators to create rules and policies around traffic flow. Setting QoS metrics for VDI-specific traffic can help remove congestion and ensure that the right traffic gets the proper amount of priority. As for traffic leaving the data center – knowing where the user is located and optimizing their experience based on certain criteria becomes very important. New VDI technologies allow users to connect over 3G/4G networks and still have their traffic optimized. The protocols delivering this rich media are improving, and WAN optimization systems and bandwidth in general have come a long way as well (see the sizing sketch after this list).
- Persistent vs. pooled. Or possibly both – or maybe just apps. When deploying VDI, there are two major options an administrator can choose from when designing the actual image. A persistent desktop is one that saves the changes a user makes to it. Pooled desktops, on the other hand, revert to their original state when rebooted. In some cases, many users will touch the end-point and the administrator may want the devices to boot into a clean state each time. In many cases, user groups will actually require that both pooled and persistent desktops be deployed. Remember, some users will have one type of image, while another set will have a completely different one. Keep in mind that virtual resource delivery doesn't only have to be desktops. In many cases, it's much more efficient to deliver only applications – instead of entire desktops. Plus, in some of those cases, users can utilize a hosted (or shared) desktop model instead of their own dedicated image. In all of these situations, the user and their behavior have to be understood to deliver the appropriate computing experience.
- Storage preparation. Large organizations will oftentimes have numerous storage controllers, while some smaller organizations will be using only one. Regardless of the number of storage controllers available, they need to be sized properly for VDI. To prevent boot and processing storms, organizations must look at the IOPS requirements for their images (a rough sizing sketch follows this list). To alleviate processing pains, administrators can look at flash technologies (NetApp, Fusion-IO, XtremIO) or SSD technologies (Violin, Nimbus) to help offload that kind of workload. Furthermore, intermediary platforms like Atlantis ILIO run on top of a virtual machine that utilizes massive amounts of RAM as the key storage repository. Developments around this technology now allow both persistent and non-persistent images to reside on RAM-based storage.
- The infrastructure consideration. High-density, multi-tenancy computing has truly evolved how we utilize resources within the modern data center. Massive blades and chassis now make up the DNA of data center and cloud computing. The introduction of truly converged systems created an even more efficient way of delivering all core computing resources from one massive chassis plane. Fast-forward to today. We see even more progression in the converged infrastructure field. When Cisco acquired Whiptail, they introduced a new model which will integrate millions of IOPS directly into a UCS blade chassis. This sort of trend will only continue as the digitization of the modern business becomes the norm. There will be more users accessing data via the cloud, more resources delivered over the WAN, and entire workloads will be delivered to a variety of end-points. All of this will require platforms which are capable of integrating network, storage, and compute to deliver true data acceleration.
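Bringing the network and storage points above together, a rough capacity-sizing sketch might look like the following. The per-session bandwidth, per-desktop IOPS, boot-storm multiplier and platform limits used here are illustrative assumptions, not vendor guidance, and should be replaced with values measured in a pilot.

```python
def peak_iops(desktops: int, steady_iops: float, boot_multiplier: float) -> float:
    """Estimate peak storage IOPS, assuming boot/login storms multiply steady-state load."""
    return desktops * steady_iops * boot_multiplier

def wan_utilization(sessions: int, kbps_per_session: float, link_mbps: float) -> float:
    """Fraction of a WAN link consumed by concurrent remote display sessions."""
    return (sessions * kbps_per_session / 1000.0) / link_mbps

if __name__ == "__main__":
    # Illustrative assumptions only.
    iops = peak_iops(desktops=500, steady_iops=10, boot_multiplier=3.0)
    print(f"Estimated peak storage load: {iops:,.0f} IOPS")

    util = wan_utilization(sessions=200, kbps_per_session=150, link_mbps=100)
    print(f"Estimated WAN utilization for remote sessions: {util:.0%}")
    if util > 0.7:
        print("Consider WAN optimization or QoS priority for display traffic.")
```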
As the virtual platform continues to evolve, organizations will need to make sure that their infrastructure continues to stay directly in line with business needs. Never forget that modern businesses are tied at the hip to their IT environment. A lack of technological understanding can allow the competition to jump ahead.
So, as with any new innovation or technology, do not overlook management and training. Take the time to learn the key metrics that revolve around keeping a virtual environment proactively healthy. Furthermore, educate your staff so that they can not only support the end-user more efficiently but also understand the true power of their virtual infrastructure. This in-line training and communication will help align the vision of the entire organization with that of the IT department.