Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 23rd, 2013
11:30a
Host Europe Acquired for $667 Million

Private equity firm Cinven Group will acquire European web hosting provider Host Europe Group for approximately $667 million from Montagu Private Equity, reports The WHIR. The acquisition announcement comes less than three years after Montagu acquired Host Europe from Oakley Capital Private Equity for $344 million.
Cinven is said to be starting discussions with lenders on debt financing within the next week. The private equity firm has not yet decided whether the financing will take the form of a loan, a bond, or a combination of the two.
For more details, see Host Europe Group Acquired by Private Equity Group Cinven at The WHIR.

12:30p
Storage Wars: Dispelling the Myth of Flash Economics

Walter Hinton is the Sr. Director of Field Marketing at Virident, a performance leader in flash-based storage-class memory (SCM) solutions.
While flash storage is generating a lot of positive buzz for its improved response times and low latencies, it suffers from a reputation of being very expensive. That is true if the cost of flash is measured the way we measure traditional spinning disks (cost/MB, cost/GB, cost/TB). But measuring flash as a capacity commodity misses the point: flash delivers a very different value proposition (IO and throughput) that hard drives can only match when many of them are bundled together with RAID. Let’s take a look at new metrics for measuring flash costs and illustrate the ROI that can be achieved with this new technology.
Storage Causes the Bottleneck
With multi-core processors and IO-hungry applications in today’s enterprise and web-scale organizations, the performance bottleneck has moved from the server to storage. Given that a typical hard drive can deliver 175 to 210 IOPs and a PCIe card can sustain more than 325,000 IOPs (using the same measurement technique), one could argue that it would take 1,547 SAS drives to achieve the same performance that can be delivered by one card. Given that each of the SAS drives could have as much as 600GB of raw capacity, you could easily end up buying 928TB of capacity to hold a database that actually needs less than 2TB to host the application, log files and meta data.
From a disk drive perspective, the cost/GB is lower: approximately $0.10/GB, versus as much as $8/GB for a PCIe card. But if performance is what you are buying, paying $16,000 for a 2TB flash card instead of $92,800 for 1,547 SAS drives is a significant difference.
A New Measurement
What about considering a new metric for flash technology? Since the goal of flash is to deliver application acceleration through low latencies and high IOPs, perhaps we should consider cost/IOP as the more relevant metric. If a 2.2TB card can deliver 325,000 IOPs at $16,000, that works out to $0.05/IOP. Against the example above of 1,547 drives at $92,800 to reach the same IOPs, the tables are suddenly turned: traditional storage comes in at roughly $0.28/IOP, more than a five-fold difference in cost.

Another big driver of cost in enterprise and web-scale environments is power. As one might imagine, flash really shows off when we look at cost per watt. With 1,547 drives each drawing anywhere from 7 to 17 watts (it varies by supplier), a scale-out environment can easily exceed 10,000 watts. That compares to a PCIe card with equivalent IOPs drawing 25 watts at maximum use. Given an industry average of $6 per watt per year, that is a power cost differential of roughly $60,000 per year for the hard drive solution versus $150 per year for a card.
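For readers who want to sanity-check these comparisons, the arithmetic takes only a few lines. The Python sketch below is a back-of-the-envelope illustration, not vendor pricing; it simply plugs in the example figures used above (1,547 drives, $0.10/GB SAS, a $16,000 card, 7 watts per drive, $6 per watt per year).

```python
# Back-of-the-envelope check of the flash vs. SAS comparison above.
# All inputs are the article's illustrative figures, not vendor quotes.

sas_drives  = 1_547     # drives needed to match one card (~325,000 / 210 IOPs)
sas_cap_gb  = 600       # raw capacity per SAS drive
sas_cost_gb = 0.10      # $/GB for SAS
card_cost   = 16_000    # $ for the ~2TB PCIe flash card
card_iops   = 325_000   # sustained IOPs for the card
sas_watts   = 7         # low end of the 7-17 watt per-drive range
card_watts  = 25        # PCIe card at maximum use
power_rate  = 6         # $/watt/year industry average

sas_capacity_tb = sas_drives * sas_cap_gb / 1000   # ~928 TB bought just for IOPs
sas_cost = sas_drives * sas_cap_gb * sas_cost_gb   # ~$92,800

print(f"SAS capacity needed to match the card: {sas_capacity_tb:.0f} TB (${sas_cost:,.0f})")
print(f"Cost per IOP:   SAS ${sas_cost / card_iops:.2f} vs. flash ${card_cost / card_iops:.2f}")
print(f"Power per year: SAS ${sas_drives * sas_watts * power_rate:,.0f} "
      f"vs. flash ${card_watts * power_rate:,.0f}")  # the article rounds SAS to ~$60,000/yr
```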
More and Faster Drives
Of course, all of the above does not take into consideration storage arrays that use massive amounts of RAM and hundreds of drives to meet the IOPs challenge. But these systems cost hundreds of thousands of dollars. Take, for example, an enterprise array targeted at high-end applications like enterprise databases and web-scale organizations. This array draws more than 400 watts (without a full complement of drives) and boasts 39,041 IOPs with built-in SSDs and SAS drives. It costs more than $115,000 when populated with just 17.8TB of capacity. The key value proposition of the array is the ability to add SSDs, fast SAS drives and capacity drives for a complete storage hierarchy. But the fact remains that if IOPs are the priority, you need more fast drives, so cost goes up and power consumption increases proportionally.
We must dispel a myth about storage costs, particularly for flash. As storage professionals, we have been taught that the cost of storage drops by 40 percent per year. This is not the case with flash (at least not today). Costs will come down as more technology is deployed and as the market grows, but for the past five years, we have seen only a modest decline in flash costs (5-10 percent per year). Don’t expect flash to be cheaper on a pure dollars/GB basis than HDDs any time soon. However, as we’ve discussed, flash shouldn’t be measured in dollars/GB but in dollars/IOPs, and that is where the cost savings are seen.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

1:00p
Overview: When to Use Cloud Computing to Replicate 
In deploying a cloud computing model, organizations have many options. One option is to use the platform for disaster recovery, business continuity, and extending the data center. With flexible “pay-as-you-grow” models, cloud computing can evolve with the needs of your business. Even so, many organizations are still asking: when should I use the cloud for replication?
- Uptime and resiliency. Organizations that must stay resilient at all times are constantly looking for creative ways to keep uptime high. Whether it is redundant hardware or a private hot site, keeping an environment up and running 99.99% (insert more 9’s here) of the time is a tough job. This is where cloud computing can really help. By utilizing either private or public cloud technologies, an IT organization can efficiently replicate its environment site-to-site, whether to a private site or to distributed sites operated by a cloud provider. With a well-planned deployment and a good infrastructure, companies can load-balance their IT environment between multiple active, cloud-based sites. So if one site goes down, users are transparently balanced to the next nearest or most available data center (a minimal health-check sketch after this list illustrates the idea).
- Remote backup and storage. Storage systems have become more efficient and can now do offsite, cloud-based backup and replication. For those organizations looking to take their environment offsite, using cloud replication may be an option. By having a dedicated link to a cloud-based data center, engineers are able to safely backup and even restore from the cloud provider. Furthermore, some companies are bound by certain data retention policies. This is where cloud storage can really help. The data is retrievable (not for DR purposes though) and can be reviewed as needed. Using cloud-based backup and replication creates a versatile environment capable of greater amounts of recoverability and business continuity. The current market for cloud-based backup and storage is growing and more providers are offering competitive pricing. Organizations can adopt a flexible growth plan capable of scaling with IT infrastructure data demands.
- Branches and other offices. Cloud replication goes beyond just backup and DR. Remote offices can benefit from a private cloud environment where a central data center delivers applications, workloads and even desktops to remote users and branch offices. With only a few connecting points at the branch office level, much of the heavy lifting is done by the corporate data center. Only a few machines or key servers, as well as possibly a WAN optimizer, would reside at the remote office. This type of connection methodology reduces the amount of infrastructure required at a remote office. Furthermore, using cloud replication and fewer components at the branch allows administrators to control resources and management very granularly. When planned and deployed properly, this can deliver significant cost savings for an organization.
- Building a “business-in-a-box.” Although we are really constructing a business “outside of the box,” the idea behind cloud computing allows many administrators to automate the launching of a new business branch. By centralizing the entire process within a cloud environment, IT only needs to deploy a few components at the end-point to allow connectivity into the cloud-hosted platform. This solution would hold workloads, files, desktops, applications and everything else needed to run an organization’s day-to-day business activities. This means the standard business launch process can be shortened dramatically: we reduce the amount of hardware we need and increase agility by using the cloud to replicate business processes. The beauty of this type of scenario is that the entire infrastructure can be hosted within a private, public or even hybrid cloud environment.
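To make the uptime scenario above concrete, here is a minimal, hypothetical health-check sketch in Python. The site names, URLs and latency figures are placeholders, and a production deployment would rely on a global load balancer or DNS failover service rather than a script, but the decision logic is the same: probe each replicated site and route users to the nearest healthy one.

```python
# Minimal, hypothetical failover sketch: probe each replicated site and send
# users to the nearest healthy one. Names, URLs and latencies are placeholders.

import urllib.request

SITES = [
    {"name": "primary-dc", "health_url": "https://dc1.example.com/health", "latency_ms": 12},
    {"name": "cloud-dc",   "health_url": "https://dc2.example.com/health", "latency_ms": 35},
    {"name": "dr-site",    "health_url": "https://dr.example.com/health",  "latency_ms": 60},
]

def is_healthy(site, timeout=2):
    """Return True if the site's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(site["health_url"], timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_site(sites):
    """Prefer the lowest-latency site that is currently healthy."""
    healthy = [s for s in sites if is_healthy(s)]
    if not healthy:
        raise RuntimeError("No replicated site is reachable")
    return min(healthy, key=lambda s: s["latency_ms"])

if __name__ == "__main__":
    print("Routing users to:", pick_site(SITES)["name"])
```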
There are many ways to deploy a solid cloud model. Remember, using the Wide Area Network (WAN) as a delivery mechanism can be resource intensive. Latency, bandwidth and the end-user experience must all be considered when cloud replication projects are deployed. Cloud computing, as a technology, arrived quickly and overwhelmed a lot of people.
The reality is that organizations are simply finding better ways to utilize the Wide Area Network. With more Ethernet services becoming available every day, and better infrastructure components along with them, many companies, both large and small, are finding ways to deploy some part of their environment into the cloud. When IT goals and the business vision align properly on a project, the result can be a very resilient and agile environment capable of effective cloud-based replication.

2:15p
Deliver Business Advantage with Bring Your Own Device

The business landscape is evolving as more devices continuously connect to the cloud. IT consumerization and BYOD are paving the way to conducting business in a new fashion and allowing users new levels of network access. However, allowing users to utilize their own devices also brings some direct challenges, including:
- Onboarding
- Maintaining security and mitigating risk
- Ensuring high service quality
- Managing growth
- Supporting a diverse environment
Still, many users insist on utilizing their own devices to access applications, workloads and their data. IT administrators now have to find ways not only to deliver that information quickly, but also to factor in the end-user experience as well as security. Creating a BYOD plan is a truly encompassing process, and there are numerous variables to consider in creating the right type of policy. HP’s white paper shows how to more easily deliver data to any device. Specifically, there are a few considerations to keep in mind.

Download this white paper to learn what you need to protect your environment and still deliver a powerful user experience. In creating the right type of platform, consider the following (a hypothetical posture-check sketch follows this list):
- Simple network access
- Intrusion detection and prevention
- Securing and preventing data loss
- Creating single pane-of-glass management
- Traffic shaping
- User monitoring
- Device profiling and posturing
- Creating a unified policy for BYOD
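As a rough illustration of how several of these items (device profiling, posture checks and a unified policy) fit together, here is a hypothetical posture-check sketch in Python. The device attributes, access tiers and rules are invented for illustration; they are not HP’s, or any vendor’s, actual API or policy model.

```python
# Hypothetical posture check: profile a device, evaluate its posture, and map
# it to a network access tier. Attributes and rules are illustrative only.

from dataclasses import dataclass

@dataclass
class Device:
    user: str
    os_version: str
    encrypted: bool
    screen_lock: bool
    jailbroken: bool

def access_level(device: Device) -> str:
    """Map device posture to a network access tier (unified BYOD policy)."""
    if device.jailbroken or not device.encrypted:
        return "quarantine"       # data-loss risk: block and notify
    if not device.screen_lock:
        return "guest-wifi-only"  # internet access, no internal applications
    return "full-access"          # internal applications and data

print(access_level(Device("jdoe", "iOS 6.1", encrypted=True,
                          screen_lock=True, jailbroken=False)))
```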
To be successful in today’s growing mobile market, organizations will have to gain more control over their environment. That means better visibility into user devices and the ability to deliver workloads quickly and effectively. HP’s white paper outlines the various BYOD initiatives currently in the market and how to best secure, control, and optimize this type of environment.

2:30p
A Closer Look at Data Center “High Availability” and “Service Delivery”

Demands on the modern data center infrastructure continue to grow. As more services are delivered via the data center, organizations are looking for ways to enter the colocation space. Because these infrastructures are becoming integral parts of any business, resiliency and high availability quickly become important concerns. With so many different types of requirements, the search for the right data center or service provider can be a challenge. Companies rely on data centers or colocation service providers to support mission-critical information technology (IT) infrastructure and maintain business continuity. A high-availability data center or colocation service provider minimizes the chances of downtime for critical applications and makes it easier to secure and manage a company’s IT infrastructure. As a result, the decision of which data center and/or colocation service provider to entrust with business- or mission-critical IT applications becomes even more critical.
According to this white paper from FORTRUST, the key to making the right decision often depends on asking the right questions.
But how do you know if you’re asking questions that truly help you make an informed decision? Asking potential data center and colocation service providers the right questions uncovers important information about the facility, network access, operations, performance history and the quality of service delivery.
Remember, not all data centers are designed the same, built the same, managed alike or operated alike. In many cases, companies looking for a data center or colocation provider primarily examine the data center’s critical systems infrastructure design through facility tours, one-line diagrams and visual inspections. Additionally, companies will collect and review information about:
- The data center’s location, facility, and risk mitigation features
- Business stability and compliance measures
- The data center IT equipment space and environment
- Access and connectivity
- Physical security

The concepts of reliability and “high-availability service delivery” are facilitated through an operational mindset in which attention to detail, process discipline and procedural compliance emanate from every aspect of the provider’s approach to operations and service delivery.
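As a simple illustration of how the evaluation areas above can be turned into a side-by-side comparison, here is a hypothetical weighted-scoring sketch in Python. The criteria weights and ratings are placeholders; FORTRUST’s actual workbook/checklist is far more detailed.

```python
# Hypothetical scoring sketch for comparing colocation providers against the
# evaluation areas listed above. Weights and ratings are placeholders only.

CRITERIA_WEIGHTS = {
    "location_and_risk_mitigation": 0.20,
    "business_stability_compliance": 0.15,
    "it_space_and_environment": 0.20,
    "access_and_connectivity": 0.25,
    "physical_security": 0.20,
}

def weighted_score(scores: dict) -> float:
    """Combine 1-5 ratings for each criterion into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

provider_a = {"location_and_risk_mitigation": 4, "business_stability_compliance": 5,
              "it_space_and_environment": 3, "access_and_connectivity": 4,
              "physical_security": 5}

print(f"Provider A: {weighted_score(provider_a):.2f} out of 5")
```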
Download this white paper today to learn about important criteria companies should evaluate when choosing a data center or colocation service provider. At the end of the paper, FORTRUST provides a detailed workbook/checklist where you’ll find a list of evaluation questions. These questions may help you gather the important information you need to make a well-informed decision about your data center or colocation service provider.

3:07p
Fusion-io Accelerates Flash Apps With Open Source Contributions

At the O’Reilly OSCON 2013 conference this week in Portland, Oregon, Fusion-io announced that its Atomic Writes API, contributed for standardization to the T10 SCSI Storage Interfaces Technical Committee, is now in use in the mainstream MySQL databases MariaDB 5.5.31 and Percona Server 5.5.31.
Fusion-io is contributing its NVMKV (nonvolatile memory key-value) interface to flash and is also posting the first flash-aware Linux kernel virtual memory Demand Paging Extension to GitHub for community testing. Atomic writes are used in the popular MySQL databases to streamline the software stack by eliminating the double write otherwise needed to maintain atomicity and database ACID compliance. In I/O-intensive workloads, Atomic Writes provides throughput increases of up to 50 percent, as well as a 4x reduction in latency spikes.
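Conceptually, the saving comes from dropping the extra staging write that databases perform to guard against torn pages. The Python sketch below illustrates the idea only; it is not InnoDB source code or the Fusion-io API, and the file handles and page size are placeholders.

```python
# Illustration only: why non-atomic storage needs a doublewrite step, and why
# device-level atomic writes can drop it. Not InnoDB or Fusion-io code.

import os

PAGE_SIZE = 16 * 1024  # InnoDB-style 16 KB page

def write_page_doublewrite(datafile, dwfile, offset, page: bytes):
    """Classic approach: stage the page, sync, then write it in place.
    Two writes and two syncs per page, but a crash never leaves a torn page
    without a clean copy to recover from."""
    dwfile.seek(0)
    dwfile.write(page)
    dwfile.flush(); os.fsync(dwfile.fileno())
    datafile.seek(offset)
    datafile.write(page)
    datafile.flush(); os.fsync(datafile.fileno())

def write_page_atomic(datafile, offset, page: bytes):
    """With atomic writes, the device guarantees the page is either fully old
    or fully new after a crash, so a single write (and sync) is enough."""
    datafile.seek(offset)
    datafile.write(page)
    datafile.flush(); os.fsync(datafile.fileno())

# Usage sketch (hypothetical file names):
# with open("table.ibd", "r+b") as data, open("doublewrite.buf", "r+b") as dw:
#     write_page_doublewrite(data, dw, offset=0, page=b"\x00" * PAGE_SIZE)
```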
“A flash-aware application optimizes the placement, movement, and especially the processing of data with awareness of NAND flash in the memory hierarchy, and offers configuration options for leveraging the properties of flash memory to improve performance, manageability, and return on investment,” said Pankaj Mehra, Fusion-io Chief Technology Officer. “With flash-aware applications, developers can eliminate redundant layers in the software stack, deliver more consistent low latency, more application throughput, and increased NAND flash durability, all with less application level code. Complementing our ongoing standards work, we are pleased to make NVMKV and the Linux Demand Paging Extension available to the open source community, as Fusion-io continues to add uncommon value to common standards.”
In hyperscale computing, key-value stores are popular for the schema-less data structures used in NoSQL databases. To reduce complexity, NVMKV eliminates the need to continually convert native key-value I/O into the block I/O used for disk storage. The Flash Translation Layer also delivers additional flash-aware benefits with NVMKV.
“Increasingly our customers expect MariaDB products to not just compete with, but to exceed what they can get from rival database technologies,” said Monty Widenius, MariaDB creator. “The highly innovative solutions we have worked on with the Fusion-io Atomic Writes API are a great example of how both companies are bringing the best thinking to the best database in the world.”

3:20p
Digital Realty Acquires Amsterdam Site

Data center developer Digital Realty Trust has purchased a 5.3 acre site in a suburb of Amsterdam, where it plans to build an 11.5 megawatt data center, the company said today. The 15,900 square meter development at De President, Hoofddorp, Haarlemmermeer will feature six data halls, each capable of supporting 1.92 megawatts of IT load, with dedicated infrastructure services and cooling options tailored to meet customers’ needs.
Construction is expected to begin in the fourth quarter of 2013, with the first two data halls to be delivered in mid-2014. Each data hall will be designed using Digital Realty’s POD 3.0 architecture, a modular approach to building mechanical and electrical systems. The site will also offer access to multiple Tier 1 and Tier 2 International Carriers, and will connect via Amsterdam’s main fibre ring to Digital Realty’s other European data centers as well as its U.S. locations.
“Amsterdam is ideally located at the heart of the demand we are seeing for today’s networked data centre requirements across Europe,” said Bernard Geoghegan, Managing Director, Europe, Middle East and Africa of Digital Realty. “Feedback from enterprises indicates that a ‘well-connected’ data centre is critical to their businesses. Transporting large volumes of data at high speeds is key to enabling IT initiatives, such as cloud computing. Our state-of-the-art De President facility will provide an ecosystem unrivalled in terms of connectivity to support companies of all sizes in Amsterdam.”
Digital Realty Trust has 122 properties comprising approximately 22.7 million square feet of space in 32 markets throughout North America, Europe, Asia and Australia.

5:24p
COPT Signs 2MW Deal in Northern Virginia

The exterior of the COPT DC-6 data center in Manassas, Virginia. (Photo: Rich Miller)
Corporate Office Properties Trust (COPT) has signed a 2 megawatt lease at its COPT DC-6 data center in Manassas, Virginia, the company said today. The new tenant was identified as a “Fortune 100 global consumer electronics and technology company.”
The lease means that COPT has now leased 6.3 megawatts of the facility’s 9 megawatts of available space, or 70 percent of capacity. Existing tenants at COPT DC-6 include CapGemini, EvoSwitch (LeaseWeb) and Vazata (previously Horizon Data Center Solutions).
“The caliber of the tenant and the size of this lease commitment illustrate the effectiveness of our re-branding and marketing efforts surrounding COPT DC-6,” commented Roger Waesche, Jr., President & Chief Executive Officer of COPT. “We are pleased to see meaningful progress toward stabilizing this asset.”
The 233,000 square foot COPT DC-6 data center opened its doors in 2010 as Power Loft, with a new design focused on energy efficiency. The two-story facility was one of the first designs to create a “hard separation” between the server area and the mechanical equipment by placing them on different floors. The property was acquired by COPT in Sept. 2010 for $115 million.
But leasing in Manassas has trailed the robust activity seen about 25 miles north in Loudoun County’s “Data Center Alley” in Ashburn. Last year COPT refined its marketing plan for the site, a strategy that has helped secure two substantial leases.
COPT is a real estate investment trust (REIT) focused primarily on serving the specialized requirements of U.S. Government agencies and defense contractors, most of whom are engaged in defense information technology and national security-related activities. As of March 31, 2013, COPT’s portfolio included 210 office properties totaling 19.1 million rentable square feet.

6:00p
Fishbowl Finds the Right Fit for Hybrid Cloud With Latisys

The path to hybrid infrastructure isn’t always an obvious one. “Hybrid” has replaced “cloud” as the hot buzzword, and from a customer perspective, there’s even more confusion around exactly what it means.
Fishbowl, a provider of software and services to the restaurant industry, has leveraged Latisys for a hybrid solution that encompasses colocation, managed services, and cloud. Before choosing Latisys, it uncovered a disconnect between its needs and what some pure-play hosting providers were pitching.
Fishbowl was using Latisys for colocation of its internal systems and intranet, while hosting customer-facing services with another managed hosting provider. Its customer systems were growing too fast for its previous provider.
“Our member database is growing very fast. We’re at 110 million members and growing,” said Khalid Namez, the IT Manager at Fishbowl. “Email is growing 40-50% every year. Storage and bandwidth are growing accordingly.”
The anticipated growth, as well as some initiatives that will move the company into other SaaS-based services around customer relationship management, prompted the company to seek out a provider that could accommodate its current and future needs.
A Mix of Services
Fishbowl uses a mix of services from Latisys that came about through several conversations. “It was an open conversation – an ongoing conversation that lasted about 2 months,” said Namez. “It was around what we think we need and what they can offer. The end result came from internal analysis and Latisys suggestions.”
The company has a typical software-as-a-service setup with a few pieces added in. A SQL database is the backend, with 64-CPU boxes doing all of the heavy processing. There’s a lot of business intelligence and data warehousing that requires serious horsepower, as well as application/web servers and other applications dedicated to handling massive email volumes.
“Because of the amount of mail flow, we need to manage that very closely,” said Namez. “A lot of providers won’t let us into their plans. We appreciate Latisys working with us. They manage dedicated firewalls for us, but for load balancers we use cloud. When we need highly detailed control over the firewalls, Latisys provided us with dedicated ones.”
The big difference the company found was the amount of flexibility Latisys offered in terms of services, along with the way the provider approached negotiations.
A Cooperative Process
“They wouldn’t just say ‘we think you should do this’; they came back with specific numbers for everything,” said Namez. “I wasn’t sure cloud load balancing would be right, for example, but they showed us the numbers. We had the same conversation about firewalls. Cloud-based firewalls aren’t the answer.”
The nature of Fishbowl’s restaurant-focused business means a lot of fluctuation. “A lot of our activity is seasonal,” said Namez. “Any holiday when people eat out, there’s a lot of fluctuation. After studying the bandwidth profile, the suggestion from Latisys was to use a blended network,” another example of the cooperative process.
There are a ton of companies out there in need of the right solutions to take them to the next level, and just as much confusion as to how “hybrid” will get them there. The path to hybrid is not an easy one; in Fishbowl’s case, the company didn’t fit neatly into any pure-play hosting provider’s plans. A good experience with Latisys on colocation led the company to move its customer-facing systems there as well. Latisys’ ability to provide a blend of solutions, and to show the numbers behind it, was crucial.
The rise of the hybrid provider is upon us, hybrid in terms of services, not just cloud. Customers are growing up from one-size-fits-all cloud, and growing out from their current setups, as they look to leverage their data and better serve customers.