Data Center Knowledge | News and analysis for the data center industry
Monday, July 6th, 2015
12:00p
Cloud Needs Drive Consolidation in Colocation Data Center Market
The colocation market continues to undergo consolidation, with all three of its leaders active participants in mergers and acquisitions, according to Synergy Research.
The top twelve providers in the research firm’s Q1 data account for 40 percent of the worldwide market. There is also a long tail of smaller providers, each with less than 1 percent market share.
The market is increasingly being driven by service-provider clients rather than enterprises, according to Synergy chief analyst and research director John Dinsdale. As a result, data center providers have to be able to provide both scale and breadth of geographic footprint to meet service-provider needs, prompting consolidation in the market.
In terms of market drivers, cloud is the big one, said Dinsdale, adding that the relationship with colo is somewhat complex.
“Growth of cloud puts a bit of a damper on the enterprise segment of the colo market, but helps to drive the service provider side of colocation,” he said. “But colo providers make a lot more money from their service-provider clients than they do from their enterprise clients.” (Note: in this context “service provider” means cloud and IT service providers, telcos, content and digital media companies).

Market leader Equinix holds close to 10 percent of the market and is in the process of acquiring TelecityGroup, which ranks tenth. There’s been chatter that the second-largest provider, Digital Realty, might acquire top-20 player Telx. A Telx acquisition would not only mean consolidation, but a deep expansion into retail colocation for Digital, which is primarily a wholesale data center provider.
Retail colocation dominates the top ten and will likely continue to do so. The lines between retail and wholesale continue to blur, but Dinsdale sees this as a minor trend rather than a major shift.
“Digital Realty has already dipped its toe into the retail market and may be getting ready to dive in. But in the other direction, for example, retail colo provider NTT is moving more into wholesale via a couple of acquisitions that it has made,” he said.
A Digital acquisition of Telx would be a much more significant blurring of the wholesale-retail lines, but Dinsdale questioned whether it would be a good fit.
“Personally, I’d say that the logic is somewhat questionable,” said Dinsdale. “While there will always be something of a blurry demarcation line between retail and wholesale, at their heart, these are two market segments with differing characteristics which have different business metrics and require somewhat differing skill sets.”
Rounding out the top three on the leader board is Japan’s NTT, which recently acquired e-shelter, significantly boosting its European market share. Europe continues to be a focal point and battleground for market share.
Another interesting highlight from the report is that colocation market leader-board spots are now split evenly between colocation specialists and telcos, six on each side. Four of the telcos on the board landed there through acquisition.
Whether telcos can sustain that success through organic growth is uncertain.
“This tends to be a bit of a tough play for telcos,” said Dinsdale. “Generally, they are in the colo business because they have to offer a comprehensive range of services to their enterprise clients. But their real goal is often to drive sales of other more core services. So their focus on colo is oftentimes a bit half-hearted.”
There are exceptions, he added. A few telcos, such as NTT, Verizon, CenturyLink, and Canada’s Rogers, have made big acquisitions of substantial colocation businesses.
Consolidation activity is not limited to the top players, said Dinsdale. “For sure, [regional activity] is happening already,” he said. “A lot of this is among small and medium-sized colo providers, but regional consolidation is ongoing. The market is pretty fragmented in the smaller metro areas, with a lot of small local players being active.”
3:00p
How to Enable Modern Cloud Management and Visibility
Before any cloud environment is deployed, it’s important to know what type of tools will be used to manage it. In virtualization, native hypervisor tools already come with many features that help with cloud-based visibility, and third-party tools can help consolidate management of distributed data centers into a single pane of glass. When working with a distributed cloud environment, it’s very important to proactively keep it in check.
There are many components to a cloud deployment. If an organization is using a public cloud, chances are the provider will have its own set of tools for administrators to use. However, it’s still very important to know how the entire environment is operating.
This means looking at several factors to ensure optimal cloud performance:
- User count. At any time, an administrator must know how many users are accessing the cloud environment, which server they reside on, and what workloads they are accessing. This type of granular control allows IT administrators to properly balance and manage server-to-user ratios. The only effective way to load-balance cloud servers is to know who is accessing them and in what numbers.
- Resource management. Deep resource visibility comes on multiple levels. As discussed earlier, it’s important to see how well physical resources within the cloud are being used. This also means viewing graphs, gathering statistical information, and planning for the future. Visibility and management will heavily revolve around an administrator’s ability to see what resources they have available and where they are being allocated. Again, it’s important to note that resources are finite and improper allocation can become costly very quickly.
- Alerts and alarms. A healthy environment with good cloud visibility will have alerts and alarms set up to proactively catch any issues that arise. By catching problems before they become outages, an organization can maintain higher levels of uptime. Setting up alerts so that the proper administrator is notified for each type of issue is a very important process (a minimal routing sketch follows this list). If a storage alert is sent out, a storage administrator should respond to it promptly; if the alert is server related, the server team must address it as soon as it comes in.
- Failover capabilities. With good visibility comes the ability to failover cloud servers without causing downtime for the user. If an error or an issue is caught, administrators may have the time to fail users over to a host capable of handling the user count. In many environments, this can be automated. If a physical host goes down, the VMs residing on the host will be safely migrated and balanced between other available servers. Of course, if there is such an outage, an alert will be sent out to the appropriate engineer.
- Roles and privileges. Good visibility also means having roles and privileges built into the environment. This means that the storage team should only have access to cloud-based storage components and the virtualization team can have access to VM management. This isolation of roles creates an environment capable of security and effective audit trails. It also greatly reduces the risk that a team member will make changes to the wrong part of the system.
- SLA considerations. When working with a third-party provider, it’s important to have visibility into the service-level agreements in place. This means monitoring uptime and usage of the environment. Depending on the type of SLA, different metrics will matter to the administrator: this might mean monitoring the number of VMs running or adjusting downtime requirements.
- Testing and maintenance. Cloud environments, just like any other infrastructure, will require maintenance and testing. Administrators must have a good plan in place for server patching, updates, and general maintenance, as well as for testing bandwidth and failover capabilities. Creating a test and maintenance plan will help keep any cloud environment operational longer and reduce issues caused by the many variables in the underlying infrastructure.
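To make the alert-routing idea above concrete, here is a minimal sketch in Python. The team addresses, metric name, and threshold are hypothetical, and the notification step is just a print; a real deployment would hook into whatever monitoring and paging tools the environment already uses.

```python
# Minimal sketch of role-based alert routing, as described above.
# Team addresses, metric names, and thresholds are hypothetical.
from dataclasses import dataclass

ROUTES = {
    "storage": "storage-team@example.com",
    "server": "server-team@example.com",
    "network": "network-team@example.com",
}

@dataclass
class Alert:
    source: str       # e.g. "storage", "server", "network"
    metric: str       # e.g. "datastore_free_pct"
    value: float
    threshold: float

def route(alert: Alert) -> str:
    """Return the contact responsible for this class of alert."""
    return ROUTES.get(alert.source, "noc@example.com")  # fall back to the NOC

def notify(alert: Alert) -> None:
    if alert.value < alert.threshold:                    # threshold breached
        contact = route(alert)
        print(f"ALERT to {contact}: {alert.metric}={alert.value} "
              f"(threshold {alert.threshold})")

# Example: a storage capacity alert goes straight to the storage team.
notify(Alert(source="storage", metric="datastore_free_pct",
             value=8.0, threshold=10.0))
```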
Regardless of the toolset being used, the most important point when analyzing visibility into a cloud environment is to ensure that all aspects of the infrastructure can be managed. Poor cloud management leads to improper resource provisioning and throttled performance. The key takeaway for cloud visibility and management is this: proactive cloud management ensures optimal performance of the environment for both the administrator and the end user.
Cloud environments will continue to evolve and expand. Business needs are the drivers for technological innovation and cloud computing is helping push organizations forward. As more IT environments see the benefits of the cloud, administrators will need to learn how to properly size, manage, and deploy effective cloud platforms. Planning will always be the pivotal step with any cloud initiative and can often mean the success or failure of a cloud deployment.
Remember, although every environment is unique, there are best practices which can be followed depending on the needs of your environment. When working with cloud computing, remember the following:
- Sizing and deployment steps are important. Always create a plan based on current and future business needs.
- Know the difference between different cloud delivery models. Since needs will vary, the right solution may be a single type of cloud or a combination of solutions.
- Visibility into a cloud environment is crucial to its effective management. Administrators must know how their cloud is operating at any given time.
- Cloud agility will revolve around an organization’s understanding of their needs and how they are relayed to the IT department.
- Resources are finite, so plan and use them wisely.
The cloud’s ability to create an agile, distributed data center infrastructure can help a business grow and achieve its goals. With proper planning and a good deployment methodology, cloud platforms provide a powerful tool for corporate growth.
3:30p
PagerDuty Promises Cash as Downtime Compensation
Looking to become one of the few IT vendors that put some teeth behind their service-level agreements, incident management platform provider PagerDuty this week announced a downtime insurance program under which it will pay out up to $3 million to customers that experience outages while using its IT incident management service.
PagerDuty CTO Andrew Miklas said most SLAs only compensate organizations based on a small percentage of the money they invested to acquire the product. Even then, that compensation usually takes the form of credits toward continuing to use the product.
Downtime Insurance, in contrast, represents one of the first efforts to tie a cash payment to an SLA, Miklas said.
“When we are talking about incident response, it’s always about peace of mind,” he said. “In accordance with that we wanted to tie the SLA to an actual business value.”
The insurance is available to any organization that signs up for the Enterprise Plan attached to the PagerDuty service. In the event of downtime, PagerDuty and the customer would jointly comb through PagerDuty logs and assess the damage to the business.
PagerDuty is willing to assume responsibility for any downtime relating to an incident, up to a maximum $3 million.
By comparison, most other SLAs are essentially toothless, Miklas said. Not only do most SLAs hide behind a best-effort clause buried somewhere deep in a contract, they don’t actually result in the provider of the IT service assuming any financial risk.
PagerDuty recently published a survey of 100 business and IT professionals, conducted by Forrester Consulting on its behalf, which found that more than half of respondents’ organizations experience significant disruption of IT services at least once a week. Worse yet, half the time IT is notified of the disruption by internal employees or external customers.
The study suggests that one of the reasons this occurs so often is that many IT organizations are trying to make sense of six or more IT management tools, each addressing a specific tactical issue. As a result, correlating all that information into something that resembles actionable intelligence is next to impossible.
Having access to an incident-response system essentially creates a framework around which the IT organization develops a discipline to not only minimize any potential downtime, but also keep the rest of the organization informed about what is actually occurring, and who specifically is taking care of the problem, Miklas said.
Naturally, the degree to which organizations have a formal process in place for managing IT incidents varies greatly. But, as the saying goes inside and outside of IT, it never hurts to expect the unexpected.
5:16p
Cloud Collects Real-Time Data on 200 Tour de France Cyclists
Dimension Data is powering real-time data behind the Tour de France, which is taking place through most of this month. The company said this is the first time spectators are able to view real-time information on all individual riders during the major cycling race.
The company said data on riders is processed by its cloud platform across five continents, consuming over 350 million CPU cycles per second. A website showing real-time data is built to support 17 million viewers and 2,000 page requests per second.
Constantly improving ability to track and process data in real time is powering new and innovative ways to look at sporting events. IBM played a similar role during the recent US Open tennis tournament. Now, Dimension has built a platform that brings a new depth of understanding to the Tour.
The Tour de France was first held in 1903 and has since evolved into one of the biggest global sporting events. Run over multiple stages, the race kicked off Saturday and will go on through July 26.
Data is collected from live trackers mounted under the saddles of about 200 individual riders. Dimension’s system then processes and analyzes the data, making it available to cycling fans and the media. Over the three weeks, Dimension said it will roll out a range of new capabilities, including a beta live-tracking website.
The live tracking website lets users track the speed at which each cyclist is riding, how far along they are in the race and their position in relation to other cyclists.
“Until now it was difficult to understand what was happening outside of what could be shown on the live television coverage,” said Dimension Data executive Jeremy Ord in a press release. “The ability to follow riders, get accurate information about which riders are in a group, and see real-time speed are just some of the innovations that will be realized through this solution.”
Data is provided by a third-party geolocation transmission component. It is then “cleansed,” analyzed, and made available via real-time streaming and a historical archive. In total, the just under 200 riders are expected to generate 42,000 geospatial points and 75 million GPS readings.
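As a rough illustration of the kind of processing involved (not Dimension Data’s actual pipeline), the following Python sketch derives a rider’s average speed from two consecutive GPS readings; the coordinates and timestamps are made up.

```python
# Illustrative only: deriving a rider's speed from two consecutive GPS readings.
# This is not Dimension Data's actual pipeline; the sample points are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def speed_kph(prev, curr):
    """Average speed between two readings of the form (lat, lon, unix_timestamp)."""
    dist = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = (curr[2] - prev[2]) / 3600.0
    return dist / hours if hours > 0 else 0.0

# Two readings one second apart, roughly 18 m of road covered (about 64 km/h).
print(round(speed_kph((45.00000, 6.00000, 0), (45.00016, 6.00000, 1)), 1))
```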
The Amaury Sport Organisation partnered with the 22 teams participating in the event to make the platform possible. The technology was first tested during the Critérium du Dauphiné race held in France last month. One cyclist was clocked at an astounding 65 miles per hour.
“Dimension Data is bringing a new level of technical capability to the Tour de France in areas that will transform the technology landscape, including internet of things, real time big data analytics, Elastic Cloud Infrastructure, contemporary digital platforms, advanced collaboration technologies, and agile development practices,” said Brett Dawson, Dimension Data Group CEO. “We’ll be their ‘Technical Tour de Force’.”
5:34p
VMware, Carahsoft to Settle Government IT Contract Dispute for $75.5M
VMware and Carahsoft will pay a $75.5 million settlement to resolve allegations that they had overcharged the government and violated the False Claims Act.
The allegations were around misrepresenting commercial pricing practices and overcharging the government for VMware software products and related services, according to the Department of Justice.
VMware is the well-known virtualization giant, while Maryland-based Carahsoft is an IT provider to local, state, and federal government agencies.
Government IT is undergoing an evolution to cloud, and VMware has a big play in the federal space. There’s a lot of activity going on, and sometimes deals go sour. The US Defense Information Systems Agency recently cancelled a $1.6 billion, five-year cloud contract with VMware that spanned various military branches, following protests from cloud competitors. The reason for the cancellation was the bidding process, or lack thereof, for the deal.
False statements were allegedly made in the sale of VMware products and services under Carahsoft’s Multiple Award Schedule (MAS) contract between 2007 and 2013.
Carahsoft’s MAS contract gives it access to the broad federal government IT marketplace. However, MAS requires commercial pricing disclosures to make sure that government entities are getting a fair deal.
Under the MAS program, prospective vendors agree to disclose commercial pricing policies and practices to the General Services Administration in exchange for access to the broad federal marketplace. The MAS is heavily coveted because it allows a vendor to sell to any government purchaser through one central contract. However, commercial pricing must be accurately represented both before and after an MAS contract is awarded.
“Transparency by contractors in the disclosure of their discounts and prices offered to commercial customers is critical in the award of GSA Multiple Award Schedule contracts and the prices charged to government agency purchasers,” Dana Boente, US Attorney for the Eastern District of Virginia, said in a statement.
During negotiations with GSA, those seeking an MAS contract have to provide “current, accurate, and complete” disclosures of all discounts offered to commercial customers. After the MAS contract is awarded, vendors need to continue to disclose changes in their commercial pricing, including improved discounts offered to commercial customers.
GSA said it will continue to look into all allegations of false claims in its contracts.
6:00p
Software-Defined Storage: Chasm Crossing or Tipping Point?
Jason Phippen is Head of Global Product and Solutions Marketing for SUSE.
For more than 20 years, Geoffrey Moore’s Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers has been the de facto standard to describe technology adoption. Even those who haven’t read the book are familiar with his terms: innovators, early adopters, early majority, late majority and laggards. A technology crosses the “chasm” when it achieves a sustainable level of demand and adoption, separating technology’s winners from its losers.

Moore’s description of technology adoption works perfectly in markets where there is little concrete differentiation. Nowhere is this more applicable than among the old giants of the enterprise storage market. Take a look at proprietary storage from any of the 20th century’s household names – EMC, Dell, HP, IBM – and it’s a struggle to find something that makes a concrete, provable difference. They all offer proprietary management software, compatible hardware, comparable prices, and comparable performance. Choosing a storage vendor was historically akin to picking between a BMW, a Jag, or a Lexus: everyone has a preference, but much of that comes down to familiarity and brand loyalty. No model costs half as much as the others at retail, travels twice as fast, or costs half as much to run.
The Tipping Point?
But every so often, Moore’s take on the diffusion curve is wrong. Circumstances combine to create a situation where a technology doesn’t just cross the chasm but sees a sudden and dramatic rise in adoption and influence. It’s a similar effect to the one described in another bestselling book, Malcolm Gladwell’s 2000 work, The Tipping Point.
For a sudden shift to occur, two things need to happen:
- The technology needs to deliver a genuine change – real benefits and obvious advantages as opposed to marginal brand differences.
- The problem the technology solves, or the advantage it delivers, has to be big enough to offset the reservations customers on the early-adoption side of the chasm feel. The “need” of the buyer has to be genuine and pressing.
We are at this inflection point in the enterprise storage market – and accordingly the market is going to change quickly and permanently. The advantages of software-defined storage (SDS) are so great, they represent an end-of-an-era step change, leaving traditional enterprise storage vendors to change or perish accordingly.
Why SDS?
So what’s the step-change advantage of software-defined storage? First, the cost advantage of SDS is huge. By separating the physical storage plane from the data storage logic (or control plane), SDS eliminates the need for proprietary hardware. Freed from the need to buy proprietary appliances running proprietary software, IT teams can work on commodity x86 hardware and disks in cheap racks, generating as much as 50 percent in cost savings. This marks the first of our necessary market conditions for rapid adoption: real advantage in avoiding vendor lock-in, coupled with huge cost savings.
So what about the second condition, the pressing problem or advantage? As a storage professional, I doubt you need to be told what the key problems are, as you deal with them day in, day out: more and more data to store, ever-larger volumes of unstructured data, and indefinite storage duration. Here are your top seven storage pain points as measured by 451 Research:
[Chart: top seven storage pain points, 451 Research]
The biggest of these problems is data growth, which is a major problem for more than half of all enterprises, large and small. The next problem is managing the cost – which harks back to my previous point – and the third is capacity forecasting. Put these three together and you are creating a serious budget and management headache. Storage is growing at a fierce rate, it’s difficult to predict how much you will need, and the costs are spiraling out of control.
How to Integrate SDS
The key question around adopting SDS is no longer its inevitability or its benefits; it’s how to choose the right technology to migrate your bulk, object, backup, and email storage onto SDS. There are a number of key factors in this decision, including product maturity, financial stability, ecosystem integration, testing, and – of course – whether the technology is open source or proprietary.
In the open source community, it starts and ends with Ceph, the 2004 brainchild of Sage Weil. As part of his PhD research in computer science at the University of California, Santa Cruz, Weil set out to create his own storage platform: one with no single point of failure, self-healing, highly fault tolerant through replication, and scalable to the exabyte level.
At the heart of Ceph are CRUSH and RADOS.
CRUSH
Just as with any distributed file system, files placed into a Ceph cluster are “striped” so consecutive segments are stored on different physical storage nodes using CRUSH – Controlled Replication Under Scalable Hashing, a hash-based algorithm that calculates how and where to store and retrieve data. CRUSH allows clients to communicate directly with storage devices without a central dictionary or index server to manage object locations. It thus enables Ceph clusters to both store and retrieve data very quickly and access more data concurrently, thereby improving throughput.
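The following Python toy illustrates the core idea: any client can compute an object’s placement from its name alone, with no central index server involved. It is a rendezvous-hashing sketch, not the real CRUSH algorithm, which additionally accounts for device weights and failure domains such as hosts and racks; the node names and replica count are made up.

```python
# Toy illustration of CRUSH-style placement: every client computes, from the
# object name alone, which storage nodes should hold an object -- no central
# index server is consulted. This is a simplified rendezvous hash, not the
# actual CRUSH algorithm (which also weighs devices and respects failure domains).
import hashlib

NODES = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4", "osd.5"]  # hypothetical OSDs
REPLICAS = 3

def placement(obj_name: str, nodes=NODES, replicas=REPLICAS):
    """Deterministically pick `replicas` distinct nodes for an object."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha1(f"{obj_name}:{n}".encode()).hexdigest(),
    )
    return ranked[:replicas]

# Any client running the same function gets the same three nodes back.
print(placement("stripe-000042"))
```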
RADOS
The Reliable Autonomic Distributed Object Store provides applications with object, block and file system storage in a single, unified storage cluster. This makes Ceph flexible, highly reliable and easy to manage. RADOS enables vast scalability ― thousands of client hosts or KVMs accessing petabytes to exabytes of data. All applications can use the object, block or file system interfaces to the same RADOS cluster simultaneously, meaning Ceph storage systems can serve as a flexible foundation for all of your data storage needs.
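As a small example of what that unified object interface looks like in practice, here is how an application might write and read an object through RADOS using the Python librados bindings that ship with Ceph. The configuration file path and the pool name are assumptions that will vary by cluster.

```python
# Storing and retrieving an object directly in a RADOS cluster via the Python
# librados bindings. The conf file path and the pool name ("data") are
# assumptions -- adjust them for your cluster, and the pool must already exist.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")                    # open an I/O context on the pool
    try:
        ioctx.write_full("greeting", b"hello, ceph")      # write the whole object
        print(ioctx.read("greeting"))                     # b'hello, ceph'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```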
Build Your Own
With Ceph, you use “white boxes” – whatever commodity x86 hardware you choose (or even your older, end-of-life storage arrays). Because you are free to use commodity hardware and whatever you have on hand, you can avoid being locked into proprietary platforms and all the costs they entail. Choosing software-defined storage on Ceph can generate savings of up to 50 percent, and for today’s hard-pressed storage administrator, that’s something that has to be investigated.
There’s no question the storage market is on the precipice of an important change in technology. The question is not whether change is coming, but how and when to make it. Will your organization be ready?
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.
6:50p
Report: Public Cloud Data Center Expansion Fuels IT Spending Growth
Investment in IT hardware by public cloud providers is growing faster than by any other type of hardware buyer. The money they spend on cloud data center expansion is the single biggest revenue-growth driver for server, storage, and network hardware vendors, according to a new report by the market research firm IDC.
Total investment in public cloud IT infrastructure is less than half of investment in hardware for non-cloud IT. Non-cloud IT investment, however, will remain flat this year, while public cloud IT investment will grow by nearly one-third, according to analysts’ expectations.
The amount of money companies spend on non-cloud IT continues to be enormous – far greater than cloud-infrastructure spend – but users increasingly augment their in-house IT infrastructure with cloud services. As the cloud user base and the variety of cloud services grow, so does investment in cloud data centers.
For cloud service providers, expanding data center infrastructure is also a matter of staying competitive. Geographic scale and computing capacity are crucial attributes of a quality cloud service, so competition in the market is driving a lot of data center spending.
Some of the biggest and most recent new cloud data center announcements included Amazon Web Services’ plan to build data centers in India, Oracle’s future data center in Brazil to support its cloud software services, and Microsoft’s plans to establish the first two Azure cloud data centers in Canada. Those are just a handful of the multitude of cloud data center expansion projects announced this year.
“The breadth and width of cloud offerings only continue to grow, with an increasing universe of business- and consumer-oriented solutions being born in the cloud and/or served better by the cloud,” IDC analyst Natalya Yezhkova said in a statement. “This growing demand from the end user side and expansion of cloud-based offerings from service providers will continue to fuel growth in spending on the underlying IT infrastructure in the foreseeable future.”
Another major growth area is investment in hardware for private cloud infrastructure. IDC expects companies to spend $11.7 billion on this kind of infrastructure this year, which is a nearly 17-percent increase year over year.
Public cloud hardware spend this year will total $21.7 billion, up 32 percent from 2014, according to IDC.
The analysts expect companies to invest $67 billion in hardware for non-cloud IT, which, as already mentioned, is about the same as last year.
IDC’s five-year forecast sees public and private cloud IT spend grow at a compound 15.6 percent annually, and non-cloud IT spend decline 1.4 percent a year.
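As a back-of-the-envelope illustration only, not an IDC projection, compounding the 2015 cloud hardware figures cited above at that 15.6 percent rate looks like this:

```python
# Back-of-the-envelope only: compounding the 2015 cloud hardware figures cited
# above ($21.7B public + $11.7B private) at the 15.6 percent CAGR IDC forecasts.
# This is an illustration of the growth rate, not IDC's own published projection.
cloud_2015 = 21.7 + 11.7          # $B, combined public and private cloud hardware spend
cagr = 0.156

for year in range(2015, 2021):
    spend = cloud_2015 * (1 + cagr) ** (year - 2015)
    print(year, round(spend, 1))   # 2015: 33.4 ... 2020: roughly 69.0 ($B)
```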
Here’s a chart showing how the mix of public cloud, private cloud, and non-cloud IT hardware spend will change over the next five years, courtesy of IDC: