Data Center Knowledge | News and analysis for the data center industry
Monday, December 30th, 2013
1:00p
Colocation Outlook 2014: Connectivity is Critical in a Changing Landscape
Looking ahead to 2014: colo, wholesale and connectivity.
Data center connectivity is becoming a key differentiator as wholesale data center providers compete for customers with colocation companies. As the industry heads into 2014, the blurring of lines between “retail” colo and wholesale space is leading more providers to offer flavors of both products, and boosting interest in open internet exchanges that could extend peering activity to a broader spectrum of facilities.
The ability to connect with multiple networks has always been a key selling point for facilities in major markets, leading to the creation of large ecosystems of carriers, content providers and network services in key geographic hubs like northern Virginia, New York and Silicon Valley.
Are the boundaries between retail and wholesale data center services blurring, or becoming more distinct? That often depends upon who you ask, and what they’re selling. What’s clear is that competition for customers is intensifying, spanning boundaries and offering end users a growing array of choices for deploying their IT equipment in third-party data center space.
Market Definitions
First, let’s define the market:
In colocation, a customer leases a smaller chunk of space within a data center, usually in a caged-off area or within a cabinet or rack. In the wholesale data center model, a tenant leases a dedicated, fully-built data center space. The wholesale data center model offers greater control and security than shared colocation space, but it’s not a fit for everyone. The economics of wholesale space initially were most attractive to companies requiring at least 1 megawatt of power capacity for their data center. Over the last two years, that boundary has shifted, with some wholesale providers now courting deals as small as 250 kW.
Some industry executives say the distinctions are becoming less relevant.
“I have never once had a customer approach me and ask me for wholesale data center or retail data center space,” said Gary Wojtaszek, President and CEO of CyrusOne. “Although we discuss our business in this way to help our investor audience understand our company, we don’t fundamentally think about the products in this simple two-state way, because our customers don’t. Our customers are trying to solve a specific data center problem.”
Yet these models exist because they create meaningful differences in how data center space is priced, paid for and supported.
“Retail (colocation) is a service industry with a lot of handholding,” said Chris Crosby, the founder and CEO of Compass Datacenters. “If you sell a megawatt of power on the wholesale side, it’s a different business.”
Broader Product Mix for Many Providers
A growing number of providers are offering both “retail” colocation and wholesale suites, including CoreSite, CyrusOne, QTS, Digital Realty, RagingWire and Equinix.
“We’re seeing strong growth in both (retail and wholesale),” said Doug Adams, Senior Vice President and Chief Revenue Officer of RagingWire. “There is truth to the idea that they’re blurring. The determination has always been the size of the opportunity and the length of the deal. The tipping point for what defines a wholesale deal used to be 1 megawatt, but it has definitely dipped down into the 250 kW range. We’re differentiating on the quality of the infrastructure and high reliability.”
Digital Realty, the largest player in the wholesale space, projects 20 percent annual revenue growth from colocation in the next three years, with colo revenue hitting $140 million by 2016. Equinix, the market leader in colocation, has launched a “business suites” product in northern Virginia and will soon expand the offering to its New York metro facilities.
Lines Are Brightening, not Blurring
Equinix and CoreSite are among the providers drawing a distinction between network-dense interconnection facilities and “undifferentiated” data center space to support large server installations.
“It’s interesting that many people have talked about blurring the lines between wholesale and retail,” said Charles Myers, Chief Operating Officer of Equinix. “Frankly, I’d argue exactly the opposite. As the market segments, moving large, non-performance-sensitive applications into wholesale is becoming an easy choice at the price points that wholesalers are offering.”
“These undifferentiated deals that close at low price points simply are uninteresting to us,” said Myers. “It is true that some wholesale players have retail offerings or are signaling their intent to develop retail offerings. But in my mind, that’s not a blurring of the lines. That’s an entirely different matter, and that decision by a wholesaler would come with a very significant investment required to support a large number of retail customers. All you have to do is look at our employee count, look at our (expenses) necessary to support that retail model and compare that to pure wholesalers, and I think you’ll very quickly conclude that (wholesale) players can’t just declare themselves as retailers and expect that they’re going to successfully meet customer needs.”
1:30p
The Fruits of Innovation: Top 10 IT Trends in 2014
Mark Harris is the vice president of marketing and data center strategy at Nlyte Software, with more than 30 years of experience in product and channel marketing, sales, and corporate strategy. Nlyte Software is the independent provider of Data Center Infrastructure Management (DCIM) solutions.
The IT industry and its data centers are going through change today at a breakneck pace. Changes are underway to the very fundamentals of how we create IT, how we leverage IT, and how we innovate in IT. Information Technology has always been about making changes that stretch the limits of creativity, but when so many core components change at the same time, it becomes both exciting and challenging even for the most astute of us IT professionals.
The changes we’re due to see in 2014 start with the way people think. A good bit of the change going on in IT is about the maturity of its business leaders and the business planning skills associated with all of those changes. In the end, these leaders are now tasked to accurately manage, predict, execute and justify. Hence, the CIO’s role will evolve. Previously, CIOs were mostly technologists who were measured almost exclusively by availability and uptime. The CIO’s job was all about crafting a level of IT services that the company could count on, and the budgeting process needed to do so was mostly a formality.
Best Qualities in a CIO
The most effective CIOs in 2014 will be business managers who understand the wealth of technology options now available, the costs associated with each, and the business value of each of the various services they are chartered to deliver. He or she will map out a plan that delivers just the right amount of service within the agreed business plan. Email, for instance, may have an entirely different value to a company than its online store, so the means to deliver these diverse services will need to be different. It is the 2014 CIO’s job to empower their organizations to deliver just the right services at just the right cost.
For technologists, 2014 will be a banner year for change at a nearly unprecedented rate. If we look back at the IT industry’s history overall, it really started about 60 years ago with IBM’s first commercial release of the mainframe, became a distributed computing world in the late 1970s, transitioned to an Internet-connected world in the mid-1990s, and then exploded into the current generation of dynamic abstracted computing beginning in the mid-2000s. This new approach to computing puts a tremendous emphasis on back-end data center services rather than the capability of the end user’s device. After a few false starts (like the netbook), the mature web-based, handheld, mobile and VDI revolution has become a cornerstone of computing, and it has become a race to put all of the actual computing back into the data center and do so in a modular fashion. That said, most existing data centers pre-date this dynamic period, and hence their entire supporting infrastructure is mostly ill-prepared to handle this dramatic shift toward cost-oriented data center services.
What Lies Ahead?
In 2014, we see much of the fruit of this race for innovation. These trends are not just niche possibilities or proofs of concept; they are technologies that will see rapid production-level adoption.
1. Big Data is finding its footing as the initial hype has settled and its commercial applicability has been demonstrated across all major industries. Big Data has become one of the biggest topics for the enterprise. The premise is simple: put all of the historical transactional and business knowledge in one place, and then use various analytic means to extract actionable inferences. The keys to Big Data’s promise are how to put ALL of a company’s data logically in one place, and then, once this massive data set has grown to over 2 TB, how to retrieve meaningful and actionable knowledge from it interactively. This is now possible, and in 2014 many end users will move their experimental Big Data pilots into full production. Big Data vendors will also keep progressing, with support for even larger memory-based data structures, real-time streamed data and more advanced data mining techniques based upon the maturity of their own unique technology. Expect to see many of the world’s largest organizations consolidating their many discrete data stores into one logical Big Data structure and then finding innovative ways to turn this wealth of knowledge into impressive new business drivers.
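To make that premise concrete, here is a deliberately toy-scale Python sketch of the pattern: pull records out of several separate stores into one logical dataset, then run an analytic query over the combined data to surface an actionable inference. The store names and figures are invented for illustration; a production deployment would of course use a distributed engine rather than plain Python.

```python
# A toy, single-machine stand-in for the Big Data premise described above:
# consolidate several discrete data stores into one logical dataset, then
# analyze the whole thing to extract an actionable inference.
from collections import defaultdict

# Three "discrete data stores" that would normally live in separate systems
# (names and figures are hypothetical).
web_orders   = [{"sku": "A100", "revenue": 120.0}, {"sku": "B200", "revenue": 80.0}]
store_orders = [{"sku": "A100", "revenue": 45.0},  {"sku": "C300", "revenue": 300.0}]
call_center  = [{"sku": "B200", "revenue": 60.0}]

# Step 1: consolidate everything into one logical dataset.
all_orders = web_orders + store_orders + call_center

# Step 2: run an analytic query over the combined data -- revenue by SKU.
revenue_by_sku = defaultdict(float)
for order in all_orders:
    revenue_by_sku[order["sku"]] += order["revenue"]

# Step 3: extract an "actionable inference" -- the best-selling SKU across all channels.
top_sku = max(revenue_by_sku, key=revenue_by_sku.get)
print(f"Top SKU across all channels: {top_sku} (${revenue_by_sku[top_sku]:.2f})")
```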
2. The Software Defined Networking (SDN) market will continue to be slightly fragmented, but the adoption rate for SDN will grow. In 2014 we will see many networking vendors attempting to differentiate themselves in the SDN space. It is important to remember that Software Defined Networking has two key components: the intelligent controller and the physical switches. That said, there is more than one technical approach to these SDN systems, so the end-user community will be faced with making some choices. The largest SDN vendors will each refine and articulate their specific approach to SDN, and attempt to help their prospects understand why the combination of their robust switching and controllers has the most value. The common premise of these vendors will be that SDN allows better value through economies of scale and choice in a multi-vendor interoperable world. Many SDN vendors will focus on differentiation by demonstrating overall performance and/or capacity. Appliance-level switch and controller comparisons and performance benchmarks will emerge (much like the gigabit switching revolution 10 years ago), and interoperability and scale will be a hot topic. In 2014 the rate at which SDN is adopted by the end-user community will increase significantly, as all of the mainstream players have now announced and delivered compelling components in the SDN space.
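For readers newer to the idea, the following library-free Python sketch illustrates the two-component split described above: a central controller that computes forwarding decisions, and switches that simply match packets against the flow tables pushed to them. The class and rule names are hypothetical; a real deployment speaks a protocol such as OpenFlow between a vendor controller and physical switches.

```python
# Minimal, illustrative sketch of the SDN control-plane / data-plane split.

class Switch:
    """Data plane: holds a flow table and forwards packets by matching it."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # dst_ip -> output port

    def install_rule(self, dst_ip, out_port):
        self.flow_table[dst_ip] = out_port

    def forward(self, dst_ip):
        port = self.flow_table.get(dst_ip)
        if port is None:
            return f"{self.name}: {dst_ip} -> punt to controller (table miss)"
        return f"{self.name}: {dst_ip} -> out port {port}"


class Controller:
    """Control plane: computes paths centrally and pushes rules to switches."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, dst_ip, ports_by_switch):
        # Push one forwarding rule per hop; the switches stay "dumb".
        for switch in self.switches:
            switch.install_rule(dst_ip, ports_by_switch[switch.name])


# Usage: the controller, not the individual boxes, decides how 10.0.0.5 is reached.
s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
ctrl.program_path("10.0.0.5", {"s1": 3, "s2": 1})
print(s1.forward("10.0.0.5"))
print(s2.forward("10.0.0.5"))
```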
3. Asset Lifecycle Management will be adopted to control costs. The traditional goal of IT was to assure that applications worked as needed and remained available to their users. The costs to provide this were viewed mostly as a purchasing department’s challenge: shave a few percentage points off the purchase of servers or switches and everyone considered the job done well. As a result, it has been commonplace to leave equipment in service for years and years and ignore the economic impacts of doing so. An “if it isn’t broken, don’t fix it” mentality prevailed. In 2014 we will see an increasing rate of adoption of innovative new tools that manage these physical devices as business assets, with standard accounting processes and costs being addressed. Thanks in part to the abstractions now available through software-defined approaches, physical devices are no longer tied so tightly to their installed applications, so the complexity of making hardware changes (including those demanded by technology refresh cycles) has been greatly reduced. It is becoming very apparent that servers or switches that exceed their depreciation and warranty schedules can easily be replaced to reduce operating costs, so the adoption of new tools to manage all of this change will become a higher priority. Data Center Infrastructure Management (DCIM) tools will take their place as the strategic business management solution to manage all of the costs associated with this change.
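A back-of-the-envelope sketch shows why the refresh decision becomes straightforward once the numbers are actually tracked. All of the figures below are hypothetical placeholders; a DCIM tool would pull the real ones from asset, warranty and power records.

```python
# Hedged sketch: should an out-of-warranty, fully depreciated server be replaced?
# Compare its annual running cost against a newer, more efficient model.

def annual_power_cost(watts, price_per_kwh=0.10, pue=1.8):
    """Electricity cost for one server running 24x7, including facility overhead (PUE)."""
    return watts / 1000.0 * 24 * 365 * pue * price_per_kwh

old_server_watts = 450        # aging box, out of warranty, fully depreciated (hypothetical)
new_server_watts = 250        # newer model doing the same work (hypothetical)
new_server_price = 4000.0     # capital cost of the replacement (hypothetical)
extended_support = 800.0      # annual support contract to keep the old box running

old_yearly = annual_power_cost(old_server_watts) + extended_support
new_yearly = annual_power_cost(new_server_watts)

payback_years = new_server_price / (old_yearly - new_yearly)
print(f"Old server: ${old_yearly:,.0f}/yr  New server: ${new_yearly:,.0f}/yr")
print(f"Replacement pays for itself in about {payback_years:.1f} years")
```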
4. The Server is being redefined. For the past 20 years, a server was generally defined as an x86-compatible chip housed in a 19-inch rack-mounted package. Every server in the data center fit this definition. Sure, there were many differences in how fast each model was, or in the amount of memory, disk or I/O capacity, but everything was essentially the same server architecture at the core. In 2014, we see dramatic alternatives. The major server vendors and a handful of startups will offer both low-power x86 designs and even lower-power ARM and Power chipsets. The cost of electricity is a major component of operating costs, so innovative ways of processing data on a lower power budget just make sense. It turns out many applications are perfectly suited to these new low-power CPU options, and in 2014 we will see these applications touted as wild successes. Surprisingly, some of these new low-power CPU designs rival the IOPS performance of their x86 cousins! In addition to the CPU changes, we will see form factors shipping in volume that are intended for dense applications. Traditionally we had just two form-factor choices: proprietary chassis or standard 19-inch. In 2014 we will see an onslaught of half-width servers, and we will also see the first versions of an alternative server packaging scheme referred to as Open Compute. Open Compute, pioneered by Facebook, is a standardized “open” form factor. Think of this design as the open-source version of server hardware. In 2014 we will see initial adoption of Open Compute by a much wider commercial audience.
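The performance-per-watt argument is easy to illustrate with a rough comparison. The throughput and wattage figures below are invented purely for illustration; real results depend heavily on the workload.

```python
# Hedged sketch: compare hypothetical server classes on throughput per watt and
# on the total power needed to hit a cluster-wide throughput target.

def requests_per_watt(requests_per_sec, watts):
    return requests_per_sec / watts

nodes = {
    # name:                 (requests/sec per node, watts per node) -- all hypothetical
    "traditional x86 2U":   (20000, 400),
    "low-power x86 SoC":    (9000, 120),
    "ARM microserver":      (6000, 60),
}

target_rps = 100000   # cluster-wide throughput the application needs

for name, (rps, watts) in nodes.items():
    count = -(-target_rps // rps)          # ceiling division: nodes required
    print(f"{name:22s} {requests_per_watt(rps, watts):6.1f} req/s per watt, "
          f"{count} nodes, {count * watts / 1000:.1f} kW total")
```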
5. The Public Cloud grows up. Once viewed as a platform for only a limited set of specialized, non-critical applications, the Public Cloud has struggled to take a primary spot at ALL of the IT tables. In fact, the Public Cloud industry’s report card for the last few years would reveal a mixed bag of successes and failures, which has stunted its growth. At critical points in the adoption curve, we have seen high-profile failures limit its strategic adoption. Leveraging Public Cloud technology also requires some fundamental re-education and new economic modeling, which is all new to most IT organizations. In 2014, we will see the largest Public Cloud players proactively discuss these failures and provide detailed insight into how these issues will be addressed so that they do not happen again. In 2014 the mainstream virtualization vendors will extend their capacity and migration tools to span from in-house to Public Cloud-based instances. As computing capacity is used up internally, it will become commonplace to simply expand to the Public Cloud for demand peaks. In addition, the Open Source world of public cloud computing (dominated by OpenStack) will see its newest release (“Havana”) open some eyes to what is possible in a multi-vendor Cloud world. OpenStack’s Havana release now supports OpenFlow controllers (one of the main multi-vendor approaches to SDN), Open vSwitch and VMware’s NSX. Havana also delivers support for orchestration, including direct support for Amazon’s CloudFormation templates. Finally, significant user-level accounting will appear, making OpenStack a business-grade option in a multi-tenant world. OpenStack and the tools built for it will likely prove to be one of the biggest enablers in creating a multi-vendor Public Cloud.
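The “expand to the Public Cloud for demand peaks” pattern reduces to a simple placement rule, sketched below. The capacity figures are hypothetical, and real bursting would go through the virtualization or orchestration layer (OpenStack, CloudFormation templates and the like) rather than a toy scheduler.

```python
# Minimal sketch of cloud bursting: prefer in-house capacity, spill to the
# public cloud only when the private data center is full. Figures are hypothetical.

PRIVATE_CAPACITY_VMS = 200        # what the in-house data center can host
running_in_house = 0
running_in_cloud = 0

def place_vm():
    """Place one VM: in-house if there is room, otherwise burst to the public cloud."""
    global running_in_house, running_in_cloud
    if running_in_house < PRIVATE_CAPACITY_VMS:
        running_in_house += 1
        return "private"
    running_in_cloud += 1
    return "public-cloud"

# Simulate a demand peak of 250 VM requests.
placements = [place_vm() for _ in range(250)]
print(f"in-house: {placements.count('private')}, burst to cloud: {placements.count('public-cloud')}")
```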
1:38p
Seagate Acquires Xyratex
Seagate (STX) announced it has entered into a definitive agreement to acquire hard drive test equipment maker Xyratex for approximately $374 million. The move will strengthen Seagate’s vertically integrated supply and manufacturing chain for disk drives and ensure uninterrupted access to important capital equipment. It also adds Xyratex’s enterprise data storage systems and high-performance computing business. Seagate will operate this business as a standalone entity and will focus on opportunities to improve and expand the business.
“This is a strategically important acquisition for Seagate as we continue to focus on delivering best-in-class storage solutions for our customers,” said Dave Mosley, President of Operations and Technology at Seagate. “As the average capacity per drive increases to multi-terabytes, the time to test these drives increases dramatically. Therefore, access to world-class test equipment becomes an increasingly strategic capability. As a premier provider of HDD testing equipment, Xyratex is an important partner and we are excited to integrate these important capabilities which will considerably streamline our supply and manufacturing chain for our core HDD business. We are also pleased to acquire Xyratex’s storage systems and high-performance computing business, which provides us additional opportunities to serve our customers with a broader array of storage solutions.”
According to IDC, Xyratex has ranked as the number one provider of OEM disk storage systems for the fifth consecutive year. Earlier in 2013 Xyratex acquired the original Lustre trademark, logo, website and associated intellectual property from Oracle. There is some question in the HPC community about Seagate’s intentions for the Lustre intellectual property and the direction of Lustre going forward. Xyratex has also focused on its ClusterStor business this year, offering professional services that help partners and end users capture their storage requirements and match them to their most demanding application needs.
“Xyratex is very pleased to become a part of Seagate’s industry-leading organization,” said Ernie Sampias, Chief Executive Officer of Xyratex. “Seagate shares our commitment to innovation and the critical role that test plays in providing the best storage products at the lowest possible cost. After a thorough strategic review process in which we evaluated a wide range of alternatives, the Xyratex Board of Directors determined that this all-cash transaction with Seagate maximizes shareholder value through an attractive premium, and also affirms the significant value that our employees have created.” |