Data Center Knowledge | News and analysis for the data center industry
Monday, October 31st, 2016
12:00p
Top 10 Data Center Stories of the Month: October
To help you stay up to date on the latest in the data center industry, here are the top 10 data center stories of October from Data Center Knowledge:
With Microsoft Data Center Deal, EdgeConneX Takes on Wholesale Giants – The Comcast Ventures-backed data center provider appears to be throwing its hat into the ring to compete for large wholesale cloud deals with the likes of Digital Realty Trust and DuPont Fabros Technology.
Verizon Said to Be Nearing Data Center Deal, Most Likely With Equinix – The portfolio consists primarily of data centers Verizon gained when it acquired Terremark in 2011. Analysts estimate that Equinix may pay about $3.5 billion in the transaction, which would be neutral for Verizon and positive for Equinix, given the high quality of the Terremark facilities and their locations, which would further increase Equinix’s already enormous global scale.
Oracle’s Cloud, Built by Former AWS, Microsoft Engineers, Comes Online – While Larry Ellison has been touting the might of Oracle’s cloud for years, the company has been far behind its rivals in the cloud infrastructure services market in terms of revenue. This latest attempt is its biggest effort yet to catch up, leveraging its experience in the enterprise, its hardware design and global supply chain expertise, and the cloud infrastructure know-how of the biggest players in the market.
Cloud by the Megawatt: Inside IBM’s Cloud Data Center Strategy – All major US hardware vendors have tried and failed to become formidable rivals to Amazon in the cutthroat cloud infrastructure market; all except Oracle and IBM, the latter of which continues to expand both the technological capabilities of its cloud and the global infrastructure that makes it all possible.
Digital Realty to Build Data Center Tower in Downtown Chicago
 Rendering of Digital Realty’s planned data center at 330 E. Cermak in Chicago. The company’s existing carrier hotel at 350 E. Cermak is immediately to the right. (Image: Digital Realty)
The data center REIT is billing the future Chicago data center as an expansion of 350 E. Cermak, which has historically been in high demand because of the large number of networks that can be accessed there. Until recently, Telx, which Digital acquired last year, operated the network meet-me room in the building.
VMware Gives AWS Keys to Its Enterprise Data Center Kingdom
 VMware CEO Pat Gelsinger speaking at VMworld 2014 in San Francisco. (Photo: VMware)
The move, aimed squarely at the enterprise market, is a strategic about-face for both companies, which in the past viewed each other as competitors. As the world’s biggest provider of cloud infrastructure services, Amazon has been perceived as a huge competitive threat to VMware’s ubiquitous presence in enterprise data centers.
VMware Sells Its Government Cloud Business to QTS – The deal comes about a month after VMware, now majority-owned by Dell, announced a major shift in cloud strategy, choosing to focus more on providing technologies that help customers use other cloud providers’ services rather than trying to compete with market leaders, such as Amazon and Microsoft, as a cloud provider itself.
Here’s Google’s Plan for Calming Enterprise Cloud Anxiety – Enterprises that sign up for Google’s cloud services will now have the choice to submit their software development and IT operations teams to the same level of operational rigor Google submits its own engineers to.
Iron Mountain Entering N. Virginia with Massive Data Center Build
 Rendering of Iron Mountain’s future 60MW data center campus in Manassas, Virginia (Image: Iron Mountain)
Already one of the most in-demand data center locations, Northern Virginia is expected to see annual demand growth in the high double digits over the next five years, according to real estate brokers. That growth will be driven by demand from cloud providers and managed services companies.
Space: the Ultimate Network Edge – A whole new global network backbone is being built, consisting of intercontinental submarine cables capable of handling unprecedented amounts of bandwidth, 5G wireless networks, and satellites that beam data down to earth using lasers.
Stay current on data center industry news by subscribing to our RSS feed and daily e-mail updates, by following us on Twitter or Facebook, or by joining our LinkedIn Group – Data Center Knowledge.

5:53p
CenturyLink to Buy Level 3 for $34 Billion in Cash, Stock
(Bloomberg) — CenturyLink Inc. agreed to buy Level 3 Communications Inc. for about $34 billion in cash and stock, creating a more formidable competitor to AT&T Inc. in the market to handle heavy internet traffic for businesses.
The acquisition values Level 3 at $66.50 a share, the companies said in a statement Monday. That’s about 42 percent above where the Broomfield, Colorado-based company was trading last week, before reports surfaced of a potential acquisition by CenturyLink, which is based in Monroe, Louisiana. The value of the deal includes assumed debt.
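For readers who want to sanity-check the quoted premium, the implied pre-rumor share price follows directly from the figures above; a minimal sketch in Python, using only numbers from the article:

```python
# Back-of-the-envelope check of the quoted ~42% premium.
offer_per_share = 66.50              # CenturyLink's offer for Level 3, USD
premium = 0.42                       # premium over last week's pre-rumor price
implied_prior_price = offer_per_share / (1 + premium)
print(f"Implied pre-rumor price: ${implied_prior_price:.2f}")  # ~$46.83
```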
Fiber Diet
Both companies have amassed giant networks to haul internet traffic through deals over the years.
Level 3 is one of the largest providers used by internet services including Netflix Inc. and Google to route traffic across the web, operations that would bolster CenturyLink’s core offerings to businesses. The deal also promises to help CenturyLink by giving it access to about $10 billion in tax credits that Level 3 is carrying on its books, Jennifer Fritzsche, an analyst with Wells Fargo Securities LLC, said last week.
CenturyLink Chief Executive Officer Glen Post will remain CEO of the combined company, while Level 3 Chief Financial Officer Sunit Patel will be CFO.
Level 3 was the second-biggest U.S. provider of Ethernet services — running high-bandwidth internet connections for companies — in the first half of this year, trailing only AT&T, according to Vertical Systems Group Inc. CenturyLink was fifth on the list.
CenturyLink shares fell 8 percent to $27.95 in early trading Monday in New York. Level 3 shares rose 7 percent to $57.81.
CenturyLink’s $1.4 billion of bonds paying 5.8 percent and maturing in 2022 dropped 1.56 cents to 103 cents at 8:31 a.m. in New York, according to Trace, the bond price reporting system of the Financial Industry Regulatory Authority. The cost to protect against losses on the phone company’s notes jumped 35 basis points to 336 basis points, according to data provider CMA.
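To put the credit-default-swap move in concrete terms, here is a hedged sketch converting the quoted spread into an annual cost of protection; the $10 million notional is a hypothetical example, not a figure from the article:

```python
# Convert the quoted CDS spread into an annual protection cost.
cds_spread_bps = 336                 # spread after the 35-basis-point jump
notional = 10_000_000                # hypothetical $10M of CenturyLink notes
annual_cost = notional * cds_spread_bps / 10_000   # 1 bp = 0.01% of notional
print(f"~${annual_cost:,.0f} per year to insure $10M")  # ~$336,000
```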
‘A Lot of Sense’
CenturyLink, which has been exploring the sale of its data center business, is one of the biggest phone companies in the U.S., formed after CenturyTel Inc. bought Embarq Corp. in 2009 and acquired Qwest Communications International Inc. two years later.
“The combination makes a lot of sense given the combination of Level 3’s and CTL’s legacy Qwest national wireline business networks,” Phil Cusick, an analyst at JPMorgan Chase & Co., said in a note Monday, referring to CenturyLink by its ticker symbol.
CenturyLink on Monday reported third-quarter profit of 56 cents a share, after one-time items, beating the 55-cent average of analysts’ estimates compiled by Bloomberg, on revenue of $4.38 billion, which matched projections. Level 3 reported earnings of 39 cents a share, short of the 42-cent average estimate. Its sales fell to $2.03 billion, compared with the $2.07 billion prediction of analysts.
Both companies have contended with growing competition from cable providers and other smaller rivals offering internet and phone connections for businesses. CenturyLink, which also offers residential landline phone and internet services in cities such as Phoenix and Seattle, gets about two-thirds of its revenue from business customers.
The acquisition is one of the biggest telecommunications deals of the year. Level 3 had a market value of $19.4 billion at Friday’s close and has about $11 billion in debt. CenturyLink was valued at about $16.6 billion and has about $19 billion in debt.
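Those figures also make it possible to reconcile the headline number with Level 3’s balance sheet; a rough sketch, assuming the deal value is simply offer equity plus assumed debt (all inputs are the article’s figures, rounded):

```python
# Rough reconciliation of the $34B headline with Level 3's balance sheet.
deal_value = 34e9                  # headline value, includes assumed debt
level3_debt = 11e9                 # Level 3's approximate debt load
implied_equity = deal_value - level3_debt       # ~$23B for the equity
friday_market_cap = 19.4e9         # Level 3 equity value at Friday's close
run_up_premium = implied_equity / friday_market_cap - 1
print(f"Premium over Friday's close: {run_up_premium:.0%}")  # ~19%
# Smaller than the quoted 42% because Friday's price already included
# the rumor-driven run-up.
```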
Aiding Netflix
Level 3 provides so-called content-delivery network services, particularly to Netflix. With more people streaming TV shows and movies over the web, distributors like Netflix have to arrange with a content delivery network to set aside enough servers and transport capacity for fast load times. Moving content closer to users and managing traffic patterns means fewer delays and less buffering for viewers.
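The core CDN mechanic is straightforward: route each viewer to the nearest cache rather than a distant origin. A minimal sketch follows; the edge locations and coordinates are purely hypothetical, not Level 3’s actual topology:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical edge caches, not any provider's real footprint.
EDGES = {"new-york": (40.71, -74.01),
         "dallas": (32.78, -96.80),
         "seattle": (47.61, -122.33)}

def nearest_edge(viewer_latlon):
    """Pick the edge location closest to the viewer."""
    return min(EDGES, key=lambda name: haversine_km(viewer_latlon, EDGES[name]))

print(nearest_edge((41.88, -87.63)))  # a Chicago viewer -> "new-york"
```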
Bank of America Corp. and Morgan Stanley advised CenturyLink on the deal. Those two banks have committed to lending the company about $10.2 billion in new secured debt. Evercore Partners Inc. issued a fairness opinion while Wachtell, Lipton, Rosen & Katz and Jones Walker provided legal advice.
Level 3 was advised by Citigroup Inc., with a fairness opinion from Lazard Ltd., and Willkie Farr & Gallagher LLP gave legal advice.

6:09p
Microsoft’s New Cloud Server Design is Half-Baked, and That’s the Point
Microsoft has released a work-in-progress design for the next-generation cloud server that will power Azure, Office 365, and other cloud services in Microsoft data centers. The now open source design is about 50 percent complete, and that’s how Microsoft wants it.
The company wants to change the Open Compute Project’s open source hardware development process to make it more like open source software development. Today, hardware designs are largely complete by the time they are submitted to OCP for approval, which means they don’t take advantage of a community of contributing engineers the way open source software projects normally do.
Not only does the design not benefit from the outside input an open source community can provide, it also means the ecosystem of vendors that would supply parts for the product or create other products based on the design doesn’t have access before the design is complete. “This late contribution delays the development of derivative designs, limits interactive community engagement and adoption, and slows down overall delivery,” Kushagra Vaid, general manager for Azure hardware in Microsoft data centers, wrote in a blog post Monday.
The company’s new open source cloud server design effort is called Project Olympus, and it applies the same model of open source collaboration that’s been used in open source software development.
See also: Latest Microsoft Data Center Design Gets Close to Unity PUE
The half-baked design is being submitted to OCP far earlier in the development cycle than any previous OCP project, according to Vaid. “By sharing designs that are actively in development, Project Olympus will allow the community to contribute to the ecosystem by downloading, modifying, and forking the hardware design just like open source software,” he wrote.
The OCP Foundation, which oversees OCP, is on board, with Bill Carter, the foundation’s CTO, saying:
“Microsoft is opening the door to a new era of open source hardware development. Project Olympus, the re-imagined collaboration model and the way they’re bringing it to market, is unprecedented in the history of OCP and open source data center hardware.”
Microsoft has been an OCP member since 2014, when it released its first open source cloud server design through the project. It was the first hyperscale data center operator to join OCP, which today also lists Apple, Google, and Rackspace as members, in addition to Facebook, which started the project in 2011.
See also: Microsoft Moves Away from Data Center Containers
OCP-inspired hardware is now standard across the entire global Microsoft data center infrastructure. More than 90 percent of servers the company buys today are based on OCP specs, Vaid wrote.

Microsoft’s third-generation OCP-based cloud server (Image: Microsoft)
The company’s latest, third-generation, cloud server will feature a universal motherboard whose modular design makes it easy to reconfigure for different types of workloads.
The motherboard also supports both integrated power supplies and a new universal rack power distribution unit, making it easier to deploy the hardware in data centers with different power standards in different parts of the world.
Like its previous-generation server, the new motherboard will support FPGAs (Field Programmable Gate Arrays), which provide additional computing horsepower to the two CPUs for high-demand workloads, such as machine-learning applications.

Microsoft is also contributing a new rack and universal rack PDU design to OCP (Image: Microsoft)
Microsoft has contributed design specs for the motherboard, a power supply with integrated backup batteries, server chassis, a high-density storage expansion, the universal rack PDU, and a standards-compliant rack management card to the open source Project Olympus (hosted on GitHub).
Vaid told Bloomberg he expects to see first deployments of the new servers in Microsoft data centers by the middle of next year.

6:25p
How Does Data Fabric Fit With Software-Defined Storage, and What Can It Do For You?
Marc Fleischmann is CEO and Founder of Datera.
Infrastructure matters. The power of big data hinges on its accessibility, so when information is siloed, the usefulness of analytics is curtailed. Unfortunately, this is an all too common occurrence for companies that rely on traditional monolithic storage solutions, which have fixed boundaries. Infrastructure impacts applications’ performance, cost, and scalability, yet few people grasp the importance of selecting the right type of platform.
At one time, storage was relatively simple: integrated hardware and software delivering a clearly defined service, with pre-defined capacity, performance, and cost. But with the advent of the cloud, data proliferated at unprecedented rates and created exciting new possibilities. Similar to software-defined networking, software-defined storage has taken off over the past few years. If networking entails the movement of data from place to place, storage entails its preservation: its quality, reliability, and endurance. Software-defined storage brings stored information to life, sorting it, organizing it, and automating retrieval.
There are many implementations of software-defined storage. Most recently, however, hyper-converged solutions and scale-out distributed systems (or data fabrics) have driven most of the use cases. Hyper-converged solutions have the benefit of being simple and turnkey, focused on support for virtual machines, and targeted at small to medium-size deployments. Data fabrics, on the other hand, provide a wide spectrum of capabilities, add scale, can support adaptive policies, and morph as storage requirements evolve. The latter is more efficient, because it allows compute and storage to scale independently; the former is simpler, because it packages compute and storage scaling together, as the sketch below illustrates.
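To make the efficiency difference concrete, consider a storage-heavy workload; a toy comparison in Python, with made-up node sizes and requirements (none of these numbers come from the article):

```python
from math import ceil

# Toy comparison: bundled (hyper-converged) vs. independent scaling.
need_tb, need_cores = 400, 64          # hypothetical application requirements

# Hyper-converged: every node ships with a fixed bundle of both resources,
# so the larger requirement dictates the node count.
tb_per_node, cores_per_node = 20, 32
hci_nodes = max(ceil(need_tb / tb_per_node), ceil(need_cores / cores_per_node))
print(f"HCI: {hci_nodes} nodes, {hci_nodes * cores_per_node} cores bought "
      f"for a {need_cores}-core workload")      # 20 nodes, 640 cores (10x over)

# Data fabric: storage and compute scale independently, each sized to fit.
storage_nodes = ceil(need_tb / tb_per_node)         # 20 storage nodes
compute_nodes = ceil(need_cores / cores_per_node)   # 2 compute nodes
print(f"Fabric: {storage_nodes} storage + {compute_nodes} compute nodes")
```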
Unlike traditional monolithic storage systems, a data fabric is agile, conforming to evolving application needs. As a result, companies can access data more readily, spend resources more sustainably and deploy their applications faster. According to Forrester, data fabric can help enterprise architects “accelerate their big data initiatives, monetize big data sources, and respond more quickly to business needs and competitive threats.”
Here are some of the major ways a data fabric builds on software-defined storage, differs from traditional data storage, and impacts IT.
- Rapid scalability. The average application can take months to deploy, and deployment is often more of a bottleneck than development. Using a data fabric accelerates the process by automatically molding storage around the application. Removing the manual steps saves IT personnel time and dramatically shortens time to value. Companies that use an elastic data fabric can bring their application to more users, faster.
- Intent-defined. Unlike traditional storage, data fabrics adapt to applications’ specific requirements, learning the capabilities of the underlying infrastructure and intelligently matching them to application intent with targeted placement (see the sketch after this list). This allows for multi-tenant deployment and performance guarantees.
- Infrastructure-as-code. Because a data fabric is built from code, it gives architects the same flexibility that programming affords developers. Users don’t have to handcraft infrastructure for their applications; it is composed automatically and continuously. Developers, applications, and tenants can access storage the instant they need it.
- Minimized costs. Rather than fixed capital expense commitments, enterprise data fabric users pay only for what their application actually needs, a number that changes constantly in real time. Data fabrics run on commodity hardware, so they are far less expensive than the alternatives. Moore’s Law, articulated by Intel co-founder Gordon Moore, expresses the fundamental driving force of the IT industry: the capabilities of integrated circuits double roughly every two years. For businesses buying traditional monolithic storage systems, that means investing in expensive hardware that will become obsolete quickly. Data fabrics enable companies to run software on easily replaceable commodity hardware, ensuring that they always get the best value at the lowest price. Many organizations opt for a data fabric precisely because of this flexibility and affordability.
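As promised above, here is a minimal sketch of intent-based placement: match an application’s declared intent against node capabilities and pick the best fits. All type names, fields, and numbers are illustrative assumptions, not any vendor’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A storage node and its advertised capabilities (hypothetical)."""
    name: str
    max_iops: int
    latency_ms: float

@dataclass
class Intent:
    """An application's declared storage intent (hypothetical)."""
    min_iops: int
    max_latency_ms: float

def place(intent, nodes, replicas=2):
    """Return the nodes that satisfy the intent, lowest latency first."""
    fits = [n for n in nodes
            if n.max_iops >= intent.min_iops
            and n.latency_ms <= intent.max_latency_ms]
    return sorted(fits, key=lambda n: n.latency_ms)[:replicas]

nodes = [Node("flash-1", 200_000, 0.5),
         Node("flash-2", 150_000, 0.7),
         Node("hdd-1", 5_000, 8.0)]
chosen = place(Intent(min_iops=50_000, max_latency_ms=1.0), nodes)
print([n.name for n in chosen])  # ['flash-1', 'flash-2']
```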
Overall, a data fabric platform automates storage provisioning for applications, significantly simplifying consumption compared with legacy systems, where provisioning must be done manually. It’s faster and more adaptive, which lets enterprise IT teams focus on building and improving the applications themselves. Data fabric technology is likely to become the data center solution of choice for the majority of enterprises within the next few years, a competitive advantage that will set IT-savvy companies miles ahead.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

8:17p
CEO: Level 3 Deal Doesn’t Affect CenturyLink Data Center Strategy Review
CenturyLink’s planned $34 billion acquisition of Level 3 Communications, a blockbuster deal that would shake up the enterprise telecommunications market if closed, will not have a substantial impact on the ongoing review of CenturyLink’s data center strategy, which may result in a sale of some or all of the company’s data center assets, CenturyLink’s CEO said in a call with analysts Monday.
The Monroe, Louisiana-based telco’s biggest business is providing wireline phone and internet services to businesses and consumers around the US, but it also operates a global network of data centers, where it provides everything from colocation to cloud and managed services. As wireline revenues declined and the cloud and colocation markets became increasingly competitive, dominated respectively by the likes of Amazon and Equinix, CenturyLink’s management last year kicked off a strategic review of its data center assets, citing a desire to reduce the amount of capital investment the segment demands.
The data center strategy review continues, and “we expect to successfully complete our process … during the fourth quarter,” Glen Post, CenturyLink CEO, said on the call dedicated to the Level 3 deal.
While Level 3 has many data centers around the world – 350, according to its website – they are primarily focused on providing colocation services for network operators, and most of them are not comparable to CenturyLink’s colocation facilities, Post said.
Still, the acquisition would nearly double CenturyLink’s colocation market share in North America, according to Liz Cruz, associate director of data center infrastructure at the market research firm IHS Markit. Here’s IHS’s breakdown of the North American colocation market, including a combined CenturyLink/Level 3:

CenturyLink’s current data center operation started taking shape when the company acquired Savvis in 2011, dishing out $2.5 billion for the data center provider. It has been expanding its data center portfolio ever since, including after the announcement of the strategic review.
Read more: CenturyLink Data Center Team Keeps Eye on the Ball Despite Uncertain Future
The company doesn’t own much of the real estate that houses its colocation data centers, leasing space from data center landlords, primarily Digital Realty Trust, which lists CenturyLink as its second-biggest tenant by annualized rent. The telco has about 2.3 million square feet across 51 locations under contract with Digital, a commitment that is also reportedly a major hurdle in the divestment of its data center assets.
Earlier this month, Reuters reported that European investment firm BC Partners was the leading bidder on CenturyLink’s data center portfolio, citing anonymous sources.
Last month, CenturyLink announced it was looking to cut 8 percent of its workforce, which amounts to about 3,400 people.