Data Center Knowledge | News and analysis for the data center industry
Monday, March 27th, 2017
Digital Bridge Buys Vantage, Silicon Valley’s Largest Wholesale Data Center Firm

Boca Raton-based Digital Bridge Holdings just cut another large notch into its already ample M&A belt, acquiring Vantage Data Centers, the largest wholesale data center landlord in Silicon Valley, in a deal that had been rumored since January.
Santa Clara-based Vantage becomes the wholesale data center platform for Digital Bridge, a communications infrastructure investor that entered the data center space last year intending to become one of the forces driving the current wave of consolidation in the market. Digital Bridge plans to invest in expanding Vantage, which currently operates in Silicon Valley and Quincy, Washington, into new markets alongside its existing cloud, IT services, and large enterprise customers.
The due-diligence process around the deal showed Digital Bridge that Vantage “under promised and over delivered,” Digital Bridge CEO Marc Ganzi said in an interview with Data Center Knowledge. “At the end of the day, Vantage will be able to expand if customers have confidence and want to follow Vantage into other markets to assist with future capacity needs.”
See also: Meet Digital Bridge, a New Consolidator in the US Data Center Market
Rapidly Buying Data Centers
Digital Bridge began its data center buying spree last July with the acquisition of retail colocation and managed services provider DataBank. In January 2017, DataBank announced the acquisition of Salt Lake City-based C7 Data Centers, as well as two data centers in Cleveland and Pittsburgh, considered “key interconnection assets,” purchased from 365 Data Centers.
Vantage was purchased by a consortium including “Digital Bridge Holdings, LLC, a leading global communications infrastructure company, Public Sector Pension Investment Board (PSP Investments), and TIAA Investments (an affiliate of Nuveen), which made the investment on behalf of TIAA’s general account.” Financial terms of the private purchase from Silver Lake were not disclosed.
To steer Digital Bridge’s data center strategy, Ganzi brought on board Michael Foust, co-founder of the world’s largest wholesale data center provider Digital Realty Trust and its former CEO. He’s been serving as DataBank chairman and has now also been named chairman of Vantage.
In addition to data centers, Digital Bridge owns several wireless tower and communications infrastructure companies, including: Vertical Bridge, ExteNet Systems, Mexico Tower Partners, and Andean Tower Partners.
“Resetting the Shot Clock”
Sureel Choksi, Vantage president and CEO, who is staying in his seat post-acquisition, told Data Center Knowledge that he felt “relieved and excited” to be teaming up with Digital Bridge and Foust after an eight-month process. He said the deal was “the ideal scenario,” since existing Vantage management, employees, and customer relationships all remain in place.
Each member of the Vantage management team has also invested in the company alongside the buyer consortium, the company said in a statement.
While the company’s former private-equity owner, Silver Lake Partners, was exploring its options for a sale of Vantage, it felt like “running out the clock” at the end of an NCAA tournament game, he said. The shot clock has now been reset.
Since 2010, Vantage has built 51MW of IT capacity in Santa Clara and secured expansion capacity for 93MW total. The company’s Quincy campus currently has a 6MW data center and additional land and power for expansion.
Read more: How Vantage Data Centers ‘Created Land’ For a 51 MW Santa Clara Expansion Campus
Building a Platform of Scale
According to Ganzi, in addition to building first-class facilities, the Vantage team understood the intricacies of underwriting and allocating capital wisely, things that are very important to the long-time real estate investor.
The three investors acquiring Vantage in aggregate have over $1 trillion worth of assets under management.
Ganzi previously was CEO and sole founder of Global Tower Partners, which was acquired by publicly traded wireless tower REIT American Tower Corporation in October 2013.
“The data center space is actually in the early innings,” he told us in an interview earlier this year. “There’s still a fantastic opportunity to roll up the space and to create a platform of scale.”
His company is now well on its way to making that happen with both Choksi and Foust on board.
Deep Learning Driving Up Data Center Power Density 
Few people on the planet know more about building computers for Artificial Intelligence than Rob Ober. As the top technology exec at Nvidia’s Accelerated Computing Group, he’s the chief platform architect behind Tesla, the most powerful GPU on the market for Machine Learning, which is the most widespread type of AI today.
GPUs, or Graphics Processing Units, take their name from their original purpose, but their applications today stretch far beyond that. Supercomputer designers have found them ideal for offloading huge chunks of workloads from CPUs in the systems they build; they’ve also proven to be super-efficient processors for a Machine Learning approach called Deep Learning. That’s the type of AI Google uses to serve targeted ads and Amazon Alexa taps into for instantaneous answers to voice queries.
Creating algorithms that enable computers to learn by observation and iteration is undoubtedly complex; also incredibly complex is designing computer systems to execute those instructions and data center infrastructure to power and cool those systems. Ober has seen this firsthand working with Nvidia’s hyper-scale customers on their data center systems for deep learning.
“We’ve been working with a lot of the hyper-scales – really all of the hyper-scales – in the large data centers,” he said in an interview with Data Center Knowledge. “It’s a really hard engineering problem to build a system for GPUs for deep learning training. It’s really, really hard. Even the big guys like Facebook and Microsoft struggled.”
See also: Tencent Cranks up Cloud AI Power with Nvidia’s Mightiest GPUs Yet

Big Basin, Facebook’s latest AI server. Each of the eight heat sinks hides a GPU. (Photo: Facebook)
It Takes a Lot of Power to Train an AI
Training is one type of computing workload involved in deep learning (or rather a category of workloads, since the field is evolving, and there are several different approaches to training). Its purpose is to teach a deep neural network — a network of computing nodes aiming to mimic the way neurons interact in the human brain — a new capability from existing data. For example, a neural net can learn to recognize dogs in photos by repeatedly “looking” at various images that have dogs in them, where dogs are tagged as dogs.
The other category of workloads is inference, which is where a neural net applies its knowledge to new data (e.g. recognizes a dog in an image it hasn’t seen before).
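To make the training/inference split concrete, here is a minimal, purely illustrative sketch in PyTorch; the framework choice, the toy model, and the random stand-in data are assumptions for illustration, not anything Nvidia or its customers have disclosed. Training loops over tagged examples and backpropagates on every pass, which is what makes it so compute-hungry; inference is a single forward pass over new data.

```python
# Minimal sketch of the two workload categories described above.
# Everything here (framework choice, model size, data) is illustrative only.
import torch
import torch.nn as nn

# Toy "dog / not dog" classifier over 64-dimensional feature vectors.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

# Training: repeated forward and backward passes over labeled data.
features = torch.randn(256, 64)                  # stand-in for image features
labels = torch.randint(0, 2, (256, 1)).float()   # 1 = "dog", 0 = "not dog"
model.train()
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()                              # backpropagation: the expensive step
    optimizer.step()

# Inference: one forward pass on data the network has not seen before.
model.eval()
with torch.no_grad():
    new_image = torch.randn(1, 64)
    is_dog = torch.sigmoid(model(new_image)) > 0.5
```

In production, a loop like the one above runs over millions of examples for many passes, typically spread across clusters of GPUs, which is what drives the power densities discussed below.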
Nvidia makes GPUs for both categories, but training is the part that’s especially difficult to support in the data center, because training hardware requires extremely dense clusters of GPUs: interconnected servers with up to eight GPUs each. One such cabinet can easily require 30kW or more — power density most data centers outside of the supercomputer realm aren’t designed to support. Even at the low end of that range, about 20 such cabinets draw as much power as the Dallas Cowboys’ jumbotron at AT&T Stadium, the world’s largest 1080p video display, which contains 30 million light bulbs.
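As a rough sanity check on that 30kW figure, here is a back-of-the-envelope estimate; the wattages and server counts below are assumptions for illustration, not vendor specifications.

```python
# Back-of-the-envelope rack power estimate (all figures below are assumed).
GPU_WATTS = 300          # roughly a Tesla-class training GPU
GPUS_PER_SERVER = 8      # dense training server, as described above
SERVER_OVERHEAD_W = 800  # assumed CPUs, memory, fans, power conversion
SERVERS_PER_RACK = 9     # assumed packing for a full cabinet

server_w = GPUS_PER_SERVER * GPU_WATTS + SERVER_OVERHEAD_W  # ~3,200 W per server
rack_kw = SERVERS_PER_RACK * server_w / 1000                # ~29 kW per cabinet
print(f"Approximately {rack_kw:.0f} kW per cabinet")
```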
“We put real stresses on a lot of data center infrastructure,” Ober said about Nvidia’s GPUs. “With deep learning training you typically want to make as dense a compute pool as possible, and that becomes incredibly power-dense, and that’s a real challenge.” Another problem is controlling the voltage in these clusters. GPU computing, by its nature, produces lots of power transients (sudden spikes in voltage), “and those are difficult to deal with.”
Interconnecting the nodes is another big challenge. “Depending on where your training data comes from it can be an incredible load on the data center network,” Ober said. “You can be creating a real intense hot spot.” Power density and networking are probably the two biggest design challenges in data center systems for deep learning, according to him.

Tesla P100, Nvidia’s most powerful GPU (Image: Nvidia)
Cooling the Artificial Brain
Hyper-scale data center operators – the likes of Facebook and Microsoft – mostly address the power density challenge by spreading their deep learning clusters over many racks, although some “dabble” in liquid cooling or liquid-assist, Ober said. Liquid cooling is when chilled water is delivered directly to the chips on the motherboard (a common approach to cooling supercomputers), while liquid-assist cooling is when chilled water is brought to a heat exchanger attached to an IT cabinet to cool air that is then pushed through the servers.
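For a sense of what liquid-assist cooling has to move, here is a simple heat-transfer estimate; the 30kW load, the temperature rise, and the simplifying assumption that all of the heat is captured by the water are illustrative, not measured values.

```python
# Illustrative chilled-water flow needed to carry away one dense cabinet's heat.
# Assumes all of the heat ends up in the water and a fixed temperature rise.
heat_load_w = 30_000   # one dense training cabinet, per the figure cited earlier
delta_t_k = 10.0       # assumed water temperature rise across the heat exchanger
cp_water = 4186.0      # specific heat of water, J/(kg*K)

flow_kg_s = heat_load_w / (cp_water * delta_t_k)  # ~0.72 kg/s
flow_l_min = flow_kg_s * 60                       # ~43 liters per minute
print(f"About {flow_l_min:.0f} L/min of chilled water to remove {heat_load_w / 1000:.0f} kW")
```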
Not everybody who needs to support high-density deep learning hardware has the luxury of hundreds of thousands of square feet of data center space. Those who don’t, such as the few data center providers that have chosen to specialize in high density, have gone the liquid-assist route. Recently, these providers have seen a spike in demand for their services, driven to a large extent by the growing interest in machine learning.
Both startups and large companies are looking for ways to leverage the technology that is widely predicted to drive the next big wave of innovation, but most of them don’t have the infrastructure necessary to support this development work. “Right now the GPU-enabled workloads are the ones where we’re seeing the largest amount of growth, and it’s definitely the enterprise sector,” Chris Orlando, co-founder of high-density data center provider ScaleMatrix, said in an interview. “The enterprise data center is not equipped for this.”
Hockey-Stick Growth
That spike in growth started only recently. Orlando said his company has seen a hockey stick-shaped growth trajectory, with the knee somewhere around the middle of last year. Other applications driving the spike have been computing for life sciences and genomics (one of the biggest customers at ScaleMatrix’s flagship data center outside of San Diego, a hub for that type of research, is the genomics powerhouse J. Craig Venter Institute), geospatial research, and big data analytics. In Houston, its second data center location, most of the demand comes from the oil and gas industry, whose exploration work requires some high-octane computing power.
Another major ScaleMatrix customer in San Diego is Cirrascale, a hardware maker and cloud provider that specializes in infrastructure for Deep Learning. Read our feature on Cirrascale here.

Inside ScaleMatrix’s data center in San Diego (Photo: ScaleMatrix)
Each ScaleMatrix cabinet can support up to 52kW by bringing chilled water from a central plant to cool air in the fully enclosed cabinet. The custom-designed system’s chilled-water loop is on top of the cabinet, where hot exhaust air from the servers rises to get cooled and pushed back over the motherboards. Seeing growing enterprise demand for high-density computing, the company recently started selling this technology to companies interested in deploying it in-house.
Colovore, a data center provider in Silicon Valley, also specializes in high-density colocation. It is using the more typical rear-door heat exchanger to provide up to 20kW per rack in the current first phase, and 35kW in the upcoming second phase. At least one of its customers is interested in pushing beyond 35kW, so the company is exploring the possibility of a supercomputer-like system that brings chilled water directly to the motherboards.
Today a “large percentage” of Colovore’s data center capacity is supporting GPU clusters for machine learning, Sean Holzknecht, the company’s co-founder and president, said in an interview. Like ScaleMatrix, Colovore is in a good location for what it does. Silicon Valley is a hotbed for companies that are pushing the envelope in machine learning, self-driving cars, and bioinformatics, and there’s no shortage of demand for the boutique provider’s high-density data center space.
Read our feature on Colovore and its niche play in Silicon Valley here.

A look beneath the floor tiles at Colovore reveals the infrastructure that supports its water-cooled doors. (Photo: Colovore)
Demand for AI Hardware Surging
And demand for the kind of infrastructure Colovore and ScaleMatrix provide is likely to continue growing. Machine learning is only in the early innings, and few companies outside of the large cloud platforms, the likes of Google, Facebook, Microsoft, and Alibaba, are using the technology in production. Much of the activity in the field today consists of development work, but that work still requires a lot of GPU horsepower.
Nvidia says demand for AI hardware is surging, a lot of it driven by enterprise cloud giants like Amazon Web Services, Google Cloud Platform, and Microsoft Azure, which offer both machine learning-enhanced cloud services and raw GPU power for rent. There’s hunger for the most powerful cloud GPU instances available. “The cloud vendors who currently have GPU instances are seeing unbelievable consumption and traction,” Nvidia’s Ober said. “It really is telling that people are drifting to the largest instances they can get.”
German Software Maker SAP Sees Benefits From Trump Tax Plans
Sarah McBride (Bloomberg) — German software maker SAP SE plans to capitalize on the Trump administration’s efforts to encourage the repatriation of cash held by U.S. companies overseas, which could set the stage for spending by businesses on large-scale software upgrades.

“If a large company repatriated cash and wanted to put it to work, software projects would be an obvious choice,” SAP Chief Executive Officer Bill McDermott said in an interview.
Business from the U.S. made up about 31 percent of SAP’s fourth-quarter revenue of 6.72 billion euros ($7.25 billion), and about one-quarter of its 84,000 employees are U.S. based. Coupled with any incentives that might emerge for infrastructure spending, a tax break for repatriation of overseas profits may offer sizeable benefits for SAP, McDermott said, along with other software companies.
Separately, the software sector could see stepped-up acquisitions if the cash repatriation tax break comes through. However, SAP, as a German company, would not benefit from that tax incentive. McDermott has said that after an acquisitions binge in recent years, SAP would likely be open to only smaller deals this year.
It is unclear what results might emerge from a repatriation of cash to the U.S. After a 2004 tax holiday, many businesses used the proceeds to buy back shares or increase dividends rather than to invest.
McDermott was in San Francisco last Wednesday for the announcement of SAP.iO, a fund to which the company has allocated an initial $35 million for early-stage investments in software companies. SAP also announced new incubator programs in San Francisco and Berlin.
“This is a chance to bring you right into the core business, to give you a shot at things entrepreneurs wouldn’t normally be able to do,” he told the startup founders gathered at the San Francisco incubator. Those selected will receive SAP mentoring and customer introductions.
SAP.iO is separate from Sapphire Ventures, an SAP-backed investment firm that generally invests at later stages.
What’s Fueling the Rise of Micro Data Centers

Two years ago, a record number of Star Wars fans trying to simultaneously buy tickets to the opening of The Force Awakens crashed the online booking systems of nearly all major movie theaters. Traffic surged to seven times its typical peak level in less than 24 hours, causing outages and frazzled nerves.
It’s easy enough to shrug off a minor inconvenience like having to wait a few extra hours to buy a movie ticket; it’s a whole different matter when an outage or delay halts automated trains and buses, idles a manufacturing plant, or holds up the processing of critical medical data at a hospital. In certain applications, latency simply cannot be tolerated.
The sheer number of mobile devices now connected to the internet, machine-to-machine (M2M) communication, the Industrial Internet of Things, and resource-intensive applications such as data-heavy streaming video and wearables all contribute to network congestion. Gartner expects digital traffic to grow by 23 percent annually, reaching 8.6 zettabytes of IP traffic by 2018, so congestion is bound to get worse before it gets better.
Installing bigger switches to increase bandwidth in centralized enterprise data centers only goes so far in reducing latency. So, businesses are looking for ways to expand data processing infrastructure closer to where data is actually generated.
Today, many organizations that need to share and analyze a quickly growing amount of data — retailers, manufacturers, telcos, financial services firms, and many more — are turning to localized micro data centers installed on the factory floor, in the telco central office, the back of a retail outlet, etc. The solution applies to a broad base of applications that require low latency, high bandwidth, or both.
Schneider Electric defines a micro data center as “a self-contained, secure computing environment that includes all the storage, processing, and networking required to run the customer’s applications.” They are assembled and tested in a factory environment and shipped in single enclosures that include all necessary power, cooling, security, and associated management tools (DCIM software).
At least three companies in the business of micro data centers — Rittal, Panduit, and Z Modular — will be exhibiting at Data Center World on April 5 and 6. Schneider, on the other hand, will have two representatives sitting on a panel to discuss another of its specialties, “The Energizer Bunny for Data Centers: Microgrids,” on Monday, April 3, from 3:45 p.m. to 4:45 p.m. at Data Center World in Los Angeles. Register here for the conference.
Micro data centers are designed to minimize capital outlay, reduce footprint and energy consumption, and increase speed of deployment.
Several business and technology trends have created the conditions for micro data centers to emerge as a solution. According to a Schneider whitepaper titled Practical Options for Deploying Small Server Rooms and Micro Data Centers, they are:
- Compaction: Virtualized IT equipment in cloud architectures that used to require 10 IT racks can now fit into one.
- IT convergence and integration: Servers, storage, networking equipment, and software are being integrated in factories for more of an “out of the box” experience.
- Latency: There is a strong desire, business need, or sometimes even life-critical need to reduce latency between centralized data centers (e.g. cloud) and applications.
- Speed to deployment: To either gain a competitive advantage or secure business.
- Cost: In many cases micro data centers can utilize “sunk costs” in facility power and cooling infrastructure, meaning they can take advantage of excess capacity that for one reason or another isn’t being used. This kind of under-utilization is a common issue in enterprise data centers.
Micro data centers are key for optimizing the performance and usefulness of mobile and other networked devices via the cloud. Service providers have embraced this vision most strongly from the start, but enterprise IT pros will likely follow before long.
A version of this article originally appeared on AFCOM.
NTT Plans Global Data Center Network for Connected Cars

NTT Communications Corp. is planning to build a global technology infrastructure optimized for the Internet of Things and focused mainly on the needs of connected cars. The infrastructure will consist of data centers, network backbone, and other NTT services.
The project is part of a series of steps various subsidiaries of the Japanese telecommunications giant NTT Group have agreed to take as part of the parent company’s new collaboration agreement with Toyota. The two companies have teamed up to push research and development, as well as standardization of connected-car technology, both at the car and the backend data infrastructure levels.
A major focus of the collaboration will be studying “the network topology of global infrastructure and optimal data center deployment necessary for safe and reliable collection and distribution of large amounts of data, based on assumption of vehicle use cases,” the companies said in a statement released Monday.
As connected-car technology evolves, and especially as automakers move toward self-driving vehicles, latency of data transfer between a car and some processing backend becomes more and more crucial. The time it takes for data to travel between a car and a remote centralized data center facility is too long for the kind of real-time decision making that needs to be done on the road. That’s why the processing infrastructure topology for connected cars is leaning toward a more distributed design, where compute and storage nodes are placed closer to where most of the driving happens (for example at city intersections).
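A rough propagation-delay estimate shows why distance matters so much here; the distances, the fiber speed, and the flat routing overhead below are assumptions for illustration only, not figures from NTT or Toyota.

```python
# Illustrative round-trip latency: remote centralized data center vs. nearby edge node.
# Distances and routing overhead are assumed; real networks add queuing and extra hops.
SPEED_IN_FIBER_KM_S = 200_000  # light travels at roughly two-thirds of c in optical fiber

def round_trip_ms(distance_km: float, routing_overhead_ms: float = 5.0) -> float:
    """Two-way propagation delay plus a flat allowance for routing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000 + routing_overhead_ms

print(round_trip_ms(1500))                         # distant data center: ~20 ms
print(round_trip_ms(10, routing_overhead_ms=1.0))  # roadside edge node: ~1.1 ms
```

At highway speeds a car covers more than half a meter in 20 milliseconds, which is why shaving the round trip down to a millisecond or two matters for real-time decisions on the road.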
NTT Com’s global data center and network infrastructure puts the company in a good position to take on such a project. Its global IP backbone stretches across the US, Europe, and Asia, and has points of presence in Australia and Brazil. It is the fifth-largest retail colocation provider in the world (although it also provides wholesale data center space). NTT’s data center brands include RagingWire in the US, e-shelter and Gyron in Europe, and Netmagic in India, among others.
Read more: NTT Names Adams RagingWire CEO, Takes Full Ownership of Company
The high-level goals of NTT’s partnership with Toyota are to identify the right platform for connected-car data collection and analytics, the right data center infrastructure, and the optimal mobile communications system (leaning heavily toward standardization on 5G), and to develop technologies for driver services, such as driving advice based on data from inside and outside the vehicle, using voice interaction technology and artificial intelligence.
While Toyota works on technology on the car end of the system, NTT subsidiaries will work on the various infrastructure pieces. NTT Data Corp., for example, will work on the data collection, accumulation, and analytics platform; NTT Docomo will build on its efforts to standardize 5G to promote standardization of the latest mobile communications technology for connected cars.