Data Center Knowledge | News and analysis for the data center industry
Monday, July 18th, 2016
12:00p
Data Center Customers Want More Clean Energy Options
Today, renewable energy as a core part of a company’s data center strategy makes more sense than ever, and not only because it looks good as part of a corporate sustainability strategy. The price of renewable energy has come down enough over the last several years to be competitive with energy generated by burning coal or natural gas, but there’s another business advantage to the way most large-scale renewable energy purchase deals are structured today.
Called Power Purchase Agreements, they secure a fixed energy price for the buyer over long periods of time, often decades, giving the buyer an effective way to hedge against energy-market volatility. A 20-year PPA with a big wind-farm developer insures against sticker shock at the pump for a long time, which is a valuable proposition for any major data center operator, since energy is one of the biggest operating costs.
Internet and cloud services giants, who operate some of the world’s largest data centers, are well aware of this, and so is the Pentagon. The US military is second only to Google in the amount of renewable energy generation capacity it has secured through long-term PPAs, according to a recent Bloomberg report.
Data Center Providers Enter Clean-Energy Space
Google’s giant peers, including Amazon Web Services, Facebook, Microsoft, and Apple, are also on Bloomberg’s list of 20 institutions that consume the most renewable energy through such agreements, and so are numerous US corporations in different lines of business, such as Wal-Mart Stores, Dow Chemical, and Target. There are two names on the list, however, that wouldn’t have ended up on it had Bloomberg compiled it before last year: Equinix and Switch SuperNAP.
Both are data center service providers, companies that provide data center space and power to other companies, including probably all of the other organizations on the list, as a service. The main reason companies like Equinix and Switch wouldn’t make the list in 2014 is that there wasn’t a strong-enough business case for them to invest in renewable energy for their data centers. There was little interest from customers in data center services powered by renewable energy.
While still true to a large extent, this is changing. Some of the biggest and most coveted users of data center services are more interested than ever in powering as much of their infrastructure with renewable energy as possible, and being able to offer this service will continue growing in importance as a competitive strategy for data center providers.
Just last week, Digital Realty Trust, also one of the world’s largest data center providers, announced it had secured a wind power purchase agreement that would cover the energy consumption of all of its colocation data centers in the US.
More Interest from Data Center Customers
According to a recent survey of consumers of retail colocation and wholesale data center services by Data Center Knowledge, 70 percent of these users consider sustainability issues when selecting data center providers. About one-third of the ones that do said it was very important that their data center providers power their facilities with renewable energy, and 15 percent said it was critical.
Survey respondents are about equally split between wholesale data center and retail colocation users from companies of various sizes in a variety of industry verticals, with data center requirements ranging from less than 10kW to over 100MW. More than half are directly involved in data center selection within their organizations.
Most respondents (70 percent) said their interest in data centers powered by renewable energy would increase over the next five years. More than 60 percent have an official sustainability policy, while 25 percent are considering developing one within the next 18 months.
Download results of the Data Center Knowledge survey in full: Renewable Energy and Data Center Services in 2016
While competitive with fossil-fuel-based energy, renewable energy still often comes at a premium. The industry isn’t yet at the level of sophistication where a customer can choose between data center services powered by renewables as an option – and pay accordingly – or regular grid energy that’s theoretically cheaper. Even utilities, save for a few exceptions, don’t have a separate rate structure for renewable energy.
The options for bringing renewable energy directly to data centers today are extremely limited. Like internet giants, Equinix and Switch have committed to buying an amount of renewable energy that’s equivalent to the amount of regular grid energy their data centers in North America consume, but it doesn’t mean all that energy will go directly to their facilities. This is an effective way to bring more renewable generation capacity online, but it does little to reduce data center reliance on whatever fuel mix supplies the grids the facilities are on for both existing and future demand.
If, however, more utilities started offering renewable energy as a separate product, with its own rate – as Duke Energy has done in North Carolina after being lobbied by Google – data center providers would be able to offer the option to their customers, and it would probably be a popular option, even if it meant paying a premium. According to our survey, close to one-quarter of data center customers would “probably” be willing to pay a premium for such a service. Eight percent said they would “definitely” be willing to do so, and 37 percent said “possibly.”
At no additional cost, however, 40 percent said they would “definitely” be more interested in using data center services powered by renewable energy.
As the survey shows, interest in renewable energy among users of third-party data center services is on the rise, and if more utilities and data center providers can find effective ways to offer clean energy to their end users, they will find that there is not only an appetite for it in the market, but also that the appetite is growing.
2:00p
Performance Indicator, Green Grid’s New Data Center Metric, Explained
The Green Grid, the data center industry group best known for creating the industry’s most popular data center efficiency metric, Power Usage Effectiveness (PUE), has developed a new metric for data center operators, called Performance Indicator.
The paper that describes it is currently available to the organization’s members only, but Data Center Knowledge received an early look. Here is what you need to know:
The Green Grid published PUE in 2007. Since then, the metric has become widely used in the data center industry. Not only is it a straightforward way to take the pulse of a data center’s electrical and mechanical infrastructure efficiency, but it is also a way to communicate how efficient or inefficient that infrastructure is to people who aren’t data center experts.
Building on PUE with Two More Dimensions
Performance Indicator builds on PUE, using a version of it, but also adds two other dimensions to infrastructure efficiency, measuring how well a data center’s cooling system does its job under normal circumstances and how well it is designed to withstand failure.
Unlike PUE, which focuses on both cooling and electrical infrastructure, PI is focused on cooling. The Green Grid’s aim in creating it was to address the fact that efficiency isn’t the only thing data center operators are concerned with. Efficiency is important to them, but so are performance of their cooling systems and their resiliency.
All three – efficiency, performance, and resiliency – are inextricably linked. You can improve one to the detriment of the other two.
By raising the temperature on the data center floor, for example, you can get better energy efficiency by reducing the amount of cold air your air conditioning system is supplying, but raise it too much, and some IT equipment may fail. Similarly, you can make a system more resilient by increasing redundancy, but increasing redundancy often has a negative effect on efficiency, since you now have more equipment that needs to be powered and more opportunity for electrical losses. At the same time, more equipment means more potential points of failure, which is bad for resilience.
Different businesses value these three performance characteristics differently, Mark Seymour, CTO of Future Facilities and one of the PI metric’s lead creators, says. It may not be a big deal for Google or Facebook if one or two servers in a cluster go down, for example, and they may choose not to sacrifice an entire multi-megawatt facility’s energy efficiency to make sure that doesn’t happen. If you’re a high-frequency trader, however, a failed server may mean missing out on a lucrative trade, and you’d rather tolerate an extra degree of inefficiency than let something like that happen.
PI measures where your data center is on all three of these parameters and, crucially, how a change in one will affect the two others. This is another crucial difference from PUE: PI, used to its full potential, has a predictive quality PUE does not.
It is three numbers instead of one, making PI not quite as simple as PUE, but Seymour says not to worry: “It’s three numbers, but they’re all pretty simple.”
The Holy Trinity of Data Center Metrics
The three dimensions of PI are PUE ratio, or PUEr, IT Thermal Conformance, and IT Thermal Resilience. Their relationship is visualized as a triangle on a three-axis diagram:

Example visualization of Performance Indicator for a data center (Courtesy of The Green Grid)
PUEr is a way to express how far your data center is from your target PUE. The Green Grid defines seven PUE ranges, from A to G, each representing a different level of efficiency. A, the most efficient range, is 1.15 to 1.00, while G, the least efficient one, ranges from 4.20 to 3.20.
Every data center falls into one of the seven categories, and your PUEr shows how far you currently are from the lower end of your target range (remember, lower PUE means higher efficiency).
So, if your facility’s current PUE is 1.5, which places you in category C (1.63 – 1.35), and your target is the top of that range (a PUE of 1.35, the category’s best), you would divide 1.35 by 1.5 and get a PUEr of 90% as a result. You do have to specify the category you’re in, however, so the correct way to express it would be PUEr(C)=90%.
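For illustration, here is a minimal sketch of that calculation in Python. Note that only the range bounds actually quoted in this article (A, C, and G) are filled in; The Green Grid’s paper defines all seven.

```python
# Sketch of the PUEr calculation. Only the bounds quoted in this article
# (ranges A, C, and G) are included; The Green Grid's paper defines all seven.
PUE_RANGES = {
    "A": (1.00, 1.15),  # most efficient
    "C": (1.35, 1.63),
    "G": (3.20, 4.20),  # least efficient
}

def pue_ratio(current_pue: float, target_category: str) -> str:
    """Express how close a facility is to the best (lowest) PUE of its target range."""
    best, _worst = PUE_RANGES[target_category]
    return f"PUEr({target_category})={best / current_pue:.0%}"

print(pue_ratio(1.5, "C"))  # -> PUEr(C)=90%
```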
IT Thermal Conformance is simply the proportion of IT equipment that is operating inside ASHRAE’s recommended inlet-air temperature ranges. In other words, it shows you how well your cooling system is doing what it’s designed to do. To find it, divide the amount of equipment that’s within the ranges by the total amount of equipment, Seymour explains.
The Green Grid chose to use ASHRAE’s recommendations, but data center operators may choose to determine themselves what temperature ranges are acceptable to them or use manufacturer-specified thermal limits without degrading the metric’s usefulness, he adds.
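In code, the calculation is a one-liner. A minimal sketch, assuming per-device inlet temperature readings are available and using ASHRAE’s recommended 18–27°C envelope as the default band:

```python
def thermal_conformance(inlet_temps_c, low_c=18.0, high_c=27.0):
    """Fraction of IT equipment whose inlet air falls inside the acceptable band.

    Defaults approximate ASHRAE's recommended envelope; substitute your own
    or manufacturer-specified limits, as the article notes you may.
    """
    in_band = sum(1 for t in inlet_temps_c if low_c <= t <= high_c)
    return in_band / len(inlet_temps_c)

# Example: 46 of 48 monitored devices inside the band -> ~95.8%
readings = [22.5] * 46 + [28.1, 29.4]
print(f"IT Thermal Conformance: {thermal_conformance(readings):.1%}")
```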
IT Thermal Resilience shows how much IT equipment is receiving cool air within ASHRAE’s allowable or recommended temperature ranges when redundant cooling units are not operating, either because of a malfunction or because of scheduled maintenance. In other words, if instead of 2N or N+1, you’re left only with N, how likely are you to suffer an outage?
This is calculated the same way IT Thermal Conformance is calculated, only the calculation is done while the redundant cooling units are off-line. Of course, The Green Grid would never tell you to intentionally turn off redundant cooling units. Instead, they recommend that this measurement be taken either when the units are down for maintenance, or, better yet, that you use modeling software to simulate the conditions.
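Computationally, resilience reuses the conformance function sketched above; only the input changes. A sketch with hypothetical readings taken (or simulated) at N cooling capacity:

```python
# Same calculation as conformance, but with inlet temperatures measured or
# simulated while redundant cooling units are offline (N capacity only).
# ASHRAE's allowable range is wider than its recommended one, so you may
# choose to pass wider low_c/high_c bounds here.
temps_at_n_capacity = [24.0] * 40 + [30.5] * 8  # hypothetical readings
print(f"IT Thermal Resilience: {thermal_conformance(temps_at_n_capacity):.1%}")
```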
Modeling Makes PI Much More Useful
Modeling software with simulation capabilities used in combination with PI can be a powerful tool for making decisions about changes in your data center. You can see how adding more servers will affect efficiency, resiliency, and cooling capacity in your facility, for example.
This is where it’s important to note that Future Facilities is a vendor of modeling software for data centers. But Seymour says that about 50 members of The Green Grid from many different companies, including Teradata, IBM, Schneider Electric, and Siemens, participated in the metric’s development, implying that the process wasn’t influenced by a single vendor’s commercial interest.
Four Levels of Performance Indicator
The Green Grid describes four levels of PI assessment, ranging from least to most precise. Not every data center is instrumented with temperature sensors at every server, and Level 1 is an entry-level assessment, based on rack-level temperature measurements. ASHRAE recommends taking temperature readings at three points per rack, which would work well for a Level 1 PI assessment, Seymour explains.
Level 2 is also based on measurements, but it requires measurements at every server. To get this level of assessment, a data center has to be instrumented with server-level sensors and DCIM software or some other kind of monitoring system.
If you want to get into predictive modeling, welcome to PI Level 3. This is where you make a PI assessment based on rack-level temperature readings, but you use them to create a model, which enables you to simulate future states and get an idea of how the system may behave if you make various changes. “That gives the opportunity to start making better future plans,” Seymour says.
This is where you can also find out whether your data center can handle the load it’s designed for. Say you’re running at 50% of the data center’s design load, which happens to be 2MW. If you create a model, simulate a full-load scenario, and find that your IT Thermal Conformance or IT Thermal Resilience only stays where you want it up to 1.8MW, you’ve paid for capacity you can’t actually use.
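A hypothetical sketch of that check, reusing the thermal_conformance function from earlier; load_simulated_inlet_temps is a stand-in for whatever your modeling tool exports:

```python
# Hypothetical capacity check: run the model at full design load and feed
# the simulated inlet temperatures through the same conformance calculation.
def load_simulated_inlet_temps(scenario: str) -> list[float]:
    """Stand-in for a modeling tool's export of simulated inlet temperatures."""
    return [23.0] * 44 + [28.5] * 4  # placeholder data for illustration

full_load_temps = load_simulated_inlet_temps("model_at_2MW_design_load")
if thermal_conformance(full_load_temps) < 0.99:  # against your own target
    print("Thermal targets are missed before the facility reaches design load")
```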
Those are just a couple of possible use cases. There are many more, especially with PI Level 4, which is similar to Level 3 but with a much more precise model. This model is calibrated using temperature readings from as many points on the data center floor as possible: servers, perforated tiles, return-air intake on cooling units, etc. This is about making sure the model truly represents the state of the data center.
Different operators will choose to start at different levels of PI assessment, Seymour says. Which level they choose will depend on their current facility and their business needs. The point of having all four levels is to avoid shutting anyone out of the new metric because their facility doesn’t have enough instrumentation or because they haven’t been using monitoring or modeling software.
3:00p
Five Key Emerging Trends Impacting Data Centers in 2016
Brought to you by AFCOM
“Show me an IT professional who can predict the exact timing, size, method, and location for their next data center and I will show you someone with a defective crystal ball. That’s the nature of this industry,” says Data Center World speaker Jack Pouchet, the VP of marketing development and energy initiatives for Emerson Network Power.
Change has always been the cornerstone of technology, and that has never been more apparent than today. The sheer amount of data being generated by Internet users is reason alone that the data center of today must change. Pouchet will address other key emerging trends he expects to substantially impact how future data centers are built and designed at Data Center World, Sept. 12-15 in New Orleans. Here’s a sneak peek.
The Cloud of Many Drops
More and more companies are looking beyond virtualization and to the cloud to address underutilization of computing resources, and for good reason. A 2015 study by Stanford’s Jonathan Koomey found that enterprise data center servers still only deliver, on average, between 5 and 15 percent of their maximum computing output over the course of a year. A surprising 30 percent of physical servers had been comatose for six months or more. Enter the shared services cloud arena. The fact that companies can now offload space-consuming applications and non-critical workloads to shared space means fewer data center builds and a little breathing room. “That allows for more intelligent decisions on the core building they already have,” said Pouchet.
The Data Fortress
It’s hard not to put security first when it comes to data center design. The total cost of a privacy-related data security breach stands at $3.8 million, and the number of security-related downtime incidents rose from two percent in 2010 to 22 percent in 2015, according to the Ponemon Institute. This affects the way enterprises approach resiliency, availability, storage, you name it. Will data reside in the cloud or on-site? Are you capable of bringing up systems fast enough to avoid serious downtime and loss of data and, oftentimes, reputation? Those are questions that every operator of an existing data center and every builder of a new one must consider.
Beyond PUE and Green
Data centers have certainly made plenty of headlines with respect to being energy hogs, thus the push toward greater efficiency, new cooling techniques, and the acronym PUE. Today, they’re being singled out as abusers of what is rapidly becoming a rare commodity: water. In fact, when Pouchet talked about the 5 billion people without Internet access, he added that 1 billion of those do not have access to potable water. So, it doesn’t bode well that, according to Pouchet, also a Green Grid board member, a modest 1 MW facility can easily consume more than 4.4 million liters (1.2 million gallons) of water annually.
This focus on water has spawned a new acronym, WUE (Water Usage Effectiveness), and new thinking about how to cool the data center. Cooling the entire room is the typical approach; Pouchet, however, pointed to a newer approach that removes heat at the rack or aisle instead. Other new considerations include evaporative cooling technologies and economizers that utilize outdoor air. This has become yet another key factor set to impact the future data center.
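The Green Grid defines WUE as annual site water usage divided by IT equipment energy, in liters per kilowatt-hour. A quick back-of-the-envelope sketch in Python, assuming Pouchet’s 1MW figure refers to IT load running year-round:

```python
def wue(annual_water_liters: float, annual_it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water consumed per kWh of IT energy."""
    return annual_water_liters / annual_it_energy_kwh

# Pouchet's example: ~4.4 million liters/year for a 1 MW facility.
# Assumes 1,000 kW of IT load running all 8,760 hours of the year.
print(f"WUE: {wue(4_400_000, 1_000 * 8_760):.2f} L/kWh")  # ~0.50
```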
Edge Computing
Because the fabric of the Internet is changing so rapidly, we’re seeing more and more data centers decentralizing and being supported by micro data centers. In other words, data, and its processing component, are moving as close to user groups as possible in the form of edge and (small but growing) micro data centers. For example, just as content delivery networks cached data closer to customers, there will be more satellite data centers providing cloud-based content closer to the network’s edge. As a result, tier-two and tier-three cities grow more important as traffic moves away from tier-one cities such as London, New York, Chicago, and San Francisco, to the next set of markets closer to the edge and to users.
Data centers are in a constant state of flux, and the above emerging trends will definitely shape how they look, feel, and perform in the future. It’s important that you keep all of them in mind—and their possible ramifications—as you make decisions when building new data centers or redesigning existing ones.
Pouchet will present, “Five Key Emerging Trends Impacting Data Centers in 2016” on Monday, Sept. 12, from 2:10 – 3:10. Register for Data Center World today!
This first ran at http://afcom.com/Public/Resource_Center/Articles_Public/Five_Key_Emerging_Trends_Impacting_Data%20Centers_in_2016
5:18p
SoftBank to Buy Britain’s ARM for $32 Billion in Record Deal
(Bloomberg) — SoftBank Group agreed to buy ARM Holdings for 24.3 billion pounds ($32 billion), securing a slice of virtually every mobile computing gadget on the planet and future connected devices in the home.
The Japanese company is offering 1,700 pence in cash per share, a 43 percent premium to Friday’s close, according to a statement Monday. The deal would be the biggest ever for SoftBank, which under CEO Masayoshi Son became one of Japan’s most acquisitive companies, with stakes in wireless carrier Sprint and Alibaba Group Holding.
DCK: While most of its revenue comes from the mobile market, ARM has been actively trying to expand into the data center space. Multiple vendors, such as Applied Micro, AMD, and Cavium, have built processors for servers and other data center gear using ARM’s architecture.
Companies that have said publicly that they have deployed ARM servers in data centers include PayPal — although it is unclear how big PayPal’s deployment of these machines is — and Online.net, the web hosting subsidiary of the French telco Iliad, which uses ARM chips to power bare metal cloud servers it designs in-house.
Vapor IO, a data center hardware and management software startup, uses ARM chips to power its top-of-rack server management controller.
Read more: ARM Expects to Challenge Intel for Server Customers
“I have admired this company for over ten years,” Son told reporters at a press conference in London on Monday. “This is an endorsement into the view of the future of the UK.”
Son said that he had held intermittent conversations about a strategic partnership or joint venture with ARM over the course of several years, but only made a serious approach to ARM’s chairman Stuart Chambers two weeks ago. All due diligence for the deal has been done in the past two weeks, Son said.
“This all happened very, very quickly,” ARM CEO Simon Segars said in a telephone interview. “They made an offer that was very, very compelling for our shareholders and a proposal for how to invest in the company for the future.”
SoftBank will gain control of a cash-generating mobile industry leader that gets royalties every time clients such as Apple, Samsung Electronics or Qualcomm adopt its designs, which are considered power-saving and efficient.
The deal is the biggest ever Japanese acquisition in Europe, surpassing Japan Tobacco’s approximately $15 billion acquisition of Gallaher Group completed in 2007. It is also the biggest ever Asian takeover of a UK company according to data compiled by Bloomberg.
SoftBank will fund the acquisition partly through cash and loans, according to a statement Monday. ARM’s shareholders will get a dividend of 3.78 pence per share.
ARM shares rose as much as 47 percent on Monday, to their highest level in more than 18 years, and were trading 43 percent higher at 1,695 pence as of 12 p.m. in London. The company has a market valuation of 23.9 billion pounds.
Brexit Cost
Brexit and the pound’s subsequent decline against the yen and the dollar was not a factor in SoftBank’s decision, Son said. “I did not make the investment because of Brexit,” Son said. “It is not opportunistic about the currency.”
In fact, Son said because ARM’s sales are mostly to customers in the US and Asia, and are largely dollar-denominated, its stock has risen about 15 percent since the EU referendum vote. That means the deal actually became more expensive for SoftBank because of Brexit, not cheaper, he said.
ARM’s headquarters will remain in Cambridge, as will its senior management team, and SoftBank pledged to at least double the company’s employee headcount in the UK over the next five years. ARM currently employs about 4,000 people.
Segars said there will be no changes in the way ARM operates, and SoftBank will allow the company to continue to exist as a standalone unit. “We’ve been completely independent since our IPO and that is something that our partners value,” Segars said, referring to ARM’s semiconductor manufacturing customers. He added that SoftBank “will keep investing in our roadmap of existing technologies” and that “we are not getting acquired by someone who wants to strip costs out of the business.”
Goldman, Lazard
Beyond its sheer scale, the ARM acquisition is unusual for a company that’s preferred to take control through hefty stakes in smaller companies, or those with high-growth potential. The chip designer alone will account for more than a third of SoftBank’s current total international holdings of 8.3 trillion yen ($79 billion) as of July 15.
Goldman Sachs and Lazard & Co. were the lead financial advisers to ARM on the deal, the company said. Raine Group, Robey Warshaw and Mizuho Securities are acting as financial advisers to SoftBank, it said.
ARM has come to dominate the design of smartphone chips and is pushing into servers to challenge Intel, evolving from a small lab in a converted barn to a company whose designs are found in 95 percent of smartphones. With the mobile phone market slowing, ARM is adding new customers in the automotive industry and targeting growth in processors for network equipment makers and servers. The company is also exploring chip designs to boost the graphics capabilities of phones.
“ARM has what we view as an unassailable library of IP processor designs based on chip performance, size and power performance,” Neil Campling, an analyst with Northern Trust Capital Markets, wrote in an e-mail. “ARM is growing at 10x the rate of the semiconductor industry it serves and is increasingly the glue that binds the disruptive forces of the entire digital world, not just $700 smartphones.”
Increased Debt
The deal for ARM comes less than a month after Nikesh Arora, Son’s heir apparent at SoftBank, quit the company. The former Google executive was brought on board to spearhead a search for the next Alibaba, the Chinese e-commerce company SoftBank backed that went on to pull off the world’s largest initial public offering in 2014.
SoftBank will now need to add to its massive debt load to see the acquisition through. The Tokyo-based company carried almost $106 billion of total debt on its balance sheet at the end of March, but less than $23 billion in cash and marketable securities.
However, the price tag may be justified because ARM’s dominance in mobile computing translates into consistent cash flow, and its hardware-light business model has yielded margins north of 95 percent in every quarter since late 2014, Amir Anvarzadeh, Singapore-based head of Japanese equity sales at BGC Partners, said in an e-mail.
China Challenge?
“As much as we hated the decision of buying Sprint, we believe if Son wanted to bet on the sterling recovering he picked a great name in the tech sector,” Anvarzadeh said. “The market will be seeing this acquisition in a positive light despite stretching its balance sheet further.”
ARM’s model is based on the idea of spreading risk and profits. It does the underlying work and makes more money when its customers sell more things. That means brands like Apple and Samsung can focus on higher-level innovations instead of grunt work, while custom chipmakers like Taiwan Semiconductor Manufacturing deal with actual fabrication.
Some smartphone players may dislike the loss of independence at ARM, which is currently held mainly by institutional investors, said Gartner analyst Roger Sheng. Japanese ownership may even hamper ARM’s efforts to expand in China, where tensions with its neighbor run deep and the government is pushing local technology companies to come up with alternatives to foreign-owned technology.
“Their clients and partners are happy to see that ARM is independent because it works out better for the ecosystem,” Sheng said. And “the Chinese government has some political issues with the Japanese government so if ARM is acquired by SoftBank I believe China will invest more to develop their own architecture and maybe some Chinese companies will use other architectures.”
9:18p
Singapore Data Center Startup to Challenge Asia Pacific Players
*Updated with analyst commentary
Asia Pacific as a whole is viewed as a major growth market for cloud services. As such, it is also a major growth market for data center service providers, who give cloud companies based overseas a lower-risk way to get started in new markets than building their own data centers.
Australia, Singapore, and Hong Kong are three of the region’s especially promising markets. There are also Japan and mainland China, but a new Singapore-based data center provider is going after the other three first.
The startup, called AirTrunk, is planning to build data centers in Sydney and Melbourne for an anchor tenant it has already secured, according to a Bloomberg report. It is going after cloud providers as its primary target customers and plans to undercut competitors with lower prices, the report said.
The company claims it will be able to provide lower prices because of its efficient data center design, although details of the design have not been disclosed. This value proposition will be key to AirTrunk’s ability to compete in these highly competitive Asia Pacific markets.
“AirTrunk’s ability to deliver a more cost-effective colocation service compared to the current landscape of wholesale providers in the Asia Pacific region, coupled with management’s experience of doing business in the region, puts it in a favorable position despite being a newcomer in the already hyper-competitive Sydney, Singapore, and Hong Kong markets,” Jabez Tan, research director for data center infrastructure at Structure Research, told Data Center Knowledge.
AirTrunk’s founder, Robin Khuda, who formerly served as CFO of the major Australian data center provider NextDC, told Bloomberg that his company plans to invest A$1.23 billion ($928 million) in Australia over the next three to four years.
About $350 million of that will be spent within the next 12 months, he said, although the financing for this initial round of investment has not been finalized. He expects to finalize it within the next three months.
It will cost another A$1 billion to build the Hong Kong and Singapore data centers the company has planned.
AirTrunk is up against established data center services heavyweights in all markets it is going after. In Australia, it will be competing with the likes of Telstra, which is also a major partner of Khuda’s former employer NextDC, Japan’s NTT Communications, and Silicon Valley-based Equinix, among others.
In the Singapore data center market, which despite its tiny size has about 50 data center providers, AirTrunk will be up against local providers Singtel and Keppel, as well as US-based Digital Realty Trust and Equinix, to name just a few.
Read more: Singapore is a $1B Data Center Market and Growing Fast
Equinix and NTT are also major players in Hong Kong. Other big providers there include Hong Kong-based PCCW and iAdvantage, a subsidiary of SUNeVision.
Read more: Hong Kong, China’s Data Center Gateway to the World
While its initial focus is on the Australia, Hong Kong, and Singapore data center markets, AirTrunk is likely to expand to growing secondary markets in the region as well. AirTrunk “management also understands the importance of entering new frontiers within the region – particularly in growing tier-two markets, such as Philippines, Indonesia, and Thailand,” Tan said.