Data Center Knowledge | News and analysis for the data center industry
Tuesday, October 27th, 2015
12:00p
Immersion Cooling Finds its Second Big Application: Bitcoin Mining Data Centers

What do 1950s military avionics and bitcoin mining servers have in common? In a word – heat, or, more precisely, cooling requirements.
Blowing cold air over the innards of military aircraft and satellite electronics 60 years ago was as inefficient as blowing cold air through power-hungry bitcoin mining servers is today. It was in the '50s that 3M introduced its first dielectric fluid, which found its first application in cooling avionics systems.
In the decades that followed, the single biggest application for 3M’s fluids was in supercomputers, which, because of their power density, were much better off cooled with liquid than with air. Over the last several years, however, as the bitcoin mining industry grew to the point where the biggest players were building their own mining hardware and data centers to house it, cooling electronics with dielectric fluid found its second big application.
Power densities in bitcoin mining data centers are radically higher than in data centers that house traditional IT equipment, and operators of these facilities tend to squeeze every last watt and square foot they can out of them. Some of them have found that bringing bitcoin mining ASICs, or Application-Specific Integrated Circuits, in direct contact with dielectric fluid allows them to pack a lot more mining horsepower into a square foot of data center space.
Non-HPC Liquid Cooling at Massive Scale
A data center being built in Georgia (the former Soviet republic) is one of the world’s biggest showcases for the most unusual approach to liquid cooling: submerging servers completely in fluid. The facility is being built by the bitcoin mining giant BitFury, and the cooling system was designed by Allied Control, a Hong Kong-based engineering company BitFury recently acquired.
According to a case study of the deployment, published this week, BitFury expects its 40 MW facility to support 250 kW per rack at launch. This will not be the power-density limit of the design. The company expects it to support future-generation bitcoin mining hardware that will be even more energy-intensive. For comparison, typical power density of IT gear deployed in traditional enterprise or colocation data centers ranges between 2 kW and 5 kW per rack.
IT racks, in this case, are liquid-filled tanks. The BitFury data center will have 160 of them.
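As a rough sanity check on those figures, here is a minimal sketch (plain arithmetic, using only the numbers cited in the case study) showing how the 40 MW capacity, the 250 kW per tank, and the 160-tank count fit together, and how many conventional 2-5 kW racks the same power budget would feed.

```python
# Back-of-envelope check using the figures cited in the case study.
total_facility_mw = 40     # stated facility capacity
kw_per_tank = 250          # stated per-tank (per-rack) density at launch

tanks_supported = total_facility_mw * 1000 / kw_per_tank
print(f"Tanks supported at 250 kW each: {tanks_supported:.0f}")           # 160

# For contrast, how many traditional 2-5 kW racks the same 40 MW would feed:
for kw_per_rack in (2, 5):
    racks = total_facility_mw * 1000 / kw_per_rack
    print(f"Equivalent {kw_per_rack} kW enterprise racks: {racks:,.0f}")  # 20,000 / 8,000
```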
Boiling Servers
3M’s Novec 7100 fluid used in Allied’s design does not transfer heat very well, so it’s not enough to simply flush electronics with it continuously. But it boils at a relatively low temperature of 142°F (about 61°C). When servers submerged in a specially designed tub heat up, the liquid starts boiling, and the resulting vapor carries the heat upward. Once above the surface, it reaches water-cooled condenser coils, turns back into liquid as its temperature drops, and falls back into the tank.

Illustration by Allied Control
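To give a sense of scale for the boiling cycle described above, here is a minimal latent-heat sketch. The fluid properties used (a heat of vaporization of roughly 112 kJ/kg and a liquid density of about 1.5 kg per liter) are typical published figures for Novec 7100, not numbers from the article, and should be treated as approximations.

```python
# Rough latent-heat estimate for a 250 kW two-phase immersion tank.
# Fluid properties below are approximate published figures for Novec 7100,
# not numbers from the article -- treat them as assumptions.
heat_of_vaporization_kj_per_kg = 112   # energy absorbed per kg of fluid boiled
liquid_density_kg_per_liter = 1.5      # approximate liquid density

tank_heat_load_kw = 250                # per-tank load cited in the case study

# Power (kJ/s) divided by latent heat (kJ/kg) gives the boil-off rate (kg/s).
boil_off_kg_per_s = tank_heat_load_kw / heat_of_vaporization_kj_per_kg
boil_off_liters_per_s = boil_off_kg_per_s / liquid_density_kg_per_liter

print(f"Vaporization rate: ~{boil_off_kg_per_s:.1f} kg/s "
      f"(~{boil_off_liters_per_s:.1f} L/s of liquid) per 250 kW tank")
# The water-cooled coils above the bath condense this vapor back into the
# tank, so the fluid circulates passively, without pumps inside the tank.
```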
3M created the concept itself but made it freely available for anyone to use, which is what Allied did, said Michael Garceau, business development manager at 3M. The approach is called “two-phase immersion cooling,” or 2PIC.
There is at least one other company that sells similar immersion-cooling systems for data centers, but there is one key difference. Green Revolution Cooling also submerges servers into a dielectric mineral oil blend it designed itself, but its approach does not use two-phase immersion. The company claims that its oil, called ElectroSafe, has 1,200 times the heat capacity of air by volume.
Other approaches include filling sealed server enclosures with coolant or isolating coolant flow to the CPUs alone. The latter involves installing a small chamber for coolant directly over the CPU and pushing coolant through it via a system of narrow pipes.
Breaking out of Niche Markets
The power-density advantages of direct liquid cooling and especially immersion cooling are clear. What is unclear is whether it will eventually become useful in more mainstream computing scenarios, which is something 3M is hoping to see.
The alarmist forecasts from about a decade ago of an imminent data center power density crisis have not materialized. Densities have generally gone up, but not to the extent or at the scale predicted.
“The reality is, only in the last two or three years have companies started to more broadly deploy high density to be able to take advantage of the efficiency that that drives,” Sureel Choksi, CEO of the wholesale data center provider Vantage Data Centers, said. “The average data center rack today, in terms of actual utilization, probably has density of about 2kW a rack, which is extraordinarily low.”
The higher the power density, the more sense immersion cooling makes. 3M’s Novec fluid itself is a major cost, and at lower densities the economics of the technology get “less compelling,” Garceau said. He declined to share the price of the fluid, but a report on the liquid cooling market by 451 Research said it could cost up to $50 per liter.
Also undisclosed is the amount of fluid required to cool BitFury’s 250 kW racks. A previous project for which the data was disclosed used 3 liters per kW, Garceau said. In lab tests, 3M simulated a higher IT load and was able to cool 4 kW with less than one liter, he said.
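Those figures allow a rough cost estimate. The sketch below combines the up-to-$50-per-liter price from the 451 Research report with the two fill ratios Garceau mentioned; the resulting totals are illustrative only, since neither BitFury nor 3M has disclosed actual volumes or pricing.

```python
# Illustrative fluid-cost estimate; actual volumes and prices are undisclosed.
price_per_liter_usd = 50      # upper-bound estimate cited by 451 Research
kw_per_tank = 250
tanks = 160

fill_ratios_l_per_kw = {
    "prior project (3 L/kW)": 3.0,
    "3M lab test (<1 L per 4 kW)": 0.25,   # upper bound implied by the lab result
}

for label, liters_per_kw in fill_ratios_l_per_kw.items():
    liters_per_tank = kw_per_tank * liters_per_kw
    site_fluid_cost = liters_per_tank * tanks * price_per_liter_usd
    print(f"{label}: ~{liters_per_tank:,.0f} L per tank, "
          f"~${site_fluid_cost / 1e6:.1f}M of fluid for all {tanks} tanks")
```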
But Garceau’s current challenge extends beyond the bitcoin mining industry. The company wants to make the case that the cooling technology can be useful in more than just a few specialized applications. “There’s a tendency to believe that liquid cooling is special and expensive and niche for supercomputing,” he said.
At least today, however, that belief is not unfounded, since non-HPC deployments of immersion cooling are rare, and the overwhelming majority of the world’s IT gear is cooled by air. Garceau views the emergence of bitcoin mining as the second big application for direct liquid cooling as an opportunity to demonstrate that the story of the technology does not start and end with supercomputers.
One potential application could be cloud-scale computing, he said. “They would have to optimize around high density hardware for the purposes of energy efficiency, construction savings, all the benefits of two-phase immersion cooling,” Garceau said. But making the case to an industry that has an opposite philosophy of data center architecture – scale out versus scale up – will be difficult.

3:00p
SSD or No SSD? That is the Question

Wai T. Lam is co-founder and CTO of Cirrus Data Solutions.
If Shakespeare were writing plays today, he would probably pen a drama or perhaps a comedy about the imbroglios storage administrators get tangled up in every day. The debates over whether to implement SSD for storage are great plot material.
As SSD technology continues to drop in price every few months, many administrators see these disks as viable replacements for the spinning variety. The main advantage of SSD is its superb performance. Even with all the promises of SSD, reality tells us nothing changes overnight in the commercial world of technology, especially in enterprise SAN storage.
For proof, we do not need to look further than the history of tape. In my previous life, the virtual tape libraries (VTLs) we built were bestsellers; yet, ironically, one of the most important sales features of a VTL was its ability to move data to physical tapes.
For all the putative magical properties of SSD, users still face many logistical considerations. The following are just a few examples:
- What to do with the vast infrastructure of existing storage technologies?
- What is the cost and effort required to migrate all the data?
- And, that most nightmarish question: What if I switch over and the application performance does not increase?
Clearly, the allure of SSD performance is irresistible for people looking to improve the performance of their systems. There is plenty of compelling data proving SSD can be very effective. For those who are ready and can afford to switch over, SSD is a very promising paradigm. On the other hand, for those who lack the luxury of switching immediately, the best way to get a taste of SSD’s advantages — while avoiding a frantic scramble with a new infrastructure — is through using a small amount of SSD to accelerate spinning disks (i.e., caching).
This way, without committing to a huge expenditure and a gargantuan effort, one can test the waters before jumping in.
This sounds good, but embracing SSD caching is easier said than done, as caching comes with its own labyrinth of issues, namely:
- Installing SSD caching in the application servers will introduce many risks simply because there will be new hardware and software.
- Factoring in the downtime required.
- Typically, the SSD cache data cannot be shared.
- Adding SSD caching will be costly, and will likely involve vendor lock-in.
- Not all storage systems provide the cache option.
- Inserting a cache appliance into a SAN sounds better — but this raises the question of “how much change will be required to accommodate it?”
I think centralized caching is the most palatable option, particularly if the cache appliance can be inserted easily into a SAN environment. This solution will make SSD storage available to all applications in different hosts, without forcing the storage administrator to replace the entire storage system — and without costing too much.
Yet burning questions remain. What impact will caching have on an already convoluted SAN environment? Will it disrupt production? What if the performance is not better? How does one know whether the cache scheme is working? Is it possible to undo negative changes made to the storage environment?
Pondering all those questions, we can start to create a dream specification for a centralized cache appliance for SAN. The following points are a good start:
- A centralized cache appliance should provide a very large amount of SSD storage as cache — at least 10 percent of the existing storage.
- It should be transparently inserted into the storage links, such that nothing in the SAN environment needs to be changed, including LUN masking, zoning, application host configurations, and so on.
- The appliance should be removed as easily and transparently as it is inserted. This is especially important if one finds the system is not conducive to caching.
- It allows I/O traffic to be analyzed in detail, including the individual paths, initiators, targets, and LUNs — and it delivers complete historical profiles.
- The appliance enables individual hosts or LUNs to be identified and selected for caching.
- It provides detailed I/O access and data read/write patterns in real time and over time, and clearly describes the reasons for cache hits and misses. All of this functionality helps pinpoint the exact amount of cache needed for each LUN (a rough sizing sketch follows this list).
- The appliance should be highly available, with no single point of failure.
- The cost of the appliance should be substantially lower than switching existing storage to all-SSD.
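To make the sizing points above concrete, here is a minimal sketch of how per-LUN profiling data might feed a cache-sizing decision. The LUN names, capacities, and working-set figures are hypothetical placeholders; a real appliance would derive them from the I/O access patterns and hit/miss statistics described in the list.

```python
# Hypothetical per-LUN cache sizing from observed I/O profiles.
# LUN names, capacities, and working-set sizes are illustrative placeholders.

luns = {
    # name: (capacity_gb, working_set_gb) -- working set = hot, frequently re-read data
    "db_lun_01":  (2000, 150),
    "vdi_lun_02": (4000, 320),
    "log_lun_03": (1000,  20),   # mostly sequential writes; caching helps little here
}

total_capacity_gb = sum(cap for cap, _ in luns.values())
guideline_cache_gb = 0.10 * total_capacity_gb        # the "at least 10 percent" rule of thumb
working_set_cache_gb = sum(ws for _, ws in luns.values())

print(f"10%-of-capacity guideline: {guideline_cache_gb:,.0f} GB of SSD cache")
print(f"Sum of observed working sets: {working_set_cache_gb:,.0f} GB")
for name, (cap, ws) in luns.items():
    print(f"  {name}: ~{ws} GB of cache for a {cap} GB LUN ({ws / cap:.0%})")
```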
The bottom line is that the cache appliance should just plug in with minimal effort. Whether the application performance is improved or not, the appliance must provide definitive information on why the cache is efficacious or why it is not.
Based on this information, users will at least have a clear direction on what needs to be done. It may very well turn out that all one needs to do is to balance the data paths a bit, or redistribute the load of certain LUNs to eliminate bottlenecks. And if the appliance works out well, one may use it strategically — to defer switching over to SSD by methodically planning the best moves.
With today’s advanced technologies a hard-working cache appliance (as described above) should be readily available. Such a device would remove the Hamlet-like angst storage administrators feel about “SSD or no SSD.”

6:49p
FedRAMP and Cloud Service Providers – Critical Success Factors

Regulations around cloud computing and the kinds of workloads it hosts are quickly evolving. Today, many cloud service providers (CSPs) are actively looking at ways to support more organizations, more use cases, and a lot more workloads. In some cases, such as those involving federal agencies, a new model was needed to host these workloads. Enter the Federal Risk and Authorization Management Program (FedRAMP) authorization process. On one hand, organizations looking to host these new kinds of workloads can definitely see a boost in business. However, does it really make sense to go through the FedRAMP authorization process? And if you choose to do so, what are the success factors?
In this whitepaper, we take a deep dive into FedRAMP to answer these questions. We also look at how a third-party assessment organization (3PAO) can help streamline the authorization process and help create very real business strategies around federal workloads. It’s like having an experienced hand guide you through the process, including:
- Security assessments
- Leveraging provisional authorization
- Creating ongoing assessments and authorizations
Also, partners around the FedRAMP authorization process can help CSPs understand the specific responsibilities they have to the customer. Furthermore, 3PAO partnerships help align budgets, business leadership, communication, and even outsourcing resources. All of these factors are often overlooked when a FedRAMP assessment begins.
There is good news, however. The right 3PAO will help guide your organization through the entire FedRAMP assessment process. Download this whitepaper today to learn more about FedRAMP and the benefits of a long-term, effective partnership with a 3PAO, which helps keep workloads secure and the business around government clients growing.

8:06p
Contractors Fined after Electrocution Death at Morgan Stanley Data Center

Almost exactly five years ago, in October 2010, an electrical worker at a Morgan Stanley data center in the UK was electrocuted when his forehead accidentally came into contact with live 415V electrical terminals.
Last week, a court in Ipswich found Balfour Beatty Engineering Services and Norland Managed Services, a subsidiary of the Los Angeles-based commercial real estate giant CBRE Group, guilty of negligence that led to the death of 27-year-old Martin Walton and issued fines totaling £380,000, according to the Health and Safety Executive, a UK government body.
London-based Balfour Beatty was contracted to carry out multi-million-pound infrastructure upgrades at Morgan Stanley’s Heathrow data center in Hounslow, near Heathrow Airport, while Norland had control of the site as the mechanical and electrical maintenance contractor. The project included connecting the data center to a second electrical substation and installing additional power distribution units.
Walton was doing cabling work as a subcontractor employed by Integrated Cable Services. He was electrocuted during live load testing of the new PDUs that had to be done before the units were connected to the facility’s existing infrastructure, HSE said in a statement.
The units had to be tested with live power supplies because of “last-minute modifications,” and the testing involved two supplies: the existing one controlled by Norland and the new one controlled by Balfour Beatty.
HSE told the court the incident was caused by “a succession of failures indicative of the complete breakdown of [Balfour Beatty’s] management of health and safety in relation to this project, particularly the breakdown of communication.”
The first of three PDUs was modified, tested, and connected successfully. Walton was electrocuted “when his forehead made contact with the 415V live terminals of the second unit.”
The court ordered Balfour Beatty to pay £280,000 in fines for violating two sections of the Health and Safety Work Act. The company admitted the two breaches, according to HSE.
Norland was found guilty of breaching one section of the act and fined £100,000.
Norland did not participate in the project but was punished because of the way it managed the project’s impact on existing infrastructure that was under its control. According to HSE, Norland should not have issued Walton a permit to reroute existing power supply through the new distribution unit while knowing that the unit could receive power supply from a source that wasn’t under its control and without making sure the other supply was isolated.
In a statement, a Norland spokesperson said the court had “recognized that the breach arose not as a result of systemic or management failings, but inadvertence on the part of one individual.” The statement did not specify which individual it was referring to.
“This is the only time that [Norland] has faced any formal enforcement action, and we conducted a full investigation and cooperated fully with the Health and Safety Executive,” it read. “Safeguarding the health and safety of all those within the buildings we manage sits at the heart of our business, and we regularly review all our processes and ensure all our staff are fully trained in the correct procedures and latest legislation.”
In addition to the Heathrow data center, Norland has worked as a contractor at Morgan Stanley’s Croydon data center, about 15 miles away. The spokesperson declined to say whether the company was still working for the client.
A Morgan Stanley spokesperson declined to comment.
We have reached out to Balfour Beatty for comment and will update this post once we hear from them.
Generally speaking, the danger of having people work near powered electrical equipment is a widespread issue in the data center industry, and some in the industry have become more outspoken about it recently.
About one third of respondents to a recent Uptime Institute survey of data center professionals said their organizations allowed maintenance activities on energized electrical equipment. About 60 percent said they were uncomfortable working in such environments, but only 30 percent said local regulations in their areas prohibited maintenance on energized gear.
Chris Crosby, CEO of Compass Datacenters, has been speaking and writing on the dangers of deadly arc flash occurring in data center electrical rooms.
The rise of modularity in electrical systems in recent years has made the issue even more acute. Building out only part of the infrastructure to reduce upfront capital expenditures means adding electrical equipment later to a live system, increasing the chances of technicians coming in contact with powered gear.
“Some of the modular builds – the ‘add it later’ type of scenarios – require energized work and frankly, from an OSHA and an NFPA perspective, that’s against the law. More important than that, it’s not just against the law. It’s really a moral issue. You’re sending employees … into an environment in which they can die,” Crosby said in a video message posted on Compass’s website. “You can’t do hot work, so whenever you hold off on adding that last UPS, that last PDU, as soon as you make that decision, you’re also making the decision that you will shut that board down.”

10:14p
Oracle OpenWorld 2015: Ellison Disses IBM, SAP as ‘Nowhere in the Cloud’
This post originally appeared at The Var Guy
Oracle CTO Larry Ellison kicked off Oracle OpenWorld 2015 in true Ellison style—with both guns blazing, pointing squarely at Oracle’s biggest competitors in the cloud space.
“Our two biggest competitors in last two decades have been IBM and SAP and we no longer pay any attention to either one,” Ellison said during his keynote event Oct. 25. “It’s quite a shock. SAP is nowhere in cloud, and only Oracle and Microsoft is in every level of the cloud—applications, platform and infrastructure.”
Rather, he said, Amazon Web Services today is Oracle’s biggest competitor. “We compete with Amazon in cloud infrastructure and never, ever see IBM—this is how much our world has changed.”
Indeed, Oracle is dead-set on being the all-powerful cloud ruler (which is interesting when you think about how Ellison once considered cloud computing a fad), and plans to announce a slew of new offerings and services to help it in its quest both during the event and over the next few months.
Already, the company announced vertical cloud applications for e-commerce and manufacturing, which Ellison noted were stepping stones as Oracle “fills out its footprint” in the cloud.
Ellison may be talking a good game in the cloud, but judging from Oracle’s latest earnings numbers—which include a cloud revenue miss for its first quarter—the company has some ground to cover. Its software and cloud revenues for the quarter declined 2 percent year over year to $6.5 billion, and cloud revenues totaled $611 million, below analysts’ expectations of $630 million.
Still, Oracle in August said it’s going “all-in” on cloud, with CEO Mark Hurd proclaiming at least 95 percent of Oracle’s products would be available in the cloud by this month.
Whether the company can pull off such an ambitious feat remains to be seen (there’s less than a week until November).
Also during the opening keynote, Hurd and Intel CEO Brian Krzanich took to the stage to debut a new program dubbed “Exa Your Power,” designed to draw IBM Power systems users to Oracle Engineered Systems powered by Intel. The program offers qualified customers:
- A free proof-of-concept migration of sample databases
- A customized report documenting the migration process and test results; and
- A comprehensive road map for modernizing their database environment to Oracle Engineered Systems optimized for Oracle Database.
“We are committed to ensuring that Oracle runs faster with Intel,” Krzanich said.
This first ran at http://thevarguy.com/information-technology-events-and-conferences/102615/oracle-openworld-2015-ellison-disses-ibm-sap-nowhere-c

10:36p
Synergy Research: Quarterly Cloud Service Revenue Exceeds $6B
This article originally ran at Talkin’ Cloud
There’s some good news for those who sell cloud services: business has never been better.
According to Q3 data from Synergy Research Group, quarterly revenues from cloud infrastructure services have exceeded the $6 billion mark, proving that the segment is one of the most lucrative in all of enterprise IT. The last four quarters combined have accounted for more than $21 billion in revenue.
The study took into account Q3 revenue for Infrastructure as a Service, Platform as a Service, and private and hybrid cloud infrastructure services.
Synergy’s latest study also showed that the annualized growth rate increased slightly for the second consecutive quarter, placing it slightly above the 50 percent mark. The annual growth rate has not exceeded 50 percent since the third quarter of 2013, according to Synergy’s research.
Unsurprisingly, the largest beneficiaries of the boom in cloud services are Amazon Web Services, Microsoft (MSFT), IBM, and Google (GOOG), which together account for more than half of worldwide cloud infrastructure service revenues.
All four companies are growing more rapidly than the industry as a whole, with both Microsoft and Google showing revenue growth rates in excess of 100 percent.
Of these four, AWS dominates the market with more than 30 percent market share, in keeping with the company’s long-standing dominance of the cloud infrastructure services market and its widespread success in the public cloud market, according to Synergy.
The Q3 study reflects a positive change from last year’s figures, which were slowed by aggressive price competition, according to John Dinsdale, a chief analyst and research director at Synergy Research Group. The recent strengthening of the American dollar has also proved positive for the cloud services market, and is helping analysts to gain a clearer overall image of the market and its continued growth.
“It might be tempting to think of cloud technologies as now being relatively mature, but the truth is that this is a market which is still in its very early stages of development,” said Dinsdale in a statement. “As the leading cloud operators continue to launch an impressive array of new services we will continue to see a huge swing away from traditional IT practices to a world that will be dominated by the cloud.”
This first ran at http://talkincloud.com/cloud-services/synergy-research-quarterly-cloud-service-revenue-exceeds-6-billion