Data Center Knowledge | News and analysis for the data center industry
Thursday, September 4th, 2014
12:00p |
Cisco Unveils UCS for Scale-Out Data Centers
Cisco added two entirely new types of servers, aimed at entirely new types of workloads, to its Unified Computing System product family Thursday, billing the announcement as its most significant since the launch of the original Cisco UCS five years ago.
The company has added modular servers for scale-out data center architectures, a mini-UCS for edge locations, more powerful rack and blade servers, and a new version of its UCS Director software, which now has extensive Big Data capabilities.
The new rack and blade servers are the fourth generation of the bread-and-butter two-socket UCS machines. The updated UCS Director software now features native support for both SAP Hana and Hadoop.
The Mini and the M-Series, however, are completely new product categories designed to bring new workloads within the realm of UCS.
Fundamental architecture change
“The M-Series is a ‘relook’ at the fundamental server architecture,” Todd Brannon, director of product marketing for Cisco UCS, said.
Designed with scale-out IT architecture in mind, it features bare-bones compute cartridges that share everything except CPU and memory. There are two compute nodes on each cartridge, and eight cartridges fit in a 2U chassis.
Echoing Intel’s Rack Scale Architecture and Facebook’s disaggregated-rack concepts, the M-Series enables users to upgrade CPU and memory only, without ripping and replacing entire servers. The cartridges share disk, IO, power and cooling resources.
This is made possible by Cisco’s UCS Manager software, which abstracts all components in the system and manages them in a scalable and programmable way. The software manages every configuration setting and enables users to reconfigure the system on the fly.
Disaggregation of individual components is made possible by a virtual interface card, which turns the system into a PCI fabric.
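The shared-resource model is easier to see in miniature. Below is a purely illustrative Python sketch of how a management layer might carve shared chassis resources into per-node allocations; the class names, pool sizes and "profile" method are invented for illustration and are not Cisco's UCS Manager API.

```python
# Illustrative toy model of the disaggregation idea described above.
# NOT Cisco's UCS Manager API; all names and numbers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ComputeNode:
    """A cartridge node owns only CPU and memory."""
    node_id: str
    cpu_cores: int
    memory_gb: int

@dataclass
class Chassis:
    """Shared 2U enclosure: disk and IO live here, not on the nodes."""
    disk_pool_tb: float
    nic_pool: int
    nodes: list = field(default_factory=list)
    assignments: dict = field(default_factory=dict)

    def apply_profile(self, node: ComputeNode, disk_tb: float, nics: int) -> None:
        """Bind shared resources to a node, roughly like a service profile."""
        if disk_tb > self.disk_pool_tb or nics > self.nic_pool:
            raise ValueError("shared pool exhausted")
        self.disk_pool_tb -= disk_tb
        self.nic_pool -= nics
        self.nodes.append(node)
        self.assignments[node.node_id] = {"disk_tb": disk_tb, "nics": nics}

chassis = Chassis(disk_pool_tb=32.0, nic_pool=16)
chassis.apply_profile(ComputeNode("cartridge-1a", cpu_cores=4, memory_gb=32), disk_tb=2.0, nics=1)
print(chassis.assignments)
```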
Cisco not after web-scale market
While things like scale-out architecture and fabrics of disaggregated server components generally come from the world of web-scale data centers operated by the likes of Google and Facebook, Cisco’s M-Series line is not aimed at the web-scale market.
“They have already moved on to designing and procuring their own systems,” Brannon said about the web-scale clients.
Instead, Cisco is going after the 50 to 100 customers whose scale is below that of the Googles and Facebooks of the world, he explained. It is a new market for Cisco servers: a world where a single application runs across many servers and scales by adding more servers.
The M-Series server architecture is very flexible. While the nodes currently come with Intel Xeon E3, they can support a variety of processor types. The architecture could theoretically support ARM chips if necessary, Brannon said.
Mini UCS for edge locations
The new UCS Mini is a converged all-in-one system built for non-data center environments. Users can install it in edge locations but use UCS Manager to manage it along with UCS deployed in their data centers, merging the two types of environments into one.
The Mini scales from one to 15 servers and can be deployed in branch offices, retail locations or on premises of service provider clients.
UCS helps maintain solid x86 server business
UCS is now a $3 billion business for Cisco and growing, Brannon said. This is while many of the company’s competitors in the x86 server market are “flat-lining.”
“We met a lot of unmet need in the market when we introduced UCS five years ago,” Brannon said. That unmet need was for a converged infrastructure solution that consisted of pre-integrated components of the typical IT stack.
12:30p |
ROOT Says It’s Canada’s First Colo to Accept Bitcoin
ROOT, a data center services startup building its first facility in the Montreal market, announced that it will be the first data center provider in Canada to accept Bitcoin as payment from customers.
The company positions itself as an innovator, touting a KyotoCooling system that will supposedly allow it to charge a lot less for its services than its competitors do. Accepting cryptocurrency as a form of payment fits well with that image.
At least two data center providers in the U.S. accept Bitcoin as payment. Both C7 Data Centers and Server Farm Realty cater to Bitcoin mining companies and allow them to pay using the virtual currency.
In addition to looking like pioneers, these providers may benefit if the Bitcoin they hold grows in value. C7 and SFR told us they planned to hold on to portions of the cryptocurrency they receive from customers, hoping its value will increase.
ROOT is also going after clients in the Bitcoin mining market. These clients require extremely high power densities but have comparatively little use for 100-percent uptime.
ROOT said its facility would be able to cool more than 30 kW per rack. The 5-megawatt data center will have capacity for 500 server racks.
Bitcoin mining companies are generally unwilling to pay the typical colocation rates providers charge for hosting in highly redundant facilities. ROOT’s business model has from the start revolved around offering much lower rates than its competitors.
Thanks to low-cost real estate and hydroelectric power in Montreal, as well as the facility’s energy-efficient design, company CEO Jason van Gaal told us earlier this year that he would be able to charge 30 percent less than his nearest competitor.
“Accepting Bitcoin is definitely a way that we can demonstrate our support to Bitcoin clients from Montreal and around the world,” van Gaal said in a statement. “We feel strongly that Bitcoin will play an important role in how we conduct local and international trade today, but also in the foreseeable future.”
2:00p |
Learn to Unlock Greater Value From DCIM With Asset Intelligence
Take a look at the modern data center and you’ll see a complex, powerful machine helping to run the world’s most important applications. But as the data center model keeps growing, controlling and managing critical data center resources presents a new challenge.
The data center is getting bigger and more complex, and so too is the asset inventory. Every new asset has an impact on the day-to-day operations of the data center, from power consumption and problem resolution to capacity planning and change management. To achieve and maintain operational excellence, organizations don’t just need to know the location of their data center assets; they need to know whether those assets are overheating, underperforming or sitting idle. So how can you create direct asset intelligence for an ever-evolving data center? What can administrators do to truly optimize their operations?
By trading in manual audits and fragmented data for real-time sensors and integrated information, organizations can not only improve how they manage infrastructure assets but also improve operations of the entire data center.
In this whitepaper from CA you’ll learn how combining greater asset intelligence with Data Center Infrastructure Management (DCIM) software enables IT and facilities departments to track assets through their lifecycle, along with their operational performance and environmental conditions.
Before you start down the road of DCIM and asset intelligence, there are some key management questions to consider (a small illustrative alerting sketch follows the list):
- How are assets and their associated connectivity discovered?
- Is it possible to categorize assets into certain groups to aid reporting?
- Will disparate asset data be converted into standard formats?
- Are asset moves and changes automatically captured?
- Can supplementary asset data, for example warranty status or configuration, be integrated from other management systems?
- Is it possible to set asset performance thresholds that trigger automated alerts when breached?
- Can energy consumption be tracked back to an individual device?
- What thermal conditions can be monitored in an asset’s surroundings?
- Can the location of each asset be visualized in a 3-D representation?
- Is it possible to model ‘what if’ scenarios involving changes to assets or data center environmentals?
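As a rough illustration of the threshold and energy questions above, here is a minimal Python sketch of threshold-based alerting over asset readings. The asset names, metrics and limits are hypothetical and not tied to CA's DCIM product.

```python
# Minimal, hypothetical sketch of threshold-based asset alerting.
# Asset names, metrics and limits are invented for illustration only.
readings = [
    {"asset": "rack-07-pdu",  "metric": "power_kw",     "value": 6.8},
    {"asset": "rack-07-srv3", "metric": "inlet_temp_c", "value": 31.5},
    {"asset": "rack-12-srv1", "metric": "inlet_temp_c", "value": 22.0},
]

thresholds = {"power_kw": 6.0, "inlet_temp_c": 27.0}  # example limits

def check_thresholds(readings, thresholds):
    """Return one alert per reading that breaches its metric's limit."""
    alerts = []
    for r in readings:
        limit = thresholds.get(r["metric"])
        if limit is not None and r["value"] > limit:
            alerts.append(f"ALERT {r['asset']}: {r['metric']}={r['value']} exceeds {limit}")
    return alerts

for alert in check_thresholds(readings, thresholds):
    print(alert)
```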
With that in mind, you can begin to create your recipe for asset management success. Remember, the data center and its assets can never be separated. However, with greater intelligence on both, organizations will be able to achieve greater operational excellence and greater business value. This means enabling tools that interact with DCIM, as well as asset control mechanisms, to help with:
- Capacity
- Availability
- Efficiency
- Sustainability
- Direct cost reduction
As the asset footprint in the data center continues to expand, today’s operational challenges will only intensify, resulting in yet more cost and complexity for both IT and facilities departments.
Download this whitepaper today to see how, by combining greater asset intelligence with DCIM, organizations can ensure their data center is not only fit for purpose but also fit for the future:
- They can tap into idle capacity
- They can maintain higher availability
- They can achieve greater efficiency
Ultimately, your organization will be ready when business demands come calling.
2:30p |
Data Center Commissioning: What’s the Best Approach?
Would you drive your new Ferrari 458 off the auto dealer’s parking lot without taking it for a test drive? No, probably not. The same applies to data centers. While these facilities, which represent huge investments, are engineered and built to exacting standards, they still need a “test drive” to verify that the individual components work together and that they are fully ready to “hit the road,” so to speak.
Data center commissioning is important to ensure a mission critical facility can support its workload as anticipated. Chris Crosby, founder and CEO of Compass Datacenters, and former senior executive and founding member of Digital Realty Trust, will present on “Understanding Data Center Commissioning” at the Orlando Data Center World in October. Compass also published a detailed blog post on data center commissioning recently.
With the significance of commissioning in mind, Data Center Knowledge asked him a few questions.
What are best practices in commissioning?
“Use a third party with demonstrated experience in commissioning,” Crosby said. “This commissioning ‘agent’ should be involved during the design process of the facility to understand its operational purpose and requirements. This information will enable them to produce the most effective commissioning scripts (tests) possible.”
Further, the commissioning process should include all five phases, according to Crosby. “Many data centers are only commissioned through the fourth phase, which only documents that each individual component functions as required. Only by performing phase 5, otherwise known as Integrated Systems Testing (IST), where the entire facility is tested under full load and in failure scenarios, is the interoperability of all components and systems verified.”
To be fully commissioned, a facility must be tested in all modes:
- Failure
- Safety
- Emergency
- Real-life, unplanned scenarios
Data centers that perform Level Five commissioning have verified reliability of design and compatibility among all critical systems (a simple test-matrix sketch follows this list), such as:
- Electrical
- Mechanical
- Environmental
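As a planning aid only, the short sketch below expands the modes and systems listed above into an IST-style test matrix. It simply enumerates combinations; it is not a commissioning tool.

```python
# Enumerate the modes and critical systems above into a Level 5 / IST-style
# test matrix. Purely illustrative; results would be recorded during testing.
from itertools import product

modes = ["failure", "safety", "emergency", "unplanned real-life scenario"]
systems = ["electrical", "mechanical", "environmental"]

test_matrix = [
    {"mode": mode, "system": system, "result": None}
    for mode, system in product(modes, systems)
]

print(f"{len(test_matrix)} integrated test cases to run under full load")
for case in test_matrix[:3]:
    print(case)
```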
Why is Integrated Systems Testing a preferred method?
“Only by completing IST testing can the operator be assured that all systems operate as required under full load and in failure scenarios,” he explained. “This level of assurance is essential if the site is to support mission critical operations.
“Many providers are unable to perform this level of testing due to their use of shared backplane architectures. In these structures all data halls share the MEP, so it is impossible to test an individual unit’s operation in a power failure mode, for example, since all of the attached data halls would be taken down as well. This limitation makes it important for prospective operators to probe deeper when a provider tells them that they perform commissioning to ensure that this includes phase 5/IST testing,” Crosby said.
Find out more on data center commissioning
Want to learn more? Attend the session on Understanding Data Center Commissioning or dive into any of the other 20 topical sessions on industry trends curated by Data Center Knowledge at the event. Also visit our previous post on Software-Defined Data Centers: What Lies Ahead?
Check out the conference details and register at the Orlando Data Center World conference page.
2:37p |
The Four Pillars of DCIM Integration
By Dhesi Ananchaperumal, SVP Software Engineering and DCIM Evangelist at CA Technologies.
DCIM systems are central to so many processes in the data center and beyond that integration is essential. Yet it can also be incredibly daunting. With some organizations using as many as 40 systems to manage the data center ecosystem, the potential for rationalization and retirement is considerable.
This level of integration should not be tackled in a single pass. Instead, organizations should start with the systems that will add the most value to their DCIM implementation and their business.
A requirements workshop at the beginning of the DCIM journey will help identify which integrations matter most. It will also identify where an integrated DCIM system can be used to replace aging, expensive or disparate tools.
DCIM integration opportunities can be split into four key areas:
- Data: This is paramount, and is usually the first area to tackle. For DCIM to deliver better visibility of data center operations and resources, organizations need to be able to integrate data from different platforms and in different formats.
- Service management applications: From building management systems and change management databases to service desk ticketing platforms, integrating DCIM with service management systems can simplify and unify common operational processes. For example, if a rack PDU fails, an integrated DCIM system can not only raise a service desk ticket but also correlate other alerts to the same issue (see the sketch after this list).
- Strategic planning: DCIM can be an enabler of business growth. I recently spoke to a large retailer that needed to better manage its power, space and cooling capacity to deliver on its corporate strategy to grow by nearly 30 percent. To be effective, DCIM needs to be integrated with enterprise capacity planning and management processes, which will help provide greater visibility of costs and performance.
- Process automation: With this kind of integration in place, organizations can tap into the next tier of operational efficiencies. For example, workloads can be automatically moved between data centers to take advantage of idle capacity and cheaper energy rates as well as in response to disaster situations. Not many data center managers are prioritizing automation yet, but as adoption of public and private clouds increases, I see this becoming an important opportunity for DCIM.
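To make the service-management example above concrete, here is a hypothetical sketch in which a rack PDU failure opens a ticket and related alerts are correlated to it. The event fields, ticket format and correlation rule are invented; a real integration would go through the DCIM and service desk APIs.

```python
# Hypothetical sketch of the service-management integration described above:
# a rack PDU failure raises a ticket and related alerts attach to it.
from collections import defaultdict

events = [
    {"id": 1, "source": "rack-42-pdu-a", "type": "pdu_failure",     "rack": "rack-42"},
    {"id": 2, "source": "rack-42-srv05", "type": "power_lost",      "rack": "rack-42"},
    {"id": 3, "source": "rack-42-srv11", "type": "power_lost",      "rack": "rack-42"},
    {"id": 4, "source": "rack-17-srv02", "type": "high_inlet_temp", "rack": "rack-17"},
]

def open_ticket(event):
    """Stand-in for a service desk API call."""
    return {"ticket_id": f"INC-{event['id']:04d}", "primary_event": event["id"], "related": []}

def correlate(events):
    """Group events by rack; the first event per rack opens the ticket, the rest attach."""
    by_rack = defaultdict(list)
    for e in events:
        by_rack[e["rack"]].append(e)
    tickets = []
    for evts in by_rack.values():
        ticket = open_ticket(evts[0])
        ticket["related"] = [e["id"] for e in evts[1:]]
        tickets.append(ticket)
    return tickets

for t in correlate(events):
    print(t)
```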
To realize the full potential of DCIM integration, organizations need to ensure they deploy a DCIM system that has been designed to do just that: integrate. With the right DCIM solution in place, organizations can establish an integration roadmap for the short and long term.
After all, DCIM integration is not a revolution; it’s an evolution. And the results will keep getting better over time.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:58p |
Cobalt Gets License to Host Online Gambling Apps in Las Vegas Data Center
Nevada’s Cobalt Data Centers is now authorized to host odds, bets and other regulated gambling applications in its Cheyenne data center in northwest Las Vegas.
Gambling applications are a big potential boost for Nevada data centers. It is a massive sector that has predominantly been relegated to offshore hosting in the past. A trusted in-state data center goes a long way toward legitimizing what has been a fringe, albeit huge, industry. Nevada broke ground for online gaming nationally, passing new laws allowing online poker in February 2013.
Cobalt is now a Registered Hosting Center, pursuant to the regulations of the Nevada Gaming Commission and State Gaming Control Board (NGCB), meaning gaming licensees can host within Cobalt. Cobalt has proven its best practices for securing, operating and scaling gaming infrastructure.
Online gambling is projected to be a $7.4 billion business in the U.S. by 2017, according to researchers at H2 Gaming Capital, who say Nevada will represent about $400 million of that total.
“Some of the biggest casinos on the Strip already trust Cobalt to host their non-gaming applications,” said Jeff Brown, CEO, Cobalt Data Centers. “With our NGCB registration, Cobalt becomes the one-stop, full-service hosting partner for licensees and unlocks all sorts of strategic options for an industry so vital to our state. This is an exciting and progressive leap forward for gaming in Nevada.”
Gaming IT service provider NetEffect colocates within the facility. “NetEffect understands the high bar necessary to be NGCB accredited,” said Jeff Grace, CEO, NetEffect. “We were the first to receive an IT Service Provider license from the Commission in 2012. We’re very happy to have a partner in Cobalt who can host the complex applications that we manage for our gaming clients.”
The services are restricted to players at least 21 years old and physically located in Nevada. Station Gaming (Ultimate Poker) and Caesar’s (World Series of Poker) were the first companies to launch legal online poker operations in Nevada.
Cobalt’s data center was commissioned in 2013. The company provided a look into the 34,000-square-foot 5.5 megawatt facility following the ribbon cutting.
The Nevada Online Gaming Commission also conducted rigorous inspections of both Switch and ViaWest facilities and determined they too met requirements necessary for mission critical gaming data.
Cobalt is sponsoring the Gaming & Leisure Roundtable at the Red Rock Resort to help get the word out.
5:16p |
Interxion Adding Stockholm Data Center, Expanding in Vienna
European colocation provider Interxion is building its fourth data center in Stockholm and expanding further in Vienna. The combined expansion will add around 29,000 square feet and 4.5 megawatts to its portfolio.
Stockholm is shaping up to be the tech hub of the Nordic region. Interxion expanded in Sweden in 2013, so a new expansion suggests the location is doing very well. The company makes use of seawater to cool its data centers in Sweden.
Vienna is also a growing market for Interxion, the company touting customer wins like the Vienna Stock Exchange. The company will employ its phased approach to design in Vienna, a practice it has pioneered since its founding in 1999.
Interxion has been continuously expanding its footprint. It recently acquired a well-connected interconnection hub in Marseille and expanded in Amsterdam and Frankfurt on the back of strong leasing activity.
The Stockholm data center is being constructed on the company’s campus at Kista. STO4 is expected to provide about 11,840 square feet of equipped space with about 1.5 megawatts of customer power. It is scheduled to be operational in the second quarter of 2015.
Nearly 50 carriers and ISPs are currently available on the campus, which also has direct access to the Netnod Internet Exchange. Capital expenditure associated with the construction of STO4 is expected to be about €15 million.
“Stockholm is the economic heart of the Nordic region and a strategic location for reaching northern Europe and the rapidly growing Internet economies of Russia and the Baltics,” Interxion CEO David Ruberg said. “One of Interxion’s fastest-growing locations, it has strength in the digital media and cloud communities and also in systems integrators.”
In Vienna, Interxion is building out two phases in its second facility totaling 3 megawatts of customer power. A 7,534-square-foot phase will be followed by a 10,000-square-foot phase.
The two phases will be completed in 2015. Capital expenditure associated with the two expansion phases of VIE2 is expected to be approximately €17 million.
“In addition to serving domestic demand in Austria, Vienna is a gateway hub to eastern and southern Europe. As Austria’s leading connectivity-rich player, Interxion is experiencing strong demand from cloud service providers that are seeking to expand their capabilities,” Ruberg said. “We are expanding VIE2 to meet this customer demand.”
Interxion operates 37 data centers in 11 countries in Europe and provides access to more than 500 carriers and 50 European Internet exchanges across its footprint.
5:38p |
Teradata Acquires Big Data Consultancy Think Big
Just six weeks after Teradata built up its Hadoop prowess with the Revelytix and Hadapt acquisitions, the company continues to build out its Big Data ecosystem by acquiring a consulting practice called Think Big.
Teradata has its own consulting group, but has watched the four-year-old Silicon Valley firm for some time. The deal will augment its U.S. market strategy and implementation services.
The move marks a continued effort by Teradata to go beyond warehouse and legacy offerings as it looks to a more holistic approach to the Big Data technologies and services it can provide. Think Big is a pure-play company that brings experts, an expanded practice for training and recruiting Big Data talent, and a portfolio of industry-specific pre-built solutions.
Think Big was one of the first professional services firms that focused exclusively on Big Data opportunities, helping its customers embrace vendor-neutral open source solutions and providing full lifecycle support for Big Data projects. The company is known for its expertise in open source technologies such as Hadoop, NoSQL databases including HBase, Cassandra and MongoDB and Storm for real-time event processing.
Think Big says it has provided strategy and architecture advice to companies such as EMC, Facebook, Intel, Johnson & Johnson and NetApp.
Think Big CEO Ron Bodkin said the two companies “have a huge opportunity ahead to help customers embrace the new wave of Big Data with Hadoop, NoSQL and emerging open source platforms and technologies while leveraging existing investments and skills in traditional data processing systems and Teradata’s deep experience in best-in-class analytics in all industries.”
Citing a recent Wikibon survey, Teradata Labs President Scott Gnau noted that 70 percent of companies employ outside professional services consultants to help architect, deploy or run their Big Data projects. Teradata said Think Big’s on-shore deployment model complements its existing Big Data consulting business, and that the Think Big Academy will provide hands-on expert training courses for customers.
8:00p |
Enterprise Cassandra Player DataStax Raises $106M
DataStax, a startup that does enterprise implementations of the open source database Apache Cassandra, has raised a $106 million Series E round led by Kleiner Perkins Caufield & Byers (KPCB). The round gives the company a valuation north of $830 million and brings total funding to over $190 million.
The company will continue to drive enterprise adoption and innovation in Apache Cassandra and will further invest in the Cassandra developer community, which now spans over 80 countries.
“Investors have not only bought into the vision, but the execution,” said Matt Pfeil, DataStax co-founder and chief customer officer. “The equity will go towards further growth. Last January we expanded in Europe with an office in London and it has been very healthy growth.”
Other investors that participated in the round include ClearBridge, Cross Creek and Wasatch, which collectively manage more than $100 billion in mutual funds. PremjiInvest and Comcast Ventures came on board as additional new investors. All existing investors, including Lightspeed Venture Partners and Scale Venture Partners, showed strong participation in the new financing.
DataStax counts 25 percent of the Fortune 100 as clients. Its enviable customer list includes Intuit, Intercontinental Hotels Group, Clear Capital, Netflix and eBay. The company has landed many customers that have migrated from traditional Oracle relational database management systems (RDBMS) to DataStax and the Apache Cassandra NoSQL database platform.
The company now has customers in 50 countries, and its employee count has doubled since 2013 to more than 350. Its revenue has grown more than 125 percent year over year.
“We’re solving the problems that the enterprises have better than anyone else,” said Jonathan Ellis, the company’s co-founder and CTO. “DataStax is for building applications that deal with a country’s worth of users rather than a single company. Internet-first and mobile-first workloads and datasets mean you have to build an ad-hoc NoSQL database.”
NoSQL market bifurcation on horizon
Venture capital continues to find its way to the database sector.
“There’s a lot of fast followers in the market,” Ellis said. “There’s money to be made, and VC will continue to fund more competition. I have two predictions: first, the weaker players will get shaken out over the next year. The other prediction is that you’re going to see a bifurcation in the NoSQL market. There will be a market for what I call ‘hackers,’ those interested in playing with the coolest new technology and prototyping, but they’re usually on small startup teams. Their problem is not dealing with scale. The other part of the market is ‘my data doesn’t fit on single Oracle big iron, I need to distribute it’. Sharding Oracle gives horrible replication, and Cassandra solves that problem for me.”
“Cassandra is not all things to all people,” he continued. “I won’t target the hacker market if it compromises the enterprise market. In the hacker market, there will be hot new things every couple of years. When you talk about hundreds or thousands of servers worth of data, Cassandra is the answer.”
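For readers unfamiliar with the replication point Ellis makes, the sketch below shows how replication is declared in Cassandra using the open source DataStax Python driver (cassandra-driver). The contact points, keyspace and table are placeholders.

```python
# Minimal sketch of Cassandra's declarative replication, the capability Ellis
# contrasts with sharded Oracle above. Requires the open source DataStax
# Python driver (pip install cassandra-driver); addresses and names are
# placeholders.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # any reachable nodes
session = cluster.connect()

# Replication is declared per keyspace; Cassandra keeps three replicas of
# every row in each data center with no application-level sharding logic.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS app
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3}
""")

session.execute("""
    CREATE TABLE IF NOT EXISTS app.users (
        user_id uuid PRIMARY KEY,
        email   text
    )
""")

cluster.shutdown()
```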
Ellis sees this as the third generation of databases. “In the 70s it was relational mainframes; the 90s saw relational on server; today it’s post-relational. You had efforts to move to non-relational before, but [non-relational databases] didn’t offer enough benefit over relational and reason to switch to something new. Now that has changed.”
KPCB Partner Matt Murphy said, “DataStax is the leader in a massive shift that is underway from relational databases to more agile NoSQL data stores for new workloads. The scalability, manageability and cost effectiveness of DataStax is not just timely, but critical at a time when companies are rapidly building out new applications and customer experiences as a competitive advantage.”
Evolution of DataStax
DataStax added Apache Spark integration a few months ago. “It’s our first toe in the water with Spark,” said Ellis. “You’ll see more in terms of making that integration deeper as we partnered with Databricks.”
“First DataStax was built on Hadoop; Hadoop is good at what it does but it didn’t do performance,” he said. “Spark delivers better performance but both are good complements.”
DataStax also recently added a powerful in-memory option with version 4.0.
The company has developed strong partnerships with over 115 companies, including Google, Accenture, Microsoft and HP. The Cassandra player is also partnering with a company called Instacluster on a hosted DataStax Enterprise offering. Instacluster hosts on Amazon Web Services and Google Cloud.
9:00p |
NSA Exploring Use of Mineral Oil to Cool its Servers
Security secrets may soon be stored on swimming servers. The National Security Agency has been testing the use of immersion cooling for its massive data centers, dunking servers in tanks of a coolant fluid similar to mineral oil. The agency says the technology has the potential to slash cooling costs.
“The National Security Agency’s Laboratory for Physical Sciences (LPS) acquired and installed an oil-immersion cooling system in 2012 and has evaluated its pros and cons,” the agency said in a technology publication. “Cooling computer equipment by using oil immersion can substantially reduce cooling costs; in fact, this method has the potential to cut in half the construction costs of future data centers.”
That’s of interest to the NSA, which spent more than $1.5 billion to build a massive data center in Bluffdale, Utah, that spans more than 1 million square feet of facilities. That’s why it has been testing immersion cooling technology from Austin-based Green Revolution Cooling. The initiative reflects the NSA’s ongoing interest in adopting cutting-edge technology in its computing infrastructure.
Liquid cooling is used primarily in high-performance computing (HPC) requiring high-density deployments that are difficult to manage with air cooling. Interest in liquid cooling has been on the rise as more applications and services require high-density configurations, prompting data centers to consider infrastructure previously limited to HPC and supercomputing facilities.
Data centers at scale
The NSA says the massive Utah facility, along with a similar one under construction near Baltimore, will be used to protect national security networks and provide U.S. authorities with intelligence and warnings about cyber threats. But the agency data centers have become a flash point for controversy in the wake of public disclosures about the NSA’s covert data collection efforts.
The project will have a power capacity of 65 megawatts, making power a big component of its operations. The 1 million square-foot Camp Williams facility houses 100,000 square feet of data center space, while the remaining 900,000 square feet is used for technical support and administrative space.
An aerial view of the NSA data center in Utah. (Photo: Electronic Frontier Foundation and Wikimedia Commons)
Green Revolution says its liquid-filled enclosures can cool high-density server installations for a fraction of the cost of air cooling in traditional data centers. The company’s approach allows users to operate servers without a raised floor, computer room air conditioning (CRAC) units or chillers. Green Revolution’s CarnotJet cooling racks are filled with 250 gallons of dielectric fluid, with servers inserted vertically into slots in the enclosure. Fluid temperature is maintained by a pump with a heat exchanger using a standard water loop.
Intel recently concluded a year-long test with immersion cooling equipment from Green Revolution Cooling, and affirmed that the technology is highly efficient and safe for servers. Current GRC projects include Tsubame 2.0, the world’s most energy efficient supercomputer, and energy exploration specialist CGG, which operates an entire data hall of submerged servers.
Advantage of mineral oil
Mineral oil has been used in immersion cooling because it is not hazardous and transfers heat almost as well as water but doesn’t conduct an electric charge.
“While mineral oil does not have the heat capacity of water, it still holds over 1,000 times more heat than air,” wrote David Prucnal from the DoD’s Advanced Computing Systems, in The Next Wave, a quarterly research publication from the NSA.
The primary advantage of liquid cooling is that it supports much higher power densities than air cooling. The NSA said immersion cooling systems can support loads of 30 kW per rack with no special engineering or operating considerations, compared to an upper range of 10 kW to 15 kW per rack with air cooling.
Four of the many tanks of servers submerged in liquid coolant at a CGG data center in Houston, Texas. (Photo: Rich Miller)
Immersion cooling also allows the removal of fans, which are standard on most commercial servers to maintain proper airflow through the chassis and consume about 10 percent of a server’s total energy use.
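A quick back-of-envelope calculation, using only the per-rack densities and the fan figure cited above, shows how those numbers compound; the 1 MW hall below is a hypothetical example.

```python
# Back-of-envelope arithmetic using only the figures quoted above;
# the 1 MW hall is hypothetical.
it_load_kw = 1000            # hypothetical data hall IT load
air_density_kw = 12.5        # midpoint of the 10-15 kW per rack air-cooled range
immersion_density_kw = 30.0  # per-rack load cited for immersion cooling
fan_share = 0.10             # fans said to consume ~10 percent of server energy

air_racks = it_load_kw / air_density_kw
immersion_racks = it_load_kw / immersion_density_kw
fan_savings_kw = it_load_kw * fan_share  # energy freed by removing server fans

print(f"Air-cooled racks needed:  {air_racks:.0f}")
print(f"Immersion racks needed:   {immersion_racks:.0f}")
print(f"Fan energy avoided:       ~{fan_savings_kw:.0f} kW")
```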
Prucnal found that immersion technology also has the potential to lead to fewer equipment failures by maintaining an even temperature across server components, and reducing exposure to dust and dirt from air blowing through the equipment.
“The final side benefit of immersion cooling is silence,” Prucnal wrote. “Immersion cooling systems make virtually no noise. This is not an insignificant benefit, as many modern air-cooled data centers operate near or above the Occupational Safety and Health Administration’s allowable limits for hearing protection.”
Heavy racks
One potential challenge is the weight of immersion racks, Prucnal writes, noting that a Green Revolution rack loaded with servers and cooling fluid can weigh 3,300 pounds, or about 1.6 tons.
But the opportunity for savings in future facilities is significant, Prucnal noted as he applied the economics to the scale of the agency’s recent builds.
“For large data centers, where the technical load is in the neighborhood of 60 MW, construction costs can approach one billion dollars,” he notes. “This means that about 500 million dollars is being spent on cooling infrastructure per data center. Since immersion-cooled systems do not require chillers, CRAC units, raised flooring, and temperature and humidity controls, etc., they offer a substantial reduction in capital expenditures over air-cooled systems. Immersion cooling can enable more computation using less energy and infrastructure, and in these times of fiscal uncertainty, the path to success is all about finding ways to do more with less.”