Data Center Knowledge | News and analysis for the data center industry
Tuesday, December 9th, 2014
1:00p | Rise of Direct Liquid Cooling in Data Centers Likely Inevitable

NEW ORLEANS – It’s been a decade since cooling vendors began predicting that power densities would force servers to be cooled by liquid rather than cool air. Instead, the industry has seen major advances in the efficiency of air cooling, while liquid cooling has been largely confined to specialized computing niches.
Experts in high performance computing say that will begin to change over the next three to five years due to increased data-crunching requirements of scientific research, cloud computing, and big data. A key driver is the HPC community’s bid to super-charge the processing power of supercomputers, creating exascale machines that can tackle massive datasets. The exascale effort is driven to a large extent by the U.S. government, which spends tens of millions annually on grants for research in exascale computing.
“In the HPC world, everything will move to liquid cooling,” said Paul Arts, technical director at Eurotech. “In our vision, this is the only way to get to exascale. We think this is the start of a new generation of HPC, with enormous power. We are just at the beginning of the revolution.”
A recent report by 451 Group said liquid cooling was making somewhat of a comeback outside of the world of scientific computing too. More and more non-scientific workloads are approaching the level of compute power previously reserved for research.
Higher Density vs. Hydrophobia
The exascale initiative is one of several trends boosting adoption of liquid cooling. HPC researchers say liquid cooling offers clear benefits in managing compute density and may also extend the life of components. Although it can offer savings over the life of a project, liquid cooling often requires higher up-front costs, making it a tougher sell during procurement.
In addition, many IT and facility managers still experience “data center hydrophobia,” the concern that introducing water into the IT environment boosts the potential for equipment damage.
Vendors of liquid cooling technology say they are seeing business gains, including traction from the rapid growth of the bitcoin sector. But HPC represents the front lines for liquid cooling, and will be the harbinger of any large-scale shift.
At a packed session on liquid cooling at the recent SC14 conference in New Orleans, panelists outlined the benefits of liquid cooling and made a case for educating end users to overcome wariness about the technology.
“As leaders in HPC, we have to drive liquid cooling,” said Nicolas Dube, distinguished technologist at HP who focuses on data center design. “By 2020, the density and heat we will be seeing will require liquid cooling. We’ve got to push it.”
Light Adoption, But a Boost From Hyperscale
Despite widespread concern about rising heat loads in the data center, liquid cooling systems tend to be widely discussed and lightly implemented. A recent survey by the Uptime Institute found that just 11 percent of data center operators were using liquid cooling.
Frequent predictions of a huge spike in data center power densities have also proven premature, at least in the enterprise. Although many customers clamor for space that can support high power densities, Uptime said the median density in enterprise and colocation centers remains below 5kW per rack, a level that can be easily managed with air cooling.
At SC14, CoolIT Systems illustrated a design for a full rack system cooled by warm water pumped from an adjacent cooling distribution unit (CDU). The piping is visible overhead, running between the rack and the CDU. (Photo: Rich Miller)
But the emergence of massive hyperscale computing facilities, along with the business world’s embrace of cloud computing and big data, is beginning to change the playing field. The 451 Research paper noted that hyperscale operators like Facebook and Google have “legitimized a less conservative approach to the design and operation of facilities and paved the way for use of technologies such as (direct liquid cooling) by other operators.”
Methods of Liquid Cooling
Liquid cooling comes in a variety of flavors. “Anytime we talk about liquid cooling, we need to talk about what we mean,” said Michael Patterson, a senior power and thermal architect at Intel. “There are very different approaches.”
Cold water is often used to chill air in room-level and row-level systems, and these systems are widely adopted in data centers. The real shift is in designs that bring liquids into the server chassis to cool chips and components. This can be done through enclosed systems featuring pipes and plates, or by immersing servers in fluids. Some vendors integrate water cooling into the rear-door of a rack or cabinet.
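The case for bringing liquid to the chip comes down to heat capacity. As a rough back-of-the-envelope sketch (textbook fluid properties and a hypothetical 30 kW rack, not figures from any vendor mentioned here), the same heat load that would need thousands of cubic feet of air per minute can be carried off by a few tens of liters of water per minute:

```python
# Illustrative comparison only: heat removed Q = density * cp * flow * delta_T,
# so the flow needed for a given load is Q / (density * cp * delta_T).

def flow_needed_m3s(q_watts, density_kg_m3, cp_j_kgk, delta_t_k):
    """Volumetric flow (m^3/s) required to carry q_watts at a given temperature rise."""
    return q_watts / (density_kg_m3 * cp_j_kgk * delta_t_k)

rack_watts = 30_000   # hypothetical 30 kW rack
delta_t = 12          # assumed 12 K coolant temperature rise for both fluids

air_flow = flow_needed_m3s(rack_watts, 1.2, 1005, delta_t)     # air at ~25 C
water_flow = flow_needed_m3s(rack_watts, 998, 4186, delta_t)   # water at ~25 C

print(f"Air:   {air_flow:.2f} m^3/s (~{air_flow * 2119:,.0f} CFM)")
print(f"Water: {water_flow * 60_000:.1f} liters/minute")
```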
CoolIT Systems, which makes direct-contact liquid cooling systems, is seeing strong growth this year, according to CEO and CTO Geoff Lyon. The company has seen revenue growth of 382 percent over the past five years, placing it on Deloitte’s Fast 50 list of fast-growing Canadian technology companies. A growth area this year has been the bitcoin sector, where CoolIT is working with several hardware vendors on direct-contact systems for custom ASICs used to mine cryptocurrencies.
Asetek has seen year-to-year revenue improvement of 53 percent in the data center sector, primarily boosted by customers in the defense sector. The company, which partners with Cray and other OEMs, is beefing up its sales staff to focus on opportunities in the HPC sector.
Opportunities in Immersion
On the immersion front, Green Revolution Cooling supports the world’s most efficient supercomputer, the Tsubame-KFC system, which houses immersion tanks in a data center container. The company has several installations spanning multiple megawatts of power loads, including a seismic imaging application at CGG in Houston.
3M, which uses a slightly different approach known as open bath immersion, has seen traction in the bitcoin sector. The company’s Novec fluid was used in a high-density bitcoin mine within a Hong Kong high-rise building that can support 100kW racks.
Immersion solutions usually come into play when an end user is building a new greenfield data center, and are seen less frequently in expansions or redesigns of existing facilities. Direct-contact solutions are more likely candidates for existing facilities, but bringing water to the rack requires piping (either below the raised floor or overhead) that is not standard in most data centers.
 LiquidCool Solutions shows off servers immersed in a tank of liquid coolant at the SC14 conference in New Orleans. (Photo: Rich Miller)
In the HPC sector, liquid cooling presents opportunities in two areas, according to Ingmar Meijer, a senior researcher at IBM Research.
“If you’re someplace where energy is expensive, it’s about cost,” said Meijer, the lead architect of SuperMUC, an IBM petascale supercomputer at the Leibniz Supercomputing Centre near Munich. “If you’re in Oak Ridge (Tennessee), where it’s 3 or 4 cents per kilowatt-hour, the benefit is clearly about density. Liquid cooling makes the best sense where energy costs are high.”
Up-Front Cost vs. Long-Term TCO
Meijer says liquid-based solutions like the warm water cooling approach used in SuperMUC offer significant savings over the life of a project. But the up-front installation cost can be higher than those for air-cooled systems, which can be a barrier to adoption for some users.
“If we don’t get the cost down, the acceptance of liquid cooling will suffer,” said Meijer. “The customer is not always good at calculating their TCO.”
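Meijer’s energy-price point can be made concrete with a simple payback sketch. The numbers below (the capital premium, the IT load, and the assumed PUE of each cooling approach) are illustrative placeholders, not SuperMUC or vendor figures:

```python
# Hypothetical payback calculation: higher up-front cost for liquid cooling,
# recovered through a lower cooling overhead (expressed here as PUE).

HOURS_PER_YEAR = 8760

def payback_years(extra_capex, it_load_kw, pue_air, pue_liquid, price_per_kwh):
    """Years until the installation premium is recovered by the energy savings."""
    annual_kwh_saved = it_load_kw * (pue_air - pue_liquid) * HOURS_PER_YEAR
    return extra_capex / (annual_kwh_saved * price_per_kwh)

# 1 MW of IT load, assumed PUE of 1.5 (air) vs. 1.15 (warm-water liquid),
# and an assumed $500,000 premium for the liquid-cooled installation.
for price in (0.04, 0.10, 0.15):   # $/kWh: Oak Ridge-cheap, average, expensive
    years = payback_years(500_000, 1000, 1.5, 1.15, price)
    print(f"${price:.2f}/kWh -> payback in {years:.1f} years")
```

In this sketch the premium takes roughly four years to recover at 4 cents per kilowatt-hour, but only about a year at 15 cents, which is the gap Meijer is describing.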
HP’s Dube agreed that cost issues can complicate procurement, but said HPC professionals must press the issue and evangelize the benefits of liquid cooling.
“You have to be ready to climb that hill and carry the flag,” said Dube. “There’s no liquid cooling system that will ever be as cheap as a Chinese-made air-cooled platform.”
Reliability as a Selling Point?
While some data center professionals are wary of introducing liquids into racks and servers, HPC veterans say liquid cooling can offer advantages in reliability.
“In almost every instance of liquid cooling, the processor will run at a much lower temperature and be more reliable,” said Intel’s Patterson. “With liquid cooling, there will always be less temperature fluctuation than with air cooling, which will help reliability.”
A number of solutions offer direct-contact cooling for server memory as well as processors. Panelists at SC14 were split on whether liquid cooling improved the reliability of memory, with some seeing improvement and others citing failure rates equivalent to air cooling.
The Future: The Absolute Zero Data Center?
The new frontier for liquid cooling may be network equipment, according to panelists. Extending liquid cooling to network gear – whether using direct-contact cold plates or immersion – could free data center designers from the need to cool the data hall to support network equipment. Networking advances such as silicon photonics could boost interest in new cooling technologies.
“For switches, there will be a transition in the next 5 years,” said Dube. “As we go to photonics, that will need liquid cooling.”
Some approaches to exascale computing could drive larger paradigm shifts in facility design. An example is quantum computing, an approach being explored by Google and several government agencies. Quantum computing pioneer D-Wave had a booth at SC14, and some see its technology as the best way to reach exascale within the target power envelope of 20 megawatts. The challenge: D-Wave’s systems need to operate at a temperature near absolute zero, or about 273 degrees below zero Celsius. The systems operate inside refrigeration units using liquid helium as a coolant.
D-Wave Systems had a booth at SC14 to discuss its approach to quantum computing, in which processing occurs at temperatures nearing absolute zero. (Photo: Rich Miller)
4:00p | Oracle’s Latest ZFS Storage Tightly Integrated With Oracle 12c Database

Oracle announced its next generation NAS storage system with tight integration to its 12c database in a direct attempt to lure EMC and NetApp customers away. Highlighting that its offering delivers a storage system with analytics for pluggable databases, its new ZFS Storage ZS4-4 is co-engineered with Oracle Database 12c.
With more than double the performance of the previous generation, Oracle says the new ZS4-4 delivers greater than 30 GB/sec throughput and 50 percent more dynamic random access memory (DRAM) and CPU cores. Engineered for extreme performance, the new storage appliance is built on an in-memory, DRAM-based Hybrid Storage Pool architecture and a multi-threaded symmetric multiprocessing (SMP) OS that takes advantage of all 120 cores in parallel and 3 TB of DRAM per cluster.
Oracle adds that the co-engineered database and storage integration is enhanced with a new Oracle Intelligent Storage Protocol 1.1 to help identify database-related storage issues in 67 percent fewer steps.
Besides the speed and performance enhancements designed to power analytics for pluggable databases, the tight integration with Oracle hardware and software gives it additional advantages. While EMC and NetApp storage see all pluggable databases as one instance, the ZS4-4 gives visibility into individual pluggable databases and enhanced troubleshooting ability for Oracle Database 12c and Oracle Multitenant environments.
4:30p | What’s Driving Greater Adoption of IT Operations Analytics?

Sasha Gilenson is the CEO at Evolven and an innovator in IT Operations Analytics.
With modern business becoming more complex and facing constant change, unpredictable events, and dynamic demand from end users – all happening at unprecedented speed – IT operations and management teams are looking for the right tools to handle the complexity and pace of change.
Need to Do More With Less
IT budgets fell sharply in 2009. According to Gartner, they shrank 8.1 percent that year and another 1.1 percent the year after. Though IT budgets started growing again in 2011, they have only returned to 2005 levels.
At the same time, IT operations teams are running with fewer people and resources, while not only managing an increasing number of systems, but also dealing with the new complexity that comes with hybrid environments and the rapid pace of changes nurtured by agile processes. Increasing productivity while lowering costs seems like a difficult proposition, especially since increased demands are placed on operations staff to manage a variety of rapidly evolving applications across the environment.
Managing Enormous Amounts of Data
Everything from system successes to system failures, and all points in between, are logged and saved as IT operations data. IT services, applications, and technology infrastructure generate data every second of every day. All of that raw, unstructured or polystructured data is needed to manage operations successfully. The problem is that doing more with less requires a level of efficiency that can only come from complete visibility and intelligent control based on the detailed information coming out of IT systems.
Frequent Changes Occur in IT Operations
With the operations staff responsible for the health of the entire business, it is in their DNA to resist anything that might introduce unpredictable changes within the IT infrastructure or applications, so much so that IT Ops are rewarded for consistency and for preventing the unexpected or unauthorized from happening.
However, solving business problems requires creativity and flexibility to meet the frequent changes dictated by business requirements. New agile approaches eschew the standard method of releasing software in infrequent, highly tested, comprehensive increments in favor of a near-constant development cycle that produces frequent, relatively minor changes to applications in production. With hundreds or thousands of dependencies, even if the agile iterations are properly tested throughout development, unforeseen problems can arise in production that can seriously affect the stability.
Since every IT service depends on many parameters from different layers, platforms, and infrastructure, a small change to one parameter among millions of others can have a significant impact. When this happens, finding the root cause can take hours or days, particularly given the pace and diversity of changes. Unplanned changes lie at the root of many failures, creating business and IT crises that must be resolved quickly to avoid productivity and business losses.
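One way to make that concrete is change tracking: capture configuration snapshots over time and surface whatever changed outside an approved window. The sketch below is a minimal illustration of the idea with made-up parameters; it is not how Evolven or any specific ITOA product works.

```python
# Minimal change-tracking sketch: diff two configuration snapshots and flag
# parameters that changed without an approved change record.

def diff_configs(before: dict, after: dict) -> dict:
    """Return {parameter: (old, new)} for every parameter whose value changed."""
    return {k: (before.get(k), after.get(k))
            for k in set(before) | set(after)
            if before.get(k) != after.get(k)}

snapshot_monday = {"jvm.heap": "8g", "db.pool_size": 50, "tls.version": "1.2"}
snapshot_tuesday = {"jvm.heap": "8g", "db.pool_size": 20, "tls.version": "1.2"}
approved_changes = set()   # nothing was approved in this window

for param, (old, new) in diff_configs(snapshot_monday, snapshot_tuesday).items():
    tag = "APPROVED" if param in approved_changes else "UNPLANNED"
    print(f"[{tag}] {param}: {old} -> {new}")
```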
Traditional Approaches Failed
Problems can be difficult to manage or even identify because so many businesses rely only on monitoring software, which alone is not sufficient to address the challenges described above. In fact, problems are often not detected until they have grown out of control. If these issues are not resolved quickly, the result is downtime.
All of the technology infrastructure running an enterprise or organization generates massive streams of data in such an array of unpredictable formats that it can be difficult to leverage using traditional methods or handle in a timely manner. IT operations management based on a collection of limited function and non-integrated tools lacks the agility, automation, and intelligence required to maintain stability in today’s dynamic data centers. Collecting data, filtering it to make it more manageable, and presenting it in a dashboard is nice, but not prescriptive.
One of the holy grails still unresolved in IT management is intelligent IT automation. Some activities are already automated – typically the repetitive, well-known, mundane ones. This frees up people and resources for more innovative work and offers a more agile, speedy response from IT.
However, while automation is an important tool in the kit, it is just one of the tools. The effort to automate complex environments is proportional to their complexity. Essentially, automation is just another generation of scripting: routine operational activities packaged so they can spawn and manage automation “gofers.”
The Rise of IT Operations Analytics
Given that changes to the operational model are almost guaranteed, a change in perspective is needed, one where IT operations takes a proactive approach to service management. Applying big data concepts to the reams of data collected by IT operations tools allows IT management software vendors to efficiently address a wide range of operational decisions. Because of the complexity and dynamics of these environments and processes, organizations need automation that is analytics-driven.
With all of this data, IT Operations Analytics (ITOA) tools stand as powerful solutions for IT, helping to sift through all of the big data to generate valuable insights and business solutions. IT Operations Analytics can provide the necessary insight buried in piles of complex data, and can help IT operations teams to proactively determine risks, impacts, or the potential for outages that may come out of various events that take place in the environment.
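A common building block behind such tools is baselining: score each new sample of an operational metric against its recent history and flag outliers before they become outages. The sketch below uses a simple z-score purely as an illustration; production ITOA products rely on far richer models.

```python
# Baseline-and-flag sketch: mark a metric sample anomalous if it sits more than
# `threshold` standard deviations away from its recent history.
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    if len(history) < 10:
        return False                       # not enough baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

latency_ms = [102, 98, 105, 101, 97, 103, 99, 100, 104, 102]   # normal baseline
for sample in (101, 99, 240):              # the last sample is a spike
    print(sample, "anomalous" if is_anomalous(latency_ms, sample) else "normal")
    latency_ms.append(sample)
```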
By giving operations a new way to proactively manage IT system performance, availability, and security in complex, dynamic environments with fewer resources and greater speed, ITOA contributes to both the top and bottom line of an organization, cutting operations costs and increasing business value through a better user experience and more reliable business transactions.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:30p | Data Center Jobs: McKinstry

At the Data Center Jobs Board, we have a new job listing from McKinstry, which is seeking an Electrical Program Manager in Cheyenne, Wyoming.
The Electrical Program Manager is responsible for:
- Defining customer project requirements
- Owning, managing, and implementing the project schedule as a project management and client management visibility tool
- Establishing a schedule that meets or exceeds customer requirements
- Determining and facilitating the use of internal and external resources required for successful completion of the project
- Completing maintenance service tasks on the basic electrical systems that support the critical power infrastructure

To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
6:37p | DataGravity Raises $50M for Intelligent Storage Appliance

DataGravity has raised $50 million in a Series C investment led by Accel Partners, bringing total investment in the company to $92 million. The new funding will go toward product development and go-to-market strategy.
The company focuses not just on storing data but on doing something with it. DataGravity offers what it calls a data-aware storage appliance. It is an early-stage company with a mission of turning data into information and making discovery simple.
It launched its “data-aware storage platform,” the DataGravity Discovery Series, earlier this year. The Discovery Series is a storage appliance that integrates data protection, data governance, and search and discovery. It is used to drive insights into data and streamline data management.
“The DataGravity vision of the direction ‘storage’ should be going is very compelling and exciting. It is grounded in deep technical innovation – businesses that deploy this technology will gain a competitive advantage,” said Ping Li, venture capitalist with Accel Partners, in a release. “Speaking with DataGravity channel partners and customers crystallized the value DataGravity will bring to businesses. I look forward to working with the team as it makes storage data-aware for more organizations.”
Li joins a board consisting of other members such as Peter Levine, partner at Andreessen Horowitz; David Orfao, managing director at General Catalyst Partners; and Bruce Sachs, general partner at CRV.
“We’re excited to have Ping and Accel Partners joining our mission,” said Paula Long, CEO of DataGravity. “The enthusiastic support we have received from customers, partners, analysts, and investors underscores the deep need in the market for data-aware storage that delivers a competitive business advantage.”
6:50p | SingleHop Opens Chicago Data Center

Hosted private cloud and managed hosting provider SingleHop has launched its largest data center to date in a ten-year, $30 million investment with Digital Realty Trust in the Chicago area. The new 13,000 square foot white space has capacity for up to 20,000 servers and is adjacent to expansion space for another 10,000 square feet.
The Chicago data center is the first facility custom designed by the company. SingleHop chose Digital Realty and its Franklin Park campus (just outside of Chicago) in part because it meant it would be able to customize the data center. The company also spoke highly of Digital Realty’s operating experience.
Chicago-based SingleHop was established in 2006. The company opened a data center in Phoenix in 2012, its first data center outside Chicago. The expansion was driven by customer requests for a West Coast location. The company made its first foray into Europe with an Interxion data center in Amsterdam.
Earlier this year, it raised a $14.2 million round to accelerate its cloud business. “We’re the new private cloud experts,” said Jordan Jacobs, vice president of products at SingleHop. “We were heads-down, investing in technology. Now we’re telling everyone.” Product-wise, it recently added a VMware-based virtual private cloud (VPC) to its line of cloud offerings. The VPC cloud service is interoperable with a customer’s VMware-based internal data center.
The company handles a lot of enterprise private clouds, which necessitated doing things a little differently than typical retail colo, said Jacobs.
Robust Security
The Chicago data center features on-site security guards, dual checkpoints, and numerous layers of biometric security, including fingerprint and iris scanners. There is a property perimeter, SingleHop perimeter with guard at checkpoint, and cage perimeter.
The scanner takes continuous video and matches a person to a profile in less than a second. The iris reader uses one-to-many matching to pair a person with a profile, so there is no need for security staff to issue a badge and look up a profile manually because a fingerprint alone can’t match someone against the system.
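For readers unfamiliar with the terminology, one-to-many (1:N) identification means the system searches every enrolled profile for the closest match, rather than verifying a single claimed identity (1:1). The sketch below illustrates the idea with invented, toy-sized iris templates and an arbitrary threshold; real readers use far longer templates and carefully tuned thresholds.

```python
# Toy 1:N identification sketch: find the enrolled template closest to the probe
# and accept the match only if the distance is below a threshold.

def hamming(a: str, b: str) -> float:
    """Fraction of bits that differ between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

gallery = {                       # enrolled profiles (hypothetical templates)
    "alice": "1011001110100110",
    "bob":   "0110110001011010",
    "carol": "1110000111001101",
}

def identify(probe: str, threshold: float = 0.25):
    """1:N search: return the closest enrolled identity if it is close enough."""
    best = min(gallery, key=lambda name: hamming(probe, gallery[name]))
    return best if hamming(probe, gallery[best]) <= threshold else None

probe = "1011001110100111"        # Alice's scan with one noisy bit
print(identify(probe))            # -> "alice", no manual badge lookup required
```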
Custom Racks, Painted White
SingleHop uses an open-cabinet design and specifically fabricated metal server brackets. It makes custom racks, fabricated for the way it routes and hangs cables.
The racks are also painted all white so it takes less power to light the data center. “I will never put a black rack in another data center again,” said Jacobs. “The difference is significant.”
A 2,000 square foot NOC (Network Operations Center) is built inside the security perimeter. It has a 26 foot video wall showing operations. There is also a workroom within the perimeter and a large workforce.
Redundant Power Distribution
There are two separate power grid systems, going all the way down to the UPS and generators. “A” power comes from the left hallway and “B” power from the right. Both feeds run over the rack rather than through it.
“We have the ability from top of rack, to PDU, to swing power back and forth,” said Jacobs. “It’s concurrently maintainable, we can go a step further and isolate an incident on a single PDU.”
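What “concurrently maintainable” means in practice is that every load is dual-corded to one PDU on each system, so any single PDU (or a whole feed) can be taken out of service without dropping a rack. The toy model below illustrates that check; the topology is hypothetical, not SingleHop’s actual plant.

```python
# Toy concurrent-maintainability check: every rack draws from one "A" PDU and
# one "B" PDU, and the design passes if removing any single PDU leaves all
# racks with at least one live feed.

racks = {
    "rack-01": {"A": "PDU-A1", "B": "PDU-B1"},
    "rack-02": {"A": "PDU-A1", "B": "PDU-B2"},
    "rack-03": {"A": "PDU-A2", "B": "PDU-B1"},
}

def powered(feeds, out_of_service):
    """A rack stays up if at least one of its PDUs is still in service."""
    return any(pdu not in out_of_service for pdu in feeds.values())

def concurrently_maintainable(racks, pdus):
    """True if taking any single PDU offline leaves every rack powered."""
    return all(powered(feeds, {pdu}) for pdu in pdus for feeds in racks.values())

pdus = {"PDU-A1", "PDU-A2", "PDU-B1", "PDU-B2"}
print(concurrently_maintainable(racks, pdus))   # True: any one PDU can be isolated
```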
Enterprise Private Clouds Drive Growth
Private cloud is the company’s fastest-growing product segment. Many customers are companies with $50 million to $500 million in revenue switching from on-premises IT, but it has also seen more customers moving from public cloud to private. They make the switch to private to get a handle on their IT bills, said Jacobs.
Many companies are making the same mistakes in public cloud that they were making on premises, namely overpaying for unused VMs and resources. So, many are moving to private cloud, but the on-prem private cloud option is unsuitable because it removes agility.
“There are a few reasons for the interest,” said Jacobs. “If you set up private cloud in your office, you have to consult with multiple vendors, and even with the most forward-thinking vendors it takes four to six months of negotiating before you can get a private cloud on-prem. We build the exact same infrastructure – EMC, Cisco, VMware – in a week.”
There is an argument that private cloud is too expensive, or more expensive than public cloud. Jacobs believes this is a misconception because people are making the same mistakes in public cloud that they previously made with physical servers: they’re paying for resources they don’t use.
“Amazon uses an allocation model: you’re paying for the full amount even if you use none of the resources,” said Jacobs. “Even with all the price cuts, the average virtual machine uses around 3 to 7 percent, so you’re overpaying for the rest. We assign resources on a reservation basis.”
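The arithmetic behind that argument is simple: at single-digit utilization, the effective price of the capacity you actually use is a large multiple of the sticker rate. The numbers below are illustrative placeholders, not AWS or SingleHop pricing.

```python
# Illustrative only: effective cost per utilized hour under an allocation model.

hourly_rate = 0.10        # hypothetical $/hour for an allocated VM
utilization = 0.05        # 5% average use, mid-range of the 3-7% figure cited

monthly_bill = hourly_rate * 730
effective_rate = hourly_rate / utilization   # what each *used* hour really costs

print(f"Billed per month:           ${monthly_bill:.2f}")
print(f"Effective cost per used hr: ${effective_rate:.2f} (20x the sticker rate at 5% use)")
```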
Amazon Web Services offers discounts on cloud capacity reserved in advance for long terms.
7:41p | Microsoft’s Government Cloud Comes Online

Services on Microsoft’s cloud computing infrastructure designed and built specifically for serving government clients are now generally available. Microsoft CEO Satya Nadella made the announcement at the company’s event in Washington, D.C., Tuesday.
The infrastructure that supports Microsoft Azure Government is physically and virtually separated from non-government cloud infrastructure and lives in data centers within the U.S. It is operated by personnel who have gone through specialized screening.
The cloud is for federal, state, and municipal government agencies. Government cloud is a huge market opportunity for cloud service providers. Agencies at every level of government are looking to cut costs and optimize IT operations by transitioning to cloud services.
The federal government has had an official Cloud First program for more than three years. It requires agencies to consider cloud for any application deployment before they consider any other hosting options.
Microsoft is competing with a pool of strong competitors for federal cloud dollars. Amazon Web Services has a data center dedicated to government cloud infrastructure; IBM SoftLayer is working on bringing two dedicated government cloud facilities online; HP has launched a government flavor of its Helion cloud services; QTS has opened a data center specifically for hosting government cloud infrastructure. There are numerous other examples.
FedRAMP Certification in Future
Azure Government offers compute, storage, data, networking, and applications. To differentiate, the company is leveraging its ability to give users a consistent experience for applications hosted on premises and in the cloud.
Microsoft is working on securing a certificate of compliance for Azure Government with the Federal Risk and Authorization Management Program (FedRAMP) – a must for hosting federal agencies’ applications. The company’s big public cloud service, Microsoft Azure, is already FedRAMP-compliant.
Azure’s FedRAMP-compliant competitors include CGI, HP, Lockheed Martin, Oracle, AWS, Salesforce, and Verizon, among others.
Cloud CRM for Government Coming Up
Microsoft also said Dynamics CRM Online for Government will be generally available in January. Like the IaaS offerings, it will be hosted on isolated infrastructure and operated by cleared U.S. personnel.
Agencies will be able to integrate their Office 365 and Azure cloud assets with CRM Online for Government.
8:44p | 2015 Data Center World Global Conference

The 2015 Data Center World Global Conference will be held April 19-23 at The Mirage Hotel and Casino in Las Vegas.
Data Center World is managed by AFCOM, a professional organization at the forefront of data center innovation for 35 years. The conference agenda—which includes over 20 user-led sessions this spring—is shaped by data center professionals and refined by attendee feedback. As a result, you’ll find a vendor-neutral environment, not sales pitches posing as education sessions.
Beyond the industry’s most comprehensive content and largest trade show, what separates Data Center World most from other conferences are the extras you’ll find as an attendee:
- Session presenters must provide three key take-aways and certify vendor-neutral content
- Network online with all attendees—before the conference and beyond—with CONNECT™
- An AFCOM Attendance Certificate recording each of the sessions where you gained valuable knowledge
- The Data Center Manager of the Year Award—an additional networking venue
- Our friendly, knowledgeable and responsive staff
- Hosted by AFCOM President Tom Roberts, data center professional for 25+ years
For more information about this year’s conference, visit the 2015 Data Center World Global Conference website.
To view additional events, return to the Data Center Knowledge Events Calendar.
8:55p | Iomart Acquires Hosting Provider ServerSpace for $6.65 Million US
This article originally appeared at The WHIR
Iomart reported its half yearly results on Tuesday, including the disclosure that it has acquired hosting and cloud company ServerSpace for £4.25 million ($6.65 million USD).
ServerSpace is based in London and was founded in 2006 by Irish entrepreneur Tim Pat Dufficy. It provides managed hosting, cloud hosting, colocation, and connectivity services, and was named by Deloitte as one of the fastest-growing UK tech companies in both 2012 and 2013.
ServerSpace currently rents its data center space, according to the Irish Times, so as a subsidiary of iomart, which operates eight data centers in the UK alone, it should be able to substantially reduce its overhead cost.
“The hosting market is hugely competitive and one of the problems we’ve had is that although we’ve got the expertise and know-how, we’ve often been beaten to the bigger deals because we’re small. Having the weight of a big and impressive parent company like iomart behind us will give us a much better chance of winning those deals,” Dufficy, who will remain with ServerSpace, told the Irish Times.
Iomart considers itself big enough to compete, as shown by its rejection of a takeover bid by Host Europe in July.
The company’s statement also mentions recent partnerships with Microsoft for its Cloud Solution Provider Program and with EMC to be its European partner for the launch of EMC Enterprise Hybrid Cloud.
Iomart shares fell over 20 percent in Tuesday trading on the London Stock Exchange.
Recent acquisitions of hosting companies include Cloud Equity Group’s purchase of Just199 Hosting and Arvixe’s acquisition by EIG, and, in Europe, cloud provider Ipeer’s October acquisition by TeliaSonera.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/iomart-acquires-hosting-provider-serverspace-6-65-million-us
9:28p | 7x24 Exchange 2015 Spring Conference

7x24 Exchange will host its 2015 spring conference June 7-10 at the JW Marriott Orlando Grande Lakes in Orlando, Florida. The theme of the conference is Connect, Collaborate, Deliver.
7x24 Exchange is aimed at knowledge exchange among those who design, build, operate, and maintain mission-critical enterprise information infrastructures. Its goal is to improve end-to-end reliability by promoting dialogue among these groups.
For more information about this year’s conference, visit the 7X24 Exchange website.
To view additional events, return to the Data Center Knowledge Events Calendar.
10:15p | Sabey, McKinstry Pitch Alternative to Heating Banks in Commissioning

McKinstry and Sabey Data Centers have developed a device the firms claim can simulate operating conditions in a data center before it is populated with servers and other IT gear.
The companies have filed to patent the thermal simulation technology, which they say can replace heating banks companies usually rent to test new data centers in the commissioning process. Unlike heating banks, the Mobile Commissioning Assistant simulates airflow in addition to temperature.
Simulating power and heat loads is an important part of the commissioning process, used to test whether supporting infrastructure systems in a new data center act as designed.
This is an unusual announcement since neither company is an equipment vendor. McKinstry designs, builds, and operates data centers for companies, and Sabey is a major Seattle-based data center landlord with properties in Washington State, Ashburn, Virginia, and New York City.
Since both companies are heavily involved in data center design and construction, however, commissioning is an important part of their business.
The thermal simulation device physically recreates data center conditions. Companies also use virtual predictive modeling (usually computational fluid dynamics modeling) to see how a certain layout and cooling system design will affect a data center’s thermal environment.
The product includes a heating coil, a fan, and an adjustable duct, all housed on a four-wheeled cart that can be pushed around the data center. The duct mimics effects of containment barriers, while the fan creates air flow and simulates pressurization patterns.
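The physics the cart has to reproduce is the standard sensible-heat balance between coil power, airflow, and temperature rise. The sketch below shows that relationship with illustrative numbers only; it is not based on McKinstry or Sabey specifications.

```python
# Sensible-heat balance: delta_T = Q / (density * cp * flow).
# For a given heating-coil power, the fan setting determines the exhaust
# temperature rise the cart presents to the cooling system under test.

AIR_DENSITY = 1.2    # kg/m^3, roughly, at room conditions
AIR_CP = 1005        # J/(kg*K)

def exhaust_delta_t(coil_watts, airflow_cfm):
    """Temperature rise (deg C) across the cart for a given coil power and fan setting."""
    flow_m3s = airflow_cfm / 2119.0          # convert CFM to m^3/s
    return coil_watts / (AIR_DENSITY * AIR_CP * flow_m3s)

# A hypothetical 10 kW coil at three fan settings:
for cfm in (800, 1200, 1600):
    print(f"{cfm} CFM -> {exhaust_delta_t(10_000, cfm):.1f} C rise")
```

Dialing the coil and fan together lets the cart mimic anything from a dense rack with a steep exhaust-temperature rise to a lightly loaded one, which is the airflow behavior heating banks alone cannot reproduce.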
 Mobile Commissioning Assistant diagram, courtesy of McKinstry and Sabey
The companies said the thermal simulation device will pay for itself after four uses, compared with the cost of renting traditional commissioning equipment.
“Our team is always developing methods to improve operating efficiencies for our clients, and the innovative Mobile Commissioning Assistant product is another way we will accomplish this with our data center projects,” Dean Allen, CEO of McKinstry, said in a statement.