Data Center Knowledge | News and analysis for the data center industry
Friday, September 18th, 2015
12:00p
California Officials Greenlight Water-Saving Data Center Cooling Tech
California building regulators recently approved a type of free data center cooling technology that saves water, writing the change into the part of the state’s building standards code that deals with energy efficiency of buildings.
Data center operators in California can now install economization systems that use specialized refrigerant fluid, instead of water, as a medium for exchanging heat with the outside environment. Until recently, the California Building Standards Code, also known as Title 24, required the use of economizers in data centers, but only ones that either pull outside air into the building (air-side economizers) or use water to transfer heat outside (water-side economizers).
Emerson Network Power lobbied the California Energy Commission to make the change because it wanted to sell a data center cooling system that uses pumped refrigerant for economization to data center operators in the state.
John Peter Valiulis, VP of North America marketing for Emerson’s Thermal Management unit, said the regulators were scrupulous in evaluating the proposed changes and the data the company submitted to make its case, but the state was willing to cooperate because of its extreme drought. The drought has created high demand for water-saving data center cooling solutions in California, and he expects more innovation in the space as a result.
“There’s enormous amount of interest,” Valiulis said. “Most interest we’ve experienced in well over a decade. You’re going to see a lot of new technologies come out like this.”
Simulation files Emerson submitted to the CEC showed that the company’s refrigerant-based system would not only avoid the use of water in free cooling altogether but also use less energy than a water-side economizer would in 14 of California’s 16 climate zones. The data were reviewed and endorsed by CEC staff and an outside consultant.
Water-side economizers are considered preferable to air-side economizers for large data centers, because pushing a lot of outside air into a data center requires extensive filtration for contaminants and more effort to control humidity. The pumped-refrigerant option has the same benefits as the water option, minus the water. It can take as much as 4 million gallons of water per year to cool 1 megawatt of data center capacity using a water-side economizer, Valiulis said.
Economizers work together with mechanical cooling, supplementing cooling capacity when outside air is cool enough. Depending on conditions, the free-cooling system can either replace the mechanical chiller completely during cool hours or carry part of the load.
Emerson’s product is called the Liebert DSE thermal management system with EconoPhase Pumped Refrigerant Economizer. The company launched it a little over a year ago. Outside of California, it is now deployed at about 50 sites in North America, as well as some in the UK and Australia, Valiulis said.
It is mostly a traditional direct-expansion cooling system with an indoor Computer Room Air Conditioner and an outside condenser. The unusual third component is a refrigerant pump. It has two refrigerant circuits: one to the pump and one to the condenser.
There are two compressors, which can be switched off individually when the system shifts to pumped-refrigerant economization. The system measures temperature on the data center floor and outside in real time. If it is cool enough outside, it switches off one compressor and relies on the economization loop. If it gets even colder, it switches off the second compressor.
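As a rough illustration of that staged logic, here is a minimal sketch; the temperature thresholds and names are assumptions made for illustration, not Emerson’s actual control parameters:

```python
# Hypothetical sketch of the staged economization logic described above.
# Threshold values and names are illustrative, not Emerson's actual settings.

FULL_ECONOMIZATION_TEMP_F = 45     # assumed outdoor temp for running on the pump alone
PARTIAL_ECONOMIZATION_TEMP_F = 65  # assumed temp for shutting off one of two compressors

def select_cooling_mode(outdoor_temp_f: float) -> dict:
    """Decide how many compressors to run based on outdoor temperature."""
    if outdoor_temp_f <= FULL_ECONOMIZATION_TEMP_F:
        # Cold enough: the pumped-refrigerant loop carries the whole load.
        return {"compressors_on": 0, "refrigerant_pump_on": True}
    if outdoor_temp_f <= PARTIAL_ECONOMIZATION_TEMP_F:
        # Moderately cool: the economizer supplements one remaining compressor.
        return {"compressors_on": 1, "refrigerant_pump_on": True}
    # Too warm for free cooling: both compressors handle the load.
    return {"compressors_on": 2, "refrigerant_pump_on": False}

print(select_cooling_mode(40))   # {'compressors_on': 0, 'refrigerant_pump_on': True}
print(select_cooling_mode(58))   # {'compressors_on': 1, 'refrigerant_pump_on': True}
print(select_cooling_mode(85))   # {'compressors_on': 2, 'refrigerant_pump_on': False}
```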
While data center water use has been on the industry’s radar at least since The Green Grid started working on its Water Usage Effectiveness metric, published in 2011, the issue has become more acute in California recently due to the drought. With no telling whether water availability in the state will ever improve, data center operators will increasingly look to alternative technologies to reduce their reliance on water for cooling.
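For reference, The Green Grid’s WUE metric divides a site’s annual water usage by its IT equipment energy. A minimal sketch, using made-up figures that roughly match the 4-million-gallon-per-megawatt estimate above:

```python
def water_usage_effectiveness(annual_site_water_liters: float,
                              annual_it_energy_kwh: float) -> float:
    """The Green Grid's WUE: liters of water consumed per kWh of IT energy."""
    return annual_site_water_liters / annual_it_energy_kwh

# Illustrative figures only: a 1 MW IT load running year-round,
# with 4 million gallons (~15.1 million liters) of annual water use.
it_energy_kwh = 1_000 * 24 * 365      # 1 MW * 8,760 hours
water_liters = 4_000_000 * 3.785      # gallons converted to liters
print(round(water_usage_effectiveness(water_liters, it_energy_kwh), 2))  # ~1.73 L/kWh
```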
Emerson announced earlier this year a plan to spin off Network Power as a separate entity.
3:00p
Friday Funny: Data Center Ceiling
I wonder what’s up there…
Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon, and we challenge our readers to submit the funniest, most clever caption they think will be a fit. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.
Congratulations to Michael, whose caption won the “Floating Data Center” edition of the contest. Michael’s caption was: “Thar she blows! PUE like a snow hill! It’s Mobi-data!”
Lots of submissions came in for the “Blob” edition – now all we need is a winner. Help us out by submitting your vote below!
4:34p
Weekly DCIM Software News Update: September 18
Cormant-CS releases version 8 of its DCIM software. Greenfield Software launches a new edition of its GFS Crane DCIM software.
Cormant-CS releases version 8 of its DCIM software. Cormant-CS announced the release of version 8 of its Cormant-CS DCIM software. The new release features over 50 enhancements including alerts, a new infrastructure console, HTML5 interface, searchable URL page links, and a web UI overhaul.
GFS launches new version of DCIM software. Greenfield Software announced the next version of its Data Center Infrastructure Management software, GFS Crane. New features in this release include self-provisioning and role-based access, which let colocation providers offer a value-added service to customers, as well as new functionality for greater accuracy in customer provisioning and billing.
We’ve also added two new vendor profiles to the Solutions and Suppliers section of the DCIM InfoCenter this week. Learn about Device42 and Maya HTT and their DCIM software offerings here.
5:20p
Carter Validus Buys Level 3-Leased Minnesota Data Center
Carter Validus Mission Critical REIT II has acquired a three-building property in Minnesota, consisting of two data centers and an office building. The property is fully occupied by tw telecom of Minnesota, a Level 3 Communications subsidiary; Uroplasty; and a publicly traded Fortune 500 company whose name Carter Validus did not disclose.
Carter Validus Mission Critical REIT II is one of two subsidiaries of Carter Validus REIT Investment Management Company. The second subsidiary is called Carter Validus Mission Critical. The investment management firm specializes in buying leased data centers and healthcare facilities in tier-two markets around the US.
Carter Validus II was launched recently and now counts three data center properties in its portfolio: two in Minnesota and one in Indiana. Its healthcare portfolio is much larger, consisting of about 20 properties in the South, the Midwest, and the Northeast. Its sister company owns close to 20 data centers and close to 50 healthcare facilities across the country.
The new Minnesota campus in the Carter Validus II portfolio measures about 135,000 rentable square feet and sits on a 15.54-acre property in Minnetonka, southwest of St. Paul and Minneapolis. The main data center facility has 3 megawatts of critical power capacity; the second one has 1.25MW. The third building houses office space and a research lab.
tw telecom occupies more than 100,000 square feet. Uroplasty occupies about 22 percent of the property. The rest is leased to the unnamed Fortune 500 tenant.
“The high concentration of Fortune 500 companies in the Twin Cities, paired with a large demand for data center space and its cooler inland location, makes this market ideal for expanding our portfolio,” Michael Seton, the company’s president, said in a statement.
6:51p
Rackspace Seeks Data Center Tax Breaks in Reno, Nevada
Rackspace is exploring the possibility of building a data center in Reno, Nevada. The company applied for a package of data center tax breaks with the state governor’s economic development office this week, the Reno Gazette-Journal reported, citing the managed cloud and hosting services provider’s application for incentives.
The company has not made a final decision to build in Reno, Rackspace COO Mark Roenigk said in a statement emailed to Data Center Knowledge. “We have identified some compelling benefits to adding a data center presence in Nevada,” he said. “However, at this time we do not have any finalized build-out or operational plans to share with you for this specific location or any other West Coast location.”
Rackspace is asking for a 20-year sales tax abatement on equipment and a 75-percent personal property tax abatement for the same period of time, according to RGJ. The tax breaks would apply to a potential 150,000-square-foot data center in Reno Technology Park, which is currently home to an expanding Apple data center campus. The data center outlined in Rackspace’s application would cost $422 million.
Switch, famous for its massive SuperNap campus in Las Vegas, is building a $1 billion data center campus across the freeway from RTP, close to the construction site of a Tesla battery manufacturing plant. eBay will be the anchor tenant in the Switch facility.
Nevada Governor Brian Sandoval signed a new set of data center tax breaks into law in June. Applying to both data center owners and colocation customers, the bill gives 10-year tax abatements to projects that cost $25 million or more and hire and keep at least 10 Nevada residents employed. A company that invests $100 million or more in a data center and hires 50 employees can enjoy tax abatements for up to 20 years.
If Rackspace decides Reno will be its next data center location, it is not likely to build the facility on its own. As it has done elsewhere, the company will use a third-party data center provider, such as Digital Realty, which built its latest UK data center. “We can re-affirm that we do plan to remain consistent with our data center strategy of working with third-party data center development and operational partners,” Roenigk said.
State economic development officials are supportive of Rackspace’s application, according to RGJ. A spokesperson for the governor’s economic development office did not respond to a request for comment.
9:46p
Microsoft Builds Own Linux-Based Data Center Network OS for Azure
Microsoft has built a Linux-based data center network operating system for its global Azure cloud infrastructure to have more control over network management software than networking vendors can provide.
Like other companies that provide services globally over the internet, such as Google, Facebook, and its main cloud-services rival Amazon, Microsoft designs its own data center hardware and much of the software that runs on that hardware. These companies have a lot to gain from a custom technology stack that does exactly what they need – nothing less and nothing more.
“What the cloud and enterprise networks find challenging is integrating the radically different software running on each different type of switch [sold by vendors] into a cloud-wide network management platform,” Kamala Subramaniam, principal architect for Azure networking at Microsoft, wrote in a blog post. “Ideally, we would like all the benefits of the features we have implemented and the bugs we have fixed to stay with us, even as we ride the tide of newer switch hardware innovation.”
Google reportedly designs its own data center networking hardware. Facebook started designing its own data center switches recently. The social networking giant has talked publicly about its Wedge and Six Pack switches. It appears that Microsoft does not make its own networking hardware, relying instead on vendor-supplied switches.
Its network OS, called Azure Cloud Switch, enables the company to use the same software stack “across hardware from multiple switch vendors,” Subramaniam wrote.
Microsoft does design its own servers to support Azure, using specs open sourced through the Open Compute Project, the Facebook-led open source hardware and data center design initiative, as the basis. Microsoft joined OCP last year.
ACS has enabled Microsoft to identify, fix, and test software bugs much faster. It also allows the company to run a lean software stack that doesn’t have unnecessary features for its data center networking needs. Vendors design traditional switch software for a variety of customers, all with different needs, which means an individual customer ends up with features they never use.
It also allows Microsoft to try new hardware faster and makes it easier to integrate the networking stack with the company’s monitoring and diagnostics system. It also means networking switches can be managed the same way servers are, “with weekly software rollouts and roll-backs, thus ensuring a mature configuration and deployment model.”
What enables the company to run ACS across different suppliers’ hardware is the Switch Abstraction Interface spec, an open API for programming switch ASICs. The SAI effort is part of the Open Compute Project, and Microsoft was a founding member of the effort, along with Facebook, Dell, Broadcom, Intel, and Mellanox. OCP officially accepted SAI in July.
SAI abstracts the underlying data center networking hardware, making it easier for users or vendors to write network management software without tying it to specific products. SAI was an “instrumental piece to make the ACS a success,” Subramaniam wrote.
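To illustrate the general idea of such an abstraction layer, here is a simplified sketch; the class and method names are hypothetical and are not the actual SAI API, which is published as a C header set through OCP:

```python
# Hypothetical illustration of a switch-abstraction layer in the spirit of SAI.
# Names are invented for this sketch; the real SAI is a vendor-neutral C API.
from abc import ABC, abstractmethod

class SwitchAsic(ABC):
    """Vendor-neutral interface the network OS programs against."""

    @abstractmethod
    def create_vlan(self, vlan_id: int) -> None: ...

    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None: ...

class VendorAAsic(SwitchAsic):
    """One vendor's SDK wrapped behind the common interface."""
    def create_vlan(self, vlan_id: int) -> None:
        print(f"vendor-A SDK call: program VLAN {vlan_id}")
    def add_route(self, prefix: str, next_hop: str) -> None:
        print(f"vendor-A SDK call: route {prefix} via {next_hop}")

def provision(asic: SwitchAsic) -> None:
    # The same management code runs regardless of which ASIC is underneath.
    asic.create_vlan(100)
    asic.add_route("10.0.0.0/24", "10.0.0.1")

provision(VendorAAsic())
```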