Data Center Knowledge | News and analysis for the data center industry

Friday, December 4th, 2015

    Time Event
    1:00p
    The Problem of Inefficient Cooling in Smaller Data Centers

    A lot of the conversation about data center inefficiency focuses on underutilized servers. The splashy New York Times article in 2012 talked about poor server utilization rates and pollution from generator exhaust, and a Stanford study we covered earlier this year revealed just how widespread the problem of underutilized compute is.

    While those are important issues – the Stanford study found that the 10 million servers humming idly in data centers around the world are worth about $30 billion – there’s another big source of energy waste: data center cooling. It’s no secret that a cooling system can guzzle as much as half of a data center’s entire energy intake.

    While web-scale data center operators like Google, Facebook, and Microsoft extol the virtues of their super-efficient designs and get a lot of press attention for it, an often-ignored fact is that these companies account for only a small fraction of the world’s total data center footprint.

    The campus data center operated by a university IT department; the mid-size enterprise data center; the local government IT facility. These facilities, and others like them, are data centers hardly anybody ever hears about. But they house the majority of the world’s IT equipment and use the bulk of the energy consumed by the data center industry as a whole.

    And they are usually the ones with inefficient cooling systems, either because the teams running them don’t have the resources for costly and lengthy infrastructure upgrades, or because those teams never see the energy bill and don’t feel the pressure to reduce energy consumption.

    Data center engineering firm Future Resource Engineering found ways to improve efficiency in 40 data centers this year that would save more than 24 million kWh of energy in total. Most of the improvements were to cooling systems. Data center floor area in these facilities ranged from 5,000 square feet to 95,000 square feet.

    The biggest culprit? Too much cooling. “The trend is still overcooling data centers,” Tim Hirschenhofer, director of data center engineering at FRE, said. And the fact that they’re overcooling is not lost on the operators. “A lot of customers definitely understand that they overcool. They know what they should be doing, but they don’t have the time or the resources to make the improvements.”

    There are generally two reasons to overcool: redundancy and hot spots. Both are problems that can be addressed with proper air management systems. “You overcool because you don’t have good air management,” Magnus Herrlin, program manager for the High Tech Group at Lawrence Berkeley National Lab, said.

    Because data center reliability always trumps energy efficiency, many data centers have redundant cooling systems that are all blasting full-time at full capacity. With proper controls and knowledge of the actual cooling needs of the IT load, you can keep redundant cooling units in standby mode and turn them back on automatically when they’re needed: when some primary capacity is lost, or when the load increases.
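    The control logic involved is simple in principle. Here is a minimal sketch of that kind of standby management; the `CoolingUnit` class, unit names, and the 20 percent headroom figure are all hypothetical, for illustration only, not any vendor’s actual building management system.

```python
# Illustrative sketch of redundant-cooling standby control.
# All names and thresholds are hypothetical, not a real BMS API.

from dataclasses import dataclass

@dataclass
class CoolingUnit:
    name: str
    capacity_kw: float      # cooling capacity when running
    healthy: bool = True
    running: bool = False

def manage_cooling(units, it_load_kw, headroom=1.2):
    """Run just enough healthy units to cover the IT load plus headroom;
    keep the rest in standby. Failed units are skipped, which automatically
    promotes a standby unit when primary capacity is lost."""
    target_kw = it_load_kw * headroom
    supplied_kw = 0.0
    for unit in units:
        if not unit.healthy:
            unit.running = False
            continue
        unit.running = supplied_kw < target_kw
        if unit.running:
            supplied_kw += unit.capacity_kw
    return supplied_kw

units = [CoolingUnit("CRAC-1", 100), CoolingUnit("CRAC-2", 100),
         CoolingUnit("CRAC-3", 100)]  # N+1 setup for a 150 kW room

manage_cooling(units, it_load_kw=150)   # two units run, one stands by
print([u.name for u in units if u.running])  # ['CRAC-1', 'CRAC-2']

units[0].healthy = False                # primary capacity lost
manage_cooling(units, it_load_kw=150)   # the standby unit takes over
print([u.name for u in units if u.running])  # ['CRAC-2', 'CRAC-3']
```

    The point of the sketch is the contrast with what Hirschenhofer describes: without any such control loop, all three units simply run at full capacity all the time.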

    But most smaller data centers don’t have control systems in place that can do that. “Air management is not a technology that has been widely implemented in smaller data centers,” Herrlin said. Recognizing the problem of widespread inefficiency in smaller data centers, LBNL, one of the US Department of Energy’s many labs around the country, is focusing more and more on this segment of the industry. “We need to understand and provide solutions for the smaller data centers,” he said.

    Overcooling is also a common but extremely inefficient way to fight hot spots. That’s when some servers run hotter than others, and the operator floods the room with enough cold air to make sure the handful of offending machines are happy. “That means the rest of the data center is ice-cold,” Herrlin said.

    Another common problem is poor separation between hot and cold air. Without proper containment or with poorly directed airflow, hot exhaust air gets mixed with cold supply air, resulting in the need to pump more cold air to bring the overall temperature to the right level. It goes the other way too: cold air ends up getting sucked into the cooling system together with hot air instead of being directed to the air intake of the IT equipment, where it’s needed.
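    A back-of-the-envelope calculation shows why recirculation forces colder setpoints. If a fraction of the air arriving at a server intake is hot exhaust, the intake temperature is roughly the mass-weighted average of supply and exhaust air. The temperatures and recirculation fraction below are made up for the example:

```python
# Illustrative: how recirculated hot exhaust forces a colder supply
# setpoint. All numbers are hypothetical.

def mixed_intake_temp(supply_c, exhaust_c, recirc_fraction):
    """Server intake temperature when a fraction of the intake air is
    recirculated hot exhaust (simple mass-weighted average)."""
    f = recirc_fraction
    return (1 - f) * supply_c + f * exhaust_c

# Perfect separation: 22 C supply air arrives at the intake at 22 C.
print(mixed_intake_temp(22.0, 37.0, 0.0))   # 22.0

# With 20% of intake air being 37 C exhaust, the intake sees 25 C,
# so the operator compensates by lowering the supply setpoint.
print(mixed_intake_temp(22.0, 37.0, 0.2))   # 25.0
print(mixed_intake_temp(18.25, 37.0, 0.2))  # 22.0 again, at higher cost
```

    In other words, a few degrees of mixing translates directly into a few degrees of extra chilling across the entire supply airstream.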

    While Google uses artificial intelligence techniques to squeeze every watt out of its data center infrastructure, many smaller data centers don’t have even basic air management capabilities. The large Facebook or Microsoft data centers have built extremely efficient facilities to power their applications, Herrlin said, “but they don’t represent the bulk of the energy consumed in data centers. That is done in much smaller data centers.”

    Leonard Marx, manager of business development at Clearesult, another engineering firm focused on energy efficiency, said hardly anybody has a perfectly efficient data center, and because the staff managing the data center are seldom responsible for the electric bill, the philosophy of “if it ain’t broke, don’t fix it” prevails.

    Understandably, a data center manager’s first priority is reliability, and building more reliable systems through redundancy creates inefficiency. With a system that’s reliable but inefficient, and with a data center manager who is not responsible for energy costs, there’s little incentive to improve. Without changes that divert more attention in the organization to data center energy consumption, the problem of energy waste in the industry overall will persist, regardless of how efficient the next Facebook data center is.

    5:30p
    Weekly DCIM Software News Update: December 4

    CommScope adds capacity forecasting to its iTRACS DCIM software, No Limits Software adds two new products to its DCIM suite, and 451 Research evaluates the Data Center Management Software market.

    CommScope adds Capacity Forecaster to iTRACS DCIM 4.2. CommScope announced the addition of a new capability in its DCIM software suite to take the guesswork out of planning and running a data center. The new Capacity Forecaster is a browser-based engine allowing data center operators to predict, understand and act upon their future capacity needs across the entire physical ecosystem, including power, space (floor and rack), cooling and network connectivity.

    No Limits Software adds products to DCIM suite. No Limits Software announced the release of two new products in its RaMP DCIM software suite. RaMP Asset provides management of data center assets, from virtual machines to IT equipment (servers, storage, and network) to facilities equipment (power and cooling). RaMP Power discovers and manages all SNMP power equipment, including rack PDUs, PDUs, and UPSs, as well as monitoring environmental conditions.

    451 Research releases research report on Datacenter Management Software. 451 Research released a new Market Monitor overview report on the Datacenter Management Software (DCIM and DCSO) marketplace. The report includes a bottom-up market-sizing analysis that incorporates revenue estimates and forecasts for 69 competing vendors in the Datacenter Management Software (DMS) market.

    6:25p
    Friday Funny: Christmas in the Cold Aisle

    Time to put up the lights!

    Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon, and we challenge our readers to submit the funniest, cleverest caption they can think of. Then we ask our readers to vote for the best submission, and the winner receives a signed print of the cartoon.

    Congratulations to Stanley, whose caption won the Halloween edition of the contest. His caption was: “Yes, they’re a direct competitor of Apple.”

    Some good submissions came in for the Thanksgiving edition – now all we need is a winner. Help us out by submitting your vote below!

    Take Our Poll

    For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!

    9:07p
    IBM Strikes More Direct Cloud Connectivity Deals with Data Center Providers

    Cloud infrastructure providers appear to have found a way to convince enterprises to put some of their more sensitive data and applications into the cloud, and that way is linking customers’ servers to their own with direct, private connections, often in the same data center, without using the public internet.

    While IBM SoftLayer has offered private connectivity to its cloud out of colocation data centers before, this week it announced a substantial expansion of that effort. It has partnered with several major data center and network service providers to sell this kind of cloud connectivity service in their data centers around the world.

    The data center providers are Equinix, Digital Realty Trust, Amsterdam’s Interxion, and Australia’s NextDC. IBM named Verizon and Colt as partner network operators, although both offer data center services too.

    IBM has also added the option to take a colocation cabinet within some of its own data centers for the purpose of connecting directly to cloud servers. IBM is a major Digital Realty customer, and so is Equinix, so in many cases those cabinets are likely to be within Digital Realty facilities.

    The pitch is to give enterprises a reliable and secure way to build hybrid infrastructure, combining servers under their control with rented cloud capacity. IBM is promising a seamless hybrid cloud, where users can move workloads to and from its cloud servers as if they were part of their own internal network.

    While security is a big part of the appeal, performance is an almost equally important aspect. Equinix ran a test, comparing a data transfer over the internet to the same data transfer over a direct link, in this case ExpressRoute, a private cloud connectivity option offered by Microsoft Azure.

    The difference was stark: a 1GB file transfer over the internet took 93 seconds, while the same transfer over Azure ExpressRoute took 41 seconds. That’s more than twice as fast.
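    The arithmetic checks out: 93 / 41 ≈ 2.27. A quick sketch of the effective throughput in each case (assuming a 10^9-byte gigabyte):

```python
# Effective throughput and speedup for the 1 GB transfer test:
# 93 seconds over the internet vs. 41 seconds over ExpressRoute.

GB = 10**9  # bytes, assuming a decimal gigabyte

def throughput_mbps(size_bytes, seconds):
    """Effective throughput in megabits per second."""
    return size_bytes * 8 / seconds / 10**6

internet = throughput_mbps(GB, 93)      # ~86 Mbps
expressroute = throughput_mbps(GB, 41)  # ~195 Mbps
speedup = 93 / 41                       # ~2.27x

print(round(internet), round(expressroute), round(speedup, 2))
```

    Note that both figures are well below the capacity of a typical direct link; the gap mostly reflects internet path congestion and protocol overhead rather than raw line rate.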

    Private cloud connectivity promises to fuel the rate of cloud adoption by enterprises, but it’s also expected to drive a lot of growth for data center providers that offer it.

    9:16p
    Intel Open Sources Cloud Performance Monitoring Tool


    This post originally appeared at The Var Guy

    Intel’s latest move in its “Cloud for All” initiative — which it says will accelerate enterprise adoption of public, private, and hybrid clouds — is an open source tool called snap, which helps organizations understand the telemetry of their clouds.

    In other words, snap reveals automated information about cloud performance and resources. It works across clouds large and small, and is designed to be compatible with different types of storage and computing systems.

    Intel says snap is especially important as more and more cloud infrastructure becomes software-defined. When that happens, it gets harder to identify and monitor resources based on physical hardware, since most of the infrastructure is abstracted from bare-metal resources.

    “Snap-enabled software tools will give system integrators, operators, solutions providers, and the data center analytics ecosystem a much more comprehensive view of infrastructure capabilities, utilization, and events in real time — making full automation and orchestration of workloads across server, storage, and network resources a reality,” Intel said in a statement announcing snap.

    In its announcement, Intel didn’t mention which open source license snap would use, but the code is available on GitHub under the Apache 2.0 license.

    Intel announced snap at the Tectonic Summit this week in New York.

    This first ran at http://thevarguy.com/open-source-application-software-companies/intel-open-sources-snap-cloud-telemetry-tool-promote-clou

