Data Center Knowledge | News and analysis for the data center industry
 

Monday, May 1st, 2017

    12:00p
    How Practical is Dunking Servers in Mineral Oil Exactly?

    We’ve covered oil immersion cooling technology from Green Revolution Cooling (GRC) here on Data Center Knowledge since the turn of the decade.  It’s an astonishingly simple concept that somehow still leaves one reaching for a towel:  If you submerge your heat-producing racks in a substance that absorbs heat twelve hundred times better than air, then circulate that substance with a radically ordinary pump, you prolong the active life of your servers while protecting them from damage and corrosion.
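
    That twelve-hundred-times figure roughly falls out of the volumetric heat capacities of the two fluids.  The sketch below is a back-of-the-envelope check using typical textbook property values for mineral oil and air (our assumptions, not numbers supplied by GRC):

        # Rough sanity check of the "~1,200x" claim: compare volumetric heat capacity
        # (how much heat a given volume absorbs per degree of temperature rise).
        # Property values below are typical textbook figures, not GRC's own numbers.

        def volumetric_heat_capacity(density_kg_m3, specific_heat_j_kg_k):
            """Heat absorbed per cubic meter per kelvin, in J/(m^3*K)."""
            return density_kg_m3 * specific_heat_j_kg_k

        oil = volumetric_heat_capacity(850.0, 1670.0)  # mineral oil: ~850 kg/m^3, ~1.67 kJ/(kg*K)
        air = volumetric_heat_capacity(1.2, 1005.0)    # room-temperature air: ~1.2 kg/m^3, ~1.005 kJ/(kg*K)

        print(f"oil: {oil:,.0f} J/(m^3*K), air: {air:,.0f} J/(m^3*K)")
        print(f"ratio: {oil / air:,.0f}x")  # roughly 1,200x, consistent with the claim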

    Still. . . e-e-ew.  Mineral oil?

    “For safety, we keep paper towels nearby to wipe up any drips,” explained Alex McManis, an applications engineer with GRC, in an e-mail exchange with us.  “There are various products commonly used in industrial environments to place on the floor for absorbing oil and preventing slipping.”

    Neither Grease Nor Lightning

    All these years later, it’s still the kind of novel proposition you’d expect would generate plenty of anchor-banter at the tail end of local TV newscasts:  You take a 42U server rack, with the servers attached, you tip it sideways, and you submerge it in what cannot avoid looking like a convenience store ice cream freezer.  (The green LEDs and the GRC logo help, but only somewhat.)  In that containment unit, the racks are completely submerged in a non-flammable, dielectric mineral oil bath that GRC calls ElectroSafe.

    There, the oil absorbs the heat from the fully operative servers and is pumped out through a completely passive dry cooling tower.  Because the oil does not need to be cooled down as much as air to be effective, it can be warmer than the ambient air temperature (GRC suggests as high as 100° F / 38° C) and still fulfill its main task of absorbing heat.
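
    That heat-carrying capacity is also why a radically ordinary pump suffices.  As a rough illustration only (the 50 kW rack load and the 10-degree temperature rise below are our assumptions for the sketch, not GRC specifications), the basic sensible-heat relation gives the oil flow needed to carry away a rack’s full load:

        # Illustrative only: estimate the oil flow needed to carry away a rack's heat,
        # using the basic relation Q = density * flow * specific_heat * delta_T.
        # The rack power and 10 K rise are assumptions for this sketch, not GRC specs.

        OIL_DENSITY = 850.0         # kg/m^3, typical mineral oil
        OIL_SPECIFIC_HEAT = 1670.0  # J/(kg*K), typical mineral oil

        def required_flow_l_per_s(heat_load_w, delta_t_k):
            """Oil flow (liters/second) needed to absorb heat_load_w with a delta_t_k rise."""
            flow_m3_per_s = heat_load_w / (OIL_DENSITY * OIL_SPECIFIC_HEAT * delta_t_k)
            return flow_m3_per_s * 1000.0

        print(f"{required_flow_l_per_s(50_000, 10):.1f} L/s for a 50 kW rack at a 10 K rise")
        # ~3.5 L/s, well within reach of an ordinary circulation pump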

    Still, states GRC’s McManis, there’s no measurable degradation in the oil’s heat absorption capacity over time.

    “The oil operates far below a temperature where degradation happens like in an engine,” he told us.  “We’ve tested the oil yearly and see zero changes; as far as we can tell, it lasts forever.  Forever is a long time, so we say a lifetime of 15 years.  The oil is continuously filtered, so the racks can be placed in rooms without air filtration, such as warehouses.”

    The oil has the added benefit, McManis stated, of serving as a rust and corrosion preventative.  According to him, Intel detected no degradation to the working ability of components that it tested.

    In a mid-April company blog post, Barcelona-based Port d’Informació Científica (PIC) claimed that, over the 18 months since its installation, the GRC system had reduced its total energy requirements by 50 percent.  PIC provides computing infrastructure for many of Europe’s scientific programs, including work for Switzerland-based CERN, which operates the Large Hadron Collider.  PIC went on to report no failures in the cooling or server systems, and noted that all of this was achieved without any use of water whatsoever.

    As part of regular annual maintenance, McManis stated, PIC’s filters had to be replaced once, in a process that takes only a few minutes.

    Water, Water Nowhere

    Not all GRC installations are waterless; in cases where the system is installed in existing data centers, the company clearly states, it can use pre-existing water-based heat exchangers.  But recently, that’s changed.

    “Our containerized data centers have evolved via co-designing with the U.S. Air Force,” wrote McManis.  “Besides testing component by component for reliability, we’ve switched to a water-free design using dry coolers.  There’s no intermediate water loop, so part count is reduced and water treatment is no longer needed.  The pumps and heat exchangers are underneath the walkway, freeing up space for more racks and electrical equipment, such as a flywheel.”

    The elimination of water as a factor in data center equipment cooling may not be as obvious a breakthrough for a CIO or for DevOps personnel as for, say, a licensed HVAC technician.  Last July, we reported on a Berkeley National Lab study revealing that, while the generation of 1 kilowatt-hour of energy requires 7.6 liters of water on average, the average US data center consumes a further 1.8 liters of water in the cooling process for every kilowatt-hour it uses.  In other words, a data center uses roughly another quarter as much water to shed the heat from any given unit of electrical power as it took to generate that power in the first place.
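
    The arithmetic behind that “another quarter” figure is straightforward:

        # Working through the Berkeley Lab figures cited above:
        WATER_PER_KWH_GENERATED = 7.6  # liters of water to generate 1 kWh (US average)
        WATER_PER_KWH_COOLED = 1.8     # liters a typical US data center uses to cool 1 kWh

        extra_fraction = WATER_PER_KWH_COOLED / WATER_PER_KWH_GENERATED
        print(f"Cooling adds roughly {extra_fraction:.0%} on top of the water already "
              f"spent generating each kilowatt-hour.")  # ~24%, i.e. about a quarter again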

    A recent Schneider Electric white paper [PDF] demonstrated why the drive to increase server density is so critically important.  The amount of cubic feet per minute required to keep a system cool for every kilowatt consumed (CFM/kW) is reduced steadily as server density increases — to the extent that it may cost half as much to cool an ordinary 6 kW rack as it does a 3 kW rack.
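
    The halving-of-cost claim is Schneider’s; the relation below is simply the standard sensible-heat approximation for air cooling, not a formula taken from their paper.  It shows how widening the achievable supply/return temperature difference, one of the things denser, better-contained racks make easier, cuts the airflow needed per kilowatt:

        # Standard sensible-heat approximation for air cooling (not from the Schneider paper):
        #   heat removed (BTU/hr) ~= 1.08 * CFM * delta_T(F)
        # Rearranged to show the airflow needed per kilowatt of IT load.

        def cfm_per_kw(delta_t_f):
            """Cubic feet per minute of airflow needed per kW, for a given supply/return delta-T."""
            btu_per_hr_per_kw = 3412.0
            return btu_per_hr_per_kw / (1.08 * delta_t_f)

        for dt in (15, 20, 25):  # a wider delta-T is one way well-contained, denser racks save airflow
            print(f"delta-T {dt} F: ~{cfm_per_kw(dt):.0f} CFM per kW")
        # ~211, ~158, and ~126 CFM/kW respectively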

    The question of managing airflow and water consumption has become so critical that Hewlett Packard Enterprise has been experimenting with how it can optimize the distribution of software workloads among its servers.  A 2012 HP Labs project, in conjunction with Cal Tech, showed how climate data and capacity planning forecasts, along with cooling coefficients derived from their chilled water systems, integrated with their servers’ workload management software, could measurably reduce power consumption and the costs associated with it.
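
    To make the idea concrete, here is a toy illustration of that kind of demand shifting.  It is not the HP Labs/Cal Tech system; the hourly prices and cooling-overhead factors are invented for the example:

        # A toy illustration (not the HP Labs / Cal Tech system) of the idea described above:
        # place deferrable work in the hours where electricity price times cooling overhead
        # is lowest. All numbers are made up for the example.

        hours = [
            {"hour": 10, "price_per_kwh": 0.14, "cooling_overhead": 1.5},  # warm morning, chillers working hard
            {"hour": 14, "price_per_kwh": 0.18, "cooling_overhead": 1.6},  # hot, expensive afternoon
            {"hour": 22, "price_per_kwh": 0.09, "cooling_overhead": 1.2},  # cool evening, cheaper power
            {"hour": 3,  "price_per_kwh": 0.07, "cooling_overhead": 1.1},  # coolest, cheapest slot
        ]

        def cost_per_kwh_of_it_load(slot):
            # Every kWh of IT work also costs (overhead - 1) kWh of cooling energy.
            return slot["price_per_kwh"] * slot["cooling_overhead"]

        best = min(hours, key=cost_per_kwh_of_it_load)
        print(f"Schedule deferrable batch work at hour {best['hour']}: "
              f"${cost_per_kwh_of_it_load(best):.3f} per IT kWh")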

    The HPE/Cal Tech team portrayed the goal of their research as seeking a “practical” approach, taking care to put the word in quotation marks.  One goal, the team wrote, is “to provide an integrated workload management system for data centers that takes advantage of the efficiency gains possible by shifting demand in a way that exploits time variations in electricity price, the availability of renewable energy, and the efficiency of cooling.”  Practicality, as this team perceives it, means accepting the existing boundaries, inarguable limitations, and everyday facts of data center architecture, and working within those boundaries.

    Among these inviolable facts of everyday life in the everyday data center is airflow.

    Typically, the refrigeration of air depends upon the refrigeration of water.  The GRC system literally flushes air out of the equation entirely.  In so doing, it can eliminate water as a factor in managing airflow, since there’s no airflow to be managed.  In the GRC company blog post, a representative of PIC’s IT team said it’s running its oil-submerged servers at nearly 50 kW per rack, without incident.

    That’s an astonishing figure.

    Usually, according to Schneider’s report, “As densities per rack increase from 15 kW and beyond, there are design complexities injected into the data center project that often outweigh the potential savings.”  The GRC system may not be a design complexity, though it certainly turns the whole design question on its side.  However, PIC is claiming the savings are measurable and worthwhile.

    The Slickest Solution Out There

    We wondered whether GRC can leverage the oil’s acute heat absorption capability as an indicator of relative server stress.

    “We have enough sensors to measure the heat load going into the racks,” McManis responded.  “Current meters on the [power distribution units] are going to be more precise, but the heat capacity calculation is sufficient to check efficiency of pumps and heat exchangers.  We calculate if they are maintaining their rated capacity without requiring a 100 percent capacity test.  For example, we can remotely see percent efficiency loss of a heat exchanger to monitor scaling from less than perfect water treatment.  Without this monitoring, an inefficiency might not be found until the system couldn’t perform as requested.”
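
    The kind of check McManis describes might look like the sketch below: estimate the heat the oil is actually carrying from flow and temperature readings, then compare it against the heat exchanger’s rated capacity.  The sensor readings, rated value, and oil properties here are placeholders of our own, not GRC’s monitoring code:

        # A sketch of the heat-capacity calculation described above. All figures are
        # placeholders; the oil properties are typical textbook values.

        OIL_DENSITY = 850.0         # kg/m^3 (assumed typical mineral oil)
        OIL_SPECIFIC_HEAT = 1670.0  # J/(kg*K)

        def measured_heat_load_kw(flow_l_per_s, oil_in_c, oil_out_c):
            """Heat picked up by the oil, from flow and inlet/outlet temperatures."""
            flow_m3_per_s = flow_l_per_s / 1000.0
            return OIL_DENSITY * flow_m3_per_s * OIL_SPECIFIC_HEAT * (oil_out_c - oil_in_c) / 1000.0

        def exchanger_health(measured_kw, rated_kw):
            """Fraction of rated capacity the exchanger is delivering under current conditions."""
            return measured_kw / rated_kw

        load = measured_heat_load_kw(flow_l_per_s=3.5, oil_in_c=35.0, oil_out_c=45.0)
        print(f"measured load: {load:.1f} kW, health: {exchanger_health(load, rated_kw=55.0):.0%}")
        # A downward drift in "health" over time would flag fouling or scaling before
        # the system fails to keep up.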

    From a cost standpoint, the sacrifices a data center operator makes in practicality when implementing an oil immersion system such as GRC’s may seem within the margin of tolerability.  Our videos of GRC’s CarnotJet system from 2013 made it look like system operators could get away with wearing tight gloves, perhaps hairnets, and keeping those paper towel rolls handy.

    Let’s face it:  It can’t be easy to get a grip on an oily server.  Since those videos were produced, McManis told us, there have indeed been refinements to this process.

    “The easiest way to remove a server is using an overhead lift with a specially made lifting hook that attaches to the server ears,” he wrote.  “The server is then laid down on service rails which drain the server back into the tank while it’s being serviced.  The server can be dripping while parts are being replaced.”

    Over the past few years, he said, GRC has made “ergonomic improvements, such as lowering the racks, auto-draining service platforms, and using an integrated overhead hoist for lifting the servers.”

    When parts are being replaced and sent back to their manufacturers, is there a way to ship them out without the recipients ending up with saggy boxes?  “Drip dry is clean enough for RMA,” McManis responded.  “An aerosolized electronics cleaner is the fastest for small items.  Using an electronics cleaning solution in an ultrasonic cleaner will restore to clean as new.”

    It may not be the most aesthetically pleasing solution to the data center cooling problem ever devised.  But GRC’s oil immersion method is far from the most nose-wrinkling system put forth to the public: in 2011, an HP Sustainable Data Center engineer suggested data centers be built next to dairy farms, where herds of 10,000 or more cows could produce what’s called biogas.

    So GRC can happily declare, “We’re not biogas.”  Yet with electricity costs worldwide continuing to rise and the scarcity of water becoming a reality for everyone, we may soon be in the position where we replace our water consumption models with projections for rolls of paper towels.

    3:00p
    Uptime: Cloud Gives Many Enterprise Data Centers New Lease on Life

    The volume of corporate software workloads being deployed in the cloud is quickly growing, but that does not mean the on-premise enterprise data center footprint is shrinking at a similar rate. While enterprises are not investing in new data centers to expand capacity – cloud and colocation providers satisfy that need – many are spending money to upgrade their existing facilities, extending their useful life for many years to come.

    That’s according to the latest survey of enterprise data center operators by The 451 Group’s Uptime Institute. Uptime surveys senior executives, IT, and facilities managers who operate data centers for traditional enterprise companies, such as banks, retailers, manufacturers, etc.

    Nearly half of respondents on the facilities side of the house said they were doing infrastructure upgrades, refreshing power and cooling infrastructure in their on-premise data centers as part of their capacity planning activities. At the same time, many are planning to offset demand with cloud services and server consolidation, among other measures.

    The percentage of respondents who said they were planning to build new data centers was also notably high: 30 percent.

    Here’s how the responses break down:

    Source: Uptime Institute’s 2017 Data Center Industry Survey

    Matt Stansberry, Uptime’s senior director of content and publications, said these results indicate that while companies are using third-party services to absorb additional demand, they see value in holding on to the on-prem data center investments they made in the early 2000s. “Increased cloud adoption is giving the enterprise data center a little room to breathe,” he said in an interview with Data Center Knowledge.

    While the days when a typical big-company exec would agree to a $50 million project to build a new data center to support growth are gone (“We don’t see the new builds that you might have seen a few years ago.”), that exec appears less reluctant to invest a smaller sum in improving existing facilities to get more out of them, Stansberry explained.

    Cloud, colocation services, and processor improvements (which enable companies to do more with fewer servers) take the pressure off data center teams to expand capacity by building new sites, freeing up time and budget to upgrade and “retrench” in their facilities, he said.

    See also: Cloud Giants Disagree on the Future of Corporate Data Centers

    As the above percentages show, this trend does not apply across the board. There have been many examples of enterprises moving every workload they can to the cloud.

    The most recent one is The New York Times, which is working to move all workloads out of three colocation data centers into Google’s and Amazon’s clouds, reluctantly retaining an on-premise data center at its headquarters just to support several legacy systems that are impossible to migrate to the cloud.

    Another example is Juniper Networks, which recently went from 18 company-operated data centers supporting corporate backend workloads to one. Like The Times, the network technology vendor moved everything it could to the cloud, leaving only a single colocation site hosting legacy applications it could not move. (Juniper also retained on-prem data centers used by its engineers for testing and simulation, but that’s an entirely different purpose.)

    Responses to Uptime’s survey indicate that 60 percent of enterprise IT server footprints are flat or shrinking due to better processor performance, growing rates of server virtualization, and rapid cloud adoption:

    Source: Uptime Institute’s 2017 Data Center Industry Survey

    4:00p
    How Do You Define Cloud Computing?

    Steve Lack is Vice President of Cloud Solutions for Astadia.

    New technology that experiences high growth rates will inevitably attract hyperbole. Cloud computing is no exception, and almost everyone has his or her own definition of cloud from “it’s on the internet” to a full-blown technical explanation of the myriad compute options available from a given cloud service provider.

    Cloud Adoption Success Factor: Understand the Cloud Essentials

    Knowing what is and what is not a cloud service can be confusing. Fortunately, the National Institute of Standards and Technology (NIST) has provided us with a cloud computing definition that identifies “five essential characteristics.”

    On-demand self-service. A consumer [of cloud services] can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically without requiring human interaction with each service provider.

    Read: Get what you want, when you want it, with little fuss.

    Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops and workstations).

    Read: Anyone, anywhere can access anything you build for them.

    Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

    Read: Economies of scale of galactic proportions.

    Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.

    Read: Get what you want, when you want it … then give it back.

    Measured service. Cloud systems automatically control and optimize resource usage by providing a metering capability as appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled and reported, providing transparency for both the provider and consumer of the utilized service.

    Read: Get what you want, when you want it, then give it back … and only pay for what you use.

    Each of these five characteristics must be present, or it is just not a cloud service, regardless of what a vendor may claim. Now that public cloud services exist that fully meet this cloud computing definition, you — the consumer of cloud services — can log onto one of the cloud service providers’ dashboards and order up X units of compute capacity, Y units of storage capacity and toss in other services and capabilities as needed. Your IT team is not provisioning any of the hardware, building images, etc., and this all happens within minutes vs. the weeks it would normally take in a conventional on-premise scenario.
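
    For a concrete taste of that self-service experience, here is a minimal sketch using AWS’s boto3 SDK as one example of programmatic provisioning; the machine image ID is a placeholder, and credentials and region are assumed to be configured already:

        # One concrete example of "on-demand self-service": provisioning compute
        # programmatically through a provider SDK (here AWS's boto3; the AMI ID is a
        # placeholder, and credentials/region are assumed to be configured already).
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder machine image
            InstanceType="t2.micro",
            MinCount=1,
            MaxCount=1,
        )
        print("Launched:", response["Instances"][0]["InstanceId"])
        # No hardware provisioning or image building by your own IT team: the capacity
        # is self-service, metered, elastic, and available within minutes.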

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    6:34p
    Done Deal: Equinix Closes $3.6B Verizon Data Center Acquisition

    Equinix announced the closing of its $3.6 billion acquisition of a Verizon data center portfolio, a deal the companies agreed to in December.

    The acquisition expands the data center portfolio of the world’s largest retail colocation provider by 29 facilities in 15 metro areas in North America and South America, including the portfolio’s crown jewels in Miami and Culpeper, Virginia.

    This was the largest in a series of recent deals where large telcos offloaded data center assets to reduce operational costs and raise capital. CenturyLink, Tata Communications, AT&T, and Windstream have all recently sold off large data center portfolios to service providers and private equity investors.

    Verizon was reportedly hoping to sell more than 50 data centers, including facilities in Europe and Asia, but Equinix ended up cherry-picking the sites that best fit its goals. Most of the sites it bought are in the US, with the exception of one location in Bogotá and one in São Paulo.

    The two sites in Miami and Culpeper are responsible for more than half of the $450 million in revenue the entire portfolio generates annually, according to Equinix.

    In-depth: Why Equinix Bought Verizon Data Centers for $3.6B

    The NAP of the Americas carrier hotel in Miami is the crown jewel in the acquired portfolio. The building is a key interconnection hub between networks in the US and South America, making Equinix a gatekeeper for companies wanting to do digital business between the two regions.

    The other stand-out site is the four-building NAP of the Capital Region campus in Culpeper, serving lots of big enterprise and government customers.

    About 250 Verizon employees (mostly operations staff) will be joining Equinix as part of the integration process, Equinix CEO Stephen Smith said on the company’s first-quarter earnings call last week. Another early integration step will be to interconnect Verizon and Equinix data centers in the markets where their footprints overlap.

    Verizon will continue providing its enterprise services out of Equinix data centers, acting as the colocation company’s customer and reseller of its colocation services, which it plans to bundle with its own.

    Equinix reported $950 million in revenue for the first quarter, up from $845 million in the same period last year. Its net income for the quarter was $42 million, up from a $37 million loss reported in the first quarter of 2016 but down from $61.7 million in income reported for the fourth quarter of last year.

    Here’s a look at other major telco data center divestments in recent years:

    CenturyLink Sells Data Center Portfolio to Private Equity Investors for $2.3B

    Tata Communications Sells Data Center Portfolio to ST Telemedia for $633M

    Windstream Sells Data Center Business to TierPoint for $575M

    IBM Takes Over AT&T’s Managed Hosting Business

