Data Center Knowledge | News and analysis for the data center industry
 

Friday, March 24th, 2017

    3:00p
    Rittal Rolls Out Edge Data Center, Partnership with HPE

    IT infrastructure giant Rittal just took two big strides into the market for systems that bring data center capacity closer to where data is being generated by sensors and other devices, collectively referred to as the Internet of Things (IoT).

    First, the company revealed its Rittal Edge Data Center at CeBIT 2017 on Tuesday, composed of modular cabinets designed to operate remotely, close to end users, and with low network latency—all requirements for companies struggling to support mobile services and IoT applications.

    Consisting of two TS IT racks, the Rittal Edge Data Center comes pre-configured so it can be shipped and installed at edge locations where a small amount of equipment is required to be close to users or devices. The modules are equipped with climate control, power distribution, UPS, fire suppression, monitoring and secure access, and can be extended two racks at a time. Installation can be done in an IT security room, or in a container, basically wherever it is required.

    The enormous amount of data generated by the IoT must be processed close to its place of origin, with real-time capabilities; these modules create that additional IT capacity at the edges of the corporate network. Rittal said providers will increasingly use modular, pre-configured IT solutions this year that they can quickly and easily set up and operate at their locations. At the same time, these systems must also support the future growth of the company, so they must be scalable and based on open technologies.

    Customers who would prefer not to operate the edge data center themselves can opt for Rittal’s managed services. This is part of Rittal’s data-center-as-a-service (DCaaS) offering.

    According to market analysts IDC, by 2019, around 43 percent of data generated through the IoT will be processed by edge computing systems.

    The second announcement made by Rittal—a partnership with Hewlett Packard Enterprise—positions both companies to capture a chunk of the growing market.

    Rittal’s expanded focus, from micro data centers to highly scalable container solutions, will be complemented by HPE’s Pointnext services to provide customers with what Rittal referred to as a “one-stop” shop. The partnership will provide complete IT solutions, from equipment to worldwide technical support and consultation from 25,000 experts on cloud computing, hybrid IT, Big Data and Analytics, Intelligent Edge, and the IoT.

    “Hewlett Packard Enterprise is a valuable partner for Rittal, bringing us a broader reach to help customers quickly with high-end complete data center solutions according to their needs,” said Andreas Keiger, Executive Vice President of Sales at Rittal, in a press release.

    Commenting on this partnership, Brian Whelan, WW Director of Data Center Facilities at HPE, said: “Rittal’s modular ‘lego-style’ system for IT infrastructures fits perfectly with our own offerings and capabilities, helping to provide customers a simplified and seamless experience.”

    Rittal is one of nearly 150 companies exhibiting at Data Center World – Global – 2017 on April 5 and 6 in Los Angeles. 

    3:30p
    Mashup: Monitoring Data Center Power and Cooling Simultaneously

    Coy Stine is Vice President, Data Center Division for Fairbanks Energy Services.

    Data center facilities of all types must monitor power and cooling data to be responsive when things go wrong, but they don’t often analyze combined data to obtain operational efficiencies. While a data center’s function is to protect the servers and information it stores, its success is increasingly dependent on the kinds of operational data it produces and how this data is used to control and respond to issues. Having partial cooling or power metrics will not complete the “big picture” of facility health necessary to empower managers and operators to avoid downtime, maintain optimal conditions, and automatically adjust operation to increase efficiency.

    In my experience, monitoring both power and cooling data together offers much more value than monitoring either one alone. Working in the field of data center efficiency, my team is consistently surprised to see that far too many data centers still have four to five different monitoring or automation systems running concurrently, each independently handling a different critical system (generators, HVAC, power distribution, server monitoring, temperature monitoring, etc.). Unsurprisingly, because of this uncoordinated setup, data center operators tend to ignore the relationship between power and cooling information from these systems. They in turn also lose the opportunity to understand how their many integrated systems, DCIM tools, and pieces of equipment are running in relation to each other.
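    To make the point concrete, the following is a minimal, hypothetical sketch (all system names, metrics, and values are invented for illustration, not taken from any real facility) of how readings from separate power and cooling monitoring systems might be merged into one timeline so their relationship can actually be analyzed:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Reading:
        timestamp: int   # epoch seconds
        system: str      # e.g. "power" or "cooling"
        metric: str      # e.g. "kw" or "supply_temp_c"
        value: float

    def merge_by_minute(readings):
        """Bucket readings from independent systems into shared
        one-minute windows so power and cooling data can be compared
        side by side instead of living in separate silos."""
        buckets = {}
        for r in readings:
            minute = r.timestamp // 60
            buckets.setdefault(minute, {})[f"{r.system}.{r.metric}"] = r.value
        return dict(sorted(buckets.items()))

    # Hypothetical samples from two separate monitoring systems
    samples = [
        Reading(1000, "power", "kw", 480.0),
        Reading(1010, "cooling", "supply_temp_c", 18.5),
        Reading(1070, "power", "kw", 495.0),
    ]
    merged = merge_by_minute(samples)
    # The readings at t=1000 and t=1010 land in the same minute bucket,
    # so power draw and supply temperature can be correlated directly.
    ```

    A real deployment would pull from BMS, PDU, and DCIM interfaces rather than in-memory samples, but the principle is the same: one shared time base across systems.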

    A preference for the reliability and familiarity of individual critical systems over a holistic overhaul is one common reason facilities don’t adopt single-system solutions for tracking their energy data. The specific reasons are usually:

    1. Data center owners don’t necessarily know what they need to monitor to get to a new, higher level of integrated information and data analysis that can create effective efficiency changes.
    2. Those facility owners who do know what kind of information and data would be useful are often concerned that installation of the actual monitoring devices will create unacceptable risks for downtime by either requiring an extensive maintenance window to power down critical equipment or by inadvertently causing an unplanned shutdown during project installation.

    Since this level of data mining and analysis goes beyond common industry practice, a review of why the data matters and a detailed explanation of how the process can work should help educate data center stakeholders. It will also illustrate how this kind of control system can dramatically improve facility efficiency and lower operational costs.

    Case Study

    Recently, my firm completed a project for a nationwide provider of retail multi-tenant colocation services, whose goal is to offer cost-effective but reliable space, power, and cooling for tenants. Bottom-line operational savings are achieved through lower power and cooling costs. We were brought in to formulate and implement efficiency measures to lower energy costs, increase space profitability, and reclaim capacity on HVAC and power systems. To make these efforts successful, we established methods for monitoring both power and cooling data together within a single system.

    During our initial evaluation of the 100,000-square-foot facility, which had both slab and raised-floor configurations, all CRAC units were running, and server inlet temperatures were colder than necessary in some areas but much warmer than ideal in others. This strategy of precautionary cooling uses too much electricity and rarely distributes air effectively.

    At our recommendation, the company first installed monitoring technology on all cooling and electrical equipment. Our team then implemented low-cost airflow best practices, including blanking panels, sealing gaps in the racks and floor, and adding doors at the ends of aisles to better separate the hot and cold airstreams. Valves were installed in key CRAC units to stop flow when units were switched off, reducing water flow through the heat rejection system, cutting utilization of pump and cooling tower equipment, and further increasing energy savings.

    We also installed sensors to ensure that enough cool air was feeding the tenants’ servers, an inexpensive measure to ensure that cold air was effectively utilized. Airflow management, with its resulting information, is an important dataset that many facilities neglect.

    Once everything was installed, logic in the new monitoring system made smart decisions about which CRAC units to turn off based on server load. Temperatures became optimal throughout all areas. Additionally, many sections of the data center actually got colder even as CRAC units were shut off, because the ones left running were much more efficient. There was less strain on heat rejection equipment, and backroom costs were lower. This is the type of result we look for. In the end, through this project alone, monitoring both the electrical and cooling systems, along with other measures, allowed this facility to drop its PUE from 2.1 to 1.5, even as the site continued to add tenants and new space.
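    For context, PUE (Power Usage Effectiveness) is total facility power divided by the power delivered to IT equipment, so a drop from 2.1 to 1.5 is substantial. A rough, illustrative calculation of what that means in energy terms (the 1 MW IT load below is an assumed figure for the sake of arithmetic, not a number from this project):

    ```python
    def pue(total_facility_kw, it_kw):
        """Power Usage Effectiveness: total facility power / IT power."""
        return total_facility_kw / it_kw

    it_load_kw = 1000.0        # assumed 1 MW IT load, for illustration only
    before = it_load_kw * 2.1  # total facility draw at PUE 2.1 -> 2,100 kW
    after = it_load_kw * 1.5   # total facility draw at PUE 1.5 -> 1,500 kW

    saved_kw = before - after            # 600 kW of overhead eliminated
    annual_mwh = saved_kw * 8760 / 1000  # ~5,256 MWh saved per year
    ```

    At a typical commercial electricity rate, overhead savings on that scale run well into six figures annually, which is why combined monitoring pays for itself.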

    Too Much of a Good Thing

    As the industry continues to move forward with IoT, data centers increasingly house equipment that can communicate operating and utilization data. Although “smart” equipment can gather information about itself, it doesn’t normally communicate with other equipment and rarely takes automatic steps to regulate usage based on other system information. As a result, operators have to sift through massive amounts of data from numerous pieces of equipment and try to find the trends needed to make improvements. The key takeaway here is to implement a centralized system that can make sense of all the information and, ideally, offer recommendations or even take steps to make improvements.
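    A centralized system of the kind described might apply simple threshold rules across the combined inputs. Below is a toy sketch, with invented thresholds and capacities (not from any real control system), of the kind of decision logic that could determine whether a CRAC unit can safely be staged off:

    ```python
    def crac_can_stage_off(inlet_temps_c, it_load_kw, running_cracs,
                           max_inlet_c=24.0, kw_per_crac=250.0):
        """Decide whether one CRAC unit can be safely shut off:
        every server inlet temperature must be within limits, and the
        remaining units must still cover the IT heat load with margin.
        All thresholds and per-unit capacities here are illustrative."""
        if running_cracs <= 1:
            return False                    # always keep at least one unit
        if max(inlet_temps_c) > max_inlet_c:
            return False                    # already too warm somewhere
        remaining_capacity_kw = (running_cracs - 1) * kw_per_crac
        return remaining_capacity_kw >= it_load_kw * 1.2  # keep 20% headroom

    # Example: 4 units running with a 500 kW load -> the remaining 3 units
    # (750 kW capacity) cover 600 kW of load-plus-headroom, so one can stop.
    ```

    A production system would of course layer on hysteresis, staged restarts, and alarm integration, but the core idea is the same: decisions drawn from power and cooling data together, not either one alone.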

    Even with these advances, there still remains a great deal of “dumb” equipment that has yet to evolve. At the facility in the above case study, my team created a program of sensors and monitoring equipment for this kind of equipment, which was a vital piece in the puzzle as we lowered operational costs. Systems like these may take time and effort to design and install, but the savings are well worth waiting for.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

