Posted by Data Center Knowledge | News and analysis for the data center industry ([info]syn_dcknowledge)
@ 2015-06-22 12:00:00


DCIM Implementation – the Challenges

This is Part 4 of our five-part series on the countless decisions an organization needs to make as it embarks on the DCIM purchase, implementation, and operation journey. The series is produced for the Data Center Knowledge DCIM InfoCenter.

In Part 1 we gave an overview of the promises, the challenges, and the politics of DCIM. Read Part 1 here.

In Part 2 we described the key considerations an organization should keep in mind before starting the process of selecting a DCIM solution. Read Part 2 here.

In Part 3 we weighed DCIM benefits versus its costs, direct, indirect, and hidden. Read Part 3 here.

The first three parts of this series examined vendor promises, purchasing guidelines, and potential benefits of DCIM. However, while it all may look good on the whiteboard, actual implementation may not be quite as simple as the vendor’s sales teams would suggest. Existing facilities, and especially older ones, tend to have lower energy efficiency and also far less energy monitoring. In this part we will review some of the challenges of retrofitting an operating data center, as well as some of the considerations for incorporating a DCIM system into a new design.

Facility Systems Instrumentation

Virtually all data centers have Building Management Systems (BMS) to supervise the operation of primary facility components. These generally monitor the status of the electrical power chain and its subsystems, including utility feeds, switchboards, automatic transfer switches, generators, UPS, and downstream power distribution panels. They are also connected to cooling system components. However, in many cases, the BMS is not very granular in the amount and type of data it collects. In some cases, the information is limited to very basic device status (on-off) and alarm conditions.

Therefore, these sites are prime candidates for reaping the potential benefits of DCIM. In order for DCIM systems to gather and analyze energy usage information, they require remotely readable energy metering. Unfortunately, some data centers have no real-time utility energy metering at all and can only base their total energy usage on the monthly utility bill. While this has been the de facto practice for some sites in the past, it does not provide enough discrete data (or sometimes any data) about where the energy is used or about facility efficiency. More recently, DCIM (and some BMS) systems have been designed to measure and track far more granular information from all of these systems. However, the typical bottleneck is the lack of energy meters in power panels in these older facilities, or the lack of internal temperature and other sensors (that can be remotely polled) within older cooling equipment such as CRAC/CRAH units or chillers.
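
In practice, "remotely readable" usually means a meter that speaks an open protocol such as Modbus or SNMP. As a rough illustration only, the minimal sketch below polls a cumulative kWh register from a hypothetical Modbus/TCP branch-circuit meter using the pymodbus 3.x client; the IP address, unit ID, register addresses, and word order are placeholders that would come from the specific meter's register map.

    import struct
    from pymodbus.client import ModbusTcpClient

    # Hypothetical Modbus/TCP energy meter; address and registers are illustrative.
    client = ModbusTcpClient("10.0.0.50", port=502)
    client.connect()

    # Read two 16-bit holding registers assumed to hold a big-endian 32-bit float (kWh).
    rr = client.read_holding_registers(address=100, count=2, slave=1)
    if not rr.isError():
        kwh = struct.unpack(">f", struct.pack(">HH", *rr.registers))[0]
        print(f"Cumulative energy: {kwh:.1f} kWh")

    client.close()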

Retrofitting energy metering and environmental sensors is one of the major impediments to DCIM adoption. This is especially true in sites with lower levels of redundancy in their power and cooling systems. It requires the installation of current transformers (CTs) to measure current and potential transformers (PTs) to measure voltage. Although there are "snap-on" type CTs that do not require disconnecting a conductor to install them, OSHA has more recently restricted so-called "hot work" on energized panels, which may require shutting down some systems to perform the electrical work safely. And of course, in the mission-critical data center world, "shutdown" is simply not in the vernacular. So, in addition to getting funding, internal support, and resources for a DCIM project, this type of potentially disruptive retrofit work requires management approval and cooperation between the facility and IT domains, an inherent bottleneck in many organizations.

Basic Facility Monitoring: Start With PUE

At its most elementary level, a DCIM system should display real-time data and historic trends, and provide annualized reporting, of Power Usage Effectiveness (PUE). This involves installing energy metering hardware at the point of utility handoff to the facility and, at a minimum, also collecting IT energy usage (typically at the UPS output). However, for maximum benefit, other facility equipment (chillers, CRAH/CRACs, pumps, cooling towers, etc.) should have energy metering and environmental monitoring sensors installed. This allows DCIM to provide in-depth analysis and optimization of cooling infrastructure performance, as well as early failure warnings and predictive maintenance functions.
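
The PUE arithmetic itself is trivial once those two meter points exist. A minimal sketch, assuming interval energy readings (kWh) taken over the same period from the utility-handoff meter and the UPS-output meter; the numbers below are purely illustrative:

    def pue(total_facility_kwh: float, it_kwh: float) -> float:
        """Power Usage Effectiveness = total facility energy / IT equipment energy."""
        if it_kwh <= 0:
            raise ValueError("IT energy must be positive")
        return total_facility_kwh / it_kwh

    # Illustrative readings over the same interval:
    # 1,320 kWh at the utility handoff and 880 kWh at the UPS output.
    print(round(pue(1320.0, 880.0), 2))   # 1.5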

Whitespace: IT Rack-Level Power Monitoring

While metering total IT energy at the UPS output is the simplest and most common method to derive PUE readings, it does not provide any insight into how IT energy is used. This is a key function necessary to fulfill the promised holistic view of the overall data center, not just the facility. However, compared to the facility equipment, the number of racks (and IT devices), and therefore the number of required sensors, is far greater. The two areas that have been given the most attention at the rack level are power/energy metering and environmental sensors. The two most common places to measure rack-level power/energy are at the floor-level PDU (with branch circuit monitoring) or at metered PDUs within the rack (intelligent power strips, some of which can even meter per outlet to track the energy used by each IT device).
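
To make the distinction concrete, the sketch below rolls hypothetical per-outlet readings from intelligent power strips up to per-rack totals; the rack names, outlet numbers, and wattages are invented for illustration:

    from collections import defaultdict

    # Hypothetical per-outlet power readings (watts), keyed by (rack, outlet).
    outlet_watts = {
        ("R101", 1): 310.0, ("R101", 2): 295.0,   # two PSUs of one dual-corded server
        ("R101", 7): 120.0,                       # top-of-rack switch
        ("R102", 1): 410.0, ("R102", 2): 405.0,
    }

    rack_watts = defaultdict(float)
    for (rack, _outlet), watts in outlet_watts.items():
        rack_watts[rack] += watts

    for rack, watts in sorted(rack_watts.items()):
        print(f"{rack}: {watts:.0f} W")   # R101: 725 W, R102: 815 W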

From a retrofit perspective, if the floor-level PDU is not already equipped with branch-circuit current monitoring, adding CTs to each individual cable feeding the racks is subject to the same "hot-work" restrictions as any other electrical work, another impediment to implementation. However, another method to measure rack-level IT equipment power, which has been used for many years, is the installation of metered rack power distribution units (rack power strips). This normally avoids any hot work, since the rack PDUs plug into existing receptacles. While installing a rack PDU does require briefly disconnecting the IT equipment to replace a non-metered power strip, it can potentially be far less disruptive than the shutdown of a floor-level PDU, since it can be done one rack at a time (and if the IT hardware is equipped with dual power supplies, it may not require shutting down the IT equipment). While this is also true for A-B redundant floor-level PDUs, some people are more hesitant to do so, in case some servers do not have the dual-feed A-B power supply cords correctly plugged in to the matching A-B PDUs.
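
One way to reduce that hesitation is to check the cabling inventory before taking either side down. The sketch below is a hypothetical example: given a mapping of servers to the PDUs feeding their A and B cords, it lists servers that would lose all power if a given PDU were shut down. The inventory structure and names are assumptions for illustration, not part of any particular DCIM product.

    # Hypothetical cabling inventory: server -> (PDU feeding cord A, PDU feeding cord B).
    server_feeds = {
        "web01": ("PDU-A1", "PDU-B1"),
        "web02": ("PDU-A1", "PDU-A1"),   # miscabled: both cords land on the A-side PDU
        "db01":  ("PDU-A1", "PDU-B1"),
    }

    def at_risk_if_down(pdu: str) -> list[str]:
        """Servers that would lose all power if this PDU were taken offline."""
        return [server for server, feeds in server_feeds.items() if set(feeds) == {pdu}]

    print(at_risk_if_down("PDU-A1"))   # ['web02']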

The rack-level PDU also commonly communicates via SNMP over TCP/IP, so it can connect through the existing cabling and network. However, while this avoids the need to install specialized cabling to each rack, it is not without cost. Network cabling positions are an IT resource, as are network ports on an expensive production switch. The most cost-effective option may be to add a low-cost 48-port switch for each row to create a dedicated monitoring network, which can also be isolated for additional security.
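
For illustration, polling a metered rack PDU over SNMP might look like the minimal sketch below, which assumes the classic synchronous pysnmp high-level API and SNMPv2c; the host address, community string, and OID are placeholders that would come from the PDU vendor's MIB.

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    PDU_HOST = "10.0.20.15"                  # hypothetical rack PDU management address
    LOAD_OID = "1.3.6.1.4.1.99999.1.1.0"     # placeholder OID; use the vendor's MIB

    error_indication, error_status, error_index, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),      # SNMPv2c
            UdpTransportTarget((PDU_HOST, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(LOAD_OID)),
        )
    )

    if error_indication or error_status:
        print("Poll failed:", error_indication or error_status.prettyPrint())
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), "=", value.prettyPrint())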


