Data Center Optimization: How to Do More Without More Money

Data centers are pushing the boundaries of the possible, using new paradigms to operate efficiently in an environment that continually demands more power, more storage, more compute capacity… more everything. Operating efficiently and effectively in the land of "more" without more money requires increased data center optimization at all levels, including hardware and software, and even policies and procedures.

The Existing Environment

Although cloud computing, virtualization and hosted data centers are popular, most organizations still keep at least part of their compute capacity in-house. According to a 451 Research survey of 1,200 IT professionals, 83 percent of North American enterprises maintain their own data centers. Only 17 percent have moved all IT operations to the cloud, and 49 percent use a hybrid model that integrates cloud or colocation hosts into their data center operations. The same study says most data center budgets have remained stable, although the heavily regulated healthcare and finance sectors are increasing funding throughout data center operations. Among enterprises with growing budgets, most are investing in upgrades or retrofits to enable data center optimization and to support increased density.

Server density has also climbed steeply. Since the mid-1990s, when IBM AS/400 minicomputers were popular and many of today's data centers were designed, server density has increased 84-fold. Power needs have risen from about 100 watts per square foot for many legacy computers to about 600 watts for cutting-edge blade servers. As server density increases and the data center footprint shrinks, any gains may be taken up by the additional air handling and power equipment, including uninterruptible power supplies and generators. In fact, data center energy usage is expected to increase by 81 percent by 2020, according to CIO magazine.

Contracts and Procedures

To operate in such an environment, NaviSite sets explicit targets. "We set our goals annually, targeting a five percent annual savings. That forces us to be creative, because you can only squeeze so much from the turnip," notes Ron Pepin, global director of data center operations for Time Warner Cable's business class service, NaviSite.

Savings may come from a variety of sources. For example, the Natural Resources Defense Council recommends that data centers "review their internal organizational structure and external contractual arrangements and ensure that incentives are aligned to provide financial rewards for efficiency best practices."

John Miecielica, product management principal for data center optimization specialist TeamQuest, advises managers to look at risk and efficiency when evaluating contractual relationships. "External agreements are about risks, such as ensuring you have the capacity to meet service level agreements. Review them periodically to ensure they remain efficient."

"For example, when Lady Gaga promoted her single on Amazon in 2011, it crashed the servers. She had to halt the promotion until Amazon added capacity. As another example, when Healthcare.gov went live in 2013, the system crashed and was down for six months," Miecielica recalls.

Right-Sizing

Identifying and decommissioning unused servers during a data center optimization project is often a challenge, along with right-sizing provisioning. Virtualization makes it easy to spin up resources as needed, but it also makes tracking those resources harder. The result is that unused servers may keep running because no one is certain they are not being used. A study by the Natural Resources Defense Council and Anthesis reports that up to 30 percent of servers are unused but still running. Likewise, a system may be provisioned with four CPUs but really only use two. Such situations tie up compute capacity that may be needed by other machines, Miecielica explains. "Right-size your environment. Whether it's physical or virtual is irrelevant," Miecielica says. "Evaluate the risk of running out of capacity, provisions to meet that risk and resources that may be repurposed to avoid that risk."
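To make that kind of review concrete, the following Python sketch shows one way it could be automated. It assumes a hypothetical CSV export of utilization metrics (host name, provisioned vCPUs, average and peak CPU utilization) from whatever monitoring tool is in place; the file name, column names and thresholds are illustrative assumptions, not a prescription from TeamQuest or NaviSite.

```python
import csv

# Illustrative thresholds; tune them to your own risk tolerance.
IDLE_PEAK_CPU = 5.0         # percent - below this, the server may be unused
OVERPROVISIONED_AVG = 25.0  # percent - below this, consider shrinking vCPUs

def review_capacity(metrics_file):
    """Flag candidate servers for decommissioning or down-sizing.

    Expects a CSV with columns: host, vcpus, avg_cpu_pct, peak_cpu_pct.
    """
    idle, oversized = [], []
    with open(metrics_file, newline="") as f:
        for row in csv.DictReader(f):
            peak = float(row["peak_cpu_pct"])
            avg = float(row["avg_cpu_pct"])
            if peak < IDLE_PEAK_CPU:
                idle.append(row["host"])       # possibly an unused server left running
            elif avg < OVERPROVISIONED_AVG and int(row["vcpus"]) > 2:
                oversized.append(row["host"])  # provisioned with more CPUs than it uses
    return idle, oversized

if __name__ == "__main__":
    idle, oversized = review_capacity("utilization.csv")
    print("Decommission candidates:", idle)
    print("Right-sizing candidates:", oversized)
```

In practice the candidate lists would feed a human review rather than an automated shutdown, since a quiet server may still host a critical but infrequent workload.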
Along with right-sizing hardware, Miecielica advises scrutinizing applications to ensure they are written efficiently. One company, for example, habitually upgraded its hardware but found it could delay those upgrades by optimizing its applications.

A similar principle extends to storage. Although data deduplication (removing duplicate copies of files) is widely available, overcrowded storage remains an issue for small and medium-sized enterprises (SMEs); Miecielica says it is one of their top two issues, along with security. Deduplication can free much-needed storage space.
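As a rough illustration of how duplicates are found, the sketch below hashes file contents and groups identical hashes. It is a minimal, single-machine example with a hypothetical directory path; production deduplication systems typically work at the block or chunk level inside the storage array, so treat this only as a demonstration of the idea.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root):
    """Group files under `root` by the SHA-256 hash of their contents."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            # Reading whole files is fine for a sketch; stream in chunks for large files.
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    # Keep only hashes that map to more than one file.
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, paths in find_duplicates("/srv/shared").items():
        reclaimable = sum(p.stat().st_size for p in paths[1:])
        print(f"{len(paths)} copies, ~{reclaimable / 1e6:.1f} MB reclaimable: {paths[0].name}")
```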
Facilities

"Optimizing operations requires a constant balancing of the environment," Pepin says. NaviSite looks at LED lighting, air flow, placement dynamics and individual components to optimize both individual and overall efficiency. Balancing, he continues, involves not just heating and cooling but also the air pressure underneath the floor. "As data centers grow, managers don't consider the open floor tiles or how many are required to produce the best pressure. We conduct monthly efficiency assessments for power and cooling, and determine whether our facilities are still within the guidelines of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)."

In optimizing heating, ventilation and air conditioning (HVAC), equipment placement matters. "The air flow is colder where it leaves the HVAC unit, so we place our hotter equipment nearest the source of the cooling," Pepin says.

NaviSite also deployed adiabatic cooling and air-side economizers in its last two major buildouts. "This eliminates the need for compressors," Pepin says. "In our Santa Clara data center, we only use full mechanical cooling 210 hours (about nine days) per year. We use free cooling, filtered for dust, 83 percent of the time. We also eliminated air conditioning in the electrical and utility rooms for further savings."

"Deploying free cooling or air-side economizers isn't a major undertaking if you have an underfloor pressurized environment," Pepin says. It also opens up more floor space, because the HVAC equipment can be placed outside the building. After deployment, "Maintenance costs dropped substantially," he adds. Estimated savings of $23 million over a 15-year period helped NaviSite justify building a new data center.

Optimizing air flow is vital for free cooling strategies to work, but hot aisle/cold aisle containment is not yet ubiquitous, Pepin says. "It's catching on, though. In July, California became the first state to mandate hot aisle/cold aisle containment for new builds and retrofits."

Rather than build strict hot and cold aisles, NaviSite built upon what it had. As a data center host, it altered its strategy from caging servers inside hurricane fences to securing them within thin-wall cages that let it contain hot and cold air in smaller segments. "This solid cage suite gives me a better ability to manage the environment for each customer, based on density," Pepin says.

NaviSite also uses LED lighting throughout its data facilities. LEDs use less power than fluorescent fixtures and have come down in price considerably since they were introduced. "We're retrofitting our data centers now with LEDs and are installing sensors so lights are only on in areas where people are working. That was the only change that occurred in our Chicago data center last year, and it netted a five percent savings in electrical costs." That retrofit saved the 10,000-square-foot data center $30,000.

Monitor Everything

The other major undertaking managers should consider when pursuing data center optimization is to institute robust monitoring for infrastructure and cloud computing. Data center infrastructure management (DCIM) systems, for example, enable management decisions to be made based on actual usage rather than on manufacturers' specifications. NaviSite linked its DCIM to branch circuit monitoring, so managers can see the actual power draw for each rack and device. "That lets us see what's actually utilized by each customer (rather than merely their contracted usage) and identify hot spots so we can manage more efficiently."

Pepin says a good return on investment (ROI) for such projects is three to five years. "Our ROI was 3.75 years."
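The payback arithmetic behind a figure like that is simple to reproduce. The sketch below uses purely hypothetical cost and savings numbers (the article does not break out NaviSite's project costs), chosen only so the result lands on the same 3.75-year figure Pepin cites.

```python
def payback_years(upfront_cost, annual_savings):
    """Simple payback period: years until cumulative savings cover the upfront cost."""
    if annual_savings <= 0:
        raise ValueError("Project never pays back without positive annual savings")
    return upfront_cost / annual_savings

# Hypothetical example: a $150,000 monitoring/retrofit project
# saving $40,000 per year pays back in 3.75 years.
print(f"{payback_years(150_000, 40_000):.2f} years")
```

A fuller analysis would discount future savings (net present value), but a simple payback period is often how the three-to-five-year target Pepin mentions is framed.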
In addition to monitoring, managers need analytics in place to accurately predict and resolve problems. "DCIM and server monitoring, coupled with analytics that link the two, can be very powerful," Miecielica says. The analytics help managers see, for instance, that moving a workload from X to Y can improve efficiency, but that moving it from X to Z can be even more efficient.

Rather than looking at the data center only as a collection of individual systems to be optimized, Miecielica advises also looking at it holistically. "Systems don't operate in isolation. They are part of a comprehensive package." Viewed that way, synergies can be identified that yield additional data center optimization opportunities.

Source Creatively

If data center optimization is ultimately about saving money, managers should also examine their purchasing programs. NaviSite looked for cost efficiencies in volume purchases, Pepin recounts. "Look at large commodity items like cabinets, racks, cabling and plug strips." Pepin eliminated middlemen whenever possible. "For big purchases, we went directly to the manufacturers in China." He also sought out innovative young technology vendors, working with them to design specifications that met all his requirements while significantly lowering the price.

Data center optimization, clearly, extends beyond hardware to become a system-wide activity. It is the key to providing more power, more capacity and more storage without requiring more money.