Data Center Knowledge | News and analysis for the data center industry
 

Friday, May 1st, 2015

    12:00p
    Three Ways to Get a Better Data Center Model

    Few things give you a better idea of how a data center will behave when something in the environment is changed, before actually applying the change, than a sophisticated, accurate virtual model of the facility.

    As people increasingly rely on software running in data centers for nearly every aspect of their lives, the issue of data center power consumption becomes more and more acute. Using data center modeling is one of the best ways to ensure a new or existing facility will use energy in the most efficient way.

    Because data center energy efficiency is such an important topic, Facebook, Bloomberg, IBM, Comcast, Intel, and Verizon, among others, have partnered with the U.S. National Science Foundation and four universities to study ways to improve efficiency at the Center for Energy-Smart Electronic Systems (ES2) at Binghamton University in New York State.

    ES2 recently built a data center lab for use in its research, and one of the latest research papers to come out of the department examines data center modeling.

    The research project compared the predictions of a data center model created with a software solution to empirical measurements taken in the facility over time. The researchers found that a model can be made substantially more accurate if it is adjusted based on monitoring data collected from the operational facility.

    “The models without experimental validation are questionable,” Husam Alissa, a Binghamton PhD candidate and the paper’s lead author, who was also deeply involved in designing the ES2 lab, said.

    Using a Computational Fluid Dynamics model to design a data center before it is built is a valid approach, he said. Once the facility is built, however, the model will be a lot more useful if it’s adjusted based on operational data.

    “If you really want to understand your data center… you definitely need to spend some time on the validation process.”
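
    A minimal sketch of what such a validation pass might look like in practice is below, assuming hypothetical predicted and measured per-tile airflows and an arbitrary 10 percent tolerance; none of these figures come from the ES2 study.

    # Illustrative sketch: compare CFD-predicted tile airflows against field
    # measurements and flag tiles whose error exceeds a tolerance. The numbers
    # and the 10 percent tolerance are hypothetical, not from the ES2 paper.

    predicted_cfm = {"tile_01": 520.0, "tile_02": 480.0, "tile_03": 450.0}
    measured_cfm = {"tile_01": 505.0, "tile_02": 430.0, "tile_03": 470.0}

    TOLERANCE = 0.10  # flag tiles that disagree by more than 10 percent

    def percent_error(predicted, measured):
        """Signed relative error of the model against the measurement."""
        return (predicted - measured) / measured

    for tile, measured in measured_cfm.items():
        err = percent_error(predicted_cfm[tile], measured)
        status = "RECALIBRATE" if abs(err) > TOLERANCE else "ok"
        print(f"{tile}: predicted={predicted_cfm[tile]:.0f} CFM, "
              f"measured={measured:.0f} CFM, error={err:+.1%} [{status}]")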

    The team used Future Facilities’ CFD modeling software and consulted with the vendor’s representatives extensively. Here are three key things they learned that can help improve a data center model:

    1. Measure Airflow Everywhere

    One of the discrepancies between the model and reality was the rate of airflow through perforated floor tiles. For example, the further a tile was from a CRAH (Computer Room Air Handling) unit, the higher its flow rate, since some of the air the unit supplied bypassed the tiles right next to it.

    They also measured a drop in flow rate in the middle of one of the aisles, which they suspected was being caused by a vortex elsewhere under the aisle. CFD modeling combined with empirical data can be very helpful in identifying such phenomena, they wrote.

    Alissa and his team also traced some flow deficit to things like unsealed floor holes that were used to route cooling pipes and power conduits, tile cuts around the air conditioning unit, seams, and the point where the raised floor met the side walls.

    2. Start With as Clean a Slate as Possible

    The simpler the room’s conditions are, the easier it is to understand its behavior. If possible, it helps a lot to eliminate things that complicate air flow and take measurements in that simplified state. These can range from shutting down IT equipment to equalizing room pressure or identifying floor leakage.

    Having an idea of how a room behaves under such simplified conditions can be very helpful in understanding raised-floor behavior, plenum flow patterns, and their effect on the delivery of cold air through the tiles and server temperature.

    3. It’s Not Just a Box

    It’s important to not oversimplify the physical aspects of the cold-air plenum. Details like cutouts, floor jacks, and supply-vent locations all have an effect on airflow, and a good model should account for all of them.

    Floor jacks, for example, the vertical columns that support the raised floor, affect airflow significantly simply because there are so many of them. Without accounting for jack resistance, the data center model overestimated plenum pressure buildup and, as a result, the airflow delivered through the tiles. In some spots closer to the walls, where pressure buildup is higher, it overshot predicted tile delivery by as much as 30 percent compared to measured data.
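
    One way to see why overestimated plenum pressure inflates predicted tile delivery is the orifice-flow approximation commonly used for perforated tiles, in which flow scales with the square root of the pressure difference. The sketch below uses assumed values for the discharge coefficient, tile open area, and pressures; they are illustrative, not numbers from the study.

    # Orifice-flow approximation for a perforated tile: Q = Cd * A * sqrt(2*dP/rho).
    # All coefficients and pressures below are assumed, illustrative values.
    import math

    RHO = 1.2      # air density, kg/m^3
    CD = 0.65      # assumed discharge coefficient for a perforated tile
    A_OPEN = 0.09  # assumed open area of the tile, m^2

    def tile_flow_m3s(delta_p_pa):
        """Volumetric flow through the tile for a given plenum-to-room pressure difference."""
        return CD * A_OPEN * math.sqrt(2.0 * delta_p_pa / RHO)

    measured_dp = 12.0  # Pa, what the instrumented plenum actually shows
    modeled_dp = 20.0   # Pa, what a model ignoring jack resistance might predict

    overshoot = tile_flow_m3s(modeled_dp) / tile_flow_m3s(measured_dp) - 1
    print(f"overshoot in predicted tile delivery: {overshoot:.0%}")  # roughly 29%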

    It took Alissa and his team about seven months to complete the work, in the course of which they calibrated nearly every aspect of airflow in their model, including cooling units, servers, tiles, plenums, leakage, and room geometry. Once this calibration was done, the model became truly predictive, closely matching the measurements taken in experimentation.

    3:00p
    Cosentry Extends Managed Cloud Services to Azure

    Midwest data center and IT solutions provider Cosentry has extended its managed cloud services to Microsoft Azure. The new service provides expertise in connecting to Azure with a focus on using it for SQL Server testing and development, SQL Server backup, and SQL Server disaster recovery.

    Cosentry has been supporting SQL Server database environments for more than a decade and will now lend its expertise and management to the public cloud. As more customers seek to leverage hybrid cloud, managed service providers will have to extend their services beyond their own data centers’ walls and provide managed cloud services.
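
    For a sense of what using Azure for SQL Server backup looks like in practice, the sketch below issues a backup-to-URL command against Azure Blob Storage from Python. The server, database, credential, and storage account names are placeholders, and this is a generic illustration, not Cosentry’s tooling.

    # Illustrative sketch: back a SQL Server database up directly to Azure Blob
    # Storage using the backup-to-URL feature. All names are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;"
        "DATABASE=master;UID=backup_svc;PWD=example",
        autocommit=True,  # BACKUP cannot run inside a transaction
    )

    backup_sql = """
    BACKUP DATABASE [SalesDB]
    TO URL = 'https://examplestorage.blob.core.windows.net/backups/SalesDB.bak'
    WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;
    """

    cursor = conn.cursor()
    cursor.execute(backup_sql)
    while cursor.nextset():  # drain informational result sets until the backup finishes
        pass
    print("backup command completed")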

    “Cosentry’s managed services and in-depth SQL Server skills make it an ideal fit for supporting hybrid cloud environments using Azure,” said Aziz Benmalek, vice president of Microsoft’s Hosting Service Provider business. “These combined services will further enhance mission-critical SQL Server environments.”

    Cosentry offers a managed services portfolio across cloud, operating systems, database, security and network services. It recently extended its managed security services offerings.

    “We are focused on offering our customers more choices of industry leading solutions to solve their IT challenges,” said Brad Hokamp, CEO of Cosentry, in a press release. “Our expanded offering underscores this philosophy by offering hybrid cloud support of Azure, with a focus on providing more cost-effective options for database environments, fully managed by Cosentry.”

    In the quest for hybrid cloud, help is needed with everything from initial migration, to security and compliance, to ongoing operations. Managed service providers are increasingly extending their portfolios to address cloud management needs. Examples include Datapipe, which began with services around AWS and extended to Azure this year, and Rackspace, which recently began offering managed private VMware vCloud.

    Cosentry has expanded both its services and footprint in the last few years, making acquisitions like Red Anvil in Milwaukee and managed services provider XIOLINK in St. Louis.

    3:13p
    Amazon Piloting Tesla Batteries to Power Cloud Data Center

    Amazon Web Services is piloting Tesla’s new stackable battery units to supplement data center power capacity in its US West region. The company is rolling out a 4.8 megawatt hour pilot of the Tesla energy storage batteries at the site.

    Amazon made a commitment to using 100 percent renewable energy last November, following increasing criticism by Greenpeace. The Tesla announcement is part of these ambitions in practice.

    The Tesla energy storage systems are based on the powertrain architecture and components of Tesla electric vehicles. They integrate batteries, power electronics, thermal management, and controls into a turn-key system.

    Target, Enernoc, and Jackson Family Wines are also participating in pilots.

    Batteries are not only important for data center reliability, but are enablers for the efficient application of renewable power, said James Hamilton, Distinguished Engineer at AWS, in a statement. One of the biggest barriers to widespread adoption of wind and solar energy is intermittency of generation. Efficient energy storage addresses this problem.

    “We’ve been working closely with Tesla for the past year to drive innovative applications of high-capacity battery technology in data center applications with the ultimate goal of reducing the technical barriers limiting widespread adoption of renewables in the grid,” Hamilton said.

    Batteries are used to back up critical business operations in the event of a power outage, but the benefits extend beyond staying online. The Tesla batteries help a business maximize consumption of on-site clean power, avoid peak demand charges, and buy electricity when it’s cheapest. Utilities and intermediate service providers often pay users for participating in demand-response programs.
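
    A simplified sketch of that peak-shaving and price-arbitrage logic follows; the battery capacity, power limit, demand threshold, and prices are made-up assumptions, not AWS or Tesla figures.

    # Simplified battery dispatch: charge when grid prices are low, discharge
    # when facility load would otherwise exceed the peak-demand threshold.
    # All numbers are illustrative assumptions.

    BATTERY_KWH = 200.0        # usable storage capacity
    MAX_RATE_KW = 100.0        # charge/discharge power limit
    PEAK_THRESHOLD_KW = 400.0  # demand level that triggers peak charges
    CHEAP_PRICE = 0.06         # $/kWh below which charging is attractive

    def dispatch(load_kw, price, soc_kwh, hours=1.0):
        """Return (grid_draw_kw, new_state_of_charge_kwh) for one interval."""
        if load_kw > PEAK_THRESHOLD_KW and soc_kwh > 0:
            # Shave the peak with stored energy.
            discharge = min(load_kw - PEAK_THRESHOLD_KW, MAX_RATE_KW, soc_kwh / hours)
            return load_kw - discharge, soc_kwh - discharge * hours
        if price <= CHEAP_PRICE and soc_kwh < BATTERY_KWH:
            # Buy cheap energy and store it, without creating a new peak.
            charge = min(MAX_RATE_KW, (BATTERY_KWH - soc_kwh) / hours,
                         max(0.0, PEAK_THRESHOLD_KW - load_kw))
            return load_kw + charge, soc_kwh + charge * hours
        return load_kw, soc_kwh

    soc = 50.0
    for load, price in [(350, 0.05), (450, 0.18), (500, 0.22), (300, 0.04)]:
        draw, soc = dispatch(load, price, soc)
        print(f"load={load} kW, price=${price:.2f}/kWh -> grid draw={draw:.0f} kW, charge={soc:.0f} kWh")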

    Amazon opened a 100 percent carbon-neutral AWS cloud region in Frankfurt, Germany, last year, its third carbon-neutral region counting GovCloud; the other is US West in Oregon.

    In April, Amazon joined the American Council on Renewable Energy and announced participation in the U.S. Partnership for Renewable Energy Finance (US PREF) to increase its work with state and federal policymakers and other stakeholders to enable more renewable energy opportunities for cloud providers.

    Cloud computing in general is more energy efficient than traditional data centers both in implementation and utilization. AWS said it uses rack-optimized systems that use less than one-eighth the energy of blade enclosures commonly used in corporate data centers.

    An ACORE posting discusses the effect of cloud on data center utilization: a well-run traditional data center would operate at an average utilization of 20 percent, meaning 80 percent of its capacity, and the energy needed to keep it “ready,” was wasted.

    In January, Amazon teamed with Pattern Development to support the construction and operation of a 150 megawatt wind farm in Benton County, Indiana, called the Amazon Wind Farm (Fowler Ridge). As early as January 2016, the project will start generating approximately 500,000 megawatt-hours of wind power.
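
    As a back-of-the-envelope check on those figures, reading the 500,000 megawatt-hours as annual output (an assumption on our part) implies a capacity factor in the high 30-percent range, typical of a productive onshore wind site.

    # Implied capacity factor for the Fowler Ridge figures quoted above,
    # assuming (our assumption) that the 500,000 MWh is annual output.
    nameplate_mw = 150.0
    annual_output_mwh = 500_000.0
    hours_per_year = 8_760

    capacity_factor = annual_output_mwh / (nameplate_mw * hours_per_year)
    print(f"implied capacity factor: {capacity_factor:.0%}")  # roughly 38%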

    We are entering an age of interesting data center power generation alternatives. Microsoft is experimenting in Wyoming, and Google, Facebook, and Apple have all committed to using renewable energy and have made large renewable energy generation investments.

    3:30p
    Friday Funny: Pick a Caption for Beached Data Center

    It’s starting to get warm, and they say modular data centers can go anywhere?

    Diane Alber, the Arizona artist who created Kip and Gary, has a new cartoon for Data Center Knowledge’s cartoon caption contest. What do you think would be the funniest text for the bubble? Post your caption in the comments. Then, next week, our readers will vote for the best submission.

    Congratulations to Rodney, whose caption for the “Natural Lighting” edition of Kip and Gary won the last contest. Rodney won with: “Well, it looks like the mega church just signed the lease for the colo space.”

    Here’s the poll for the caption contest for last week’s “Data Center Treadmill” edition. Please vote!


    For more cartoons on DCK, see our Humor Channel. For more of Diane’s work, visit Kip and Gary’s website.

    4:55p
    OpenStack Storage: HP Adds Support for 3PAR in Kilo

    HP announced it is bringing new HP storage support to the 11th OpenStack release called Kilo, which launched earlier this week.

    Now a regular contributor to multiple OpenStack projects, HP is committing new solutions for Kilo that focus on the enterprise and support application-centric, automated, converged storage management. Taking a flexible hybrid cloud infrastructure approach, HP notes that its contributions are designed to increase OpenStack storage and management efficiency in order to reduce acquisition and operational costs in cloud and hybrid environments.

    Bringing support for its 3PAR arrays into OpenStack, HP is offering “flash caching,” or the use of flash capacity as a virtual extension of the storage system’s DRAM cache. Manila file services will allow 3PAR StoreServ Storage to serve both block and file workloads in open cloud and hybrid environments.

    HP says it is also contributing an elevator scheduler for assigning storage resources based on workload requirements and thin deduplication for increasing capacity utilization.
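
    In OpenStack, array capabilities like these are typically exposed to tenants through Cinder volume types with driver-specific extra specs. The sketch below illustrates that pattern; the extra-spec key, credentials, and endpoint are assumptions for illustration, not taken from HP’s documentation.

    # Illustrative sketch with python-cinderclient: define a volume type that
    # requests the 3PAR flash-cache capability, then provision a volume with it.
    # The 'hp3par:flash_cache' key, credentials, and endpoint are assumed.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'secret', 'demo',
                           'http://controller:5000/v2.0')

    flash_type = cinder.volume_types.create('3par-flash-cache')
    flash_type.set_keys({'hp3par:flash_cache': 'true'})

    volume = cinder.volumes.create(size=100,
                                   name='sql-data-01',
                                   volume_type='3par-flash-cache')
    print(volume.id, volume.status)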

    “Enterprises today struggle with the ‘all-in-one’ cloud model because they don’t use a single operating system or database software or management tool,” Eileen Evans, HP vice president, said in a statement. “HP’s approach is to support a flexible hybrid cloud infrastructure with open source technologies as its core DNA, and this includes a commitment to supporting OpenStack technology at the storage layer.”

    After rebranding and re-launching cloud offerings several times and forging the Helion line about a year ago, HP launched its OpenStack distribution called Helion last fall. The company now claims several large companies are using the distro worldwide.

    HP will incorporate OpenStack Kilo into future Helion OpenStack releases.

    5:34p
    Cushman & Wakefield to Pitch Huge Colorado Energy Park for Data Center Use

    The owner of a large parcel of real estate in Colorado that could potentially serve as a site for clusters of data centers is now soliciting bids. The property is described as a shovel-ready energy and development site with the potential to become one of the world’s largest microgrids, electrical grids that can operate independently of large utility grids.

    Commercial real estate firm Cushman & Wakefield has been named exclusive agent for Niobrara Energy Development (NED), a 662-acre property in Northern Colorado that is described as an ideal data center location for cloud-scale facilities. The developer behind the project is Loveland, Colorado-based Harrison Resource Corp., which has been marketing the site since 2012.

    Sean Ivery, senior director for Cushman & Wakefield, says that in addition to a friendly corporate tax environment in Colorado, the site is located on an unincorporated parcel of land. There’s also access to high-voltage power lines carrying electricity generated by low-cost natural gas, and fiber optic lines connected to 21 carriers run directly through the property. The site has access to all its own water rights.

    “There are a number of industries that would be interested,” Ivery says. “But the property has all the ingredients for a data center cluster.”

    Similar data center clusters have recently popped up in Colorado Springs, Phoenix, eastern Washington, and southern Oregon. In addition, Ivery notes that just across the river from NED, in Cheyenne, Wyoming, there are a number of data center facilities.

    Cheyenne is home to a massive Microsoft data center. Several data center providers also have smaller facilities in the area, including Cobalt and Green House Data.

    Preparing a development site with plenty of access to energy and fiber infrastructure, and securing all the necessary planning permissions, in hopes of attracting companies shopping around for data center locations is a common developer strategy. One recent example is the Reno Technology Park in Nevada, which now boasts a large and growing Apple data center campus.

    NED is the latest example of real estate developers looking to cash in on the ongoing build-out of data center facilities, and developers behind this site are going big.

    The site is zoned for 52 energy and data center uses. Energy-related zoning includes up to 50 megawatts of solar, geothermal and wind, unlimited energy storage, as well as up to 650 megawatts of natural gas plants and fuel-cell power plants.

    There is fierce competition between cities, states, and countries for data center construction projects. While, thanks to automation, these facilities don’t generate as many IT jobs as they once might have, their construction alone often represents a major boon to local economies that goes well beyond the number of people actually working inside any given data center.

    6:08p
    Mesosphere’s Data Center OS Comes to Azure and AWS

    Mesosphere has launched a public beta of its Datacenter Operating System on Amazon Web Services and Microsoft Azure. DCOS treats the data center infrastructure like one big computer.

    Microsoft Azure CTO Mark Russinovich gave a presentation during Microsoft Build in San Francisco earlier this week, demonstrating Mesosphere’s data center OS on Azure in action, launching several hundred nodes and thousands of Docker containers from the console in quick fashion.

    The same capability is now available on AWS as well. Mesosphere has also worked closely with Google and its Cloud Platform for easy Mesosphere deployment. Developer cloud provider DigitalOcean has also added early Mesosphere support.

    Mesosphere’s data center OS is based on the open source Apache Mesos distributed systems kernel used by data center operators like Twitter, Airbnb, and Hubspot to power internet-scale applications. Apple recently disclosed it was using Mesos to power Siri’s backend.

    It helps improve data center resource utilization and fault tolerance. Its value is in making it easier to launch applications like Hadoop and Cassandra across a cluster.

    Mesosphere has expanded its value proposition beyond just Mesos, repackaging and re-marketing it as DCOS. In addition to the Mesos core, DCOS also includes a set of core system services, including a distributed init system (Marathon), distributed cron (Chronos), service discovery (DNS), storage (HDFS), and others, all capable of launching containers at scale.

    The data center OS also provides an API and software development kit that lets programmers develop for a data center like it’s one big computer.
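
    To make the “one big computer” idea concrete, the sketch below asks Marathon, the distributed init system bundled with DCOS, to run a containerized service across the cluster via its REST API. The endpoint address and the app definition values are illustrative assumptions.

    # Illustrative sketch: launch three nginx containers somewhere in the
    # cluster through Marathon's /v2/apps REST endpoint. Host and values are assumed.
    import requests

    MARATHON = "http://marathon.example.com:8080"

    app = {
        "id": "/web/nginx",
        "cpus": 0.5,
        "mem": 256,
        "instances": 3,
        "container": {
            "type": "DOCKER",
            "docker": {"image": "nginx:1.9", "network": "BRIDGE",
                       "portMappings": [{"containerPort": 80}]},
        },
    }

    resp = requests.post(f"{MARATHON}/v2/apps", json=app)
    resp.raise_for_status()
    print("created app:", resp.json().get("id"))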

    The Microsoft demonstration highlighted Mesosphere’s utility in launching Docker containers at scale, something all the major clouds have been working to support on their offerings. Google added a Docker container management service to its cloud platform, and AWS did the same.

    Microsoft introduced a Docker command line interface for Windows following a partnership with Docker last October. It recently launched Hyper-V Containers, which use hypervisor-based isolation to run containers more securely on Windows Server, and Nano Server, a stripped-down, minimal-footprint Windows Server install option made for cloud and containers.

