Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, December 28th, 2016

    7:20p
    Are You Getting Everything Your Data Center Design is Meant to Deliver?

    You can measure your data center’s present PUE and set a PUE goal to work toward, but how do you know you’ve set the right goal? How do you know your goal is achievable yet represents the best efficiency rating you can get out of the facility?

    That’s one of the questions a new metric proposed by a former Amazon data center designer can help answer. Engineering Operations Ratio, or EOR, is a way to assess and express the gap between a computing facility’s actual performance and the performance it was designed (or claimed to have been designed) to deliver.

    “My goal with EOR is to provide a simple but effective way to drive data center operational effectiveness,” Osvaldo Morales wrote in a recently published paper describing the concept. “Having EOR data would allow us to double down on the good and fix the bad.”

    Morales spent the period between 2000 and 2016 designing the data centers that underpin Amazon’s vast ecommerce, online video, and cloud computing empire. He left the company this past March after three years as VP, Global Data Center Platform for Amazon Web Services, according to his LinkedIn profile.

    His EOR paper, released earlier this month, is the first publication by Infrastructure Masons, a data center design think tank of sorts, started earlier this year by Dean Nelson, the former eBay data center chief who now oversees all things computing infrastructure at Uber. IM’s advisory council members oversee some of the biggest global data center networks – Microsoft, Google, Facebook, eBay – and the operations of data center provider Switch, which lists most of them as customers.

    See also: Performance Indicator, Green Grid’s New Data Center Metric, Explained

    The first draft of the proposed metric describes it as a master ratio between data center design and operational performance levels that combines multiple similar ratios for various data center subsystems, primarily mechanical and electrical infrastructure. Amazon’s infrastructure team did not use a metric similar to EOR while he was there, but the company may decide to adopt the idea now, Morales told Data Center Knowledge.

    The paper includes an example of EOR calculations for a hypothetical 20MW data center, covering both its top-level PUE and its individual subsystems.

    Each ratio is derived by dividing the observed performance value by the value the design intended to achieve when a higher number indicates better performance, and by dividing the design value by the observed value when a lower number indicates better performance, as is the case with PUE.

    EOR thus dictates that you take into account both underperforming and overperforming systems, and it’s important to keep tabs on both, according to Morales. More details are in the paper itself.
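
    To make the arithmetic concrete, here is a minimal sketch in Python of the per-subsystem ratio logic described above. The subsystem names and values are hypothetical, and the roll-up into a single EOR figure (a simple average here) is an illustrative assumption; the paper itself defines how the subsystem ratios are combined.

        # Minimal sketch of the EOR per-subsystem ratio logic described above.
        # Sample values and the simple-average roll-up are illustrative assumptions.

        def subsystem_ratio(observed, design, lower_is_better=False):
            """Observed/design when higher is better; design/observed when lower is better (e.g., PUE)."""
            return design / observed if lower_is_better else observed / design

        subsystems = {
            # name: (observed, design, lower_is_better) -- hypothetical figures
            "PUE": (1.25, 1.20, True),
            "Chiller plant (kW/ton)": (0.55, 0.60, True),
            "UPS efficiency (%)": (96.0, 97.0, False),
        }

        ratios = {name: subsystem_ratio(obs, des, low)
                  for name, (obs, des, low) in subsystems.items()}

        # Values above 1.0 indicate performance better than design; below 1.0, worse.
        for name, ratio in ratios.items():
            print(f"{name}: {ratio:.3f}")

        eor = sum(ratios.values()) / len(ratios)  # illustrative roll-up only
        print(f"EOR (simple average, for illustration): {eor:.3f}")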

    Like PUE, the proposed “unit-less” metric is meant to be a simple way to communicate a data center’s performance to non-technical team members.

    The paper is a first draft, and Infrastructure Masons invites others involved in data center design and operation to comment and make suggestions for improvement.

    See also: Data Center Design: Which Standards to Follow?

    8:07p
    Have You Been a Naughty or Nice Data Center Manager?

    Jeff Klaus is the GM, Data Center Solutions at Intel Corporation, and Kim Povlsen is VP & GM, Digital Services & Data Center Software at Schneider Electric.

    Brooklyn-born songwriter J. Fred Coots was riding the New York City subway in the spring of 1934 and thinking about writing a children’s song when he ran into lyricist Haven Gillespie. What began as a Coots and Gillespie collaboration on the ‘L’ train ended with comedian Eddie Cantor singing “Santa Claus Is Coming to Town” a few months later during his weekly radio show. As the instant megahit’s chorus has it:

    You better watch out, you better not cry

    Better not pout, I’m telling you why

    Santa Claus is coming to town

    He’s making a list and checking it twice

    Gonna find out who’s naughty and nice

    And so, data center managers, what kind of data center manager are you? Are you naughty and keeping company with one of the 10 million zombie servers, which, by one estimate, consume energy equivalent to eight large power plants worldwide? Or are you being nice and using DCIM tools to determine what additional capacity you might free up if a zombie server is decommissioned?

    Now’s your time to come clean, because we’re heading into a new year with new beginnings.

    Naughty and Nice Practices Within the Data Center

    Naughty: Neglecting to Leverage Real-Time Analytics – In today’s always-on, connected data center environment, where unexpected spikes in usage are becoming the new normal, leveraging real-time analytics within the facility is the key to successful operations. Neglecting to do so puts the data center at a disadvantage and often forces data center managers and their teams to play catch-up. Real-time analytics allow data center operators to work with agility, monitoring data center workloads in real time and making adjustments on demand as needed.

    Nice: Introducing New IoT Devices Within the Data Center to Create a Holistic View of Infrastructure and Performance – IoT is commonly associated with wearables, refrigerators, connected cars and homes, and all things mobile. However, IoT technologies (devices with their own IP addresses) are also making their way into the data center itself. How? New technology and sensors that monitor changes within the data center environment are helping data center managers obtain a 360-degree, holistic view of how their facility is performing, while also giving them insight into where they may need to make adjustments or changes.

    Naughty: Not Adopting Automation to Replace Manual Processes – Automation has improved processes for many daily tasks. Water was once fetched by the bucket and is now delivered by automated household plumbing. Washing clothes once meant doing each item by hand; thanks to technology advancements, washing machines now automate full loads at any time. Innovation has a common thread: replacing a once tedious, manual process with one that does the job more efficiently. So put the Stanley tape measure away and turn to automation.

    Automation in the data center allows operators to recoup approximately 40 percent of their work week. This is invaluable time that was previously spent on manual processes such as living in a spreadsheet or physically walking the data center floor with a measuring tape to accomplish capacity planning and forecasting. Still, 45 percent of data center operators continue to rely on manual processes, wasting time and resources and leaving them squarely on the naughty list.

    Especially Nice: Encouraging Routine or Annual Health Management Assessments of the Data Center – Each year, you pay a visit to your doctor for a routine health check-up. As an ever-changing, developing entity, the data center also requires regular health checks, which enable data center managers to keep a finger on the pulse of their facilities and maintain business continuity. Preventative measures are critical to avoiding outages and downtime.

    Worthy of shiny new toys on Christmas morning, nice data center managers maintain the health of data center hardware by leveraging automated tools that conduct ongoing monitoring, analytics, diagnostics and remediation functions.

    Just how important is automating data center health monitoring? According to a study by the Ponemon Institute, the average cost of a single data center outage today is nearly three-quarters of a million dollars. Faced with this scenario, any data center manager expecting a partridge in a pear tree on Christmas morning is grossly misguided.

    Decidedly Naughty: Not Using Micro-Level Controls for Individual Servers or Macro-Level Policies for Racks of Servers – Data center managers, we need you to turn your attention to runaway energy consumption, which has reached 30 billion watts of electricity across the world’s facilities, the equivalent output of 30 nuclear power plants and enough to power all the households in Italy.

    The point is that we can actually do something about this unfortunate circumstance. Rampant energy consumption can be effectively combated with a combination of micro-level controls for individual servers, Power Distribution Units (PDUs), air-flow controllers and cooling units, as well as macro-level controls and policies for racks of servers and entire data centers. These software and technology products can be deployed in less than a week, feature intuitive dashboards and require a short learning curve.

    Provided with real-time analytics, data center operators benefit from early detection of thermal spikes and can prepare for unexpected surges in usage by migrating workloads. They can even reduce a facility’s overall carbon footprint by lowering cooling costs, a major component of data center energy expenditures.
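
    As a rough illustration of what that early detection can look like in practice, the sketch below flags inlet-temperature readings that jump well above a rolling baseline. The sensor values, window size, and threshold are assumptions made for the example, not parameters of any particular vendor’s product.

        # Hypothetical threshold-based thermal-spike detection on streaming telemetry.
        # Window size, threshold, and sample data are assumptions for illustration.

        from collections import deque
        from statistics import mean

        WINDOW = 12          # readings in the rolling baseline (e.g., one minute at 5-second intervals)
        SPIKE_DELTA_C = 3.0  # flag readings this far above the recent average

        def detect_spikes(readings_c):
            """Yield (index, temperature) for readings that jump well above the rolling baseline."""
            window = deque(maxlen=WINDOW)
            for i, temp in enumerate(readings_c):
                if len(window) == WINDOW and temp - mean(window) > SPIKE_DELTA_C:
                    yield i, temp
                window.append(temp)

        if __name__ == "__main__":
            # Simulated inlet temperatures in Celsius, with a sudden spike at the end
            samples = [22.0 + 0.1 * (i % 5) for i in range(30)] + [26.5, 27.2]
            for idx, temp in detect_spikes(samples):
                print(f"Possible thermal spike at sample {idx}: {temp:.1f} C")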

    So, do get with the program, naughty data center managers, and resolve to make next year different.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    9:38p
    GI Partners Buys Seattle’s KOMO Plaza, Including Data Center Hub

    An entity associated with GI Partners, a San Francisco-based private equity firm that has backed some of the most well-known data center companies, has acquired the two-building complex next to Seattle’s Space Needle called KOMO Plaza (formerly Fisher Plaza).

    The complex, which houses numerous TV and radio stations and retail outlets, is also a prime Seattle data center and carrier hub. It has wholesale data center space, occupied by the likes of Internap and TierPoint, as well as retail colocation space. Carriers present there include CenturyLink, Verizon, AT&T, and Zayo, among others.

    KOMO Plaza’s previous owner, Houston-based Hines, has sold it for $276 million to a buyer linked to GI, according to local news reports that cite property sale records filed earlier this month.

    GI is one of the leading private equity players in the data center services business. Some of its most well-known deals include the 2013 sale of cloud provider SoftLayer to IBM, which IBM has turned into the platform that underpins its entire global cloud services business; the 2004 IPO of Digital Realty Trust, one of the world’s largest data center providers; the 2011 sale of Telx to ABRY Partners and Berkshire Partners, who later sold it to Digital Realty; and the 2014 acquisition of Peak 10, a major US data center provider that specializes in secondary markets.

    Hines appears to have made a hefty profit on the Seattle deal. It bought the property from Fisher Communications in 2011 for $160 million.

    Hines will continue acting as manager of the 290,000-square-foot property following the sale, according to its website.

    According to The Seattle Times, the sale of KOMO Plaza was one of the city’s biggest real estate deals of the year. The Times also noted that the two buildings have appeared on the TV show Grey’s Anatomy as the exterior of the fictional hospital featured in the show.

