Data Center Knowledge | News and analysis for the data center industry

Thursday, October 2nd, 2014

    12:00a
    Oracle Bets on Becoming One-Stop Cloud Shop for Every Layer of the Stack

    Oracle CTO and former CEO Larry Ellison kicked off his OpenWorld keynote on Tuesday afternoon with an apology for his no-show to a keynote he was scheduled to deliver at last year’s OpenWorld conference. He said he was tied up watching his team race in the America’s Cup, which they ultimately won. “Every day was a sudden death, but somehow we made it,” he said about last year’s regatta on San Francisco Bay.

    But Ellison, who stepped down from the CEO position earlier this month to be replaced by the company’s president Mark Hurd and CFO Safra Catz, was on stage in San Francisco Tuesday to talk business. He was there to make the case that the company he co-founded in the late 70s and stood at the helm of as CEO until less than two weeks ago would dominate as a provider of cloud services for enterprises.

    The overall message at this week’s OpenWorld was that Oracle’s cloud play was a platform one, and that providing every layer of the IT infrastructure and application stack as a single platform would be a winning combination. Ellison took the stage on Tuesday to reinforce that message and to do some live demos of Oracle’s cloud capabilities. “Because of my new job – I’m CTO now – I have to do my demos myself,” he joked.

    All the Oracle cloud messaging is aggressive, but the company still has a long way to go if it wants to be a dominant cloud player. In terms of growing cloud services into a substantial part of its business, Oracle is pretty much starting from scratch. SaaS, PaaS and IaaS together contributed only four percent of its total revenue in its most recently completed quarter.

    The company’s traditional big source of revenue, its enterprise software business, is not demonstrating any staggering growth rates, which can at least partially be attributed to competition from other SaaS companies. New software license revenue was down two percent last quarter, and license updates and software support revenue was up seven percent.

    Catz said earlier this month that she expected update and support revenue to shrink over time as well. The company hopes to compensate for shrinking software revenues with growth in Oracle cloud services revenue, which it said was rapid. SaaS and PaaS combined grew 32 percent in revenue in the last quarter. IaaS revenue grew 26 percent.

    From legacy to cloud-borne and modern

    Oracle claims it has developed the capability to move its applications deployed in customers’ own data centers onto its cloud platform quickly and easily. “We wanted to make it really easy to move existing Oracle databases and existing Oracle applications to the cloud,” Ellison said.

    Besides simply moving the apps, the strategy is to modernize them in the process, adding things like multitenancy, data analytics, social and mobile capabilities.

    “Facebook-like interface capabilities are built into our platform, and the applications that are built on top of that platform inherit those capabilities,” he said. “To build analytics … as part of your applications is easier because Big Data analytics is part of the platform. We implement multitenancy not at the application layer, but we implement multitenancy at the platform layer.”

    Ellison emphasized Oracle’s strategy to build and provide its huge array of business applications on top of the same platform it is offering to developers as a service. “Most SaaS companies do not sell platform services, period,” he said. “The few that do, do not offer you the same platform they build on.”

    Courting enterprise developers

    In a keynote and a press conference earlier in the day, Thomas Kurian, executive vice president of product development, announced a number of enhancements to the Oracle cloud platform.

    To make the platform attractive for developers, Oracle has rolled out a number of features designed to make their jobs easier. It has introduced the option to buy a dedicated Oracle database as a service via the Oracle cloud. It also now provides a dedicated Java application server environment as a cloud service.

    Together with some developer tools and the Oracle Infrastructure-as-a-Service offering, these capabilities make the company’s cloud a one-stop shop for hosting code, compiling and deploying applications. The company has automated database configuration and encryption functions, as well as backup scheduling and disaster recovery.

    SQL on Hadoop comes to Oracle

    In the Big Data analytics category, the company introduced Oracle Big Data SQL, which enables users to run SQL queries against data stored on Hadoop clusters. “It opens up the data in Hadoop to all users who know SQL,” Kurian said.

    Oracle’s edge here is integration of this SQL-on-Hadoop system with its enterprise software, such as ERP (Enterprise Resource Planning), CRM (Customer Relationship Management) or its relational database software. The idea is to combine raw data stored in Hadoop with the more structured data generated by the enterprise applications.
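    The article does not spell out the mechanics, but the appeal is that Hadoop-resident data can be joined with relational tables in a single, ordinary SQL statement. Below is a minimal sketch of that shape issued through the cx_Oracle driver; the connection string and the table names (web_clicks over Hadoop, crm_customers in the database) are hypothetical placeholders, not Oracle's documented example.

        # Sketch: one SQL statement joining Hadoop-backed and relational data.
        # Credentials, DSN and table names are illustrative assumptions.
        import cx_Oracle

        conn = cx_Oracle.connect("analyst", "secret", "dbhost/orcl")
        cur = conn.cursor()
        cur.execute("""
            SELECT c.customer_name, COUNT(*) AS clicks
            FROM   web_clicks w        -- external table over Hadoop data
            JOIN   crm_customers c     -- ordinary relational table
            ON     w.customer_id = c.customer_id
            GROUP  BY c.customer_name
            ORDER  BY clicks DESC
        """)
        for name, clicks in cur.fetchmany(10):
            print(name, clicks)
        conn.close()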

    Another addition to the Oracle Big Data toolbox is Oracle Big Data Discovery, a visualization tool for Hadoop. Also aiming to open Hadoop to enterprise users without specialized data analytics skills, the browser-based tool enables data analysis, search or identification of problems with data sets.

    The in-memory option

    For the speed-conscious, Oracle announced that all of its major software suites are now certified to run with its in-memory database technology. The company said deploying the in-memory database option with any existing application that is compatible with its database software was now “as easy as flipping a switch.”
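    The keynote did not walk through the mechanics, but in Oracle Database 12c the "switch" is essentially a memory allocation plus a per-table attribute rather than any application change. A hedged sketch via cx_Oracle follows; the table name and credentials are placeholders, and it assumes a DBA has already sized the in-memory pool.

        # Sketch: enabling the in-memory column store for one table.
        # Assumes the DBA has already allocated INMEMORY_SIZE; the SALES table
        # and the connection details are placeholders.
        import cx_Oracle

        conn = cx_Oracle.connect("admin", "secret", "dbhost/orcl")
        cur = conn.cursor()
        cur.execute("ALTER TABLE sales INMEMORY")   # populated on next access
        conn.close()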

    Making mobile easier

    Another area Oracle is now heavily focused on is tools for mobile application development in the enterprise. It announced a mobile application development framework that allows a developer to write a single code base for an application that will then automatically adapt to different mobile platforms (iOS, Android, Windows Phone or Blackberry) and across different device types (smartphones, small tablets, big tablets and PC browsers).

    Unlike startup developers, enterprise developers have an inherent set of problems that make mobile difficult.

    They have legacy enterprise systems to deal with and strict security policies. Oracle is now offering a cloud-based service it says can expose a company’s legacy systems as services that can be used on mobile devices.

    Oracle also now offers a feature developers can use to provide single-sign-on capabilities to a user across all of their enterprise applications and devices.

    Another new security feature is a secure container that holds enterprise applications and data on a user’s device. Once the employee leaves the company or loses the device, the container can be removed along with all the data it contains without affecting anything else on the device.

    3:30p
    Planning to Succeed in the Face of Failure: Big Data and the Data Center

    K.G. Anand is director, Global Solutions and Product Marketing, Avocent Products and Services, Emerson Network Power. He leads these efforts globally for the company’s data center hardware, software and services offerings.

    In our high-tech world, it feels as though a new acronym (or “buzzcronym”) springs up each and every day, and it occurred to me that the most successful terms are those we use without a second thought. I believe this happens when a term becomes more than marketing lingo and instead represents something of real value.

    Cloud computing, for example, was just a term several years ago. Today, like it or not, it has come to represent an entire realm of functionalities (each with their own acronym, of course).

    Likewise, other buzzcronyms have come to represent things we take for granted today, like SAN/NAS in storage, IDS/IPS in security and yes, Big Data. This term is seemingly everywhere.

    Data security no matter the size

    What defines Big Data is still up for debate. One prominent analyst firm says all data is Big Data to the companies that need it to operate. Others talk about enormous amounts of unstructured data, like video. I would suggest, however, that regardless of the size of your data or even what you call it, it must be kept both secure and available in your data center.

    As such, it is incumbent upon you to increase the efficiency of your data center to improve the availability of the underlying infrastructure that provides access to your data. Centralized management has proven itself to be a reliable method of achieving this goal, allowing you to monitor and access the devices throughout your center, regardless of their type, vendor or location.

    Another way to reduce your risk is to optimize your Disaster Recovery and Business Continuity strategy. Remote access to your network assets enables you to make things right when they go wrong, in a fraction of the time it would take to physically visit the site of the failure. Conversely, remote management also lets you shut things down in a hurry, if that is the best course of action.

    Finally, the remote monitoring of your devices allows you to determine how each is operating and what resources it is consuming, and even to identify potential power overloads before they happen. To do that, you need to monitor down to the individual node level, with automatic updates by zone, rack, PDU, outlet and more.
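    As an illustration of what node-level monitoring looks like in practice, the sketch below polls per-outlet power readings from a DCIM REST endpoint and flags outlets nearing their rated load. The URL, JSON fields and 80 percent threshold are hypothetical assumptions, not an actual Emerson or Avocent API.

        # Hypothetical sketch: poll per-outlet power draw, warn before overload.
        import requests

        DCIM_URL = "https://dcim.example.com/api/v1/outlets"   # placeholder

        def check_outlets(limit_ratio=0.8):
            for o in requests.get(DCIM_URL, timeout=10).json():
                ratio = o["watts"] / o["rated_watts"]
                if ratio >= limit_ratio:
                    print(f"WARNING: {o['rack']}/{o['outlet']} at "
                          f"{ratio:.0%} of rated load")

        if __name__ == "__main__":
            check_outlets()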

    When all is said and done, it comes down to delivering a consistent level of service that your business demands. In a world filled with buzzcronyms that actually mean something, that is doubly important.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Top 5 Reasons Why Environment Sensors are in All Modern Data Centers

    Your data center is a well-oiled machine processing massive amounts of user information, applications and complex workloads. You work hard to keep this machine running optimally with maintenance, powerful software tools, and knowledgeable engineers.

    However, as more users connect and demand resources, your business model must evolve.

    So how do you keep an eye on an ever-expanding data center platform?

    Enter environment sensors. As this whitepaper from Raritan points out, working with environment sensors in a modern data center has a number of benefits:

    • Sensors can help prevent overcooling, undercooling, electrostatic discharge, corrosion and short circuits
    • Sensors help organizations to reduce operational costs, defer capital expenditures, improve uptime, and increase capacity for future growth
    • Sensors provide environmental monitoring and alert managers to potential problems like the presence of water, smoke, and open cabinet doors
    • Sensors can save you up to four percent in energy costs for every degree of upward change in the baseline temperature, known as a set point
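    The last bullet's four-percent-per-degree rule of thumb is easy to put into numbers. A rough sketch, applying the rule multiplicatively to a hypothetical annual cooling bill:

        # Rough estimate of cooling savings from raising the set point.
        # 4% per degree is the whitepaper's rule of thumb; the $500,000 annual
        # cooling bill and the 3-degree increase are illustrative assumptions.
        SAVINGS_PER_DEGREE = 0.04
        baseline_cooling_cost = 500_000      # USD per year
        degrees_raised = 3                   # e.g. a 72F -> 75F set point

        remaining = (1 - SAVINGS_PER_DEGREE) ** degrees_raised
        savings = baseline_cooling_cost * (1 - remaining)
        print(f"Estimated annual savings: ${savings:,.0f}")   # ~$57,632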

    The key is to understand that just like any technology, modern environment sensors have come a long way to support the next-generation data center.

    For example, new plug-and-play temperature and temperature/humidity sensors are field replaceable. When the humidity sensor accuracy naturally diminishes, you don’t need to remove the entire sensor, just the sensor head to maintain a high degree of accuracy.

    Download this whitepaper today to find out why environment sensors are now in modern data centers. A few reasons include:

    • Save on cooling by confidently raising data center temperatures
    • Ensure uptime by monitoring airflow and air pressure to and from racks
    • Maintain cabinet security with contact closure sensors
    • Improve data center uptime by receiving environment alerts
    • Make strategic decisions on environmental designs and modifications

    Remember, your data center is an ever-evolving piece of infrastructure. As businesses create more demand in the digital world, it’ll be up to your environment to keep pace. Make sure you always have the right tools in place to control environmental variables and the overall health of your data center.

    4:41p
    Google Cuts Compute Engine Pricing Once Again

    Google announced (once again) across-the-board cuts to Compute Engine pricing. It’s been half a year since the formal launch of its Infrastructure-as-a-Service offering, which itself came with a dramatic price reduction relative to competitors and prompted them to follow suit.

    Cuts were made to U.S., Europe and Asia pricing, with Europe and Asia marginally higher (by hundredths of a cent). Providers with the largest cloud platforms, such as Amazon Web Services, Google Cloud Platform and Microsoft Azure, continue to make aggressive cuts.

    These clouds have the scale to compete in the increasingly commoditized raw cloud computing world. Big scale means operational efficiency and a higher margin to play with.

    In the U.S., pricing for the standard virtual machine dropped by 10 percent, or seven tenths of a cent — from $0.07 to $0.063. While the reduction doesn’t seem like much, it adds up with lengthy and large deployments. High-memory instances dropped from $0.082 to $0.074, while high-CPU instances dropped from $0.044 to $0.040.
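    A back-of-the-envelope calculation shows how that fraction of a cent compounds. It assumes the listed figures are hourly rates, as instance prices typically are, and a steady fleet of 200 standard instances running around the clock; both assumptions are illustrative, not Google's numbers.

        # Illustrative savings from the standard-instance price cut.
        old_rate, new_rate = 0.070, 0.063    # USD per instance-hour (assumed)
        instances = 200                      # hypothetical steady-state fleet
        hours_per_month = 730

        monthly_savings = (old_rate - new_rate) * instances * hours_per_month
        print(f"Monthly savings: ${monthly_savings:,.2f}")   # $1,022.00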

    The new Google Compute Engine price cuts (source: Google Blog)

    5:09p
    Virtus Opens Tier III Certified London Data Center

    Virtus Data Centres has opened London2, its second London colocation data center.

    Built in Hayes, just outside of London, the facility offers customers 11.4 megawatts of IT load across 65,000 square feet of space. Densities up to 40kW per rack are possible in some spots. The Uptime Institute has awarded Tier III Certification of Design Documents to the data center.

    Virtus builds its data centers modularly with an emphasis on efficiency through a method it calls ‘intelligent by design.’ It has six data halls all capable of being subdivided, offering clients anything from a cabinet to a suite with dedicated power and cooling.

    The company said its new London colocation data center is powered by renewable energy sources and uses indirect fresh-air evaporative cooling technology by a company called Excool.

    The provider offers Virtus Intelligent Portal (VIP), a free data center infrastructure management (DCIM) solution for colocation customers. The self-service tools allow clients to visualize and manage space, power, cooling and energy efficiency through a single pane of glass.

    Through a dashboard called Virtus Flex, customers can monitor their IT power usage in real time, flex their contracted space and power up or down according to demand.

    The building is away from main roads and behind a five meter-high security fence, with 24/7 security. It is close enough to central London for low-latency performance and active-active replication but far enough to meet government and financial services business continuity standards.

    The data center is strategically positioned between existing fiber routes running to the north and south of the site. It also has around 10 carriers so far, six of which are fiber owners and operators: BT, Virgin Media, COLT, euNetworks, Zayo and SSE. Virtus constructs multiple, diverse sub-ducts to give carrier partners easy access to thousands of fiber pairs.

    The facility touts five customers upon opening. These include expansions by existing customers, among them cloud service providers C4L and Exponential-e, which already take space in the company’s other London colocation facility, London1, built in 2011.

    “We are delighted that the project has been completed on time, on budget, and that five valued customers as well as 10 carriers have signed up for London2 before the doors have opened,” Virtus CEO Neil Cresswell said.

    5:31p
    Navy Awards CGI Federal Data Center Consolidation Contract

    CGI Federal has won more of the U.S. Navy budget with a new $50.3 million contract to provide IT support services for the ongoing data center consolidation program of Space and Naval Warfare Systems Command (SPAWAR).

    The Navy is consolidating its main legacy data centers under a program called the Navy Data Center and Application Optimization (DCAO). It is attempting to consolidate and modernize 61 legacy Navy sites and approximately 5,614 servers. CGI will deliver overall project management and IT support services for the effort.

    CGI Federal is somewhat of a household name, as it was the company behind the botched launch of HealthCare.gov.

    CGI has provided support services to SPAWAR commands for more than 19 years, and the latest contract will increase the amount of the vendor’s revenue that will come out of the Navy budget. The contract includes systems engineering, integration, test and evaluation, information assurance, configuration management, logistics, training and production and integration services.

    “With the help of CGI, the Navy will work to eliminate duplicative systems, potentially saving taxpayers billions of dollars,” said Timothy Hurlebaus, CGI Federal senior vice president and National Security and Defense Programs business unit leader.

    “CGI is proud to support the Space and Naval Warfare Systems Command in its effort to reduce cost through consolidation,” said James Peake, president of CGI Federal and a retired lieutenant general. “By leveraging our existing corporate frameworks and capabilities, CGI will play a valuable role in helping the Space and Naval Warfare Systems Command achieve its goal of increasing IT efficiencies.”

    Data centers continue to be a priority issue for the federal government, which has been consolidating its sprawling critical facilities infrastructure.

    The Government Accountability Office reported on the ongoing Federal Data Center Consolidation Initiative (FDCCI) last Thursday, estimating that agencies would save as much as $3.1 billion through next year by consolidating. The savings agencies have actually reported for that period, however, amount to $876 million, and the report charged that there were significant problems with the way savings are reported.

    A bill that would make compliance with consolidation deliverables under FDCCI a matter of law passed the Senate last week.

    6:02p
    Oracle Network Fabric Gets OpenStack Neutron Plugin

    Oracle announced a new plugin for the software-defined Oracle network fabric called Virtual Networking that makes it compatible with OpenStack, the popular open source cloud architecture.

    Oracle recently added OpenStack support for its own Linux distribution, a feature that went into general availability last week. The new Neutron plugin enables Oracle’s SDN and network fabric capabilities to be created, provisioned and managed by OpenStack.

    Described as a wire-once, software-defined fabric for a heterogeneous data center, the virtualized Oracle network solution has roots going back to 2012, when the company acquired Xsigo Systems and its Fabric Director products. Oracle also acquired Corente earlier this year — an SDN company with a software-defined WAN virtualization platform aimed at carriers.

    OpenStack’s network component Neutron started in 2012 as Quantum and has plugins developed by a long list of vendors. Oracle’s use of Neutron will aid in the technology-agnostic network abstraction that OpenStack deployments will demand.
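    The practical consequence of the plugin model is that tenants keep calling the standard Neutron API while the vendor backend — Oracle's fabric in this case — handles the provisioning. Below is a generic sketch using the openstacksdk library; the cloud name refers to a clouds.yaml entry, and nothing in it is specific to Oracle's plugin.

        # Generic Neutron usage: the same API calls work regardless of which
        # vendor plugin backs the networking service. "mycloud" is a
        # placeholder entry in clouds.yaml.
        import openstack

        conn = openstack.connect(cloud="mycloud")
        net = conn.network.create_network(name="app-tier")
        subnet = conn.network.create_subnet(
            network_id=net.id,
            name="app-tier-subnet",
            ip_version=4,
            cidr="10.10.0.0/24",
        )
        print("Created network", net.id, "with subnet", subnet.cidr)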

    Neutron has matured since its rocky beginnings as Quantum; it is now present in Rackspace’s recently re-architected OpenStack private clouds and in new HP SDN-enabled switches.

    6:30p
    Healthcare.gov Extends $15 Million Terremark Cloud Contract (Again)

    This article originally appeared at The WHIR

    After suffering setbacks and downtime in November, Healthcare.gov decided to change cloud hosts from Verizon Communications’ Terremark unit to HP. In February, the government extended its contract with Terremark to give ample time for a smooth transition to HP. According to fcw.com, the contract has been extended yet again, following the initial seven-month extension and a $31 million extension in July.

    For now, Terremark will continue to host most of the site for the Centers for Medicare and Medicaid Services (CMS), including the Federally Facilitated Marketplace and the Data Services Hub. HP hosts the backup site and will be used for the small business marketplace starting in November with open enrollment. AWS is acting as a subcontractor to HP for a simplified application being launched for new users.

    The time allotted for transitioning to the new HP system is apparently still not enough, even though the process has now had almost a year.

    The contracting document for the most recent extension issued by Paul Weiss of the Office of Information Services/Consumer Information and Insurance Systems Group states, “While CMS has successfully migrated several Marketplace systems to HP, CMS has elected to continue to utilize the Verizon Terremark Data Center as the primary site for production infrastructure hosting services through the OE 2015 period for the core Marketplace systems to ensure adequate time for end to end testing of the entire Marketplace environment to validate interoperability and new functionality implemented for the OE 2015 perform properly.”

    The document also said that the HP migration timeline doesn’t allow enough time for testing before open enrollment. The contract awarded to HP for cloud and hosting services was $38 million.

    Although it’s impossible to avoid overlap when transferring providers, the government is likely paying more than it should to make the transition to HP. The extensions to the Terremark contract now exceed $70 million.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/healthcare-gov-extends-15-million-terremark-cloud-contract

    7:00p
    TeliaSonera Acquires Swedish Cloud Provider Ipeer

    This article originally appeared at The WHIR

    TeliaSonera announced on Wednesday that it has acquired Swedish cloud provider Ipeer from Applewise AB. The terms of the deal have not been disclosed.

    Launched in 2006, Ipeer offers cloud and hosting services through its data centers in Sweden and offices in Karlstad, Stockholm and Bangalore, India. It has 65 employees.

    Ipeer will join TeliaSonera’s brand Cygate, and will be a part of its business area of cloud and hosting services. Cygate offers unified communications, storage and servers, application hosting and data centers as well as cloud and web hosting. It joined TeliaSonera via acquisition in 2006.

    With a staggering number of Swedish businesses reporting security breaches, secure hosting solutions are critical for the stability of these companies.

    ”The acquisition of Ipeer means that we will be able to deliver new types of cloud services supplementing our strong offering of business solutions. Ipeer provides technology and competence in cloud and hosting services for both small and large companies. Thanks to our billion investment in infrastructure, we will be able to offer total solutions allowing our business customers to use the almost unlimited opportunities provided by the digitalization in society,” TeliaSonera EVP and head of region Sweden Malin Frenning said.

    Ipeer CEO Johan and Daniel Hedlund will remain in charge of the business unit, and will continue to be based in Karlstad.

    “This is a fantastic opportunity for Swedish cloud services. With TeliaSonera we see great possibilities and synergies for our private cloud- and hybrid services as well as a potential to develop our public cloud solutions as part of more comprehensive corporate services,” Applewise board member Daniel Hedlund said.

    Last year, Ipeer was selected as one of five companies to beta test Windows Azure Pack from Parallels. Earlier this year it launched its Swedish hosted Azure Pack cloud.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/teliasonera-acquires-swedish-cloud-provider-ipeer

     

    7:30p
    Maker of Fastest Supercomputers Cray Launches Latest System

    Cray, the company that makes some of the fastest supercomputers in the world, has launched its next-generation XC40 supercomputer and the Cray CS400 cluster supercomputer, featuring Cray DataWarp technology and Intel Xeon E5-2600 v3 processors.

    New capabilities and support for future processors and accelerators build on the company’s successful XC line of supercomputers, which has helped Cray reach new markets. Cray systems account for 51 entries on the latest Top500 list of the world’s fastest supercomputers, including three in the top 10.

    Built to drive down cost for I/O-intensive applications, Cray’s new DataWarp technology is described by the company as an applications I/O accelerator that delivers a balanced and cohesive end-to-end system architecture from compute to storage. It adds a new tier of high-performance flash SSDs directly connected to the Cray XC40 compute nodes.

    In addition to the latest Xeon processors, the XC40 and CS400 systems will have the option to come equipped with Intel Xeon Phi coprocessors and NVIDIA Tesla GPU accelerators. Cray recently launched an extreme GPU-dense system called CS-Storm, making it possible to pack 176 NVIDIA Tesla K40 GPU accelerators in 22 servers.

    Like the CS300 series, the CS400 series consists of industry-standard building-block servers built into an integrated system, available in air- or liquid-cooled architectures. The systems also feature Cray’s Aries interconnect and Dragonfly network topology.

    A new Cray XC40 will be deployed for the recently signed contract with the Korea Meteorological Administration (KMA). Cray also sold a new system to the U.S. National Nuclear Security Administration (NNSA) in the summer.

    Cray also announced that it has been awarded a $26 million contract with the Department of Defense High Performance Computing Modernization Program. The company will deploy an XC40 supercomputer and a 4-petabyte Sonexion storage system at the HPCMP’s DoD Supercomputing Resource Center (DSRC) located at the U.S. Army Research Laboratory (ARL).

    Cray said the new system will help the HPCMP run complex simulations that deliver scientific discoveries, technological advances and analyses that provide soldiers with the capabilities to execute full-spectrum operations.

    8:37p
    Connectivity News: DE-CIX Upgrades; TeliaSonera Plugs Into Telehouse Chelsea

    German Internet exchange operator DE-CIX has upgraded its flagship Apollon Internet exchange in Frankfurt with new 7950 XRS-40 routing systems from Alcatel-Lucent.

    It is the first deployment of the XRS-40 outside of North America, according to Basil Alwan, president of Alcatel-Lucent’s IP Routing and Transport division.

    The upgrade is designed to increase the density and scalability of the exchange while adding capacity to better serve customers. DE-CIX has added 80 new customers to the Apollon platform this year.

    Apollon is an Ethernet interconnection platform that consists of a 100 Gigabit Ethernet-capable switching system that supports a large number of 100 GE ports across the switching fabric.

    The XRS-40 system combines two XRS-20 chassis back-to-back, allowing DE-CIX to aggregate more customers on a single edge router, as well as keep IX traffic local. It will also reduce the amount of traffic that needs to be routed through the Apollon core nodes, which are based on XRS-20 platforms, keeping latency low and data flow manageable in its growing marketplace.

    “No other IXP continues to upgrade to the latest generation of available hardware at the same speed as we do,” Frank Orlowski, chief marketing officer at DE-CIX, said.

    TeliaSonera Expands Presence in New York With Telehouse

    TeliaSonera International Carrier (TSIC) has expanded its network into data center provider Telehouse’s New York Chelsea facility. This will be the seventh Telehouse data center with a TSIC network presence.

    TSIC owns and operates its 100G-enabled fiber network, which has more than 200 PoPs (Points of Presence) worldwide. Network expansion is an ongoing process.

    Telehouse Chelsea is located at 85 10th Avenue. It has 60,000 square feet of colocation space and offers connection to the company’s New York International Internet Exchange (NYIIX), one of the world’s largest peering networks.

    Telehouse is best known for its European and Asia Pacific footprint, making TSIC’s global network a good fit. In the U.S., Telehouse continues to grow. It has a second New York facility in Staten Island and a data center in Los Angeles, making for a combined footprint of 235,000 square feet.

    The New York metro is a key market for carriers and data center providers alike.

    “TSIC has maintained a long-term presence in the New York metro market, which to this day remains a critical aspect of our North American network,” said Ivo Pascucci, regional director, Americas for TSIC. “The addition of several strategic regional PoPs as well as the Telehouse Chelsea data center provides our customers with greater reach and network diversity throughout the New York market, as well as across the nation and into Latin America.”

    9:00p
    Digital Realty to Tackle Cooling Efficiency to Meet White House Challenge

    To deliver on its commitment to a big reduction in energy use of a block of data centers in its massive U.S. property portfolio, Digital Realty Trust is going to focus on improving data center cooling efficiency.

    After IT hardware, the cooling system is the second biggest energy consumer in a data center. Consequently, improving energy efficiency of the cooling infrastructure is the best way to make the biggest dent in the facility’s overall energy consumption.

    Digital Realty, the San Francisco-based wholesale data center giant with more than 131 properties in its global portfolio, recently joined a White House program called the Better Buildings Challenge. Overseen by the Department of Energy, it challenges participating public and private sector organizations to commit to drastic cuts in energy consumption of their buildings.

    Earlier this week, the DoE announced a list of organizations that accepted a challenge under the program aimed specifically at data centers. There are 19 organizations on the list, including two large data center providers: Digital Realty and CoreSite Realty Corp.

    A CoreSite spokesperson could not provide comment in time for publication, and we’ll be following up with them in the near future.

    Digital to select block of U.S. data centers

    By joining the challenge, Digital Realty (as well as CoreSite) committed to selecting a block of properties from its U.S. portfolio and reducing their energy consumption by 20 percent over the next 10 years. They have to cut energy use on the facilities side, so simply removing or upgrading IT gear will not count.

    David Schirmacher, senior vice president of portfolio operations at Digital Realty, said the company can select a block of facilities and set the total-energy-use baseline it will reduce from using consumption data from 2011 onward, provided power metering data from three years ago is available.

    David Schirmacher, Senior Vice President, Data Center Operations, Digital Realty, uses an in-house developed DCIM tool to measure and manage about 130 facilities in Digital Realty’s portfolio. (Photo by Colleen Miller.)

    In other words, if you have a data center whose power use was metered in 2011, you can use that data as a baseline, and if you made some efficiency improvements between then and now, you can count the resulting reduction in energy use toward the total goal.
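    A toy version of that accounting, with every figure invented for illustration, looks like this:

        # Toy accounting for the Better Buildings goal: a 20% cut in
        # facility-side energy versus a 2011 baseline. All numbers are made up.
        baseline_2011_mwh = 120_000      # metered facility energy of the block
        current_mwh = 111_000            # after improvements made since 2011
        target_mwh = baseline_2011_mwh * 0.80

        achieved = 1 - current_mwh / baseline_2011_mwh
        print(f"Reduction so far: {achieved:.1%} (goal: 20.0%)")
        print(f"Still to cut: {current_mwh - target_mwh:,.0f} MWh")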

    Effort will take dedicated staff resources, capital investment

    While using past improvement data is an option, Schirmacher said he was not suggesting the company would select it. “We’re not intending to do any paper exercises,” he said.

    Digital Realty has dedicated members on its technical operations team focused exclusively on identifying opportunities to improve energy efficiency across the portfolio. Schirmacher expects the company’s capital investment in the effort to be significant.

    Economization, VFDs, containment

    The provider has not selected the facilities it will include in the Better Buildings block, but the actual data center cooling improvements will fall into three general areas: maximizing free cooling through the use of outside air, optimizing variable frequency drives on air handlers and improving containment.

    The first measure, maximization of free cooling, involves tuning cooling systems in the facilities that have airside economizers to rely on outside air more than they do today. Schirmacher gave an example of a Digital Realty data center he visited this week in Australia, which he said ran for a whole day with all of its mechanical cooling systems offline, relying entirely on outside air.

    The second tactic is to fine-tune the way air gets pushed around the data center floor. This means tuning VFDs on computer room air handlers (CRAHs) in some facilities and installing VFDs in others that don’t have them. This is going to be the most capital-intensive part of the effort, Schirmacher said.
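    The reason VFDs are worth the capital is the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of speed, so a modest slowdown cuts power sharply. A quick illustration with a hypothetical 15 kW air handler fan:

        # Fan affinity law: power scales ~ speed^3, so running a CRAH fan at
        # 80% speed cuts its draw roughly in half. 15 kW is an assumed figure.
        full_speed_kw = 15.0
        speed_fraction = 0.80            # fan speed set by the VFD

        power_kw = full_speed_kw * speed_fraction ** 3
        reduction = 1 - power_kw / full_speed_kw
        print(f"Power at 80% speed: {power_kw:.1f} kW ({reduction:.0%} less)")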

    Finally, there is containment, which will involve minimizing cold air leakage in inappropriate spots on the raised floor and keeping hot and cold air streams from mixing.

    Digital Realty is planning to track progress of the effort using EnVision, its homegrown data center infrastructure management software.

    The DoE will validate the block of properties the company identifies, and Digital Realty will have to demonstrate that they are properly metered. DoE scientists will also have to validate the provider’s baseline energy use data and future energy savings data.

