Data Center Knowledge | News and analysis for the data center industry

Tuesday, August 26th, 2014

    11:58a
    VMware, Docker, Google And Pivotal Team For Enterprise Container Adoption

    VMware, Docker, Google and Pivotal are teaming up to help enterprises run and manage container-based applications on a common platform at scale. The collaboration will enable enterprises to leverage existing VMware infrastructure to run and manage applications whether they live in a container, a virtual machine, or a container within a virtual machine inside a platform as a service (the Russian nesting doll of the virtual world).

    Containers are a technology that packages an application in a way that makes it portable across different data center or cloud environments. They cleanly separate applications from infrastructure so that applications can move easily, enabling “write once, run anywhere.” VMware wants to help enterprises run their containerized applications on their VMware infrastructure.
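    To make the idea concrete, here is a minimal sketch using the Docker SDK for Python; the image and command are illustrative, but the point is that the same image runs unchanged on any host with a Docker Engine, whether that host is bare metal, a VM or a cloud instance.

        # Minimal container portability sketch using the Docker SDK for Python
        # (pip install docker). The image and command are illustrative.
        import docker

        client = docker.from_env()                   # talk to the local Docker Engine

        client.images.pull("alpine", tag="latest")   # "write once": pull a public image
        output = client.containers.run(              # "run anywhere": any Docker host will do
            "alpine:latest", "echo hello from a container", remove=True)
        print(output.decode())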

    Docker, the hottest startup in the container space, has already garnered tremendous support from the likes of Google, IBM, Microsoft and Red Hat, among many others, and is said to be close to completing a $40 to $75 million funding round.

    The group is collaborating in a variety of ways, all of which aim to make container adoption in the VMware-based enterprise easier.

    Docker and VMware will collaborate on interoperability between their two product portfolios. They will work on enabling Docker Engine in VMware workflows, from build to deployment, across VMware vSphere and VMware vCloud Air. The two will also collaborate on Docker-related open source projects libswarm, libcontainer and libchan. Further areas of interoperability work include Docker Hub with VMware vCloud Air, VMware vCenter Server and VMware vCloud Automation Center.

    VMware has joined the Kubernetes community and will make Kubernetes’ patterns, APIs and tools available to enterprises. Kubernetes, Google’s open source container manager, deploys and manages containers across a fleet of servers. VMware has contributed code to bring Kubernetes to VMware vSphere, its cloud computing virtualization operating system, to make it easier for enterprises to use containers. Google and VMware will also work together to bring the pod-based networking model of Kubernetes to Open vSwitch to enable multi-cloud integration.
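    For a sense of what “deploying and managing containers across a fleet of servers” looks like in practice, here is a small sketch using the official Kubernetes Python client; it assumes a kubeconfig already points at a running cluster and simply lists which pods the scheduler has placed on which nodes.

        # Sketch using the official Kubernetes Python client (pip install kubernetes).
        # Assumes ~/.kube/config points at an existing cluster.
        from kubernetes import client, config

        config.load_kube_config()
        v1 = client.CoreV1Api()

        # Kubernetes schedules containers (grouped into pods) onto a fleet of nodes;
        # here we simply enumerate what is running where.
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            print(pod.metadata.namespace, pod.metadata.name, "->", pod.spec.node_name)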

    VMware, Pivotal and Docker will collaborate on enhancing the Docker libcontainer project with capabilities from Warden, a Linux Container technology originally developed at VMware for Cloud Foundry.

    “The Pivotal CF team is a fan of Docker,” wrote Ferran Rodenas, Pivotal engineer. “We firmly believe PaaS and containers are a good match, and Docker makes it super simple to build and share a consistent container image.”

    12:30p
    Data Erasure Technology: Ensuring Security, Savings and Compliance

    Markku Willgren is President of US Operations for Blancco, where he focuses on the data security sector. His expertise encompasses asset disposal security and process efficiency, regulatory compliance and data erasure technology.

    In data centers, swapping failed drives from a host system with new ones is a standard process for optimal operation of storage arrays and servers. Failed drives are typically stored, physically destroyed or sent back to OEMs with data intact, but these approaches present problems.

    Stored or shipped drives are vulnerable to data breaches if a drive is lost or stolen. Physical destruction may not provide tamper-proof reports for regulatory compliance and can accumulate costs from not exercising OEM drive warranties.

    For secure and cost-effective operations, data centers should fully erase failed drives on-site for safe transport to the OEM. Turnkey hardware appliances that use advanced data erasure software to sanitize failed drives offer data centers enhanced security, cost savings, regulatory compliance and an efficient data disposal process.

    Enhancing data security and compliance

    Data centers can quickly accumulate failed “amber light” drives, as an estimated 2 percent of drives are replaced yearly. The data on these drives remains at risk unless they are erased promptly.

    A Compliance Standards poll found that lost or stolen devices affected nearly 58 percent of U.S. enterprises with 10,000 or more employees. For example, Coca-Cola lost sensitive data on 74,000 employees this year due to theft during the IT decommissioning phase.

    Erasing data from failed drives is critical, as up to 80 percent of them are still operational and vulnerable to data breach. Many industry standards and regulations like healthcare (HIPAA, HITECH), finance (GLBA, SOX, FACTA) and retail (PCI DSS) require data sanitization and proof of erasure for each drive in the form of auditable reports. Non-compliance may result in large fines, civil liability and costly damage to brand image.

    Hardware appliances that sanitize drives in-house using advanced data erasure ensure data integrity and regulatory compliance with audit-ready reports, and enable data centers to safely return failed disks to OEMs within RMA timeframes.

    Reaping cost savings from OEM warranties

    A data center replacing 200 drives annually could save $240,000* with effective RMA processing. Quick and safe return of failed drives is possible with erasure policies and equipment in place.
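    The math behind that figure is straightforward, using the assumptions stated in the footnote below (a $1,500 average replacement drive cost and 80 percent of failed drives recoverable through secure erasure and warranty return):

        # Worked version of the savings estimate, using the footnoted assumptions.
        drives_replaced_per_year = 200
        avg_drive_cost = 1_500        # USD, assumed average new drive cost
        returnable_share = 0.80       # share of failed drives that can be wiped and RMA'd

        annual_savings = drives_replaced_per_year * returnable_share * avg_drive_cost
        print(f"${annual_savings:,.0f}")   # $240,000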

    In-house erasure of loose drives supports a secure chain of custody for transport to an OEM, without requiring the specialized and expensive carriers needed for disks with intact data. Third-party on-site erasure or building your own erasure stations can be prohibitive in cost and labor, so an appliance pre-loaded with advanced data erasure software can offer a cost-effective, long-term solution.

    Implementing an improved, unified erasure process

    Data centers gain control of loose drive decommissioning by using the right processes and equipment. A policy for prompt disk erasure that prohibits failed drive accumulation helps avoid data breaches from lost or stolen disks. Such a policy is complemented by equipment that automates the in-house erasure process.

    For security and efficiency, a hardware appliance with advanced data erasure software simultaneously erases SATA, SAS, Fibre Channel and SCSI drives from all major OEMs, as well as SATA solid state drives (SSDs). The turnkey appliance supports efficient, regular “disk housekeeping.” Automatically created erasure reports provide critical information, like disk serial number, make and model, and erasure details. Reports can be saved to a centrally hosted asset management suite for a complete audit trail.
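    As a rough illustration, one such per-drive report might be structured as follows; the field names and values are hypothetical, not any particular vendor's schema, but they capture the serial number, make and model, and erasure details described above:

        # Hypothetical shape of a per-drive erasure report record (illustrative only).
        import json
        from datetime import datetime, timezone

        report = {
            "serial_number": "WD-WCC4E1234567",   # hypothetical drive
            "make": "Western Digital",
            "model": "WD1003FZEX",
            "erasure_standard": "NIST 800-88",
            "result": "success",
            "completed_at": datetime.now(timezone.utc).isoformat(),
        }

        # In practice a record like this would be shipped to the centrally hosted
        # asset management suite to form the audit trail.
        print(json.dumps(report, indent=2))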

    Getting the most from failed drives

    Failed drives are a fact of life, but they don’t have to become a security risk, compliance nightmare or drain on personnel. With low upfront and ongoing costs, hardware appliances help control loose drive security using an easy, efficient process.

    Backed by advanced data erasure software, an appliance conforms to rigorous erasure technology guidelines, such as DoD 5220.22-M and NIST 800-88. The ROI for an appliance is quickly achieved, especially when compared with the costs and risks of poor warranty drive return rates, a compliance failure or data breach.

    *Assuming an average new drive cost of $1,500 and that 80 percent of failed drives can be securely wiped.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    IBM SoftLayer Opening Melbourne, Australia Data Center

    IBM is launching a new SoftLayer data center in Melbourne, Australia. The company has committed $1.2 billion to expanding its global cloud footprint by 15 data centers, and Melbourne will be the sixth data center opened since that commitment was made.

    The new data center replicates the build of its other data centers around the globe. It will have initial capacity of 5,000 servers with room to grow to up to 15,000 servers. SoftLayer’s full portfolio of services will be available out of Melbourne, all on one integrated platform.

    The company is building in places where it already has an audience but no local data center. Australian businesses including Rightship, the Loft Group and HotelsCombined, as well as several tech startups, are already using IBM’s cloud services in the country.

    “Australia is an important market for IBM and SoftLayer. We are seeing a strong appetite for cloud in this market, particularly towards the hybrid cloud model,” said Lance Crosby, SoftLayer CEO. “We are investing in Australia, combining and strengthening our existing cloud capabilities.”

    SoftLayer’s reach continues to expand internationally under IBM. IBM acquired SoftLayer for about $2 billion last year, and SoftLayer now forms the basis and infrastructure of its cloud play. Since then, SoftLayer has added over 6,000 customers. The expansion plan is to open 15 new SoftLayer data centers globally, with IBM recently opening SoftLayer data centers in London, Hong Kong and Toronto.

    There are several cloud competitors in Australia including AWS, Dell, Telstra and Dimension Data.

    IBM has recently partnered with one of Australia’s largest IT distributors, Avnet Technology Solutions, to build a robust business partner network in Australia to deliver SoftLayer services to the mid-market.

    SoftLayer adds to bare metal offerings

    SoftLayer is also rolling out new hourly-billed bare metal server offerings across its entire data center footprint. The pre-configured servers are available in four configurations and are deployed in under 30 minutes.

    Bare metal offers the raw performance of a physical server and is the evolution of dedicated servers. SoftLayer has been offering bare metal for many years, but it continues to shrink deployment times through automation. Competition in the space is also heating up, with Rackspace recently launching OnMetal, its bare metal offering.

    SoftLayer also offers public cloud in addition to bare metal. “We’re huge proponents of choice,” said Marc Jones, VP Product Innovation. “Bare metal and virtualized public cloud each have a front seat. There’s great workloads for both.”

    Does bare metal equal cloud? The lines are blurry

    The strictest definition of cloud requires it to be multi-tenant, virtualized and billed like a utility. These bare metal offerings are single-tenant servers. However, provisioning time for bare metal has dropped drastically and the servers are billed hourly, further blurring the lines. Bare metal has a performance advantage over public cloud, and the flexibility advantage that public cloud once held is shrinking.

    “The idea of cloud is to help businesses solve their IT challenges with on-demand resources,” said Philbert Shih, managing director of Structure Research. “But every business has unique challenges, so to be a true solution, cloud services need to simultaneously be flexible, diverse, and integrated. These hourly bare metal servers take the advantages that the SoftLayer platform already offered—seamlessly integrated bare metal and virtual in one platform, control system, and API—and add even more flexibility, without sacrificing any of the integration.”

    The four pre-configured bare metal servers are:

    • Intel 1270, single processor, 8GB RAM, 2 x 1TB SATA storage, $0.46/hour: a good, solid server for standard workloads like a web server or caching node
    • Intel 1270, 32GB RAM, 2 x 400GB SSD, $1.09/hour: the SSD drives make this a performance box with better I/O, good for running a database

    The last two configurations are for running Big Data workloads such as Hadoop:

    • Intel 2620, 2 processors, 32GB RAM, 4 x 1TB SATA, $1.24/hour
    • Intel 2650, 2 processors, 64GB RAM, 4 x 1TB SATA, $1.32/hour

    Customers may choose from four base configurations with CentOS, Debian, FreeBSD, or Ubuntu operating system installed. The base configuration is deployed within 30 minutes, after which the server may be further customized with additional OS or application installations.
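    For budgeting purposes, the hourly rates above translate roughly into the following monthly figures, assuming a server runs around the clock for an average 730-hour month:

        # Rough monthly cost of each pre-configured bare metal server at the
        # hourly rates listed above (assuming an average 730-hour month).
        HOURS_PER_MONTH = 730

        hourly_rates = {
            "Intel 1270, 8GB RAM, 2 x 1TB SATA":    0.46,
            "Intel 1270, 32GB RAM, 2 x 400GB SSD":  1.09,
            "Intel 2620, 32GB RAM, 4 x 1TB SATA":   1.24,
            "Intel 2650, 64GB RAM, 4 x 1TB SATA":   1.32,
        }

        for config, rate in hourly_rates.items():
            print(f"{config}: ~${rate * HOURS_PER_MONTH:,.0f}/month")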

    2:00p
    Free Cooling Concepts for Your Data Center

    The modern data center has truly evolved into a complex system designed for a variety of applications. Now, there are more requirements around density, constant uptime and efficiency all while keeping costs down.

    Cooling costs can account for more than half of a data center’s total annualized operating cost as energy costs and IT power consumption continue to rise. As such, data center operators are pursuing various strategies to increase their data center cooling efficiency. One of these strategies is leveraging free cooling—an approach to lowering the air temperature in a building or data center by using naturally cool air or water instead of mechanical refrigeration.

    With so many advancements in cooling efficiency technologies, the concept of free cooling is very real and can be applied to your data center.

    In this whitepaper from CES Group, you’ll find out how:

    • Free cooling can offer impressive energy cost savings for data centers
    • New recommendations for both thermal and dew point ranges mean free cooling can be used in more climates
    • Several available methods of free cooling provide data center operators with the opportunity to select the option that best meets their specific system requirements

    When outdoor air conditions permit, some data centers use the cooling tower to cool the condenser water to a temperature low enough to pre-cool the data center loop without using the chiller. Alternatively, cold water from local rivers, lakes or ocean sources can be circulated into a data center to achieve the same result. Systems using this approach are often called water-side economizers, and they can either cool room air or directly liquid-cool IT equipment cabinets using rear-door heat exchangers or other systems.

    In both cases, mechanical cooling would only be needed when the outdoor air temperature becomes too high for free cooling systems like air or water-side economizers to be effective. Consequently, the working life of installed refrigeration systems can be significantly extended.
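    A deliberately simplified sketch of the control decision involved is shown below; the temperature thresholds are illustrative, not drawn from the whitepaper, and a real plant would key off wet-bulb temperature, approach temperatures and humidity limits.

        # Simplified free-cooling mode selection. Thresholds are illustrative only;
        # a real control system uses wet-bulb temperature, approach temperatures
        # and humidity limits from the site's design.
        FREE_COOLING_MAX_C = 10.0   # hypothetical limit for running on the economizer alone
        PARTIAL_MAX_C = 16.0        # hypothetical limit for economizer pre-cooling

        def cooling_mode(outdoor_temp_c: float) -> str:
            if outdoor_temp_c <= FREE_COOLING_MAX_C:
                return "economizer only (chiller off)"
            if outdoor_temp_c <= PARTIAL_MAX_C:
                return "economizer pre-cooling + chiller trim"
            return "mechanical cooling only"

        for temp in (4.0, 13.0, 24.0):
            print(f"{temp:>5.1f} C -> {cooling_mode(temp)}")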

    Download this whitepaper today to learn about the types of free cooling that are available within the data center as well as the various methods of data center cooling. This includes:

    • Strainer Cycle
    • Plate & Frame Heat Exchanger
    • Refrigerant Migration

    You’ll also learn about free cooling throughout the seasons, including winter operations, seasonal limitations, high ambient operations and mid-season operating conditions.

    Remember, reductions in cooling system use also mean drastic reductions in data center power consumption and service/repairs, lowering the energy and maintenance costs for facility owners. If local climatic conditions allow continuous use of air or water-side economizers, mechanical cooling systems may be eliminated entirely.

    2:00p
    What the Internet of Things Really Means for Your Business

    There’s been a lot of hype around the idea of the Internet of Things (IoT). Organizations are certainly hearing the message that there are more devices connected, more users utilizing cloud-based resources and a new data-on-demand generation requiring constant connectivity. The world and how we connect with it is constantly changing. This also means incorporating once-dormant electronics into the cloud. This is the concept behind IoT: complete interconnection on an intelligent cloud platform.

    At the recent Cisco Partner Exchange, an interesting topic was brought up: cloud-connected recycle bins. Not just the bins, but the trucks as well. This created a highly efficient waste management system with very little overhead and a direct impact on the company’s bottom line. Trucks knew which bins were full and needed to be emptied, and could be dynamically rerouted accordingly. Suddenly, these “things” that were never cloud facing are creating direct business efficiencies.

    There are other advancements as well. For example, Tesla already supports HTML5 on its center console. Soon, these kinds of capabilities will expand to even more interconnected IoT end-points. Sounds cool, right? But what does that mean for your business? How does it impact your organizational model? Ultimately, how can a major organization prepare its own infrastructure for so many interconnected things?

    • Creating a new business and cloud plan. Accept the fact that the world will become completely interconnected and you’re on your way to creating a next-generation business plan. Leaders in the enterprise world are the ones that listen to their end-users, keep pace with their own technological capabilities, and always innovate. Understand that more devices and end-points will be coming online. Know that the way users process information will continue to change and evolve. Finally, create an aligned technology and business vision that can take you five to even 10 years out.
    • Incorporating logical optimizations. There are so many ways outside of adding hardware to improve your IT infrastructure. Logical optimizations like allowing VMs to utilize RAM as a storage repository allow you to conserve resources and optimize your entire environment. The great piece here is that you begin to abstract physical resources and allow them to go to the application which needs them the most. In the future this could be applied to a number of different use-cases spanning devices, users and locations.
    • Integrating automation. With so many new things connecting into your data center, management and control operations must become more efficient. This is where automation through policies and virtual services can really help. Right now, entire physical platforms can be dynamically re-provisioned to allow a new set of users to come online from a completely different time zone. Or, an intelligent load-balancing service automatically points incoming users to a new data center when a primary location has reached maximum connections (see the sketch after this list). The point here is that the admin can now perform proactive tasks to keep the business running optimally.
    • Working around a new type of user and device. In a few years it’ll be common for household appliances and other various everyday devices to connect into a cloud environment and process information. Throughout all of this the user will continue to evolve and change their methods of consuming data. The challenge will be for the business to keep up. One way to be ready is to create a business and IT environment that is capable of dynamic change.
    • New methodologies around security. There is absolutely no doubt that security will continue to play a huge role around IoT technologies. Next-generation security solutions are already gearing up for a number of new types of connections, devices, and services being pushed through the modern data center. New types of virtual security appliances can be dynamically provisioned throughout a data center to support very specific services like IPS/IDS, DLP and even application firewall deployments. These new security features can span from a private data center all the way to a public cloud and everywhere in between. Remember, no security platform is ever 100 percent effective. The best way to stay secure in an IoT world is to be proactive, constantly test your own systems, and incorporate data and device security best practices whenever possible.
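    The load-balancing policy mentioned in the automation item above can be illustrated with a toy sketch; the site names and connection limits are hypothetical and stand in for whatever an intelligent load-balancing service would track:

        # Toy sketch of a connection-ceiling failover policy. Names and limits are
        # hypothetical; a real service would pull these from monitoring in real time.
        SITES = {
            "dc-primary":   {"max_connections": 10_000, "current": 9_998},
            "dc-secondary": {"max_connections": 10_000, "current": 2_450},
        }

        def route_new_connection() -> str:
            """Return the data center a new user should be sent to."""
            for name, site in SITES.items():   # primary is listed first
                if site["current"] < site["max_connections"]:
                    site["current"] += 1
                    return name
            raise RuntimeError("all sites at capacity")

        print(route_new_connection())   # dc-primary (still below its ceiling)
        print(route_new_connection())   # dc-primary reaches its ceiling here
        print(route_new_connection())   # dc-secondary takes the overflow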

    The number of devices coming online every year is very impressive. In June, Cisco released its Visual Networking Index report, which paints a clear picture of the emerging IoT trend.

    • The number of devices connected to IP networks will be nearly twice as high as the global population in 2018. There will be nearly three networked devices per capita by 2018, up from nearly two in 2013. Accelerated in part by the increase in devices and their capabilities, IP traffic will reach 17 GB per capita by 2018, up from 7 GB per capita in 2013.
    • Globally, mobile data traffic will increase 11-fold between 2013 and 2018, growing at a CAGR of 61 percent and reaching 15.9 exabytes per month by 2018 (a quick check of these figures follows this list).
    • Traffic from wireless and mobile devices will exceed traffic from wired devices by 2018. By 2018, wired devices will account for 39 percent of IP traffic, while Wi-Fi and mobile devices will account for 61 percent of IP traffic. In 2013, wired devices accounted for the majority of IP traffic at 56 percent.
    • Global Internet traffic in 2018 will be equivalent to 64 times the volume of the entire global Internet in 2005. Globally, Internet traffic will reach 14 gigabytes (GB) per capita by 2018, up from 5 GB per capita in 2013.
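    The mobile-traffic figures above are internally consistent: an 11-fold increase over the five years from 2013 to 2018 works out to roughly a 61 percent compound annual growth rate.

        # Quick consistency check of the Cisco VNI mobile-traffic figures.
        growth_multiple = 11
        years = 5

        cagr = growth_multiple ** (1 / years) - 1
        print(f"{cagr:.1%}")   # ~61.5%, matching the stated 61 percent CAGR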

    The term “IoT” does have a lot of marketing and hype around it. And yes, some organizations are still confused by what it really means to them. However, it’s important to understand that IoT will impact users, organizations, and how we communicate with everyday devices in general. The home is becoming more interconnected – your car can stream Pandora radio, your recycling bin is letting you know it’s full, and your refrigerator just placed an order for milk and eggs. Over the next couple of years our world is going to get a lot more interconnected.

    2:54p
    Switch SUPERNAP Partners With 6fusion For Better Usage And Cost Transparency

    Switch SUPERNAP has partnered with 6fusion for better cost transparency. 6fusion’s technology and utility methodology will be integrated into SUPERNAP’s technology environment to provide cost transparency through 6fusion’s patented unit of measure, the Workload Allocation Cube (WAC).

    The partnership will form the cornerstone of the recently announced spot market for Infrastructure-as-a-Service, where WAC will serve as the standard unit of measure for contracts. Colocation providers continue to look for ways to enable cloud for their customers.

    6fusion enables the delivery of IT-as-a-Utility. It shows organizations what they are actually using and spending on their IT infrastructure. Through its WAC unit of measure, customers see the Total Cost of Consumption (TCC) of their business services. This helps to optimize both cost and energy efficiency as well as forecast future use.

    The two companies will focus on enabling enterprises and cloud providers to get a better grip on infrastructure usage and cost. Cloud operators will be able to offer an “apples-to-apples” transaction language, while usage trends and patterns critical to the business can be identified and used for forward planning.

    “The combination of SUPERNAP’s ultra-scale environment and 6fusion’s capability to measure and quantify IT infrastructure as a utility will deliver unmatched value to buyers and sellers in the industry,” said Jason Mendenhall, Switch SUPERNAP Executive Vice President of Cloud.

    SUPERNAP 8 is the newest facility on the Switch campus. The 350,000 square foot facility was built using pre-fabricated modular components manufactured by Switch, and features innovations in cooling and power distribution. In February it became the first colocation facility to earn Tier IV Construction certification from the Uptime Institute.

    “6fusion’s unique technology and vision for an open market, combined with the depth and breadth of the Switch SUPERNAP ecosystem, makes this partnership one to watch,” said William Fellows, Vice President at 451 Research.

    4:07p
    VMware Launches EVO: RAIL Hyperscale Converged Infrastructure

    At its annual VMworld event in San Francisco Monday, VMware launched the EVO family of hyper-converged infrastructure offerings, beginning with EVO: RAIL, an appliance for streamlining the deployment of a software-defined data center.

    The emphasis with this product offering is on the software defined data center. It is not a hardware bundle, but instead relies on VMware partners to offer the converged hardware infrastructure to work with the VMware software stack. VMware controls the complete software stack, makes management and support simple and easy, and leverages the benefits of its hypervisor.

    EVOlutionary Building Blocks

    VMware says that EVO:RAIL is a 100 percent VMware software stack, and will drastically reduce the time it takes to deploy virtual machines – within minutes of powering on the appliance. The company said that the single SKU offering covers hardware, software, and support and services costs, with partners acting as the single point of contact support. VMware has partnered with Dell, EMC, Fujitsu, Inspur, Net One Systems and Supermicro on the appliance.

    Formerly code-named Marvin, EVO: RAIL starts with smaller building block options compared to the larger converged infrastructure alternatives that HP and VCE offer. The VMware-branded convergence offering is friendly to larger partner offerings, but it does look to challenge smaller startups that have built on top of the VMware hypervisor and integrated virtual storage area networks into their products. VMware EVO: RAIL integrates its own Virtual SAN (VSAN) storage pools, certified on hardware from a slew of storage vendors.

    Pre-engineered as a rapid and repeatable build, the new EVO: RAIL engine is supported by integrated compute, network, storage and management software.

    Its goal is simplification, from a single ordering unit to a single management interface and support. By simplifying all parts of the stack, the new offering is aimed at mid-market and enterprise use cases, but it can easily fit VDI or remote office/branch deployments as well.

    Dell announced a virtual infrastructure edition for EVO: RAIL, as well as a VDI-specific engineered solution for EVO: RAIL based on VMware Horizon 6.

    VMware is building the EVO family to be a suite of products, of which RAIL will be the first and smallest building block. VMware Chief Technologist Duncan Epping says that RAIL stands for the smallest unit of measurement for a Hyper-Converged Infrastructure Appliance (HCIA) offering, and that it is not a reference architecture.

    VMware says a single appliance will support around 100 virtual machines or 250 virtual desktops, with a Virtual SAN datastore capacity of 13TB. New appliances added to an EVO: RAIL cluster will be automatically discovered. The converged infrastructure offering is based on vSphere and is meant to co-exist with, or serve as an on-ramp to, VMware vCloud Air, formerly vCloud Hybrid Service.
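    Those per-appliance figures lend themselves to simple back-of-the-envelope sizing; the sketch below, with hypothetical workload numbers, picks however many appliances are needed to satisfy both the VM count and the storage footprint:

        # Back-of-the-envelope sizing from the stated per-appliance figures:
        # roughly 100 VMs (or 250 virtual desktops) and a 13TB VSAN datastore each.
        import math

        VMS_PER_APPLIANCE = 100
        VSAN_TB_PER_APPLIANCE = 13

        def appliances_needed(vm_count: int, storage_tb: float) -> int:
            """Appliances required to cover both the VM and storage footprints."""
            return max(math.ceil(vm_count / VMS_PER_APPLIANCE),
                       math.ceil(storage_tb / VSAN_TB_PER_APPLIANCE))

        # Hypothetical workload: 350 VMs needing 40TB of usable VSAN capacity.
        print(appliances_needed(vm_count=350, storage_tb=40))   # 4 appliances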

    “We currently have over 40,000 VMware virtual machines running at Rackspace. We were very pleased with the results we experienced in VMware’s Hyper-Converged Infrastructure Appliance Early Adopter Program. VMware EVO: RAIL opens up a new and exciting opportunity to serve our customers,” said Pranav Parekh, Product Manager, Managed Virtualization at Rackspace.

    VMware EVO: RAIL is expected to be available from partners starting in the second half of 2014.

    5:31p
    Data Center Jobs: McKinstry

    At the Data Center Jobs Board, we have a new job listing from McKinstry, which is seeking an Account Executive – Facility Management in Seattle, Washington.

    The Account Executive – Facility Management is responsible for client relations; providing or developing expertise in one or more industry verticals (data centers, healthcare, industrial, etc.), including an in-depth understanding of the general business issues, facility infrastructure and operational issues within that vertical; initiating and developing consultative relationships with potential and existing clients in those verticals; and establishing productive, professional relationships with key personnel in assigned and targeted client accounts. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

     

    7:44p
    Amazon Planning $1.1bn Data Center Project In Central Ohio

    Amazon is planning a major data center in Ohio. Columbus Business First (CBF) has surfaced plans for a massive project in Central Ohio. The e-commerce and cloud computing giant’s data center development unit Vadata Inc. was approved for two state tax credits of over $81 million from the Ohio Tax Credit Authority for the data center project.

    The approved tax credit is a 15-year, 100 percent sales tax exemption. The facility is expected to bring 120 jobs and an estimated $1.1 billion investment over several years. The jobs will have a combined annual payroll of $9.63 million with an average salary of $80,000. The 15-year state income tax credit will generate $4 million in savings for Vadata and parent Amazon.
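    The stated payroll figures hang together: 120 jobs at an average salary of $80,000 comes to $9.6 million, in line with the $9.63 million combined annual payroll in the filing.

        # Sanity check of the payroll figures in the tax filing.
        jobs = 120
        avg_salary = 80_000   # USD

        print(f"${jobs * avg_salary:,}")   # $9,600,000, close to the stated $9.63 million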

    Not much information has surfaced beyond the tax documents filed with Ohio, first uncovered by CBF.

    “The state has not released any information regarding a site for the project beyond saying it’s targeted for Central Ohio,” writes CBF reporter Brian R Ball. “It also did not name Amazon (NASDAQ: AMZN) as the company behind Vadata, although several other sources made the connection.”

    The project is still extremely secretive, and Amazon has yet to comment. However, the fact that the project and tax credit are listed under the e-commerce giant’s data center unit means it is most definitely a data center.

    The size of the tax credit and the number of jobs created suggests the project will be immense. There are currently 10 AWS Regions worldwide with four in the U.S. – perhaps soon to be five.

     

    8:50p
    NRDC: Multi-Tenant Data Centers Need To Play Bigger Energy Efficiency Role

    Data centers are among the fastest-growing users of electricity in the U.S., consuming an estimated 91 billion kilowatt-hours of electricity in 2013. Annual consumption is projected to increase by roughly 47 billion kilowatt-hours by 2020.
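    In absolute terms, that projection puts U.S. data center consumption near 138 billion kilowatt-hours by 2020, an increase of roughly half over the 2013 figure:

        # The NRDC projection in absolute terms.
        kwh_2013 = 91e9            # estimated 2013 consumption, kWh
        kwh_added_by_2020 = 47e9   # projected additional annual consumption by 2020

        kwh_2020 = kwh_2013 + kwh_added_by_2020
        print(f"{kwh_2020 / 1e9:.0f} billion kWh "
              f"(+{kwh_added_by_2020 / kwh_2013:.0%} over 2013)")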

    And while the industry has made progress in cutting energy waste, many wasteful practices persist, according to a Natural Resources Defense Council (NRDC) report.

    “Data centers have an important role to play in making the economy more efficient,” said Pierre Delforge, NRDC’s director of high-tech energy efficiency. “We see great leaders in the cloud space, but we need the whole industry to participate.” Delforge believes small, medium, corporate and multi-tenant data centers are still squandering huge amounts of energy and that the social responsibility extends to all data centers.

    The two big hurdles to better energy efficiency are under-utilization of servers and misalignment of incentives, the latter particularly found within multi-tenant facilities. Incentives need alignment between those who make decisions affecting efficiency and those who pay the energy bills. The NRDC recommends steps to accelerate the pace and scale of energy savings.

    Industry needs to better align incentives

    While multi-tenant facilities are seen as more energy efficient than server closets, the NRDC says the model creates problems when it comes to lining up incentives, and doesn’t do enough to incentivize better monitoring and metrics.

    “This is an interesting area because it hasn’t had the awareness from a public perspective,” said Delforge. “People still think of traditional data centers. The efficiency debate hasn’t been considered as much in the multi-tenant space. This sector is growing rapidly, faster than the rest of the market. It presents both opportunities and barriers. When you move from onsite to colocation, you’re leveraging economies of scale and expertise and gain energy efficiency. However, in the long term it adds a layer of misaligned incentives.”

    There is a division of accountability and incentives within any organization, but the problem is even more acute in multi-tenant facilities. The data center owner pays the electricity bill, while tenants pay for blocks of power regardless of how much they actually use. Delforge says that because many providers still bill by power block instead of by usage, customers have little incentive to be efficient. Smaller customers need individual metering so that better energy efficiency translates into a financial benefit for multi-tenant customers.
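    A hypothetical comparison makes the incentive gap concrete; all of the figures below are made up for illustration, not taken from the report:

        # Hypothetical block-billing vs. metered-billing comparison (figures invented).
        reserved_kw = 100          # contracted power block
        actual_kw_before = 70      # draw before an efficiency project
        actual_kw_after = 55       # draw after the project
        price_per_kw_month = 150   # USD per kW per month, illustrative

        block_cost = reserved_kw * price_per_kw_month   # identical before and after
        metered_saving = (actual_kw_before - actual_kw_after) * price_per_kw_month

        print(f"block billing saves:   $0 (tenant pays for {reserved_kw} kW regardless)")
        print(f"metered billing saves: ${metered_saving:,} per month")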

    Multi-tenant data center contracts reflect the cost of power and cooling, providing little motivation for customers to invest in more efficient equipment. IT purchasers separately specify which equipment tenants should buy, adding more misalignment.

    In addition to metered billing, another notable suggestion is data center stakeholders should develop a “green lease” contract template to more easily incentivize energy savings.

    Recommendations include better transparency, higher server utilization

    The report finds that the typical computer server operates at no more than 12 to 18 percent of capacity, and up to 30 percent of servers are “comatose” and no longer needed. Projects end and business processes change, but these servers remain plugged in and continue to draw power. Better monitoring means better information and metrics on entire data center operations, which can in turn be used to better line up incentives all along the ownership chain.

    The NRDC suggests that the industry adopt a simple metric, such as average server CPU utilization, to help resolve the underutilization problem.

    “I’d like to see a more sophisticated metric adopted eventually, but it lacks simplicity for now,” said Delforge. “Let’s start with something simple that will provide a good indicator to start with. CPU utilization is not perfect, as it might be misleading depending on the applications and doesn’t account for other factors such as IO, but it’s a starting point.”
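    A minimal sketch of what such a fleet-level metric might look like is shown below; the hostnames, utilization numbers and the “comatose” cut-off are all illustrative:

        # Fleet-wide average CPU utilization plus a flag for likely "comatose" servers.
        # Sample data and thresholds are illustrative only.
        from statistics import mean

        # (hostname, average CPU utilization over the reporting period, in percent)
        fleet = [("web-01", 22.0), ("web-02", 15.5), ("db-01", 41.0),
                 ("batch-07", 1.2), ("legacy-03", 0.4)]

        COMATOSE_THRESHOLD = 2.0   # percent; illustrative cut-off

        avg_util = mean(util for _, util in fleet)
        comatose = [host for host, util in fleet if util < COMATOSE_THRESHOLD]

        print(f"fleet average CPU utilization: {avg_util:.1f}%")
        print(f"candidate comatose servers: {comatose}")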

    Industry groups like the Green Grid have been developing metrics for years. Other metrics like Power Usage Effectiveness (PUE) are already used heavily. However, the call is for a simple, universal server utilization metric.

    Other causes of wasted energy identified include:

    • Installing equipment to handle peak load and underutilizing equipment for the majority of the time
    • Limited deployment of virtualization, which reduces the number of physical servers needed
    • Shortsighted procurement practices: more efficient servers might carry a higher initial price tag, but are significantly cheaper over their lifetimes
    • Energy efficiency is low priority for multi-tenant facility customers: A focus on security, reliability and uptime often undermines interest in energy efficiency

