Data Center Knowledge | News and analysis for the data center industry

Friday, April 18th, 2014

    12:30p
    DCeP-tive Metrics Are Not Productive

    Mark Monroe is Chief Technology Officer and VP of DLB Associates. He recently served as Executive Director of The Green Grid, an IT industry and end user consortium focused on resource efficient data centers and business computing environments.

    Think back to a few weeks ago, when March Madness was gripping the U.S. with basketball playoff fever. The ultimate measure of basketball productivity, total points, was carefully measured and tallied using transparent, standard methods and highly objective reviews of whether a production event (a “shot”) resulted in 0, 1, 2 or 3 points. No one argued about how to measure total productivity, whether one had to measure how many times and how fast the teams moved the basketball up and down the court, or how many basketballs were stored on the racks behind each team’s bench, not being used but ready in case anyone needed quick access to them.

    The teams’ totals were objectively compared on a public display (the scoreboard) using a simple standard established 123 years ago. The team that is most productive in the time allotted is allowed to continue, while the less productive team has to go out of business.

    This way of looking at the basketball competition puts it in terms similar to the business world, where companies must compete, mostly with less objective, less transparent, and less standard methods of measuring performance than sporting events.

    Wouldn’t it be great if there were something as simple for the information technology industry: counting the number of times something happened, adding it up on a scoreboard, and then having everyone understand what had happened and what should be done to improve? And wouldn’t it be great if the measurement did what everyone really wants to do: compare themselves to the competition to see who “won”?

    Data Center energy Productivity (DCeP) Gets The Nod

    The Green Grid released a memo from their Global Harmonization Taskforce (GHT)[1] in the middle of March, describing new agreements that the team of global experts had reached over the last 18 months of discussion and negotiation. (Side note: I am very familiar with the people and workings of the Global Harmonization Taskforce: I was on the board of directors of The Green Grid for 5 years, and was Executive Director of the organization from January 2011 to August 2012. I attended GHT meetings and applaud their efforts to come to agreement on many aspects of data center metrics, from Power Usage Effectiveness (PUE) to Green Energy Coefficient (GEC). It’s a venerable group and does great work.)

    One of the metrics endorsed in the GHT memo of March 14, 2014 is Data Center energy Productivity, abbreviated DCeP.

    DCeP was first described six years ago in The Green Grid’s publication WP#13, A Framework for Data Center Energy Productivity, released in April 2008.[2] The original paper and the March 2014 memo describe DCeP as a metric that “quantifies the amount of useful work a data center produces relative to the amount of energy it consumes.”

    DCeP = Useful Work Produced / Total Data Center Energy Consumed over the assessment window, where Useful Work = Σ V_i × U_i(t, T) × T_i, summed over the tasks initiated during that window.

    The paper and memo describe a complicated equation that takes into account the relative value of transactions, the time-based decay of that value, and a normalization factor for the transactions. These three parameters are set arbitrarily, based on each business’s understanding of its IT operations. A business can pick any measure of utility and value for any transaction in its IT infrastructure, and use that to develop a DCeP value that applies to its business.
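    To make the arithmetic concrete, here is a minimal sketch of how such a calculation might look. The task values, time-decay rule, completion flags, and energy figure below are all illustrative assumptions, not Green Grid reference values.

        # Minimal DCeP sketch: useful work divided by energy consumed.
        # All numbers below are made up for illustration.

        def utility(elapsed_s, deadline_s):
            """Assumed time-decay: full value if the task finishes by its deadline, none after."""
            return 1.0 if elapsed_s <= deadline_s else 0.0

        # Each tuple: (relative value V_i, elapsed seconds, deadline seconds, completed T_i)
        tasks = [
            (1.0, 40, 60, True),    # e.g. a web transaction
            (5.0, 300, 600, True),  # e.g. a batch report, worth more to the business
            (1.0, 90, 60, True),    # finished, but too late to count under this decay rule
        ]

        useful_work = sum(v * utility(t, d) * (1 if done else 0)
                          for v, t, d, done in tasks)

        total_energy_kwh = 3.5  # assumed facility energy over the assessment window
        dcep = useful_work / total_energy_kwh
        print(f"DCeP = {dcep:.2f} units of useful work per kWh")  # 1.71

    The point of the sketch is that everything interesting, the values, the decay rule, even what counts as a “task,” is chosen by the business, which is exactly the flexibility, and the comparability problem, discussed below.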

    Peter Judge, a London-based IT journalist and consultant, wrote a great piece in Techweek Europe[3] in response to the announcement. In it, Peter wonders whether YouTube would measure productivity in terms of “Kitten videos per kWh,” a conclusion some interpretations of DCeP would support. I’d even go so far as to say “Kittens per kWh” might be the right measure of productivity for YouTube!

    The Necessity of Simplicity

    Despite the constant flak that PUE takes from critics, it is the most used and most effective efficiency metric in the IT industry today. Other contenders, from CADE to FVER, have not gained the coverage, reporting, or public improvement that PUE has.

    [Chart: reported PUE decline, from the first LBNL study to Facebook’s real-time display]

    From the first study by Lawrence Berkeley National Laboratory (LBNL) in 2007 to Facebook’s and eBay’s real-time online meters, there has been real change in the industry because of the use of this measure: reliably reported PUE has dropped from an average of 2.2 to 1.08, a 93 percent reduction in wasted energy.
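    For readers checking the math, the 93 percent figure holds if “wasted energy” is read as facility overhead (everything beyond the IT load) at a fixed IT load, since PUE is total facility energy divided by IT energy. A quick sketch of that arithmetic, under that assumption:

        # Overhead per unit of IT energy is PUE - 1 (assuming a fixed IT load).
        overhead_before = 2.2 - 1.0    # 1.20 units of overhead per unit of IT energy
        overhead_after = 1.08 - 1.0    # 0.08 units
        reduction = (overhead_before - overhead_after) / overhead_before
        print(f"Overhead reduction: {reduction:.0%}")  # 93%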

    2:14p
    Quanta Offers APS Appliance for Microsoft Analytics Platform

    Quanta QCT announced a collaboration with Microsoft to ship an appliance solution for customers with demanding data warehousing, big data and business intelligence needs. The APS appliance, running the Microsoft Analytics Platform System, integrates SQL Server Parallel Data Warehouse (PDW) software and optional HDInsight software for Apache Hadoop processing on a certified hardware platform that includes a suite of tools for working with data.

    As a prebuilt solution, each rack is configured to customer requirements, with as many as nine nodes and a choice of three disk capacities. A base unit has 113 terabytes of storage capacity, which can be doubled or tripled by adding 2TB or 3TB drives. Three scale units can fit into one rack for more than 1 petabyte of usable storage. As many as six racks can be configured for even more compute resources and usable storage.
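    The published capacity figures hang together if one assumes the 113-terabyte base unit is what gets tripled with 3TB drives and that three such scale units share a rack; the drive counts and rack layout in this back-of-the-envelope check are assumptions, not Quanta specifications.

        # Rough check of the capacity claims, under the assumptions stated above.
        base_unit_tb = 113                  # base scale unit
        scale_unit_tb = base_unit_tb * 3    # ~339 TB when tripled with 3TB drives
        rack_tb = scale_unit_tb * 3         # three scale units per rack
        print(rack_tb)                      # 1017 TB -- just over 1 petabyte usable
        print(rack_tb * 6)                  # 6102 TB, roughly 6.1 PB across six racks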

    “Customers want a big data and business analytics platform without integration or time-to-value risk,” said Mike Yang, general manager, Quanta QCT. “The collaboration between Microsoft and Quanta solves those problems by giving our customers a ready-to-ship, pre-integrated solution that is both economical and scalable.”

    Quanta’s innovative Quad-enclosure server design offers as many as nine compute nodes per rack and can scale to as many as six racks. Raw disk capacity scales to 1.2 petabytes per rack. The appliance is designed with redundancy throughout for high availability. Each unit is composed of three compute nodes to provide fault-tolerant operation.

    “With Analytics Platform System, it’s now easier than ever to deploy a complete data warehousing solution that meets the most complex Big Data and business intelligence needs,” said Eron Kelly, general manager, SQL Server product marketing, Microsoft. “The Quanta APS Appliance fills a need in the market with customers who need to move fast, manage risk and plan for future data growth.”

    2:22p
    CyrusOne Adds Data Halls in Dallas and Houston

    CyrusOne continues its expansion streak in 2014, adding a new data hall in its Carrollton (Dallas market) data center and two data halls in Houston West II. The company is adding 60,000 square feet and 6 megawatts in Carrollton and 42,000 square feet and 6 megawatts in Houston.

    In Carrollton, the company started construction on the addition of 60,000 square feet and 6 megawatts of power capacity. At full capacity, the Carrollton data center will have 400,000 square feet of raised floor space, 60,000 square feet of Class A office space and up to 80 megawatts of power.

    “Due to strong customer demand for our data center product offering, we made the decision to add 60,000 square feet of colocation space with 6MW of critical capacity to this facility,” said John Hatem, senior vice president, design and construction, CyrusOne. “As one of our largest facilities, the Carrollton location illustrates our Massively Modular engineering and design philosophy. We can quickly scale our data center footprint to meet the requirements of our customers to handle their growing mission-critical production and disaster recovery needs.”

    Major Growth in Houston

    The Houston expansion brings total capacity there to 160,000 square feet and 18 megawatts. The company is adding two new data halls. The Houston facility is focused on seismic exploration computing for the oil and gas industry.

    “Our decision to add 42,000 square feet and 6MW of critical power was driven by strong customer demand for data center space in a campus capable of delivering superior performance and the ability to support high density compute environments,” said John Hatem, senior vice president, design and construction, CyrusOne.

    CyrusOne is on an overall expansion streak. While it continues to grow in Texas, the main hub of its data center infrastructure, it also recently began construction of a brand new facility in Ashburn, VA.

    The company began construction of its third Houston West facility in March. The Houston West III data center will include 428,000 square feet of raised floor capacity, 86,000 square feet of Class A office space, and up to 96 megawatts of critical load upon completion. It is also expanding its Austin campus.

    CyrusOne has a footprint of 1 million square feet of space in 25 data centers across the United States, Europe, and Asia, with much of its capacity concentrated in Texas, Ohio and Phoenix.

    2:58p
    Red Hat and Dell Partner on OpenStack

    After launching a collaboration late last year, Red Hat and Dell announced the availability of Dell Red Hat Cloud Solutions powered by Red Hat Enterprise Linux OpenStack Platform. These are pre-tested solutions designed to let customers adopt a private cloud that enables cost reduction and improved agility.

    Proof of concept configurations are designed for customers looking to explore OpenStack capabilities, research deployment options, pilot application deployments, and begin the development of an OpenStack cloud environment. Pilot configurations are designed for testing cloud applications, and for customers beginning a production environment. For customers seeking massive scale-out designs, Dell Cloud Services will engage with customers to design and architect OpenStack-based clouds, building upon the years of experience Dell and Red Hat have with OpenStack technologies.

    “Cloud innovation is happening first in open source, and what we’re seeing from global customers is growing demand for open hybrid cloud solutions that meet a wide variety of requirements,” said Paul Cormier, President, Products and Technologies at Red Hat. ”Whether it is enterprise-grade private cloud solutions, or DevOps solutions to build applications for both public and private clouds, through our expanded work with Dell, we’re focused on delivering the flexible solutions that meet these varied needs.”

    The two companies will also work within the OpenShift community to build OpenShift solutions that support enterprise application developers looking for efficient ways to make their current and future data and applications more portable and accessible. Dell’s OpenShift integration is a necessary step toward Dell and Red Hat co-engineering next-generation Linux container enhancements from Docker. These solutions help companies maintain compatibility with PaaS offerings in enterprise environments, and help developers write applications in any language that remain portable across public, private and hybrid cloud environments.

    The OpenShift-based solution will provide support for customers to use their own frameworks and databases through Docker-based images and cartridges, with the goal of enabling integration with any other platform that supports Docker, including public clouds. Dell and Red Hat have partnered for more than 14 years to bring global customers value by collaborating on Red Hat solutions across Dell’s enterprise offerings.

    “Dell is a long-time supporter of OpenStack and this important extension of our commitment to the community now will include work for OpenShift and Docker, which we think will help customers with choice in cloud resources, and application development and optimization,” said Sam Greenblatt, Vice President, Enterprise Solutions Group Technology Strategy at Dell. ”We are building on our long history with open source and will apply that expertise to our new cloud solutions and co-engineering work with Red Hat.”

    3:30p
    Cloud Roundup: VMware, IBM, Oracle

    VMware launches vCloud Hybrid service for disaster recovery, The Hartford selects IBM to build private cloud infrastructure, and Oracle adds database backup and storage cloud services to its portfolio of Platform and Infrastructure services.

    VMware delivers vCloud Hybrid Service for disaster recovery. VMware (VMW) announced VMware vCloud Hybrid Service – Disaster Recovery, a new cloud-based disaster recovery (DR) service that provides a continuously available recovery site for VMware virtualized data centers. The service is available immediately in all five vCloud Hybrid Service data centers in the U.S. and U.K. The service continuously replicates virtual machines to a virtual data center within vCloud Hybrid Service, with a recovery point objective (RPO) of up to 15 minutes. VMware partners can offer the new DR service as a value-added solution to customers who have already deployed infrastructure, or as a comprehensive solution for net-new deployments, creating bigger opportunities. “Everyone wants enterprise-class disaster recovery, but without the complexity and cost of traditional DR,” said Jerry Sanchez, vice president of Hosting Operations, Planview. “We know how to administer vSphere, so the simplicity and familiarity of administering vCloud Hybrid Service – Disaster Recovery is just as easy, making this service a natural fit for us. Typically, DR services require expensive professional services to install and maintain. With the VMware solution, the data and applications are simply mirrored in vCloud Hybrid Service™, ready to go whenever trouble strikes, and with the benefits of a cloud-based economic model.”

    IBM selected by The Hartford to move IT to the cloud.  IBM announced a new six-year technology services agreement to implement a new service model that includes a private cloud infrastructure. The Hartford will move to a private cloud-based infrastructure on IBM’s PureFlex System. Under the $500 million agreement, IBM will also provide a number of other services related to mainframe, storage, backup and resiliency. The two companies will also partner on the creation of a joint innovation committee to foster collaboration on strategic initiatives. The project will leverage the expertise of both firms, market insights and research to build new business models and competitive capabilities that will enhance The Hartford’s ability to anticipate and meet the needs of customers and agents. “As The Hartford continues to execute on its strategic plan, we are making significant technology investments to increase operational effectiveness and improve our competitiveness,” said Andy Napoli, president of Consumer Markets and Enterprise Business Services at The Hartford. “The partnership with IBM will help The Hartford implement a strategic technology infrastructure that will provide us with greater agility and offer us more flexibility and transparency as we continue to grow our businesses.”

    Oracle launches new backup and storage cloud services. Oracle (ORCL) announced the availability of Oracle Database Backup Service and Oracle Storage Cloud Service. As a part of its portfolio of enterprise-grade cloud solutions, the new services expand the portfolio of Platform and Infrastructure Services, all available on a subscription basis. Oracle Database Backup Service provides a simple, scalable, and low-cost Oracle Database cloud backup solution, and can be an integral part of multi-tier database backup and recovery strategies. It is tightly integrated with RMAN commands for seamlessly and securely performing backup and recovery operations between on-premises Oracle Databases and the Oracle Cloud.  Oracle Storage Cloud provides a secure, scalable, and reliable object storage solution that enables organizations to effortlessly store, access, and manage data in the cloud. It is API compatible with OpenStack Swift and provides access to data through REST and Java APIs. ”To remain competitive in today’s highly connected business environment, organizations are increasingly adopting and building new cloud-based solutions. There is also a huge push to migrate existing on-premises workloads to the public cloud and support portability between on-premises and cloud environments,” said Chris Pinkham, senior vice president, Product Development at Oracle. “To help customers achieve these goals, Oracle has further expanded its comprehensive set of enterprise-grade infrastructure cloud services. The new services are based on open standards, integrated to work together seamlessly, and designed to support full portability between on-premises and cloud environments.”
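    Because the storage service is described as API-compatible with OpenStack Swift, a client written against the standard Swift REST interface should carry over. Below is a minimal sketch of a Swift-style object upload; the storage URL, container name, file name, and auth token are hypothetical placeholders, not Oracle-documented values, and a real client would obtain the endpoint and token from the service’s authentication step rather than hard-coding them.

        import requests

        # Hypothetical placeholders -- not Oracle-documented endpoints or credentials.
        storage_url = "https://storage.example.com/v1/MyAccount"
        token = "AUTH_tk_example"

        # Swift-style object upload: PUT /v1/{account}/{container}/{object}
        with open("db-backup-2014-04-18.dmp", "rb") as payload:  # placeholder file
            resp = requests.put(
                f"{storage_url}/backups/db-backup-2014-04-18.dmp",
                headers={"X-Auth-Token": token,
                         "Content-Type": "application/octet-stream"},
                data=payload,
            )
        resp.raise_for_status()  # Swift returns 201 Created on a successful upload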

    7:17p
    Microsoft to Build New $1.1 Billion Data Center in Iowa

    Microsoft is building even bigger in Iowa. The company today unveiled plans to invest a whopping $1.1 billion in a new data center campus in West Des Moines, where it already operates a large server farm. The project will be one of the largest in the history of the data center industry, with plans calling for 1.2 million square feet of facilities across a 154-acre property.

    The announcement ends the suspense about the identity of the mystery company behind Project Alluvion, the latest in a series of stealthy “codename” projects that have made Iowa a major data center destination. Combined with the existing Microsoft data center, the Alluvion project brings the total investment in the state to just under $2 billion.

    According to plans, Microsoft will take over 154 acres of land and construct a data center of more than 1.2 million square feet. The Project Alluvion site is approximately 8 miles east of the current Microsoft data center in West Des Moines; negotiations for the land are what had previously put the project on hold.

    During its meeting Friday, the Iowa Economic Development Authority Board approved a $20.3 million sales tax rebate for the project, available until 2021. The state incentives come on top of the $18 million in incentives already promised by West Des Moines. The value that Microsoft brings back to the city translates to around $8 million a year in property taxes. The project will create 84 jobs when fully built out, with 66 of those jobs required to pay a wage of $24.32 an hour.

    Microsoft originally built in West Des Moines in 2008, followed by a major $677.6 million expansion last year. That expansion was part of the $1.4 billion that data center projects brought to the state in 2013.

