Data Center Knowledge | News and analysis for the data center industry
Tuesday, May 10th, 2016
How to Reuse Waste Heat from Data Centers Intelligently

Data centers worldwide are energy transformation devices. They draw in raw electric power on one side, spin a few electrons around, spit out a bit of useful work, and then shed more than 98 percent of the electricity as not-so-useful low-grade heat energy. They are almost the opposite of hydroelectric dams and wind turbines, which transform the kinetic energy of moving fluids into clean, cheap, highly transportable electricity to be consumed tens or hundreds of miles away.

Image: Energetic Consulting
But maybe data centers don’t have to be the complete opposite of generation facilities. Energy transformation is not inherently a bad thing. Cradle-to-Cradle author and thought leader William McDonough teaches companies how to think differently, so that process waste isn’t just reduced, but actively reused. This same thinking can be applied to data center design so that heat-creating operations like data centers might be paired with heat-consuming operations like district energy systems, creating a closed-loop system that has no waste.
It’s not a new idea for data centers. There are dozens of examples around the globe of data centers cooperating with businesses in the area to turn waste heat into great heat. Lots of people know about IBM in Switzerland reusing data center heat to warm a local swimming pool. In Finland, data centers by Yandex and Academica share heat with local residents, replacing the heat energy used by 500-1000 homes with data center energy that would have been vented to the atmosphere. There are heat-reuse data centers in Canada, England, even the US. Cloud computing giant Amazon has gotten great visibility from reuse of a nearby data center’s heat at the biosphere project in downtown Seattle.
 Rendering of an Amazon campus currently under construction in Seattle’s Denny Triangle neighborhood
Crank Up the Temperature
There are two big issues with data center waste heat reuse: the relatively low temperatures involved and the difficulty of transporting heat. Many of the reuse applications to date have used the low-grade server exhaust heat in an application physically adjacent to the data center, such as a greenhouse or swimming pool in the building next door. This is reasonable given the relatively low temperatures of data center return air, usually between 28°C and 35°C (80-95°F), and the difficulty in moving heat around. Moving heat energy frequently requires insulated ducting or plumbing instead of cheap, convenient electrical cables. Trenching and installation to run a hot water pipe from a data center to a heat user may cost as much as $600 per linear foot. Just the piping to share heat with a facility one-quarter mile away might add $750,000 or more to a data center construction project. There’s currently not much that can be done to reduce this cost.
To address the low-temperature issue, some data center operators have started using heat pumps to increase the temperature of waste heat, making the thermal energy much more valuable and marketable. Waste heat coming out of heat pumps at temperatures in the range of 55°C to 70°C (130-160°F) can be transferred to a liquid medium for easier transport and can be used in district heating, commercial laundry, industrial process heat, and many other applications. There are even High Temperature (HT) and Very High Temperature (VHT) heat pumps capable of moving low-grade data center heat up to 140°C.
 Image: Energetic Consulting
The heat pumps appropriate for this type of work are highly efficient, with a Coefficient of Performance (COP) of 3.0 to 6.0, and the energy used by the heat pumps gets added to the stream of energy moving to the heat user, as shown in the diagram above. If a data center is using heat pumps with a COP of 5.0, running on electricity that costs $0.10 per kWh, the energy can be moved up to higher temperatures for as little as $0.0083 per kWh.
Waste heat could be a source of income for the data center. New York’s Con Edison produces steam heat at $0.07 per kWh (€0.06 per kWh), and there have been examples of combined heat-and-power systems selling waste heat to district heating systems for €0.1-€0.3 per kWh. For a 1.2MW data center that sells all of its waste heat, that could translate into more than $350,000 (€300,000) per year. That may be as much as 14% of the annual gross rental income from a data center that size, with very high profit margins.
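As a rough illustration of how a figure like that comes together, here is a minimal sketch in Python. The capture fraction and sale price are illustrative assumptions of my own (a real project sells only the heat a buyer can actually take, and prices vary by market); they are not figures from any specific facility.

# Rough estimate of annual revenue from selling data center waste heat.
# The capture fraction and sale price are illustrative assumptions only.
IT_LOAD_KW = 1200          # 1.2 MW data center
HOURS_PER_YEAR = 8760
CAPTURE_FRACTION = 0.5     # assumed share of waste heat actually recovered and sold
HEAT_PRICE_PER_KWH = 0.07  # assumed sale price, in line with the steam rate cited above

annual_heat_sold_kwh = IT_LOAD_KW * HOURS_PER_YEAR * CAPTURE_FRACTION
annual_revenue = annual_heat_sold_kwh * HEAT_PRICE_PER_KWH

print(f"Heat sold: {annual_heat_sold_kwh:,.0f} kWh per year")
print(f"Revenue:   ${annual_revenue:,.0f} per year")   # about $368,000 with these assumptions

With these placeholder numbers the estimate lands in the same range as the figure above; swapping in a district heating price of €0.1 per kWh or more pushes it considerably higher.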
Closing the Loop
There’s also the possibility of combining data centers with power plants for increased efficiency and reuse of waste heat. Not just in the CHP-data center sense described by Christian Mueller in this publication in February, or the purpose-built complex like The Data Centers LLC proposed in Delaware. Building data centers in close proximity to existing power plants could be beneficial in several ways. In the US, transmission losses of 8-10% are typical across the grid. Co-locating data centers right next to power plants would eliminate this loss and the capital expense of transporting large amounts of power.
Second, power plants make “dumb” electrons, general-purpose packets of energy that need to be processed by data centers to turn into “smart” electrons that are part of someone’s Facebook update screen, a weather model graphic output, or digital music streaming across the internet. Why transport the dumb electrons all the way to the data center to be converted?
Third, a co-located data center could transfer heat pump-boosted thermal energy back to the power plant for use in the feed water heater or low-pressure turbine stages, creating a neat closed-loop system.
There are important carbon footprint benefits in addition to the financial perks. Using the US national average of 1.23 lb CO2 per kWh, a 1.2MW data center could save nearly 6,000 metric tons of CO2 per year by recycling the waste heat.
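For reference, here is the arithmetic behind that estimate as a minimal Python sketch, using the emissions factor and facility size cited above. The assumption that the facility’s full energy draw is offset by recovered heat is illustrative, not a claim from the article.

# Back-of-the-envelope CO2 savings from recycling a 1.2 MW data center's waste heat.
# Uses the US average emissions factor cited above; assumes the full electrical
# load is recovered as heat and displaces heat that would be generated elsewhere.
IT_LOAD_KW = 1200
HOURS_PER_YEAR = 8760
LB_CO2_PER_KWH = 1.23        # US national average
LB_PER_METRIC_TON = 2204.62

annual_kwh = IT_LOAD_KW * HOURS_PER_YEAR
co2_saved_metric_tons = annual_kwh * LB_CO2_PER_KWH / LB_PER_METRIC_TON

print(f"CO2 avoided: {co2_saved_metric_tons:,.0f} metric tons per year")   # roughly 5,900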
These applications are starting to appear in small and large projects around the world. The key is to find an application that needs waste heat year round, use efficient, high-temperature heat pumps, and find a way to actively convert this wasted resource into revenue and carbon savings.
About the author: Mark Monroe is president at Energetic Consulting. His past endeavors include executive director of The Green Grid and CTO of DLB Associates. His 30 years’ experience in the IT industry includes data center design and operations, software development, professional services, sales, program management, and outsourcing management. He works on sustainability advisory boards with the University of Colorado and local Colorado governments, and is a Six Sigma Master Black Belt.
How Storage Admins Can Stay Relevant in the Age of Software-Defined Storage

Rob Whiteley is the VP of Marketing at Hedvig.
Good news: The reality of today’s business means managing and storing data is more critical than ever before.
Bad news: This doesn’t necessarily benefit storage admins.
More good news: If you’re a storage admin, then there’s a lot you can do to increase your relevancy.
Make no mistake: the storage admin role — a staple of all medium-to-large IT shops for the past few decades — is changing. If you’re currently a storage admin and increasingly nervous about these changes and their implications for your job security, don’t lose hope. But be ready to evolve your skillset to remain relevant in an IT landscape that is changing so quickly it can be hard to keep up.
To understand how we arrived at where we are today, a bit of history is helpful.
As few as five years ago, server, storage, and networking were distinct skillsets with equally distinct responsibilities. But once virtualization gathered momentum, propelled in particular by VMware, the hard edges that separated these skillsets softened. VMware enabled administrators to set networking and storage policies for each virtual server from within a single infrastructure management environment.
Virtualization abstracts the hardware, fundamentally changing how infrastructure is deployed within data centers. As companies march toward 100-percent virtualized data centers, virtualization admins — usually folks with a server background, but with enough network and storage know-how to be dangerous — now have the ability to set policies and manage operations across all infrastructure silos.
Goodbye to Storage Stovepipes and Silos
With the proliferation of virtualization and cloud computing, many organizations have reorganized their IT departments, moving away from the old stovepipe/silo model, in which server, storage, and networking were separate, to a more horizontal model. Putting server, storage, and networking into one team improves communication and collaboration. Companies that embrace this model improve troubleshooting time and focus more on providing infrastructure services to the business. Even so, it often still feels like much of the same: the distinct specialties simply now reside in a more collaborative environment.
We’ve seen this dynamic persist even among our own customers – organizations that have already embraced the value of software-defined storage. For example, an IT architect at one of our customers brought in our product as part of a private cloud initiative. His goal was to help move the company beyond silos and traditional IT. All went well until the software-defined storage pilot broadened.
Questions regarding platform ownership and responsibility for each piece of hardware soon cropped up. With software-defined storage touching multiple parts of the business, who now owns the actual storage infrastructure? Who configures it? If IT needs new hardware to power their storage, does the server team or the storage team buy it? Advanced capabilities like client-side caching only further blur the lines. Now, a major part of the “storage infrastructure” is a service actually running at the compute tier!
Finally, the architect told his coworkers to step back. He reframed the issue: don’t consider the boxes as either application servers or storage servers. Instead, just think of them as the hardware needed to power the storage software that helps everyone do their jobs. People with storage-specific jobs in today’s environment need to move beyond thinking that a physical piece of hardware is their responsibility. Instead, they must understand that they support a storage service within their organization. Thus, if a particular hardware or software component is critical to the success of that storage service, then, yes, the storage team owns it! It’s all about thinking end to end (from user to data), not top to bottom (from app to server to storage to network).
Two Paths Forward for Storage Admins
Consider a recent prediction by IDC that software-defined infrastructure and cloud will eliminate 25 percent of traditional IT operations job titles by 2019. In short, we’re going to see more of the same: convergence and integration as software and hardware layers become harder to differentiate. It’s not so much that the job of the storage admin will recede into obscurity, as it is that the responsibilities and skills of the “old” job will evolve.
Moving forward, as more enterprises adopt software-defined storage across their environments, I see two paths — and they don’t necessarily conflict:
- Path #1: The DevOps path. Software-defined infrastructure enables enterprises to do what AWS taught everyone: make all infrastructure more developer oriented. One can now easily create a self-service interface with which a developer can log in and directly configure storage. Here, storage admins are not immune to the DevOps phenomenon. DevOps teams will be the interface between infrastructure and applications. Storage admins need to evolve to understand how storage and data fit in building, shipping, and running applications (as Docker would say).
What storage admins should do: A tactical piece of advice is to learn a programming language like Python or Go. These are key languages for automating and programming IT infrastructure, and learning them is akin to being certified in configuring traditional storage arrays. Infrastructure coding is the new device configuration of the software-defined era; a brief sketch of what that looks like follows below.
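For instance, a self-service storage request from code might look something like the following minimal sketch. The REST endpoint, payload fields, and token are hypothetical placeholders for illustration, not any particular vendor’s API.

# Minimal sketch: provisioning a volume through a hypothetical self-service storage API.
# The endpoint, payload fields, and token below are placeholders, not a real product's API.
import requests

API_URL = "https://storage.example.com/api/v1/volumes"    # hypothetical endpoint
API_TOKEN = "changeme"                                     # placeholder credential

def create_volume(name: str, size_gb: int, replicas: int = 3) -> dict:
    """Request a new virtual disk with the given size and replication factor."""
    payload = {"name": name, "size_gb": size_gb, "replicas": replicas}
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    volume = create_volume("ci-build-cache", size_gb=200)
    print("Provisioned:", volume)

The point is less the specific calls than the habit: storage requests become code that can be reviewed, versioned, and automated like any other part of the delivery pipeline.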
- Path #2: The analytics path. Consider companies concluding (rightly, in our view) that data is a critical business asset. These companies need to rethink how they protect, analyze, and monitor that asset. On this path, the infrastructure is more focused on supporting business insights and improving business outcomes. Think of it as the Big Data analogue to the DevOps path. Here, storage admins need to take a business view of why the data exists and help provide actionable information. These insights will come from both the storage infrastructure and third-party monitoring tools.
What storage admins should do: The skillset required to support this type of arrangement demands more than simply knowing whether the storage system is up or down. It means digging into the data, understanding how it is being analyzed, and monitoring its health. Learn tools like Hadoop, Spark, and Graphite, which help you expand beyond storage administration and become a data custodian; a small monitoring sketch follows below.
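As a concrete starting point, the sketch below pushes a few storage health metrics into Graphite over its plaintext protocol (one "path value timestamp" line per metric, sent to the Carbon listener on TCP port 2003). The Graphite host and the metric names are hypothetical placeholders.

# Minimal sketch: sending storage metrics to Graphite via the plaintext protocol.
# The Graphite host and the metric names are hypothetical placeholders.
import socket
import time

GRAPHITE_HOST = "graphite.example.com"   # hypothetical Carbon relay
GRAPHITE_PORT = 2003                     # Carbon plaintext listener

def send_metrics(metrics: dict) -> None:
    """Send a batch of metric_path -> value pairs, timestamped with the current time."""
    now = int(time.time())
    lines = [f"{path} {value} {now}" for path, value in metrics.items()]
    payload = ("\n".join(lines) + "\n").encode("utf-8")
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT), timeout=10) as sock:
        sock.sendall(payload)

if __name__ == "__main__":
    send_metrics({
        "storage.cluster1.capacity_used_pct": 71.4,
        "storage.cluster1.iops": 48210,
        "storage.cluster1.read_latency_ms": 1.8,
    })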
Change is the Only Constant for Storage Admins
There’s an old saying that you can’t step into the same river twice, meaning, roughly, that there’s no constant except change. I think there is little doubt that the software-defined data center is transformative and will only increase the velocity of change. At the same time, it wasn’t so long ago that nearly everyone assumed outsourcing would wipe out IT teams. Of course, that hasn’t happened.
The last 20 years have taught us that it’s possible to outsource execution, but outsourcing strategy is rarely — if ever — successful. The strategy of storing, automating, programming, monitoring and analyzing data is only becoming more critical and central to enterprises. The foreseeable future is a hybrid one, with both on-premises and public-cloud infrastructure. The role of the storage admin is critical in this hybrid world, albeit modified along the paths we described.
Global Data Center Connectivity Update

Telia Carrier Plugs into CyrusOne
CyrusOne, a US data center provider with a high concentration of capacity in Texas, its home state, has partnered with the Stockholm-based Telia Carrier, formerly TeliaSonera International Carrier. The agreement will bring access to Telia’s global IP network to customers in CyrusOne facilities.
Telia owns and operates a network with numerous points of presence in the US and Europe, and some in the Middle East and Asia. CyrusOne has more than 30 data centers, most of them concentrated in Texas and the Midwest. It also has facilities in London and Singapore.
Tata Links Digital Realty in Oregon to Asia
Indian network operator Tata Communications has brought its network to a Digital Realty data center in Hillsboro, Oregon, a hub for access to the numerous transpacific submarine cables that land there.
Tata operates an extensive submarine cable network, linking countries in Asia Pacific, Middle East, Africa, Europe, North America, and South America.
Digital is one of the world’s largest data center providers. Its recent focus has been on reeling in more and more business from major cloud providers, which it expects to attract enterprise customers that need hybrid infrastructure consisting of both cloud services and data center capacity they control. Global connectivity services like Tata’s are especially important to cloud providers, who serve customers around the world.
Taiwan’s Largest Telco Connects to Seattle Hub
Centeris, a new data center provider that recently launched what it bills as a transpacific connectivity hub outside of Seattle, has partnered with the US subsidiary of Chunghwa Telecom, Taiwan’s largest telecommunications company, for connectivity between the hub and Chunghwa’s network end points in Asia.
The partnership establishes new connectivity options between the US and Asia via the Trans-Pacific Express, New Cross Pacific, and FASTER submarine cable systems. Centeris partners in the US provide connectivity from the Seattle facility to other key interconnection hubs on the West Coast.
Hurricane Electric Strikes Deal to Grow Network Reach in Asia
Silicon Valley-based data center and network connectivity provider Hurricane Electric has partnered with Telekom Malaysia Berhad (TM) to expand broadband services in emerging markets in Asia. TM owns stakes in multiple Asia Pacific submarine cable systems, both existing and under development, and the partnership calls for it to leverage those assets to extend HE’s reach.
HE operates data centers in the US and has built the world’s largest IPv6-native internet backbone.
XO Connects Tenth vXchnge Data Center
XO Communications has brought its network into the Philadelphia data center operated by vXchnge. This is the tenth vXchnge data center where XO has connected.
XO operates an extensive inter-city network in the US and Canada, as well as high-density coverage in major metros. It also links to international markets in South America, Europe, Africa, Middle East, and Asia Pacific.
Hibernia to Augment Network in Southern US and Mexico
Transtelco, which operates a network on both sides of the US-Mexico border, has partnered with Hibernia Networks to expand its reach. Hibernia’s network spans the globe, with an especially high concentration of points of presence in the US and Europe. It also links to the Middle East and Asia.
Microsoft to Launch Cloud Data Centers in Korea

Microsoft announced plans to build cloud data centers in South Korea as the race among the largest public cloud providers, including Amazon and Google, to expand the global reach of their cloud infrastructure continues.
Microsoft Azure continues to lead in terms of the number of physical locations its customers can choose to host their virtual infrastructure in. Twenty-four Azure regions are available today, and, including the upcoming Korea regions, the company has announced eight more that are underway.
While ahead of the competition in global reach, in Korea Microsoft is catching up to Amazon, which launched a Seoul cloud region in January. Today, there are nine Azure regions in Asia, including China, Hong Kong, Singapore, India, and Japan, and two in Australia.
Google, which has fewer dedicated cloud data center locations than both of its largest competitors in the space, has only one in Asia, a three-zone region in Taiwan.
After a brief slowdown in data center spending in 2015, the big three cloud providers have ramped up cloud data center construction this year.
Microsoft reported a 65-percent increase in data center spend year over year in the first quarter. Google said in March it would add 12 new data center locations to expand its cloud infrastructure. Amazon increased capital spending by 35 percent in the first quarter and attributed a big portion of the increase to investment in AWS.
The new Azure data center region will be located in Seoul, Takeshi Numoto, Microsoft’s corporate vice president for cloud and enterprise, wrote in a blog post announcing the plans.
He also announced that new data centers hosting Azure and Office 365 have come online in Toronto and Quebec City, Microsoft’s first cloud data centers in Canada.
Ford, Microsoft Lead $253M Round in EMC’s Cloud Spinoff Pivotal

By Talkin’ Cloud
Cloud platform startup Pivotal announced that it expects to close a $253 million Series C financing round led by new investors Ford Motor Company and Microsoft. Previous investors GE, EMC and VMware also participated in the round.
The deal is expected to close in May, subject to regulatory approval.
Pivotal has seen growth in the enterprise since its launch three years ago; the company works with top US financial institutions and telecommunications providers.
Ford started working with Pivotal last year to build its Ford Smart Mobility connected vehicle platform by leveraging Pivotal Cloud Foundry and Pivotal Big Data Suite.
“Here at Pivotal we are partnering with customers to create a world where the largest and most admired companies can build and run software like Google, Uber or any venture-backed startup. This investment will accelerate our global reach to bring our unique software development methodology and modern cloud platform and analytics tools to every forward-thinking CEO,” Rob Mee, Pivotal CEO said in a statement. “We are excited to announce Ford and Microsoft as strategic partners to help introduce Pivotal’s transformative cloud and analytics software to the next thousand customers.”
Pivotal recently announced its first quarter 2016 revenue of $83 million, up 56 percent year over year.
“Expanding our business to be both an auto and mobility company requires leading-edge software expertise to deliver outstanding customer experiences,” said Mark Fields, Ford president and CEO. “Our investment in Pivotal will help strengthen our ability to deliver these customer experiences at the speed of Silicon Valley, including continually expanding FordPass – our digital, physical and personal mobility experience platform.”
This first ran at http://talkincloud.com/cloud-computing/cloud-startup-pivotal-lands-253-million-led-ford-microsoft
RightScale Cuts Own Cloud Costs by Switching to Docker

Less than two months ago, the engineering team behind the cloud management platform RightScale kicked off a project to rethink the entire infrastructure its services run on. The team decided to package as much of the company’s backend as possible in Docker containers, a method of deploying software whose popularity has spiked over the last couple of years, becoming one of the most talked-about technology shifts in IT.
It took the team seven weeks to complete most of the project, and Tom Miller, RightScale’s VP of engineering, declared it a success in a blog post Tuesday, saying the team achieved both of its goals: reduced cost and accelerated development.
There are two Dockers. There is the Docker container, which is a standard, open source way to package a piece of software in a filesystem with everything that piece of software needs to run: code, runtime, system tools, system libraries, etc. There is also Docker Inc., the company that created the open source technology and that has built a substantial set of tools for developers and IT teams to build, test, and deploy applications using Docker containers.
In the sense that a container packages an application that can be moved from one host to another, Docker containers are similar to VMs. Docker argues that they are a more efficient, lighter-weight way to package software than VMs, since each VM has its own OS instance, while Docker runs on top of a single OS, and countless individual containers can be spun up in that single environment.
Another big advantage of containers is portability. Because containers are standardized and contain everything the application needs to run, they can reportedly be easily moved from server to server, VM to VM (they can and do run in VMs), cloud to cloud, server to laptop, etc.
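To make the lightweight, programmable nature of containers concrete, here is a minimal sketch using the Docker SDK for Python (the docker package). It assumes a local Docker daemon and a small public image, and it is an illustration only, not part of RightScale’s setup.

# Minimal sketch: starting containers programmatically with the Docker SDK for Python.
# Assumes the `docker` package is installed and a local Docker daemon is running.
import docker

client = docker.from_env()   # connect to the local Docker daemon

# Run a short-lived container and capture its output; image and command are arbitrary examples.
output = client.containers.run("alpine:3.18", ["echo", "hello from a container"])
print(output.decode().strip())

# Containers can also be started detached and managed like lightweight processes.
worker = client.containers.run("alpine:3.18", ["sleep", "300"], detach=True)
print("started:", worker.short_id)
worker.stop(timeout=1)
worker.remove()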
Google uses a technology similar to Docker containers to power its services, and many of the world’s largest enterprises have been evaluating and adopting containers since Docker came on the scene about two years ago.
Read more: Docker CEO: Docker’s Impact on Data Center Industry Will Be Huge
RightScale offers a Software-as-a-Service application that helps users manage their cloud resources. It supports all major cloud providers, including Amazon, Microsoft, Google, Rackspace, and IBM SoftLayer, and key private cloud platforms, such as VMware vSphere, OpenStack, and Apache CloudStack.
Its entire platform consists of 52 services that used to run on 1,028 cloud instances. Over the past seven weeks, the engineering team containerized 48 of those services in an initiative they dubbed “Project Sherpa.”
They only migrated 670 cloud instances to Docker containers, because that’s how many instances ran dynamic apps. Static apps – things like SQL databases, Cassandra rings, MongoDB clusters, Redis, Memcached, etc. – wouldn’t benefit much from switching to containers, Miller wrote.
The instances running static apps now support containers running dynamic apps in a hybrid environment. “We believe that this will be a common model for many companies that are using Docker because some components (such as storage systems) may not always benefit from containerization and may even incur a performance or maintenance penalty if containerized,” he wrote.
As a result, the number of cloud instances running dynamic apps was reduced by 55 percent, and the cloud infrastructure costs of running those apps came down by 53 percent on average.
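Worked through with the instance count given above, those percentages look like this; the inputs come from RightScale’s post, while the arithmetic is mine.

# Working through Project Sherpa's reported savings using the figures cited above.
dynamic_instances_before = 670    # instances that ran dynamic apps before the migration
instance_reduction = 0.55         # reported 55% reduction in dynamic-app instances
cost_reduction = 0.53             # reported 53% average cost reduction for those apps

dynamic_instances_after = round(dynamic_instances_before * (1 - instance_reduction))
print(f"Dynamic-app instances: {dynamic_instances_before} -> roughly {dynamic_instances_after}")
print(f"Cloud cost for those apps: about {cost_reduction:.0%} lower")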
RightScale has also already noticed an improvement in development speed. The standardization and portability that containers offer help developers with debugging, with working on applications they have no prior experience with, and with flexibility in accessing integration systems. Product managers can check out features under development without getting developers involved.
“There are certainly more improvements that we will make in our use of Docker, but we would definitely consider Project Sherpa a success based on the early results,” Miller wrote.