Data Center Knowledge | News and analysis for the data center industry
Monday, July 20th, 2015
12:00p | Cleaning Up Data Center Power is Dirty Work

The tech sector’s investment in renewables is on the rise, growing faster than any other sector’s, and some of the biggest investments are in connection with massive data center projects.
Just this month, Facebook announced a 200 MW wind-power contract for its upcoming Texas data center, and Amazon said it had invested in a wind farm of similar capacity in North Carolina to address the energy use of its expansive data center cluster in Virginia. Announcements of investment in huge renewable-energy projects by web-scale data center operators like Facebook, Amazon, Google, Microsoft, and Apple are becoming more and more frequent as they try to deliver on their carbon-neutrality commitments.
The household-name tech giants are not the only ones in their sector investing in renewables, but they are in the group of companies that drive the bulk of the spending. “There are a lot of pretty big purchasers, and then it falls off pretty dramatically,” Jenny Heeter, senior energy analyst at the US government’s National Renewable Energy Laboratory, said.
Heeter was the lead author of a report on renewable energy investment in the ICT sector published by NREL today. Only very rough estimates of the sector’s energy consumption are available, since companies aren’t required to disclose that kind of data. Some, however, disclose it voluntarily through various industry and government programs, and estimates used in NREL’s paper are based largely on that voluntarily disclosed information.
Citing one study by Ghent University in Belgium, the report pegged the global ICT sector’s total energy consumption in 2012 at about 900 million MWh. The estimate includes data centers (29 percent), telecommunications infrastructure (37 percent), and end-user devices (34 percent).
List of Biggest Users Top-Heavy
Citing other studies, NREL researchers said US data centers consumed about 91 million MWh in 2013. For levels of renewable-energy consumption, they turned to the Carbon Disclosure Project, a UK organization that promotes corporate disclosure of greenhouse-gas emissions, and the US Environmental Protection Agency’s Green Power Partnership.
There are 113 ICT companies reporting data on their use of renewables through one or both programs. Combined, their US operations consumed 59.2 million MWh in 2014. Of that, 8.3 million MWh was renewable energy. The researchers acknowledged that the list was not comprehensive, pointing out that data center service providers in particular were largely excluded from the sample.
Out of the 113 companies, 27 used renewables to power 100 percent of their operations. Among them were Intel, SAP, Datapipe, and Motorola. Intel used more renewable energy than anyone else on the list: a little over 3 million MWh. Because it operates its own manufacturing facilities, the chipmaker consumes more energy than others.
Microsoft is the second-largest user of renewables, having consumed 1.4 million MWh in 2014, or half of its total energy use that year, followed by Google, which consumed about 890,000 MWh of renewable energy, about 40 percent of its total.
If one were to use this data to derive an average amount of renewable energy consumed by a tech company in the US, the result would be misleading. Only two companies used more than 1 million MWh last year; one consumed more than 500,000 MWh, and eight consumed more than 100,000 MWh. The list of top 30 consumers ranges from 12,000 MWh to 3 million MWh, with the amounts tapering off steeply as you go down the list.
Unbundling ‘Green’ and ‘Energy’
There are several ways for a company to be able to claim it is using renewable energy, but “unbundled” Renewable Energy Credits are the most common method, responsible for 61 percent of all reported use of renewables last year, according to NREL.
These are credits sold or used separately from the renewable energy whose generation created them. A company can consume 1 kWh of coal-generated energy but buy a REC to apply to that 1 kWh and claim it as carbon-neutral. One of the ways Google gets RECs is by buying clean energy, unbundling and keeping the RECs, and then selling the energy as non-renewable on the wholesale market.
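To make the accounting concrete, here is a minimal sketch in Python of how unbundled RECs let a buyer claim renewable use independently of the physical electricity it consumed. The function name and figures are invented for illustration and are not drawn from the NREL report.

```python
# Minimal sketch of unbundled-REC accounting (illustrative figures only).
# One REC conventionally represents 1 MWh of renewable generation; a buyer
# may claim renewable use up to the RECs it retires, regardless of the
# physical fuel mix behind the electricity it actually consumed.

def renewable_share(total_consumption_mwh: float, recs_retired_mwh: float) -> float:
    """Fraction of consumption that can be claimed as renewable."""
    claimable = min(recs_retired_mwh, total_consumption_mwh)
    return claimable / total_consumption_mwh

if __name__ == "__main__":
    # Hypothetical buyer: 100,000 MWh of grid power, 61,000 MWh of RECs retired.
    share = renewable_share(100_000, 61_000)
    print(f"Claimed renewable share: {share:.0%}")  # -> 61%
```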
Google gets the energy through long-term power purchase agreements, or PPAs, with producers. Such PPAs are gaining popularity in the ICT sector, especially among web-scale data center operators. One of the big benefits they bring is providing developers with the funding they need to bring more large-scale clean-energy generation capacity online.
Direct PPAs between a customer and a producer that circumvent the local utility aren’t allowed everywhere. In traditionally regulated energy markets, users have to negotiate with utilities, but not all utilities sell renewable energy, which is where it becomes complicated.
In North Carolina, for example, Google and other data center operators lobbied the utility company Duke Energy to create a renewable-energy tariff – a special renewable-energy rate for big customers. Integrating renewables into the overall fuel mix, turning renewable energy into a product, and creating rates for it that make sense for the customer, the utility, and all of its existing customers is a delicate, complicated process, Gary Demasi, director of operations at Google who oversees the company’s data center energy and location strategy, said.
Among web-scale data center operators, Google has one of the most sophisticated strategies for getting renewables. It even has a subsidiary that’s registered as an energy company and has an authorization from federal utility regulators to buy and sell power on the wholesale market.
The Problem of Location
Most of the complexity that’s involved in buying renewable energy for data centers stems from the issue of location. One place may be ideal for a data center, while another is ideal for a utility-scale wind farm. “Those two places are rarely the same,” Demasi said.
Because every state in the US has its own energy market, it’s extremely difficult for a customer to source renewable energy in one state but use it in another. Unbundled RECs and Google’s energy subsidiary are ways around this problem of location.
It’s not this difficult everywhere. According to Demasi, one of the easiest places to buy renewable energy for large-scale data center projects in the world is northern Europe, where Finland, Sweden, Norway, and Eastern Denmark operate a unified utility grid and have deregulated energy markets. There, Google can tap renewable energy in Sweden and apply it to its data center in Hamina, Finland, without much bureaucratic intervention.
Google is now working on getting renewable energy for its data centers in Asia. It also has a data center in Chile, and while the country has “world-class” wind, solar, and geothermal resources, the renewable-energy market there is in its infancy. “We’re very excited about the long-term prospects there,” Demasi said.
It Should Be Up to the Utilities
Sourcing renewable energy on its own or acting as an energy company is not the way Google prefers to go about it. It’s just that in some places it’s necessary. “In markets where utilities are our partners we do believe that they are best-situated to get us the products that we want,” Demasi said. Nobody is better at sourcing energy and delivering it to customers than utilities are.
Having observed it, he empathizes with the painstaking rate-making process utilities have to go through if they want to add renewable energy to their product portfolio. It’s not a small challenge, he said, but “I don’t think it’s a challenge you can’t overcome.”
3:00p | Enterprise Storage: How to Manage the Inevitable

Jason Phippen is Head of Global Product and Solutions Marketing for SUSE.
The vast majority of IT departments are experiencing enormous increases in the demand for storage and computing power. Few, if any, have budgets that can keep pace with those rising requirements. This raises a difficult question for IT teams everywhere: how much longer can the usual approach of managing the install, upgrade, retire, and replace cycle be sustained?
By now, it should be obvious to all that the strategy that built the data center of the past isn’t going to deliver the data center of the future. New models and approaches are being embraced by hyperscalers, based on open source software and commodity hardware. Cloud, we are told, has made IT a utility, as simple and as easy to manage as your gas bill. Yet, while we all know there are many advantages to paying by OpEx over CapEx, over time cloud can mean paying more, just in smaller installments.
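A back-of-the-envelope comparison makes the point. The sketch below uses invented figures, not any vendor's pricing, to show how smaller monthly installments can eventually overtake a one-time purchase.

```python
# Illustrative comparison of cumulative spend: one-time CapEx purchase vs.
# monthly OpEx subscription for equivalent capacity (invented figures).

capex_purchase = 100_000          # buy hardware outright, year 0
capex_annual_maintenance = 8_000  # support contract per year
opex_monthly_fee = 2_500          # pay-as-you-go for the same capacity

for year in range(1, 6):
    capex_total = capex_purchase + capex_annual_maintenance * year
    opex_total = opex_monthly_fee * 12 * year
    print(f"Year {year}: CapEx route ${capex_total:,} vs OpEx route ${opex_total:,}")

# With these numbers the OpEx route is far cheaper in year one but overtakes
# the CapEx route by year five, once the installments have accumulated.
```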
As the changes come through, there is considerable risk for IT teams, who will need to get the most out of their existing assets while spending frugally on future ones and navigating the gap between hype and reality.
In this foggy world, some things are crystal clear. Here are three things to consider:
- Outside of the “hyperscalers,” hardly anyone will be able to afford to own and host all their compute power on premise. In the future a proportion of your compute power is going to be in public clouds, one way or another, sooner or later.
- Storage growth is massive and unsustainable. You are going to need to find a better, cheaper way of doing it, and that way is going to need to work in harmony with your compute decisions.
- Vendor lock-in is never a good idea. In a world where business models change, discovering you’re locked into a cloud provider might be one of the most unpleasant discoveries of your life.
Many in the industry have arrived at the same conclusion, which explains the growth of software-defined storage (SDS), loosely defined as a method of storage in which capacity is organized and managed by software, regardless of the underlying hardware or its location.
It’s well documented that SDS is the inevitable destination for much of your future storage needs. If you don’t believe me, consider the latest market forecast from Gartner:
- By 2016, server-based storage solutions will lower storage hardware costs by 50 percent or more
- By 2019, 70 percent of existing storage array products will also be available as software only versions
- By 2020, between 70 and 80 percent of unstructured data will be held on lower-cost storage managed by SDS environments
So the question instead shifts to how SDS must be implemented and the types of needs it can serve for your data center. Here are a few common data center concerns, and how SDS can be deployed correctly to fit each need.
Agility
Businesses are moving too fast to rely on storage architectures that are proprietary, overpriced, and inflexible. At the same time, IT is challenged to organize its storage assets as a bridge between new and old, with the same level of performance across locations and storage classes.
SDS should deliver storage functionality comparable to mid- and high-end storage products at a fraction of the cost. It should be an open, self-healing, self-managing storage solution that scales from a terabyte to a multi-petabyte storage network. Coupling SDS with commodity off-the-shelf storage building blocks results in amazingly cost-efficient storage. Truly unlimited scalability enables enterprise IT organizations to deliver the agility businesses demand by non-disruptively adding capacity at the cost they want to pay. Intelligent, self-healing, self-managing distributed storage lets storage administrators minimize the time spent managing storage, so organizations can support more capacity per administrator or spend more time delivering future innovations to the business.
Flexibility
Flexibility is one of the core tenets of SDS: the increased ability to shift storage across locations and hardware is what leads to its agility and cost benefits.
But flexibility cannot be obtained without true interoperability. It’s possible that your new SDS provider has a long-term roadmap toward standardizing other components of your IT infrastructure on the same vendor, which would undercut many of the benefits of SDS in the first place.
In order to achieve maximum flexibility with your SDS project, make sure that you evaluate solutions that play well with others. Examine a vendor’s alliances and alternate IT solutions, and evaluate whether open source or proprietary plays into this alignment.
Decoupling Hardware from Software
SDS is an approach to data storage in which the programming that controls storage-related tasks is decoupled from the physical storage hardware. Software-defined storage is part of a larger industry trend that includes software-defined networking (SDN) and software-defined data centers (SDDC).
Software-defined storage puts the emphasis on storage services, such as deduplication or replication, instead of storage hardware. Without the constraints of a physical system, a storage resource can be used more efficiently, and its administration can be simplified through automated, policy-based management. For example, a storage administrator can use service levels when deciding how to provision storage without having to think about hardware attributes. Storage, in effect, becomes a shared pool that runs on commodity hardware.
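As a loose illustration of provisioning by service level rather than by hardware attributes, here is a small Python sketch. The tier names, policy fields, and thresholds are hypothetical and do not represent any particular SDS product's API.

```python
# Hypothetical policy-based provisioning: the administrator asks for a
# service level; the mapping to replication, media, and snapshots lives
# in policy rather than in per-array hardware settings.
from dataclasses import dataclass, asdict

@dataclass
class StoragePolicy:
    replicas: int          # number of copies kept across the pool
    media: str             # backing media class, e.g. "hdd" or "ssd"
    snapshot_schedule: str

SERVICE_LEVELS = {
    "gold":   StoragePolicy(replicas=3, media="ssd", snapshot_schedule="hourly"),
    "silver": StoragePolicy(replicas=3, media="hdd", snapshot_schedule="daily"),
    "bronze": StoragePolicy(replicas=2, media="hdd", snapshot_schedule="weekly"),
}

def provision_volume(name: str, size_gb: int, service_level: str) -> dict:
    """Return a provisioning request; hardware details are resolved by policy."""
    policy = SERVICE_LEVELS[service_level]
    return {"volume": name, "size_gb": size_gb, **asdict(policy)}

print(provision_volume("crm-db", 500, "gold"))
```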
As is the case with SDN, software-defined storage enables flexible management at a much more granular level through programming.
Reducing Costs
The shift toward SDS is largely driven by a substantial reduction in cost without compromising, and in some cases even improving, an already commoditized technology. Because of the way SDS is consumed and managed, it converts capital storage expenses tied up in hardware into bills you pay as you use the capacity. With future storage demands growing and unpredictable, handling storage as an operating expense can lead to a huge reduction in costs. SDS also saves on hardware maintenance and support costs, because storage resources are no longer tied to specific hardware.
However, the cost advantages don’t start and end with shifting to SDS alone. There are various solutions in the market that maximize a potential IT investment. For example, the solutions available within the open source community provide data center managers with additional cost savings compared to their proprietary counterparts.
There’s no question that the storage industry is at an inflection point in SDS adoption. With nearly overwhelming advantages in cost, flexibility, and performance, SDS is a solution any storage-conscious data center manager will evaluate in the next several years.
But adopting SDS alone isn’t enough. Instead, make sure that you know the goals for your storage project, and evaluate vendors that align with your IT objectives. SDS is the future – make sure that you’re on its leading edge.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:32p | Dell Extends Hybrid Cloud Management Platform to Azure

Looking to grow its role as a linchpin for hybrid cloud services that revolve around the data center, Dell today added support for Windows Azure Pack and enhanced support for Microsoft Azure to its cloud management software.
While most clouds today are managed in a semi-autonomous fashion, it’s clear that IT operations teams are starting to unify the management of hybrid cloud computing environments, George Hadjiyanis, director of sales and marketing for Dell Cloud Manager, said.
“A year or two ago that might not have been the case,” he said. “Now the cloud is central to IT operations.”
The software is designed to provide IT organizations with a single pane of glass for access to specific cloud applications and to cloud management tools, such as tracking usage and spending, while giving developers self-service provisioning capabilities.
Deployed on a virtual appliance, the cloud management software provides integration with Chef and Puppet, the provisioning tools popular with DevOps professionals, along with an ability to auto-scale and auto-heal applications based on user-defined policies.
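As an illustration of the kind of user-defined policy such auto-scaling relies on, here is a small Python sketch. The metric name, thresholds, and function are hypothetical examples and not Dell Cloud Manager's actual interface.

```python
# Hypothetical user-defined auto-scaling policy: scale a tier out when CPU
# stays hot, scale it in when load drops, within fixed instance bounds.

POLICY = {
    "metric": "cpu_utilization_pct",
    "scale_out_above": 75,
    "scale_in_below": 25,
    "min_instances": 2,
    "max_instances": 10,
}

def desired_instances(current: int, metric_value: float, policy: dict = POLICY) -> int:
    """Return the instance count the policy asks for, given one metric sample."""
    if metric_value > policy["scale_out_above"]:
        return min(current + 1, policy["max_instances"])
    if metric_value < policy["scale_in_below"]:
        return max(current - 1, policy["min_instances"])
    return current

# Example: a tier running 4 instances at 82% CPU would be scaled out to 5.
print(desired_instances(4, 82.0))
```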
Based on technology Dell gained when it acquired Enstratius in 2013, version 11 of the Dell Cloud Manager also adds support for the Topology and Orchestration Specification for Cloud Applications (TOSCA). TOSCA provides a common description of application and infrastructure cloud services, the relationships between parts of the service, and the operational behavior of those services.
Adding support for TOSCA is critical for Dell. Launched in the early part of 2014, TOSCA is an open standard designed to make it possible to more easily switch between cloud service providers.
Without TOSCA, the amount of demand for a tool that unifies the management of those clouds would be severely limited.
Dell Cloud Manager already supports Amazon Web Services, Google Compute Engine, Joyent, ScaleMatrix, CloudStack, OpenStack, vSphere, and Virtustream. According to Hadjiyanis, support for Digital Ocean is coming in the very near future.
Dell’s cloud management software, available as both a freemium service and as commercial software, can be deployed on-premise or as a hosted service managed by Dell. There is also a Software-as-a-Service edition for managing public clouds.
Hadjiyanis said one of the things that distinguishes Dell Cloud Manager as a platform for hybrid cloud management is that it integrates directly with Microsoft Active Directory, which is widely used on-premise to manage access to existing applications. Without that capability, internal IT organizations are going to be reluctant to embrace a hybrid cloud management platform, since it would not extend their existing workflow and application provisioning processes.
5:08p | Iceotope, Mellanox Plunge HPC Switches Into Liquid Cooling

Cooling specialist Iceotope, in partnership with Mellanox, announced a totally liquid-cooled InfiniBand- and Ethernet-based network and interconnect switch targeted at high-performance computing installations.
Submerging electronics in dielectric fluid is an unorthodox approach to cooling that is gaining popularity in the HPC sector. There are also some signs of higher interest in the approach among companies with high-density compute requirements that don’t run traditional HPC workloads.
The jointly developed fan-less switches take one standard slot inside of the Iceotope Petagen cabinet, which it developed with Intel and launched late last year. Citing the significant efficiency benefits that liquid cooling brings, Iceotope noted that the switches are cooled with warm water (of up to 45C/113F). They will complement blade servers in the Petagen cabinet, where up to 60 kW of IT load can be provisioned.
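For a rough sense of scale, the sketch below estimates the coolant flow needed to carry away a fully loaded cabinet's heat using the standard sensible-heat relation Q = ṁ·c_p·ΔT. The 10 K water temperature rise is an assumed figure for illustration, not one quoted by Iceotope.

```python
# Rough estimate of coolant flow for a 60 kW cabinet using Q = m_dot * c_p * dT.
# The 10 K water temperature rise across the cabinet is an assumed value.

heat_load_w = 60_000   # cabinet IT load, watts
cp_water = 4_186       # specific heat of water, J/(kg*K)
delta_t_k = 10         # assumed coolant temperature rise, kelvin

mass_flow_kg_s = heat_load_w / (cp_water * delta_t_k)
volume_flow_l_min = mass_flow_kg_s * 60   # ~1 kg of water per litre

print(f"Required flow: {mass_flow_kg_s:.2f} kg/s (~{volume_flow_l_min:.0f} L/min)")
# -> roughly 1.4 kg/s, on the order of 86 L/min for the whole cabinet.
```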
Pointing toward an entirely fan-less HPC deployment, Iceotope founder and chief visionary officer Peter Hopton said HPC is “embracing liquid cooling at a remarkable rate. It’s just a case of technologies being available to match demand. Until now, it was widely accepted that the interconnect switch would never be liquid cooled. It’s great to be able to say that, thanks to our work with Mellanox, that’s no longer the case.”
UK-based Iceotope has developed patented technology around its convective cell — a sealed immersion device that harnesses the power of natural convection. The company has worked with 3M Novec and Solvay Galden PFPE on primary and secondary coolants inside of its system.
Mellanox switches supported in the Petagen cabinet include a 36-port FDR 56 Gbps InfiniBand switch and a 40/56 Gbps Ethernet switch. Iceotope said it is looking to support technologies such as EDR 100 Gbps InfiniBand as part of its development pipeline.
“Liquid cooling is an extremely exciting proposition in the HPC market,” said Gilad Shainer, vice president of marketing at Mellanox. “Mellanox is dedicated to providing best-of-breed products and solutions for HPC, and we are pleased to collaborate with Iceotope to further data center efficiency with high-quality liquid cooled alternatives to air.”
5:25p | Report: Microsoft Plans to Acquire Israeli Cybersecurity Firm Adallom for $320M
This article originally appeared at The WHIR
Microsoft plans to acquire Israeli cybersecurity company Adallom for $320 million, according to a report in the Calcalist financial newspaper on Monday. Adallom is a security platform for SaaS applications that can be deployed across devices.
Founded in 2012, Adallom has 80 employees with offices in Israel and the US. Its platform aims to make SaaS applications “as secure as on-premise applications” by combining intelligence into application usage, monitoring of user accounts, and security and compliance. In June, Adallom announced integration with Dropbox for Business.
Adallom also offers a security platform for IaaS, providing security for AWS and Microsoft Azure environments.
In April, Adallom raised $30 million in Series C funding, including a strategic investment by newly launched Hewlett Packard Ventures, Rembrandt Venture Partners, and previous investors Sequoia Capital and Index Ventures.
The acquisition of Adallom would add to several other Microsoft acquisitions in Israel, including the recent acquisition of security software developer Aorato. According to the Wall Street Journal, this would be its fourth acquisition in Israel since the start of 2015. Adallom would continue to operate from Israel and build up Microsoft’s cybersecurity-focused operations in the country, WSJ reports, citing people familiar with the matter.
Israel has been a hotbed for cybersecurity startups and M&A activity, with PayPal and Dropbox among tech companies that have made acquisitions in Israel this year.
Neither Microsoft nor Adallom has made an official statement on the news.
This first ran at http://www.thewhir.com/web-hosting-news/report-microsoft-plans-to-acquire-israeli-cybersecurity-firm-adallom-for-320m
5:43p | Making the Case for Remote Power Supply Monitoring

For reasons spanning everything from security to economics, it now makes more sense than ever to remotely monitor power supplies inside data center facilities.
A proactive approach to IT maintenance based on predictive analytics winds up saving IT organizations money, Edward Wirth, director of business development, marketing and sales for Power Service Concepts, a provider of battery environment support services, said. Wirth is speaking on the subject at the upcoming Data Center World conference in National Harbor, Maryland, this September.
Rather than replacing batteries based on a three-year budget cycle that most organizations apply as a rule of thumb, remote monitoring enables IT organizations to determine which batteries are still operating at peak performance levels regardless of how old they are, Wirth said. More often than not, those batteries are operating at a level of performance that doesn’t require them to be replaced for four to five years.
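To illustrate condition-based rather than calendar-based replacement, here is a small Python sketch. The field names and thresholds are hypothetical examples, not Power Service Concepts' actual criteria.

```python
# Hypothetical condition-based battery check: flag a battery for replacement
# when measured health crosses a threshold, regardless of its age.

BATTERIES = [
    {"id": "UPS-A-01", "age_years": 4.5, "capacity_pct": 92, "resistance_rise_pct": 12},
    {"id": "UPS-A-02", "age_years": 2.0, "capacity_pct": 74, "resistance_rise_pct": 45},
]

def needs_replacement(b: dict) -> bool:
    """Replace on measured condition, not age (assumed example thresholds)."""
    return b["capacity_pct"] < 80 or b["resistance_rise_pct"] > 30

for battery in BATTERIES:
    status = "replace" if needs_replacement(battery) else "keep in service"
    print(f'{battery["id"]} ({battery["age_years"]} yrs): {status}')

# Note that the older battery stays in service while the younger one is flagged.
```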
Wirth said remote monitoring also addresses a number of tactical issues. Gaining access to data centers has become increasingly problematic for third-party specialists. Background checks are now routinely required, and the time slots being made available to those third-party specialists are usually now on the weekends or after normal business hours.
Remote monitoring eliminates a lot of administrative headaches associated with having to provide clearance to third-party maintenance workers that often get paid overtime to visit data center sites afterhours, Wirth said.
“Ever since 9/11, gaining access to data centers has become a real pain,” he said. “Continuous monitoring provides 24/7 views that reduce the number of physical visits that need to be made to the data center.”
Wirth said the biggest issue in making the shift to continuous remote monitoring is the “break-fix” culture that still permeates IT environments. Instead of taking a proactive approach to IT management, there is still a tendency to wait for something to break and then fix it. IT organizations need to realize that, ultimately, continuous preventative maintenance winds up being much less expensive.
In other words, an ounce of prevention is not only worth a pound of proverbial cure; it generally winds up being a whole lot less intrusive for all concerned.
For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Edward’s session titled “AC and DC Critical Power Supply Monitoring Systems.”
9:31p | CyrusOne Expands North Texas Data Center

When global colocation solutions provider CyrusOne finished the first data hall in its 670,000-square-foot Carrollton, Texas, data center facility back in April 2013, workers used scooters to travel across the quarter-mile stretch to office space at the front of the building. But they might need something a little faster to get around the campus today.
The company just announced that it has expanded the facility again, this time with a data hall that adds nearly 60,000 square feet of colocation space and 4.5 MW of power capacity.
“Our customer base in this key market continues to grow rapidly,” said John Hatem, CyrusOne senior VP of design and construction, in a press release. “Adding more space and power capabilities to our Carrollton facility ensures that we can continue to scale effectively and provide mission-critical infrastructure capabilities to meet our growing customer demand.”
Considered one of the most energy-efficient data centers in the US, the Carrollton facility, just north of Dallas, is the largest of its kind in the state, CyrusOne claims. Among other customers, it houses the infrastructure and 911-dispatch center for Carrollton, Coppell, Farmers Branch, and Addison, Texas. The communities have consolidated 911-dispatch services to form the North Texas Emergency Communications Center.
CyrusOne isn’t the only company that finds north Texas attractive. The Dallas-Fort Worth area is one of the busiest Texas data center markets. Facebook, for example, recently announced its plan to build a $1 billion data center powered by wind energy in Fort Worth.
With 31 carrier-neutral data center facilities across the US, Europe, and Asia, CyrusOne provides customers with the flexibility and scale to match their specific growth needs. The company also recently closed a $400 million acquisition of Cervalis, marking the data center provider’s official entrance into the New York data center market.
CyrusOne said it serves nine of the Fortune 20 and more than 160 of the Fortune 1000 among its nearly 900 customers.