Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, July 1st, 2014

    12:30p
    Microsoft and Partners Get $5M in Federal Funding for Fuel Cell Research

    Microsoft has taken on yet another research project exploring the use of fuel cells installed directly in data center IT racks. The company believes fuel cells will eventually revolutionize data center power and the energy industry in general.

    This time, Microsoft has partnered with two vendors and a university on a project that received $5 million in funding from the U.S. government.

    The Redmond, Washington, software giant recently completed a proof-of-concept study of powering an IT rack with an in-rack fuel cell without much of the power conditioning equipment that sits between a power source and IT gear in a data center.

    The difference between that study and the new one is the type of fuel used by the fuel cell. The previous study used hydrogen-powered fuel cells, and the new project is looking at methane, Sean James, technology research program manager at Microsoft, explained in an email.

    Microsoft: fuel cells will be a game changer

    Fuel cells are pitched at data center operators as a way to reduce reliance on utility grids, as a replacement for backup generators or as a more environmentally friendly alternative source of power. Microsoft’s experimentation with using small in-rack fuel cells to bypass different stages of power conversion focuses on decreasing energy losses that occur every time power is converted.

    “The resulting system could be significantly less expensive than traditional data center designs,” James said. “Overall, we believe the advancements being made in fuel cells will someday change the game in terms of how energy is delivered and managed.”
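
    A quick way to see why bypassing conversion stages matters: the end-to-end efficiency of a power chain is the product of each stage's efficiency, so every stage removed recovers some otherwise lost energy. The sketch below works through that arithmetic with assumed, illustrative stage efficiencies; the specific numbers are not Microsoft's.

        # Illustrative only: end-to-end efficiency of a power-delivery chain is the
        # product of the per-stage efficiencies, so removing a conversion stage
        # recovers part of the energy that would otherwise be lost as heat.
        def chain_efficiency(stage_efficiencies):
            total = 1.0
            for eff in stage_efficiencies:
                total *= eff
            return total

        # Hypothetical conventional path: UPS, transformer/PDU, server power supply.
        conventional = chain_efficiency([0.94, 0.97, 0.92])

        # Hypothetical in-rack path with a single remaining conversion stage.
        in_rack = chain_efficiency([0.97])

        print(f"conventional chain: {conventional:.1%} of source power reaches the IT gear")
        print(f"in-rack path:       {in_rack:.1%} of source power reaches the IT gear")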

    Microsoft’s role in the new project is to integrate fuel cells developed by the research partners in its server racks and perform independent live testing. The partners are fuel cell vendor Redox Power Systems, advanced material specialist Trans-Tech and the University of Maryland.

    The funding was provided by the Advanced Research Projects Agency-Energy (ARPA-E), an agency within the U.S. Department of Energy tasked with investing in alternative energy technologies.

    Only the latest in Microsoft’s fuel cell exploration

    Microsoft has been evaluating methane-powered fuel cells since at least 2012, when it announced its Data Plant project. A Data Plant is a data center module installed at a waste treatment plant, using fuel cells to convert methane (a byproduct of waste treatment) into electricity to power the IT gear.

    The company has deployed a prototype Data Plant in Cheyenne, Wyoming.

    Fuel cells nascent in data center market

    To date, fuel cell deployments at data center sites have mostly been of the large, off-the-raised-floor variety. A number of service provider and corporate data center operators have bought fuel cells to contribute to the overall energy supply of their facilities, many of them for evaluation purposes.

    Examples of large deployments that go beyond evaluation are few and far between. The biggest ones are Apple’s deployment in North Carolina and eBay’s in Utah. The fuel cell vendor in both cases was Bloom Energy.

    Apple deployed Bloom’s natural-gas-fueled Energy Servers to provide a big portion of the power supply for its Maiden, North Carolina, data center. eBay deployed the solution to provide 100 percent of the power requirement of its Salt Lake City, Utah, facility, using the utility grid as the backup source.

    12:30p
    Resource Sharing Unleashes Performance Storms on the Data Center

    Jagan Jagannathan is the founder and chief technology officer of Xangati.

    We have all experienced the good and the bad in the world of computing.  We share files on a server, we share a network for sending and receiving email, and we share resources as a number of people try to establish and participate in a web conference.

    Today’s data centers are also sharing more and more resources, leading to better return on investment as their capacity is better utilized. However, while high capacity utilization is generally good, it can lead to situations like users standing by the printer, waiting for their printout to emerge.

    Caught in the storm

    When critical resources are shared to their capacity limits, shared computing environments can suffer spontaneous contention “storms” that impact application performance and create a drag on end-user productivity.

    At Xangati, we talk about “performance storms,” likening them to stormy weather that comes up seemingly out of nowhere, can quickly disappear and leaves a path of destruction behind. A performance storm in the computing environment leaves broken service-level agreements in its wake.

    Wreaking havoc on the varied cross-silo shared resources in the data center, these storms can entangle multiple objects:  virtual machines, storage, hosts, servers of all kinds, and applications. For example, you can experience:

    • Storage storms that occur when applications unknowingly and excessively share a datastore, deteriorating the storage performance.
    • Memory storms that occur when multiple virtual machines (VMs) access an insufficient amount of memory. Or memory storms can occur when a single VM “hogs” the available memory. In either case, performance takes a hit.
    • CPU storms that occur when there aren’t enough CPU cycles or virtual CPUs for virtual machines, leaving some with more and others with less.
    • Network storms that occur when too many VMs are attempting to communicate at the same time on a specific interface or when a few VMs hog a specific interface.

    Time ticks away

    One brutal reality of these storms is their extreme brevity; many contention storms surge and subside within a matter of seconds. This short window in which to capture information about a storm can severely hamper an IT organization’s ability to track down its root cause. Often, the IT folks shrug their shoulders, understanding that the only remediation is to wait and see if it happens again.

    Many management solutions, at best, identify only the effects of storms. The more daunting challenge is to perform a root-cause analysis. Three challenges complicate the problem:

    • Real-time Insights at Scale – Providing real-time insights into interactions in the environment is critical, but doing it “at scale” becomes harder as the number of objects in the network multiplies. One approach is to keep the analytics in memory so that performance storms can be identified, analyzed and remediated quickly (see the sketch after this list), because access to data in memory is typically orders of magnitude faster than access to data on disk.
    • Understanding Consumptive and Interactional Behaviors – The cause of contention storms cannot be identified without knowledge of both consumptive and interactional object behaviors. Consumptive behaviors pertain to how objects consume resources, and interactional behaviors pertain to how objects interact with each other. Today, technology is available to visualize and analyze the cross-silo interactions that are causing a contention storm as they occur.
    • Proper Capacity Utilization – Performance storms are difficult to remedy if the environment is improperly provisioned, which is hard to identify in a dynamic, virtual environment.  Nevertheless, by analyzing the links between performance and capacity, users can reallocate or otherwise provision infrastructure resources to mitigate or avoid future storms.
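
    A minimal sketch of the in-memory, second-by-second approach mentioned in the first item above: keep a short rolling window of per-object metrics in RAM and flag any object whose utilization suddenly spikes well above its recent average. The object name, window size and threshold are hypothetical choices for illustration, not Xangati's implementation.

        from collections import defaultdict, deque
        from statistics import mean

        WINDOW_SECONDS = 60    # rolling per-object history kept in memory
        SPIKE_FACTOR = 2.5     # flag samples this far above the recent average

        history = defaultdict(lambda: deque(maxlen=WINDOW_SECONDS))

        def record_sample(obj_id, utilization):
            """Record one per-second utilization sample; return True if it looks like a storm."""
            window = history[obj_id]
            is_spike = len(window) >= 10 and utilization > SPIKE_FACTOR * mean(window)
            window.append(utilization)
            return is_spike

        # Example: a VM idling around 10 percent that jumps to 90 percent gets flagged.
        for second, util in enumerate([10, 12, 9, 11, 10, 13, 10, 9, 11, 12, 90]):
            if record_sample("vm-42", util):
                print(f"possible contention storm on vm-42 at t={second}s (utilization {util}%)")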

    The challenge of scaling

    Understanding what is happening on the network at any precise moment is critical to uncovering and fully understanding the causality of behaviors and interactions between objects. That capability usually requires scalability to track up to hundreds of thousands of objects on a second-by-second basis. In that environment, you gain a decided edge by deploying agent-less technology. Why? Because technologies that build on a multitude of agents do not scale.

    In the end, it’s still no small feat to penetrate the innards of a complex infrastructure and track down the source of a random, possibly seconds-long, anomaly that can wreak havoc on performance. Once the anomaly passes, after all, there’s nothing left to examine. Ultimately, proper remediation comes down to deploying scalable technology for performance assurance and applying split-second responsiveness to quickly identify and eliminate any issue that could otherwise lead to significant performance loss – and worst of all, a poor end-user experience.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Study: Facebook’s Luleå Data Center Boosts Local Economy

    Facebook’s decision to build a data center in Luleå, Sweden, has directly created nearly 1,000 new jobs and generated local economic impact that amounts to hundreds of millions of dollars, according to a recently completed study by the Boston Consulting Group, which the social network company hired to assess its impact on the local economy.

    Big data center builds, such as Facebook’s 290,000-square-foot Luleå facility, are considered to be very good for local economies, especially in rural, underdeveloped areas. Nordic countries have been competing with each other for data center projects and the economic development they bring, and so have various U.S. states.

    “Continued digitization must be a key priority for Sweden if the country is going to enhance productivity and economic development,” the report’s authors wrote. “Investments in digital infrastructure, such as large-scale data centers, are an important contribution to this agenda.”

    Since Facebook broke ground in Luleå in 2011, it has created 900 direct jobs in Sweden and generated SEK 1.5 billion (about $225 million) in domestic spending. The project’s overall economic impact in the country so far has been about SEK 3.5 billion (about $524 million).

    A second Facebook data center in Luleå is currently under construction, and the BCG report includes a forecast of economic impact both buildings will have by 2020:

    • Generate SEK 9 billion ($1.35 billion) of economic impact in Sweden
    • Directly create nearly 2,200 jobs, two-thirds of them locally in Luleå, contributing about 1.5 percent of the local region’s total economy
    • Benefit a total of 4,500 full-time employees

    [Chart: Facebook Sweden jobs report]

    The company said it chose the site in Sweden to build a data center (at the time its third around the world) because it offered access to low-cost renewable energy, a cool climate and a strong pool of skilled workers.

    Facebook had a similar study done for its data center site in Prineville, Oregon, by economic consultants ECONorthwest. That study, announced in May, concluded the company’s data center construction over five years had created about 650 jobs in Central Oregon and about 3,600 jobs in the state overall.

    The construction projects led to $573 million in capital spending statewide, the report said.

    5:12p
    IBM’s Bluemix Enters General Availability, Intros Gamification to PaaS

    IBM‘s Platform-as-a-Service offering Bluemix has entered general availability after what the company says was a strong beta period.

    While SoftLayer is IBM’s Infrastructure-as-a-Service, underpinning the company’s cloud, Bluemix sits a level above as a mission control of sorts for DevOps, helping customers launch applications while abstracting infrastructure orchestration and providing various application services.

    Part of IBM’s billion-dollar cloud push, Bluemix is based on Cloud Foundry, the open source PaaS spearheaded by EMC- and VMware-owned pseudo-startup Pivotal. According to IBM, its PaaS is already one of the largest Cloud Foundry deployments.

    The PaaS flavor offered by the likes of IBM, Pivotal and a few others differs from the well-established offerings of top cloud PaaS providers, such as Salesforce’s Heroku, Google’s App Engine or Microsoft’s Azure, in that it is aimed squarely at the enterprise developer. There is a trend among vendors to enable enterprise developers to build and deploy applications and features quickly, the way the Googles and Facebooks of this world do, using an agile, iterative development process. PaaS is a way to abstract the infrastructure layer from developers so they don’t have to worry about the underlying resources when coding.

    Big Blue positions its PaaS as one with advantages only it can offer, primarily its array of middleware. But it has also been adding features beyond that, both on its own and through its partner ecosystem.

    Gamification comes to PaaS

    As part of the general-availability release, IBM added “gamification” services to the platform. With them, developers can build systems of engagement that offer users game-like incentives.

    Such systems aim to simulate feelings like those of leveling up in a role-playing game when users fill out parts of their profiles or complete certain tasks. It’s a way to attract users to an application and to keep them using it.

    Other new features include:

    • MQ Light: a messaging service
    • Sonian: partner services that help developers organize and mine Big Data
    • Email Archive: an intellectual-property-conscious feature that helps users sort through email, including attachments
    • Clearchat: enables users to develop and test for multiple mobile platforms like iOS and Android

    Gary Barrett, an Ovum Research analyst, said Bluemix was part of a transformation of IBM’s cloud play, in which the breadth of technology the company and its partners could offer played a major role. “If IBM continues to enhance the platform and can bring on enough partners, Bluemix could transform the Platform-as-a-Service market,” he said.

    5:34p
    Amazon Launches New Tiny Cloud VM Instances

    Amazon Web Services launched T2, a set of cloud compute instances suited for low-impact applications, such as remote desktops, development environments, small databases and low-traffic web sites. The instances can burst up to higher power if needed through CPU credits.

    The feature is yet another attempt to “right-size” Amazon cloud servers to give users confidence that they are using and paying for only the capacity they need. Very often customers will provision enough capacity to handle peak demand periods and pay for it throughout, even though most of that capacity remains unused most of the time.

    “In many of these cases, long periods of low CPU utilization are punctuated by bursts of full-throttle, pedal-to-the-floor processing that can consume an entire CPU core,” writes Amazon chief evangelist Jeff Barr on the AWS Blog. “Many of these workloads are cost-sensitive as well.”

    He used a car analogy: “Even though the speedometer in my car maxes out at 150 MPH, I rarely drive at that speed (and the top end may be more optimistic than realistic), but it is certainly nice to have the option to do so when the time and the circumstances are right.”

    Like a car that rarely tops out, the new instances are for compute workloads with modest demands for continuous compute power that occasionally need more.

    They have a “Baseline Performance,” which indicates the percentage of single-core performance of the underlying physical CPU allocated to the instance. Each instance also accrues CPU credits at an hourly rate, which indicates how many credits the instance receives each hour when it isn’t using its full baseline allocation of CPU.

    The credits are spent when the instance is active and unused credits are stored for up to 24 hours. The higher the baseline, the more credits the instance accumulates.

    A t2.small instance has access to 20 percent of a single core of an Intel Xeon processor running at 2.5 GHz (up to 3.3 GHz in Turbo mode). A t2.medium has access to 40 percent of the performance of a single core, which the operating system can use on one or both cores as dictated by demand. The smallest, t2.micro, has a 10 percent baseline.

    Barr noted that the new instances are perfect for business processes that need a burst of CPU power at regular but infrequent intervals and for dynamic web sites that receive unpredictable bursts of traffic, whether from external news drawing a response, getting linked on Reddit (the “Reddit hug of death”) or inclement weather.

    Credits will continue to accumulate if they aren’t used, until they reach the level which represents an entire day’s worth of baseline accumulation. If you’re constantly maxing out on credits, you can switch down to a smaller-size instance.
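
    A rough model of the credit mechanism described above, using the baseline percentages quoted for the three instance sizes; the earn and spend rates and the 24-hour cap are simplified for illustration and are not AWS's exact accounting.

        # Simplified illustration of T2 CPU credit accounting; the rates are assumptions.
        BASELINE_PERCENT = {"t2.micro": 10, "t2.small": 20, "t2.medium": 40}

        def simulate_credits(instance_type, cpu_percent_by_hour):
            """Return the credit balance after running the given hourly CPU utilization profile."""
            baseline = BASELINE_PERCENT[instance_type]
            earn_per_hour = baseline * 0.6    # assumed: credits earned scale with the baseline
            cap = earn_per_hour * 24          # balance capped at roughly a day of accumulation
            balance = 0.0
            for cpu in cpu_percent_by_hour:
                # Assumed spend rate: an hour at full throttle costs 60 credits, so an hour
                # spent at the baseline utilization is roughly credit-neutral.
                balance += earn_per_hour - cpu * 0.6
                balance = max(0.0, min(balance, cap))
            return balance

        # A mostly idle t2.small with a single busy hour rebuilds its balance afterward.
        print(simulate_credits("t2.small", [5, 5, 5, 5, 100, 5, 5, 5]))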

    7:28p
    Data Center Provider Involta Raises $50M in Private Equity

    Data center service provider Involta has raised $50 million in private equity in a round led by M/C Partners, with participation by Morgan Stanley.

    Cedar Rapids, Iowa-based Involta provides a range of data center outsourcing services, including colocation, managed services and consulting. It operates five data centers in Ohio, Idaho, Minnesota, Iowa and Arizona.

    Investment in lower-tier U.S. data center markets has been pouring in at a steady pace this year, and the Involta deal is only the most recent example of the continued flow of data center private equity.

    Other examples include a $100 million round raised by Compass Datacenters, Online Tech’s acquisition of an Indianapolis data center and DataBank’s acquisition of Arsalon – a Lenexa, Kansas, provider. All three deals took place in May.

    In June, we reported on TierPoint’s acquisition of a data center in Philadelphia, following recapitalization of the company via acquisition by its own management team together with a group of investors.

    Involta data center locations:

    • Akron, Ohio
    • Boise, Idaho
    • Duluth, Minnesota
    • Marion, Iowa
    • Tucson, Arizona

    Gillis Cashman, managing partner at M/C Partners, said the private equity firm saw Involta playing a leading role as a data center provider in “underdeveloped” Tier II and Tier III markets. “There is a tremendous need for infrastructure in those areas, and Involta is very well positioned to address those needs,” Cashman said.

    M/C Partners already has a few data center deals with successful exits under its belt. It has invested in Fusepoint, a managed hosting and colocation firm sold to Savvis in 2010, and in Attenda, a UK managed hosting provider sold to London’s Darwin Private Equity in 2011.

    8:00p
    UK Data Center Providers Get Carbon Tax Break

    Many UK data center operators were not happy when the government did not include them on the list of about 50 energy-intensive industries that would get special breaks on the country’s carbon tax, which are designated for companies that compete for market share with overseas rivals.

    This state of affairs is expected to change this week, as the new Climate Change Agreement for Data Centers goes into effect. Negotiated by the data center lobby, the agreement slashes the amount of carbon tax the providers are obligated to pay in return for a promise to deliver measurable energy efficiency improvements.

    The country’s data center operators view the agreement as a formal recognition of the sector by the government and a boost to investor confidence. It is also expected to increase the sector’s competitiveness, accelerate IT infrastructure consolidation and improve energy stewardship.

    Emma Fryer, associate director of climate change programs at techUK, said the measure would help UK data center providers compete with their foreign peers and create financial incentives for more efficient data centers. “It strengthens the business case for investing in efficiency,” she said.

    TechUK is a trade association for the UK technology sector that has been deeply involved in lobbying the government to enact CCA for data centers.

    Industry’s existence finally acknowledged

    Data centers did not get such an agreement in the 14 years that the government has been issuing them because “the government didn’t realize they existed,” Fryer said. The scheme has traditionally been aimed at manufacturing-oriented industries that make physical products.

    “We don’t actually produce a measurable product in the way other industries do,” she said, adding that techUK had been fighting to raise awareness of the data center industry among government officials for more than four years.

    The concession applies to colocation or wholesale data center providers only. Companies that operate their own data centers to support their corporate IT function are not covered.

    To get the concession, providers have to commit to reducing the energy consumption of their mechanical and electrical infrastructure (non-IT equipment). The reduction will be measured using the Power Usage Effectiveness (PUE) metric.

    To get a carbon tax break, a data center provider has to reduce their PUE by 15 percent by 2020 from a 2011 baseline without increasing IT power consumption.
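
    The arithmetic behind that target is straightforward; the sketch below works through it with a hypothetical 2011 baseline PUE, which is not a figure from the agreement.

        # Worked example of the 15 percent PUE reduction target; the baseline is hypothetical.
        baseline_pue_2011 = 1.80                  # assumed 2011 facility PUE
        target_pue_2020 = baseline_pue_2011 * (1 - 0.15)

        # With IT load held flat (the rule requires it not increase),
        # total facility power scales directly with PUE.
        it_load_kw = 1000                         # assumed constant IT load
        facility_kw_2011 = baseline_pue_2011 * it_load_kw
        facility_kw_2020 = target_pue_2020 * it_load_kw

        print(f"target PUE by 2020: {target_pue_2020:.2f}")
        print(f"facility power: {facility_kw_2011:.0f} kW -> {facility_kw_2020:.0f} kW at the same IT load")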

    Recognizing that PUE is far from a perfect efficiency metric, the new rules include provisions for revising the targets in 2016, “when we expect more sophisticated metrics and standards available,” according to techUK.

    U.S. industry spared from carbon tax

    There isn’t a comparable carbon-tax law in the U.S. at the federal level. The latest Climate Action Plan, released by President Barack Obama’s administration in 2013, focused on limiting carbon emissions, promoting development of clean energy sources, training an alternative energy workforce and improving building energy efficiency.

    The only climate-related pressure the U.S. data center industry has felt has come from Greenpeace, which has been calling attention to the industry’s reliance on coal power. Data center providers had been spared Greenpeace’s public relations wrath until this year’s report, which, in addition to the usual suspects (the likes of Apple, Amazon, Google, Facebook and Twitter), also included names like Digital Realty Trust, DuPont Fabros Technology and Equinix.

    Gary Cook, senior IT analyst at Greenpeace who has led the activist organization’s effort to put pressure on the U.S. data center industry, said the UK’s plan to use PUE to benchmark efficiency was questionable. “Using PUE as a benchmark for this kind of incentives … seems like an unintended application of PUE,” he said.

    The Green Grid, the organization behind PUE, has been promoting it as a metric companies should use internally to track their own data center energy efficiency.

    Cook also questioned the scheme’s effectiveness in incentivizing energy efficiency. Data centers are a very energy-intensive industry that already has a big incentive to use less energy, since energy costs have a direct impact on the bottom line.

    Exempting data center providers from paying a tax on emissions associated with their energy use may actually weaken the financial incentive to improve energy efficiency that they already have because of high energy costs.

