Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, May 6th, 2015

    12:00p
    Internal Carbon Fees Fund Data Center Energy Innovation at Microsoft

    When it committed to 100 percent carbon-neutral operations in 2012, Microsoft started the unusual practice of charging each of its internal departments a fee based on the greenhouse-gas emissions it is responsible for. Today, three years later, the company claims the strategy is working.

    In a recently published update on the program, Microsoft said the carbon fees have generated enough money to pay for 10 billion kilowatt-hours of clean power, stimulated energy-use reduction initiatives that now save $10 million a year, and continue to fund innovation programs focused on using technology to reduce carbon footprint inside and outside the company. The program has also delivered on its primary objective, reducing the company’s CO2-equivalent emissions by 7.5 million metric tons.

    Data centers are some of Microsoft’s biggest energy consumers, and as the company continues to transform into a provider of cloud services instead of relying primarily on individual software-license sales, its data center footprint and corresponding energy consumption will only continue to grow.

    Perhaps that is why some of the money from the carbon-fee program has gone to fund unusual data center innovation projects focused on energy use.

    One of those projects is a data center module (or ITPAC, as Microsoft calls it) that gets all of its power from biogas generated as a byproduct of a waste-treatment plant in Cheyenne, Wyoming.

    Fuel cells convert the biogas into electricity that powers 200 servers in the ITPAC. The system recycles excess heat from the data center by pushing it back to the sewage-treatment facility, giving the plant an extra 150 kW to use in converting waste into energy.

    Microsoft calls this setup the “first ever zero-carbon data center.” It’s a pilot project, but once complete, the company will have enough data to expand the concept to other locations that can provide easy access to biogas resources.

    The second big ongoing Microsoft data center innovation project the company has spoken about publicly is the integration of small fuel cells directly into server racks. The work is taking place at the National Fuel Cell Research Center at the University of California, Irvine, and has received a $5 million grant from the U.S. government.

    The idea is to cut electrical losses by getting rid of all the power conditioning equipment that traditionally sits between utility transformers and servers in a data center. The added benefit is reliability, since the approach reduces the number of potential points of failure in the chain.

    The approach “brings the power plant inside the data center, effectively eliminating most of the energy loss that otherwise occurs between the generator and the data center, doubling the efficiency of traditional data centers,” Sean James, senior research program manager at Microsoft, said in a statement.
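
    To see why removing that chain matters, consider a rough end-to-end efficiency calculation. The sketch below is illustrative only: the per-stage efficiencies are assumed values, not figures from Microsoft or UC Irvine, but multiplying them shows how a chain of conversion steps compounds losses that an in-rack fuel cell largely avoids.

        from functools import reduce

        def chain_efficiency(stages):
            """Multiply per-stage efficiencies to get end-to-end efficiency."""
            return reduce(lambda acc, stage: acc * stage, stages, 1.0)

        # Conventional path: central generation, transmission, UPS, PDU, server PSU.
        # All stage values are assumptions for illustration.
        conventional = chain_efficiency([0.40, 0.93, 0.94, 0.98, 0.92])

        # Hypothetical in-rack fuel cell path: the cell feeds the rack directly.
        in_rack = chain_efficiency([0.60, 0.98])

        print(f"Conventional chain: {conventional:.0%} of fuel energy reaches the servers")
        print(f"In-rack fuel cell:  {in_rack:.0%}")

    With these assumed numbers, roughly a third of the fuel energy reaches the servers in the conventional case versus about three fifths with the in-rack cell, in line with the doubling James describes.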

    The size of the fee is calculated by multiplying a division’s carbon emissions by a carbon price, a universal formula, according to Mindy Lubber, president of Ceres, a non-profit that promotes corporate sustainability. Lubber wrote the foreword to a 2013 paper outlining Microsoft’s carbon-fee program.

    “For a company to choose to become carbon-neutral is not novel, but Microsoft is taking an additional step by detailing the way to get there with a carbon fee,” she wrote.

    Nearly 20 percent of the funds generated by the fees pays for internal carbon-reduction grants. Almost 60 percent pays for green power and sustainable-energy innovation. The company spends another 20 percent on carbon-offset projects in communities in developing countries.
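
    The fee arithmetic and the rough allocation split described above are simple enough to sketch in a few lines of Python. The carbon price and the division’s emissions below are hypothetical placeholders; Microsoft’s actual internal figures are not given here.

        CARBON_PRICE_PER_TON = 8.0        # assumed internal price, USD per metric ton of CO2e

        ALLOCATION = {                    # approximate split described in the article
            "internal carbon-reduction grants": 0.20,
            "green power and energy innovation": 0.60,
            "community carbon-offset projects": 0.20,
        }

        def carbon_fee(emissions_tons, price=CARBON_PRICE_PER_TON):
            """Fee charged to a division: its emissions multiplied by the carbon price."""
            return emissions_tons * price

        def allocate(total_fees):
            """Split the collected fees across the program areas."""
            return {area: round(total_fees * share) for area, share in ALLOCATION.items()}

        division_emissions = 50_000       # hypothetical division footprint, metric tons of CO2e
        fee = carbon_fee(division_emissions)
        print(f"Fee charged to the division: ${fee:,.0f}")
        for area, amount in allocate(fee).items():
            print(f"  {area}: ${amount:,}")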

    1:00p
    Dell Beefs Up Data Center Storage Lineup

    Dell has launched new entry-level and enterprise data center storage offerings as well as a new integration with its software-defined solutions portfolio, using Microsoft Storage Spaces. The product releases span what Dell refers to as traditional and new IT architectures, piecing together the right storage solutions for the various workloads and strategies that the modern enterprise must address.

    For small enterprises running traditional IT workloads, Dell has introduced the Dell Storage SCv2000 Series. The arrays include several features previously available only in the higher-end SC Series arrays. Three models will be offered, with expansion options, starting at around $14,000 per array, Dell says.

    The company expanded the PS Series Arrays line with new offerings in the PS6610 enterprise portfolio. Dell says the new arrays deliver up to seven times the performance of the previous generation and scale out to 504TB per array. Aimed at big data needs, the arrays can combine flash and hard drives and deliver up to 98,000 IOPS of random-read performance, according to the company.

    To help enterprises save space by compressing snapshots and replicas, Dell introduced the new EqualLogic PS Series Array Software 8.0, which brings many new features for virtual environments. Dell says the software adds advanced VMware vSphere Virtual Volumes integration that allows arrays to be managed on a per-virtual-machine basis instead of per volume or LUN.

    “The compression feature in EqualLogic PS Series Array Software 8 will have the biggest impact on our business,” said John Dembishack, senior systems engineer at Flagship Networks. “In our test environment, we saw an overall compression ratio of 41 percent, and as high as 50 percent on some individual volumes. This means huge savings on storage capacity and associated costs for our clients.”
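
    A compression ratio quoted this way translates directly into capacity savings. The short sketch below works through the arithmetic; the raw snapshot-and-replica footprint is a hypothetical figure chosen only to illustrate the 41 and 50 percent ratios cited in the quote.

        def compressed_size(raw_tb, savings_ratio):
            """Capacity consumed after compression, where the ratio is the fraction saved."""
            return raw_tb * (1.0 - savings_ratio)

        raw_tb = 100.0                    # hypothetical snapshot/replica footprint in TB
        for ratio in (0.41, 0.50):        # overall and best-case ratios cited above
            remaining = compressed_size(raw_tb, ratio)
            print(f"{ratio:.0%} compression: {raw_tb:.0f} TB of snapshots fits in {remaining:.0f} TB")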

    Under its Blue Thunder software-defined storage initiative, Dell announced an integration of Dell Storage with Microsoft Storage Spaces. Based on what mutual Dell and Microsoft customers asked for, Dell says, the new Scale-Out File Server (SOFS) solution will come in five configurations and will support customers taking an SDS and virtualized-storage approach to various workloads.

    Dell plans global general availability of the offering next month. The company wants Blue Thunder to be a common platform with a common management layer, and it has worked with Nexenta, Nutanix, Red Hat, VMware, and now Microsoft to expand its SDS options.

    3:00p
    Lenovo Sets Sights on High-End x86 Server Market

    Coinciding with Tuesday’s launch of new high-end Intel Xeon processors, Lenovo announced updates to its rack-server portfolio with additional four- and eight-socket x86 servers.

    Powered by the new Intel Xeon E7-4800 v3 series processors, the four-socket Lenovo System x3850 X6 and the eight-socket Lenovo System x3950 X6 make use of modular Compute Books that Lenovo developed to simplify adding and upgrading components, including modules running different classes of Intel processors.

    When configured with Xeon E7-4800 v3 processors, these systems are 50 percent faster than previous generations of Lenovo servers, thanks to DDR4 memory that can be deployed across 192 DIMM slots, according to the vendor.

    Lenovo is also extending support for the new Intel chips to its converged four- and eight-socket Flex System X6 Compute Nodes to provide access to as many as 144 processor cores.

    Stuart McRae, director of enterprise server marketing for Lenovo, says both four- and eight-socket platforms will enable Lenovo to compete more aggressively at the higher end of the x86 server market.

    Prior to the Chinese company’s acquisition of IBM’s x86 server business, most of Lenovo’s server efforts were focused on lower-end tower servers. Since then the company has been making a concerted effort to expand its presence in both the blade and rack server categories.

    Lenovo has stated it plans to increase server sales by 30 percent in the coming year. Much of that sales effort, says McRae, will focus on replacing RISC servers.

    “These platforms are ideal for databases such as Oracle and SAP Hana,” says McRae. “All the benchmarks show these systems are 50 percent faster than any RISC system.”

    To that end, Lenovo today also announced it is packaging new EMC storage systems with these servers to create offerings that are optimized for SAP Hana environments.

    Intel claims Xeon E7-4800/8800 v3 processors on average provide a 40 percent performance improvement over the prior generation, while delivering a six-fold performance improvement on in-memory applications thanks to the new Intel Transactional Synchronization Extensions added to the Xeon instruction set.

    Each member of the family, says Intel, can be configured with up to 18 cores – a 20-percent increase in cores compared to the prior generation – and up to 45 megabytes of last-level cache. The processors themselves can support servers configured with as many as 32 sockets.
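
    Those per-socket figures line up with the system-level numbers quoted earlier in the piece. A quick arithmetic check, using only the counts stated in the article:

        CORES_PER_SOCKET = 18             # top-end Xeon E7-4800/8800 v3 configuration

        def total_cores(sockets):
            """Total cores in a server built from the given number of sockets."""
            return sockets * CORES_PER_SOCKET

        print(total_cores(4))             # four-socket System x3850 X6: 72 cores
        print(total_cores(8))             # eight-socket System x3950 X6 / Flex System X6: 144 cores
        print(total_cores(32))            # Intel's stated 32-socket ceiling: 576 cores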

    The end result, says Intel, is as much as a ten-fold improvement in performance per dollar depending on how those servers are actually configured.

    3:30p
    Proliferation of Remote Data Centers Creates New Networking Challenges

    Brian Lavallée is the Director of Product & Technology Solutions at Ciena.

    For a long time, businesses located their data center facilities in urban metro areas, as close as possible to their places of business, to improve access performance for end users. But this is changing.

    In fact, everyone from network operators and content providers to colocation, hosting, and cloud service providers has started to build data centers farther away from their customers, drawn by lower real estate, power, and cooling costs as well as enhanced geographic resiliency. For instance, Google, Yahoo, Dell, Microsoft, and Amazon have all built large content data centers in the rural town of Quincy, Washington, which has access to several nearby hydroelectric dams that deliver low-cost green power.

    This rise in remote data centers has helped enterprises cut costs significantly, but not without its share of networking challenges.

    Relocation Incentives

    Let’s start by examining the triggers for the exodus of data centers from urban metro areas to remote locations. Data center operators spend a considerable amount of money on hardware, upwards of a billion dollars in some cases, plus associated taxes and fees. One way to keep costs in check is to minimize real estate expenditures. Savvy state legislatures, such as Oregon’s, are aware of this and are working to provide tax incentives for enterprises to relocate data centers to rural communities with available land, a drier climate, and access to lower-cost power.

    Another major operating cost is energy. Data centers are among the largest and fastest-growing consumers of electricity in the United States. According to a report from the Natural Resources Defense Council (NRDC), U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity in 2013, enough to power all the households in New York City twice over. To minimize power costs, rural locations like Quincy have become much more attractive, thanks to a climate and geography that support less costly renewable power sources such as hydroelectric power.

    Third, geographic diversity is fostering the movement to rural data centers. Organizations that maintain secondary data centers on a different power grid from the primary location gain added protection against a complete blackout. Likewise, areas prone to natural disasters, including earthquakes, floods, hurricanes, and tornadoes, are more reliably served by data centers built in different geographic zones, reducing catastrophic risks, natural or manmade.

    Changing Dynamic for Data Center Interconnect (DCI)

    The advantages of moving data center facilities to locations that provide lower-cost space, cheap energy, and geographic diversity are quite evident. However, as data centers are located farther from the urban centers they serve, legacy optical network designs that demarcate metro and regional networks hinder performance and lead to cost-ineffective connectivity. To make remote data centers sustainable over the long term, traditional metro-regional domains must evolve into seamless “user-to-content” and “content-to-content” networks. This allows the programmable network to dynamically connect end users, human or machine, to the required content and resources.

    One challenge that must be addressed is the increase in intra- and inter-metro bandwidth requirements. The rapid growth of video-centric content is driving services from 1 Gigabit Ethernet (GbE) to 10GbE and 100GbE, and that traffic inevitably travels beyond the metro network onto regional and long-haul networks. In addition, enterprises are increasingly adopting a cloud computing and storage utility model, driving the need for connectivity that can be provisioned on demand to match the cloud consumption model. As data travels longer distances to reach end users, highly reliable and resilient network architectures become critical.

    Another challenge brought on by crossing network boundaries is inefficiency. With data centers relocated into a regional network footprint connected via a backbone network, connectivity now has to span an originating metro network, a regional network, and a terminating metro network to reach the secondary data center. This back-to-back, cross-domain approach results in inefficiencies: multiple network demarcation handoffs, increased provisioning and engineering effort for new services (including limitations on available service offerings), and a complicated process for managing end-to-end service performance.

    By blurring the boundaries between metro and regional networks with seamless user-to-content and content-to-content architectures, service providers can address surging traffic growth across geographic metro-regional boundaries at distances of hundreds of kilometers or more.
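
    One physical constraint behind these design changes is propagation delay, which no amount of equipment can remove once a data center sits hundreds of kilometers from its users. The sketch below uses the common approximation of about 5 microseconds per kilometer for light traveling one way through optical fiber; the distances are arbitrary examples, not figures from Ciena.

        ONE_WAY_MICROSECONDS_PER_KM = 5.0     # common approximation for light in optical fiber

        def round_trip_ms(distance_km):
            """Round-trip fiber propagation delay in milliseconds, ignoring equipment latency."""
            return 2 * distance_km * ONE_WAY_MICROSECONDS_PER_KM / 1000.0

        for km in (50, 200, 500):             # metro, regional, and longer-haul DCI distances
            print(f"{km:>4} km fiber path: ~{round_trip_ms(km):.1f} ms round trip")

    At a few hundred kilometers the fiber delay alone reaches several milliseconds round trip, which is why seamless, reliable end-to-end architectures matter more than trying to engineer the distance away.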

    The Evolution to a Content-Based Environment

    As traffic demands center more and more on user-to-content and content-to-content patterns, the traditional demarcation of metro and regional networks is being reconsidered. That is especially the case with the advent of coherent optics and ROADMs, which allow network operators to erase these historical, and now obsolete, demarcation points. Removing the traditional seams makes far simpler networks possible, with less networking equipment and therefore lower capital expenditures and lower associated power, space, and energy costs.

    These optical networking advances ensure that as data centers move away from the urban centers they primarily serve, connections will be reliable, high-performing, and, most importantly, cost-effective.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:54p
    Data Center 2.0 – The Emerging Trend in Colocation

    In the past decade, companies in many industries have discovered the benefits of colocation. A purpose-built data center provides a safe and stable physical environment for a company’s critical computing systems, with sufficient power, cooling, and connectivity to guarantee server uptime and availability. Multiple security layers at the facility ensure the safety of confidential information stored on the client’s servers. Also, with ever-increasing costs of land, construction, power, and labor, it has become more cost-effective for companies to house their mission-critical infrastructure in a facility owned and operated by a colocation provider than to build their own data centers.

    Despite these benefits, service quality can vary by provider. Historically, not all colocation providers have offered the same level of customer service, but a new trend known as “Data Center 2.0” aims to improve it. Download this white paper to read more about Data Center 2.0 and the six key principles you need to know.

    6:18p
    Microsoft Intros Azure Stack for Private Data Centers

    Microsoft introduced a “home version” of its Azure public cloud, expected to enter preview this summer. Unlike home versions of game shows, Azure Stack is the real thing: customers can run the same technology behind Microsoft’s public cloud offering in their own private data center.

    Azure Stack extends Azure to any data center, a move that will boost both Microsoft’s hybrid play and Azure’s adoption in general. The new public-private platform is the company’s power play behind the hybrid trend.

    “They [IT leaders] recognize the careful balance between moving at speed while still providing the level of stability and discipline their companies depend on them for,” wrote Mike Neil, general manager for Microsoft’s Windows Server. “Hybrid cloud is an ideal solution for many organizations facing this situation, bringing together the agility of public cloud and the control of on-premises systems.”

    Azure Stack combines several Microsoft technologies for handling storage, compute, and networking in a cloud setup with the Infrastructure-as-a-Service and Platform-as-a-Service capabilities of Azure. It controls what is provisioned into the enterprise and integrates with internal billing or chargeback systems. The same self-service application provisioning available on public Azure will now be available in an on-premises environment.

    Azure Stack offers a compelling reason to use the Azure platform because it’s easy to develop for Azure in-house. Service providers also stand to gain by offering Azure-based services and connecting to the public counterpart for hybrid needs.

    The service is integrated with Azure Preview Portal for provisioning needs on local cloud or bursting to public cloud, and Azure Service Fabric heals and scales itself to run services in public, private, or hybrid cloud environments.

    Microsoft also previewed Azure Resource Manager, which allows developers to easily choose where to deploy applications: to the public Azure cloud or to an Azure Stack data center. For example, an application can be developed and tested on-premises, then pushed out to public Azure. Custom application templates are built into Resource Manager, which could appeal to service providers that want to offer services based on those templates.

    Microsoft also introduced the cloud-based Operations Management Suite. It makes it easier to monitor and manage applications regardless of where they are running and can extend to and integrate with other cloud infrastructures such as OpenStack and AWS. The suite combines log analytics, security, automation, and application and data protection services, a culmination of Microsoft’s experience running its own cloud.

    “This approach is unique in the industry and gives your developers the flexibility to create applications once and then decide where to deploy them later – all with role-based access control to meet your compliance needs,” wrote Neil.

    9:08p
    Vantage Signs Another Multi-Megawatt Data Center Lab Tenant

    Another tenant has signed a multi-megawatt data center lab deal with Vantage Data Centers on the provider’s massive Santa Clara, California, campus.

    This is the second lab consolidation deal in Santa Clara that Vantage has talked about publicly this year. The trend is driven by the high cost of office space in Silicon Valley and, in the case of the new customer, by a desire to evolve beyond an older data center it had been operating itself.

    The customer’s data center could not be upgraded and was more expensive to operate than leasing space from Vantage. Room for growth on Vantage’s campus also played a part in the decision.

    Vantage’s previous big data center lab deal in Santa Clara was with Symantec. The most recent one is with a Silicon Valley software company whose name the data center provider has not disclosed.

    The Symantec deal included both lower-redundancy lab space and data center space with redundant power infrastructure. The new customer is employing 2N redundancy across its entire data center lab footprint, using about 10 kW per rack.

    Companies have traditionally built IT labs in their office buildings. One of the reasons there is demand for lab space in Silicon Valley data centers is skyrocketing office rent. Lab customers have traditionally shied away from wholesale, since many providers offer uniform space and redundancy they don’t need or use, but Vantage has been vocal about its willingness to tailor its space to a variety of needs.

    Vantage CEO Sureel Choksi recently emphasized the company’s flexibility in terms of what it can deliver. Late last year, COO Chris Yetman discussed Vantage’s desire to help out with Open Compute deployments as well.

    The Santa Clara data center market is on the rise. Vantage competitor CoreSite recently announced it was building a massive data center there for a single customer. Chinese internet giant Alibaba said earlier this year it was going to launch a Silicon Valley data center, and more big tech companies from China have been shopping around for data center space in the region as well.

    Vantage signed its first data center lab deal at the end of 2014 with an undisclosed customer taking 3 megawatts.

    Other Vantage deals in the past year include one with enterprise Hadoop specialist Cloudera for an undisclosed but “significant amount of data center space” and a 1.5 MW expansion of an unnamed e-commerce company’s existing footprint on the campus.

    Enterprise NoSQL provider MarkLogic switched from retail colocation to wholesale space with Vantage in Santa Clara last October.

    9:45p
    QTS Buys Government Cloud Heavyweight Carpathia for $326M

    Data center provider QTS Realty Trust has agreed to buy Carpathia Hosting, a colocation, cloud, and managed services provider that does a lot of business with U.S. government agencies, for $326 million.

    Growing its government cloud business has been a major focus for QTS in recent years. The acquisition will strengthen that part of the strategy and expand both the geographic footprint and the variety of services the Overland Park, Kansas-based company can provide.

    Carpathia, based in Dulles, Virginia, has a number of authorizations necessary to provide cloud and other data center services to federal agencies. It also acts as a provider of VMware vCloud Government Service through a partnership with VMware.

    Carpathia also has a partnership with one of its new parent company’s biggest competitors: Digital Realty. Through this partnership the two companies provide solutions that combine Digital’s data center space and the range of services Carpathia provides.

    Carpathia has data centers in Virginia, Arizona, California, and Canada, as well as in the U.K., the Netherlands, Hong Kong, and Australia. It leases space from data center providers in those locations, including from QTS competitors Digital and Equinix.

    QTS representatives were not immediately available for comment.

    The current QTS footprint consists of 12 data centers across the U.S., including facilities on both coasts and in the Midwest.

    “Joining QTS means leveraging common strengths and continuing the development of innovative hybrid cloud solutions for enterprise and public sector customers,” Peter Weber, CEO of Carpathia, said in a statement.

    QTS provides its government cloud services out of its massive data center outside of Richmond, Virginia. Called QTS Federal Cloud, it is an Infrastructure-as-a-Service offering designed specifically for U.S. government agencies.

    QTS received FedRAMP certification for the offering late last year. The certification indicates to federal agencies that the service meets the government’s security standards for third-party cloud services.

    In 2013, QTS launched a data center lab in the Richmond facility to develop solutions tailored for its government cloud customers.

