Data Center Knowledge | News and analysis for the data center industry

Wednesday, April 16th, 2014

    12:30p
    Optimizing Power System Monitoring and Control

    Bhavesh Patel is Director of Marketing and Customer Support at ASCO Power Technologies, Florham Park, NJ, a business of Emerson Network Power.

    Optimizing power system monitoring and control at a data center can go a long way to helping ensure power reliability – a universal goal of data center management.

    There are four prevalent, sophisticated technologies that can provide key information about the status of onsite power: two legacy systems, Building Management Systems (BMS) and Supervisory Control and Data Acquisition (SCADA) systems, and two newer systems, Data Center Infrastructure Management (DCIM) and Critical Power Management Systems (CPMS).

    The first three aim to monitor and control an entire facility or campus, including critical power. The fourth is designed to control only critical power generation and distribution systems. Each system has its particular capabilities, strengths, and limitations which should be considered when evaluating options.

    A BMS is a computer-based control system that provides integrated management of the control and monitoring of a building’s core electrical and mechanical equipment. Installed in new buildings or in renovations, a BMS typically covers heating, ventilation, and air conditioning (HVAC) systems, and often includes lighting, security, fire alarm, plumbing, and water monitoring systems. A BMS also tracks and schedules building maintenance. A system can use proprietary controls or, as is increasingly common, open standard controls. While it does not have the scope of capabilities of more sophisticated systems and does not specifically address power reliability, a BMS can provide early detection of electrical power problems via basic alarm and control notification, and may include remote as well as onsite alarm monitoring.

    Because of the narrow bandwidth at which it operates, a BMS has limited capabilities with respect to high-speed monitoring and control. The speed and bandwidth at which data transfers between critical power equipment components could incapacitate most BMSs; power quality data such as transient harmonic displays or waveform capture are typical examples.
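    To put that bandwidth gap in rough numbers, the back-of-the-envelope sketch below compares a slow BMS trend-polling loop with continuous waveform capture. The sampling rate, register size, and polling interval are illustrative assumptions, not figures from the article.

        # Illustrative data-rate comparison: BMS-style trend polling versus
        # power-quality waveform capture. All figures are assumptions for scale only.

        SAMPLES_PER_CYCLE = 128      # assumed waveform-capture resolution
        LINE_FREQ_HZ = 60            # nominal mains frequency
        PHASES = 3
        BYTES_PER_SAMPLE = 2         # assumed 16-bit sample values

        waveform_bps = SAMPLES_PER_CYCLE * LINE_FREQ_HZ * PHASES * BYTES_PER_SAMPLE

        POLL_INTERVAL_S = 15         # assumed BMS trend-polling interval
        POINTS_PER_POLL = 20         # assumed points read per poll (volts, amps, status)
        bms_bps = POINTS_PER_POLL * BYTES_PER_SAMPLE / POLL_INTERVAL_S

        print(f"Waveform capture:  ~{waveform_bps:,} bytes/s")    # ~46,080 bytes/s
        print(f"BMS trend polling: ~{bms_bps:.1f} bytes/s")       # ~2.7 bytes/s
        print(f"Ratio:             ~{waveform_bps / bms_bps:,.0f}x")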

    Pro: Popular at standalone, single function buildings, including data centers.

    Con: May not distinguish between critical and non-critical monitoring and does not necessarily include software to manage mission critical operations and processes.

    A SCADA system is designed to monitor and control business operations and processes via sensors placed at various locations, all monitored from a single centralized location using coded signals over communications channels. Most are PLC-based. Aiming to improve efficiency and operational reliability, lower costs, and enhance worker safety, these systems are particularly suited for enterprises spread across large distances or occupying multiple facilities under one management, such as a data center operator with multiple sites.

    Today’s sophisticated SCADAs include a computer and open (off-the-shelf) system architecture that acquires data from, and sends commands to, monitored equipment; a human-machine interface, usually a computer monitor screen; a networked communication infrastructure; sensors and control relays; remote terminal units (RTUs); and programmable logic controllers (PLCs). Functions include alarm handling, trending, diagnostics, maintenance scheduling, logistics management, detailed schematics for a particular sensor or machine, and expert-system troubleshooting guides.
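    To make the RTU/PLC data path concrete, below is a minimal sketch of a Modbus/TCP “read holding registers” request sent to a hypothetical remote terminal unit. The IP address, unit ID, and register addresses are placeholders, and a production SCADA would use a hardened protocol driver rather than a raw socket.

        import socket
        import struct

        def read_holding_registers(host, unit_id, start_addr, count, port=502):
            """Issue a single Modbus/TCP function-03 request and return register values."""
            pdu = struct.pack(">BHH", 0x03, start_addr, count)        # function 03 + address + quantity
            mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)  # transaction, protocol, length, unit
            with socket.create_connection((host, port), timeout=2.0) as sock:
                sock.sendall(mbap + pdu)
                header = sock.recv(7)                                 # response MBAP header
                _, _, length, _ = struct.unpack(">HHHB", header)
                body = sock.recv(length - 1)                          # function, byte count, data
                # (For brevity this assumes the whole reply arrives in one recv call.)
                if body[0] & 0x80:
                    raise IOError(f"Modbus exception code {body[1]}")
                byte_count = body[1]
                return list(struct.unpack(f">{byte_count // 2}H", body[2:2 + byte_count]))

        # Hypothetical poll of an RTU watching a transfer switch; addresses are made up.
        # volts = read_holding_registers("192.0.2.10", unit_id=1, start_addr=100, count=3)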

    Pro: Can provide equipment status to remote Internet-connected mobile devices including tablets and smartphones, which can expedite alarm notification to key personnel, and improve alarm handling and response time.

    Con: For alarm handling, a cascade of rapid alarm events may ‘hide’ the underlying cause. Standard protocols and the Internet accessibility of networked SCADA systems also make them susceptible to remote attack, as well as to natural or human-made electromagnetic pulse (EMP) events. Not the best choice when reliable power is critical.

    The newer systems are DCIMs and CPMSs.

    A DCIM, which by design is focused on data centers, can provide a more holistic view of a data center’s IT and facilities infrastructure. DCIMs can handle large volumes of data generated for analytics and have specialized capabilities to monitor, measure, and manage both facility infrastructure components and IT equipment, especially at larger data centers, using data derived from SNMP, Modbus, or BACnet. A DCIM can not only monitor the facility infrastructure but can also apply powerful analytics to provide “intelligence” and reporting for decision making that can improve operational efficiency. That said, it cannot do everything a BMS does and may be deployed as a complement to a BMS.

    Pro: A sophisticated system can improve uptime, enable efficient capacity planning and management, and provide valuable business analytics along with deeper process and change management.

    Con: As with a BMS, a DCIM needs to be sophisticated enough to import large volumes of operational data from power controls in order to effectively monitor and control critical power systems. However, much of that data transfer (such as transient harmonic displays or waveform capture) occurs at speeds and bandwidths that may incapacitate many DCIM systems. Because there are no standardized platforms or protocols comparable to Modbus or BACnet at that level of performance, IT would need to rely on vendor-proprietary software to benefit from such sophisticated analysis.
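    As a rough illustration of what “importing operational data” into a DCIM looks like in practice, the sketch below normalizes readings arriving over different protocols into one record format. The device names, OID, and register map are invented for the example and do not come from any particular DCIM product.

        from dataclasses import dataclass
        from datetime import datetime, timezone

        @dataclass
        class PowerReading:
            """One normalized data point, regardless of source protocol."""
            device: str
            metric: str          # e.g. "kw_load", "volts_l1"
            value: float
            unit: str
            source: str          # "snmp", "modbus", "bacnet", or vendor-proprietary
            timestamp: datetime

        def from_snmp(device, oid_values):
            """Map hypothetical PDU SNMP values (tenths of a kW) to normalized readings."""
            ts = datetime.now(timezone.utc)
            return [PowerReading(device, "kw_load", v / 10.0, "kW", "snmp", ts)
                    for oid, v in oid_values.items() if oid.endswith(".power")]

        def from_modbus(device, registers, scale=0.1):
            """Map hypothetical UPS holding registers (tenths of a volt) to readings."""
            ts = datetime.now(timezone.utc)
            names = ["volts_l1", "volts_l2", "volts_l3"]
            return [PowerReading(device, name, reg * scale, "V", "modbus", ts)
                    for name, reg in zip(names, registers)]

        # Merge the feeds into one stream a DCIM could trend, alarm on, or analyze.
        readings = (from_snmp("pdu-row3-a", {"1.3.6.1.4.1.9999.power": 42})
                    + from_modbus("ups-1", [2401, 2398, 2405]))
        for reading in readings:
            print(reading)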

    A CPMS, which can be applied to any type of facility, is designed to monitor, control, and analyze equipment for power generation and distribution, both for normal power and for emergency/backup power. It is an excellent choice when power reliability is crucial 24/7, as is the case in a data center serving a varied clientele. Usually, a CPMS is set up to monitor all power data from the point where electricity enters the facility from the utility, throughout the entire facility. For the emergency/backup power system, a CPMS generally oversees gen-sets, transfer switches, paralleling control switchgear, uninterruptible power systems, circuit breakers, bus bar, and other critical power distribution equipment.

    Today’s full-featured CPMSs have wide bandwidth, operate at extremely high speed, and can cache or share large amounts of data from one device to another without disrupting building functions.

    A CPMS will monitor normal and emergency voltage and frequency, current, power, and power factor; indicate transfer switch position and source availability; and display transfer switch event logs, time-delay settings, ratings, and identification. It will also facilitate critical power system load management, bus bar optimization, testing, maintenance, reporting, trending, and analytics, all with the aim of ensuring power reliability during surges, sags, and outages.

    In addition, a CPMS often has functions and alarms integrated into the data center’s building management system (BMS). For example, a CPMS could send automatic alerts on system operation via email or text, or forward selected system alarms to the BMS. High-end CPMSs feature integrated devices communicating on a dedicated network.
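    A minimal sketch of that kind of alert forwarding follows, assuming an SMTP relay is reachable from the monitoring network; the host names, recipients, and alarm payload are placeholders.

        import smtplib
        from email.message import EmailMessage

        def forward_alarm(alarm, recipients, relay="smtp.example.internal"):
            """Email a CPMS alarm summary so BMS operators and on-call staff see it."""
            msg = EmailMessage()
            msg["Subject"] = f"[CPMS {alarm['severity']}] {alarm['device']}: {alarm['event']}"
            msg["From"] = "cpms-alerts@example.internal"
            msg["To"] = ", ".join(recipients)
            msg.set_content(
                f"Device:   {alarm['device']}\n"
                f"Event:    {alarm['event']}\n"
                f"Severity: {alarm['severity']}\n"
                f"Time:     {alarm['timestamp']}\n"
            )
            with smtplib.SMTP(relay) as server:
                server.send_message(msg)

        # Hypothetical use: a transfer switch reports loss of its normal source.
        forward_alarm(
            {"device": "ATS-2B",
             "event": "Normal source unavailable, transferred to emergency",
             "severity": "CRITICAL",
             "timestamp": "2014-04-16T12:30:05.123Z"},
            ["facilities-oncall@example.internal"],
        )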

    Compared to the monitoring and control capabilities of BMSs, SCADAs, and DCIMs, which address much more than electrical power components, CPMS capabilities are more narrowly and sharply focused, dedicated to managing critical power generation and distribution. While CPMSs typically work in concert with a BMS, SCADA, or DCIM, they provide the sophistication, speed, and analytics specific to power generation and distribution.

    The ability of a CPMS to operate at extremely high speed and share or cache tremendous amounts of data between devices without disrupting building functions is advantageous for post-event troubleshooting or forensics, which benefits from fast and accurate time marks to track down where and when things went wrong. The analytics benefit from a recording scale fast enough to identify, with time marks, precisely what started an event, often within milliseconds. For example, the analytics can examine why the data center lost a particular breaker that tripped the PDU and set off a chain of events ending in a switchover to the UPS, and help determine whether the precipitating event was an electrical spike, a floating ground, or a short.
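    The value of millisecond time marks can be illustrated with a toy reconstruction like the one below. The event log is invented; a real CPMS would pull these records from its own high-resolution history rather than a hard-coded list.

        from datetime import datetime

        # Hypothetical millisecond-stamped events captured around a PDU breaker trip.
        events = [
            ("2014-04-16 12:30:05.412", "UPS-1",    "Transferred load to battery"),
            ("2014-04-16 12:30:05.127", "PDU-3",    "Branch breaker 3B tripped"),
            ("2014-04-16 12:30:05.031", "Feeder-A", "Voltage transient: 1.8x nominal, 4 ms"),
            ("2014-04-16 12:30:05.298", "ATS-2B",   "Source anomaly detected"),
        ]

        def parse(ts):
            return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")

        # Sorting by time mark turns a confusing alarm cascade into a sequence; the
        # earliest record in the window points at the precipitating event rather
        # than its after-effects.
        for ts, device, event in sorted(events, key=lambda e: parse(e[0])):
            print(f"{ts}  {device:<9} {event}")

        root = min(events, key=lambda e: parse(e[0]))
        print(f"\nLikely precipitating event: {root[1]} - {root[2]}")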

    Typically, CPMSs have the scalability to accommodate expansions and upgrades of a data center facility or campus as the enterprise grows and if/when the business model changes.

    Pro: Can take advantage of continuous monitoring, which utilizes intelligent controls and sensors along with testing and retesting to make sure facility systems operate as designed and constructed, not only initially but also over time as equipment ages, with the aim of optimizing owner cost and occupant comfort.

    Con: Does not have some of the IT operational details that a DCIM can offer a data center. Furthermore, a CPMS generates a lot of information which, without good data visualization capability built into the system, could become overwhelming and even unproductive.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:02p
    Fusion-io Accelerates SQL Server 2014

    Unlocking performance gains for MS-SQL, Fusion-io announced that the ioMemory platform has been optimized for performance with Microsoft SQL Server 2014, which was made generally available on Tuesday as a part of Microsoft’s data platform.

    The Fusion ioMemory platform builds upon the in-memory innovation in SQL Server 2014, delivering up to 4x improvements in transactions per second and a significant reduction in data latencies. SQL Server 2014 delivers new in-memory capabilities built into the core database for online transaction processing (OLTP) that speed the analysis of real-time transaction data.

    “We have seen in-Memory OLTP capabilities in SQL Server 2014 provide tremendous performance improvements to business applications,” said Eron Kelly, general manager, SQL Server product marketing, Microsoft. “Adding the Fusion ioMemory platform to an existing SQL Server 2014 in-memory OLTP configuration can deliver up to 4x additional performance gains, building on our in-memory innovation to provide even greater performance benefits so customers can quickly uncover valuable business insights from their data and transform their business with greater scale at a low total cost of ownership.”

    Fusion-io’s persistent, high-capacity ioMemory platform gives servers native access to flash memory to improve data center efficiency. Fusion-io also supports Buffer Pool Extension, a new feature in SQL Server 2014. With Buffer Pool Extension and low-latency Fusion-io flash memory, customers can drastically reduce user wait time throughout their database environment. Buffer Pool Extension integrates with the SQL Server 2014 Database Engine buffer pool to significantly improve I/O throughput and reduce disk latency by caching clean data pages on flash instead of re-reading them from traditional storage.
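    For readers curious what Buffer Pool Extension looks like from the administrator’s side, here is a hedged sketch that enables it over ODBC. The connection string, file path, and cache size are placeholders, and the T-SQL follows the documented SQL Server 2014 syntax rather than anything Fusion-io specific.

        import pyodbc

        # Placeholder connection string; assumes a SQL Server 2014 instance, an ODBC
        # driver, and a flash volume mounted at F:\ (for example, an ioMemory device).
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 11 for SQL Server};SERVER=sql2014-host;"
            "DATABASE=master;Trusted_Connection=yes;",
            autocommit=True,
        )
        cursor = conn.cursor()

        # Extend the buffer pool onto the flash device (size is an example value).
        cursor.execute("""
            ALTER SERVER CONFIGURATION
            SET BUFFER POOL EXTENSION ON
                (FILENAME = N'F:\\SSDCACHE\\BufferPool.BPE', SIZE = 64 GB);
        """)

        # Confirm the extension is active and check its current size.
        cursor.execute("""
            SELECT path, state_description, current_size_in_kb
            FROM sys.dm_os_buffer_pool_extension_configuration;
        """)
        for path, state, size_kb in cursor.fetchall():
            print(path, state, f"{size_kb / 1024 / 1024:.0f} GB")

        conn.close()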

    Fusion-io will embark on a 35-city tour appearing at Microsoft Technology Centers (MTCs) worldwide to showcase how Fusion ioMemory products maximize SQL Server 2014.

    2:30p
    Texas Instruments Launches Internet of Things Ecosystem

    Texas Instruments introduces an Internet of Things (IoT) ecosystem for cloud service providers, while ARM and Sensor Platforms team up to launch open source software for sensor hubs.

    Texas Instruments launches IoT Cloud ecosystem. Texas Instruments (TXN) introduced a third-party ecosystem of Internet of Things (IoT) cloud service providers. This will let manufacturers using TI technology connect with the IoT more easily and rapidly.

    The first members of the ecosystem include 2lemetry, ARM, Arrayent, Exosite, IBM, LogMeIn, Spark, and Thingsquare. Each member has demonstrated its cloud service offering on one or more of TI’s wireless connectivity, microcontroller (MCU) and processor solutions for a wide-range of IoT applications spanning industrial, home automation, health and fitness, automotive and more.

    The new ecosystem is open to cloud service providers with a differentiated service offering and value-added services running on one of TI’s IoT solutions.

    “We believe the IoT’s true value lies in empowering companies to transform how they do business. By leveraging insights gained from connected, data-driven devices, you can unlock new opportunities to optimize operations, boost revenue and delight customers,” said Mario Finocchiaro, director of Xively business development at LogMeIn. “By combining TI’s products with Xively’s IoT platform and Business Services, we believe that, together, we can provide the hardware, software and expertise needed to help businesses quickly turn their IoT visions into reality.”

    ARM and Sensor Platforms introduce open source software. Algorithmic sensor software company Sensor Platforms and ARM introduced Open Sensor Platform (OSP), open source software for sensor hub applications. OSP will simplify the integration of sensors across multiple applications and provide a flexible framework for more sophisticated interpretation and analysis of sensor data.

    OSP has been designed for the ARM architecture, which is pervasive in sensor hub applications because of its low-power capabilities, simplicity of integration, and broad ecosystem. The software will be released under the Apache License, Version 2.0, and the project will actively manage and incorporate community contributions.

    “Contextual, sensing information is becoming more important as end devices for the Internet-of-Things rapidly proliferate,” said Charlene Marini, Vice President of Marketing, Embedded Segment, ARM. “As an open source platform for sensor fusion fundamentals, OSP will enable a community of developers to accelerate new functionality for ongoing innovation in sensor hubs across applications. As a result, we should see devices and applications that are more aware of their user and their environment, making technology more useful for all.”
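    To give a flavor of the sensor fusion work a framework like OSP is meant to host, below is a generic complementary-filter sketch for estimating pitch from accelerometer and gyroscope samples. It illustrates the technique only and does not use OSP’s actual APIs; the sample data is made up.

        import math

        def accel_pitch(ax, ay, az):
            """Pitch (radians) implied by the gravity vector alone; noisy but drift-free."""
            return math.atan2(-ax, math.sqrt(ay * ay + az * az))

        def fuse_pitch(samples, dt=0.01, alpha=0.98):
            """Complementary filter: trust the integrated gyro short-term and the
            accelerometer long-term. `samples` yields (ax, ay, az, gyro_pitch_rate)
            tuples in units of g and rad/s."""
            pitch = 0.0
            for ax, ay, az, gyro_rate in samples:
                gyro_estimate = pitch + gyro_rate * dt      # responsive, but drifts
                accel_estimate = accel_pitch(ax, ay, az)    # slow, absolute reference
                pitch = alpha * gyro_estimate + (1 - alpha) * accel_estimate
            return pitch

        # Made-up samples: a device gradually tilting while the gyro reports ~0.1 rad/s.
        samples = [(-0.05 * i, 0.0, 1.0, 0.1) for i in range(10)]
        print(f"Fused pitch estimate: {fuse_pitch(samples):.3f} rad")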

    2:44p
    Riverbed’s SteelFusion 3.0 Will Transform Branch Office IT

    Changing the economics of IT for remote offices, Riverbed (RVBD) launched SteelFusion 3.0,  the first branch converged infrastructure that centralizes data in the data center, delivers local performance, and offers nearly instant recovery at the branch office.

    Previously known as Riverbed Granite, the new SteelFusion 3.0 brings together branch servers, storage, networking, and virtualization infrastructure into a single solution that, as measured by Taneja Group, reduces the average time to provision branch services by 30x (from five hours to ten minutes) and the time to recover from branch outages by 96x (from 24 hours to 15 minutes).

    “With more and more branches and data, organizations are struggling with the cost and inefficiency of managing islands of infrastructure – servers, apps and storage – outside the data center,” said John Martin, Senior Vice President and General Manager, Storage Delivery at Riverbed. “Riverbed developed the first solution that delivers all the benefits of converged infrastructure found in the data center, such as performance, security, scalability, better disaster recovery, and manageability, but optimized for the unique requirements of the branch. Businesses get the best of both worlds:  centralizing data to eliminate branch downtime, improve data security, and lower TCO while still delivering local access to apps and data for branch workers.”

    With the 3.0 release, the SteelFusion 1360P features a 6x performance gain over the previous high-end SteelFusion appliance. Higher capacity on the storage delivery controller, the SteelFusion Core 3000, means storage admins can now support up to 100TB of consolidated data, a 3x improvement over the previous maximum of 35TB.

    A new predictive pre-fetch capability on the converged appliance, the Branch Recovery Agent, further reduces recovery time from major outages. This helps ensure employees can remain productive with constant access to applications and data, even during major disasters.

    Release 3.0 also simplifies operations with an improved scale-out architecture and pooled management of storage delivery controllers. A new recovery agent for the converged appliance makes it simpler and faster for administrators to recover a branch from a major outage.

    Support for NetApp cluster mode and EMC VNX2 snapshots continues the integration advancements that ensure SteelFusion works seamlessly with data center SAN infrastructure to provide backup consolidation.

    Riverbed SteelFusion 3.0 will be available in May. This video shows a Riverbed chalk talk, describing instant branch office recovery with SteelFusion 3.0.

    3:31p
    WHIR Networking Event: Toronto

    The WHIR brings together professionals in the hosting industry for fun (and free!) networking events at different locales in the U.S. and internationally as well. The one-night event is an opportunity to meet like-minded industry executives and corporate decision makers face-to-face in a relaxed environment with complimentary drinks and appetizers.

    The WHIR provides a great local venue, and you do the rest – do business, make new connections and learn more about those in the web hosting industry. Gather with your colleagues from Toronto and meet new associates from your region.

    Date: Thursday, May 15, 2014
    Time: 6:00 pm to 9:00 pm
    Place: Pravda Vodka House
     44 Wellington St. E., Toronto, ON, M5E 1C7, Canada

    RSVP Today!

    YOU MUST BRING A BUSINESS CARD TO WIN A PRIZE

    For more events, return to the Data Center Knowledge Events Calendar.

    5:00p
    IDC’s Analysis of Worldwide DCIM Vendors

    Data center management and control has come a long way. Now, we have direct integration with numerous components – both logical and physical.

    As organizations create new demands around their IT environment, the data center will need to be better and more intelligently managed. This means visibility into virtual, physical, and distributed environments.

    Here’s the challenge: Confusion exists in the market today about the scope of what DCIM should be, and participants in this space are seeking to define their brand by offering full-scale solutions that include complete power and cooling and IT infrastructure visibility, control, and analytic capabilities.

    In this Emerson-sponsored whitepaper from IDC, we quickly identify the key players and drivers behind the modern DCIM market. Based on IDC’s analysis of current product capabilities and go-to-market strategies, as well as general business analysis, the report finds that the DCIM market is evolving, with several providers realizing strong growth and adoption and others struggling to survive in a competitive market. Considering changing business needs, IDC believes the following to be critical success factors in this evolving market:

    • Technology partnerships.
    • Selling partnerships.
    • Open architecture that supports full upstream and downstream support of data.
    • Exceptional customer service with global service and support.

    In addition to looking at what DCIM needs to be, this IDC study uses the vendor assessment model called “IDC MarketScape” to understand specific offerings. This research is a quantitative and qualitative assessment of the characteristics that explain a vendor’s success in the marketplace and help anticipate its ascendancy.

    As the paper outlines, there is a direct need for data centers to optimize their control layer. Datacenters are expanding in capacity to power a new era of computing built on mobile devices and applications, cloud services, big data and analytics, and social technologies. Expansion into new geographies, improvements in disaster recovery, and the growth of workloads beyond the power and processing capabilities of current datacenters have challenged datacenter managers to find ways to manage resources and change.

    Download this whitepaper today to learn about how IDC evaluated 10 providers that offer a DCIM solution which met the following criteria:

    • Provide visibility into one or more elements on the facilities side of the datacenter
    • Provide visibility into one or more elements on the IT side of the datacenter
    • Earn at least $2 million from the sale of its DCIM solution in 2012

    In its research, IDC evaluated:

    • CA Technologies
    • Emerson Network Power
    • Nlyte
    • Panduit
    • Raritan
    • Schneider Electric
    • And others

    In addition to evaluating some of the top DCIM vendors, the paper offers sound advice on acquiring a DCIM solution. Remember, DCIM purchase decisions for both enterprise and service provider datacenters will be shaped by the need to accurately plan for future capacity, increase the efficiency of datacenter resources, and manage growth and change.

    7:41p
    More Downtime for HostGator and BlueHost Customers as Router Issues Plague Utah Data Center

    This article originally appeared at The WHIR.

    BlueHost and HostGator have been hit by another outage on Wednesday. According to HostGator, router issues in its Provo, UT, data center are to blame. It appears that the issues began around 11 am ET this morning.

    These lengthy outages are becoming too common for some customers, after a massive outage in the same Provo data center took down sites for hours on New Year’s Eve. Some customers on Twitter have reported spotty access to sites over the past three days.

    While it is unclear how many customers have been impacted by Wednesday’s outage, some BlueHost VPS customers are unable to access their accounts. The outage is also affecting BlueHost’s own sites, including its help.bluehost.com support site. Support.hostgator.com appears to be loading, though, and HostGator is updating customers on its support forum.

    “Our NetOps team is still working to resolve an issue with a router and working to restore services for all affected customers,” Sean Valant, PR manager for HostGator said in the forum post. “Some sites are coming back online, though we appreciate everyone’s patience as we work through the issue. As we get more information, we’ll continue to provide updates here on the forums till everyone affected has their service restored.”

    Customers are reporting issues in contacting BlueHost support as well because its live chat and phone support are down. The company is using its Twitter @bluehostsupport and Facebook to update customers at this time.

    As of 3 pm ET, BlueHost Support assured customers on Twitter that its engineers have identified the problem and its admins are working to resolve the issue. Neither BlueHost nor HostGator has provided an ETA for a full resolution, but both will be updating customers with details of the outage.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/downtime-hostgator-bluehost-customers-router-issues-plague-utah-data-center

    7:47p
    Open IX News: AMS-IX Opens at Sabey, EvoSwitch and Continuum Earn Certification

    It’s been a busy week for data center providers focused on the Open-IX opportunity. EvoSwitch, home of the first London Internet Exchange (LINX) node in the U.S., has received Data Center Technical Standards certification from Open-IX  for its WDC1 data center in northern Virginia. Chicago provider Continuum has also received Open-IX certification.

    As a refresher, Open-IX is a different approach to existing Internet traffic exchanges. Rather than being concentrated in a few select facilities, Open-IX  adopts the European nonprofit model in which exchange operations are spread across multiple data centers within a geographic market. The goal is to create a new network of Internet Exchange Points housed in multiple neutral data center facilities that allow participants to interact and exchange content.

    The first Open-IX certified facilities are now popping up. CyrusOne was the first to achieve certification. OIX standards are already being adopted by LINX, AMS-IX, Digital Realty, DuPont Fabros, Raging Wire, and many other data center and IX providers.

    Open-IX created the Data Center Technical Standards to establish a recommended standard for Data Centers to support an Internet Exchange, as defined by the Open-­IX IXP Technical Standards. The standard also provides guidance for acceptable exceptions for pre-existing facilities.

    AMS-IX

    Sabey Data Center Properties and AMS-IX (Amsterdam Internet Exchange) have launched an AMS-IX New York point-of-presence within Intergate.Manhattan, Sabey’s 1 million-square-foot facility at 375 Pearl Street in Lower Manhattan. The exchange has already signed Netflix, the world’s leading Internet television network, and IX Reach, a global leading provider of wholesale carrier solutions, as the first two customers for AMS-IX New York. At present, 16 additional customers have applied for a connection to the exchange.

    The opening ceremony was attended by Eberhard van der Laan, Mayor of the City of Amsterdam, as well as officials from Sabey and AMS-IX. “Historically, the Dutch people are global trading leaders,” said van der Laan. “We were the first European settlers to come to this island, and we named it New Amsterdam. Today, I envision the same business potential here that my ancestors recognized – New York City as a global center of the new Internet economy.”

    “I strongly believe that a distributed exchange model offers the best value by enabling a choice of connectivity options in as many data center locations as makes business sense,” said Job Witteman, CEO of AMS-IX. “In Amsterdam, the AMS-IX exchange platform is currently distributed over 12 different data centers serving well over 650 IP networks from around the globe, making it the largest Internet exchange in the world.”

    “We are very happy to partner with AMS-IX New York on this important market-changing initiative,” said John Sabey, President of Sabey Data Centers. “Offering more choice in the New York City market to carriers and customers is good for everyone.”

    EvoSwitch Lands Open-IX Certification in Virginia

    EvoSwitch is an Amsterdam-based provider that first entered the U.S. market in 2012, leasing space in the COPT building in Manassas. The facility is outside of the major data center cluster in Ashburn, meaning it stands to benefit from adoption of the European exchange model. EvoSwitch has built its story around connectivity, and was the first company to land a LINX node in the states.

    “We are very pleased to receive recognition from Open-IX for our data center operations in Virginia,” said Eric Boonstra, EvoSwitch USA President. “Not just for meeting the strict technical standards, but also for our approach to neutral interconnection and willingness to partner with outside, neutral and distributed Internet Exchanges like we do with LINX NoVA here in Virginia. We were the first data center operator to introduce the European ‘style’ Internet Exchange (IX) in the USA and to announce our partnership with LINX NoVA. We firmly believe that for our customers to have choice, we cannot tie them to interconnection services only available in our data centers. We are already seeing increased demand for colocation services since LINX NoVA went live inside our facility.”

    “EvoSwitch USA joins an elite and growing list of DC OIX-2 data centers in the Northern Virginia market,” said Barry Tishgart, Board Member of the Open-IX Association. “Certification is achieved through a thorough peer-reviewed process; we welcome EvoSwitch USA to the Open-IX standards.”

    Continuum Receives Open-IX Certification in Chicago

    Continuum Data Centers (CDC) is a multi-tenant data center operator in the western Chicago suburbs. The company is currently redeveloping an 80,000 square foot facility in West Chicago; its CDC 603 data center there is expected to open in mid-2014. Its CDC 835 is a 21,000 square foot data center in Lombard, Ill. The company, which focuses on colocation just outside of Chicago proper, also stands to benefit from the decentralization of exchanges.

    “We are extremely excited to receive data center certification from OIX, and eager to participate in the continued growth of the organization,” said Eli D. Scher, CEO of Continuum. “OIX from our perspective is an ideal platform to compel fiber carriers to build into our facility, and to create a transparent and efficient peering environment in our meet-me room.” He continued: “The fiber density and diversity we are currently building ultimately serves our customers and makes our facility a more integral part of the IT infrastructure landscape.”

    9:07p
    As It Expands in San Antonio, Microsoft Sees Data Centers Transforming the Power Grid

    Microsoft wants to use its data centers to help transform the way electricity is delivered in the United States. That process will focus on San Antonio, where the company has confirmed plans to build a $250 million data center, along with a research project to develop new ways to use renewable energy to power Microsoft’s cloud.

    With its expansion, Microsoft will build a new 256,000 square foot data center next door to its existing facility in Westover Hills, a western suburb of San Antonio. When the new building is complete, Microsoft will have more than 700,000 square feet of data center space on its San Antonio campus. Data Center Knowledge reported the expansion plans last month, but Microsoft had no comment at the time.

    Microsoft is also teaming with the University of Texas at San Antonio (UTSA) on a three-year research project to develop sustainable technologies to make data centers more energy efficient and economically viable. Microsoft director of energy strategy Brian Janous says the project is part of the company’s vision for a new energy paradigm.

    “This is one of our first major research partnerships to use data centers as a laboratory for next-generation energy technologies, but it certainly will not be our last,” Janous said in a blog post. “Distributed generation will be an important part of how we power our datacenters as we continue to pursue Microsoft’s energy strategy of transforming the energy supply chain.”

    An Ambitious Vision for Distributed Power

    The research is part of Microsoft’s focus on distributed power, part of a larger vision to break new ground in integrating cloud computing and distributed energy generation. The company hopes to place data centers alongside sources of renewable energy, creating “data plants” that operate with no connection to the utility power grid, using methane or other gases from landfills and water treatment plants. Microsoft is also researching the use of data center racks with on-board fuel cells, and even a distributed network of in-home data furnaces that use server exhaust to heat living spaces.

    That big vision will start to take shape at UTSA. The multi-disciplinary research will focus on new distributed energy technology that reduces energy consumption and emissions, improves reliability, and contributes to a sustainable energy future. Microsoft will contribute $1 million to UTSA’s Sustainable Energy Research Institute (SERI) to support the project.

    “As part of this research, UTSA students will work hand-in-hand with Microsoft researchers to look into new ‘fast-start generation’ energy technologies such as micro-turbines to replace the diesel generators that are used during times of peak demand and grid outages,” said Janous.

    “Our objective is to bring together technology, economics and commercialization to create a smart intelligent energy system,” said C. Mauli Agrawal, vice president for research at UTSA. “We want to identify economically viable technologies that will reduce the environmental footprint of data centers.”

    Greenpeace recently called out a number of companies, and the tactic seems to be working. Microsoft wasn’t on the “nice” list, despite making significant strides in the last year. So the company is being more transparent about the significant work it has already done, and is stepping up its efforts.

    Ongoing Focus on Data Center Sustainability

    Microsoft is no slouch when it comes to renewables. Microsoft recently announced a power purchase agreement with a 110 megawatt wind farm in Texas. The company has also been working on data center innovations like in-rack power generation and biogas-powered datacenters. In November, the company began testing racks with built-in fuel cells, a move that would eliminate the need for expensive power distribution systems seen in traditional data centers – making them cheaper and greener. There’s also the waste powered data center in Cheyenne, Wyoming.

    “These initiatives are bound together by our objective to transform the energy supply chain toward radically greater efficiency and reduced environmental impact,” Janous writes.  The company isn’t solely focused on green sources of energy; it’s doing a lot of work within the data center.

    It’s thinking about how to integrate power generation and energy storage into the design of a datacenter. The new agreement with UTSA will go a long way toward achieving this somewhat less glamorous but immensely important goal. Making data centers more efficient benefits the entire industry, and this research might help companies that don’t have access to 100 megawatts of wind power become more energy efficient at a smaller scale.

    Why San Antonio?

    Microsoft opened a data center in San Antonio in 2009. It was one of the first in the industry to employ technologies like using wastewater for cooling to reduce energy consumption, and the recent 110 megawatt wind energy purchase was made there as well.

    “Very few cities have embraced the clean energy economy like San Antonio and its mayor, Julian Castro,” writes Janous. “In addition, The University of Texas at San Antonio (UTSA) has demonstrated its commitment to a more sustainable energy future by establishing The Texas Sustainable Energy Research Institute (SERI) under the leadership of Dr. Les Shephard, formerly of Sandia National Lab.”

    Data Center Knowledge Editor-in-Chief Rich Miller contributed to this story.

