Data Center Knowledge | News and analysis for the data center industry
 

Thursday, January 31st, 2013

    1:34p
    Microsoft’s $1 Billion Data Center

    Some of the data center modules at Microsoft’s campus in Boydton, Virginia, are housed outdoors, with no roof. These modules, known as IT-PACs, house thousands of servers to support Microsoft’s fast-growing cloud computing operation. (Photo: Microsoft)

    With its latest expansion, Microsoft’s investment in its data center campus in southern Virginia has reached $997 million – and that’s minus the cost of a roof.

    The Microsoft campus in Boydton, Va. will expand to include two more data center facilities, the company said yesterday. Microsoft also provided a first public glimpse of its new data center design, which features pre-fabricated modules housing thousands of servers, some of which sit on a slab, open to the sky and the outdoors.

    The Virginia facility marks the latest evolution of Microsoft’s modular approach, which has transformed the company’s Internet infrastructure and its supply chain, allowing for faster and cheaper deployment of cloud capacity. Microsoft has also pushed the boundaries of data center design, abandoning chillers and data halls – and in some cases, even roofs.

    Microsoft now builds much of its data center equipment in factories, and ships the components to its data center campuses, where they are assembled on-site. This focus on PACs (Pre-Assembled Components) allows Microsoft to standardize many elements of its IT and power infrastructure.

    A Module for All Seasons

    The key building block in this model is the IT-PAC, a container-like modular data center designed to operate in all environments. It employs a free cooling approach in which fresh air is drawn into the enclosure through louvers in the side of the container, which effectively functions as a huge air handler with racks of servers inside.

    In Virginia, these IT-PACs can operate outdoors, realizing a vision put forth by Christian Belady, general manager of Microsoft Data Center Services. Back in 2008, Belady and his Microsoft colleague Sean James put a rack of servers in a pup tent for eight months, with 100 percent uptime. That experiment helped the data center industry rethink assumptions about the impact of temperature and humidity on server health.

    When Microsoft first developed the IT-PAC modular deployment model, it considered building data centers with no roofs, but ultimately opted for a lightweight building to house the modules. But Belady remained intrigued by the roof-less data center, as noted in an interview with Data Center Knowledge in 2011, while the Virginia campus was under construction.

    Dramatically Lower Water Use

    That vision has been realized in the latest phases at the Boydton campus, which also houses more traditional data center space. Microsoft has built out the first two phases of the campus, which it describes as “316,300 square feet and growing.” Parts of the campus feature modules housed under pre-manufactured metal buildings, similar to a design the company used in Quincy, Washington.  In other parts, the IT-PAC modules are housed outside.

    The climate in Virginia is warmer than at previous sites where Microsoft has deployed modules – including Chicago, Dublin and Quincy. The IT-PACs use an adiabatic cooling system in which warm outside air enters the enclosure and passes through a layer of media, which is dampened by a small flow of water. The air is cooled as it passes through the wet media. Microsoft says this approach allows it to keep servers cool while using just 1 percent of the water consumed in a traditional data center.

    In cooler weather, waste heat from the servers can be mixed with incoming outside air to raise the supply temperature as needed.
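
    For readers curious about the underlying math, here is a rough sketch of how evaporative media and air mixing set the supply temperature. It is an illustration only, not Microsoft’s control logic; the effectiveness value and the temperatures are assumed.

        # Rough illustration of direct evaporative ("adiabatic") cooling and
        # return-air mixing. NOT Microsoft's design; all numbers are assumed.

        def evaporative_supply_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.85):
            """Standard media-effectiveness model: supply air approaches the
            wet-bulb temperature as effectiveness approaches 1.0."""
            return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

        def mixed_air_temp(outside_c, return_c, outside_fraction):
            """Blend outside air with warm server return air, the approach used
            on cooler days to raise the supply temperature."""
            return outside_fraction * outside_c + (1 - outside_fraction) * return_c

        # Warm afternoon, assumed 35 C dry bulb / 24 C wet bulb:
        print(round(evaporative_supply_temp(35.0, 24.0), 1))   # roughly 25-26 C supply air
        # Cool day: blend 5 C outside air (60 percent) with 35 C server return air:
        print(round(mixed_air_temp(5.0, 35.0, 0.6), 1))        # 17.0 C supply air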

    The Boydton facility, which opened in February 2012, operates with a Power Usage Effectiveness (PUE) rating between 1.13 and 1.2 at peak usage.
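
    For context, PUE is simply total facility power divided by the power delivered to IT equipment, so a rating in that range implies relatively little overhead for cooling and power distribution. A quick illustration (the 1,000 kW IT load below is hypothetical):

        # PUE = total facility power / IT equipment power. Illustrative only.

        def overhead_kw(pue_rating, it_load_kw):
            """Power spent on cooling and distribution for a given PUE and IT load."""
            return pue_rating * it_load_kw - it_load_kw

        for rating in (1.13, 1.2):
            print(f"PUE {rating}: about {overhead_kw(rating, 1000):.0f} kW of overhead per 1,000 kW of IT load")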

    Yesterday Microsoft said it will invest an additional $348 million to build two more phases of the Virginia facility, bringing its total investment to $997 million. Thus far Apple has been the only company to announce a $1 billion price tag for a single data center campus. But as companies build out larger campuses for their cloud computing infrastructure, the math of Internet infrastructure investment is changing.

    4:00p
    Digital Realty Powers Up its POD Architecture

    Employees of Digital Realty deliver a pre-fabricated electrical room on a skid to a data center site. The company has updated its POD architecture to make more effective use of these types of components. (Photo: Digital Realty Trust)

    Digital Realty Trust has updated a key building block in its data center construction process to provide tenants with more power to support their IT infrastructure. Digital Realty, the world’s largest landlord of data center properties, has introduced the next generation of the POD Architecture for its data center halls.

    The new version, known as POD 3.0, makes more effective use of pre-fabricated designs. This has allowed Digital Realty to offer customers up to 1.2 megawatts of IT capacity in each data hall, up from 1.125 megawatts. Those 75 extra kilowatts are a meaningful boost in capacity for companies with growing infrastructure.

    “This new generation of POD Architecture will enable us to do more in terms of capacity and energy performance, using the same operating scale that we successfully deployed as POD 2.0,” said Jim Smith, chief technology officer at Digital Realty. “Using real-time information, we have been able to fine-tune our design and develop the next generation of our POD Architecture. We were able to demand more from the existing platform and deliver an enhanced solution to our customers in terms of performance, reliability and cost efficiency.”

    Pre-Fab Components Streamline Process

    The key to the improvements in the POD Architecture process, Smith said, is the pre-fabrication of major electrical and mechanical systems that traditionally have played a large role in data center construction timelines. Pre-fabricated components are now manufactured in a factory environment and then warehoused for on-time delivery to project sites. The cooling and electrical systems are pre-commissioned in the factory and then re-commissioned along with the completed data center.

    The POD 3.0 design uses just two electrical skids, compared with the three skids in POD 2.0. The reduction in infrastructure footprint will help improve the yield on building space for data halls, allowing critical IT load capacity to rise to 1.2 megawatts. The design retains the same cost point, but will allow customers to improve their energy efficiency, enabling Power Usage Effectiveness (PUE) ratings below 1.2.

    Digital Realty’s use of POD Architecture helped the company deliver 49 megawatts of data center capacity in 2012, and it expects to deliver another 89 megawatts in 2013, Smith said.

    Digital Realty (DLR) operates 110 properties with approximately 21.2 million square feet of space in 32 markets throughout Europe, North America, Asia and Australia.

    4:50p
    Experiencing Heavy Server Load? Just Slow Down Time

    A screenshot of the Battle of Asakai in the EVE Online universe, in which more than 2,700 gamers waged a resource-intensive battle on a single server. Admins managed the server load by altering the time continuum in the game. (Image: CCP Games)

    When demand on a server spikes dramatically, sometimes you need to improvise to keep things online. An interesting example is provided by CCP Games, which operates EVE Online, a science fiction gaming universe in which factions of players battle with fleets of spaceships.

    When an enormous battle recently broke out on a node with limited resources, the engineers at EVE Online managed extraordinary server loads by using “time dilation” – altering time within the game universe to effectively throttle activity to match system resources.

    EVE Online is unusual in that it functions as a single game environment, with a single copy of its universe on a massive cluster of servers. Each solar system is supported by a particular server, with players and spaceships able to move between solar systems. That means that a burst of activity in a particular sector of the EVE Online universe can create scalability problems. Administrators can shift load by moving activity to other servers, but that interrupts the player experience, and so is not ideal when large space battles break out.
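
    As a toy model of that kind of placement scheme (not CCP’s actual implementation; the node IDs and the quiet neighboring system are invented), the mapping and the manual load shift might look like this:

        # Toy model of mapping solar systems to cluster nodes and shifting load.
        # NOT CCP's implementation; node IDs and "QuietSystem" are invented.

        system_to_node = {"Asakai": "node-07", "QuietSystem": "node-07"}

        def remap_system(system, new_node):
            """Move a solar system to another node - the disruptive manual
            load shift the article describes, so it is avoided mid-battle."""
            old_node = system_to_node[system]
            system_to_node[system] = new_node
            print(f"{system}: {old_node} -> {new_node}")

        # Admins move quieter systems off the overloaded node rather than
        # moving the battle itself:
        remap_system("QuietSystem", "node-01")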

    One Bad Click Tests Capacity

    A single misclick would test the system. On Jan. 27 a player accidentally “warped” an extremely valuable Titan spaceship into the midst of a large enemy fleet (more details at Penny Arcade and PC Gamer). Both sides called in reinforcements, and in short order more than 2,750 players were waging a hectic battle on a server that doesn’t normally see anywhere near that level of activity.

    “The customer service duders (GMs) keep an eye out for gigantic fights like this,” recounted CCP Veritas, an engineer at CCP. “We’ve got a cluster status webpage that shows big red numbers when a node gets overloaded like it was by this fight, so it’s pretty easy to see what’s up.”

    Admins isolated the battle by quickly moving non-combatants to other servers.  That’s where time dilation comes in.

    “A large majority of the load in large engagements is tied to the clock – modules, physics, travel, warp-outs, all of these things happen over a time period, so spacing out time will lower their load impact proportionally,” writes CCP Veritas. “So, the idea here is to slow down the game clock enough to maintain a very small queue of waiting tasklets, then when the load clears, raise time back up to normal as we can handle it.  This will be done dynamically and in very fine increments; there’s no reason we can’t run at 98% time if we’re just slightly overloaded.”
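
    In practical terms, that boils down to a small feedback loop on the game clock. The sketch below is an illustration only, not CCP’s code; the queue threshold and adjustment step are assumptions, while the 10 percent floor matches the configured limit mentioned in the next paragraph.

        # Sketch of a time-dilation controller in the spirit of CCP's description:
        # slow the game clock when the tasklet queue backs up, speed it back up as
        # load clears. Threshold, step size and loop structure are assumptions.

        TIDI_FLOOR = 0.10   # configured limit cited in the article (10 percent speed)
        STEP = 0.02         # fine-grained adjustment per tick (assumed)

        def adjust_time_dilation(current_factor, queued_tasklets, target_queue=5):
            """Return the new game-clock factor (1.0 = real time)."""
            if queued_tasklets > target_queue:      # falling behind: slow time down
                current_factor -= STEP
            elif queued_tasklets < target_queue:    # headroom: speed time back up
                current_factor += STEP
            return max(TIDI_FLOOR, min(1.0, current_factor))

        # Example: a spike of queued work drags the clock down, then it recovers.
        factor = 1.0
        for backlog in (2, 40, 80, 60, 20, 4, 0):
            factor = adjust_time_dilation(factor, backlog)
            print(f"backlog={backlog:3d}  game clock at {factor:.2f}x")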

    The Jan. 27 event, known in EVE as the Battle of Asakai, put that approach to the test, but the game kept functioning until the battle was completed. “Even though Time Dilation was pushed to its configured limit of 10%, it still allowed a more graceful degradation than the unpredictable battles of old,” CCP Veritas shared. “We’re pretty sure that without the recent efforts on the software and hardware front, such a fight of this scale would simply not have been possible.”

    5:00p
    Report: Big Data Market May Hit $23 Billion

    International Data Corporation (IDC) released a new report on Big Data Technology, forecasting that the worldwide market for big data technology and services will reach $23.8 billion in 2016. A key finding from the report shows that a shortage of analytics and Big Data technology skills will drive a growing number of buyers toward cloud solutions and appliances. The IDC study segments the Big Data market into server, storage, networking, software, and services.

    In other Big Data news:

    Teradata big data analytics for communications providers

    Teradata (TDC) announced the availability of an integrated CSP (Communication Service Provider) framework to provide a 360-degree view of customers, leveraging both conventional transaction data and granular, detailed interaction data. The CSP framework delivers useful new big data analytic insights into customer behavior and product preferences through visibility into all data interactions. It takes advantage of partner capabilities such as Guavus’ SevenFlow, its marketing decisioning application, which provides deep insight into subscriber behavior and data usage. Teradata’s Unified Data Architecture embraces Hadoop and Aster’s SQL-MapReduce platforms for quick analysis of multi-structured data. Guavus recently announced a $30 million funding round.

    Additionally, CSPs can leverage new capabilities from Teradata’s Communications Logical Data Model (cLDM), which serves as a map to a CSP organization’s information. The map would organize data pertaining to social media/networks, multi-structured data, set top box analytics (where relevant), multimedia, geospatial, advertisement, ecommerce and web intelligence.

    “Revenue generation and customer loyalty are driving the market for big data,” said Patrick Kelly, an analyst from Analysys Mason. “CSPs should understand the business outcomes in specific areas of their business before investing in big data and analytics. For example, they could increase net profit margins by 12 percent with cross-marketing and sales promotions; improve customer retention by 0.2 percent via loyalty campaigns; and defer capital investments in the RAN [radio access network] without degrading service, yielding hundreds of millions in savings in capital spending.”

    “The combination of Guavus products and Teradata data warehouse technology enables CSPs to analyze mobile data traffic at very granular levels with long retention periods for extremely large number of subscribers,” said Scott Sobers, Director, Communications Industry Marketing & Strategy, Teradata. “No one else in the industry can provide this kind of insight and actionable information. This will be the standard for CSPs to create new revenue streams and deliver the best possible service for customers.”

    EMC updates Greenplum appliance

    EMC announced that it has enhanced its first appliance-based unified Big Data analytics offering, the EMC Greenplum Data Computing Appliance (DCA). The new EMC Greenplum DCA Unified Analytics Platform (UAP) Edition analytics appliance enables analysis of both structured and unstructured data within a single integrated appliance. It integrates Greenplum Database for analytics-optimized SQL, Greenplum HD for Hadoop-based processing, and Greenplum partner business intelligence, ETL, and analytics applications. The new DCA UAP Edition delivers 70 percent performance gains over the prior generation for data loading and scanning, and 100 percent performance increases for concurrent query workloads.

    According to EMC: “Enterprises looking to make strategic investments in a Big Data platform need to consider the breadth of capabilities required of a complete solution—high speed data ingestion, support for structured and unstructured data, interfaces for data scientists as well as business intelligence users, and the ability to scale horizontally as data volumes grow. Customers can take advantage of the new DCA to increase the performance of Greenplum Database for best-in-class SQL processing and data loading, and also leverage the innovative capabilities of Greenplum’s Hadoop distribution (GPHD). With the release of the DCA Unified Analytics Platform Edition, we are continuing our history of innovation—with improved options for Hadoop deployments leveraging EMC Isilon’s scale-out NAS storage, enhanced partner ecosystem support including such partners as SAS and Informatica.”

    7:41p
    Sequoia Supercomputer Breaks 1 Million Core Barrier

    The Sequoia supercomputer at Lawrence Livermore National Laboratory recently harnessed more than 1 million compute cores to run a complex fluid dynamics simulation. (Image: LLNL)

    The Stanford Center for Turbulence Research (CTR) has set a new record in computational science, using the Sequoia supercomputer with more than one million computing cores to solve a complex fluid dynamics problem — the prediction of noise generated by a supersonic jet engine. Installed at Lawrence Livermore National Laboratory (LLNL), Sequoia was named the most powerful supercomputer in the world on the June 2012 Top500 list, and moved to number two in November 2012.

    With a total of 1,572,864 compute cores installed, research associate Joseph Nichols was able to show for the first time that million-core fluid dynamics simulations are possible—and also to contribute to research aimed at designing quieter aircraft engines. Predictive simulations let researchers peer inside and measure processes occurring within the harsh aircraft exhaust environment, which is otherwise inaccessible to experimental equipment. The data gleaned from these simulations are driving computation-based scientific discovery as researchers uncover the physics of noise.

    “Computational fluid dynamics (CFD) simulations are incredibly complex,” said Parviz Moin, the Director of CTR. “Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed.”

    Recently Stanford researchers and LLNL computing staff have been working closely to iron out the last few wrinkles. They were glued to their terminals during the first “full-system scaling” to see whether initial runs would achieve stable run-time performance. They watched eagerly as the first CFD simulation passed through initialization, then were thrilled as code performance continued to scale up to and beyond the all-important one-million-core threshold and time-to-solution declined dramatically.
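
    The scaling challenge is easy to underestimate: under Amdahl’s law, even a tiny serial fraction in the code caps the benefit of adding cores, which is why those last few wrinkles matter so much at a million cores. A quick illustration (the fractions are hypothetical, not CTR’s measured performance):

        # Why million-core scaling is hard: Amdahl's law caps speedup for a
        # fixed-size problem. Illustrative numbers only, not CTR's measurements.

        def speedup(cores, parallel_fraction):
            """Amdahl's-law speedup for a fixed-size (strong-scaling) problem."""
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

        for frac in (0.999, 0.9999999):
            print(f"parallel fraction {frac}: speedup on 1,048,576 cores = {speedup(1_048_576, frac):,.0f}x")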

