Data Center Knowledge | News and analysis for the data center industry
Thursday, March 21st, 2013
| 11:30a |
Report: Amazon To Build $600M Private Cloud For CIA 
According to a report in Federal Computer Week (FCW), the CIA has agreed to a cloud computing contract with Amazon Web Services to build a private cloud infrastructure. Spread over 10 years, the $600 million deal would help the CIA keep up with technologies such as big data in a cost-effective manner not possible under previous cloud efforts.
FCW states that neither Amazon nor the CIA would confirm the existence of the contract or comment on the matter. It was, however, hinted that the way the agency procures software will change, as well as how it uses big-data analytics. Amazon recently launched its Redshift data warehouse service in the cloud, starting in northern Virginia. Amazon Web Services also lists a number of certifications and accreditations for its infrastructure, including FISMA, PCI DSS, ISO 27001, SOC 1/SSAE 16/ISAE 3402, and HIPAA. The AWS GovCloud is a region designed for US Government agencies, and works with a range of system integrators and independent software vendors.
Directly supporting the Federal Cloud First strategy, the CIA would be an excellent reference case for Amazon to entice other agencies to its GovCloud service. IT decisions made by the CIA follow the Intelligence Community Information Technology Enterprise strategy, which suggests other intelligence agencies could benefit from information shared through a private cloud created in the Amazon-CIA deal. Dave Powner, director of IT management issues at the Government Accountability Office, told FCW he was unfamiliar with the CIA-Amazon deal, but said it would make sense, especially given spending cuts across the board at most agencies.
Speaking at the AWS Gov Summit 2011 event, CIA CTO Gus Hunt listed a key technology enabler for the agency: “an ultra-high performance data environment that enables CIA missions to acquire, federate, and position and securely exploit huge volumes of data.” FCW reports that at an event last month, Hunt was quoted by Reuters as saying, “Think Amazon – that model really works,” regarding purchasing software services on a ‘metered’ basis. | | 12:30p |
Exposing the Six Myths of Deduplication
Darrell Riddle, senior director of product marketing for FalconStor Software, is a professional with more than 23 years of experience in the data protection industry. Darrell has an extensive understanding of both the technical and business aspects of marketing, product management and go-to-market strategies. Prior to joining FalconStor, Darrell worked at Symantec.
 DARRELL RIDDLE
FalconStor
Most companies have lots of duplicate data. That’s a fact. Many companies are aware of it, but it falls in the category of cleaning out the garage or a spare room. You see the problem, but until you run completely out of space, it usually doesn’t get straightened up.
Many IT managers believe the software and/or hardware they purchased already deals with this kind of problem. The truth is, this may or may not be correct. In fact, many enterprises are not taking full advantage of how current technology can eliminate redundant data. In some cases, companies have not turned on features that eliminate duplicate data (a process known hereafter as “deduplication” or “dedupe”), nor are they actively using deduplication as a key aspect of their data protection plans. The reluctance of IT administrators to embrace dedupe usually stems from a lack of knowledge of the potential benefits of deduplication or from past experience with a less-than-robust solution.
However, deduplication is a critical aspect of every backup environment that brings cost savings and efficiency to the enterprise. Depending on which report you read, companies are faced with data growing at rates of 50 percent to nearly 100 percent annually. That impacts the entire data protection strategy. It also makes data slow and dopey, like a koala bear. Backup windows aren’t being met, and there is no way that disaster recovery testing can take place. Think of this entire problem like picking up a squirt gun to put out a fire – it just won’t work.
Deduplication solutions are also valuable to disaster recovery (DR) efforts. Once the data is deduplicated, it is then transferred (or replicated) to the remote data center or offsite DR facility, ensuring that the most critical data is available at all times. Deduplication is crucial as it reduces storage and bandwidth costs, provides flexibility and data availability, and integrates with tape archival systems. Deduplication is a vital part of the future of data protection and needs to be integrated.
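To make the storage and bandwidth argument concrete, here is a rough, hypothetical calculation; the 10:1 reduction ratio, nightly backup volume and link speed are assumptions for illustration, not figures from FalconStor:

# Hypothetical effect of deduplication on offsite replication (illustrative numbers only).
logical_backup_tb = 50.0   # assumed nightly backup volume, in terabytes
dedupe_ratio = 10.0        # assumed reduction ratio; real ratios vary widely by data set
wan_gbps = 1.0             # assumed replication link speed

physical_tb = logical_backup_tb / dedupe_ratio
hours_raw = logical_backup_tb * 8e12 / (wan_gbps * 1e9) / 3600
hours_deduped = physical_tb * 8e12 / (wan_gbps * 1e9) / 3600
print(f"Replicated after dedupe: {physical_tb:.1f} TB instead of {logical_backup_tb:.1f} TB")
print(f"Transfer time at {wan_gbps:.0f} Gbps: {hours_raw:.0f} h raw vs. {hours_deduped:.0f} h deduplicated")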
In this article, I will dispel six myths attached to deduplication, bring clarity to the technology and outline the cost savings and efficiencies enterprises can reap.
Myth 1: Deduplication methodology is a life sentence with no chance of parole. Most enterprise IT admins feel that if they purchased a specific deduplication solution, they are stuck with that method for life.
Reality: Flexibility is at the core of modern deduplication solutions, which allow firms to choose the deduplication methods that are the best fit for specific data sets. Many companies offer portable solutions, similar to being able to move electronic music from one device to the next. By doing this, IT can align its backup policies with business goals.
Myth 2: Each server is its own island and there are no boats. The myth is that each server is its own island with separate deduplication processes and none of the islands talk to each other.
Reality: As the Internet has expanded our ability to communicate globally, deduplication solutions have also gone global to eliminate any multiple copies of data. With global deduplication, each node within the backup system is deduplicated against all the data in the repository. Global deduplication spans multiple application sources, heterogeneous environments and storage protocols.
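As a rough illustration of how global deduplication works in general (a minimal sketch of the technique, not FalconStor's implementation), each incoming chunk of backup data is fingerprinted and stored only if that fingerprint is not already present in a repository-wide index shared by every node:

import hashlib

CHUNK_SIZE = 4096   # assumed fixed-size chunking; many products use variable-size chunks
global_index = {}   # fingerprint -> stored chunk, shared across all backup nodes

def dedupe_store(data: bytes) -> list:
    """Store only unseen chunks; return the fingerprint 'recipe' needed to rebuild the data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in global_index:   # unique data is written once
            global_index[fp] = chunk
        recipe.append(fp)            # duplicates cost only a reference
    return recipe

server_a_backup = b"customer database contents " * 1000
server_b_backup = b"customer database contents " * 1000   # identical data on a second node
dedupe_store(server_a_backup)
dedupe_store(server_b_backup)
print(f"Unique chunks stored for both backups: {len(global_index)}")

The second, identical backup adds nothing new to the repository, which is the payoff of deduplicating against a single global index rather than per-server islands.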
Myth 3: I don’t have the money to swap out or upgrade my hardware, and even if I did, I would spend it on something else. The perception is that deduplication servers need to be replaced when space on the server runs out. The system doesn’t allow for upgrades. To increase capacity, companies need to exchange the equipment and implement more servers and memory.
Reality: Scalability is key to all IT environments, as the volume of data is growing exponentially. IT administrators must be able to scale the capacity of the backup target disk pool and build disk-to-disk-to-tape backup architectures around the deduplication system. Rather than requiring a swap-out replacement, deduplication repositories can scale as needed with cluster and storage expansions.
Myth 4: Deduplication slows down performance worse than my antivirus product. IT admins feel that the performance of their systems will slow down because there is too much work for the deduplication server to handle. This performance will hamper the entire backup environment and cause issues when data needs to be recovered quickly.
Reality: Deduplication can scale to high speeds, and post-process deduplication can defer the work until after the backup completes, taking pressure off the backup window. In choosing a deduplication solution, IT administrators must consider how it will support the latest high-speed storage area networks (SANs). This is critical for achieving fast deduplication times. Solutions with read-ahead technology provide fast data restores, even from deduplicated tapes. | | 1:00p |
Joyent, Cloudant Launch Database Service Atop SmartOS
The database as a service market is heating up, as cloud providers look to court application developers with promising offerings. The latest such move is the result of a deepening partnership between Joyent and Cloudant.
Cloudant is now available on the Joyent high-performance cloud platform, and the two came out today with a multi-tenant Cloudant cluster running in Joyent’s Jubilee data center in Ashburn, Virginia. Dedicated Cloudant clusters running on Joyent will be available this month and can be hosted in any Joyent data center.
Cloudant initially partnered with Joyent back in April 2012. That partnership made Cloudant’s NoSQL DBaaS available on Joyent’s infrastructure.
Joyent has always been tuned towards application developers. “We built Joyent to power real-time web, social and mobile applications, so it makes sense to have a DBaaS partner like Cloudant that’s geared toward operational application data,” said Steve Tuck, SVP and general manager at Joyent Cloud. “Giving Cloudant the ability to quickly deploy throughout the global environment of our public cloud service aligns with our focus on scalability and performance. That’s what our customers care about most: low cost, real-time systems that are easy to use and support their apps.”
Joyent’s cloud was built with and runs on SmartOS, an open-source distribution of the OpenSolaris fork illumos, which is optimized to support high-scalability apps for cloud computing. Another differentiator is DTrace, a dynamic tracing framework for real-time troubleshooting that provides insight into global system performance. DTrace allows both Cloudant and Joyent to deliver increased application performance.
“Collaborating with Joyent to run Cloudant on SmartOS is just another example of how we efficiently improve service,” said Alan Hoffman, co-founder and chief product officer at Cloudant. “Making sure operational data scales flawlessly with application code is challenging, which is why when we see technology that helps our customers, we start integrating it. Now, with SmartOS, we’re able to quickly provision Cloudant accounts across the global network of Joyent data centers.”
One interesting feature of Joyent is that it allows a customer to conceptually build a stack on its website. The company has been partnering with companies throughout the platform, application, data, infrastructure and services layers.
Cloudant recently received strategic investment from Samsung Venture Investment Corporation. | | 1:30p |
ProfitBricks Raises $19.5 Million For Its Muscular Cloud
ProfitBricks CEO Achim Weiss in front of a diagram of the company’s data center network. (Images: ProfitBricks)
ProfitBricks has raised a $19.5 million investment from the company’s founders and from United Internet AG, a European Internet services provider. United Internet is the parent company of mass-market hosting giant 1&1 (you might have seen their commercials), so it’s starting to look like quite the web power play in Germany. The founders of ProfitBricks, Achim Weiss and Andreas Gauger, were also the founders of 1&1.
ProfitBricks has now raised $38.3 million since its founding in 2010. The funding will go toward development of ProfitBricks’ “virtual data center” offering, as well as helping the company expand into new industries.
ProfitBricks seeks to differentiate itself through the ability to provide both vertical and horizontal scale, flexibility in the network, and a data center design tool with an interface that makes building a virtual data center a fairly easy and straightforward endeavor. The ability to perform live vertical scaling, that is, to run applications on one large server rather than many servers, combined with the InfiniBand network it uses, sets the company apart. ProfitBricks calls itself the “second generation of cloud infrastructure,” and it has been growing at a quick clip since launching.
Check out a recent profile on the company here. | | 2:00p |
How the Data Center Has Evolved to Support the Modern Cloud 
There’s little argument among IT and data center professionals that over the past few years, there have been some serious technological movements in the industry. This doesn’t only mean data centers. More computers, more devices, and the strong push behind IT consumerization have forced many professionals to rethink their designs and optimize for this evolving environment.
When cloud computing came to the forefront of the technological discussion, data center operators quickly realized that they would have to adapt or be replaced by some other provider who is more agile.
The changes have come in all forms, both in the data center itself and how data flows outside of its walls. The bottom line is this: If cloud computing has a home, without a doubt, it’s within the data center.
There are several technologies that have helped not only with data center growth, but with the expansion of the cloud environment. Although there are many platforms, tools and solutions that help facilitate data center usability in conjunction with the cloud, the ones below outline just how far we’ve come from a technological perspective.
- High-density computing. Switches, servers, storage devices, and racks are all now being designed to reduce the hardware footprint while still supporting more users. Let’s put this in perspective. A single Cisco UCS Chassis is capable of 160Gbps. From there, a single B200 M3 blade can hold two Xeon 8-core processors (16 processing cores) and 768GB of RAM. Each blade can also support 2TB of storage and up to 32GB of flash memory. Now, if you place 8 of these blades into a single UCS Chassis, you can have 128 processing cores, over 6TB of RAM, and 16TB of storage (a quick back-of-the-envelope check of these totals appears after this list). This means a lot of users, a lot of workload and plenty of room for expansion. This also holds true for logical storage segmentation and better usage around other computing devices.
- Data center efficiency. To help support larger numbers of users and a greater cloud environment, data center operators had to restructure some of their efficiency practices. Whether through a better analysis of their cooling capacity factor (CCF) or a better understanding of power utilization, modern technologies are allowing the data center to operate more efficiently. Remember, with high-density computing we are potentially reducing the amount of hardware, but the hardware replacing older machines may require more cooling and energy. Data centers are now focusing on lowering their PUE and are looking for ways to cool and power their environments more efficiently. As cloud continues to grow, there will be more emphasis on placing larger workloads within the data center environment.
- Virtualization. Virtualization has helped reduce the amount of hardware within a data center. However, we’re not just discussing server virtualization any longer. New types of technologies have taken efficiency and data center distribution to a whole new level. Aside from server virtualization, IT professionals are now working with storage virtualization, user virtualization (hardware abstraction), network virtualization, and security virtualization. All of these technologies strive to lessen the administrative burden while increasing efficiency and resiliency, and improving business continuity.
More appliances can be placed at various points within the data center to help control data flow and further secure an environment.
- WAN technologies. The Wide Area Network has helped the data center evolve in the sense that it brings facilities “closer together.” Fewer hops and more connections are becoming available to enterprise data center environments, where administrators are able to leverage new types of solutions to create an even more agile infrastructure. Having the capability to dedicate massive amounts of private bandwidth between regional data centers has proven to be a huge factor. Data center resiliency, recovery and manageability have become a little bit easier because of these new types of WAN services. Furthermore, site-to-site replication of data and massive systems is now happening at a much faster pace. Even now, big data has new types of developments to help large data centers quantify and effectively distribute enormous data sets. Projects like the Hadoop Distributed File System (HDFS) are helping data centers realize that open-source technologies are powerful engines for data distribution and management.
- Distributed data center management. This is, arguably, one of the biggest pieces of evidence of how well the data center has evolved to support the modern cloud. Original data center infrastructure management (DCIM) solutions usually focused on single data centers without much visibility into other sites. Now, DCIM has evolved to support a truly global data center environment. In fact, new terms are being used to describe this new type of data center platform. Some have called it “data center virtualization,” or the abstraction of the hardware layer within the data center itself. This means managing and fully optimizing processes running within the data center and then replicating them to other sites. In other cases, a new type of management solution is starting to take form: the Data Center Operating System. The goal is to create a global computing and data center cluster which is capable of providing business intelligence, real-time visibility and control of the data center environment from a single pane of glass.
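As referenced in the high-density computing item above, the chassis totals are simple multiplication; a quick sanity check of the cited figures:

# Aggregate capacity of one UCS chassis fully populated with B200 M3 blades (figures from above).
blades_per_chassis = 8
cores_per_blade = 2 * 8            # two 8-core Xeon processors
ram_gb_per_blade = 768
storage_tb_per_blade = 2

print("Processing cores:", blades_per_chassis * cores_per_blade)          # 128
print("RAM (TB):", blades_per_chassis * ram_gb_per_blade / 1024)          # 6.0
print("Local storage (TB):", blades_per_chassis * storage_tb_per_blade)   # 16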
The conversation has shifted from central data points to a truly distributed data center world. Now, our information is heavily replicated over the WAN and stored in numerous different data center points. Remember, much of this technology is still new, still being developed, and only now beginning to see some standardization. This means that best practices and thorough planning should never be skipped. Even large organizations sometimes find themselves in cloud conundrums. For example, all those that experienced the recent Microsoft Azure or Amazon AWS outages are certainly thinking about how to make their environments more resilient.
The use of the Internet as well as various types of WAN services is only going to continue to grow. Now, there are even cloud API models which are striving to unify cloud environments and allow for improved cloud communication. More devices are requesting access to the cloud, and some of these are no longer just your common tablet or smartphone. Soon, homes, entire businesses, cars, and other daily-use objects will be communicating with the cloud. All of this information has to be stored, processed and controlled. This is where the data center steps in and continues to help the cloud grow. | | 2:30p |
Big Week For Big Data
It’s been a week full of big events for big data and analytics, as Hadoop Summit Europe took place in Amsterdam, the Gartner Business Intelligence and Analytics Summit in Grapevine, Texas, and GigaOm Structure: Data in New York.
MapR closes $30 million in funding
MapR announced that it has secured $30 million in series C financing to accelerate global expansion and continue its product development. Mayfield Fund, which led the funding round, joins existing investors Lightspeed Venture Partners, NEA and Redpoint Ventures in this round, bringing total funds raised to $59 million. AllThingsD.com interviewed MapR CEO John Schroeder last week, and talked about company direction, competition and global expansion plans.
Hortonworks Opens European Operations
Commercial Hadoop vendor Hortonworks was a host of the Hadoop Summit, and announced the opening of its European operations, with a London-based headquarters. Already supporting more than 25 customers across Europe, Hortonworks is leveraging global partnerships with companies such as Microsoft, Teradata and Rackspace to rapidly grow and support its European customer base.
“The European market is aggressively looking for solutions that enable the processing and analysis of big data, and Apache Hadoop presents an enterprise-grade platform for harnessing the power of this information,” said Herb Cunitz, president, Hortonworks. “We are seeing organizations across the globe choose the 100-percent open source Hortonworks Data Platform to prevent vendor lock-in and ensure that their big data strategies can quickly scale for future growth. We look forward to connecting with European Hadoop users to help broaden the reach of the Hadoop ecosystem to more markets across the globe.”
Gartner: Big Data becoming the norm
Gartner research vice president Mark Beyer stated that on the Gartner Hype Cycle, big data is heading into the Trough of Disillusionment. Gartner predicts big data will become the new normal between 2015 and 2017. “The Trough means that market dynamics have changed. Experienced market vendors and implementers know what it takes for a solution to mature and reach enterprise capacity. When the market starts to reach 15-20 percent adoption, then big data will have reached the Plateau; that’s the end of ‘hype’ and the beginning of productivity,” said Mr. Beyer. “For something to move into the Trough is a maturation process. Implementers and organizations will begin to choose the winning solution architectures and technologies that support them. The definition of hype is over-promising without a basis of market experience and proof. The Trough is what does that. It will then rise along the Slope of Enlightenment while others drop by the wayside.” | | 3:00p |
CDN.net Launches Customizable Content Delivery Network
After securing a valuable domain name last year, London-based OnApp has launched CDN.net. The content delivery network (CDN) is a user-customizable, usage-based solution, catering to budget and quality demands on a completely pay-per-use basis.
The company’s vision is to use its distributed infrastructure and service provider relationships to allow the end user to provision a highly customized CDN service “on the fly,” which can be matched to individual needs and personalized in exactly the way the user wants. Taking on the established and large networks of CDN companies like Akamai and Limelight, CDN.net hopes to sell federated capacity directly to end users. It offers access to 30 premium PoPs, and a total of 160 locations across 40 countries.
The OnApp CDN.net service can be set up using its marketplace locations, as a CDN PoP in a company cloud, using internal infrastructure to create CDN servers, or used to sell spare capacity on the CDN marketplace. Its primary use-case examples are web acceleration, application performance, and rich media content streaming. Like a pre-paid phone card, CDN.net features pay-by-usage pricing, with no long-term contracts or commitments – recharging an account when necessary.
“Today’s rich content demands plague online businesses with bounce-back, high latency and downtime that are harmful to their customer retention, marketing, and SEO efforts,” says James Fletcher, marketing director for CDN.net. “That’s why CDNs are now essential for all web property owners. However, buying CDN is generally an expensive, frustrating experience where customers pay for resources they don’t end up using, and get a pre-packaged service that doesn’t match their business.”
At HostingCon 2012, Data Center Knowledge spoke with OnApp CMO Kosten Metreweli about OnApp’s cloud products, including its cloud platform, its CDN marketplace and cloud storage offerings. Here’s a video of the conversation:
| | 4:00p |
At the Optical Transport Conference, a 100G Party
At the OFC/NFOEC (Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference) in Anaheim, California, this week, several vendors made competing 100G technology announcements, fueling the ability to drive big data through ultra-fast networks.
Juniper launches small supercore and 100G routing interface. Juniper Networks (JNPR) announced the new PTX3000 Packet Transport Router. Featuring a 10.6-inch-depth design, it can rapidly scale up to 24 terabits per second (Tbps), which allows it to simultaneously stream HD video to as many as three million households. The router follows Juniper’s 2011 introduction of the Converged Supercore, a new architecture to bring together the packet and transport worlds. Additionally, Juniper announced an integrated packet-transport physical interface card (PIC) with two ports of line-rate 100 Gigabit forwarding for the entire PTX family, which will now enable service providers to cost-effectively interconnect sites more than 2,000 kilometers (1,243 miles) apart. “To effectively deliver advanced services and remain competitive, service providers need a core network solution that will help streamline their business and reduce operational costs,” said Rami Rahim, executive vice president, Platform Systems Division, Juniper Networks. “The Converged Supercore is an innovative platform that enhances service provider economics while providing greater value to their subscribers. Following on the heels of the revolutionary PTX5000, the PTX3000 extends these benefits to new markets and geographies with a solution that is tailored for their specific needs.”
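A quick back-of-the-envelope check on the three-million-household claim, assuming roughly 8 Mbps per HD video stream (a typical figure, not one supplied by Juniper):

capacity_bps = 24e12    # 24 Tbps claimed for the PTX3000
hd_stream_bps = 8e6     # assumed bitrate of a single HD video stream
print(f"Concurrent HD streams: {capacity_bps / hd_stream_bps:,.0f}")   # about 3,000,000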
Kotura launches 100G with WDM in dense package. At the OFC/NFOEC event Kotura demonstrated its Optical Engine in a Quad Small Form-factor Pluggable (QSFP) package. Kotura is the only photonics provider to demonstrate WDM (Wavelength Division Multiplexing) in a 100 gigabits per second (Gb/s) 4×25 QSFP package with 3.5 watts of power. “The QSFP package enables our customers to fit 40 transceivers across the front panel of a switch, providing 10 times more bandwidth than CFP solutions,” said Jean-Louis Malinge, Kotura president and CEO. “Because we monolithically integrate WDM and use standard Single Mode Fiber duplex cabling, our solution eliminates the need for expensive parallel fibers. No other silicon photonics provider can offer WDM in a 3.5 watt QSFP package.”
Applied Micro launches standalone OTN processor. Applied Micro (AMCC) announced the TPO215 processor, a standalone OTN processor that enables 10 x 10G line cards for OTN cross-connect and Packet-Optical Transport System (P-OTS) applications. Delivering advanced framing, mapping and multiplexing, the TPO215 doubles the capacity of existing OTN framers while providing advanced security features. The product supports 10 x 10G channels for a total capacity of 100G. “AppliedMicro continues to pioneer technologies that will drive a new generation of networking equipment for telecommunications, data center and cloud connectivity,” said George Jones, vice president and co-general manager, Connectivity Products, at AppliedMicro. “The desire to transition to packet-aware optical transport networks requires network equipment vendors to partner with semiconductor companies that have established expertise in the latest optical networking solutions. This processor helps enable the required infrastructure for dramatically improved user experiences.”
Broadcom enables higher density 100G long haul. Broadcom (BRCM) announced a fast CMOS transmitter PHY for long-haul, regional and metropolitan data transport. The BCM84128 100G transmitter achieves an aggregate data rate of 128 Gbps at a low power draw of only two watts. Using 40-nanometer CMOS process technology, it provides a full-rate clock output at 32 GHz and paves the way to 100G long-haul networks. “The BCM84128 high-performance transmitter PHY reflects the industry-leading innovation we are known for, allowing OEMs to leverage 100G PHYs developed in standard CMOS process technology with its inherent advantages of lower power and reliability,” said Lorenzo Longo, Broadcom Vice President and General Manager, Physical Layer Products (PLP). “Today’s introduction provides Broadcom with the opportunity to participate in a new market segment and pave the way for 100G optical transport.” | | 6:00p |
Sabey Opens High-Rise Manhattan Data Tower
Sabey Data Centers has retrofitted the Verizon building at 375 Pearl Street as Intergate.Manhattan, a data hub for the 21st century. (Photo: Sabey)
Some New Yorkers who look upon the huge Verizon high-rise at 375 Pearl Street have trouble seeing past its foreboding stone facade. The team at Sabey Data Centers saw it as a blank canvas: an opportunity to remake 1 million square feet of Manhattan real estate as a high-tech data hub.
“This has been an extremely exciting project,” said John Sasser, Vice President of Operations at Sabey Data Centers. “There aren’t many opportunities to go into a 32-story building and remake it as a purpose-built data center.”
On Wednesday Sabey opened the doors on the new Intergate.Manhattan, having completed an extensive retrofit and commissioning process. Sabey, a Seattle-based developer, outfitted the property with all new core infrastructure and upgraded the power capacity from 18 megawatts to 40 megawatts.
The building was developed in 1975 as a Verizon telecom switching hub and later served as a back office facility. Verizon continues to occupy three floors, which it owns as a condominium. The property was purchased in 2007 by Taconic, which later abandoned its redevelopment plans. Sabey and partner Young Woo acquired the building in 2011.
Plenty of Challenges
Sabey saw an opportunity at 375 Pearl, but there were many challenges as well, according to Leonard Ruff, a principal with the design firm Callison, which has partnered with Sabey on several of its data center projects. Ruff and Sasser shared details of the retrofit project earlier this month at the DatacenterDynamics Converged conference in New York.
One issue was more than 35 years of undocumented changes in the building’s mechanical and electrical systems, according to Ruff. The vertical chases were highly congested with conduit, wiring and piping, and the tight site footprint didn’t allow much space for storing construction supplies and equipment. There was also the presence of Verizon and “severe penalties” should the construction process interrupt the telco’s operations, which continued apace on floors eight through 10.
The building provided an opportunity to re-think data center operations in a vertical layout. “We can take that 40 megawatts and spread it across the building in whatever way makes sense,” said Ruff.
The 13.2 kV electrical service enters the building at substations on the second and third floors. The fourth and fifth floors house the UPS infrastructure (double-conversion systems with efficiency of up to 97 percent), while diesel backup generators are housed on floors two, three, four and 31 and vent their exhaust through the roof.
The initial phases of data center technical space are being deployed on four floors – 6, 7, 11 and 12. Each floor has generous vertical space, with ceiling clear heights of between 14 and 23 feet. Sabey will offer hot aisle containment for its customers, with a water-side economization system supported by five cooling towers on the roof. The roof can accommodate up to 16 cooling towers if needed as Sabey expands its data center operations to additional floors within the building.
Diesel fuel will be stored in the basement, which has a larger footprint than the high-rise section of the building, allowing Sabey to store 155,000 gallons of fuel at present, with the ability to add another 100,000 gallons as it expands. Sasser said the basement level at 375 Pearl remains more than a dozen feet above the high water mark seen during Superstorm Sandy, but said Sabey is taking no chances and equipping the fuel depot with submersible pumps.
Ceremony Marks Building’s Opening
On Wednesday, the building was opened to New York media for a ceremony with city officials. That included Mayor Michael Bloomberg, who used most of his podium time to speak about anti-crime measures to address political news developments in New York.
Company president Dave Sabey took it in stride, noting that the city’s progress on crime helped construction workers and staff feel safe during the development process. “This is a big day for Sabey, and for New York City,” said Sabey.
Sabey now operates 3 million square feet of data center space as part of a larger 5.3 million square foot portfolio of owned and managed commercial real estate. The company has developed a national fiber network to connect its East Coast operations with its campuses in Washington state, where it is the largest provider of hydro-powered facilities.
Sabey’s data center properties include the huge Intergate.East and Intergate.West developments in the Seattle suburb of Tukwila, the Intergate.Columbia project in Wenatchee and Intergate.Quincy.
The opening of Intergate.Manhattan comes amid an eventful period for the New York City data center market. After buying 111 8th Avenue, Google has discontinued efforts to lease vacant space at the historic New York telco hub, apparently intending to dedicate the remainder of the building for use as Google office space. At 60 Hudson Street, newcomer DataGryd is marketing new space. Meanwhile, two buildings have recently added new data center space. Data Center NYC Group has recently opened space at 121 Varick Street, while Telehouse has acquired data center space at 85 10th Avenue.
All this comes against the backdrop of Superstorm Sandy, which is prompting a variety of responses in data center operations and real estate as companies assess the storm and its implications for disaster recovery. While some companies are now wary of New York, others are seeking new space outside of the financial district, which experienced the brunt of the flooding from Sandy’s storm surge. Those companies represent some of the most promising prospects for new space in other areas of Manhattan, including the Sabey building. | | 6:26p |
Squeezing More Efficiency Out of Microsoft’s Cloud
The new data halls in Microsoft’s Dublin data center feature white cabinets and narrower hot aisle containment systems. Rows of cabinets are nestled into containment enclosures, the structures with the green end doors. Newly arrived cabinets are in place and waiting for the remainder of the row to be filled and then enclosed. The white cabinets are a new feature, reflecting available light and allowing Microsoft to use less energy on overhead lighting. (Photo: Microsoft)
DUBLIN, Ireland – After you’ve built one of the most efficient data centers on earth, how do you make it even better? One refinement at a time, as Microsoft has found in its data center in Dublin, the primary European hub that powers the company’s online services throughout the region.
When the $500 million Dublin facility came online in 2009, it was an early example of a data center operating with no chillers, relying almost totally upon fresh air to cool thousands of servers in the 550,000 square foot facility, which powers the company’s suite of online services for tens of millions of users in Europe, the Middle East and Africa.
Early last year Microsoft added a $130 million expansion that nearly doubled the capacity of the data center. The expansion allowed Microsoft to implement several new tweaks to its design that have allowed it to more than double the compute density of each server hall while using less power.
Along the way, Microsoft has also improved the facility’s energy efficiency, lowering the Power Usage Effectiveness (PUE) from 1.24 in the first phase to 1.17 in the newest data hall. The PUE metric compares a facility’s total power usage to the amount of power used by the IT equipment, revealing how much is lost in distribution and conversion. The average PUE for enterprise data centers is about 1.8.
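Put differently, PUE is total facility power divided by IT power, so a lower PUE means fewer watts of cooling and distribution overhead per watt of IT load; a simple illustration assuming a hypothetical 10 MW IT load:

it_load_mw = 10.0                     # assumed IT load, for illustration only
for pue in (1.8, 1.24, 1.17):         # industry average, Dublin phase one, newest data hall
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw
    print(f"PUE {pue}: {total_mw:.1f} MW total, {overhead_mw:.1f} MW of overhead")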
Data-Driven Refinement: The Next Phase of Efficiency
Squeezing more efficiency and density out of bleeding-edge facilities is the next phase in the data center arms race. It’s a process that other leading players will be undertaking as they seek to get more mileage out of new server farms that came online in the huge construction boom from 2007 to 2010.
“We’re all moving towards constant evolution and improvement,” said David Gauthier, Director of Data Center Architecture and Design at Microsoft, who helped design and launch the Dublin facility in 2009.
One key to improvement is relentless review of data from the early operations of new data centers, according to Gauthier. As it studied the operating data it collected, Gauthier says, Microsoft found that it could be more aggressive in its use of free cooling.
“We were being conservative at first, because it was new and we hadn’t done it before,” said Gauthier. Microsoft had installed a small number of DX (direct expansion) cooling units in the first phase to provide backup cooling if the temperature rose above 85 degrees. The climate in Dublin, which has ideal temperature and humidity ranges for data center operations, never tested those levels. The DX units were retired, making additional power available, which was used to install more servers and cabinets in the data halls.
In place of the DX units, Microsoft added a less energy-intensive backup system to address “just in case” scenarios of unusually warm weather. It used adiabatic cooling, in which warm outside air enters the enclosure and passes through a layer of media, which is dampened by a small flow of water. The air is cooled as it passes through the wet media.
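A common first-order model for direct evaporative (adiabatic) cooling is that the supply air approaches the outdoor wet-bulb temperature, scaled by the effectiveness of the media; the sketch below uses assumed, Dublin-like numbers for illustration, not Microsoft's design data:

def evaporative_supply_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.85):
    """Approximate supply-air temperature of a direct evaporative cooler."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Assumed warm Dublin day: 24 C dry bulb, 17 C wet bulb
print(f"Supply air: {evaporative_supply_temp(24, 17):.1f} C")   # roughly 18 C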
But Microsoft has now shelved the adiabatic systems in its most recent data halls, as Dublin’s weather simply doesn’t require it. “The climate in Dublin is awesome,” said Gauthier.
Greater Density, Same Power Footprint
Inside the data center, Microsoft is using more powerful and efficient servers, and configuring data halls to house more cabinets and servers. Each row of cabinets is housed in a “server pod” featuring a hot aisle containment system, with the cabinets housed in a fitted opening in the side of a fixed enclosure.
Microsoft designed the contained hot aisles so they could easily accommodate cabinets of different heights, with one enclosure fitted with some 40U cabinets and some 48U cabinets, for example. This gives the company flexibility if it opts to use different server vendors. It has also narrowed the hot aisles themselves, which frees up more space for servers in each data hall.
These refinements, along with advances in processor power and efficiency, have helped boost Microsoft’s server density within the same power footprint.
Other recent refinements include the installation of energy-saving LED lights tied to motion sensors, meaning Microsoft uses less energy to power its lights, and only uses them when staff are present in a room. It has also adopted white cabinets, which save energy because the white surfaces reflect more light, helping illuminate the server room with less intense lighting.
The focus on energy savings extends to the backup power systems. Microsoft uses short-duration UPS units, which provide about one minute of runtime during a utility outage before shifting load to the building generators. This approach allows Microsoft to forgo a huge battery room in favor of a smaller enclosure within its power room. Rather than cooling the entire power room to protect battery life, the enclosure itself is air conditioned, using only enough energy to cool a small space instead of the entire room.
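The appeal of a short-duration UPS becomes obvious with simple arithmetic; the 1 MW load and the 15-minute comparison below are assumptions for illustration only:

load_kw = 1000.0               # assumed protected IT load
for minutes in (1, 15):        # short-duration bridge vs. a conventional battery room
    stored_kwh = load_kw * minutes / 60.0
    print(f"{minutes:>2}-minute ride-through at {load_kw:.0f} kW needs about {stored_kwh:.0f} kWh of stored energy")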
Microsoft is not alone in the effort to pursue energy gains in company-built facilities. Google recently “gutted” the electrical infrastructure of its data centers in The Dalles, Oregon, to upgrade it for more powerful servers. The facility in The Dalles was built in 2006.