Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
 

Monday, March 25th, 2013

    Time Event
    12:30p
    Comparing the Cost of a Custom Data Center

    This is the third article in a series from the DCK Executive Guide to Custom Data Centers.

    It should be noted that a custom data center design may cost somewhat more than a standard data center. This aspect should be examined closely: a higher initial CapEx (whether amortized or factored into a lease) should not be the deciding factor on its own. Over the long run, a custom design can actually represent a lower Total Cost of Ownership (TCO) if it results in lower operating costs through improved energy efficiency. Data center designs have also been evolving, particularly over the last several years, to improve energy efficiency. Several new designs involve the use of so-called "Free Cooling", which can greatly impact the TCO.
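    The CapEx-versus-TCO tradeoff can be made concrete with a minimal sketch in Python. All dollar figures below are invented assumptions for illustration, not data from this article:

```python
# Hedged sketch: a custom design with higher CapEx can still win on TCO
# if energy efficiency lowers annual OpEx. All figures are assumptions.

def tco(capex, annual_opex, years=10):
    """Simple undiscounted total cost of ownership over a planning horizon."""
    return capex + annual_opex * years

# Standard design: assumed $10M CapEx, $2M/year OpEx (largely energy).
standard = tco(capex=10_000_000, annual_opex=2_000_000)

# Custom design: assumed 15% higher CapEx, 25% lower OpEx from efficiency.
custom = tco(capex=11_500_000, annual_opex=1_500_000)

print(standard, custom)  # 30000000 26500000
```

    A real comparison would discount future OpEx and model energy prices, but even this crude version shows how a higher first cost can be overtaken by operating savings within a typical facility lifetime.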

    Higher Power and Cooling Densities
    Most standard general purpose data center designs can accommodate 100-150 watts per square foot (and/or an average of up to 5 kilowatts per rack). This design is typically based on the use of a raised floor as a cool air delivery plenum, coupled with down-flow perimeter cooling units. It has the inherent advantage of a proven track record with standard cooling equipment, and it offers the ability to easily accommodate moves, additions and changes by placing (or replacing) floor tiles to meet the heat load of the rows of racks as needed (until the maximum cooling capacity per rack is reached).

    Some organizations have moved to significantly higher power density levels, ranging from 10-25 kilowatts per rack. While some data center cooling designs can accommodate more than 5 kilowatts per rack, that capacity is typically available only on a limited, case-by-case basis. Most standard designs cannot properly cool large quantities of high density racks across the entire data center. These higher power density requirements are typically valid candidates for a custom data center.
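    The floor-density and rack-density figures above are two views of the same power budget. A quick sketch shows how they relate; the gross floor area attributed to each rack is an assumed value, not a figure from the article:

```python
# How watts-per-square-foot maps to kilowatts-per-rack. The gross floor
# area per rack (cabinet plus its share of aisles) is an assumption here.

watts_per_sqft = 150        # upper end of the standard design range above
sqft_per_rack = 30          # assumed: rack footprint plus aisle share

kw_per_rack = watts_per_sqft * sqft_per_rack / 1000
print(kw_per_rack)  # 4.5 -- consistent with the ~5 kW/rack ceiling above

# A 20 kW rack on the same layout would imply roughly 667 W/sq ft,
# several times what a standard raised-floor design is built to cool.
implied_w_per_sqft = 20_000 / sqft_per_rack
print(round(implied_w_per_sqft))
```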

    Designs for Extremely High Energy Efficiency
    While good energy efficiency is important to any data center, there are two areas where new developments are occurring that can significantly improve the energy efficiency of the major infrastructure, though they may carry some other limitations.

    Power Systems
    In the US market, most data centers use industry standard voltages: 480 volts AC for the UPS and cooling equipment, which is then stepped down to 208 or 120 volts AC for most IT equipment. However, some systems beginning to find their way into US data centers are purported to be more energy efficient than the standard power systems. They generally fall into two categories. The first is the European-type system, which distributes 400/230 volts AC within the data center to power the IT equipment. Since this system can be implemented relatively easily and supports virtually any new IT equipment with no change, it is beginning to make some inroads in the US market.

    The second is Direct Current ("DC") based systems, which generally fall into two sub-categories: one at 380 volts DC, and others at one or more lower voltages, such as 48 volts DC (the US telephone system standard) and several other variations. While these DC based systems have been built and are in operation at a limited number of sites, at this time they generally require specially designed, custom built or modified IT equipment. There are technical and economic pros and cons to all these DC based systems, still actively debated, but exploring them in detail is beyond the scope of this article. However, before committing to a DC powered design, be aware that a DC based system cannot easily or cheaply be retrofitted to support US-standard AC based, off-the-shelf computing equipment if a universal DC IT equipment standard does not emerge.

    It should be noted that while older data centers had much greater losses in their electrical power chain, this was primarily due to older technology UPS systems. The newest UPS systems are far more energy efficient than their predecessors, which minimizes the energy saving advantage that non-standard power systems offer. Consider this carefully before moving toward a non-standard power system.
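    A rough sketch shows why the gap has narrowed. The efficiency figures are typical published ranges, assumed here rather than taken from the article:

```python
# Losses in the UPS stage for an older vs. a modern UPS, at an assumed
# 1 MW IT load. Efficiencies are illustrative, not from this article.

def ups_losses_kw(load_kw, efficiency):
    # Input power = load / efficiency; losses are the difference.
    return load_kw / efficiency - load_kw

legacy_loss = ups_losses_kw(1000, 0.85)   # older double-conversion UPS
modern_loss = ups_losses_kw(1000, 0.96)   # current high-efficiency UPS

print(round(legacy_loss), round(modern_loss))  # 176 42
```

    With only a few percent of loss left in a modern AC power chain, a non-standard distribution scheme has far less waste available to eliminate.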

    Alternate and Sustainable Energy Sources
    In most cases the data center simply purchases electricity generated by a utility. The origin of that power has become a matter of public awareness and has been criticized by some sustainability organizations, even when the data center itself is a new, energy efficient facility. This can impact the public image and reputation of the data center operators. In some cases it has influenced the potential location of the data center, based on the type of fuel used to generate the power, whereas previously those decisions were driven strictly by the lowest cost of power. Some leading edge data centers have even begun to build solar and wind generation capacity to partially offset or minimize their use of less sustainable local utility generation fuel sources, such as coal. This would certainly fall under the category of a custom design; however, it would also change the TCO economics, since it raises the upfront capital cost significantly.

    Cooling Systems
    Of all the factors that can impact energy efficiency (and therefore OpEx), cooling represents the majority of facility related energy usage in the data center, outside of the actual IT load itself. The opportunity to save significant amounts of cooling energy by moderating the mechanical (compressor based) requirements and expanding the use of "free cooling" is enormous.

    One of the areas where an investment in customization can produce significant OpEx savings is the expanding use of "Free Cooling". The traditional data center cooling system primarily consists of standard data center grade cooling units (CRAC or CRAH; see part 3, "Energy Efficiency", for more information), typically placed around the perimeter of the room and blowing cold air into a raised floor. This is typically a closed loop air path with virtually no introduction of outside fresh air, which means mechanical cooling is the primary method, requiring significant energy to operate the compressors that effect heat removal. This is the time tested and most commonly utilized design. Some systems include some form of economizer to lower annual cooling energy, but few standard systems can totally eliminate the use of mechanical cooling.
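    The energy leverage of an economizer can be sketched roughly as follows; both the compressor draw and the number of free-cooling hours are assumptions for illustration, and real values depend heavily on climate and setpoints:

```python
# Annual compressor energy with and without economizer hours.
# All figures below are illustrative assumptions.

HOURS_PER_YEAR = 8760
compressor_kw = 300          # assumed mechanical cooling draw when running
economizer_hours = 6000      # assumed hours/year free cooling suffices

baseline_kwh = compressor_kw * HOURS_PER_YEAR
with_econ_kwh = compressor_kw * (HOURS_PER_YEAR - economizer_hours)
savings_fraction = 1 - with_econ_kwh / baseline_kwh

print(round(savings_fraction, 2))  # 0.68 -- roughly two-thirds less
```

    Under these assumed numbers, every hour the compressors stay off translates directly into avoided energy, which is why climate suitability dominates the free-cooling business case.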

    More recently, however, some data centers have been built using so-called "Fresh Air Cooling", which brings cool outside air directly into the data center and exhausts the warmed air out of the building whenever outside conditions permit. There are many variations on this methodology, and it is still being developed and refined. Pioneered and built mostly by Internet giants such as Facebook, Google and Yahoo, this method would have been considered unthinkable only a few years ago for an enterprise class data center. While it is not yet a widespread, commonly accepted method of cooling, it is being considered by some more mainstream operators for their own data centers. Of course, its effectiveness depends greatly on climatic conditions, and therefore it is not ideal for every location. (Please see part 3, "Energy Efficiency".)

    You can download a complete PDF of this article series on DCK Executive Guide to Custom Data Centers courtesy of Digital Realty.

    12:45p
    GPU News: Cray Awarded $32 Million Contract

    Surrounding the GPU Technology Conference this week in San Jose, Cray, Cirrascale and Penguin Computing have GPU announcements:

    Cray awarded $32 million Swiss contract. Cray announced that it has signed a contract with the Swiss National Supercomputing Centre (CSCS) to upgrade and expand its Cray XC30 supercomputer. When the upgrade is completed, the system nicknamed "Piz Daint" will be the first petascale supercomputer in Switzerland. Currently a 750 teraflop Cray XC30, it will be transformed to include NVIDIA K20X GPU accelerators. CSCS is the first customer to order a Cray XC supercomputer with NVIDIA GPUs. "Piz Daint will help advance the research projects of our diverse user community by leaps and bounds," said Thomas Schulthess, director of CSCS. "With GPU acceleration integrated into Cray's latest generation supercomputer, the application performance and the energy efficiency of our simulations will improve significantly. We are very excited about the collaborative development of a truly general-purpose, hybrid multi-core system with Cray." The contract is valued at more than $32 million, and the upgraded system is expected to be operational in 2014.

    Cirrascale launches GB5400 GPGPU blade server. Cirrascale announced the next generation of its GB5400 blade server supporting up to eight GPU cards. Utilizing a pair of the company's proprietary 80-lane PCIe switch-enabled risers, the GB5400 supports up to eight discrete GPGPU (General-Purpose Graphics Processing Unit) cards in a single blade. "We have redesigned the GB5400 to handle the latest cards from leading GPU providers, including NVIDIA and their wide assortment of high-end GPU cards," said David Driggers, CEO, Cirrascale Corporation. "Our customers and licensed partners in cloud and High Performance Computing are asking for this increased density and performance, while maintaining the ability to scale the solutions they choose. We're confident that the GB5400 meets these needs, and in fact, surpasses them." The Cirrascale GB5400 blade server, including the entire GB series line of GPGPU solutions, as well as the Cirrascale proprietary PCIe switch-enabled riser, are immediately available to order and are shipping to customers now.

    Penguin Computing unveils Relion 2808GT. Penguin Computing announced the availability of the Relion 2808GT. The Relion 2808GT supports eight GPUs or coprocessors in two rack units and provides a higher compute density than any other server on the market. The GPUs are supported by a dual socket platform based on Intel's Xeon E5-2600 product family. The Relion 2808GT also features an on-board dual 10GBASE-T controller and up to 512GB of ECC memory. "Penguin has been delivering integrated GPU computing clusters since version 1.0 of this technology," said CEO Charles Wuischpard. "The new Relion 2808GT platform in conjunction with the latest GPU and coprocessor technology delivers unprecedented levels of performance. The Relion 2808GT enables our HPC customers to further accelerate their research by shortening the time to result for their simulations."

    12:51p
    American Internet Lands $43.5 Million Credit Facility

    American Internet Services has refinanced at an opportune time, lowering its cost of capital thanks to historically low interest rates while also securing capital for continued growth. Fortress Credit Corp, an affiliate of Fortress Investment Group, has provided the company with a $43.5 million senior secured credit facility. Terms of the transaction were not disclosed.

    AIS was founded way back in 1989. It provides tailored data center and cloud service solutions to companies with an emphasis on security, compliance, connectivity, and customer service. AIS operates SSAE 16-compliant, SOC 1-, 2-, and 3-audited, redundant facilities in San Diego, Los Angeles and Phoenix. It has more than 600 enterprise customers worldwide and is backed by private equity firms Seaport Capital, Viridian Investments, and DuPont Capital Management.

    “Refinancing AIS’ debt enables us to take advantage of historically low interest rates while providing resources for additional investment in new products and services, continuing our market expansion, and meeting the developing needs of our customers,” said Tim Caulfield, Chief Executive Officer at AIS. “We look forward to working with Fortress, a knowledgeable and experienced lender to the internet infrastructure sector.”

    Fortress Investment Group LLC is a global investment firm with over $53 billion in assets under management as of December 31, 2012. Founded in 1998, Fortress manages assets on behalf of over 1,400 institutional clients and private investors worldwide across a range of investment strategies - private equity, credit, liquid hedge funds and traditional fixed income.

    “This financing illustrates Fortress’s continuing support for middle market companies in the data center and cloud service sector,” said Ken Sands, a Managing Director of the Credit Funds, Fortress Investment Group. “AIS is a recognized regional leader in tailored data center and cloud service solutions and we are pleased to have arranged this financing that will support their future growth.”

    DH Capital served as advisor to AIS on the financing. DH Capital is a private investment banking partnership specializing in Internet infrastructure, telecommunications, and SaaS with a focus on M&A and capital placements.

    1:33p
    Colocation Communities Are a Match for Cloud

    Kevin Dean is Chief Marketing Officer at Interxion


    The number of data centers may be shrinking, but their capacity is growing. Current market calculations show disruptive technologies like server virtualization and cloud computing are effectively consolidating servers enough to actually shrink the United States’ vast data center footprint. In fact, IDC predicts that the total number of U.S. data centers will fall from 2.94 million in 2012 to 2.89 million in 2016. However, while new data center facilities themselves may be on the decline, the data they house certainly isn’t, given that 2.5 quintillion bytes of data are created every day.

    With fewer facilities, but more data than ever, what can companies do?

    While the shift to more virtualized and cloud-based environments means that companies will build fewer data centers overall, it also means many will look to specialized, third-party data centers to support their data-intensive needs. Further predictions from IDC reveal that data center capacity will grow from 611.4 million square feet in 2012 to more than 700 million square feet in 2016, so colocation facilities that have the capacity to handle cloud and virtualization requirements are well suited to support this data boom.

    Statistics aside, today’s information explosion is proof enough that data is the lifeblood of any business and, therefore, how it’s contained is a top concern. As more enterprises put their internal servers under scrutiny, they are noticing that legacy enterprise data centers are becoming increasingly ineffective, no longer able to provide the space, power and security requirements necessary to support a company’s transition to the cloud. As a result, companies are optimizing their data through outsourcing options, such as data center colocation facilities. By choosing to colocate, enterprises benefit from a wide range of power connections with full backup, multi-layer security to protect data, lower maintenance expenses and more cost-effective cooling.

    Beyond these colocation benefits, however, one big advantage remains: communities of interest. These communities located within such colocation facilities are one of the biggest draws for companies to choose colocation in the first place. For instance, cloud communities offered by carrier-neutral colocation providers allow service providers across cloud markets to scale their resources and match fluctuating customer requests. Similarly, businesses that are part of communities of interest within finance and digital media content hubs benefit from colocation facilities’ interconnection with leading cloud platforms, which enable community members to take advantage of cloud computing and its cost efficiencies.

    Cloud Communities Make Gains

    Cloud service providers in particular benefit from multi-tenant colocation facilities' communities, which enable members to connect with each other and with partners over near-instantaneous connections. The traditional selection criteria for data center facilities, such as power, space, security and cooling capacity, are now topped by the requirement of close proximity to end users with unbeatable connectivity and performance speeds, which are achieved in such highly connected industry hubs. Since these hubs host a variety of service providers, CDNs, carriers, ISPs and Internet exchanges under one roof, enterprises and cloud service providers have a marketplace of cloud-based services at their fingertips.

    Additionally, cloud hub participants benefit from partnership opportunities and additional revenue streams made possible through member interaction.  Furthermore, as the market shifts to more dynamic, hybrid cloud environments, the connectivity between private infrastructure and public cloud servers is more essential now than ever before. To ensure that these connections are performing as fast as possible for their customers, many colocation participants have the ability to establish private connectivity between a public cloud platform and their existing dedicated IT infrastructure. This interconnectivity allows members to take control over their hybrid environment while reducing network costs, increasing bandwidth and providing a more consistent network experience than Internet-based connections.

    1:49p
    Yahoo Building a Bigger Computing Coop

    The exterior of the Yahoo Computing Coop buildings in Lockport, New York. The company is planning to expand its campus in Lockport. (Photo: Yahoo)

    Yahoo’s ultra-efficient “chicken coop” data center in upstate New York is getting bigger. The Internet company has announced plans to invest an additional $168 million in the campus for its hydro-powered, wind-cooled server farm in Lockport, N.Y. The expansion will include an additional 7.2 megawatts of data center space, along with a call center. The projects are expected to create 115 jobs between them.

    The expansion was expected, as Yahoo indicated last year that it would buy additional land at its property in Lockport. The company is seeking breaks on property taxes and sales taxes on servers and equipment for the project, according to the Buffalo News. The New York Power Authority will expand the site’s power capacity to support the new construction.

    The Yahoo Lockport facility, which is optimized for air-cooling, is one of the world’s most efficient data centers, operating with a Power Usage Effectiveness (PUE) of 1.08. The data center, which is supported by hydro-electric power from the NYPA, requires mechanical cooling for a handful of hours each year.
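    For readers unfamiliar with the metric, PUE is total facility power divided by IT power, so a figure near 1.0 implies very little overhead. A quick sketch with an assumed IT load illustrates what 1.08 means in practice:

```python
# What a PUE of 1.08 implies for overhead power. The 1 MW IT load is an
# assumption for illustration; only the PUE figure comes from the article.

it_load_kw = 1000                       # assumed 1 MW of IT equipment
pue = 1.08                              # figure reported for Lockport

total_facility_kw = it_load_kw * pue    # total power drawn by the facility
overhead_kw = total_facility_kw - it_load_kw

print(round(overhead_kw))  # 80 -- cooling, distribution and lighting combined
```

    By comparison, a typical enterprise facility of that era ran closer to a PUE of 2.0, meaning a full extra kilowatt of overhead for every kilowatt of IT load.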

    The first two phases of the Lockport project, built in 2010 and 2011, featured multiple 120-by-60 foot prefabricated metal structures using the Yahoo Computing Coop data center design. The coops, modeled on the thermal design of chicken coops, have louvers built into the sides to allow cool air to pass through the computing area. The air then flows through two rows of cabinets and into a contained center hot aisle, which has a chimney on top. The chimney directs the waste heat into the top of the facility, where it can either be recirculated or vented through the cupola. See A Closer Look at Yahoo's New Data Center for more photos and video.

    This approach to heat management allows the Lockport data center to operate without chillers, which provide refrigerated water for cooling systems and are among the most energy-intensive components of a data center. The facility uses an evaporative cooling system during those 9 days a year when it is too warm to use fresh air. The buildings were positioned on the Lockport property to allow Yahoo to bring in cool air from either side of the coop, based on the prevailing winds.

    2:30p
    Network News: Zayo Partners With Internet2

    Here’s our review of some of today’s noteworthy links for the networking sector of the data center industry:

    Zayo and Internet2 bring 100G to the north. Zayo Group announced its partnership with Internet2 to add substantial new capacity on Zayo's fiber route from Chicago to Seattle. The system will have greater than 4 terabits of overall capacity to support Internet2's new 100G network. Set to be completed in the spring, the infrastructure will extend the nation's leading research and education network's 100G services to universities and research centers in Idaho, Montana, North Dakota, Minnesota, Washington, and Wisconsin. The project will provide new 100G national backbone paths between Internet2's core routers in Seattle and Chicago, reduce latency for time-sensitive applications, and increase capacity for global innovation with partners throughout the west and Asia that connect through Seattle. "This project demonstrates Zayo's commitment to building strong strategic alliances with the research and education community and our investment into our extensive fiber footprint," says Zach Nebergall, vice president of Wavelength Product Group at Zayo. "Internet2 is helping to bring substantial amounts of additional capacity to the research and education community via its partnerships."

    CenturyLink Deploys Ciena 100G. Ciena (CIEN) announced that CenturyLink (CTL) recently utilized its converged packet optical platform with WaveLogic coherent optical technology to modernize and upgrade its network that spans more than 50 metropolitan locations across the United States. With the upgrade CenturyLink can offer 1GE, 10GE, 100GE and equivalent wavelengths, utilizing Ciena's 6500 Packet-Optical Platform. The 6500 platform will also offer integrated packet switching, which gives CenturyLink agility in the delivery of groomed Ethernet services to its enterprise customers. "CenturyLink understands the increasing need for scalability, capacity and high-speed network services for today's business requirements," said Pieter Poll, senior vice president of national and international network planning, engineering and construction, CenturyLink. "Ciena's converged packet and coherent optical technology allows us to provide speed and capacity improvements to our international and domestic regional networks, creating a true, end-to-end 100G network to deliver today's bandwidth-intensive services and applications."

    Level 3 to build data center in Bogotá, Colombia. Level 3 Communications (LVLT) announced the construction of its newest data center in Bogotá, Colombia, as a result of increased demand for IT services among its customers. This new, 500 square-meter Premier Elite data center, designed to support managed services, will provide onsite technical staff, high levels of availability, enhanced security, and high power density cabinets and suites. "The Colombian market shows a growing demand for colocation, housing, hosting and value-added services," said Luis Carlos Guerrero, sales vice president for Level 3's Andean region. "The trend to outsource these services to a trusted business partner – one that will support the customer in its expansion strategy – is crucial for companies today so they can focus on their core business."

    Extreme Networks solutions tested by EANTC. Extreme Networks (EXTR) announced that its high performance cloud and Mobile Backhaul Ethernet switching solutions were among the first to be tested by the European Advanced Networking Test Center (EANTC) for carrier-focused Software Defined Networking (SDN), MPLS, and hybrid timing combining Synchronous Ethernet (SyncE) and IEEE 1588 Precision Time Protocol (PTP). EANTC's final test plan for the SDN/MPLS and IPv6 testing was rigorous, comprising 51 test outcomes, 19 of which broke new ground in SDN testing. The SDN OpenFlow tests highlighted Layer 2 and 3 forwarding, OpenFlow topology discovery, failure recovery in OpenFlow, and policy based routing. "Extreme Networks continues to deliver first-to-market and high performance SDN and Mobile Backhaul Ethernet solutions for sophisticated multi-tenant data centers and mobile 4G networks," said David Ginsburg, CMO for Extreme Networks. "Our successful completion of the EANTC organized testing in Berlin in 2013 further validates our ability to support the network architectures required by new carrier service offerings."

    3:27p
    Cloud News: Red Hat, Panzura, Avaya

    News from the cloud computing sector includes developments from Red Hat, Panzura and Avaya:

    Red Hat collaborates with Code for America. Red Hat (RHT) announced a collaboration with Code for America (CfA), a non-profit organization that partners with local governments to foster civic innovation, focused on using technology to increase civic engagement. The collaboration brings Red Hat's OpenShift Platform-as-a-Service (PaaS) offering to CfA Fellows and partner communities free of charge to help achieve CfA's goal of fostering collaboration between city hall and city residents and innovative problem solving through technology. In a contribution worth approximately $300,000, the CfA Fellows will have access to OpenShift free of charge for one year, with the option of one additional year of free hosting and services. OpenShift supports many popular frameworks, such as Zend, Java EE, Spring, Rails and Play, with built-in platform support for Node.js, Ruby, Python, PHP, Perl and Java. OpenShift offers an application platform in the cloud that manages the stack so that developers can focus on their application code.

    Panzura selected by California State, Northridge. Cloud storage provider Panzura announced that California State University, Northridge (CSUN) selected the Panzura Global Cloud Storage System to transform its off-site data backup protection process, utilizing Panzura's encrypted Quicksilver cloud storage controllers. CSUN will streamline its off-site data protection efforts and significantly reduce its storage needs, while also shifting CapEx to OpEx. The university manages approximately 300TB of NAS/SAN storage, protected by tape and disk storage. Backup processes were becoming increasingly slow and cumbersome, and the university needed to transition away from the endless CapEx cycle of refreshing tape backup equipment and provisioning more capacity for offsite data protection. "Our goal was to increase process efficiency and reduce storage footprint for off-site backups by eliminating tape and avoiding use of campus-owned disk for off-site backup storage," said Chris Olsen, Sr. Director of Infrastructure Services and ISO at CSUN. "We had some initial hesitations about cloud storage, including the cost to get data to the cloud, data security, and controlling capacity. With Panzura's Global Cloud Storage System, we simply pointed our Symantec NetBackup application to the Panzura Quicksilver Cloud Storage Controller and the cloud became our backup target, while RSA 4096-bit encryption protected our data with us owning the encryption keys. The solution was straightforward to deploy and the deduplication exceeded our expectations."

    Avaya Collaborative Cloud for Cloud Service Providers. Avaya announced additions to the Avaya Collaborative Cloud with new offers specifically designed for cloud service providers (CSPs) that allow them to brand and deliver Avaya's unified communications, contact center and video solutions. With these new solutions CSPs can help organizations off-load the challenges of managing BYOD environments, widely dispersed workforces and the shifting demands of end-customers. The new offers enable CSPs to evolve and augment enterprise communications with cloud-based solutions as well as provide greater interoperability across vendors, domains and protocols. "With Avaya Collaborative Cloud, cloud service providers can offer a differentiated UC, contact center or video solution to enterprises," said Joel Hackney, SVP and general manager, Cloud Solutions, Avaya.

    To see other cloud computing news, visit our Cloud Computing Channel.

    9:35p
    Debra Chrapaty Departs Zynga to Head Nirvanix

    Debra Chrapaty, who has served in infrastructure leadership positions for some of the Internet’s largest players, is now calling the shots for a cloud service provider. Chrapaty is departing Zynga, where she had been CIO, to become the chief executive officer at enterprise cloud storage specialist Nirvanix.

    Chrapaty’s tenure at Zynga spanned the company’s effort to shift away from a heavy dependence upon cloud services, moving much of the gaming company’s infrastructure to in-house data centers. Her experience prior to Zynga included a stint as VP of Global Foundation Services for Microsoft . She also served as Senior VP of the software collaboration group at Cisco Systems and COO of E-TRADE Technologies.

    Nirvanix provides cloud storage services focused on enterprises with massive amounts of large unstructured content files, and offers usage-based pricing across public, hybrid and private cloud storage deployments. Last year the company raised $25 million from backers including Khosla Ventures, Intel Capital, Valhalla Partners, Mission Ventures and Windward Ventures. Customers include Cerner Corporation, IBM, USC Digital Repository, National Geographic and Relativity Media.

    “I believe there is room for innovation in the enterprise storage market,” said Chrapaty. “Nirvanix is already ahead of the game and differentiating services and gaining traction against some of the storage goliaths. Having built and run some of the industry’s largest cloud environments, I know the importance of secure, available, cost efficient infrastructure and storage. Now we have the chance to build a truly differentiated cloud offering and pass that value on to our enterprise customers. That’s exciting for me and, more important, for the industry.”  

     

