Data Center Knowledge | News and analysis for the data center industry

Monday, December 17th, 2012

    1:25p
    Data Center Links: Interxion, AWS, Verizon, Level 3

    Here’s our review of some of this week’s noteworthy links for the data center industry:

    Interxion connects to Amazon Web Services. Interxion (INXN) announced that connectivity to AWS Direct Connect is now available from its 32 data centres across 11 countries in Europe, made possible by teaming up with Level 3 Communications and Amazon Web Services (AWS). This allows customers to establish private connectivity between the AWS platform in the cloud and their existing dedicated IT infrastructure, which can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience. “AWS Direct Connect helps our customers to build hybrid cloud solutions between their dedicated IT infrastructures and the AWS platform more effectively by providing cost, security and performance control,” said Vincent in 't Veld, Director Cloud Segment at Interxion. “This supports the wider IT transformation process by allowing organisations to further maximise their IT efficiency. Thanks to the presence of leading network service providers such as Level 3, AWS Direct Connect can now be accessed by our customer community across our entire data centre footprint.”

    Verizon expands 100G networks. Verizon (VZ) announced it has expanded its 100G network in the U.S. and Europe this year, adding 20,921 kilometers (13,000 miles) in the U.S. and 2,600 kilometers (1,616 miles) in Europe. The U.S. deployment covered selected routes, including Atlanta to Tampa, Kansas City to Dallas and Salt Lake City to Seattle. In Europe, Verizon added two new 100G routes – between London and Paris, and London and Frankfurt. “Expanding 100G technology on our high-performance U.S. and European networks means Verizon is able to successfully meet traffic demand while increasing efficiency and capacity,” said Kyle Malady, senior vice president of global network operations and engineering for Verizon. “Increased video traffic, LTE 4G growth and cloud usage are driving bandwidth demand and 100G is critical to creating that rich end-user experience.”

    Level 3 Latin America receives ISAE 3402. Level 3 Communications (LVLT) announced it has been awarded the International Standard on Assurance Engagements (ISAE) 3402 Type II Certification for its Premier Elite data centers in Buenos Aires, Argentina; Sao Paulo, Brazil; Rio de Janeiro, Brazil; and Santiago, Chile. The ISAE 3402 is a global assurance standard for reporting on financial processes and controls at service organizations. “The ISAE provides our customers and their shareholders with a report based on an international standard reporting system that provides transparency and simplification, and efficiently delivers all the information required for their internal auditing needs,” said Gabriel del Campo, Level 3's senior vice president of Data Centers for Latin America. “The certification of these Level 3 data centers was a direct consequence of customer demand.”

    2:55p
    The Year in Downtime: Top 10 Outages of 2012

    SuperStorm Sandy was a regional event that presented unprecedented challenges for the data center industry, along with the region’s other infrastructure. (Photo: NASA)

    In a business built on uptime, outages make headlines. The major downtime incidents of 2012 illustrate the range of causes of outages – major disasters, equipment failures, software behaving badly, undetected Leap Year date issues, and human error. Each incident caused pain for customers and end users, but also offered the opportunity to learn lessons that will make data centers and applications more reliable.

    A case in point: 2012 was the year of the cloud outage, as several leading cloud platforms experienced downtime, most notably Amazon Web Services. The incidents raised questions about the reliability of leading cloud providers, but also prompted greater focus on architecting cloud-based applications across multiple zones and locations for greater resilience. Meanwhile, the post-mortems on SuperStorm Sandy have just begun, and will continue at industry conferences in 2013. Here’s a look at our list of the Top 10 outages of 2012:

    1. SuperStorm Sandy, Oct. 29-30: Data centers throughout New York and New Jersey felt the effects of Sandy, with the impacts ranging from flooding and downtime for some facilities in Lower Manhattan, to days on generator power for data centers around the region. Sandy was an event that went beyond a single outage, and tested the resilience and determination of the data center industry on an unprecedented scale. One of the affected providers willing to share its story was Datagram, whose CEO Alex Reppen described the company’s battle to rebound from “apocalyptic” flooding that shut down its diesel fuel pumps. Indeed, diesel became the lifeblood of the recovery effort, as backup power systems took over IT loads across the region, prompting extraordinary measures to keep generators fueled. With the immediate recovery effort behind us, the focus is shifting to longer-term discussions about location, engineering and disaster recovery – a conversation that will continue for months, if not years.

    2. Go Daddy DNS Outage, Sept. 10: Domain giant Go Daddy is one of the most important providers of DNS service, as it hosts 5 million web sites and manages more than 50 million domain names. That’s why a Sept. 10 outage was one of the most disruptive incidents of 2012. Tweet-driven speculation led some to believe that the six-hour incident was the result of a denial of service attack, but Go Daddy later said it was caused by corrupted data in router tables. “The service outage was not caused by external influences,” said Scott Wagner, Go Daddy’s Interim CEO. “It was not a ‘hack’ and it was not a denial of service attack (DDoS). We have determined the service outage was due to a series of internal network events that corrupted router data tables.”

    3. Amazon Outage, June 29-30: Amazon’s EC2 cloud computing service powers some of the web’s most popular sites and services, including Netflix, Heroku, Pinterest, Quora, Hootsuite and Instagram. That success has a flip side: when an Amazon data center loses power, the outage ripples across the web. On June 29, a system of unusually strong thunderstorms, known as a derecho, rolled through northern Virginia. When an Amazon facility in the region lost utility power, the generators failed to operate properly, depleting the emergency power in the uninterruptible power supply (UPS) systems. Amazon said the data center outage affected a small percentage of its operations, but was exacerbated by problems with systems that allow customers to spread workloads across multiple data centers. The incident came just two weeks after another outage in the same region. Amazon experienced another cloud outage in late October.

    4. Calgary Data Center Fire, July 11: A data center fire in a Shaw Communications facility in Calgary, Alberta crippled city services and delayed hundreds of surgeries at local hospitals. The incident knocked out both the primary and backup systems that supported key public services, providing a wake-up call for government agencies to ensure that the data centers that manage emergency services have recovery and failover systems that can survive a series of adversities – the “perfect storm of impossible events” that combine to defeat disaster management plans.

    5. Australian Airport Chaos, July 1: The “Leap Second Bug,” in which a single second was added to the world’s atomic clocks, made headlines on July 1. The change caused computer problems with the Amadeus airline reservation system, triggering long lines and traveler delays at airports across Australia, as the outage wreaked havoc with the check-in systems used by Qantas and Virgin Australia.

    3:47p
    Cable Management Decisions in the Data Center

    Ian Timmins is Director of Engineering at Optical Cable Corporation.


    Data centers place some of the most demanding performance requirements in the communications infrastructure market. Choosing the right cabling and cable management system is one of the most important aspects of data center design. Reliability, in combination with extreme density, should guide your choice of products. Data center longevity is also a consideration, especially when it comes time to upgrade to 40G and 100G Ethernet.

    Tight Buffer Cable

    Cabling infrastructure is the backbone of your data center. The rugged characteristics of tight buffer cabling reduce the risk of downtime caused by cable failure. With increasing density and the attendant risk of strain on your cabling system, reliability is essential when evaluating cabling needs. As fiber becomes the predominant transport medium in the data center, it’s important to research and select the latest in data center products and technologies. Choose companies with experience in developing and supplying cable products perfected for the most critical communications applications.

    Preterminated Fiber and Copper Cabling Systems

    The sheer number of connections in the data center has increased dramatically and will continue to do so with the advent of the 40G and 100G Ethernet standards, which require more fibers per data connection than the conventional duplex configuration. Look for the highest density of connectivity: there are currently products on the market that will support upward of 144 LCs for fiber optic installs and 48 ports of Cat6A copper in one rack unit. To accommodate this, preterminated copper and fiber cables are the preferred choice for several reasons. First, the installer is less of a factor, since no special skill set is required to terminate or test preterminated cables. This eliminates on-site polishing and termination for high-count fiber optic links on the optical side, as well as the concern of alien crosstalk testing on the copper side. No external contractors are needed, saving your company both time and money. Second, preterminated cables eliminate excess loops, as they are cut to specified lengths. Preterminated copper or fiber optic cables can be customized to your specific needs for an easy, perfectly cabled installation, eliminating the problem of excessive slack storage.
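
    To put rough numbers on that density math, here is a minimal sketch in Python. The 144-LC panel figure comes from the paragraph above; the fibers-per-link counts are the standard IEEE 802.3 multimode lane configurations (duplex 10GBASE-SR uses 2 fibers, 40GBASE-SR4 uses 8, 100GBASE-SR10 uses 20):

        # Rough density math for fiber links in one rack unit (1U).
        # Fibers per link follow the IEEE 802.3 multimode standards:
        # duplex 10GBASE-SR uses 2 fibers, 40GBASE-SR4 uses 8 (4 Tx + 4 Rx),
        # and 100GBASE-SR10 uses 20 (10 Tx + 10 Rx).
        FIBERS_PER_LINK = {"10G": 2, "40G": 8, "100G": 20}

        # A high-density 1U panel with 144 LC positions terminates 144 fibers.
        PANEL_FIBERS_PER_RU = 144

        for speed, fibers in FIBERS_PER_LINK.items():
            links = PANEL_FIBERS_PER_RU // fibers
            print(f"{speed}: {fibers} fibers/link -> {links} links per 1U panel")

        # Output:
        # 10G: 2 fibers/link -> 72 links per 1U panel
        # 40G: 8 fibers/link -> 18 links per 1U panel
        # 100G: 20 fibers/link -> 7 links per 1U panel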

    10G Copper Cat6A PoE Ready Preterminated Cabling System

    If your data center supports PoE-enabled devices, such as cameras for surveillance and security or IP telephony, then copper cabling may be an appropriate choice for your facility. Choose products that let you build a foundation supporting high-speed data and device power simultaneously. The migration to fiber, driven by concerns about attenuation on long cabling runs and alien crosstalk in high-density applications, leaves infrastructure for high-bandwidth PoE devices somewhat out in the cold. Preterminated Cat6A panels and cabling systems offer the foundation you are looking for to support current and future PoE devices.
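
    When sizing such a foundation, the power side is simple arithmetic. Below is a hypothetical sketch; the device mix is invented for illustration, and the per-port wattages are the IEEE 802.3af/at sourcing limits rather than figures from this article:

        # Hypothetical PoE power-budget check for a Cat6A cabling plan.
        # Per-port wattages are the IEEE sourcing limits: 802.3af (Type 1)
        # supplies up to 15.4 W per port, 802.3at (PoE+, Type 2) up to 30 W.
        PSE_WATTS = {"802.3af": 15.4, "802.3at": 30.0}

        def poe_budget(devices):
            """devices: list of (label, standard) pairs -> total PSE watts."""
            return sum(PSE_WATTS[standard] for _, standard in devices)

        # Invented example plan: 24 PoE+ surveillance cameras, 48 PoE IP phones.
        plan = [("camera", "802.3at")] * 24 + [("phone", "802.3af")] * 48
        print(f"Required switch PoE budget: {poe_budget(plan):.0f} W")  # 1459 W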

    Integrated Cable Management

    Cable management is key to accessibility and a clean visual appearance in any data center. In recent years this has become even more pressing, as the value of rack space has driven up densities for both copper and fiber applications. Combined with the higher fiber counts of the 40G and 100G Ethernet standards, cable management has seen an unprecedented level of interest to accommodate the densities of today and the anticipated densities of tomorrow. Ideally, choose products that ensure accessibility of all ports, have horizontal and vertical management mechanisms on the front of the panel, and smooth coupling mechanisms between the cassettes and the panel chassis.

    Upgradability

    A solid technical roadmap is key to installing infrastructure that is suitable for your current needs, as well as accommodating both anticipated and unanticipated needs in the future. Spend some time now researching the marketplace and choose products designed specifically with this in mind. Choose fiber cassettes that are upgradable from 10G Ethernet to either 40G or 100G, and panels suitable for all technologies. Preterminated trunk cables used for 10G today can be leveraged into 40G or 100G tomorrow simply by installing new cassettes that suit your bandwidth needs, as the sketch below illustrates.
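
    A minimal sketch of that reuse, assuming a common 12-fiber MTP trunk (the trunk size is an assumption for illustration; the lane counts are the IEEE standards): an LC breakout cassette carves the trunk into six duplex 10G links, while a 40GBASE-SR4 cassette uses 8 of the 12 fibers for a single 40G link.

        # One 12-fiber MTP trunk, reused across cassette generations.
        # An LC breakout cassette yields duplex 10G links (2 fibers each);
        # a 40GBASE-SR4 cassette carries one 8-fiber 40G link, leaving
        # 4 of the 12 fibers unused per the MPO convention.
        TRUNK_FIBERS = 12
        GENERATIONS = {
            "10G duplex (LC cassette)": (2, 10),    # fibers/link, Gbit/s per link
            "40GBASE-SR4 (MPO cassette)": (8, 40),
        }

        for name, (fibers, gbps) in GENERATIONS.items():
            links = TRUNK_FIBERS // fibers
            print(f"{name}: {links} link(s), {links * gbps} Gbit/s per trunk")
        # 10G duplex (LC cassette): 6 link(s), 60 Gbit/s per trunk
        # 40GBASE-SR4 (MPO cassette): 1 link(s), 40 Gbit/s per trunk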

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:50p
    Contegix Leases HQ and Data Center in St. Louis

    The exterior of 210 N. Tucker, the St. Louis data center hub, which had a major power upgrade in 2010.

    Cloud, colo and managed hosting company Contegix has a new data center and new headquarters in St. Louis. The company is leasing the entire sixth floor of the Digital Realty-owned 210 North Tucker building, which will serve as its headquarters, and is building a new data center there.

    “We saw an opportunity to simultaneously upgrade our facilities and unite our company under one roof,” said Matthew Porter, CEO of Contegix. The lease is effective immediately, and Contegix is expected to be moved into the offices and data center by the end of January 2013.

    Porter and Contegix co-founder Craig McElroy are lifelong residents of St. Louis, and made a conscious decision to keep the company downtown, “which has seen a renaissance over the past decade,” a trend we’ve previously highlighted here at DCK.

    Contegix saw an 86 percent increase in new customer sales during 2012, and the move helps facilitate that growth. “We owe it to our customers and our employees to invest in facilities that propel us forward and ensure future success,” Porter said in a release. The company says it is seeing its highest growth rates in managed services and high-end colocation for high-density services.

    Power Upgrade by Digital Realty

    Contegix approached Digital Realty to design a data center for it with N+1 redundancy, including generators. “As one of the world’s largest data center providers, Digital Realty gives us the ability to expand our footprint as we continue to grow,” said Porter.

    210 North Tucker is an 18-story building in downtown St. Louis that offers 300,000 square feet of space. Digital Realty began upgrades to the building in 2010. Last September, the company unveiled a $30 million renovation that added 16 megawatts of new power capacity, effectively quadrupling the power available to the facility. The company also completed construction of new colocation data center space within the building.

    Contegix will be an anchor tenant in the facility, housing both its offices and a data center there. “We are pleased to be a part of Contegix’s growth and expansion plans in St. Louis,” said Michael Foust, CEO of Digital Realty. “With our most recent improvements, the power, space and enhanced technical attributes available at 210 North Tucker make this data center property an ideal solution for Contegix and its customers.”

    7:01p
    Schneider, ETAP Team to Simulate Power Changes

    Schneider Electric and analytical engineering firm ETAP announced a new advanced electrical power analysis and simulation module for data centers, giving operators added power system intelligence to keep their power distribution infrastructure reliable.

    Through a cooperative agreement, the two companies have developed a power analysis and simulation solution module for data centers that joins Schneider Electric’s power monitoring systems with ETAP’s power system management and simulation software. The solution will enable data center operators to analyze and understand the potential impact of changes they make to the power system through simulation.

    “In today’s mission-critical data centers, operators need better tools to fully understand the full impact on their power system before they act,” said Pankaj Lal, offer management director for Energy Solutions, Schneider Electric. “Predictive simulation is the next frontier for data centers to mitigate the risk of equipment failure and outages.”

    To help solve power system operational challenges and provide key insights into a data center’s electrical network, Schneider Electric and ETAP have developed a module that extends the power management capabilities of the StruxureWare Power Monitoring and SCADA software – expert-level applications within the StruxureWare for Data Centers management suite – to deliver an added level of reliability and peace of mind for data center operations. In addition, tight integration between the StruxureWare real-time power monitoring systems and ETAP’s power modeling and simulation software enables the data center operator to run customized, pre-engineered “what-if” scenarios.
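
    The announcement does not spell out what those scenarios look like, but the general shape of a “what-if” check is easy to sketch. The following is a minimal, hypothetical illustration – the class, names and numbers are all invented, not the Schneider Electric or ETAP API – testing whether a proposed load addition would break N+1 redundancy on a monitored UPS group:

        # Hypothetical "what-if" check: would a proposed new load still
        # leave N+1 redundancy on a UPS group? All names and numbers are
        # invented for illustration, not the Schneider/ETAP product API.
        from dataclasses import dataclass

        @dataclass
        class UpsGroup:
            units: int          # installed UPS modules
            unit_kw: float      # rating of each module, in kW
            measured_kw: float  # present load, from the monitoring system

            def survives_n_plus_1(self, added_kw: float) -> bool:
                # N+1: the full load must fit on (units - 1) modules.
                return self.measured_kw + added_kw <= (self.units - 1) * self.unit_kw

        group = UpsGroup(units=4, unit_kw=500, measured_kw=1100)
        for proposed_kw in (200, 500):
            verdict = "holds" if group.survives_n_plus_1(proposed_kw) else "is violated"
            print(f"Add {proposed_kw} kW -> N+1 {verdict}")
        # Add 200 kW -> N+1 holds        (1300 kW <= 1500 kW)
        # Add 500 kW -> N+1 is violated  (1600 kW > 1500 kW)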

    “Using ETAP’s robust modeling and analysis engine together with real-time data acquisition has allowed us to add the intelligence behind Schneider Electric’s powerful monitoring suite,” said Shervin Shokooh, COO of ETAP. “We look forward to delivering an added level of reliability and operational peace of mind to data center owners, operators and managers.”

    8:03p
    Study: Granular Fan Control Lowers Cooling Energy Costs

    Looking to lower cooling costs? A new study highlights how granular control of fans on air handlers can significantly cut cooling energy. A joint study from Digital Realty Trust, Vigilent Corporation, and Lawrence Berkeley National Laboratory (LBNL) documented a 66 percent decrease in cooling energy usage at a California data center.

    The project focused on replacing constant speed scroll fans with variable speed electronically commutated motor (ECM) fans and deploying the Vigilent Intelligent Energy Management system to control fan speeds and computer room air handler (CRAH) output.
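
    Savings of that magnitude are consistent with the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of speed, so modest speed reductions buy outsized energy savings. A quick sketch of the cube law (the speed values are illustrative, not measurements from the study):

        # Fan affinity laws: airflow ~ speed, power ~ speed**3 (approximately).
        # Slowing a fan to a fraction of full speed cuts its power to roughly
        # that fraction cubed. The speed values below are illustrative only.
        for speed in (1.0, 0.9, 0.8, 0.7):
            power = speed ** 3
            print(f"speed {speed:.0%} -> power ~{power:.0%} ({1 - power:.0%} saved)")
        # speed 100% -> power ~100% (0% saved)
        # speed 90% -> power ~73% (27% saved)
        # speed 80% -> power ~51% (49% saved)
        # speed 70% -> power ~34% (66% saved)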

    “The goal of the study was to assess whether the fan speed control system was a viable solution for commercial data centers,” said Jim Smith, Chief Technology Officer at Digital Realty. “We found that upgrading fans and adding fan speed controls in our data centers allow us to cool them more effectively and efficiently. In addition, the facility’s electrical energy usage was reduced, as was the average and peak electric power demand, resulting in a more energy efficient and sustainable data center environment.”

    The facility used for the study was Digital Realty’s 135,000 square foot data center located in El Segundo. LBNL monitored the overall effort, creating the baseline and result metrics, and acted as project manager for the energy efficiency grant awarded by the California Energy Commission’s PIER Program.

    The results of this project, which were presented at the Silicon Valley Leadership Group’s Data Center Summit last fall, put some numbers on the energy savings potential of granular, automated control of fans on air handlers. The study makes a case for granular controls, and for the additional investment of installing variable speed drives in CRAH units. Since the project used software from Vigilent to automate these controls, it’s also a good advertisement for that company.

