Data Center Knowledge | News and analysis for the data center industry
Wednesday, January 6th, 2016
12:58a
Report: Verizon Kicks Off Auction Process for 48 Data Centers
Verizon Communications has kicked off the process to auction 48 data centers as it looks to sharpen focus on its core business, Reuters reported, citing anonymous sources familiar with the matter. The company hopes to make more than $2.5 billion from the sale of the assets, which include the data center portfolio it gained through its $1.4 billion acquisition of data center provider Terremark Worldwide in 2011.
An earlier report that the company was considering selling many of its data centers also relied on anonymous sources, and Verizon CFO Fran Shammo sought to dismiss it as speculation. The new report, however, says the company has already initiated the auction process and selected Citigroup as an advisor.
Verizon is one of several telecoms looking for alternatives to owning the massive data center portfolios they built out over the past several years, hoping to take advantage of a growing enterprise IT outsourcing market by providing colocation, managed services, and cloud infrastructure services.
It has proven to be a tough market for them to compete in, however. CenturyLink, which last year said publicly it was looking to get out of owning its massive data center portfolio, has struggled to reverse a trend of falling revenue from its data center services business. Verizon’s wireline business segment, which includes its data center services, has been reporting declining revenues as well.
Verizon’s 48-data center colocation portfolio generates about $275 million a year in earnings, sources told Reuters. Equinix, the world’s largest colocation provider, reported $1.1 billion in earnings for 2014.
AT&T is another major telco that’s been looking for ways to offload its data center portfolio. A report came out early last year that the company had $2 billion worth of data center assets up for sale. In December, the company announced that IBM had taken over at least part of that portfolio, acquiring AT&T’s managed hosting business, including equipment and access to data centers that hosted it.
Windstream, a smaller telecom, sold its data center business to data center provider TierPoint last October.

1:00p
Data Center Design: Which Standards to Follow?
This month, we focus on data center design. We’ll look into design best practices, examine in depth some of the most interesting recent design trends, and talk with leading data center design experts.
Below is the first part in a series on data center best practices by Steven Shapiro, an engineer with almost 25 years of experience in the mission critical industry.
The data center is a dedicated space where your firm houses its most important information and relies on it being safe and accessible. Best practices ensure that you are doing everything possible to keep it that way.
Best practices mean different things to different people and organizations. This series of articles will focus on the major best practices applicable across all types of data centers, including enterprise, colocation, and internet facilities. We will review codes, design standards, and operational standards. We will discuss best practices with respect to facility conceptual design, space planning, building construction, and physical security, as well as mechanical, electrical, plumbing, and fire protection. Facility operations, maintenance, and procedures will be the final topics for the series.
Following appropriate codes and standards would seem an obvious direction when designing a new data center or upgrading an existing one. Data center design and infrastructure standards range from national codes (required), like those of the NFPA, and local codes (required), like the New York State Energy Conservation Construction Code, to performance standards like the Uptime Institute’s Tier Standard (optional). Green certifications, such as LEED, Green Globes, and Energy Star, are also optional.

Codes must be followed when designing, building, and operating your data center, but “code” is, in most cases, the minimum performance requirement to ensure life safety and energy efficiency. A data center is probably going to be the most expensive facility your company ever builds or operates. Should it have only the minimum required by code? History makes it clear that code minimum is not best practice. Code-minimum fire suppression would mean wet-pipe sprinklers in your data center; that is definitely not best practice.
The Big Three
The three major data center design and infrastructure standards developed for the industry include:
Uptime Institute’s Tier Standard

This standard sets out a performance-based methodology for evaluating a data center during the design, construction, and commissioning phases to determine its resiliency with respect to four Tiers, or levels of redundancy and reliability. The four Tiers are defined in greater detail in UI’s white paper TUI3026E. The Uptime Institute’s origins as a data center users group established it as the first group to measure and compare data center reliability. It is a for-profit entity that will certify a facility to its standard, a point for which the standard is often criticized.

ANSI/BICSI 002-2014

Data Center Design and Implementation Best Practices: This standard covers the major aspects of planning, design, construction, and commissioning of the MEP building trades, as well as fire protection, IT, and maintenance. It is arranged as a guide for data center design, construction, and operation. Ratings/Reliability is defined by Class 0 to 4 and certified by BICSI-trained and certified professionals.
ANSI/TIA 942-A 2014

Telecommunication Infrastructure Standard for Data Centers: This standard is more oriented toward IT cabling and networking, and it incorporates various infrastructure redundancy and reliability concepts based on the Uptime Institute’s Tier Standard. In 2013, UI requested that TIA stop using the Tier system to describe reliability levels, and TIA switched to the word “Rated” in lieu of “Tiers,” defined as Rated 1-4. TIA uses tables within the standard to easily identify the ratings for telecommunications, architectural, electrical, and mechanical systems.

TIA has a certification system in place with dedicated vendors that can be retained to provide facility certification.
EN 50600: an International Standard
An international series of data center standards in continuous development is the EN 50600 series. Many aspects of this standard reflect the UI, TIA, and BICSI standards. Facility ratings are based on Availability Classes, from 1 to 4. The standard breaks down as follows:
- EN 50600-1 General concepts
- EN 50600-2-1 Building construction
- EN 50600-2-2 Power distribution
- EN 50600-2-3 Environmental control
- EN 50600-2-4 Telecommunications cabling infrastructure
- EN 50600-2-5 Security systems
- EN 50600-2-6 Management and operational information systems
Regulatory Standards
Government regulations for data centers will depend on the nature of the business and can include HIPAA (Health Insurance Portability and Accountability Act), SOX (Sarbanes-Oxley Act of 2002), SAS 70 Type I or II, and GLBA (Gramm-Leach-Bliley Act), as well as new regulations that may be implemented depending on the nature of your business and the present security situation.
Operational Standards
There are also many operational standards to choose from. These are standards that guide your day-to-day processes and procedures once the data center is built:
- Uptime Institute: Operational Sustainability (with and without Tier certification)
- ISO 9000 – Quality System
- ISO 14000 – Environmental Management System
- ISO 27001 – Information Security
- PCI – Payment Card Industry Security Standard
- SOC, SAS70 & ISAE 3402 or SSAE16, FFIEC (USA) – Assurance Controls
- AMS-IX – Amsterdam Internet Exchange – Data Centre Business Continuity Standard
- EN 50600-2-6 Management and Operational Information
These standards will also vary based on the nature of the business and include guidelines associated with detailed operations and maintenance procedures for all of the equipment in the data center.
Consistency and Documentation are Key
The nature of your business will determine which standards are appropriate for your facility. If you have multiple facilities across the US, then the US standards may apply. For those with international facilities or a mix of both, an international standard may be more appropriate. The key is to choose a standard and follow it. If deviations are necessary because of site limitations, financial limitations, or availability limitations, they should be documented and accepted by all stakeholders of the facility.
Regardless of the standard followed, documentation and record keeping of your operation and maintenance activities is one of the most important parts of the process. Software management tools such as DCIM (Data Center Infrastructure Management), CMMS (Computerized Maintenance Management System), EPMS (Electrical Power Monitoring System), and DMS (Document Management System) for operations and maintenance can provide a “single pane of glass” to view all required procedures, infrastructure assets, maintenance activities, and operational issues.
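As a rough illustration of that “single pane of glass” idea, the sketch below merges hypothetical exports from a DCIM, a CMMS, and an EPMS into one combined view per asset. Every field name, sample record, and the single_pane helper are invented for illustration, not the schema or API of any real product.

```python
# A minimal sketch of the "single pane of glass" idea: merging separate
# DCIM, CMMS, and EPMS exports into one combined view keyed by asset ID.
# All field names and sample records here are hypothetical illustrations.
from collections import defaultdict

dcim_assets = [        # hypothetical DCIM export: what the asset is and where it sits
    {"asset_id": "UPS-01", "location": "Room A / Row 3", "type": "UPS"},
]
cmms_work_orders = [   # hypothetical CMMS export: maintenance history
    {"asset_id": "UPS-01", "last_pm": "2015-11-02", "open_issues": 1},
]
epms_readings = [      # hypothetical EPMS export: electrical monitoring data
    {"asset_id": "UPS-01", "load_kw": 182.4, "alarm": False},
]

def single_pane(*sources):
    """Fold every record for the same asset into one combined dict."""
    view = defaultdict(dict)
    for source in sources:
        for record in source:
            view[record["asset_id"]].update(record)
    return dict(view)

for asset_id, details in single_pane(dcim_assets, cmms_work_orders, epms_readings).items():
    print(asset_id, details)
```

In practice these systems are tied together through vendor APIs rather than flat exports, but the consolidation principle is the same.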
Your facility must meet the business mission. Data center design, construction, and operational standards should be chosen based on definition of that mission. Not all facilities supporting your specific industry will meet your defined mission, so your facility may not look or operate like another, even in the same industry.
About the author: Steven Shapiro has been in the mission critical industry since 1988 and has a diverse background in the study, reporting, design, commissioning, development, and management of reliable electrical distribution, emergency power, lighting, and fire protection systems for high tech environments. His experience also includes providing analysis of critical application support facilities. Mr. Shapiro has extensive experience in the design and management of corporate and mission critical facilities projects, with over 4 million square feet of raised floor experience, over 175 MW of UPS experience, and over 350 MW of generator experience. Mr. Shapiro is the author of numerous technical articles and is also a speaker at many technical industry seminars.

4:00p
Small Data Center Markets Bracing for Big Change
Gillis S. Cashman is a Managing Partner at M/C Partners.
The architecture of data centers and network infrastructure is undergoing a major transformation driven by mobility and accelerated by the Internet of Things. At a macro level, rather than seeing the need for 50 servers in one data center in the middle of nowhere, we are seeking out servers in 50 data centers very close to the edge.
Advancements in technology and platforms, as well as in broadband infrastructure, are also contributing to this transition. With more broadband networks being deployed and computing platforms advancing, price points for outsourcing are decreasing. The fact that outsourcing eliminates the need to staff multiple environments makes it an even more attractive option.
The requirements in the smaller markets are similar to those in Tier-1 markets. For a third-party data center provider, it’s a very capital-intensive business. There has been so much demand, focus, and investment in Tier-1 markets that Tier-2 and smaller markets have been largely ignored. However, you’re going to start seeing a shift of focus into these smaller markets.
We are seeing it from the content side; we are seeing companies like Akamai and cable companies, even Netflix, wanting to have their servers really close to the edge so latency is reduced. They’re also looking to establish multiple points of presence within a given market, so that redundancy is more in the network than it is in an actual facility. On the content side, it’s actually very similar to how the cable architecture developed over time.
You initially had node sizes of 5,000-7,000 homes per node. When providers started launching advanced, latency-sensitive services like voice and data, they had to shrink their nodes to as few as 100 homes per node. We’re starting to see the same thing on the content side. Instead of one massive data center, we now have hundreds of servers across the country in multiple data centers. Netflix is a good example of this. If everyone were trying to watch “Game of Thrones” and Netflix’s servers were all in one data center, the network would suffer. If you have hundreds of servers at the edge of the network, fewer people are “pinging” each server, which increases redundancy and minimizes latency.
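A back-of-the-envelope sketch of that argument, with made-up numbers: spreading the same audience across more edge sites cuts the load each server sees, and serving from a metro-local site instead of a distant central facility shortens the round trip (using the rough rule of thumb of about 5 microseconds per kilometer of fiber).

```python
# Illustrative arithmetic only: viewer counts, site counts, and distances
# below are invented, not measurements from any real CDN.
viewers = 1_000_000

for sites in (1, 10, 100):
    per_site = viewers / sites
    print(f"{sites:>3} sites -> ~{per_site:,.0f} concurrent viewers per site")

# Latency side: a cross-country hop vs. a metro-local hop, assuming
# roughly 5 microseconds per km in fiber, doubled for the round trip.
for label, km in (("one central data center", 3000), ("nearby edge site", 50)):
    rtt_ms = km * 2 * 5 / 1000
    print(f"{label}: ~{rtt_ms:.1f} ms round trip")
```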
Will data center providers offer solutions tailored to specific markets? I think it’s becoming increasingly important to understand the unique business requirements and compliance requirements of a given vertical. What are the regulatory requirements, what are the security requirements of a specific application? Then, you can design solutions that solve those needs. As more organizations outsource their infrastructure, it increases the need for third parties to really understand all aspects of the environment, applications, performance requirements, compliance requirements, etc.
There are many opportunities for investment in this space, but it depends on the market. In the smaller markets, there is a lack of infrastructure, so there is definitely a need, and the supply/demand dynamic is very favorable. However, certain Tier-1 markets are fairly saturated.
We are excited about what the future holds in this space. There’s been a lot of talk about hybrid cloud, and the organizations that figure out how to best utilize it will come out winners: the companies that can tell a customer, “I can support your servers and applications in my data center or yours, and if you want to host certain applications in Amazon’s cloud, I will manage the environment and provide you with the right monitoring and application performance reports.”
Having the ability to manage environments across multiple platforms and multiple cloud providers is a pretty interesting dynamic. It’s still early in the game, but the companies that can demonstrate and execute on those promises are well-positioned moving forward.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:51p
Amazon Lowers Cloud Prices (Again), Especially for Linux
Via The Var Guy
Amazon EC2 cloud hosting has become cheaper than ever following a price reduction for certain services and regions, especially if you use Amazon’s Linux images.
The pricing changes, which Amazon announced Tuesday, include 5-percent reductions in the cost of Linux cloud servers on the following configurations:
- C4 and M4 instances in select regions in the US, Europe and Asia-Pacific.
- R3 instances in the same regions, plus Brazil.
- R3 instances in Amazon’s GovCloud US service, which is designed for hosting sensitive government data.
Amazon has also cut prices for hosting based on other operating systems, including Windows, SUSE Linux Enterprise Server (SLES), and Red Hat Enterprise Linux (RHEL). The reductions in those cases vary but are smaller than the 5 percent cut for generic Linux hosting.
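For a sense of the arithmetic, here is a minimal sketch of what a 5 percent cut means on an hourly on-demand rate. The instance names match the families mentioned above, but the base rates are hypothetical placeholders, not Amazon’s actual price list.

```python
# Worked example of a 5 percent price cut on hourly on-demand rates.
# The base rates below are hypothetical, chosen only to show the math.
hypothetical_rates = {"c4.large": 0.110, "m4.large": 0.126, "r3.large": 0.175}
CUT = 0.05  # the 5 percent reduction for Linux instances

for instance, old_rate in hypothetical_rates.items():
    new_rate = old_rate * (1 - CUT)
    monthly_savings = (old_rate - new_rate) * 24 * 30  # one instance, 30 days
    print(f"{instance}: ${old_rate:.3f}/hr -> ${new_rate:.4f}/hr "
          f"(~${monthly_savings:.2f} saved per instance-month)")
```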
This will be welcome news for many AWS users, but it’s especially beneficial for organizations that run Amazon’s EC2-optimized Linux. The changes are one more small factor driving the appeal of open source cloud hosting (using generic Linux rather than the commercial versions of SLES or RHEL).
This first ran at http://thevarguy.com/open-source-application-software-companies/amazon-makes-ec2-cloud-hosting-cheaper-especially-linux

6:43p
Seven Biggest Cloud Outages of 2015
Via Talkin’ Cloud
Which cloud outages topped headlines last year? Here’s a closer look at seven of the biggest cloud outages from 2015:
1. Apple iCloud
 The Apple logo hangs in front of an Apple store in New York City. (Photo by Spencer Platt/Getty Images)
Apple reported a widespread iCloud outage in June that affected users in the US, Canada, and other countries; a separate iCloud outage had occurred earlier, on May 20.
2. Google Cloud
Urs Holzle, Senior Vice President for Technical Infrastructure at Google, speaks on the Google Cloud Platform during the Google I/O Developers Conference at Moscone Center on June 25, 2014 in San Francisco, California. (Photo by Stephen Lam/Getty Images)
Last month’s Google Cloud outage lasted nearly 22 hours, according to Computer Business Review. The outage occurred because “a minor update to the Compute Engine API inadvertently changed the case-sensitivity of the ‘sessionAffinity’ enum variable in the target pool definition, and this variation was not covered by testing,” Google wrote in an incident report.
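To make that failure mode concrete, here is a minimal sketch, using invented enum values and a simplified pool definition rather than the real Compute Engine API, of how a change in case-sensitivity can cause a service to reject configurations it previously accepted.

```python
# Hypothetical illustration of a case-sensitivity regression: a service that
# starts comparing an enum value case-sensitively no longer recognizes
# configurations stored in the previously accepted case.
VALID_AFFINITY = {"NONE", "CLIENT_IP", "CLIENT_IP_PROTO"}

existing_pool = {"name": "web-pool", "sessionAffinity": "client_ip"}  # stored in lowercase

def validate_lenient(pool):
    # Old behavior (hypothetical): comparison tolerant of case.
    return pool["sessionAffinity"].upper() in VALID_AFFINITY

def validate_strict(pool):
    # After the update (hypothetical): exact, case-sensitive match required.
    return pool["sessionAffinity"] in VALID_AFFINITY

print(validate_lenient(existing_pool))  # True  -> the pool used to validate
print(validate_strict(existing_pool))   # False -> the same pool is now rejected
```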
3. Amazon Web Services
 Werner Vogels, CTO, Amazon, speaking at AWS re:Invent 2015 in Las Vegas (Photo: AWS)
Amazon Web Services suffered an outage on Sept. 20. InformationWeek reported the outage affected Netflix, Buffer and other web companies.
4. Google Compute Engine
 Chillers and cooling towers of the Google data center campus in St. Ghislain, Belgium (Photo: Google)
Google Compute Engine was unavailable for a short period of time on Feb. 18 and Feb. 19. The outage was caused by network issues and affected users in several zones. Google Compute Engine also suffered an outage in March.
5. Microsoft Azure
 An aerial view of Microsoft’s Dublin data center (Image: Microsoft Corp.)
Microsoft Azure last month experienced an outage that prevented many of its European customers from logging into their email accounts or accessing Azure-hosted websites, according to Computing.
6. Verizon Cloud
 Inside a Verizon data center. (Photo: Verizon)
Verizon temporarily shut down its Infrastructure-as-a-Service cloud last January. The outage lasted approximately 40 hours.
7. Apple iTunes
 The Apple data center in Maiden, North Carolina. (Photo: Apple)
Apple iTunes was unavailable several times in October. In fact, AppleInsider reported iTunes suffered four outages in one week.
This first ran at http://talkincloud.com/cloud-computing/7-biggest-cloud-outages-2015#slide-0-field_images-51431

8:10p
GI Buys ViaWest Data Center in Dallas Market
GI Partners, a San Francisco-based investment firm with a focus on data centers, has acquired a data center and office property in the Dallas market, adjacent to a University of Texas at Dallas campus. The data center portion is occupied by ViaWest, the data center provider GI sold to Canada’s Shaw Communications in 2014 for $1.2 billion.
The 300,000-square-foot Synergy Park facility in Richardson, one of the major data center hotspots in the Dallas-Fort Worth area, has about 50,000 square feet of data center space.
The office space is leased to university affiliates, GI said, noting that a “data center solutions provider” was leasing the data center portion of the facility but stopped short of naming the provider. ViaWest, however, advertises a Synergy Park data center in Richardson on its website.
Terms of the transaction were not disclosed.
GI made the acquisition through DataCore, a $500 million real estate fund it manages on behalf of the California State Teachers’ Retirement System, or CalSTRS.
GI principal Mike Armstrong said the facility was attractive both because of the data center infrastructure inside and because of its adjacency to the university. “The Dallas MSA remains one of the most dynamic data center markets in the country, bolstered by a strong, growing economy,” he said in a statement.
The year started off with a lot of data center sale activity. Verizon has reportedly kicked off the process to auction some $2.5 billion worth of data center assets; DuPont Fabros Technology announced plans to sell its NJ1 data center in New Jersey and get out of the New Jersey market altogether; and Mayo Clinic has sold a data center in Rochester, Minnesota, to Epic Systems.