Data Center Knowledge | News and analysis for the data center industry
Thursday, December 18th, 2014
1:00p | Photo Tour: Iliad’s DC3 Data Center on Paris Outskirts
Paris is a good place to run a data center business. It is a major European business center, but unlike in other European business centers, electricity there is relatively cheap, because France’s predominantly nuclear generation has ensured an abundance of energy.
The city’s extensive underground sewer system, famous for its role in the French Revolution and World War II and described by Victor Hugo in Les Misérables, has also been used to build out fiber network infrastructure, so connectivity in the city is abundant as well.
One of the biggest data center providers in Paris is Iliad SA, better known for being one of the country’s largest telcos. But it also has a sizable hosting and colocation business, with two large data centers in and around Paris and a nascent bare-metal cloud service, offering on-demand ARM-powered servers.
Iliad’s data center business is run by Arnaud de Bermingham, CEO of the company’s hosting division, Online.net. De Bermingham, who came up with the standard data center design the company now uses across its footprint, gave Data Center Knowledge a tour of one of the facilities in November.
 Arnaud de Bermingham, CEO of Online.net, in front of an Eaton UPS unit at Iliad’s DC3
Huge Fire-Containing Data Center Modules
Iliad’s 25-megawatt DC3 data center, located in Vitry-sur-Seine, a Paris suburb just south of the city, was built about two years ago. It is one of only two data centers in France, and the only hosting data center among them, to have received Uptime Institute’s Tier III certification for design documents. No data center in France has received Tier certification for a constructed facility.
The hosting data center has several unique features, one of which is the use of massive metal rooms as a way to build out capacity.
The big metal boxes are manufactured at a factory and shipped to the building for assembly. Each box has its own electrical and mechanical infrastructure. One of the benefits of this approach is fire containment. If there is a fire, the metal walls will isolate it for up to two hours, de Bermingham said.
Each room has capacity for about 350 kW and 132 IT racks and takes six weeks to fit out. It has four IT power loops and two cooling power loops. None of the IT loops can be loaded to more than 75 percent of its capacity, so if one goes down, the remaining three can pick up the load. This four-and-two setup is what gives the data center concurrent maintainability, the all-important standard required for Tier III certification.
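A quick back-of-the-envelope check makes the 75 percent rule concrete: with four loops each capped at 75 percent, the worst-case total load equals exactly the capacity of three loops, so any single loop can fail without overloading the rest. The short sketch below is purely illustrative; only the loop count and the 75 percent cap come from the article, and the function name is hypothetical.

```python
# Illustrative sanity check of the "four IT loops, 75 percent cap" rule.
# Loads are expressed in units of one loop's capacity.

def survives_single_loop_failure(num_loops: int, max_load_fraction: float) -> bool:
    """True if the surviving loops can absorb the worst-case load after one loop fails."""
    worst_case_load = num_loops * max_load_fraction   # e.g. 4 * 0.75 = 3.0 loop-capacities
    surviving_capacity = num_loops - 1                # e.g. 3 loops remain at full capacity
    return worst_case_load <= surviving_capacity

print(survives_single_loop_failure(4, 0.75))  # True: 3.0 <= 3
print(survives_single_loop_failure(4, 0.80))  # False: 3.2 > 3, the cap would be too loose
```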
Another unusual feature is the concrete-encased electrical busway, an additional fire-protection measure.
 Encasing electrical busway in concrete is a design feature de Bermingham came up with to enhance fire protection
 The data center’s cooling capacity is adjusted automatically based on the amount of power consumed (power meters pictured)
 UPS and transfer switches serving one of the big data halls at Iliad’s DC3
 The data center relies primarily on mechanical cooling but does use free cooling about 30 percent of the year, according to de Bermingham
 Cold-aisle isolation: a fairly standard feature in modern data centers
 While Iliad is a connectivity services provider itself, it does not provide bandwidth to its data center customers, keeping its colocation and hosting business carrier-neutral
Room to Grow
There is enough space and power to add two more big rooms at the site. The company also plans to build out DC4, an undeveloped building it owns in Paris proper. That building has a nuclear bunker.
The Online.net cloud was launched only recently and currently lives in DC2, an older facility in Vitry-sur-Seine. If demand for its cloud services grows, the company will expand the infrastructure to other locations, including the U.S.
While there is a building called DC1 in Iliad’s portfolio – the facility used to belong to Exodus Communications, the colocation company that went out of business following the dot-com bust of the early 2000s – the building currently sits unused.

4:30p | The Continued Threat of DDoS Attacks: Four Ways to Address the Concern
Bill Barry is executive vice president of Nexusguard, a technology innovator providing highly customized Internet security solutions for global customers of all sizes across a range of industries.
Many data centers that rely on websites to serve customers and communicate with partners are on edge lately, alarmed by media reports of high-profile hacking incidents. The technology press tends to focus on cyber attacks that involve the exploitation of operating system vulnerabilities. But another type of threat is quietly growing under the radar: the Distributed Denial of Service (DDoS) attack. DDoS attacks were up 75 percent in 2013, according to an NBC News report.
DDoS is becoming the preferred method of attack for hackers, hacktivists and rogue governments due to its simplicity, ease of distribution and potential for major disruption, especially if the target is a financial institution or real-time service provider. Unlike attacks that rely on an operating system security vulnerability, DDoS attacks are relatively low tech and easy to stage: cybercriminals simply bombard the targeted site with fake traffic until it shuts down, creating havoc for the business.
Although most data centers have defenses in place for viruses and malware and deploy the latest operating system patches, many overlook true DDoS protection, and the results can be devastating. The best way to counter DDoS threats is to partner with an experienced DDoS security professional, but for data centers that choose to handle DDoS in-house, here are four ways to address DDoS attacks:
- Evaluate protection options – cloud vs. appliance. When a data center site comes under attack, every second counts, so it pays to be prepared ahead of time and know which type of protection option is the best fit – a cloud-based or appliance-based solution. Both options involve implementation lead times, but cloud-based solutions are typically faster to deploy. Prior to an attack, data center security professionals should analyze deployment times and make a decision about outage tolerance levels.
- Determine who is responsible for protecting against attacks and addressing incidents. For data center operators, it’s also crucial to know who is responsible for safeguarding the system from DDoS attacks and define who will address incidents. The efficiencies businesses enjoy while sharing an infrastructure are significant, but there’s also an associated risk. It should be clear upfront who is responsible for providing DDoS protection and addressing DDoS attacks. Operators can’t force every client to have individual protection, but they bear ultimate responsibility for the damage other clients suffer if a high-risk “neighbor” comes under a DDoS attack that brings the whole data center down.
- Deploy backup IPs. DDoS attacks typically unfold when a master program deploys “zombies” or “bots” – compromised systems that are instructed to flood the site with phony traffic. It’s critical at that point for the data center security team to implement a backup set of unpublished IPs that are in a different subnet than the data center’s normal IP range. This will enable the DDoS protection service to reroute legitimate customer traffic to the site while funneling zombie and bot traffic to the protection service’s proxies via a DNS change (see the sketch after this list).
- Implement a damage control plan. While technical issues are typically the primary focus for data centers undergoing a DDoS attack, it’s also important to have a script in place to address customer, vendor and business partner concerns about the outage, including the possibility that it will affect data center service level agreements (SLAs). It’s a good idea to prepare talking points in advance to explain the reasons for the outage and underscore the company’s commitment to provide reliable access to minimize harm to the brand.
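The “backup IPs” step above boils down to a DNS cutover: when an attack starts, the public hostname is repointed at the scrubbing provider’s proxies, and clean traffic is then returned to the unpublished backup addresses. The sketch below is one hypothetical way to script that cutover; it assumes an authoritative DNS server that accepts RFC 2136 dynamic updates, the dnspython library, and placeholder names, addresses, and TSIG key.

```python
# Hypothetical DNS cutover for DDoS mitigation: repoint the public hostname at a
# scrubbing proxy so the protection service can filter traffic before forwarding
# clean requests to the unpublished backup IPs. All names, addresses, and the
# TSIG secret are placeholders; requires the dnspython package and an
# authoritative server that accepts RFC 2136 dynamic updates.
import dns.query
import dns.tsigkeyring
import dns.update

ZONE = "example.com"
DNS_SERVER = "198.51.100.1"          # authoritative name server (placeholder)
SCRUBBING_PROXY_IP = "203.0.113.10"  # protection service's proxy (placeholder)
TTL_SECONDS = 60                     # short TTL so the cutover takes effect quickly

keyring = dns.tsigkeyring.from_text({"ddos-failover-key.": "c2VjcmV0LXBsYWNlaG9sZGVy"})

def reroute_through_scrubbing(hostname: str = "www") -> None:
    """Replace the public A record so inbound traffic reaches the scrubbing proxy first."""
    update = dns.update.Update(ZONE, keyring=keyring)
    update.replace(hostname, TTL_SECONDS, "A", SCRUBBING_PROXY_IP)
    dns.query.tcp(update, DNS_SERVER, timeout=10)

if __name__ == "__main__":
    reroute_through_scrubbing()
```

Keeping the record’s TTL low ahead of time is what makes such a cutover workable; with a long TTL, cached records would keep sending traffic to the attacked address for hours after the change.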
While many companies are focusing on patching security holes to thwart hackers who are looking for operating system vulnerabilities, too many data centers remain at risk for DDoS attacks, which can result in millions in lost revenue while significantly undermining brand value. For data centers that choose to handle the growing threat of DDoS attacks in-house, following these steps can help the company recover more quickly and contain damage to the brand.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:30p | Data Center Jobs: RagingWire Data Centers
At the Data Center Jobs Board, we have a new job listing from RagingWire Data Centers, which is seeking a Manager, Critical Facilities in Sacramento, California.
The Manager, Critical Facilities is responsible for ensuring the CFOps team works effectively toward its goals, including budget management, staff planning, NPS feedback, and operational efficiency; for interfacing directly with the construction management team, contractors, and consultants on all phased data center construction and commissioning, including integration and testing of new systems while keeping critical systems online; and for working with the Senior Manager, Critical Facilities Operations to develop and track annual budgets. To view full details and apply, see the job listing.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

6:16p | Akamai Launches Japan Data Center to Combat DDoS Attacks
The growing ferocity of Distributed Denial of Service attacks is prompting content delivery network provider Akamai Technologies to expand its global network. The company has opened a data center in Japan to improve its DDoS prevention capabilities. The facility was launched in early October and is now fully operational.
The facility is in a region where Akamai has seen an increase in DDoS attacks; part of the company’s strategy is to locate regional “DDoS scrubbing” centers close to where attacks originate. Two additional regional data centers will come online in 2015, one in Japan and another in the EMEA region.
A DDoS attack makes a machine or network unavailable by flooding the victim with requests so that legitimate users can’t access services. The “distributed” part means the bogus traffic comes from many machines, typically compromised “bots,” at once. Akamai notes that there has been a big increase in DDoS attacks recently.
Beyond helping distribute content at high performance to far-flung end users, there is a growing security play for the CDN provider. Akamai uses its network to identify and stop unwanted traffic such as DDoS attacks. At the new data center, malicious traffic is “scrubbed” before the remaining clean traffic is routed back to the network.
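Akamai does not disclose how its scrubbing works, but the general idea can be illustrated with a toy per-source rate limiter: traffic from sources that exceed a threshold inside a sliding window is dropped, and the rest is forwarded. The thresholds, window length, and names below are arbitrary assumptions, not Akamai’s implementation.

```python
# Toy per-source rate limiter illustrating the general idea of "scrubbing":
# drop traffic from sources that exceed a threshold, forward the rest.
# Thresholds and the window length are arbitrary; this is not Akamai's system.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100   # assumed ceiling for "clean" behavior

_recent_requests = defaultdict(deque)   # source IP -> timestamps of recent requests

def should_forward(source_ip: str, now: Optional[float] = None) -> bool:
    """Return True to forward the request downstream, False to drop it as attack traffic."""
    now = time.monotonic() if now is None else now
    window = _recent_requests[source_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) <= MAX_REQUESTS_PER_WINDOW
```

Real scrubbing centers combine many such signals (protocol anomalies, reputation lists, challenge pages) rather than relying on a single counter.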
The U.S. is still the biggest source of these attacks, accounting for nearly a quarter of them. However, attacks originating from Asia Pacific have increased over the last 18 months, according to Akamai’s State of Security and quarterly State of the Internet reports. These numbers are surging due to DDoS-related malware.
Akamai needs a lot of distributed capacity in order to effectively provide its services. The company usually chooses the most connected colocation data centers in a given market. As of last September, it had 150,000 servers embedded inside networks in over 90 countries.
The company views its breadth of infrastructure as a differentiator in the CDN space and security as a growth area, according to Greg Lord, who oversees enterprise product marketing at Akamai.
The third quarter was record setting for DDoS attacks. Average peak bandwidth increased 80 percent, compared to the previous quarter, and 389 percent from the same period a year ago. There was a 321 Gigabits per second (Gbps) attack and 16 other attacks that each peaked above 100 Gbps.
There have been several high-profile DDoS attacks recently. One victim in Japan was Sony: in addition to the company’s recent headline-grabbing hack, its PlayStation Network suffered a DDoS attack that disrupted services. Hosting provider 1&1 was hit with a DDoS attack earlier this month, which took its service down for 12 hours.
Emerging markets follow the U.S. in terms of attack origination. China accounts for 20 percent and Brazil accounts for 18 percent. Japan is seventh with 4 percent.
“This new state-of-the-art data center improves network performance for our clients in Japan while significantly increasing network capacity in the Asia Pacific region,” said John Summers, vice president, Security Business Unit, Akamai. “Locating another scrubbing center in Japan also enables more clients to access Akamai’s global DDoS mitigation network.”
Akamai is the leader in the CDN space but has seen increasing competition and pricing pressure. As traffic grows, the role CDNs play in enabling a good end-user experience and secure delivery grows too.
Other commercial CDN providers include Limelight Networks, Amazon Web Services, Microsoft Azure, Level 3, Internap, and Rackspace, among many others. Recent entrants to the larger CDN space include Fastly, which focuses on dynamic content, and Instart Logic, which raised $26 million for its “CDN replacement” technology.
Another provider, Highwinds, recapitalized and raised several rounds of funding, and EdgeCast Networks was recently acquired by Verizon.
6:31p | Verizon Adds Direct Links to HP Helion, Salesforce Clouds
Verizon has added HP Helion and Salesforce to its platform that offers private network links to cloud services.
HP brings its managed Infrastructure-as-a-Service cloud, and Salesforce brings its suite of sales, service, marketing, collaboration, and new big data analytics applications, all delivered as cloud services.
Verizon now offers access to six cloud service providers; the other four are Amazon Web Services (added recently), Google, Microsoft, and Verizon’s own cloud. The aim is to give enterprise customers as wide a selection of providers as possible. Providers of network and data center services have been making their offerings more attractive by giving access to as many other service providers as possible.
Verizon handles maintenance and network connectivity, with dynamic bandwidth allocation to each cloud. It also provides security and encryption for traffic passing across the network, along with application performance, throughput, and quality-of-service options. Clients have the option to add Verizon Managed Security Services, including firewall, anti-virus, anti-spam, and image and content control.
One caveat: for security purposes, end users can access these cloud services from mobile devices only while on Verizon’s 4G LTE network, or from laptops connected to Verizon’s global IP network. The service is targeted at large enterprises that want to use cloud services securely.
Verizon also recently launched a Cloud Marketplace, tuned more for small and mid-size businesses and companies transitioning applications to the cloud. It is available in public cloud and virtual private cloud reserved-performance deployments, with no-cost and bring-your-own-license pricing models at launch and metered billing options coming in 2015.

6:54p | Teradata Buys Data Archiving Firm RainStor
Data warehouse solutions provider Teradata, which has been growing its big data analytics capabilities, has acquired data archiving specialist RainStor for an undisclosed sum. The company’s fourth acquisition of the year, the deal illustrates a continued focus on enterprise-grade Hadoop solutions.
RainStor’s archival system rides on top of Hadoop. Gartner recently named RainStor a visionary in its June 2014 Magic Quadrant for Structured Data Archiving and Application Retirement.
RainStor brings a wealth of big data partnerships, large global customers, and patents to Teradata. The two companies have partnered on technology implementations in the past.
RainStor assets, intellectual property, and most RainStor employees will be added to Teradata’s analytics business.
San Francisco-based RainStor has raised about $26 million since it was founded in 2004. The firm specializes in analytical archive and compliance archive solutions.
Data warehousing powerhouse Teradata has been embracing the complete big data ecosystem, paying special attention to the flourishing Hadoop scene. Like the RainStor deal, its previous acquisitions of Revelytix and Hadapt brought Hadoop talent and intellectual property. In the past few months, Teradata has also launched Teradata Cloud for Hadoop and gained a Hortonworks certification.
“The addition of RainStor underscores Teradata’s commitment to support customers as they transform their organizations into data-driven enterprises,” said Scott Gnau, president of Teradata Labs. “The new archival capability will help customers cost-effectively and efficiently address their data archiving requirements using Hadoop.”
8:30p | Recent Microsoft Azure Outage Came Down to Human Error: Report
This article originally appeared at The WHIR
Microsoft offered more details this week about the cause of the Microsoft Azure outage in November that caused downtime for thousands of sites, including Microsoft’s own msn.com and Windows Store.
The Microsoft Azure service interruption on Nov. 18 resulted in intermittent connectivity issues with the Azure Storage service in multiple regions.
In a lengthy, detailed post on the Microsoft Azure blog on Wednesday, Jason Zander, corporate vice president for Microsoft Azure, said the issue arose after Microsoft deployed a software change intended to improve Azure Storage performance by reducing the CPU footprint of the Azure Storage Table Front-Ends.
During deployment, there were two operational errors, Zander said.
“The standard flighting deployment policy of incrementally deploying changes across small slices was not followed,” he said. Secondly, “although validation in test and pre-production had been done against Azure Table storage Front-Ends, the configuration switch was incorrectly enabled for Azure Blob storage Front-Ends,” which exposed a bug that resulted in Blob storage Front-Ends entering an infinite loop.
Microsoft’s final Root Cause Analysis for the event determined that “there was a gap in the deployment tooling that relied on human decisions and protocol.”
“With the tooling updates the policy is now enforced by the deployment platform itself,” he said.
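The two errors Zander describes, skipping incremental flighting and enabling a switch for a service it was never validated against, are easy to picture with a toy example. The sketch below is not Microsoft’s deployment platform; it only illustrates how tooling, rather than human protocol, can enforce slice-by-slice rollout and scope a configuration flag to the validated service. All class, service, and slice names are hypothetical.

```python
# Toy illustration of tooling-enforced flighting: a config flag may only be
# enabled for the service it was validated against, and only one small slice
# of the fleet at a time. Not Microsoft's actual deployment platform.
from dataclasses import dataclass, field

@dataclass
class FlagRollout:
    flag_name: str
    validated_services: set      # services the change was actually tested against
    slices: list                 # ordered rollout slices, smallest first
    enabled_slices: list = field(default_factory=list)

    def enable_next_slice(self, service: str, health_check) -> str:
        # Guard 1: refuse to enable the flag for a service it was never validated against
        # (the Azure incident enabled a Table-validated switch on Blob storage Front-Ends).
        if service not in self.validated_services:
            raise PermissionError(f"{self.flag_name} is not validated for {service}")
        # Guard 2: enforce incremental flighting; the next slice is always the planned one.
        next_index = len(self.enabled_slices)
        if next_index >= len(self.slices):
            return "rollout already complete"
        # Guard 3: require the previously enabled slice to be healthy before widening.
        if self.enabled_slices and not health_check(self.enabled_slices[-1]):
            raise RuntimeError("previous slice unhealthy; halting rollout")
        current = self.slices[next_index]
        self.enabled_slices.append(current)
        return f"enabled {self.flag_name} for {service} slice '{current}'"

# Usage: table front-ends roll out slice by slice; blob front-ends are rejected.
rollout = FlagRollout(
    flag_name="table_frontend_cpu_optimization",
    validated_services={"table-frontend"},
    slices=["canary", "region-1", "region-2", "worldwide"],
)
print(rollout.enable_next_slice("table-frontend", health_check=lambda s: True))
# rollout.enable_next_slice("blob-frontend", health_check=lambda s: True)  # raises PermissionError
```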
Aside from the technical failure, Microsoft said it fell short in communicating with affected customers during the incident: it fell short in posting status information to the Service Health Dashboard, had insufficient channels of communication (tweets, blogs, forums), and admitted that the response from Microsoft support was slow.
“We sincerely apologize and recognize the significant impact this service interruption may have had on your applications and services,” Zander said. “We appreciate the trust our customers place in Microsoft Azure, and I want to personally thank everyone for the feedback which will help our business continually improve.”
Below is the full timeline of the Nov. 18 Microsoft Azure service interruption:
- 11/19 00:50 AM – Detected Multi-Region Storage Service Interruption Event
- 11/19 00:51 AM – 05:50 AM – Primary Multi-Region Storage Impact. Vast majority of customers would have experienced impact and recovery during this timeframe
- 11/19 05:51 AM – 11:00 AM – Storage impact isolated to a small subset of customers
- 11/19 10:50 AM – Storage impact completely resolved, identified continued impact to small subset of Virtual Machines resulting from Primary Storage Service Interruption Event
- 11/19 11:00 AM – Azure Engineering ran continued platform automation to detect and repair any remaining impacted Virtual Machines
- 11/21 11:00 AM – Automated recovery completed across the Azure environment. The Azure Team remained available to any customers with follow-up questions or requests for assistance
This article originally appeared at: http://www.thewhir.com/web-hosting-news/recent-microsoft-azure-outage-came-human-error-report

9:26p | Senate Bill With Data Center Energy Provisions Blocked
Federal energy efficiency legislation that included several provisions about data center efficiency has stalled in the Senate.
Senator Tom Coburn, an Oklahoma Republican, blocked the bipartisan Energy Efficiency Improvement Act of 2014, a bill the House passed in March. The bill calls for numerous energy efficiency improvement measures for buildings, water heaters, and government technology.
The bill calls for government collaboration on efficiency with data center industry experts, for the creation of a certification program for assessors of energy efficiency in federal data centers, and for the creation of another data center efficiency metric.
But the most consequential part of the bill’s data center section is a requirement that the Department of Energy update the government’s official estimate of the total amount of energy all data centers in the U.S. consume.
The most current estimate the government has is from 2007. Those figures have been used extensively in the public and private sectors for a variety of purposes, from creating and advocating for policies to environmental activism, company sustainability goals, and vendor marketing materials.
The 2007 report, created by the Environmental Protection Agency, estimated that U.S. data centers had consumed about 61 billion kWh in 2006, or 1.5 percent of all electricity consumed in the country.
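Those two figures can be cross-checked with simple arithmetic: 61 billion kWh spread over a year works out to an average draw of roughly 7 gigawatts, and at 1.5 percent of national consumption it implies total U.S. electricity use of roughly 4,000 billion kWh in 2006. The snippet below merely restates that arithmetic; the only inputs are the report’s own numbers.

```python
# Back-of-the-envelope check of the EPA report's 2006 figures.
DATA_CENTER_KWH = 61e9        # 61 billion kWh consumed by U.S. data centers in 2006
SHARE_OF_US_TOTAL = 0.015     # 1.5 percent of all U.S. electricity consumption
HOURS_PER_YEAR = 8760

average_draw_gw = DATA_CENTER_KWH / HOURS_PER_YEAR / 1e6   # kW -> GW
implied_us_total_kwh = DATA_CENTER_KWH / SHARE_OF_US_TOTAL

print(f"Average data center draw: {average_draw_gw:.1f} GW")                          # ~7.0 GW
print(f"Implied total U.S. consumption: {implied_us_total_kwh/1e9:.0f} billion kWh")  # ~4,067
```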
The report said the federal government’s data centers were responsible for 10 percent of the total. But, as the Federal Data Center Consolidation Initiative that kicked off in 2010 showed, far from all government agencies could actually provide a reliable estimate of their data centers’ energy consumption.
The 2007 report was also important because it contained an official acknowledgement that half of the energy most data centers used was consumed by power and cooling infrastructure and not IT equipment.
The EPA report forecast that national data center energy consumption could double by 2011. Data centers’ 2006 peak load on the power grid was estimated at about 7 gigawatts and was expected to reach 12 gigawatts by 2011.
Needless to say, the 2007 forecasts need to be checked, and the government’s official data center energy consumption figures need to be updated, which is what the bill calls for. It also calls for an evaluation of the impact that cloud computing, mobile devices, social networks, and big data technologies, all of which have exploded over the past several years, have had on data center energy demand.
Coburn is retiring at the end of the current Congress session, which, according to The Hill, means Congress may revive the energy efficiency bill in the next session.