Data Center Knowledge | News and analysis for the data center industry
Thursday, October 23rd, 2014
| Time | Event |
| 5:47p |
AWS Adds Second European Cloud Region in Germany
Amazon Web Services has launched a cloud data center in Frankfurt, Germany, its second availability region in Europe. It joins the existing region in Ireland and expands AWS’ ability to deliver cloud services to Europeans. The region became available a day earlier than anticipated and was announced on the AWS blog.
The new region (the 11th for AWS) means Europeans can add resilience to their cloud setups with redundant regions on a single continent and get lower-latency service overall. Frankfurt is one of the most connected cities in Europe, home to DE-CIX, one of the largest Internet exchanges in the world.
Germany is a top data center market worldwide, and AWS Germany will extend the cloud’s appeal further eastward. There has been high demand for a dedicated German AWS data center, in large part due to data sovereignty concerns in the country. “In Germany data protection is of great importance. Therefore, our customers want the ability to store personal data securely within the country,” an AWS statement read.
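For developers, keeping data in Germany mostly comes down to pinning resources to the new region. As a minimal sketch (not taken from the AWS announcement), creating an S3 bucket constrained to eu-central-1 with the boto3 SDK might look like the following; the bucket name is a made-up placeholder:

```python
import boto3  # AWS SDK for Python; the region name comes from the announcement

# Minimal sketch: keep stored objects inside the new Frankfurt region by
# creating the bucket with an explicit eu-central-1 location constraint.
# The bucket name below is a placeholder, not a real bucket.
s3 = boto3.client("s3", region_name="eu-central-1")
s3.create_bucket(
    Bucket="example-de-customer-data",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```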
For local German businesses, the new cloud data center addresses potential data governance issues by letting them keep their data in an in-country AWS cloud. Germany places particular emphasis on keeping data within the country’s borders, and data sovereignty laws have thus far prevented many companies from running certain workloads on AWS.
In-country data needs have recently driven other cloud providers, such as VMware, to launch a cloud data center in Germany. Deutsche Telekom subsidiary T-Systems is another local German cloud provider that has emphasized data sovereignty as a central rallying cry for its cloud. There have been reports that Microsoft was also planning to open a German region for its Azure cloud.
Amazon’s plans to establish a German region were reported in July, when Bitplaces’ Nils Jünemann discovered via a traceroute to ec2.eu-central-1.amazonaws.com that traffic was going to Frankfurt am Main. AWS is extremely secretive about everything surrounding its data centers, including their precise locations.
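The check is easy to reproduce. A rough Python equivalent of the first step (resolving the regional endpoint before tracing the route to it) is sketched below; only the hostname comes from the article, and interpreting the resulting addresses, for example with traceroute or whois, is left to the reader:

```python
import socket

# Resolve the regional EC2 endpoint mentioned in the article; the returned
# addresses can then be fed to traceroute/whois to see where traffic lands.
host = "ec2.eu-central-1.amazonaws.com"
addresses = {entry[4][0] for entry in socket.getaddrinfo(host, 443, socket.AF_INET)}
for ip in sorted(addresses):
    print(ip)
```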
The rest of Amazon’s European data center footprint consists of a cluster of facilities in Ireland and “edge locations” in Amsterdam, Marseille, Milan, Frankfurt, Paris, London, Stockholm, Madrid, and Warsaw.
Third carbon-neutral AWS region
The new Frankfurt region is also carbon-neutral, joining Amazon’s two existing carbon-neutral regions: US West (Oregon) and the federally focused GovCloud. Greenpeace has long been pressuring the company about “cleaning up” its cloud energy mix.
| 6:00p |
Mitigating Risk of UPS System Failure
Your data center is an absolutely critical component of your business. In fact, modern organizations now build entire business plans around the capabilities of their IT departments.
The reality is, bad things happen and systems can fail. It could be a server, it could be an entire rack, or it could be even worse. When it comes to battery power and UPS platforms, how protected is your infrastructure?
This study from Active Power quantifies the likelihood of system failure during three different classes of failure:
- A long utility outage lasting more than 10 seconds
- A short utility outage lasting less than 10 seconds
- A demand failure
The study also evaluates the probability of failure of the CleanSource 750HD UPS, which pairs flywheel energy storage with a secondary energy source, compared to a double-conversion UPS with batteries.
Active Power retained MTechnology, Inc. (MTech) to perform the reliability analysis of its CleanSource 750HD UPS versus a double-conversion UPS with batteries. The study included two classes of utility failure:
- Long utility outages lasting longer than 10 seconds, where the AC source is transferred to the generator, requiring the automatic transfer switch (ATS) to operate and the generator to start and run.
- Short utility outages lasting less than 10 seconds, where the UPS energy storage is sufficient to support the load until utility service is restored and transfer to the generator is not considered. This scenario isolates the core reliability differences between the two UPS systems.
So what did they find? In long outages, the generator must start and run and the transfer switch must operate to successfully support the load. According to the study, the probability of system failure during a long outage is 21 percent lower with the CleanSource 750HD UPS than with a double-conversion UPS. The difference in UPS reliability is diluted by the failure probabilities of the generators, ATS, and main switchgear, all of which must operate to support the load.
For short outages of less than 10 seconds, the CleanSource 750HD is nearly five times more reliable than a double-conversion UPS, with a probability of failure 80 percent lower. EPRI reports that 96 percent of all sags and outages fall within this time period. This scenario is 100 times more frequent than long outages, meaning facilities are at significantly higher risk from this class of outage, which elevates the importance of the short-outage UPS failure rate.
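To make those ratios concrete, here is a hedged back-of-the-envelope calculation. Only the 21 percent, 80 percent, and 100-to-1 figures come from the study summary above; the absolute failure probabilities and outage counts below are invented for illustration, not values from the Active Power/MTech whitepaper:

```python
# Illustrative numbers only; the real study values are in the whitepaper.
p_dc_short = 0.005                    # assumed double-conversion failure probability per short outage
p_cs_short = p_dc_short * (1 - 0.80)  # CleanSource 750HD: 80 percent lower
print(p_dc_short / p_cs_short)        # -> ~5, i.e. "nearly five times more reliable"

# Weight by event frequency: short outages are ~100x more common than long ones,
# so they dominate the expected number of UPS-related failures.
long_per_year = 1                     # assumed
short_per_year = 100 * long_per_year
p_dc_long = 0.02                      # assumed
p_cs_long = p_dc_long * (1 - 0.21)    # 21 percent lower during long outages
expected_dc = long_per_year * p_dc_long + short_per_year * p_dc_short
expected_cs = long_per_year * p_cs_long + short_per_year * p_cs_short
print(expected_dc, expected_cs)       # ~0.52 vs ~0.12; the short-outage term dominates both
```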
Download this whitepaper and study today to learn the key benefits of a dynamic electromechanical system like the CleanSource 750HD. You can now deploy power control platforms that help reduce demand failure probability by more than 99 percent compared to a double-conversion UPS with batteries.
| 6:11p |
Hydro66 Kicks Off Sweden Data Center Construction in Facebook’s Neighborhood
Hydro66 has commenced construction on a data center in Sweden, close to the Arctic Circle. The first phase will consist of about 85,000 square feet of white space, and the company owns a sizable tract of land (about 500,000 square feet) for future expansion.
The construction is in Boden, near Luleå, a town well known in the data center industry as home to a billion-dollar Facebook data center. The new build is about 10 miles from Facebook’s site. Hydro66 said it wants to bring colocation customers many of the regional advantages Facebook has leveraged, emphasizing the area’s power profile and environment as an opportunity.
Hydro66 is a new European player led by industry veterans. The company said it wants to change the cost model and environmental impact of cloud, Internet, and large-scale IT deployments. Its financial backer is David Rowe of venture capital firm Black Green Capital. Rowe founded UK Internet service provider Easynet in the 90s, as well as Cyberia, an early commercial cyber café.
The company said it will initially be able to deploy 4 megawatts at its Sweden data center but will have access to more than 120 megawatts from a new purpose-built substation by summer 2015.
“Huge-scale quad-drop power from the regional grid is readily available,” said Hydro66 business development director Paul Morrison. Four separate regional grid feeders give clients the option to run applications of low and medium criticality without traditional electrical infrastructure redundancy, such as UPS systems and generators.
“There is no limit to the amount of clean low-cost resilient power available to the company as needed,” said Morrison. The new Sweden data center will also be located less than half a mile from a 78-megawatt hydropower dam.
The name of the game is energy flexibility. “We expect to achieve a step-change in cost per MW deployed and in energy efficiency and simplicity whilst still providing enterprise clients with all the benefits of world-class colocation,” said Morrison. “To this point, we have designed in variable customer choice to achieve their required resilience levels.”
Boden’s arctic climate offers the potential for year-round free air cooling, even at high power densities. That environment is one big reason the company found the region appealing.
“Whilst not sharing all our design secrets just yet, it’s all about working with the natural characteristics of how hot and cold air behaves,” said Morrison. “Also the underlying extreme reliability of the regional grid and having quad-drop power available allows new thinking on infrastructure that otherwise adds overhead.”
Hydro66 engaged in a global site selection process over several months, and Luleå ended up being the perfect fit. Energy costs that are low compared to other European countries, abundant hydropower, and a cool climate promise to cut the biggest data center operating cost: powering and cooling the facility.
The company claims it is seeing early interest from the EU and U.S. alike. Hydro66’s anchor tenant in its first facility is a Bitcoin operation called MegaMine, which has the same equity backers and management as Hydro66.
Economic development organizations in the Nordic countries have been hard at work pitching the region as a perfect location for large data center projects. A recent Facebook study showed that its data center has greatly benefited the local economy.
| 7:54p |
IBM to Open SoftLayer Data Center in Paris
The IBM SoftLayer data center expansion rolls on, and the next stop is Paris. IBM announced it will establish SoftLayer’s first cloud data center in France. The site will complement recently opened London and Amsterdam locations.
The expansion is part of an ongoing $1.2 billion investment by IBM to grow its SoftLayer data center infrastructure and cloud offerings. The investment was committed in January, with 15 data center expansions planned, which will bring the portfolio to 40 data centers across five continents.
The move adds more European cloud in the name of providing data sovereignty and lower latency; AWS just opened a region in Germany, also citing data privacy as a major driving force. SoftLayer CEO Lance Crosby said the Paris location will allow the company to support workloads and applications from customers who want their data to stay in the country and remain secure in the cloud.
It’s no longer sufficient to have a single cloud region covering all of Europe. Several providers are building clouds inside individual countries to serve local needs, with most of the activity occurring in the major connectivity hubs.
SoftLayer already does solid business in France, citing the country as one of its top 10 best performing markets in EMEA. Worldwide, SoftLayer said it has over 100,000 devices under management.
IBM SoftLayer’s cloud differentiates itself from AWS largely with bare-metal offerings. The company will also offer virtual servers, storage, and networking out of the location.
The new deployment has capacity for “thousands of servers.” Note the difference in parlance from previous expansions under the $1.2 billion push, which have all been uniformly described as having space for 15,000 physical servers.
SoftLayer is partnering with Global Switch here. Most of the recent expansions have been in Digital Realty Trust facilities, as was the case with recent launches in Toronto, London, Hong Kong, and Melbourne.
| 9:00p |
IBM and Microsoft Partner on Enterprise-Focused Hybrid Cloud Solutions 
This article originally appeared at The WHIR
Microsoft and IBM announced a partnership that will make key IBM middleware, including WebSphere Liberty, MQ, and DB2, available on Microsoft Azure; in turn, IBM Cloud will support Windows Server and SQL Server.
According to the Wednesday announcement, IBM and Microsoft are also working together to deliver a Microsoft .NET runtime for Bluemix, the cloud development platform IBM launched earlier this year.
To support hybrid cloud deployments, IBM will also expand its support for software running on the Hyper-V hypervisor and enable Azure users to leverage IBM PureApplication, a SoftLayer utility that helps extend applications across and between on-premise and off-premise environments.
Interoperability among different cloud solutions has been a major theme this week for Microsoft, which just announced that Azure supports CoreOS, a container-based Linux operating system, alongside the many Linux distributions already supported. The company also said that around 20 percent of VMs running on Azure are running Linux (an operating system Steve Ballmer once called a “cancer,” as Ars Technica reminded us).
IBM Software and Cloud Solutions Group SVP Robert LeBlanc stated that the collaboration between IBM and Microsoft will help “drive innovation in hybrid cloud,” and reinforce “IBM’s strategy in providing open cloud technology for the enterprise.”
Microsoft Cloud and Enterprise executive vice president Scott Guthrie said in a statement, “Microsoft is committed to helping enterprise customers realize the tremendous benefits of cloud computing across their own systems, partner clouds and Microsoft Azure. With this agreement more customers will be able to take advantage of the hyper-scale, enterprise performance and hybrid capabilities of Azure.”
This article originally appeared at: http://www.thewhir.com/web-hosting-news/ibm-microsoft-partner-enterprise-focused-hybrid-cloud-solutions
| 9:02p |
Intel Security Switches to All-Colo Data Center Strategy
Each organization’s data center strategy is dictated by its unique infrastructure needs, and there isn’t one strategy that works best for everyone. The Intel Security Group (formerly McAfee) has changed its strategy from a mix of on-premise and colocation data centers to an all-colo approach.
The organization – which the parent company recently rebranded to distance itself from founder John McAfee’s streak of bad publicity – has moved all infrastructure out of on-premise data centers and into various colocation facilities around the world over the course of the past several years, Intel Security data center manager Doug Chansky said.
“We don’t see a need to have an internal data center,” he said. Reasons for the switch are numerous, ranging from the need for flexibility to the difficulty of complying with building codes in certain locations.
Chansky’s group has between 15 and 20 data centers worldwide, all of them running around the clock, even though some are non-production R&D environments.
Avoiding sunk investment
One of the biggest reasons for the switch was that managing data center capacity in colo is much easier. “You gain a lot of flexibility,” he said.
It takes a lot of time and money to build an in-house data center. If you plan and build for a certain amount of server capacity, you may find yourself with too much space after a hardware refresh packs far more computing power into each rack.
Colocation customers do contract space for set periods of time, but if they end up with more than they need, sitting on the extra colocation space until the contract runs out costs them less than owning a chunk of unutilized data center space, Chansky reasoned.
Future capacity needs are often unpredictable
Overcapacity is a fairly common problem that may occur for a number of reasons.
Facebook, for example, ended up with too much data center space in leased wholesale facilities after it started building its own massive data centers and moving equipment out of the sites it had signed long-term leases on. The company recovered some of the lost investment by subleasing the space to others.
Another famous example was Washington State’s data center consolidation project, which resulted in a brand new facility with far more space than the state needed. In 2012, Washington officials retained commercial real estate firm Jones Lang LaSalle to help the state lease the excess capacity.
In a more recent example, Dell ended up with 4 megawatts more capacity than it needed at a Quincy, Washington, facility. The company found a creative solution by partnering with a provider of hosting services for bitcoin mining servers, which now markets the data center to its customers.
The mixed-use building problem
One of the problems with on-premise data centers in Intel Security’s case was the group’s former approach of building them within its existing office buildings. The strategy often made it difficult to bring in the necessary electrical and mechanical equipment and to comply with building codes for office buildings.
In one location in the U.S. northeast, for example, Chansky’s team wanted to install a dedicated backup generator, but local building officials did not allow it. “We could not persuade them to allow us to put in a generator for the life of us,” he said.
Chansky also likes how much time the group saves on deploying equipment in colo facilities. Data center providers’ remote-hands services manage to install servers in a fraction of the time it used to take in on-premise facilities.
| 9:30p |
NIST Publishes US Government Cloud Computing Roadmap 
This article originally appeared at The WHIR
The National Institute of Standards and Technology (NIST) published its final US Government Cloud Computing Technology Roadmap this week. It identifies 10 requirements expected to encourage cloud adoption by government agencies while also generally supporting innovation in cloud computing technology.
The 10 requirements collectively relate to interoperability, performance, portability, and security, and are largely intact from the draft version of the roadmap, published in 2011.
The second requirement, “Solutions for High-priority Security Requirements, technically de-coupled from organizational policy decisions,” has been changed to reflect the need for industry to develop technical solutions that support diverse policy rules, including not only legal requirements but also government and business policies.
Requirement six identifies a need for “Updated Organization Policy that reflects the Cloud Computing Business and Technology model.” It has been updated to reflect the changing practical reality of cloud security requirements, and refers specifically to the difficulty, in the current cloud model, of enforcing law and policy through physical location. The section on this requirement also notes that outdated government policies on issues like domestic storage inhibit cloud adoption.
“Cloud computing is still in an early deployment stage, and standards are crucial to increased adoption,” the report says. “The urgency is driven by rapid deployment of cloud computing in response to financial incentives. Standards are critical to ensure cost-effective and easy migration, to ensure that mission-critical requirements can be met, and to reduce the risk that sizable investments may become prematurely technologically obsolete.”
The Roadmap (PDF) consists of two volumes. Volume 1 covers “High-Priority Requirements to Further USG Agency Cloud Computing Adoption,” while volume 2 consists of “Useful Information for Cloud Adopters.” The final version takes into consideration input from over 200 US and international commenters.
Each requirement is addressed with recommended priority action plans consisting of one to four elements. Each element is identified as periodic or ongoing, or is given a target year for completion between 2014 and 2017.
NIST has also been working on other aspects of cloud computing, and suggested in July that cloud providers should adopt protocols to aid forensic investigators. It is also working on the US government’s cybersecurity framework.
Government cloud adoption has grown since the draft of the report was released, but some agencies, such as the Department of Defense, have delayed adoption despite the massive potential savings for taxpayers, which MeriTalk estimated a year ago at $20.5 billion annually.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/nist-publishes-us-government-cloud-computing-roadmap
| 10:00p |
Snead: NSA Revelations Have Chilling Effect on Cloud Growth in U.S.
ORLANDO, Fla. – Data center customers are beginning to avoid the U.S. and place their infrastructure elsewhere because of data sovereignty concerns caused by revelations about NSA surveillance, according to David Snead, founder of the Internet Infrastructure Coalition (I2C).
“Our members are seeing a very real shift in putting data outside the U.S. rather than inside the U.S.,” said Snead, whose group includes more than 100 companies in the hosting and data center business. “The NSA disclosures have undermined worldwide confidence in U.S. infrastructure.”
It’s not an accident, Snead said, that a large hosting company in Switzerland recently reported a 45-percent increase in business in the wake of the revelations of former NSA contractor Edward Snowden. One coalition member reports that it used to get 70 percent of its new business from overseas customers, but that now has dropped to 35 percent.
Spying and surveillance by state agencies is nothing new, and the U.S. isn’t the only country engaged in surveillance and requesting information from service providers. But the U.S. has more at stake because it is the leading player in Internet infrastructure.
“The vast majority of data transfer traffic touches the United States,” said Snead. “The U.S. remains an enormous market for the data center industry.”
Secret process erodes confidence
The key issue is the secret nature of information requests by the NSA and other agencies. Service providers are barred from discussing whether they’ve received classified requests for user data. The I2C argues that companies should be able to explain how the process works and disclose the number of requests they have received from the government.
“Most of you have never received these requests, and your users assume that you have,” said Snead, adding that providers should be allowed to make this clear to their users.
One way for cloud platforms and service providers to defuse data sovereignty concerns from international clients would be to add infrastructure in other countries, allowing customer data to stay within their borders, rather than traveling through U.S. infrastructure where it might be accessed by federal agencies.
But this approach has been complicated by the U.S. government’s effort to access data stored by Microsoft in a data center in Ireland. The case has broad consequences for the data center industry, making it difficult for American providers to communicate with customers and assess how to expand their global networks.
Providers should pay attention
In April, a judge ruled that Microsoft must comply with search warrants from U.S. law enforcement agencies seeking customer data regardless of where that data is stored. In this case, the data is in a Microsoft facility in Dublin. Microsoft refused to comply with the request, arguing that a U.S. warrant did not apply to data located overseas, and the dispute ended up in court.
“We’re convinced that the law and the U.S. Constitution are on our side, and we are committed to pursuing this case as far and as long as needed,” said Microsoft General Counsel Brad Smith.
Snead said the Microsoft decision is “extremely troublesome” to U.S. companies. “This is a huge issue that the industry is not paying very much attention to,” he said. “Companies should be able to place data where they think is necessary, and respect how the local law works.”
Invite the FBI to visit
Snead noted that the relationship between data centers and law enforcement need not be adversarial. In fact, he said, there are times when it can be a good thing to have the FBI visit your facility.
“Develop a relationship with law enforcement,” he said. “Call the local FBI office and invite them over for coffee, and then give them a tour of your data center. If there’s no relationship, they’ll just come in looking for a single customer’s data and take the entire server. That’s a huge problem, since you have other customers and SLAs.
“You never want to figure out your subpoena and access policy when the FBI knocks on your door,” said Snead. “You have to work it out beforehand. The last thing you want to do is ask the FBI to sit in your conference room while you go call your lawyer.”