Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, December 10th, 2014
1:00p |
OVH Raises $327M to Build More Data Centers Globally French hosting provider OVH has raised $327 million to expand its data centers internationally. The money comes from a new syndicated loan and a private bond placement.
OVH started in 1999 and has grown into one of the largest hosting providers worldwide. Its focus nowadays is on growing the OVH cloud business.
The company has several international expansion plans and has been busy in North America. A company spokesperson said via email that it is too early to communicate the long-term strategy for the new capital, but that the company is actively considering expansion to the West Coast of North America and to Asia Pacific.
A West Coast presence would greatly complement the East Coast footprint the company has been building up. It first entered the North American hosting market with a data center just outside of Montreal in 2012 and recently expanded capacity there with a 10,000-server container.
Forward-Thinking Design
The company is innovating both inside and outside the data center. It has an innovative data center design consisting of “hosting towers,” pioneered at its facility in Roubaix, France, replicated using shipping containers at a second site in Strasbourg, and inspiring expansion in Quebec.
Inside the data center, it builds custom servers and is able to deploy custom water-cooled servers in an hour. The company recently launched a big data cloud on IBM Power8 chips and OpenStack.
“They need a large amount of capital because they like to build everything from the ground up,” said Philbert Shih, managing director, Structure Research. “Every bit is built by them, from the data center and infrastructure down to servers and components. They’re very specific about the design and this requires a lot of capital.”
Shih said OVH is currently working on a multi-brand strategy built on the foundation of its various technology capabilities. “That innovation is being productized in various offerings for different markets,” he said. The strategy includes “different brands for cloud, small business hosting and OVH.com for standard hosting, and focusing on global expansion in general.”
From Hosting to OVH Cloud
OVH made a name for itself in dedicated server hosting and has been making a transition to cloud, said Liam Eagle, analyst with 451 Research.
“OVH, like most of the providers with a legacy in dedicated hosting, is making the transition to a more cloud-like offering,” he said. “And, like many of those providers, it has struggled somewhat with the influence of utility [cloud] services on its business model. However, the company has positioned itself with low-cost virtual server products that keep it competitive for the same types of customers it has always served.”
Despite being a large provider, OVH isn’t well known in North America, Eagle added. “The company’s plans certainly include increasing its presence in the U.S., which will require significant capital investment, both to acquire customers and to build out the infrastructure to support them.”
Debt to Fund Data Center Construction
Institutions taking part in the syndicated loan provided a $196 million revolving credit facility, maturing in six years, while Euro PP bond investors committed $131 million over six, seven, and eight years.
“The goal is to provide the means to become a key player in the global cloud, able to compete with the big American companies,” Nicolas Boyer, CFO of OVH, said in a statement. The investment “allows us to diversify our funding sources and extend the average maturity of our debt, assuring the next three years of development.”
The debt will finance an OVH investment program of more than $490 million (the rest is self-funded). “We can then intensify the deployment of our data center and network infrastructures, supporting our customers in the cloud while capturing new markets.”
OVH is frequently on the list of web companies with the most servers. It had 150,000 as of early 2013. The company currently touts 700,000 customers. | 4:00p |
US Intelligence Wants Superconducting Computer in Five Years The U.S. federal agency that invests in “high-risk, high-payoff” research to advance the capabilities of the government’s intelligence agencies has started a five-year program to develop superconducting circuits for the most powerful spy computer to date.
The Intelligence Advanced Research Projects Activity (IARPA) has signed research contracts with IBM, Raytheon-BBN, and Northrop Grumman Corp. to support the program called C3, or Cryogenic Computing Complexity.
The promise of superconducting processing is to take computing far beyond limits of the currently used complementary metal oxide semiconductor (CMOS) technology. If the technology proves to be effective and possible to manufacture at low enough cost, it will greatly reduce power and cooling requirements of supercomputers and data centers as well as the amount of space required to house cooling infrastructure.
“The power, space, and cooling requirements for current supercomputers based on complementary metal oxide semiconductor (CMOS) technology are becoming unmanageable,” Marc Manheimer, C3 program manager, said in a statement.
Superconducting circuits are also one of the leading approaches to quantum computing, which takes advantage of quantum bits, or qubits, that can exist in more than one state at once, a property that can theoretically deliver much faster computing than the current binary paradigm.
Besides the spy computer research program, there are multiple ongoing R&D projects around superconducting computing in academia, industry, and government, including efforts by researchers at MIT, the University of California Santa Barbara, Google, and NASA.
Some superconducting circuits have been clocked at 770 gigahertz, according to a report by MIT News. For comparison, Intel’s fastest processor to date, Core i7-4790K, clocks at a maximum of 4.40 GHz.
Tianhe-2, the Chinese system currently considered the world’s most powerful supercomputer, runs on 2.2 GHz processors, though it spreads its workload across more than 3.1 million processor cores. The system requires nearly 18 MW of power.
Superconducting circuits have no electrical resistance and thus produce no heat. This is achieved by cooling the material below a critical temperature, at which point electrons can flow through it without scattering off atoms and dissipating energy as heat.
A recently published paper by MIT researchers describes superconducting circuits made of niobium nitride that operate at minus 257 degrees Celsius. They are cooled to that temperature by liquid helium.
These circuits need about 1 percent of the energy a conventional chip needs.
Another major ongoing supercomputing research effort by the federal government is the Department of Energy’s program focused on reaching exascale computing. An exascale computer can perform a billion billion calculations per second, a rate measured as 1 exaFLOP/s.
The aforementioned Tianhe-2 system’s maximum theoretical performance is about 54,900 teraFLOP/s. To do what a 1 exaFLOP/s system can do in one second, one would have to make one calculation per second for about 31.7 billion years, according to the website of Indiana University.
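To sanity-check that comparison, here is a quick back-of-the-envelope calculation; this is only a sketch using the figures quoted above, not numbers from IARPA or the TOP500 list:

```python
# Rough arithmetic behind the exascale comparison above; figures are those
# quoted in the article (1 exaFLOP/s = 10**18 operations per second,
# Tianhe-2 peak of roughly 54,900 teraFLOP/s).
exa_ops_per_second = 10**18
seconds_per_year = 365.25 * 24 * 3600          # ~31.6 million seconds

# One calculation per second, sustained for as many seconds as an exascale
# machine needs operations to fill a single second of work:
years = exa_ops_per_second / seconds_per_year
print(f"{years:.3g} years")                    # ~3.17e+10, i.e. about 31.7 billion years

# Tianhe-2's peak expressed as a fraction of exascale:
tianhe2_ops_per_second = 54_900 * 10**12
print(f"{tianhe2_ops_per_second / exa_ops_per_second:.1%} of 1 exaFLOP/s")  # ~5.5%
```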
IARPA’s program is looking well beyond exascale. C3 administrators expect the program to yield the technology needed to demonstrate a small superconducting processor and, within five years, a “small-scale working model of a superconducting computer.” | 4:30p |
cloudControl Moves All PaaS Clients from AWS to Google Cloud cloudControl, the German company that bought Docker’s Platform-as-a-Service business called dotCloud in August, has moved all 500 of its dotCloud customers from Amazon Web Services to Google Cloud Platform.
The company made the move primarily to give developers more options, Philipp Strube, cloudControl CEO, wrote in a blog post. Those include a wide selection of programming languages and add-on services.
He also praised Google’s “unparalleled global network infrastructure” and reliability of Google Cloud Storage.
Containers Ensure Smooth Migration
It took one week to “bootstrap” cloudControl’s technology on Google’s cloud and six to eight more weeks to prepare the platform for production. The move was painless because the dotCloud architecture is infrastructure agnostic due to its use of application containers.
“All customer application processes and 98 percent of our own platform components run inside the containers,” Strube wrote.
The container technology that underpins dotCloud was the foundation for the container format and tooling behind Docker’s skyrocketing rise to fame.
Before it became Docker, the company was called dotCloud and was originally a PaaS provider. Eventually, its founder Solomon Hykes decided that the company had better chances of success if it focused on developing the container technology.
Earlier this year, Docker sold dotCloud to cloudControl for an undisclosed sum so it could focus on its new direction.
Move Results in Lower PaaS Prices
Using Google’s sustained-usage discounts, cloudControl was able to reduce per-memory prices for dotCloud customers by 50 percent to 80 percent, Strube wrote. | 4:30p |
Forecasting to Improve Your Data Center Portfolio Santiago Bernal (CSCP) is a Senior Project Manager in the Cloud and Enterprise team at Microsoft Corporation.
Building and managing data center infrastructure requires a large investment that can easily reach hundreds of millions of dollars. Knowing whether you need additional capacity now, or whether your company can wait a few months to make that investment, can translate into a significant financial improvement for your data center portfolio.
In retail, there are two forecasting models that allow you to manage inventory levels between Original Equipment Manufacturers (OEMs), distribution centers, retail stores, and end customers: the sell in and sell through models.
Sell In Forecasting Model: allows you to determine your inventory needs based on how much inventory your company has at your retail stores. Based on re-order points, additional inventory is requested from distribution centers or the OEM.
Sell Through Forecasting Model: ensures that inventory in the channel is replenished when your existing inventory is sold to end customers. This type of model can be seen as a deeper dive into your existing inventory. Is your inventory being sold to your customers? Or is it just moving from the stock room to your shelves on the floor?
By applying these retail principles, we can ask ourselves the same questions about the number of data centers and servers a company has deployed to meet customer demand. Is your current server fleet meeting actual customer demand (sell through), or are servers just being installed to meet future (sell in) forecasts? Is your company solely looking at how full your data centers are (servers installed, or sell in), or is it looking at the utilization of those servers (sell through)?
- Sell In = Servers installed at your data center
- Sell Through = Server utilization at your data center
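The article does not spell out formulas for these two metrics, but as a rough sketch (the function names and the capacity and utilization figures below are illustrative assumptions, not Microsoft's actual model), they might be computed per data center like this:

```python
# Illustrative sketch only: the article defines sell in as servers installed and
# sell through as server utilization, without prescribing exact formulas.

def sell_in(servers_installed: int, server_capacity: int) -> float:
    """Fraction of a data center's server capacity that has been filled."""
    return servers_installed / server_capacity

def sell_through(servers_utilized: int, servers_installed: int) -> float:
    """Fraction of installed servers that are actually serving customer demand."""
    return servers_utilized / servers_installed

# Hypothetical data center: room for 10,000 servers, 8,000 installed,
# 5,600 of them doing real customer work.
print(f"sell in:      {sell_in(8_000, 10_000):.0%}")      # 80%
print(f"sell through: {sell_through(5_600, 8_000):.0%}")  # 70%
```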
Let’s simulate these principles with the scenario below:
Company ABC has three data centers in a specific global region. Based on the lead-times built into their re-ordering point model, this company will trigger the build of another data center when sell in reaches 80 percent at a regional level. The percentages of sell in for each data center are illustrated below:

Based on additional sell in demand from customers, this company will trigger the signal to build a fourth data center in the region, because the additional demand will push regional sell in to the 80 percent target. That build translates into a $100 million investment.

Company XYZ has a similar data center portfolio and has decided to include sell through into their forecast and inventory management logic. Based on the lead-times built into their re-ordering point model, this company will trigger the build of another data center when sell through reaches 70 percent at a regional level.

Based on additional sell through (actual server utilization), this company decides not to trigger a build for a fourth data center in this region (even though the sell in target has been reached). This decision saves Company XYZ millions of dollars.
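As a minimal sketch of the two decision rules just described (the 80 percent and 70 percent thresholds come from the scenario; the per-data-center percentages are hypothetical stand-ins for the charts), the build trigger might look like this:

```python
# Sketch of the two trigger rules above. The thresholds (80% regional sell in for
# Company ABC, 70% regional sell through for Company XYZ) come from the scenario;
# the per-data-center percentages are hypothetical stand-ins for the charts.

def regional_average(values: list[float]) -> float:
    return sum(values) / len(values)

# Hypothetical snapshot of the three existing data centers in the region.
sell_in_pct      = [0.90, 0.85, 0.70]   # servers installed vs. capacity
sell_through_pct = [0.65, 0.60, 0.55]   # servers utilized vs. servers installed

# Company ABC triggers a build on sell in alone.
abc_builds_dc4 = regional_average(sell_in_pct) >= 0.80

# Company XYZ waits for sell through to cross its threshold as well.
xyz_builds_dc4 = regional_average(sell_through_pct) >= 0.70

print(f"Company ABC builds data center four: {abc_builds_dc4}")  # True  (~82% sell in)
print(f"Company XYZ builds data center four: {xyz_builds_dc4}")  # False (60% sell through)
```

The point of the comparison is simply that the same regional snapshot can cross the sell in threshold while sell through still has headroom, which is what lets Company XYZ defer the $100 million build.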
Company XYZ has also given itself a few more months to analyze the age of its current inventory on hand before having to decide whether to acquire additional capacity. During this period, Company XYZ determined that the servers in the first data center are approaching the end of their life cycle and will therefore be decommissioned. These servers will be retired and replaced with new, more efficient servers and network designs. Company XYZ determines that data center one can now house the new, more efficient network and servers instead of the company having to invest in a fourth data center.
This analysis and scenario are described below:
Company ABC invested in the fourth data center, not realizing that the inventory in data center one is becoming obsolete. Sell in for data center four has now increased, but sell in for data center one is now at 0 percent as the servers residing in this data center have now reached their end of life.

Company XYZ decided not to invest in data center four as they now have capacity available in data center one.

To implement this type of additional forecast model, you will need to invest in automation so actual utilization can be measured accurately and recorded in real time. Failing to do so could add risk to your current data center portfolio growth plans. In addition, implementing a basic Sales and Operations Planning (S&OP) process will allow you to have direct conversations between your customers, finance, engineering groups, and operations. Yes, “Forecasts are ALWAYS wrong,” but investing in additional detail about the actual utilization of your servers will translate into a big payday.
Adding a focus on sell through (server utilization) forecasting, in addition to sell in, can improve your data center portfolio's performance.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 5:30p |
Data Center Jobs: CBRE At the Data Center Jobs Board, we have a new job listing from CBRE, which is seeking a Field Support Technician, 3rd Shift in Alpharetta, Georgia.
The Field Support Technician is responsible for troubleshooting and repairing telecommunication systems, including but not limited to phone circuits; Internet, backbone, and subnet connections; and fiber terminations and MDF cable plant management. The role also acts as the liaison between clients and the operations team for special projects and ongoing infrastructure management, assists with installation and modification of building equipment systems, and troubleshoots, evaluates, and makes recommendations to upgrade maintenance operations and/or implement savings opportunities. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed. | 6:12p |
Microsoft, Accenture to Launch Hybrid Cloud in Biggest Collaboration since Avanade Microsoft and Accenture have teamed up on enterprise hybrid cloud. The two companies have a longstanding relationship and have now introduced Accenture Hybrid Cloud Solution for Azure.
The companies teamed up on an end-to-end public cloud solution in 2012. In 2000, the two launched a joint venture called Avanade, which has grown into a major Microsoft solutions provider with more than $2 billion in global sales. The latest hybrid cloud agreement is the most far-reaching collaboration since Avanade, according to the companies.
Working with Avanade, the companies are co-funding and co-engineering this platform with new hybrid cloud technologies and services to help enterprises build and manage enterprise-wide cloud infrastructure and applications.
Accenture has other high-profile partnerships, such as the recently announced SAP HANA on HP collaboration and a partnership with Huawei on private cloud.
This is the latest bid to help enterprises “transform” infrastructure toward cloud. Accenture Hybrid Cloud Solution for Microsoft Azure includes new technologies to migrate and manage applications between private and public clouds in a controlled, seamless, and automated way, the companies said. The “everything-as-a-service” offering is made up of:
- Microsoft’s Azure cloud platform connected with Windows Server with Hyper-V, System Center, and Azure Pack running in customer data centers. The connection is what enables the hybrid infrastructure.
- Accenture Cloud Platform, which provides enterprises with self-service application provisioning and supports multi-platform environments. Its central dashboard controls cloud brokerage and management capabilities and provides an enterprise-grade environment, including governance and security.
- Professional services from Accenture, who will consult on strategy and transformation, as well as migration and managed services.
- Avanade’s Microsoft solutions experts
An early customer is Freeport-McMoRan, one of the largest international natural resource companies. The hybrid cloud offering was used to prototype a new Industrial Internet of Things (IIoT) platform to improve mining operations. Data from trucks, drills, and other assets in the mine is securely harvested to show supervisors what’s going on in real or near-real time.
“With new demands being placed on IT departments every day, enterprises need to smartly connect their infrastructure, software applications, data and operations capabilities in order to become agile, intelligent, digital businesses,” said Accenture chairman and CEO Pierre Nanterme in a release. “This unique collaboration with Microsoft and Avanade is one of Accenture’s most strategic and important initiatives for driving enterprise-wide cloud adoption.” | 6:27p |
Altiscale Raises $30M for Hadoop Cloud Services Hadoop cloud service provider Altiscale has raised $30 million in a Series B funding round led by Northgate, with participation from previous investors Sequoia Capital and General Catalyst Partners.
The two-year-old Silicon Valley company has some big competition in the Hadoop cloud services market, but feels its approach addresses a market that is looking for assistance with effectively utilizing the open source big data software.
Recently the company added the ability to use SQL to run queries on data stored in Hadoop, and partnered with Carpathia as a data center provider.
In a statement, Altiscale founder and CEO Raymie Stata, a former Yahoo CTO, said, “The simple truth is that businesses don’t want to cope with Hadoop’s complexity. They just want the insights that can be used to improve their business. We enable them to focus on generating those insights and not worry about the underlying infrastructure.”
Altiscale’s Hadoop-as-a-Service offering brings the elasticity of a cloud service to big data projects, while also eliminating the complexities of installing and administering Hadoop software and related applications.
Altiscale has East Coast and West Coast data center partners to deliver the service. | 7:45p |
Pirate Bay Down After Stockholm Area Data Center Raid Swedish national police brought Pirate Bay down Tuesday, confiscating numerous servers that hosted the immensely popular peer-to-peer file-sharing torrent site in a raid on a data center outside of Stockholm.
The data center is located in Nacka, a town east of Stockholm, Torrent Freak reported. The torrent news site posted a statement by Paul Pintér, police national coordinator for IP enforcement, confirming the raid.
“There has been a crackdown on a server room in Greater Stockholm,” Pintér said. “This is in connection with violations of copyright law.”
Data center raids by authorities are infrequent but sometimes damage companies unrelated to those targeted. If a company’s website shares a server with another company’s website that authorities are after, the authorities will often take the entire server, potentially causing downtime for both.
Authorities told Torrent Freak that the raid targeted a data center in Nacka built into a mountain.
A hosting company called Portlane advertises a Nacka data center that is built into bedrock and takes advantage of “natural cooling that exists inside the mountain…” Portlane confirmed the raid in a statement to the BBC.
The Nacka Station data center is operated by wholesale data center provider Swedish Datacenter AB.
Pirate Bay was not the only torrent site to go down Tuesday. Others include EZTV, Zoink, Torrange, and Istole.
Numerous news reports have appeared claiming that Pirate Bay had gone back up under a different domain name. But, as Torrent Freak explained, these reports are based on proxy homepages that use content from the actual Pirate Bay website, which is down.
A Wednesday tweet on Pirate Bay’s Twitter feed said there was a new domain but the site had not been switched to it yet: “New domain, and soon we’ll switch again!”
The last time a police raid on its data center brought Pirate Bay down was in 2006. The site went back up shortly thereafter. | 9:54p |
DigitalOcean Gets $50M, Helping It Keep Up with Rapid Customer Acquisition 
Helping fuel the continual growth of cloud hosting company DigitalOcean, Fortress Investment Group has granted DigitalOcean a $50 million credit facility.
According to DigitalOcean COO Karl Alomar, the line of credit will ensure the company can keep up with its rapid customer growth through high-volume equipment leases and international expansion. The funding will also help the company build out its feature set and hire more engineering talent, with the goal of tripling the number of employees before year end.
Since its launch in 2011, DigitalOcean has grown considerably. It recently became the world’s third-largest web host based on web-facing servers. It has also positioned itself as a favorite web host among developers and has built an active developer community.
In recent months, an average of 20,000 new users per month have been deploying servers on DigitalOcean. The company saw its biggest month for customer acquisition in October 2014, when the number of new active users exceeded 26,000.
“In our view, DigitalOcean is well positioned to succeed in the expanding cloud infrastructure sector,” says Aaron Blanchette, Managing Director at Fortress Investment Group LLC. “We look forward to playing a part in DigitalOcean’s organic expansion as they bring their cloud offerings to more international users.”
Given that running an Infrastructure-as-a-Service cloud is very capital intensive, DigitalOcean has already raised a significant amount of venture capital to fuel its growth. In March, the company closed a $37.2 million Series A led by Andreessen Horowitz that also included IA Ventures and CrunchFund.
It has also been focusing efforts on international customers. Earlier this year, DigitalOcean launched services in Singapore through a partnership with data center provider Equinix, foreseeing growth in the Asia Pacific region. It also added a London region in July, also in an Equinix data center, complementing its other European location in Amsterdam, which opened late last year in a facility run by TelecityGroup.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/digitalocean-gets-50m-helping-keep-rapid-customer-acquisition |