Data Center Knowledge | News and analysis for the data center industry
Wednesday, April 15th, 2015
1:00p
DigitalOcean’s Developer Cloud Adds German Data Center
DigitalOcean, a fast-growing provider of cloud infrastructure services geared toward developers, has opened a data center in Germany. The Interxion facility in Frankfurt brings the total number of data centers supporting DigitalOcean’s developer cloud to five in Europe and 10 worldwide.
Cloud service providers that are not Amazon, Google, or Microsoft, all of which have drastically slashed prices for raw cloud infrastructure services as they compete with one another, have to differentiate by focusing on niche markets. DigitalOcean has chosen to be a developer cloud and has so far enjoyed tremendous growth and adoption. The new German data center joins existing European facilities in Amsterdam and London as the company’s worldwide footprint expands rapidly.
“Europe counts for one-third of our total user base, so we’ve seen a lot of love there,” said DigitalOcean co-founder and chief marketing officer Mitch Wainer. When choosing its next location in Europe, Wainer said, the priority was figuring out where it could be of greatest service to developers.
“There are a lot of developers in Germany – over 350,000 using the cloud – and only 5,000 of them currently use DigitalOcean,” said Wainer. “Also, having a local data center ensures our users in Germany can follow all EU/German data privacy laws. Frankfurt is a great location because more than 35 percent of all cloud traffic in Germany flows through there and it has good connectivity to all central Europe.”
The entire network in Interxion’s German data center has recently been completely redesigned from the ground up, which impressed DigitalOcean. “This data center brings some radical improvements,” said Wainer. “The new design has 40Gbs of bandwidth down to every hypervisor, is highly scalable, and has no single points of failure. These improvements will drastically enhance the customer experience.”
The company plans to play a much bigger role in the local ecosystem by sponsoring events, hosting meetings, and giving talks locally.
Last year, internet services company Netcraft named DigitalOcean the third-largest web-facing cloud and the fastest-growing cloud.
DigitalOcean recently launched its 4 millionth cloud server, and nearly half a million developers have used the company to deploy one. As software and applications continue to take over the world, the company’s developer-focused role increasingly benefits that group.
“More and more developers are making decisions about infrastructure services that were traditionally left to systems administrators, developer operations or CTOs,” said Mitch Wainer. “With the Droplet, we believe we already have a great compute product that developers love to deploy their application code onto.”
Wainer said that the company’s product and feature goals in the near term revolve around advancing network and storage technologies.
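To give a sense of what deploying onto a Droplet looks like in practice, here is a minimal sketch against DigitalOcean’s public v2 REST API. The specific region, size, and image slugs (“fra1”, “512mb”, “ubuntu-14-04-x64”) are illustrative assumptions rather than details taken from this article.

```python
# Minimal sketch: provisioning a Droplet via DigitalOcean's public v2 API.
# The region/size/image slugs are illustrative assumptions; check the current
# API documentation before relying on them.
import json
import os
import urllib.request

API_TOKEN = os.environ["DO_API_TOKEN"]  # personal access token

payload = {
    "name": "example-web-01",
    "region": "fra1",              # assumed slug for the new Frankfurt region
    "size": "512mb",               # assumed smallest Droplet size
    "image": "ubuntu-14-04-x64",   # assumed base image slug
}

request = urllib.request.Request(
    "https://api.digitalocean.com/v2/droplets",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + API_TOKEN,
    },
)  # POST is implied because a request body is supplied

with urllib.request.urlopen(request) as response:
    droplet = json.loads(response.read())
    print("Created Droplet with ID:", droplet["droplet"]["id"])
```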
Correction 4/15: DigitalOcean has five data centers in Europe, not three. DCK regrets the error.
3:00p
Lenovo Server Management Strategy Centers on REST APIs
As the dust starts to settle after Lenovo’s acquisition of IBM’s x86 server business, the Chinese giant is making progress modernizing the management frameworks for Lenovo servers.
As a first step toward providing a modern framework for server management based on REST application programming interfaces, the company has introduced the Lenovo Clarity systems management framework.
Mark Edwards, director of systems management at Lenovo, says the framework was designed to work on the latest generation of Flex Systems converged infrastructure platforms that Lenovo gained from IBM, and that it is only a matter of time before the framework is extended to the Lenovo ThinkServer platforms the company brought to market before acquiring the IBM x-Series line of servers.
“Lenovo Clarity is built on a foundation of REST APIs,” says Edwards. “It’s a lighter-weight framework [that] doesn’t require any software agents.”
In addition, Lenovo is rounding out its data center line with two top-of-rack network switches and an interconnect module that supports either 10G Ethernet or Fibre Channel over Ethernet (FCoE) connections.
In the meantime, Lenovo will continue to support the older server management frameworks that IBM used on previous generations of its servers, says Edwards.
Beyond making use of REST APIs, Edwards says, Lenovo Clarity is notable because it extends management reach both to top-of-rack network switches based on technology Lenovo gained from IBM and to virtual machines. Clarity is available in two forms: IT organizations can deploy a standard administrator version or a Professional edition that integrates with the VMware vCenter or Microsoft System Center management platforms.
In general, REST APIs are transforming how infrastructure is managed across the enterprise. In fact, the rise of REST APIs just might signal the end of specialization inside the data center.
Once diverse sets of infrastructure are all addressable through a common set of interfaces, organizations will increasingly standardize on a common management framework through which generalists can more easily administer computing, storage, and networking resources. Architects will still be needed to structure those services initially, but the daily management of data center infrastructure will increasingly be left to generalists relying on IT automation tools that invoke RESTful APIs to dynamically adjust settings.
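As a concrete illustration of what “invoking RESTful APIs to dynamically adjust settings” can look like from an automation tool, here is a hypothetical sketch. The base URL, endpoint path, and payload fields are invented for illustration; they are not Lenovo Clarity’s (or any vendor’s) actual API.

```python
# Hypothetical sketch of REST-driven infrastructure management. The endpoint,
# resource path, and payload fields are invented for illustration only; they
# do not correspond to Lenovo Clarity's or any vendor's real API.
import json
import urllib.request

BASE_URL = "https://mgmt.example.internal/api/v1"  # hypothetical management service


def set_port_speed(switch_id: str, port: int, speed_gbps: int) -> dict:
    """Adjust one switch-port setting through a generic REST management API."""
    body = json.dumps({"port": port, "speed_gbps": speed_gbps}).encode("utf-8")
    request = urllib.request.Request(
        f"{BASE_URL}/switches/{switch_id}/ports/{port}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",  # idempotent update of a single setting
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


# Example: an automation workflow reconfiguring a top-of-rack switch port.
# print(set_port_speed("tor-rack12", port=48, speed_gbps=10))
```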
Naturally, that may take a while to play out across individual data center environments. Less clear is the degree to which IT organizations will opt to standardize on a common set of server, network, and storage resources from a single vendor, or whether they will leverage REST APIs to more easily manage heterogeneous data center environments.
3:30p
Why Can’t Data Protection Be as Simple as Pizza?
Dan Wolfe is a member of IBM’s Tivoli Storage Software Advanced Technology Team.
I recently read a blog by IBM’s Albert Barron, who did a great job simplifying the concepts of cloud computing by using the analogy of pizza-making. (Check out, “Pizza as a Service.”) Aside from making me hungry, the article helped me realize how complex things have become in my area of IT, data protection and recovery. So why shouldn’t data protection be as simple as making pizza?
The reason is that defining a solution for data protection has become a complex and lengthy task, requiring deep skills across a range of technologies. This complexity not only increases the time and cost of deploying a data protection solution, but makes it difficult for customers to evaluate and compare data protection solutions. Taking it a step further, then, shouldn’t people be able to order data protection ‘as a service,’ with the recovery features they want, similar to ordering a pizza?
The First Case: In-House Data Protection
Let’s first consider the case of defining and deploying your own in-house data protection solution, or, to use the analogy, making your own pizza, ideally without the guesswork, complexity, cost, and risk of coming up with your own recipe.
Like family recipes for pizza dough and sauce, there seem to be infinite variations for designing and configuring data protection solutions. Some issues can be complex, such as choosing the right backup storage devices, disaster recovery features, and backup server protection strategy. Also, configuration settings may vary, depending on your choice of backup media and replication strategy. This is like making a pizza from scratch and having to decide on every ingredient that goes into it, from dough, to types of cheese and additional toppings.
IBM recently published guidelines that can dramatically simplify the process of defining and designing customized data protection solutions: the “Tivoli Storage Manager Solutions” guide. The guidelines are based on patterns that have evolved over years of experience deploying data protection solutions. Readers can find help in choosing backup storage devices, determining which disaster recovery features to use, and protecting against failure of the backup server.
The Tivoli Storage Manager Solutions guide helps you determine the architecture that’s right for your desired data protection solution. But what about the details of defining and configuring a backup server that meets scalability requirements, not to mention the numerous details of configuring and tuning for optimal performance?
Fortunately, IBM has also published TSM Server “Blueprints,” which not only specify the system components, operating system, and storage device configuration details, but also provide a script that configures the system to optimal specifications. The Blueprint specifications and scripts come in three sizes: small, medium, and large. They substantially simplify and shorten the process of deploying a TSM server compared to an administrator standing up a server manually. The process is quicker, and the resulting configuration is consistent and repeatable. It also eliminates mistakes in setting up features such as operating system parameters and storage configurations. According to one administrator I spoke with, the time needed to deploy a TSM backup server has been reduced from days to about 15 minutes!
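To make the “consistent and repeatable” idea concrete, here is a hypothetical sketch of size-parameterized configuration generation in the spirit described above. The size profiles, parameter names, and values are invented for illustration; they are not IBM’s actual Blueprint scripts.

```python
# Hypothetical sketch of size-parameterized, repeatable configuration
# generation. The profiles and parameter names below are invented for
# illustration; they are not taken from IBM's TSM Blueprint scripts.
SIZE_PROFILES = {
    "small":  {"db_gb": 1000, "active_log_gb": 128, "daily_ingest_tb": 10},
    "medium": {"db_gb": 2000, "active_log_gb": 128, "daily_ingest_tb": 20},
    "large":  {"db_gb": 4000, "active_log_gb": 300, "daily_ingest_tb": 100},
}


def render_config(size: str) -> str:
    """Render the same configuration text every time for a given size."""
    profile = SIZE_PROFILES[size]
    lines = [f"# Generated backup-server configuration ({size})"]
    lines += [f"{key} = {value}" for key, value in sorted(profile.items())]
    return "\n".join(lines)


if __name__ == "__main__":
    # Running this twice yields identical output; that repeatability is what
    # removes hand-configuration mistakes and shrinks deployment time.
    print(render_config("medium"))
```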
With simplified deployment, best practices, and intuitive administration, solution providers like IBM are taking the complexity out of data protection and recovery. Now, design and deployment can be easy as (pizza) pie.
The Second Case: Outsourcing Your Pizza
Let’s consider the second scenario, in which you choose not to deploy your own data protection solution in-house. Returning to the pizza analogy, you simply want to order a pizza that’s made by somebody else. The good news is, ordering data protection can indeed be that easy. More good news—you can choose how you want to consume the service, whether in your data center, in the cloud, or a combination of the two. In my work at IBM, I see an increasing number of organizations adopting the “as-a-service” approach to IT. As one of the oldest IT processes, data protection is a top candidate for this type of transformation. Not surprisingly, cloud service providers, who are in the business of delivering IT as a service, are rapidly adding data protection and recovery services to their menus.
Whether the data protection service is public or private, I’ve noticed some common practices that may help others adopt the “as-a-service” approach:
- Like a busy pizza kitchen, you want standardization and automation, so orders can be filled rapidly and economically.
- To keep customers satisfied, you want a menu of options that you can deliver consistently and confidently.
- To make the right business decisions, data owners need to see the cost of data protection options.
That may seem like a tall order compared to your current environment, but cloud service providers implement these practices with great effectiveness every day.
The doorbell is ringing. My pizza is here.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:30p
DigiPlex Funds Norway Data Center With Public Bonds
European data center service provider DigiPlex has held a grand opening for its new data center in Norway, which is powered 100 percent by renewable energy. The data center is unusual in that its construction was financed by issuing publicly traded bonds rather than with equity or bank debt.
Featuring two three-story buildings, the site has 18 megawatts of power available, enough to also supply future data center space on adjacent land. The facility’s anchor tenant is the large IT company EVRY ASA, which is taking 45,000 square feet and 11 megawatts in the facility.
The data center is located in Fetsund, a suburb of Oslo. One of its energy-efficiency features is an air-cooling system that uses the naturally cool Nordic climate to reduce energy consumption by a quarter, according to the provider. The data center also features DigiPlex’s De-Ox system, which creates a low-oxygen atmosphere to lower the risk of fire.
DigiPlex uses 100-percent-renewable energy across all of its data centers.
DigiPlex paid for construction of the data center by issuing floating-rate notes on the Oslo Stock Exchange (Oslo Børs), raising approximately $84 million. The company said this was the first time a European data center had issued a publicly traded bond for any purpose. “It is an excellent model for future projects,” said Byrne Murphy, chairman of DigiPlex, in a press release.
John Harry Skoglund, mayor of the Fet Municipality, was in attendance, as was Minister for Climate and Environment Tine Sundtoft. Sundtoft praised the facility’s environmental performance, emphasizing Norway’s potential for data centers in general.
“Our electricity is almost 100-percent renewable,” said Sundtoft at the ceremony. “It is good for the climate to have such data centers located in the Nordics. We would like more of them, as they can become the foundation for a new industry.”
Besides abundant and relatively cheap renewable power and a cool climate, Norway offers political and economic stability. However, its Nordic neighbors Iceland, Sweden, and Finland, all touting similar benefits of renewable energy and a cool climate, have seen the majority of recent data center activity.
Google and Microsoft have built data centers in Finland, Facebook has a massive site in Luleå, Sweden, and other projects, such as Hydro66, are following in its wake. Iceland has Verne Global, and the country has been pushing aggressively for more data centers.
There has been growing interest in Nordic countries as data center destinations in recent years.
Norway is also home to the Green Mountain data center, where Norway’s largest bank, DNB, houses its primary IT infrastructure. Located in an underground bunker, the facility draws frigid water from an adjacent fjord to cool its data halls.
6:00p
DoE Taps Intel, Cray for What May Become 2018’s Fastest Supercomputer
Intel announced that it has been selected by the U.S. Department of Energy to deliver two powerful next-generation supercomputers for the Argonne Leadership Computing Facility at the DoE’s Argonne National Laboratory in Lemont, Illinois, near Chicago. The two systems are part of a multi-million-dollar collaboration between the Argonne, Oak Ridge, and Lawrence Livermore national labs, and the larger of the two may become the world’s fastest supercomputer.
Intel noted that for the first time in almost two decades it was selected as the prime contractor to build the two systems, in collaboration with Cray. The larger of the two systems, called Aurora, will be based on its scalable high-performance-computing-system framework and reach peak performance of 180 petaflops. Currently the fastest supercomputer in the world, according to the most recent Top500 list, is China’s Tianhe-2, with peak performance of 33.86 petaflops. An option exists for Aurora to increase its peak performance up to 450 petaflops, according to Intel.
Aurora will be built on Cray’s next-generation “Shasta” supercomputer architecture. Cray CTO Steve Scott shared a few early details about Shasta, saying it will be the infrastructure that takes the company all the way to exascale computing.
Aurora, which Intel says will most likely be the fastest supercomputer in the world when it is completed in 2018, will combine Shasta with many new Intel HPC building blocks. It will feature the third generation of Xeon Phi processors, code-named Knights Hill, and the next generation of Omni-Path Fabric high-speed interconnect technology. Intel says Aurora will also use its Silicon Photonics technology, a new non-volatile memory architecture, and advanced file system storage based on Intel’s Lustre software.
The new HPC scalable system framework is designed for performance but places equal emphasis on energy efficiency. With over 50,000 nodes, more than 150 petabytes of storage, and peak power consumption of 13 megawatts, Aurora will be six times more energy efficient than Argonne’s current MIRA supercomputer, according to Intel. MIRA, an IBM Blue Gene/Q system, currently has peak performance of 8.58 petaflops, placing it at number five on the Top500 list.
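A quick back-of-the-envelope check of Aurora’s implied efficiency, using only the peak figures quoted above (the arithmetic is the only thing added here):

```python
# Back-of-the-envelope check of Aurora's implied efficiency using the peak
# figures quoted above: 180 petaflops at 13 megawatts.
aurora_peak_pflops = 180.0   # peak petaflops
aurora_peak_mw = 13.0        # peak megawatts

# Convert petaflops to gigaflops and megawatts to watts, then divide.
gflops_per_watt = (aurora_peak_pflops * 1e6) / (aurora_peak_mw * 1e6)
print(f"Implied efficiency: {gflops_per_watt:.1f} gigaflops per watt")
# ~13.8 GF/W. Taken together with the "six times more efficient" claim, that
# would put MIRA somewhere around 2.3 GF/W, although MIRA's power draw is not
# stated in this article.
```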
To give the ALCF an early production system, Intel will deliver Theta, a next-generation Cray XC supercomputer capable of 8.5 petaflops. Theta will feature Xeon processors and second-generation Phi processors (Knights Landing).
“Argonne’s decision to utilize Intel’s HPC scalable system framework stems from the fact it is designed to deliver a well-balanced and adaptable system capable of supporting both compute-intensive and data-intensive workloads,” said Rick Stevens, associate laboratory director for Argonne National Laboratory. “We look forward to collaborating with both Intel and Cray on this important project that will be critically important to U.S. high-performance computing efforts for years to come.”
7:00p
ViaWest Qualifies for Minnesota Data Center Tax Breaks
ViaWest’s Chaska, Minnesota, facility has qualified for the state’s data center tax incentives. Customers within the data center receive a sales tax exemption for IT equipment investments within the facility.
The exemption means significant savings for those with sizable deployments choosing colocation. Servers, cabinets, networking gear, software, and other hardware qualify.
Other service providers, including CenturyLink and Cologix, have also qualified and list the data center tax exemption as a customer benefit.
ViaWest officially opened the data center one year ago. The facility will offer 70,000 square feet of white space at full build.
The area has seen a data center boom over the last few years, due in part to the friendly business environment. Other recent projects include a 20-megawatt data center from Dallas-based DataBank and a third data center for Cologix, whose officials called Minnesota “the new edge.” Other Minnesota data centers include facilities by Stream Data Centers, Digital Realty, OneNeck IT, and Zayo’s zColo.
The wider Minneapolis-St. Paul area was previously underserved in terms of multi-tenant data centers, forcing businesses to either build themselves or look to outside markets.
“This program, combined with Minnesota’s low electricity costs and lack of personal property taxes, allows our customers and future customers to reduce significantly the total cost of ownership and reinvest those savings in their businesses,” said Dan Curry, ViaWest’s director of sales for Minnesota, in a press release.
The data center tax incentive is sponsored by Minnesota’s Department of Employment and Economic Development.
7:30p
Ubiquity Hosting Taps Net Access for Infrastructure Expansion
This article originally appeared at The WHIR
Data center operator Net Access will host Ubiquity Hosting’s cloud business services from one of its New Jersey data centers, the company announced Tuesday. Ubiquity is seeking to expand its platform to meet growing demand for its cloud and virtual server solutions.
Those solutions will now be offered from Net Access’ 120,000-square-foot Parsippany II data center, which the company says offers the scalability and reliability Ubiquity requires to meet enterprise needs. When it opened in 2011, the Tier 4 facility was touted as one of metropolitan New York’s greenest.
“Net Access has an impressive track record of delivering high performance infrastructure solutions backed by an experienced support team, and the speed at which they have delivered on their promises far surpasses other providers in the market,” said Clint Chapman, Co-Founder and CEO of Ubiquity Hosting. “Their Parsippany II data center is a key element of our growth strategy.”
Ubiquity shifted gears substantially just over a year ago, launching new cloud services and consolidating its “Servers” brand into its “Hosting” one. Ubiquity received a strategic investment from private equity firm Seaport Capital a couple of months later to expand into Europe and Asia in 2014. Those expansions are now planned for this year.
This first ran on our sister site The WHIR: http://www.datacenterknowledge.com/archives/2015/04/14/manufacturers-are-turning-to-public-and-private-cloud-idc-report/
11:16p
Data Center Switch Cooling in Consortium’s Crosshairs
As data center network density grows, one unintended consequence with the potential to impede further growth is the physical design of the data center switches themselves. The optical module, the component Ethernet cables plug into, has grown physically denser to the point where cooling the switches has become problematic.
A new industry consortium has been formed to standardize a different data center switch design that moves the optical module from the switch faceplate to inside the system, where it can be mounted on the motherboard, allowing for better airflow and, as a result, improved energy efficiency. Besides more efficient cooling, bringing the modules closer to the network silicon reduces the amount of power the chip needs to interface with them.
The Consortium for On-Board Optics (COBO) was officially formed in March and held its first meeting in Santa Clara, California, this week. One of the founding members is Microsoft, a company whose massive scale means it stands to benefit the most from even incremental efficiency improvements in switch design.
“At this scale, a change that may seem insignificant when you look at one switch or one network interface card gets magnified by the million or more devices we have in our networks,” Brad Booth, principal service engineer for networking at Microsoft, wrote in a blog post.
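To illustrate the scale effect Booth describes with explicitly made-up numbers (the per-device saving below is hypothetical, not a figure from Microsoft):

```python
# Purely hypothetical illustration of the scale effect described above:
# a small per-device saving multiplied across a very large fleet.
watts_saved_per_device = 5      # hypothetical saving per switch or NIC
devices = 1_000_000             # "million or more devices" from the blog post
hours_per_year = 24 * 365

megawatts_saved = watts_saved_per_device * devices / 1e6
mwh_per_year = megawatts_saved * hours_per_year

print(f"Continuous saving: {megawatts_saved:.1f} MW")
print(f"Energy saved per year: {mwh_per_year:,.0f} MWh")
# A 5 W saving across a million devices is 5 MW of continuous load,
# roughly 43,800 MWh over a year.
```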
In addition to Microsoft, members include numerous major network equipment and processor vendors. The list includes Cisco, Juniper, Broadcom, Mellanox, Arista, Dell, and Intel, among others.
COBO’s first order of business will be to create a set of standards to define electrical interfaces, management interfaces, thermal requirements, and pinouts to allow for interchangeable and interoperable optical modules that can be mounted or socketed on the network switch or adapter motherboard.
The data center industry is going through a phase of significant rethinking of the norms, a lot of which has been driven by the likes of Microsoft, Google, and Facebook. The momentum behind Facebook’s Open Compute Project has illustrated that there is pent-up need in the market for hardware that is different from the products leading vendors have traditionally supplied.
It is now more than buzz. Numerous large enterprises, including a handful of major financial institutions, are either testing Open Compute servers or getting ready to deploy them in production.
Data center switches are undergoing a similar transition. Open-source switch designs are now available through OCP, and several major network vendors have launched data center product lines that can be used with Linux-based operating systems.