Data Center Knowledge | News and analysis for the data center industry
Wednesday, December 17th, 2014
1:00p | Why HP is Investing a Lot in the OpenStack Project

Bill Hilf knows a lot about vertical integration after spending 10 years at Microsoft. In his current role running cloud product strategy at HP, he knows enough to realize that vertical integration is not a good strategy for the company’s cloud business.
That realization is an important principle in HP’s cloud plans, centered on OpenStack, the open source cloud architecture. Trying to lock customers into an all-HP OpenStack cloud is a bad idea and one a company like HP is in a good position to avoid.
“We would just limit the amount of market opportunity,” Hilf, senior vice president of product and service management for HP Cloud, said in an interview with Data Center Knowledge. “We don’t have one way to monetize at HP; we have lots of ways to monetize.”
During the last two of his 10 years at Microsoft, Hilf oversaw product management for Azure, the umbrella brand for a multitude of the company’s cloud services. Prior to that, he held general manager positions at several divisions of the software giant, including open source and platform strategy, Windows Server, and technical computing. He joined HP in 2013.
Of course he’d love for a customer to use HP OpenStack software and services to set up their cloud on top of new HP hardware. “We’d love for you to buy HP servers,” he said. “What we won’t say is ‘It only runs on HP servers.’”
Not all of his competitors have the same approach. Oracle, for example, has been focused on enabling OpenStack on its own operating systems and its own hardware. Red Hat’s OpenStack distro is tied to its Enterprise Linux operating system.
Stable Core Platform, Plug-ins for Everything
HP is a highly differentiated company that has lots of different kinds of relationships with customers, not just as a hardware vendor. Some relationships are purely service-based, where the customers don’t use any HP hardware at all. With OpenStack clouds, lots of the relationships are just around software. In fact, most of them have been, Hilf said.
Composability is the main theme. HP wants to have a pure OpenStack platform that can be augmented with as many different plug-ins as possible. “The enterprise needs that flexibility,” Hilf said.
The ability to plug in different kinds of software-defined network controllers, hypervisors, or storage systems is very important. “We can’t only have the HP storage solution as the answer. It can’t only be that the HP SDN controller is the answer.”
For this model to work in the long run, OpenStack, the core platform, has to be solid, which is why HP has been investing so much in the open source project. It is now considered one of the top contributors, on par with Red Hat and Mirantis, according to Stackalytics, which tracks contributions to OpenStack.
“Real IT”
Like many others, Hilf believes OpenStack’s evolution will follow a trajectory similar to the evolution of Linux. The open source OS went from not being considered a serious alternative to Unix to one of the heavyweights in the server world.
“[Linux] eradicated Unix, for sure,” Hilf said. “And it put a major dent into the Windows ecosystem as well.”
Linux was successful because of its flexibility. “As a customer I could do with it what I needed it to do.” Flexibility is what makes OpenStack a serious contender against Amazon Web Services, which offers only a limited set of configurations.
The amount of customization most enterprises need goes well beyond what AWS has to offer, Hilf said. He gave the example of a large aircraft manufacturer that is an HP customer and the IT architecture the company used to build its latest aircraft.
“You name it, they’ve got one of everything in there. Every hardware switch, storage solution, every database, everything is there. And you look at it, and you go, ‘god, that looks like something from 1983.’ And it is. It’s from 1983, 1987, 2004. That’s real IT.”
As OpenStack Foundation COO Mark Collier said in his keynote address at the foundation’s Paris summit in November, there won’t be one cloud strategy that will work for everyone. HP, with its open cloud strategy, wants to be in the position to provide each customer the cloud they need.

4:00p | MongoDB Acquires WiredTiger and its Open Source Storage Engine

MongoDB has acquired WiredTiger, a company with database storage engine technology. WiredTiger will be integrated into MongoDB for performance, scalability, and hardware efficiency gains in the upcoming MongoDB 2.8. Terms of the transaction were not disclosed.
MongoDB is one of the front runners in the high-growth NoSQL database market with a vast community of developers as supporters. It has acted as the foundation for companies like Compose, which built its initial offering around MongoDB. In 2013, Rackspace acquired ObjectRocket, a database-as-a-service offering that uses MongoDB.
The database has grown in popularity over the last five years. The company raised $150 million in financing in 2013, one of the largest single rounds of funding for a database startup at the time. The company has been using the financing to further develop the technology and expand its reach globally.
Some of that work will appear in the upcoming release. It supports “pluggable storage engines,” modules that plug into the database to extend MongoDB and optimize it for different hardware architectures. These extensions will make MongoDB applicable to a wider variety of applications while lowering cost and overall complexity.
WiredTiger is an open source storage engine used to power many high-performance systems, including services at Amazon. By combining modern hardware architectures with efficient software algorithms, it boosts application performance. The benefits are lower storage costs, greater hardware utilization, and more predictable performance.
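For readers who want to see the pluggable-engine mechanism from the application side, here is a minimal sketch using the pymongo driver. It assumes a mongod running locally with WiredTiger available; in releases where the pluggable-engine work shipped, the active engine is reported under a storageEngine section of the serverStatus command.

```python
# Minimal sketch: check which pluggable storage engine a mongod is running.
# Assumes a mongod listening on localhost:27017; older servers may not
# report a "storageEngine" section at all.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

engine = status.get("storageEngine", {}).get("name", "unknown")
print(f"Active storage engine: {engine}")  # e.g. "wiredTiger" or "mmapv1"
```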
The acquisition comes with talent in the form of co-founders Keith Bostic and Michael Cahill and their colleagues. They were architects of BerkeleyDB, the widely used embedded data management software.
“Our focus at WiredTiger has been to rethink data management and create high performance software that solves the challenges of the world’s most demanding applications,” said Michael Cahill, now director of engineering at MongoDB, in a company release. “MongoDB has long been on our radar. Joining its vast community is a huge opportunity for WiredTiger to more broadly benefit organizations of all sizes, in all industries, around the globe.”

4:30p | Undertaking the Challenge to Reduce the Data Center Carbon Footprint

Brian Lavallée is the Director of Product & Technology Solutions at Ciena.
Our never-ending hunger for more data and bandwidth has resulted in an unintended consequence – a substantial increase in global energy consumption. Increasingly, data centers have come under intense scrutiny from environmental groups because of their significant contribution to carbon emissions.
According to the Natural Resources Defense Council (NRDC), US data centers collectively used 91 billion kilowatt-hours (kWh) of electricity in 2013 and are on track to use 139 billion kWh by 2020. Data centers currently consume up to 3 percent of all global electricity production while producing 200 million metric tons of carbon dioxide.
The migration to the cloud is driving the need for more data center capacity, which in turn is increasing energy consumption. While many large data center, cloud, and telecom service providers are mindful of this growing problem, thousands of other business and government data centers, along with smaller corporate and multitenant operations, are not, and they could do more to reduce their carbon footprint.
NRDC projections show a 53 percent increase in data center energy use over a seven-year period, but that outcome is not inevitable. These trends can be reversed if organizations take appropriate action both inside and outside the data center.
Inside the Data Center
Inside the data center, network operators can adopt multiple strategies and tactics to reduce energy consumption. Here are just a few:
Take inventory: Begin with an inventory of all IT assets to assess and understand current power usage patterns; find out what the power cost per transaction and the transactions per kWh are (a simple sketch of these calculations follows this list). The goal is to identify inefficiencies in existing power and cooling patterns and the areas where changes will have the greatest impact on costs and power usage. Once that baseline is established, prioritize which systems need to be upgraded, reconfigured, or removed.
Consider DCIM tools: Embrace non-disruptive power-management tools to perform trend analyses. Data center infrastructure management (DCIM) tools can help companies make their data centers energy-efficient, providing a holistic view that spans data center design, asset discovery, systems management, capacity planning, and energy management.
Go green: Adopt the latest green innovations. For example, free cooling uses outside air or water to cool data center facilities versus using powered refrigeration or air-conditioning units. Free cooling requires more than simply keeping the data center’s windows open – data centers must have filters to catch dust particles that can harm server equipment. The filtered air from outside must then be treated to meet specific humidity levels; high humidity can lead to metal rust, while low humidity can create static electricity problems. Smart temperature controls, solar panels and wind energy can also help data center managers meet environmental requirements, while also cutting electricity costs.
Address server inefficiencies: Servers are under constant strain to keep email, sensitive information, and bandwidth-rich files available, yet according to the NRDC, up to 30 percent of them are running when they are not needed. These are known as “zombie” servers, and many data center managers are completely unaware of the problem. With millions of servers running at only 10 to 15 percent capacity, or at zero in the case of zombie servers, network operators can cut power usage by addressing server inefficiencies. Virtualization is and has been a strong industry trend that helps minimize zombie servers by consolidating multiple workloads onto a single compute platform.
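As a rough illustration of the bookkeeping described above, the sketch below computes transactions per kWh and cost per transaction from metered totals and flags servers whose average utilization suggests they may be zombies. All figures, thresholds, and hostnames are hypothetical placeholders, not data from any real facility.

```python
# Hypothetical illustration: basic efficiency metrics plus a simple
# "possible zombie" check. All figures below are made-up placeholders.

monthly_kwh = 120_000          # metered facility energy use for the month
cost_per_kwh = 0.10            # utility rate in dollars
transactions = 45_000_000      # application transactions served that month

transactions_per_kwh = transactions / monthly_kwh
cost_per_transaction = (monthly_kwh * cost_per_kwh) / transactions
print(f"Transactions per kWh: {transactions_per_kwh:,.0f}")
print(f"Cost per transaction: ${cost_per_transaction:.6f}")

# Hypothetical inventory: (hostname, average CPU utilization over 30 days)
inventory = [("app-01", 0.62), ("app-02", 0.11), ("legacy-07", 0.00)]

ZOMBIE_THRESHOLD = 0.05  # arbitrary cutoff for "doing no useful work"
for host, avg_util in inventory:
    if avg_util <= ZOMBIE_THRESHOLD:
        print(f"{host}: candidate zombie server (avg utilization {avg_util:.0%})")
    elif avg_util < 0.15:
        print(f"{host}: underutilized, consider consolidation via virtualization")
```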
Outside the Data Center
There are also several factors outside the data center that have a direct influence on its energy footprint. Keep these in mind:
Upgrade metro networks: The drive for efficiency is now expanding to the metro network, which acts as the highway between data centers and the end users who connect to them. As bandwidth-hungry applications proliferate, existing network architectures, which were never intended to aggregate such high-capacity connections, are struggling to handle the 10 Gigabit Ethernet (GbE) and 100GbE connections coming into the data center.
With unpredictable bandwidth demands, network operators need to ensure their networks can rapidly deliver high-capacity services, efficiently aggregate users, and provide express connections to data centers. Upgrading to a flexible, agile network architecture that responds well to today’s highly dynamic, cloud-connected world will reduce power usage through more efficient use of network assets.
Lay a foundation for network convergence: The strain created by the shift to cloud-based business models is felt most acutely in the metro network, where end users and content data centers are predominantly situated. The network is a strategic and critical business asset that will ultimately dictate the financial viability of corporations across many different market segments. Look for a service provider that is laying a foundation for network convergence to simplify metro network designs, lower operating costs, and increase agility. Ethernet-based metro networks are increasingly recognized as the technology of choice to simplify overall network designs and deliver significant reductions in energy usage, which translate into meaningful savings in operating expenses.
Embrace synergies between metro and data center technologies: Data centers are all about packet-based connectivity, with an emphasis on programmability, density, scalability, low cost, and low energy consumption. Metro networks are all about coherent optics, scalability, resilience, and the operations, administration, and maintenance (OAM) needed to keep a network that can span hundreds of kilometers healthy.
It is important to deploy data center interconnection that aggregates 10GbE and 100GbE services onto coherent 100G DWDM wavelengths, enabling robust and scalable connectivity between data centers over metro Ethernet networks, and that provides the packet OAM capabilities required to guarantee strict service level agreements.
Once you do make adjustments inside and outside the data center to be more energy efficient, conduct ongoing reviews of IT requirements and services to ensure continuous alignment with business goals. A quarterly review of delivered services over the previous term, and ongoing discussions of future requirements, is a wise practice to make sure you are meeting your current and future objectives.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:12p | Western Digital Buys All-Flash Storage Array Maker Skyera

Storage giant Western Digital and its wholly-owned subsidiary HGST are digging further into the data center and advancing HGST’s enterprise flash business with the acquisition of all-flash array maker Skyera.
Western Digital and Dell previously invested $51 million in Skyera, whose ultra-dense enterprise solid-state storage systems have won many awards and enterprise customers in the company’s short four-year history. Terms of the transaction were not disclosed.
Skyera’s intellectual property and engineering talent will be folded into HGST’s operations and support strategic growth objectives for the solid-state storage portfolio, according to the company. HGST purchased another early innovator in the SSD market called sTec in 2013.
Skyera brings plenty of system-level hardware design knowledge and enterprise flash expertise to HGST. Its founders, Radoslav Danilak and Rod Mullendore, previously worked at SandForce, a flash memory controller company now owned by Seagate.
Skyera sources its NAND flash from Toshiba and adds its own proprietary high-performance flash controller and hardware-accelerated services. The company has also partnered with flash chip provider SK hynix for its skyEagle all-flash enterprise storage array.
“Western Digital and Skyera have had a long-term strategic partnership. By combining Skyera’s innovative flash platform with HGST’s leading solid-state storage solutions and flash virtualization software we plan to provide breakthrough value and capabilities to help customers transform their cloud and enterprise data center infrastructure,” Mike Cordano, president of HGST, said in a statement. “Flash solutions represent a large, exciting growth area for HGST and uniquely complement our existing portfolio.”
HGST will find plenty of competition in the active and growing enterprise SSD market, from Violin Memory and SanDisk among many others. Last spring EMC boosted its enterprise flash storage portfolio by acquiring SSD startup DSSD.

6:35p | Arista Launches Network Automation Software for DevOps

Arista Networks has launched EOS+, a software platform for network programmability and network automation that represents an evolution of the company’s EOS (Extensible Operating System).
The software-defined networking company said the additional layer of flexibility would appeal to DevOps teams and help compute, storage, and application teams integrate with the network. Arista hopes EOS+ will enable use of pre-built and custom EOS applications, as well as integration with a wide range of technology partner solutions.
Arista has built programmability into EOS from the beginning, with specific integrations for clouds, technology partners, OpenStack, and other environments. Evolving EOS to deliver a formal programming and network automation layer further enhances its offering and helps shift the industry from NetOps to DevOps models of architecting and operating networks.
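For a concrete sense of that programmability, EOS exposes a JSON-RPC interface, eAPI, that can be driven from ordinary scripts; EOS+ layers a formal development framework on top of this kind of access. The sketch below is a minimal example that assumes a switch with eAPI enabled ("management api http-commands"); the hostname and credentials are placeholders.

```python
# Minimal sketch: calling Arista eAPI (JSON-RPC over HTTPS) with requests.
# Assumes eAPI is enabled on the switch; host, username, and password
# below are placeholders.
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "runCmds",
    "params": {"version": 1, "cmds": ["show version"], "format": "json"},
    "id": "1",
}

response = requests.post(
    "https://switch.example.com/command-api",
    json=payload,
    auth=("admin", "password"),
    verify=False,  # lab-only: skip TLS verification for a self-signed cert
)
response.raise_for_status()
print(response.json()["result"][0].get("version"))
```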
Arista targets web-scale data center operators, and its customer roster includes Facebook, Morgan Stanley, Netflix, and Equinix. The company had a successful initial public offering on the New York Stock Exchange in June.
Almost the entire senior management team of Arista comes from Cisco. Earlier this month, the networking giant filed a lawsuit against the SDN company, claiming patent and copyright infringement.
The EOS+ platform is made up of a development framework with native access to all levels of EOS, a vEOS virtual machine instance of EOS, and EOS Consulting Services.
EOS+ use case examples that Arista cites are pre-built applications such as ZTPServer for rapid provisioning and its Network Telemetry Application for Splunk Enterprise. The company also promotes a do-it-yourself cloud networking approach to applications that need to be custom-fit for specific network environments.
Najam Ahmad, vice president of infrastructure at Facebook, an Arista customer, said in a statement, “Arista EOS has proven to be a valuable component of our current designs, providing us with a series of useful features, including better control-plane and data-path programmability, the ability to write traffic steering and monitoring applications that integrate with Sysdb and the entire EOS stack running on our Arista devices, and an SDK framework is fairly easy to develop and test our code in. All this allows us to have more visibility in and greater control over our network — and that helps us continue to move fast as we scale.”

8:00p | European Space Agency Picks Orange Business Services for Private Cloud
This article originally appeared at The WHIR
The European Space Agency (ESA) has chosen France-based Orange Business Services to deploy and manage its private cloud. The ESA private cloud is expected to deliver increased efficiency, flexibility and security to the agency’s network of 2,200 staff in eight locations.
The computing needs of the ESA are large-scale and diverse, including mission operations support, mission simulation and testing. The new solution, called esacloud, will provide a common, secure, rapidly provisioned infrastructure for the organization, according to Orange, which will improve productivity while providing lower cost computing resources.
“Esacloud will allow our scientists to do rocket science rather than IT, and our business to jump ahead in time more than five years,” said Filippo Angelucci, ESA head of IT Department and CIO. “We put a high value in close partnership with suppliers in IT and since being selected in 2000, Orange Business Services has helped ESA innovate and be a pioneer in many areas, such as the first European converged MPLS IP VPN. Esacloud marks a new milestone in our joint path.”
Esacloud will be delivered from two mirrored data centers to ensure redundancy for applications and services. Role-based access control and customized security design will provide the necessary high level of security, Orange says.
Orange created Orange Cyberdefense in January after acquiring security management company Atheos.
The selection of Orange as private cloud provider, while perhaps not a foregone conclusion, is not surprising, given the industry and geopolitical circumstances. While American companies like AWS and Microsoft supply cloud services for NASA, and while new private cloud offerings have become available from companies like US-based CenturyLink and China-based Huawei this year, none of those were likely choices.
Due to the scale of the ESA’s needs and prevailing mistrust of US-based clouds, as shown by a May survey of European IT managers, it is probable that only large European providers were in the running for the high-profile cloud services contract.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/european-space-agency-picks-orange-business-services-private-cloud

8:30p | Host Europe Group Acquires Intergenia
This article originally appeared at The WHIR
Oakley Capital Investments announced on Wednesday that it has agreed to sell Intergenia Holding GmbH and its subsidiaries to Host Europe Group (HEG) in a deal valued at 210 million euros (around $261.61 million US).
The consideration will be satisfied in cash.
The disposal is still subject to approval from the German and Austrian merger control authorities, according to a report Wednesday by Alliance News. Upon approval of the acquisition, HEG will be the largest managed hosting provider in Germany, the company says.
“intergenia is a market leader in their field, and we welcome them into the HEG fold,” Patrick Pulvermüller, Group CEO of HEG said in a statement. “We look forward to unlocking our combined expertise to complement and build on our offerings across Europe. We are always looking for ways to enhance the services that we provide our customers. The intergenia product range will add to HEG’s already sophisticated offering, allowing us to continue to help businesses make the most of their online potential.”
Oakley Capital acquired a 51 percent stake in Intergenia in 2011. The company operates the PlusServer, serverloft and SERVER4YOU hosting brands. It also owns the hosting conference WorldHostingDays.
“We are enormously proud to join forces with HEG,” Thomas Strohe, CEO of intergenia, said in a statement. “There is a clear synergy between our two combined businesses and we are excited to move into this new era. Being part of HEG will let us do even more for our customers and help us to expand our already successful business. I wish to thank all the employees at intergenia for their enormous contributions, and look forward to a bright future together.”
Intergenia acquired German managed hosting provider internet24 last year.
On Wednesday morning, Oakley Capital’s shares were up 0.91 percent with a share price of 152.25.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/host-europe-group-acquires-intergenia-holding

9:00p | IBM Adds 12 Cloud Data Centers, Endorses OpenStack Throughout

IBM has added 12 cloud data centers to the list of locations its SoftLayer services are delivered from. Nine of them are inside Equinix data centers. IBM SoftLayer is now part of the Equinix Cloud Exchange, which means Equinix tenants can provision SoftLayer cloud services via APIs instead of setting up physical cross-connects. The new cloud data center locations are part of the $1.2 billion infrastructure push the company kicked off at the start of the year.
The three new locations that are not Equinix sites are dedicated IBM Cloud centers in Frankfurt, Mexico City, and Tokyo. The Equinix data centers are in Australia, France, Japan, Singapore, the Netherlands, and the U.S.
The Equinix partnership lets customers put key apps in Equinix colocation facilities while directly connecting to other apps in any cloud setting that’s most suitable for them.
Puzzle pieces of IBM’s cloud strategy are coming together. The company is going after enterprise hybrid cloud by enabling it in any way it can, be it geographically or technologically.
“The main theme has been around working with our enterprise clients consistently through various entry points,” said Moe Abdula, a vice president at IBM. “The main one has been around providing the necessary reach and expansion clients need to enable cloud. The other is we are expanding our reach; the whole way by which we’re going about expanding the cloud portfolio, specifically from a hybrid angle.”
Openness is Driving Cloud
IBM also announced it has endorsed OpenStack, the open source cloud architecture, across its entire portfolio.
Cloud in general, and especially hybrid cloud, needs open standards. This need is driving unparalleled cooperation between the large vendors, and projects like OpenStack are where that cooperation is in plain sight. Vendors recognize that the value is not in individual pieces, but in making it all work as a whole. This is something that cannot be commoditized.
Much of IBM’s OpenStack work was around the creation of a common language or API. “We shepherded the creation of the OpenStack Governing board,” Abdula said. “We’re taking this a step forward. We are now endorsing OpenStack across all of our environments.”
While the company has offered private on-premise OpenStack before, the platform will now be available across all of IBM’s consumption models.
“Now, we have private off-premise in a managed or public way. We’ve enabled the full API of OpenStack across any target environments.”
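One practical consequence of a common OpenStack API is that the same client code can target a private, hosted, or public OpenStack cloud simply by pointing at a different endpoint. The sketch below illustrates the idea with the openstacksdk library; the cloud profile name is a hypothetical entry in a local clouds.yaml and is not tied to any particular IBM offering.

```python
# Minimal sketch: the same OpenStack API calls work against any conforming
# cloud. "my-openstack" is a hypothetical profile in a local clouds.yaml.
import openstack

conn = openstack.connect(cloud="my-openstack")

# List compute instances and images, regardless of who operates the cloud.
for server in conn.compute.servers():
    print(server.name, server.status)

for image in conn.image.images():
    print(image.name)
```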
IBM will also continue to invest in its Bluemix Platform-as-a-Service, which is based on the open source Cloud Foundry PaaS. “We’re finding that much more, PaaS is going to become a critical entry point for folks not just building social, mobile, but more advanced and richer sets of applications,” Abdula said. “We recently launched Watson capabilities on the PaaS and will have a couple announcements that expand that further.”
Balancing Cloud With Huge Legacy Business
“IBM has a huge client base in traditional enterprise IT services which gives it a built in target audience for its cloud infrastructure services,” Synergy Research Chief Analyst John Dinsdale wrote in an email. “I think that IBM should be able to maintain some exciting growth rates for its private and hybrid cloud services; the bigger challenge that IBM faces is maximizing revenues from its legacy services while the market goes through this huge shift to the cloud.”
Abdula believes that IBM’s place is not in either cloud or legacy, but enabling hybrid openly and completely. “One of the key things we observe, from a cloud perspective, if you endorse and think of hybrid—and look at it as a shift to a hybrid model—what that means for on-premise systems, is that there is a part that also has to be played there. It’s a part that becomes part of the hybrid dimension.”
The legacy business has its place, so long as it doesn’t interfere with or compromise the hybrid infrastructure in which it will ultimately reside.

9:00p | IBM SoftLayer Joins Equinix Cloud Exchange

IBM SoftLayer’s cloud is now available on the Equinix Cloud Exchange, the companies announced today. IBM also announced the addition of 12 data center locations to the SoftLayer cloud, nine of them at Equinix facilities.
Direct links to SoftLayer’s cloud have previously been available at Equinix data centers, but the cloud’s addition to the Cloud Exchange means Equinix customers using the exchange can now dynamically provision SoftLayer cloud resources without setting up a physical cross-connect. SoftLayer’s full cloud portfolio will be available in nine Cloud Exchange markets.
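The SoftLayer side of such a deployment has long been drivable programmatically through SoftLayer’s public API, which is what makes this kind of dynamic consumption possible once connectivity is in place. Below is a minimal, read-only sketch using the official SoftLayer Python client; the credentials are placeholders and the calls only list existing resources.

```python
# Minimal sketch: reading data from the SoftLayer API with the official
# Python client. Username and API key below are placeholders.
import SoftLayer

client = SoftLayer.create_client_from_env(
    username="example_user", api_key="example_api_key"
)

# List the virtual servers on the account and the datacenters available.
for guest in client.call("Account", "getVirtualGuests"):
    print(guest["id"], guest["hostname"])

for dc in client.call("Location_Datacenter", "getDatacenters"):
    print(dc["name"], dc["longName"])
```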
The arrangement is an enterprise hybrid cloud play. Equinix has been going after the enterprise, and so has SoftLayer, especially following its acquisition by IBM.
Colo: Where Hybrid Clouds Happen
There is a symbiotic relationship between colo and cloud. Cloud growth has helped the colocation market, and colocation companies have been big enablers for cloud service providers in terms of infrastructure and access to customers.
Equinix’s Cloud Exchange has made it possible to dynamically and securely hook into a variety of Infrastructure-as-a-Service offerings. The company recently said private links to cloud providers were its fastest-growing business segment.
A Gartner report recently said nearly half of all enterprises will have a hybrid cloud deployed by 2017. “I actually see it happening a little bit quicker,” said Chris Sharp, vice president of cloud innovation at Equinix. “The majority of enterprise customers are not only worried about what they’re trying to do today, they’re trying to future-proof their deployments.”
Enterprises want to ensure they are in the position to take advantage of cloud economics and flexibility. A colo data center — where an enterprise customer takes space to house their servers — that gives them an easy way to privately connect their own infrastructure to a cloud of their choice is an ideal place for that.
Sharp said IBM SoftLayer was one of the more requested clouds among its customers.
Many enterprises that are new to colocation start with a Wide Area Network, and evolve as the WAN becomes a bottleneck. This is where the Cloud Exchange comes in, offering secure, dynamic Infrastructure-as-a-Service options as these businesses grow comfortable.
“Most of the customers we speak with are interested in this type of relationship,” Steven Canale, senior vice president of global sales at SoftLayer, said. “They’re looking to marry tech, to have disaster recovery, and to burst into the cloud. In a colo facility, they want the flexibility to move into the cloud.”
New API Functionality Makes Hybrid Cloud Easier
Equinix also announced enhancements to its Cloud Exchange platform, including API functionality that makes it easier for enterprises to deploy hybrid cloud applications. Equinix built the API functionality using Apigee, one of the leading API management companies.
“Once you get provisioned into the Cloud Exchange, you can dynamically establish a virtual circuit,” Sharp said. “It’s truly elastic. Before it was a traditional cross-connect. Now, once you get it in, it has real-time access. The API functionality integration means efficiently setting up secure, private links to cloud providers.”
“It’s really bringing a cloud-like feel, whereas it was much more manual before,” Canale said.
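Neither company spells out the exact endpoints in the material cited here, so the following is a purely hypothetical sketch of what provisioning a virtual circuit through a REST API of this kind might look like; the URL, payload fields, and identifiers are invented for illustration and do not represent the actual Cloud Exchange API.

```python
# Purely hypothetical sketch of provisioning a virtual circuit via a REST
# API. The URL, payload fields, and identifiers are invented placeholders
# and do NOT reflect the actual Equinix Cloud Exchange API.
import requests

API_BASE = "https://api.example-exchange.net/v1"   # placeholder endpoint
TOKEN = "example-oauth-token"                      # placeholder credential

circuit_request = {
    "name": "softlayer-vc-01",
    "buyerPort": "port-1234",          # hypothetical colocation port ID
    "sellerProfile": "ibm-softlayer",  # hypothetical cloud provider profile
    "bandwidthMbps": 1000,
}

resp = requests.post(
    f"{API_BASE}/virtual-circuits",
    json=circuit_request,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Requested circuit:", resp.json())
```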
Enterprises Want Control
While many enterprises use cloud services in one way or another, they still remain largely skittish of going “full cloud.” One of the main reasons is that they don’t trust the public Internet, which is largely how public cloud providers deliver their services to customers.
“It’s why we came up with Cloud Exchange,” Sharp said. “Better visibility and control, the security they need with cloud accessible through portals and APIs.”
Equinix has continued to grow the ecosystem of cloud providers its data centers give users access to. The company recently partnered with Datapipe to provide managed cloud and started offering private links to Google’s cloud.
The rise of hybrid cloud has pushed an unparalleled rate of partnerships between service providers. Data center companies recognize that they need to be able to enable the cloud piece of hybrid infrastructure, and partnering with cloud providers is the fastest way to do it.
“Facilitating access to cloud infrastructure is a win-win for colocation providers,” said Jabez Tan, senior analyst at Structure Research. “Interconnection services are an additional source of revenue for providers and the reduced latency of a direct connection provides improved levels of performance that are increasingly crucial given the growing complexity of workloads.”
Customers like the convenience of working with a single primary provider for a variety of infrastructure needs, he added. Providers like the customer “stickiness” such arrangements create and the “tangible demonstration of value-add.”