Monday, June 13th, 2016
Why HPE Chose to Ship Docker in All Its Servers

One of the big headlines that emerged from Hewlett Packard Enterprise’s Discover conference in Las Vegas last week was CEO Meg Whitman’s statement that she would be willing to consider public cloud partnerships with Google and Amazon similar to the deal the company struck with Microsoft last December for Azure services. Another was the announcement that HPE would soon begin shipping Docker – by all accounts the world’s leading application container platform – with all its servers.
There was a time not long ago when, if a manufacturer of HPE’s stature shipped commercially branded software produced by a vendor of Google’s or Microsoft’s stature, it would immediately trigger skepticism and a truckload of negative comments. Docker is not a brand at that level, at least not today. What the move does accomplish is the inclusion of a critical infrastructure component in modern data centers – especially of the hyperscale variety – in servers by a company responsible for about one-quarter of all global server sales by revenue, according to IDC.
Yes, it’s an open source component; yes, it’s not booted automatically; yes, it’s not part of some classic co-branding scheme where server cases are adorned by blazing blue logos of certification. We live and work in a very different world of data centers, where what we call our “hardware” is actually more of a liquid commodity, and it’s the software that firmly bonds it all together.
“While Docker is a disruptive technology,” said Docker Inc. CEO, Ben Golub, on stage with Whitman during the opening-day keynotes at HPE Discover, “we don’t want the adoption to be disruptive. We want it to be as easy and evolutionary as possible. The fact that you can get servers from HPE, hyper-converged systems from HPE that out-of-the-box have Docker’s commercial engine and commercial support involved, bundled – and you can get that from one company, working in concert – is really remarkable.”
Golub went on to praise his company’s and HPE’s collective efforts at integrating their tools – for instance, Docker’s orchestration tool Swarm in conjunction with HPE’s classic OneView. All of that makes for a nice footnote at the end of a convention news rundown story. But historically, back when giants made bundling deals with giants, both partners would argue that something about the joining of their two services or products would be greater than the sum of their parts.
That surplus element was not discussed at length on stage, perhaps because Docker is not perceived as an A-list market player. Analysts asked about the benefits of the deal chalked it up as a great boost for Docker and probably a blanket endorsement of containerization as a viable platform for workload orchestration. But to understand the elements at play here, you have to look more deeply into the state of the technology orchestrating workloads in today’s data centers.
Global corporations – and to a larger extent than ever before also mid-level enterprises – want to provide IT services to their employees at all levels, as well as to their customers, using a service provider model. They’ve seen the Amazon model and the Google model succeed, and they want their own pieces of them. OpenStack and Docker have both made tremendous headway toward enabling organizations to adopt service provider-style IT models, with self-provisioning, variable distribution at scale, and continuous integration.
So the three big reasons why HPE’s decision to include Docker is at the very least not a small deal all involve how it plays into the needs and desires of this SP-style customer base.
1. Docker helps HPE support more cross-platform workloads. One of the least appreciated aspects of Docker in recent months has been its assimilation into the universe of Windows Server, which remains so very relevant to the established HPE customer base. Indeed, some HPE engineers present at Discover this week were not quite aware that, for well over a year, Docker has been a staging platform for more than just Linux containers.
Helion Cloud Suite is the umbrella brand for HPE’s line of software for building data centers into private and hybrid cloud platforms. Helion OpenStack is the most obvious member of that suite, and Docker effectively joins the line-up, but without sacrificing its native branding or product identity. HPE’s CloudSystem 10 is the latest version of its server, storage, networking, and software bundle pre-configured for provisioning cloud services. The delivery of these services to any client or consumer, even internally, is a process HPE describes as “vending.” In this model, variety of product is key.
As HPE executives described it to Data Center Knowledge this week, this system has three primary customer use cases: 1) virtual machine vending, which is the first generation of virtualization; 2) private cloud for application vending; 3) hyperscale multi-cloud deployment, involving the distribution of applications across clouds. Docker plays a role in use cases #2 and #3.
“Our view is, we want an open ecosystem with different alternatives to deliver those use cases,” Ric Lewis, HPE’s senior VP and general manager for data center infrastructure, said in an interview. “We want to enable the VMwares; we want to enable the Dockers; we want to enable the Hyper-Vs; we want to enable the Helion Cloud Suites for our own stuff. Just like some of the software stacks that enable multiple hardware, we want to do the same thing for them, because we know customers want it. But we also know they’ll want things integrated.”
Lewis went on to remind us that HPE has bundled VMware with servers for years, so bundling Docker is not in any way a departure, from a market perspective. But as HPE engineers reminded us this week, that point Lewis made about customers wanting things integrated is critical. As data centers adopt service provider models, they cannot set themselves up to deliver only one flavor of virtualization or another. And as prospective customers told HPE engineers during several demos we witnessed, it’s absolutely vital that the SP model they adopt enable the staging of containerized workloads alongside support for virtual machines, by way of hypervisors such as VMware ESXi, Xen, KVM, and, still bringing up the rear, Hyper-V.
So the cross-platform nature of what customers are demanding goes far beyond the border between Windows and Linux. They need to be able to deploy networks that can connect applications staged in containers, with virtual machines managed by hypervisors and in turn with SaaS applications served by public cloud providers.
Docker utilizes its own form of network overlays to enable communication between containers. Last year, it acquired a team of developers called SocketPlane to extend its ability in SDN, and the growing ecosystem of Docker support products includes alternative overlays – for instance from WeaveWorks. While these options open up possibilities for microservices architecture, they do not (at least by themselves) enable networking across container and hypervisor workloads.
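For readers who haven’t worked with Docker’s built-in overlays, here is a minimal sketch of the container-to-container side of that picture, using the community Python SDK for Docker (docker-py). The network and image names are arbitrary examples, and overlay networking assumes the Engine is running in swarm mode:

```python
# Minimal sketch (not from the article): create a Docker overlay network and
# attach a container to it with the community Python SDK ("docker"/docker-py).
# Assumes the Docker Engine is part of a swarm, which overlay networking requires.
import docker

client = docker.from_env()

# An attachable overlay lets standalone containers join it, not only swarm services.
overlay = client.networks.create(
    "app-overlay",       # hypothetical network name
    driver="overlay",
    attachable=True,
)

# Containers on the same overlay reach each other by name, even across hosts.
web = client.containers.run(
    "nginx:alpine",      # hypothetical example image
    name="web",
    network="app-overlay",
    detach=True,
)
print(web.name, list(web.attrs["NetworkSettings"]["Networks"]))
```

Note what the sketch does not do: nothing in it can place a hypervisor-managed VM or a bare metal host on that same overlay, which is exactly the gap described above.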
At Discover, HPE engineers demonstrated a networking scheme they’re currently building called Distributed Cloud Networking (DCN). Its basic purpose is to enable policy-driven networking across containers, virtual machines (all flavors except Hyper-V for now), and bare metal servers. One of the inhibitors to the adoption of Docker in data centers, beyond developer sandboxes, has been its seclusion from the rest of the network. Bringing Docker into the Helion fold lets HPE work to integrate workloads and implement multi-tenant distribution of VMs with containers and with bare metal servers at the full scale of the data center.
Simply letting customers choose Docker at their leisure and provision it themselves would not have risen to this level. Still, extending Docker’s use cases beyond its own ecosystem is a very new subject, and Docker declined to comment on it.
2. Integrating Docker lets HPE build security in. One of the goals of the DCN project is to enable microsegmentation. There are many competing definitions of this term from various vendors – including HPE – all of whom want to put their own stamp on this emerging technology. But what these definitions have in common is this: DevOps professionals and security engineers can define access and usage policies for all workload classes in a data center network without them having to be segregated into separate subnets.
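To make that idea concrete, here is a purely illustrative sketch – not DCN’s API, and not any HPE or Docker product – of a policy model keyed to workload class labels rather than to subnets:

```python
# Purely illustrative sketch of the microsegmentation idea described above:
# access policy is keyed to a workload's class label, not to its subnet.
# All names here are hypothetical; this is not HPE DCN code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    kind: str   # "vm", "container", or "bare-metal"
    cls: str    # workload class label, e.g. "web", "db"
    ip: str     # note: the IP/subnet plays no part in the decision below

# Allow rules expressed as (source class, destination class, destination port).
POLICY = {
    ("web", "db", 5432),   # web tier may reach the database tier on 5432
    ("web", "web", 80),    # web tier members may talk HTTP to each other
}

def allowed(src: Workload, dst: Workload, port: int) -> bool:
    """The decision depends only on class labels, so a VM and a container
    sitting in the same subnet can still be held to different rules."""
    return (src.cls, dst.cls, port) in POLICY

vm  = Workload("billing-vm", "vm", "web", "10.0.0.10")
ctr = Workload("orders-db", "container", "db", "10.0.0.11")  # same /24 as the VM
print(allowed(vm, ctr, 5432))   # True  - permitted by class policy
print(allowed(ctr, vm, 80))     # False - db tier may not initiate to web
```

The point of the toy example is only the shape of the decision: policy follows the workload’s class, not its network address, which is what lets mixed workload types share a subnet safely.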
Ethan Melloul, a CSA with HPE, demonstrated DCN with microsegmentation to attendees. “If you don’t have DCN, you can’t do microsegmentation with a container,” Melloul told us as he implemented a security policy that was applicable to a VM and a Docker container in the same network. With a similar method, he continued, an operator could perform a service insertion – effectively re-mapping multiple virtual firewalls, or other security services, to an appliance.
During a demo session, HPE cloud architect Daryl Wan told attendees it’s a relatively trivial task for a security engineer to devise access control policies that apply to whatever’s in a subnet, but when two classes of workload are routed to the same subnet, it’s next to impossible without microsegmentation. A side benefit of this approach is that security policy follows workloads as they migrate to different hosts.
So as the DCN matures, HPE will also be filling a security gap that many say has been lacking in the Docker ecosystem up to now – and which has been another historically touchy subject for Docker. During the keynotes, Docker’s Golub addressed Whitman’s question about Docker container security by assuring the audience that in Docker’s repository all assets are digitally signed. It will be wonderful when digital signatures can be leveraged for other purposes besides identity: for example, workload class identification for purposes of defining security policy. This is something else that HPE’s direct participation brings to the table.
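The article doesn’t name the mechanism Golub was referring to, but Docker’s repository signing is exposed today as Docker Content Trust. As a minimal sketch of how an operator enforces those signatures – the image tag here is just an example – a pull can be made to fail unless the tag carries valid signed trust data:

```python
# Minimal sketch: enforce Docker's image signatures (Docker Content Trust)
# when pulling an image. The article does not name this mechanism; DCT is the
# standard Docker feature for signed repository content. The tag is an example.
import os
import subprocess

env = dict(os.environ, DOCKER_CONTENT_TRUST="1")  # refuse unsigned content

# With content trust enabled, the pull fails unless the tag is signed.
subprocess.run(["docker", "pull", "alpine:3.4"], env=env, check=True)
```

Extending that signature machinery to carry workload-class metadata for policy purposes, as suggested above, would be new work – which is precisely where HPE’s direct participation could matter.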
3. A single point of support. This has been a problem with respect to nearly every commercially available open source project this decade, including OpenStack and Hadoop, as well as Docker: when multiple providers coalesce to provide service under one vendor, how well can that vendor provide support? This is a significant question, especially when you consider that support is how these vendors earn their revenue.
During his segment of the keynotes, Golub was careful to mention Docker’s continuing role in providing support for Docker as a component of Helion Cloud Suite. However, multiple HPE product support personnel made it clear to prospective customers this week that, while Docker would provide expertise, HPE would serve as customers’ primary points of contact. And more than once we were reminded that HPE was effectively founded prior to America’s involvement in World War II, while Docker is barely six years old, and its product just going on four.
Perhaps the key lasting takeaway behind Microsoft’s partnership agreement with Red Hat, announced last November, was a sharing of support resources, to the extent that the two companies would exchange personnel among each other’s offices. Certainly attendees of HPE Discover this week would like to see a similar arrangement between Docker and HPE. At any rate, what they clearly do not want to see is HPE fielding Docker-related support questions, posting them to Stack Overflow, and waiting for responses from the community. HPE cloud architects and product managers told Data Center Knowledge this week that HPE would be providing Docker expertise for Helion customers, to which Docker may contribute.
We asked Paul Miller, HPE’s VP of marketing for data center infrastructure, whether HPE’s integration of Docker with Helion (a process that began last year with enabling Docker visibility from its OneView management tool) was done more because HPE needed to weld Helion Cloud Suite’s components into a cohesive product, or because customers came to HPE directly and asked for it.
“Customers are seeing [Docker] as an alternative to virtualization, to simplify the delivery of applications, like Meg [Whitman] talked about on the stage,” Miller responded. “Since we’ve done integration of Docker with OneView… I can tell you, I’ve had more customers call us up and bring up OneView because of Docker integration than almost any other integration that we’ve done.” Yes, customers are having issues with this ongoing integration, and they’re also having successes. But the message is, they’re coming to HPE because they perceive OneView as the supplier of record.
The common theme here is this: Making Docker available by way of Helion has compelled HPE to apply engineering and support expertise to the problem of integrating Docker with its existing product lines and services. That integration can only serve to improve how Docker works with Helion servers and can certainly open up new avenues for Docker containers cohabiting with other virtualized workloads. VMs, we are frequently reminded, are not going away soon and may not disappear ever. If Docker doesn’t engineer a peaceful co-existence, perhaps a company like HPE should.
Transition to an Agile Storage Infrastructure or Perish

Shachar Fienblit is the CTO of Kaminario.
Today, competitive success depends on being able to respond quickly to changing market demands and to shifting economic and regulatory conditions. Every business process in the enterprise has to be done much faster than it was in the past. This is a paradigm shift: enterprise IT must respond and change direction quickly to support critical initiatives.
Unfortunately, most enterprise IT shops are far from being agile. In fact, quick response to changing conditions and requirements is not an enterprise IT characteristic – and that has to change. Enterprise IT has to be reshaped to become an agile asset to compete in today’s on-demand, competitive and information-driven global marketplace.
For enterprise IT teams, storage solutions continue to be the most challenging area. Companies must accommodate the ever-increasing amount of data being generated. According to widely cited research, the world creates 2.5 quintillion bytes of data every day; 90 percent of the data in the world today has been created in the last two years alone, while another study estimates data production will be 44 times greater in 2020 than it was in 2009.
As a result, storage solutions today must support this data deluge while providing the high level of performance users have come to expect – all in a scalable and cost-effective way. Whether driven by service-level agreements (SLAs) or by high customer expectations, storage systems must be resilient. At the same time, current trends in business applications require storage systems to be agile and flexible, responding to ever-changing demands in a flash.
So what type of storage infrastructure should companies select? Definitely one that delivers performance, resiliency and agility, while helping to reduce storage-related operational and capital expenditure.
Here are a few tips on designing an agile storage infrastructure:
- Consistent performance under a mix of unpredictable workloads: Enterprises are running a mixture of online processing, analytics and virtual workloads on the same system. This is a new challenge: performance SLAs are not always met, and it is practically impossible to predict and plan performance requirements when dynamic workloads change on a daily basis. Flash technology in an all-flash storage array that adapts to mixed workloads is the only way to guarantee consistent performance.
- Ability to scale capacity and storage performance independently: To meet SLAs, customers tend to over-provision storage compute and capacity. This can serve as a temporary solution, but it comes with a significant cost premium. A storage architecture that can scale up and scale out on demand is the only architecture that lets customers meet their storage and performance requirements without over-provisioning unnecessary capacity or compute resources.
- Deliver predictable storage in an unpredictable business world: Since it is becoming impossible to predict infrastructure needs two to three years out, an agile infrastructure provides maximum flexibility to adjust storage to future needs. For example, a software-as-a-service (SaaS) company needs a storage infrastructure that is highly agile so it can boost capacity and computing power to meet customer demand. This approach eliminates the risk of buying specific array models with limited scalability that will not meet new and unpredictable requirements in the future.
- Software-defined architecture for quick deployment of new hardware technologies: Hardware technology, specifically flash media, is improving at a very fast rate, with the price of flash declining between 25 and 30 percent per year (a rough projection of that curve appears in the sketch after this list). As a result, it is crucial that organizations position themselves to reap the benefits of these technology upgrades. A modern software-defined storage architecture should support storage expansion that takes advantage of new flash and controller technologies, and should allow customers to mix and match legacy and new technologies. This enables customers to invest in storage infrastructure that is optimized both for today’s needs and for future hardware improvements.
- Eliminate future “forklift” upgrades: The cycle of upgrading storage systems every three to five years with a forklift upgrade is becoming too risky, complex and resource-draining. IT departments should focus on deriving business value from the data they store, not on long, complex migration projects. A modern storage architecture should allow a company to add or decommission hardware in a very simple way, without disrupting the IT infrastructure.
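To put the flash price curve mentioned above in concrete terms, here is a short back-of-the-envelope sketch; the $0.50-per-gigabyte starting point is an assumed figure for illustration only, not a number from the article:

```python
# Back-of-the-envelope illustration of a 25-30 percent annual flash price
# decline, the range cited above. The $0.50/GB starting price is an assumption
# made only for illustration.
def projected_price(price_per_gb: float, annual_decline: float, years: int) -> float:
    """Price after compounding an annual percentage decline for `years` years."""
    return price_per_gb * (1.0 - annual_decline) ** years

for rate in (0.25, 0.30):
    print(f"{rate:.0%}/yr decline: ${projected_price(0.50, rate, 3):.3f}/GB after 3 years")
```

A three-year-old array bought at the top of that curve is paying roughly two to three times the going rate per gigabyte, which is why an architecture that can absorb new media generations non-disruptively matters.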
The IT challenges faced by enterprise customers are driving them to change the way they architect data centers and storage infrastructure. Today’s business requirements present a great opportunity for companies to transition to an agile storage architecture that will not only meet today’s demands but also support what comes next.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
VMware CEO on Cloud, Containers, and Looking Beyond the Dell-EMC Merger

Brought to You by The WHIR
TORONTO — VMware may be in a holding pattern until July 19 when EMC shareholders vote on the Dell-EMC merger, but it hasn’t stopped CEO Pat Gelsinger from planning for a future where every enterprise uses multiple clouds and VMware plays a critical role in helping its customers navigate and connect all of the pieces.
At a media roundtable in Toronto last week, Gelsinger said that in his 36 years working in IT, this mobile-cloud era is the most transformative, and that everyone – from vendors to channel partners – will have to adapt.
“None is as significant as the period we are in right now,” he said. “You have consumer-driven technologies, the shift from on-premise to off-premise, the disruptive effects of mobile and mobile cloud, change of business models from perpetual and capitalized to subscription, all of these are creating such violent shifts that everybody, including us, needs to navigate to the other side of that.”
“We believe we have huge assets and opportunities to go through that but we like everybody else have to navigate our business, business model, customer relationships, to the other side of this tectonic shift in the industry.”
See also: VMware Cloud Chief Bill Fathers to Step Down
VMware Bets on NSX
One of these assets is VMware’s NSX network virtualization platform for the software-defined data center (SDDC) – a technology that he says is his number one priority.
“I compare NSX, our networking and security platform, to ESX of 2004. It is that big. It will be even more important than compute virtualization is. I see this as huge for us. We’re going after billions of dollars of potential markets,” he said.
See also: Big Switch Networks Embraces VMware NSX
One of the areas that will be less of a priority – at least for now – is containers. Gelsinger says that “containers are early in the hype cycle” and played down recent comments from HPE CEO Meg Whitman that suggested containers could make VMware irrelevant.
“We’ve announced a complete new product family, Photon, which is optimized for container environments. Sort of the thesis behind Meg’s [Whitman] comments were some of the things behind virtualization that have moved from the infrastructure to the application layer and as that moves you don’t need some of that in the infrastructure. We would agree.”
Read more: Why HPE Chose to Ship Docker in All Its Servers
VMware Channel Partners
VMware and other vendors aren’t the only ones navigating these changes; in many ways partners are facing the biggest hurdle on the frontlines as they figure out what all of these changes mean to them.
“We talk about the disruptive cycle and what’s happening; they are absorbing that into their businesses. One of the big shifts that is happening is that business model shift from perpetual to subscription; it affects the entire financials of their organization and how they look at their business,” said Donna Wittmann, executive director of channels, alliances and commercial sales for VMware Canada.
“They’re looking at compensation models for their teams and how do they drive the transition and get the right balance.”
“How did channel partners make money in the past? They’d deliver boxes, and then they supported the box and the integration of it and services. That was sort of the standard channel business model,” Gelsinger said. “Now it’s a cloud-delivered service. There’s no box. There are no services to install the box. All of a sudden, how do they work in that environment? It’s a very dramatic shift for a lot of those partners.”
“Some will make it through the transition and some won’t. Some will need to migrate in other different directions for their business models in the future,” he added.
Industry Predictions
As for the industry as a whole, Gelsinger said that the Dell-EMC deal is on the “front end” of a lot of industry consolidation.
“Obviously in this space there will be enormous scale, supply-chain efficiencies that will result,” he said. “Customers will be big winners in all of that.”
While Gelsinger acknowledges the important role that mega cloud providers play in enterprise cloud environments (he referenced a VMware customer, one of the top three car manufacturers in Germany, that uses AWS, Microsoft Azure, and VMware), he said that there is “enormous diversity that will occur in this environment” that doesn’t often get addressed.
“I’m not trying to dismiss how big and exciting the mega-clouds are, but to say ‘there’s going to be four global cloud providers’ is dead wrong,” Gelsinger said.
Looking to the Future
The theme of navigation comes up a lot talking to Gelsinger: partners navigating through business model changes, enterprises navigating from on-premise to multi-cloud environments, and of course, VMware’s navigation through the impending Dell-EMC merger.
“When people talk about VMware in the case of these enormous, tectonic shifts that are occurring in the industry [my hope is] that we will have created enough evidence, the growth of those products and our strategy that everybody will say, oh, they’re clearly part of our strategic future,” he said.
This first ran at http://www.thewhir.com/web-hosting-news/vmware-chief-cloud-consolidation-and-beyond-the-dell-emc-merger
Symantec to Buy Security Software Firm Blue Coat for $4.65B

(Bloomberg) — Symantec is preparing to acquire Blue Coat Systems for about $4.65 billion in cash, a deal that will add to its cyberdefense technology and fill a high-turnover CEO position.
Blue Coat CEO Greg Clark will take the helm of the combined corporation and join its board after the deal closes in the third quarter, both companies said in a statement. Clark will become Symantec’s fourth CEO in as many years, fulfilling its search for a leader with experience running a cybersecurity company and providing what investors hope will be much-needed stability. He replaces Michael Brown, who had overseen the January sale of Symantec’s Veritas data-storage division for $7.4 billion to the Carlyle Group.
The Mountain View, California-based company is in the midst of a major transition as it tries to recapture its momentum in the fast-growing cybersecurity market. The world’s largest developer of security software — which reduced its sales and earnings forecasts in April — is trying to re-invent itself for an industry that’s now vastly different from the antivirus software arena it helped pioneer.
See also: Symantec Signs Multi-Megawatt Lease at Santa Clara Data Center
“With this transaction, we will have the scale, portfolio, and resources necessary to usher in a new era of innovation designed to help protect large customers and individual consumers against insider threats and sophisticated cybercriminals,” Dan Schulman, chairman of Symantec, said in the statement.
Clark will assume leadership of a company that’s fallen behind Palo Alto Networks, FireEye, and other younger rivals in developing technologies to detect more advanced threats. Blue Coat brings a suite of products that can make Symantec more competitive in areas such as protecting customers’ data in the cloud, performing digital forensics in the hunt for hackers, and managing encrypted network traffic.
Symantec had been in advanced talks to acquire FireEye earlier this year, according to a person familiar with the matter. But discussions broke down over Symantec’s concerns about FireEye’s future growth potential, the person said, declining to be identified because the discussions were private. Like Symantec, FireEye is struggling with competition and had also recently cut revenue forecasts.
Talks with Blue Coat then accelerated, the person said. As part of the deal, Bain Capital, which controls the target company, will put $750 million of its proceeds from the sale into the combined company. Private equity firm Silver Lake will double its investment to $1 billion. The Blue Coat transaction should help Symantec realize another $150 million in annual net cost savings, on top of a previously announced $400 million yearly reduction in expenses, they said.
The Wall Street Journal first reported on the acquisition Sunday.
New LinkedIn Data Center Strategy Similar to Microsoft’s

We’re not so different, you and I.
While Microsoft has pledged to let LinkedIn “retain its distinct brand, culture, and independence” after its $26.2 billion acquisition of the biggest social network for professionals closes, it is likely that the new parent will eventually want LinkedIn’s data center strategy to match its own.
Microsoft transitioned to a uniform infrastructure strategy across all its product segments in the 2013-2014 time frame, going from a past model where every product group made its own IT decisions to the current one, where the company has several server SKUs optimized for various purposes, and its various groups have to choose from that list. The company switched to this approach because it was a more economical way to scale globally.
There are several recent examples of a company that’s been acquired eventually transitioning its infrastructure strategy to match the parent company’s. The process also usually includes consolidation of data center resources, with the subsidiary moving its applications onto the parent’s infrastructure.
One example is Instagram, which in 2014 moved its applications from Amazon Web Services into Facebook’s data centers, about two years after Facebook acquired it. Another example is LinkedIn itself, which last year moved the application stack of SlideShare, the company it acquired several years prior, out of a managed hosting provider’s data center into one of its own data centers.
 LinkedIn’s Singapore data center (Photo: LinkedIn)
Whether the process of LinkedIn’s absorption into Microsoft will include infrastructure consolidation is unclear at the moment. As the two examples above illustrate, companies can take several years to make such a decision and implement it. However, because of some changes LinkedIn recently made to its data center strategy, aligning with Microsoft’s will not be an end-to-end switch for LinkedIn’s infrastructure team.
LinkedIn has been transitioning to a hyperscale data center strategy, one that’s similar to the strategy used by Microsoft, Google, Facebook, and other companies that provide services over the internet at global scale. That strategy includes designing a lot of the technology that underpins its application in-house, including everything from custom data center cooling and power infrastructure to networking switches. It also includes shifting to a limited number of uniform server SKUs in all data centers.
The first facility where LinkedIn is applying this strategy is its new data center outside of Portland, which it leases from Infomart Data Centers. The second was a LinkedIn data center in Singapore, which came online in March, its first data center outside of the US, leased from Digital Realty Trust.
The company has been planning to transition its other data centers, located in California, Texas, and Virginia, to the hyperscale model in the future as well, Yuval Bachar, LinkedIn’s principal engineer of global infrastructure architecture, told us in an interview earlier.