Data Center Knowledge | News and analysis for the data center industry
Monday, March 9th, 2015
12:00p | SocketPlane Deal Illustrates Docker’s Careful Acquisition Strategy

Docker continued to expand the breadth of its capabilities by acquiring networking startup SocketPlane, in a deal announced earlier this month.
The company’s recent moves, such as the addition of container orchestration toolsets, have been aimed at making Docker containers easier to manage at scale, in distributed fashion, and in production. SocketPlane helps with the networking piece.
David Messina, vice president of enterprise marketing at Docker, said the acquisition was driven primarily by the need to support the multi-container, multi-host applications developers were building.
“You have all these orchestration tools that carry [multi-container, multi-host apps] forward, but by solving one problem in the stack, you put pressure on the next,” he said. “Networking is incredibly sophisticated. We have a definition of network connectivity for multi-host and multi-container apps, but we need to build things around monitoring and policy. Once I have defined networking, how can it be portable?”
The goal is to make it so that Docker users don’t have to redefine the network piece to fit those complex applications. It’s extending the portability containers provide to the networking part of the stack. “They shouldn’t have to redefine everything from a networking standpoint,” said Messina.
SocketPlane is a software-defined networking technology that integrates the Docker container management platform with the open source virtual switch Open vSwitch. As part of Docker, the team will work on the Docker project itself rather than on standalone networking products.
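At a very rough level, the plumbing SocketPlane automates looks like the sketch below: an Open vSwitch bridge on each host, plus a VXLAN tunnel port toward peer hosts, so containers attached to the bridge can reach containers elsewhere in the cluster. This is a generic illustration using standard ovs-vsctl commands, not SocketPlane’s actual code; the bridge name, peer address, and tunnel key are made-up placeholders.

```python
import subprocess

def sh(*args):
    """Run a command and fail loudly; requires Open vSwitch to be installed."""
    subprocess.run(args, check=True)

def wire_host(bridge="sp-br0", peer_vtep="192.0.2.11", vni="42"):
    # Create (or reuse) an OVS bridge that local containers plug into.
    sh("ovs-vsctl", "--may-exist", "add-br", bridge)

    # Add a VXLAN tunnel port toward one peer host so containers behind this
    # bridge can reach containers behind the peer's bridge. SocketPlane's value
    # is automating peer discovery and this wiring across the whole cluster.
    sh("ovs-vsctl", "--may-exist", "add-port", bridge, "vxlan0",
       "--", "set", "interface", "vxlan0", "type=vxlan",
       f"options:remote_ip={peer_vtep}", f"options:key={vni}")

if __name__ == "__main__":
    wire_host()
```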
Docker’s acquisitions to date have all been small operations with specific skillsets, and that trend continues with SocketPlane and its six-man team. Smaller teams pose fewer integration issues.
Last year, Docker acquired a four-man team called Koality, along with its continuous integration framework, and a two-man operation called Orchard Labs, which provided an orchestration tool and a hosted Docker service. The hosted Docker service stopped taking sign-ups in October so that the team could focus on Docker itself.
Like the companies acquired previously, SocketPlane has been very active in the Docker community and has a good reputation.
“Everybody understands these guys are highly collaborative – and that is our goal [with acquisitions],” Messina said. “This is just another example of putting the emphasis on ‘batteries included but swappable.’ SocketPlane will build up the API, so others can deliver their networking technology.”
Docker has to be careful with its roadmap, much like any technology company dependent on channels, ecosystems, or development communities. With the rise of Docker comes the rise of an ecosystem that extends and complements Docker containers, and acquisitions impact that wider community.
Docker’s goal here is to offer higher-order networking APIs in a way that ensures all these different vendors continue to build differentiated technologies while preserving freedom of choice, according to Messina.
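“Batteries included but swappable” is, in essence, a plugin architecture: Docker ships a default networking backend but exposes an interface other vendors can implement in its place. The toy registry below illustrates the idea only; it is not Docker’s plugin API (which was still being defined at the time), and the class and method names are hypothetical.

```python
from typing import Dict, Protocol

class NetworkDriver(Protocol):
    """Minimal interface a networking backend would implement (illustrative)."""
    def create_network(self, name: str) -> None: ...
    def connect(self, network: str, container_id: str) -> None: ...

class DefaultDriver:
    """The 'battery included': a stand-in for the built-in backend."""
    def create_network(self, name: str) -> None:
        print(f"[default] creating network {name}")
    def connect(self, network: str, container_id: str) -> None:
        print(f"[default] attaching container {container_id} to {network}")

_drivers: Dict[str, NetworkDriver] = {"default": DefaultDriver()}

def register_driver(name: str, driver: NetworkDriver) -> None:
    """A vendor swaps the battery by registering its own implementation."""
    _drivers[name] = driver

def get_driver(name: str = "default") -> NetworkDriver:
    return _drivers[name]

if __name__ == "__main__":
    get_driver().create_network("app-net")
    get_driver().connect("app-net", "c0ffee")
```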
The company’s acquisitions have been informed by what the community around its technology finds valuable enough to include in the project.
“It is exactly what Docker needs to do as a company to fulfill its vision and the expectations that come along with aggressive fundraising,” StackEngine CEO Bob Quillin said, commenting on the deal. StackEngine is one of the companies in the Docker ecosystem. The startup, which provides automation and management for Docker containers, emerged from stealth last October.
“This natural encroachment has to be included as part of the planning and strategy for companies operating in this space,” Quillin said. “The challenge for Docker going forward will be navigating this slippery slope without losing the hearts and minds of the community that they worked so hard to build. A rising tide lifts all boats – but just make sure you have a fast boat.”

3:00p | SAP Company Fieldglass Opens Three European Data Centers

SAP subsidiary Fieldglass has launched three European data centers, in the U.K., the Netherlands, and Germany. Data center provider ServerCentral will host the company in all three locations. The U.K. and Netherlands sites were live and serving customers at the time of launch.
The German enterprise software giant SAP acquired Illinois-based Fieldglass last year in a bid to expand its Software-as-a-Service capabilities. Fieldglass provides a cloud-based Vendor Management System that lets companies track and manage external, or non-permanent, workers. The company touts more than 300 customers.
Proximity to customers has traditionally been the big driver for European expansion, but the establishment of multiple European data centers for Fieldglass also comes in response to customer concerns about data privacy and sovereignty. The data it handles is sensitive in nature, so the new data centers will help it appeal to European customers.
The data centers also give Fieldglass the ability to replicate data and execute disaster recovery directly within the European Union.
Regulations are impacting the relationship between the buyer and worker, so the company is proactively addressing data privacy and protection laws, Mikael Lindmark, Fieldglass vice president for EMEA, said in a statement.
In the past, many SaaS offerings served data from within U.S. borders. Many U.S.-based companies are launching data centers in Europe following the NSA spying scandal.
Companies including Apple, Amazon Web Services, and Salesforce.com have recently announced strategic European data center investments. Data privacy aside, the cloud market in Europe is growing rapidly, so having infrastructure in the region is generally good for business.
The Fieldglass and ServerCentral relationship began in the U.S.; both companies call Illinois home. ServerCentral is a big DuPont Fabros Technology customer, leasing space in DuPont’s CH1 facility in the Chicago suburbs. ServerCentral also has data centers on both U.S. coasts and in Japan.
“As we expand operations globally, we are addressing data privacy and security with the same rigor as our well-established U.S. operations,” said Dan Bell, vice president of quality assurance at Fieldglass, in a statement. “The EU has very strict data privacy requirements, and I’m confident about our expansion in the region.”

3:30p | Virtualization and Security: Overcoming the Risks

Michael Thompson is the Director of Systems Management Product Marketing at SolarWinds.
Virtualization has been around a long time, and its benefits—from flexibility and scalability to quality assurance and cost savings—are well documented. Nonetheless, it is still often considered a “new” or “emerging” technology, because its rise to truly widespread popularity has come only over the past five or so years.
With any burgeoning technology, whether it be virtualization, mobility, or cloud, security can be a major stumbling block to greater adoption. And as is usually the case, the security concerns surrounding virtualization are not unfounded. Dynamic workloads and shared resources can, for example, open security holes that put entire systems in jeopardy.
Case in point: In November of last year, an attacker sent customers of browser-based testing vendor BrowserStack an email about the company’s VM security, or lack thereof. The email, meant to appear as though it was sent by the company, stated, “We have no firewalls in place, and our password policies are atrocious … it is almost certain all of your data has been compromised.” While BrowserStack denies any truth to the claims made in the email, the incident has naturally spurred many to question whether the company was taking adequate steps to secure its virtual environment.
This particular incident, coupled with pre-existing fears, has heightened concern over the security implications of virtualization and virtual environments.
Risks Associated with Virtualization
So, what are the primary security risks associated with virtualization?
First, virtualization adds additional layers of infrastructure complexity. This means monitoring for unusual events and anomalies also becomes more complex, which in turn makes it more difficult than it already is to identify security issues, such as advanced persistent threats.
Next, virtualized environments are dynamic by design, changing rapidly and regularly. Unlike physical servers, virtual machines can be spun up in a matter of minutes. As a result, it can be easy to lose track of what is online, what is offline, and what potential security holes are exposed. This is related to a phenomenon known as virtual sprawl, in which the number of virtual machines in an environment grows to the point where they can no longer be effectively managed, such as keeping every security patch applied. In such cases, the security of all virtual machines can no longer be guaranteed. Attackers have used offline virtual machines as a gateway to a company’s systems, as was claimed in the BrowserStack breach.
Finally, in addition to the dynamic nature of the virtual machines themselves, workloads can be moved quickly, and this too poses a security risk. A given workload may need a high level of security, and the virtual machine it is initially assigned to may provide it. But when room must be made for more mission-critical workloads, without proper checks and balances in place that workload can easily be moved to a virtual machine with lower-level security, opening a potential hole.
Mitigating the Risks
The BrowserStack incident is just one of many reasons why, despite virtualization’s benefits, there are lingering concerns about its security risks. That is not to say, however, that the risks are unmanageable.
The following are tactics that, if followed, can help mitigate potential threats to virtual environments without the need for burdensome, expensive processes and solutions that simply aren’t an option for many organizations.
- Separation: Establish how and where to separate development, test and production virtual machines.
- Process enforcement: Enable IT-specific processes via self-service portals to increase efficiency and simplify management.
- Sprawl management: Actively manage the virtual environment in terms of what is being used, what’s needed and what’s not.
- Complete stack management: Focus on end-to-end connections within the virtual environment.
- Built-in auditing: Leverage tools to automate security checks, balances and processes wherever possible.
- Patching: Implement a patch maintenance and management process and schedule to make sure patches are up to date for both online and offline virtual machines (see the inventory sketch after this list).
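As one concrete take on the sprawl-management and patching items above, the sketch below uses the pyVmomi library (an assumption; any vSphere inventory API would do) to list every virtual machine with its power state, so that offline machines do not silently drop out of the patch schedule. The host name and credentials are placeholders, and this is an illustration rather than a complete patch-management process.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def inventory_vms(host, user, pwd):
    """Return (name, power_state) for every VM, including powered-off ones."""
    ctx = ssl._create_unverified_context()  # lab convenience only; verify certs in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        return [(vm.name, str(vm.runtime.powerState)) for vm in view.view]
    finally:
        Disconnect(si)

if __name__ == "__main__":
    # Placeholder connection details; feed powered-off VMs into patching too.
    for name, state in inventory_vms("vcenter.example.com", "admin", "secret"):
        print(f"{name}: {state}")
```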
With knowledge of the primary security risks associated with virtualization and a commitment to the best practices that mitigate them, any organization can strike a balance between taking advantage of virtualization’s benefits and maintaining the highest levels of security.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

3:41p | Bank of America Takes Cue from Facebook, Evolves IT With Open Compute

Bank of America is taking a cue from Facebook’s infrastructure playbook with a transition to Open Compute and software-defined technologies, the Wall Street Journal reported. The company is shifting most of its workloads to a software-defined data center setup.
This is a perfect example of the widespread change and reform occurring within enterprise IT. The company moved the majority of backend systems to the cloud a few years ago and wants 80 percent of systems running in software-defined data centers within three years.
The company is considering ways to tap and create its own software-defined technology to eliminate the compatibility issues that come with traditional vendors and to better connect the business. The process began in 2013.
Open Compute has the potential to lower the total cost of ownership for large-scale IT deployments. The open source hardware movement began in 2011, when Facebook open sourced the specs for the hardware inside its Open Compute-optimized data center.
Facebook designed and built its own servers, power supplies, server racks, and battery backup systems. Last year, Facebook said that Open Compute had saved it $1.2 billion. The savings go beyond hardware: Facebook saved enough energy to power 40,000 homes and achieved carbon savings equivalent to taking 50,000 cars off the road, according to Mark Zuckerberg.
The move will have a major impact on Bank of America’s current vendors. Open Compute has been shaking up the supply chain for years: a growing ecosystem of supporting vendors means more options for potential users and more pressure on OEMs to participate. Bank of America going Open Compute has implications beyond the company itself. It is another significant enterprise win for the project, prompting more activity on both the buy and sell sides of the Open Compute equation.
The financial services industry is one of the largest cash cows for traditional IT vendors. Part of the difficulty of making the switch is Bank of America’s long-established deals and partnerships with these vendors. It spends billions annually on infrastructure.
Last year, the company created a separate team to develop its next-generation architecture, David Reilly, global technology infrastructure executive for Bank of America, told InformationWeek. Reilly discussed the need for evolution on the hardware side in particular.
Other big Open Compute success stories include Rackspace and IO. IO worked with AMAX to create Open Compute servers and storage to power IO.Cloud.
Hyve Solutions, a division of Synnex, qualified as a government vendor under General Services Administration (GSA) Schedule 70, meaning government agencies can also tap Facebook’s data center designs through Hyve if they choose.

5:00p | VMware Rolls Out Native CoreOS Support

VMware on Monday rolled out native vSphere and vCloud Air support for CoreOS, the Linux distribution built specifically for web-scale data center infrastructure and application containers.
vSphere is VMware’s flagship suite of software tools for building cloud infrastructure based on the company’s server virtualization technology, and vCloud Air is the company’s public cloud service that extends into clients’ on-premise VMware environments.
Native VMware CoreOS support is a major step forward for the San Francisco-based software startup. It opens lots of doors to enterprise data centers that until now have been closed to its very young operating system. It also opens lots of doors for Docker, the Linux container technology popular with developers that CoreOS was built to work with.
“We bring Docker to the table for a lot of people, where it was pretty inaccessible before,” Kelsey Hightower, a senior engineer at CoreOS, said. Lots of enterprise IT shops are interested in CoreOS and Docker containers but cannot experiment with new technologies that aren’t already “tried and true.” An official stamp of approval by a company like VMware, whose presence is ubiquitous in enterprise data centers, makes it a lot easier for enterprise IT to bring products by companies like CoreOS and Docker into their environments, Hightower explained.
“We’ll be able to tackle the entire Fortune 500 with this,” he said. “They’ve been looking at us for a while.”
VMware CoreOS support starts with vSphere 5.5, but VMware and CoreOS are planning to continue working together to bring it to the recently announced vSphere 6.
Besides having been designed to work with Linux containers, CoreOS has a robust feature set for running on large clusters of servers. The company’s aim has been to enable traditional enterprises to build and operate data center infrastructure the same way internet giants like Facebook, Google, and Amazon do. Containers and compute clusters are both cornerstones of these web-scale data centers.
By using CoreOS, an enterprise IT shop can have all the latest features from Docker, unlike with other enterprise Linux distributions, some of which are only shipping Docker 1.2, Hightower said. The latest version of Docker available today is 1.5.0. Enterprise IT upgrade cycles are usually slow, but CoreOS is designed to be constantly upgraded, so users always have the latest OS kernel and the latest Docker features.
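To make that version point concrete, here is a small sketch, assuming the Docker SDK for Python is installed and a local daemon is running, that asks the daemon which engine and API versions it exposes before relying on newer features:

```python
import docker  # Docker SDK for Python; assumes a reachable local daemon

def report_docker_version():
    """Print the daemon's engine and API versions (e.g., 1.2 vs. 1.5.0)."""
    client = docker.from_env()
    info = client.version()  # dict including 'Version' and 'ApiVersion'
    print(f"Docker engine: {info.get('Version')}")
    print(f"API version:   {info.get('ApiVersion')}")

if __name__ == "__main__":
    report_docker_version()
```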
6:02p | Google Traces Sunday’s Cloud Outage to Faulty Patch

Google Compute Engine, the company’s Infrastructure-as-a-Service cloud, suffered its second outage in less than a month. While not as serious as the Google cloud outage a few weeks ago, the network was again the culprit. Some Google cloud users experienced disruptions for 45 minutes, beginning Sunday around 10 a.m. PST.
Google identified a patch problem as the culprit for network egress issues that caused the cloud outage for some users. The configuration change was tested prior to deployment to production, but it still had a negative impact on some VMs when made live.
The configuration change introduced to the network stack was designed to provide greater isolation between VMs and projects by capping the traffic volume allowed by an individual VM, according to Google.
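Google has not published the mechanism, but capping the traffic volume allowed by an individual VM is conceptually a per-VM rate limit. The token-bucket sketch below is a generic illustration of that idea, with made-up rates; it is not Google’s implementation.

```python
import time

class TokenBucket:
    """Generic per-VM egress cap: bursts up to `capacity` bytes, refilled at `rate` bytes/sec."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the cap: drop or queue the traffic

# One bucket per VM; the numbers are illustrative, not Google's.
caps = {"vm-a": TokenBucket(rate=10e6, capacity=5e6)}
print(caps["vm-a"].allow(1500))  # roughly one Ethernet-frame-sized send
```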
It was a partial outage. Some users weren’t impacted, some saw slowdowns, and some experienced timeouts when trying to contact their cloud VMs.
Google engineers are changing their rollout protocol in response to the latest outage. Future network configuration changes will be applied incrementally, across small fractions of VMs at a time, reducing the exposure if something unpredictable occurs.
The test suite that gave the all-clear signal will be modified in response to the incident as well.
“Future changes will not be applied to production until the test suite has been improved to demonstrate parity with behavior observed in production during this incident,” said the company in a statement.
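As a rough illustration of the revised protocol described above, the sketch below applies a change to progressively larger fractions of a VM fleet and halts if any stage fails its health checks, leaving the rest of the fleet untouched. The apply and health-check functions are placeholders, not Google’s tooling.

```python
import random

def apply_change(vm: str) -> None:
    """Placeholder: push the configuration change to a single VM."""
    print(f"applying change to {vm}")

def healthy(vm: str) -> bool:
    """Placeholder health probe; a real check would verify VM connectivity."""
    return True

def staged_rollout(vms, fractions=(0.01, 0.05, 0.25, 1.0)):
    """Roll a change out to growing fractions of the fleet, stopping on failure."""
    remaining = list(vms)
    random.shuffle(remaining)
    done = 0
    for frac in fractions:
        target = max(done + 1, int(len(remaining) * frac))
        batch = remaining[done:target]
        for vm in batch:
            apply_change(vm)
        if not all(healthy(vm) for vm in batch):
            raise RuntimeError(f"halting rollout; {len(remaining) - target} VMs left untouched")
        done = target
    return done

if __name__ == "__main__":
    staged_rollout([f"vm-{i}" for i in range(200)])
```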
Last month, a network issue led to loss of connectivity to multiple zones. That cloud outage lasted roughly an hour.
Google’s IaaS cloud had a total of 4.5 hours of downtime last year across more than 70 outages, according to CloudHarmony.

6:47p | Telehouse Building 11-Story London Data Center

Telehouse Europe, the data center service provider subsidiary of the major Japanese telco KDDI, announced plans to build an 11-story data center in London. This will be the company’s fifth London data center, adding nearly 250,000 square feet of space to its portfolio.
The Telehouse North Two facility will be on the company’s London Docklands campus. Its existing data center there, Telehouse North, is the primary site of the London Internet Exchange. The campus provides access to more than 500 network carriers, internet service providers, and Software-as-a-Service companies.
The company is investing £135 million in the London data center expansion.
As a major player in the European market, Telehouse is looking at a different competitive landscape today than it was about one month ago. The balance of power in the market shifted substantially after its competitors TelecityGroup and Interxion announced a merger in early February, and NTT (also a giant Japanese telco) acquired a controlling stake in e-shelter, another major European data center provider, earlier this month.
Redwood City, California-based Equinix remained the top data center provider in Europe, with the post-merger TelecityGroup taking the number-two spot and NTT moving into third.
While extremely competitive, London is one of the world’s data center markets where capacity is always in high demand. Expanding capacity in markets such as London, Amsterdam, New York, or Silicon Valley is not a risky move for data center providers, especially when they can offer robust connectivity options.
Telehouse is planning to bring some initial capacity at North Two online in the first quarter of 2016.