Data Center Knowledge | News and analysis for the data center industry
Thursday, November 13th, 2014
1:00p | Are Legacy Vendors Pulling OpenStack in the Wrong Direction?

Over the past couple of years, OpenStack has turned from a small skunkworks effort to build open source versions of Amazon Web Services-like clouds into a movement backed by some of the IT industry’s biggest legacy vendors. It’s not uncommon nowadays to hear that OpenStack has become the de facto standard for building cloud infrastructure.
Such mainstream support, however, comes at a cost, threatening to detract from the project’s original goal. That’s according to Jim Morrisroe, CEO of Piston Cloud Computing, a San Francisco-based startup co-founded by Joshua McKenty (one of OpenStack’s founding fathers) that helps customers stand up OpenStack clouds of their own.
Morrisroe believes the open source project’s original mission was to help companies build clouds like the ones Amazon has built: web-scale, hyper-converged, running on cheap commodity hardware. But now that the likes of Cisco, HP, and IBM have gotten into it, they’ve been busy building OpenStack plug-ins for their proprietary hardware, diluting the focus on that homogeneous web-scale future.
“Amazon doesn’t buy Cisco, and EMC, and Hitachi,” Morrisroe said. “Anybody that has a box business – proprietary hardware – has jumped into OpenStack to try to get some new life into their proprietary boxes. And so OpenStack, the community, has gotten a little too focused on all of these plug-ins and drivers.”
Juno, the latest OpenStack release that came out in October, was a case in point. It included 10 new drivers for block storage systems by the likes of Fujitsu, FusionIO, Hitachi, Huawei, and EMC. IBM, Mellanox, Juniper, and Brocade all contributed plug-ins for Neutron, the software-defined networking portion of OpenStack, in the release. Cisco added a Neutron plug-in for its Application Policy Infrastructure Controller technology.
Diverse Goals May Be a Good Thing
But the APIC plug-in isn’t the only thing Cisco added in Juno, which illustrates that the issue isn’t quite as black-and-white as it may at first seem. Some of the big vendors do make a lot of contributions that are not exclusively self-serving.
HP, for example, is one of the top contributors to the project. The vendor has a huge vested interest in making sure core components of OpenStack work, since its entire cloud services strategy revolves around the open source technology.
“Of course we want to sell HP hardware,” Bill Hilf, senior vice president of products for HP Helion (the name of HP’s cloud business), said. But the Helion OpenStack distribution supports other vendors’ hardware, not just HP’s.
There are many vendors in the community, each trying to differentiate based on its line of business, and that’s a good thing, Hilf said. “You actually want multiple types of business models to be in the ecosystem.”
To Each His Own
There isn’t a one-size-fits-all OpenStack cloud, obviously. Whether a company buys commodity x86 boxes to deploy its cloud or tries to bolt it onto existing hardware depends on its individual circumstances.
Mark Baker, Ubuntu server and cloud product manager at Canonical, the open source software powerhouse, said he saw big players gravitate more toward commodity infrastructure for their OpenStack clouds, especially companies in the Far East. Financial services and media companies, on the other hand, tend to favor their trusted providers like Dell or HP. But they buy hardware from those trusted vendors’ commodity lines.
Enterprises and service providers typically buy new hardware to build their clouds, while telcos have huge amounts of infrastructure already in place, a lot of which is telco-specific, non-x86 hardware. They are keen to use what they already have.
“All customers have complex heterogeneous environments,” Baker said. “If they have EMC storage, they want to be able to connect [cloud] with EMC storage. If they have a big investment in Cisco or Juniper, then they’ll want to be able to use that technology.”
A Vision for the Future
Piston is proposing an infrastructure vision for the future, but many companies are simply not ready to make the leap just yet. Morrisroe said vendors realize that the future data center is one where hardware is cheap and homogeneous, its functions defined by software, but getting to that future without going out of business is a challenge for them.
CIOs want to put OpenStack in front of their legacy gear because their developers have gotten too used to going around the IT department, renting cloud VMs from Amazon with their credit cards, he said. If they make their own platform more agile, they can stop the “bleed” to the public cloud.
But agility alone won’t cut it. “No matter how agile you make this platform, it still has a cost of goods that is radically different than the construct of the cost of goods on the public cloud side,” Morrisroe said. An agile platform on legacy systems doesn’t address the ultimate problem, which is that the web-scale cloud is not only fast and fungible, it’s also infinitely scalable and more cost-effective, he explained.

4:30p | Increased Data Means Increased Storage, Managing Demand

Nikhil Premanandan is a marketing analyst at ManageEngine, the real-time IT management company.
According to SAP, 90 percent of the data in the world has been created in the last two years. The company further estimates that 2.5 quintillion bytes of data are created every day as a result of the boom in social networking, the rising number of Internet and smartphone users, and the content being shared online. Meanwhile, in a recent study, Cisco estimates that mobile data traffic will grow at a CAGR of 61 percent from 2013 to 2018.
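To put that compound rate in perspective, here is a quick back-of-the-envelope calculation (ours, not Cisco's) of what 61 percent annual growth compounds to over five years.

```python
# Rough illustration of what a 61% CAGR from 2013 to 2018 implies.
# Only the growth factor matters here, not the absolute traffic figures.
cagr = 0.61
years = 5  # 2013 -> 2018

growth_factor = (1 + cagr) ** years
print(f"Total growth over {years} years: {growth_factor:.1f}x")
# Prints roughly 10.8x, i.e. mobile data traffic would reach about
# eleven times its 2013 level by 2018 at that rate.
```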
According to IDC, enterprise storage typically grows by 35 to 40 percent per annum, driven in large part by the adoption of virtualization. Desktop and server virtualization have moved physical storage outside the appliance, creating a need for specialized storage devices. The data on these devices is backed up over and over again, adding to the need for more and more storage space.
Where will you store this abundance of data?
Enterprises now face the challenge of selecting the right devices to store their precious data. They can go with either a single vendor or a multi-vendor storage strategy. Introducing a second storage vendor helps companies get the best prices from vendors and thereby reduces total cost of ownership by at least 15 to 25 percent over a five-year time frame, according to Gartner. It also helps avoid vendor lock-in.
While a multi-vendor strategy helps reduce costs, it also requires the integration of myriad devices in a storage network, which increases complexity. After all, each device type comes with its own management console. Integrating all of those tools into a single console is a challenge, and without one the storage admin must juggle between different consoles just to isolate a performance bottleneck.
Even with those separate tools, the admin is not able to isolate the layer in which the issue resides, e.g., the RAID, switch or server layer. Managing systems in a multi-vendor storage environment can be extremely difficult. With different vendors supplying different device types, each with different attributes, a new set of integration and monitoring challenges has emerged.
Keeping the chaos under control
Third-party, multi-vendor tools can be a good option for storage admins who want to monitor their storage arrays, FC switches, tapes, and host and backup servers from a single pane of glass. Such tools typically monitor the basic statistics of hardware and its components like disks, controllers, switch ports and LUNs.
Some of the tools provide a dynamic, graphical representation of the entire storage infrastructure to help the admin isolate issues instantly. With this feature, interconnect issues and port malfunctions can be identified at a glance. Many enterprises also lack a systematic approach to forecasting their storage requirements, and automated, scheduled reports on configured capacity, capacity used, and capacity forecast go a long way in justifying storage purchases.
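As a rough illustration of the kind of forecast such reports automate, the sketch below estimates when an array fills up under the 35 to 40 percent annual growth cited above; the function, array sizes, and growth model are hypothetical examples, not taken from any particular monitoring tool.

```python
import math

def months_until_full(capacity_tb, used_tb, annual_growth=0.35):
    """Estimate months until a storage array fills up, assuming usage
    compounds at the given annual growth rate (hypothetical model)."""
    monthly_growth = (1 + annual_growth) ** (1 / 12) - 1
    return math.log(capacity_tb / used_tb) / math.log(1 + monthly_growth)

# Example: a 500 TB array that is 300 TB full today
print(f"~{months_until_full(500, 300):.0f} months of headroom left")
```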
Storage admins may believe these features are exhaustive, but they would be overlooking the most important part of any storage network: backup and recovery. Backup servers are an essential element of any backup strategy, and monitoring their performance and health is a must. Multi-vendor storage monitoring tools wouldn’t truly be “multi-vendor” if the storage admin had to switch to the backup server’s web client to monitor backups. For a comprehensive storage monitoring solution, integration of backup servers is essential.
Staying current among rapid change
Current trends like flash devices and software-defined storage (SDS) are gaining momentum, and vendors need to add support for these technologies to ride the wave. With rapid innovation in storage technologies, vendors have to ensure that their solutions offer the latest features to attract early adopters.
Admins should select a storage management vendor that has augmented its product to include all the aspects of a storage environment. The solution should be comprehensive to ensure that admins do not have to use another tool for monitoring their storage infrastructure. Only then can multi-vendor storage management tools truly empower storage admins to effectively manage the data explosion taking place on their networks.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

4:36p | Motivated To Influence Data Center San Francisco 2014

Motivated To Influence Data Center San Francisco 2014 will be held Tuesday, December 2, at the InterContinental in San Francisco, California.
This is a new management and technology conference where data center industry experts share insights and best practices to support the delivery of digital services and drive data center transformation. Bringing together business leaders, IT professionals and data center practitioners, the event offers a comprehensive agenda of data center topics, networking, and social opportunities.
Participants will leave with the knowledge and know-how to optimize their organizations’ data centers for efficiency, growth and leading-edge innovation.
For more information – including speakers, sponsors, registration and more – follow this link.
To view additional events, return to the Data Center Knowledge Event Calendar.

5:00p | How to Ensure Fast Failover and Disaster Recovery for Enterprise Apps

Your applications are critical to your business and your users. In fact, much of the focus around mobility, content delivery, and data control directly revolves around virtualized enterprise applications.
Both new and legacy applications are moving to virtualization because it reduces cost, improves hardware utilization and simplifies management. Originally, proprietary virtualization platforms were the dominant forces in the industry. Lately, customers have realized that open virtualization platforms provide higher performance, better functionality and are more cost effective.
The rapid adoption of virtualization has driven demand for support of high availability and disaster recovery functionality. Businesses rely on an “always on” IT infrastructure in order to provide competitive advantage and create unique customer values. Applications built on virtualized environments must deliver appropriate levels of availability with minimal or no downtime.
In this whitepaper, you’ll learn how Symantec and Red Hat worked together to deliver an optimized failover solution. Furthermore, you’ll learn about the distinct advantages that the integration of Red Hat Enterprise Virtualization and Symantec Cluster Server provides to address application high availability and disaster recovery for virtualized environments.
Many organizations rely on more than one virtualization technology. Each server virtualization technology comes with different tools and different management procedures. For example, starting and stopping virtual machines is handled differently on each platform. In addition, virtual machine operations are usually managed by a virtualization/platform team, while the application is usually managed by an application team.
Another pain point can be management of high availability. If every virtualization technology copes with high availability in a different way, this will introduce complexity and management overhead.
The combination of Red Hat Enterprise Virtualization and Symantec Cluster Server addresses these challenges. Key elements of the Red Hat and Symantec solution include:
- Robust, cost-effective virtualization platform
- Ability to scale physical and virtual workloads
- Easy management of heterogeneous server clusters providing universal failover and recovery
- Automation for multi-tier business service recovery
One of the most important capabilities is constant application availability. Symantec Cluster Server running with Red Hat Enterprise Virtualization automates off-site application recovery in case of a disruption at the primary site and increases the business resiliency of applications hosted within the Red Hat Enterprise Virtualization infrastructure. It allows instant detection of failures in applications running within a Red Hat Enterprise Virtualization VM and automated, quick failover to either a local or remote site to ensure continuation of normal business operations.
The IT landscape is rapidly evolving from proprietary systems with all the information located within the organization’s data centers to one with distributed and/or hosted systems based on open source technologies, interconnected to customers and suppliers. This makes reliable and flexible solutions more needed than ever.
Download this whitepaper today to learn how the combination of Red Hat’s virtualization platform and Symantec’s cluster technologies offers organizations an excellent blend of cost efficiency and robust resiliency.

6:50p | Amazon Intros Docker Container Management Service

Riding the wave of rising popularity of Docker containers, Amazon Web Services announced a service meant to make it easier to take applications from development to production using containers on its EC2 Infrastructure-as-a-Service cloud.
AWS CTO Werner Vogels announced the free service during the Thursday morning keynote at the company’s re:Invent conference in Las Vegas. The launch follows the announcement of a similar service by Amazon’s cloud rival Google earlier this month.
Docker is an open source Linux container technology spearheaded by a San Francisco-based startup that goes by the same name. In the 18 months of its existence, the technology has gained widespread popularity because of the way it decouples the application from the infrastructure underneath.
An application container describes that application’s infrastructure requirements and makes it easy to deploy it on any kind of hardware or cloud. The app is not tied to an operating system or a particular server.
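As a simple illustration of that portability, the sketch below uses the Python Docker SDK (a convenience library that is not part of the announcements described here) to pull and run the same public image on whatever host executes the code; the image name and port mapping are arbitrary examples.

```python
import docker  # pip install docker

# Connect to whichever Docker engine is running locally; a laptop,
# a bare-metal server, or a cloud VM all look the same from here.
client = docker.from_env()

# Pull and start the same image regardless of the underlying host.
client.images.pull("nginx:latest")
container = client.containers.run("nginx:latest", detach=True,
                                  ports={"80/tcp": 8080})
print(f"Started container {container.short_id} on this host")
```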
Containers are Hard
Developers love containers because “you can ship them everywhere; they have a standard format,” Vogels said.
Managing multi-container applications deployed in high-availability environments, however, is not easy. “Scheduling containers requires a lot of heavy lifting,” he said.
Things like managing resiliency, deploying the right resources, and managing clusters underneath containerized applications are hard to do. “What if you could get all the benefits of containers without the overhead?” Vogels said.
That’s the promise the new EC2 container service makes. It is a high-performance container management service designed to work at any scale. It enables users to launch or terminate containers on top of cloud VM clusters, deploying them across separate availability zones with centralized cluster visibility and control. It includes a simple API developers can integrate with.
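To make that workflow concrete, here is a minimal sketch of the launch flow using boto3, the AWS SDK for Python; the cluster name, task family, and container definition are illustrative placeholders, and details may differ from the service as it ships, since it was still in preview at the time.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Create a cluster of EC2 instances to schedule containers onto.
ecs.create_cluster(clusterName="demo-cluster")

# Describe the container(s) that make up the application.
ecs.register_task_definition(
    family="web-app",
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "cpu": 128,
        "memory": 128,
        "essential": True,
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
)

# Launch one copy of the task on the cluster.
response = ecs.run_task(cluster="demo-cluster",
                        taskDefinition="web-app", count=1)
print(response["tasks"][0]["taskArn"])
```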
The service itself is free. Users only pay for the EC2 resources they consume.
Amazon Respects Docker’s Key Tenets
Docker CEO Ben Golub took the stage during the keynote, saying he was happy with the way Amazon went about setting up its new service, respecting the portability aspect of Docker, using its native interfaces, integrating with Docker Hub, the centralized repository for container images.
“Docker just turned 18 months old, and this all feels a little surreal,” Golub said about the rate of adoption of the technology and the rate at which the ecosystem around it has been growing, support by AWS being only the latest step in that process.
Docker recently passed its 50 millionth download, he said. It’s being used by thousands of customers, and the user base is not limited to web companies. Hospitals, banks, and government institutions are also on board.
Docker is Hot Among Cloud Providers
Google’s announcement earlier this month was of a service called Google Container Engine. Based on the company’s open source container management project Kubernetes, it is a fully managed cluster manager for Docker containers.
Microsoft, another cloud giant, announced support for Docker containers on its Azure cloud in June, and in October the company said it had partnered with Docker to integrate the technology with the next release of Windows Server. Docker currently only runs on Linux.
Joyent, a smaller IaaS provider based in San Francisco, is working to deliver a service that will enable users to run Docker containers directly on the hardware in its data centers, without a layer of hypervisors and VMs in between. This approach, the company’s CTO Bryan Cantrill claims, enables the cloud to perform better than one that involves server virtualization.

7:02p | Colt Acquires KVH in Asia Pacific Expansion

U.K.-based Colt Group announced intent to acquire Japan’s KVH for €130.3 million (about $162 million). KVH expands Colt’s capabilities in key Asian markets and its ability to serve multi-national customers overall. The transaction is subject to shareholder approval.
The two companies have complementary businesses with similar technology, platforms, business models, and product sets. Both KVH and Colt are owned by Fidelity Investments, which bailed Colt out in 2001 when the telecom bubble burst.
Colt and KVH look to cloud and data center services as a big growth opportunity. Colt offers data center, cloud, and managed services in Europe, while KVH primarily serves Asia Pacific. The integration should go smoothly given the similarities.
The acquisition is a big geographic expansion play, allowing Colt to enter the Asia Pacific IT services market, which is growing about 12 percent a year. The major KVH markets Colt is interested in are Tokyo, Singapore, Hong Kong, and Seoul.
“The acquisition puts Colt in a position to address the emerging strategic interest of global data center providers to establish footprints within core connectivity hubs in Asia,” Jabez Tan, senior analyst for data centers at Structure Research, said in a statement. “KVH enables Colt to serve both existing customers and address new opportunities in Asia, providing a platform for Colt from which it can build upon to make the shift into a fragmented Asian market.”
“I am pleased to announce our plan to acquire KVH,” Colt CEO Rakesh Bhasin commented. “It is a growing business, largely focused on network and data centers in Asia. They have strong capabilities, a significant customer base and great assets, all complementary to our own. This partnership will enable Colt to offer our customers seamless solutions on a global basis and give us a solid platform for growth in Asia.”

7:30p | Cloudius Builds Non-Linux OS for Cloud with Sub-Second Boot Time

Cloudius provides an operating system built from scratch called OSv. It uses the company’s own kernel and is not based on Linux. Handpicked by Amazon CTO Werner Vogels as a startup to watch, the company is exiting alpha and entering into a limited beta.
The big benefit of the cloud OS is speed. Cloudius set out to build the fastest guest OS possible, designed specifically for the cloud. OSv is designed to run on top of hypervisors only and serve web-scale workloads like NoSQL, microservices, and common runtimes.

Using it as a load balancer in the cloud is a good use case because of the fast boot. The company said OSv has a sub-second boot time, an image size under 20 megabytes, and TCP latency reduced by 70 percent.
The founders previously worked on the KVM hypervisor project, including as part of Red Hat. “After spending a lot of time at Red Hat and contributing to Linux, we saw what’s going on with the cloud,” said co-founder and CEO Dor Laor. “Usually the deployment is one application per server, and it’s a waste to have a huge general purpose OS running on another OS. OSv is unique because it runs a single application atop a hypervisor and it’s much easier to use. We load the application transparently, and there’s a lot of performance optimization. In terms of internal manageability, we have a kernel runtime and application. That’s it.”
Why is Amazon excited by Cloudius? It’s not every day that a new OS is written, let alone one written with the cloud in mind. Cloudius speaks to a few trends in the market.
“With the rise of cloud, the amount of servers constantly rises,” said Laor. “There’s a must to automate them. The larger the scale and cluster sizes grow, the opportunity increases. The more automation is needed. The root filesystem is stateless, we do not keep any configuration file in it. Everything is done in REST API. This allows DevOps to manage the entire thing from Amazon cloud services. You don’t even need automation tools like Chef or Puppet.”
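A rough sketch of what that REST-driven management could look like follows; it uses Python's requests library, and the base URL and endpoint paths are hypothetical illustrations rather than OSv's documented API.

```python
import requests

# Hypothetical management endpoint exposed by an OSv instance.
BASE = "http://osv-instance.example.com:8000"

# Query the running OS version (illustrative path, not a documented one).
version = requests.get(f"{BASE}/os/version").text
print("OSv reports version:", version)

# Push a configuration change over REST instead of editing files on disk,
# since the root filesystem is stateless and holds no config files.
requests.post(f"{BASE}/os/hostname", params={"name": "cache-node-01"})
```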
Co-founder Don Marti said the cloud OS beta is going to include pre-built virtual appliances for common horizontally scaled tech like Cassandra and MemcacheD. The company’s goal is to integrate the system with all cloud providers. “We can run on top of any hypervisor,” said Laor. OSv runs on AWS EC2, Google’s Cloud, VMware, and KVM.
Lessons Learned From Docker
It’s all about speed, said Laor. During the alpha they initially had a shell based on JVM, which slowed boot time to three seconds. “We learned fast boot time is really important to users and corrected it,” he said.
The company said it’s learned a lot of lessons from Docker, the application container technology and company that is quickly rising in popularity. “The main lesson is we simplified our user experience, made it super simple,” said Laor. “You can compose our images similar to Docker. Unlike Docker, which runs on Linux, we are an OS built from scratch.”
There has been a burst of innovation in the cloud world recently, much of it driven by the rise of DevOps and by small teams of developers responsible for a growing number of servers.
Another notable startup in this space is CoreOS, a new Linux distribution designed to roll out updates simultaneously across massive server deployments. Google recently released a hosted version of Kubernetes, and Mesosphere released a commercial version of Kubernetes to help manage massive container deployments as well.

8:00p | Microsoft Open Sources the Entire .NET Framework, Sets Plans for Linux Distributions
This article originally appeared at The WHIR
At Microsoft’s Connect() developer event in New York City, Microsoft announced it has open sourced the entire .NET framework, which is central to building Windows apps, and it is planning on releasing an official distribution of the .NET Core for Apple and Linux systems.
According to a blog post from Scott Guthrie, EVP of the Microsoft Cloud and Enterprise group, the .NET Core Runtime is now available on Github under the MIT open source license. This provides everything needed to execute .NET code including the CLR, Just-In-Time Compiler, Garbage Collector, and core .NET base class libraries.
An official distribution of the .NET Core for Apple and Linux systems will enable .NET server and cloud applications to be built and run on Windows Server and Linux systems.
While the Microsoft .NET framework now fits the Open Source Initiative’s definition of open source, the Mono project has long provided a cross-platform, open source implementation of .NET. Guthrie stated that Microsoft would be working closely with the Mono community on the completion of the Linux port.
“The Mono community have done a great job advancing .NET and Linux over the last decade,” Guthrie wrote. “Releasing the .NET Core source under an open source license is going to enable us to collaborate together much more closely going forward. There are many Linux enhancements Mono has built that we would like to use, and likewise there are improvements Mono will be able to benefit from by being able to use the .NET source code.”
These latest efforts around open source and cross-platform compatibility, made under the guidance of new CEO Satya Nadella and his “mobile-first, cloud-first” strategy, illustrate the steps Microsoft is willing to take to ensure its systems will still be used by developers in an ecosystem of many platforms and tools.
The company has said that around 20 percent of VMs running on its Azure public cloud platform are running Linux. Last month, it added CoreOS, a container-based Linux operating system, to the list of Linux distributions it already supports. It also participated in Google’s Kubernetes project, a management solution for containers including Docker containers, and committed to offering Kubernetes support on Azure.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-open-sources-entire-net-framework-sets-plans-linux-distributions

8:30p | Microsoft Beefs Up Enterprise Cloud Security with Aorato Acquisition
This article originally appeared at The WHIR
Microsoft announced on Thursday that it has acquired Israeli enterprise cloud security company Aorato for an undisclosed amount.
Founded in 2011 by Israeli Defense Forces veterans, Aorato uses machine learning to detect suspicious activity on a company’s network, according to a blog post by Takeshi Numoto, Microsoft corporate vice president for Cloud and Enterprise Marketing.
Aorato’s Organizational Security Graph is a “living, continuously-updated view of all the people and machines accessing an organization’s Windows Server Active Directory,” Numoto said. “AD is used by most enterprises to store user identities and administer access to critical business applications and systems.”
Numoto also said that Aorato’s solutions “will complement similar capabilities that we have developed for Azure Active Directory, our cloud-based identity and access management solution.”
Back in July, the Wall Street Journal reported that Microsoft was in talks with Aorato, with a person familiar with the matter expecting the deal to be in the $200 million range.
As part of the acquisition, Aorato will stop selling its Directory Services Application Firewall product. According to a statement on its website announcing the acquisition, Aorato plans to “share more on the future direction and packaging of these capabilities at a later time.”
With Microsoft fighting for a larger share of the public cloud market, extending its enterprise security capabilities in the cloud will certainly help achieve this goal, especially as AWS touts its strong enterprise adoption.
Last month, Microsoft announced several updates to Microsoft Azure aimed at handling high-performance workloads, including a partnership with big data software provider Cloudera.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-beefs-enterprise-cloud-security-aorato-acquisition

10:33p | Intel Designs Custom Chips for AWS’ New C4 Instances

Intel has designed custom Xeon processors for Amazon Web Services that will power the cloud provider’s new server instances optimized for high-octane computing. The chips will provide the highest level of CPU performance EC2 has ever seen.
Amazon previewed the new type of instance, which is not yet available, at its re:Invent conference in Las Vegas Thursday. Called C4, it comes in five different configurations, ranging from two to 36 virtual CPU cores and from 3.75 GB to 60 GB of RAM.
The instances will use hardware virtualization, which is as close to bare-metal cloud as AWS gets, and run within Virtual Private Cloud environments only, Jeff Barr, chief evangelist at AWS, wrote in a blog post.
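For illustration, launching one of these instances inside a VPC might look something like the boto3 sketch below once C4 becomes generally available; the AMI ID and subnet ID are placeholders, and the instance size name is assumed from the range Amazon described.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# C4 instances run only inside a VPC, so a subnet must be supplied.
response = ec2.run_instances(
    ImageId="ami-00000000",        # placeholder AMI ID
    InstanceType="c4.8xlarge",     # assumed name for the 36-vCPU / 60 GB size
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-00000000",    # placeholder VPC subnet
)
print(response["Instances"][0]["InstanceId"])
```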
The custom AWS CPU, called the Intel Xeon E5-2666 v3, is based on the chipmaker’s Haswell architecture and built using its smallest-yet 22-nanometer process technology. The processor runs at a base speed of 2.9 GHz but can go up to 3.5 GHz with Turbo Boost, according to Amazon.
This is not the first time Intel has customized a processor for a big customer. Making tailored chips for cloud service providers, Internet companies, and hardware vendors has grown into a big business for the company in recent years.
Another recent custom job was for Oracle’s massive database machines that came out in July.
Earlier this year, Diane Bryant, general manager of Intel’s data center group, described a new approach the company had taken to tailoring chips for hyper-scale customers using Field-Programmable Gate Arrays (FPGAs).
An FPGA is a reconfigurable semiconductor typically used to give a user the ability to test different configurations before they commit to a volume purchase of non-programmable chips. Intel plans to include an FPGA in a single Xeon package and offload some of the CPU workload to the FPGA.
The chipmaker gives the customer the option of testing different configurations and then ordering static Systems-on-Chip that use the configuration that works best for them. Another option is to deploy Xeon packages with the FPGAs at scale so they can be reconfigured in the future for different workloads.
When Bryant talked about the offering in June, it was not yet available, and she did not say when it would hit the market. It wasn’t clear whether Intel used the approach in designing the latest custom Oracle or AWS CPUs.
In June, Bryant said Intel had designed 15 custom CPUs in 2013 for different customers, including Facebook and eBay. More than double that amount was in the pipeline for 2014, she said.