Data Center Knowledge | News and analysis for the data center industry
Wednesday, August 6th, 2014
Optimize Your Virtual Environment with a Software-Defined Storage Approach
By Hemant Gaidhani, Director of Product Marketing and Management, Enterprise Storage Solutions, SanDisk
As data centers shift workloads and critical applications to virtualized cloud environments, optimizing the performance and elasticity of cloud resources to meet growing business demands is imperative.
This is fueled by CIOs and technical stakeholders who want the data center to deliver performance at scale, and by CFOs who want it to be cost efficient. IT teams must also manage growing data volumes against key performance indicators (KPIs) such as performance, uptime, cost-effective scalability, reliability, and total cost of ownership (TCO).
Solid state drives (SSDs) have not only emerged as the solution to many of today’s prevalent challenges, but are also being combined with caching software to enable new performance and efficiency levels through a software-defined storage approach.
Maintaining high performance and uptime
When hundreds of virtual machines (VMs) across scores of virtual hosts try to access the same storage volumes simultaneously, there is enormous I/O contention on data center resources. This is a very common phenomenon in multi-tenant cloud architectures and has been dubbed the I/O blender effect. The result is the number one performance bottleneck in a virtual environment.
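To make the effect concrete, here is a minimal, purely illustrative Python sketch (the VM count, block ranges and request counts are invented parameters, not figures from this article). Each simulated VM issues a perfectly sequential read stream, yet the blended stream the shared volume actually receives is almost entirely random:

```python
import random

def vm_stream(start_block, length):
    """One VM issuing a purely sequential read of `length` blocks."""
    return [start_block + i for i in range(length)]

# 200 hypothetical VMs, each reading its own contiguous region of the volume.
streams = [vm_stream(start_block=vm * 100_000, length=1_000) for vm in range(200)]

# The hypervisor services whichever VM happens to be ready next,
# so the per-VM streams interleave on their way to shared storage.
merged, cursors = [], [0] * len(streams)
remaining = sum(len(s) for s in streams)
while remaining:
    vm = random.randrange(len(streams))
    if cursors[vm] < len(streams[vm]):
        merged.append(streams[vm][cursors[vm]])
        cursors[vm] += 1
        remaining -= 1

sequential = sum(1 for a, b in zip(merged, merged[1:]) if b == a + 1)
print(f"{sequential / len(merged):.1%} of the blended requests are sequential")
# Every VM was 100% sequential on its own; the shared volume sees roughly 0.5%.
```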
One of the most common ways virtual administrators address this storage I/O performance challenge is by implementing SSDs, so that requests from different VMs can be served with minimal latency. The performance of SSDs enables virtual administrators to keep up with the high I/O demands of virtualized environments. Not only do SSDs help achieve greater VM density, they also improve application performance, making the end-user experience predictable and helping meet service level agreements (SLAs).
The microsecond-level latency of SSDs can overcome many scenarios that have historically challenged virtualized environments, such as boot storms, anti-virus scanning storms and other high-latency events. The I/O blender effect is just one instance where SSDs are frequently used to eliminate latency bottlenecks.
Cost-effective scalability
Though virtualization helps improve utilization of compute resources, it puts enormous strain on traditional primary storage and in many cases can increase the complexity and cost.
Scaling storage in cloud infrastructures can be done more cost-effectively by deploying flash optimization software and SSDs in the server. In doing so, IT managers reduce the I/O load on primary storage resources and eliminate the latency of traversing the network, negotiating storage controllers, and waiting on traditional spinning disks.
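As a rough sketch of why this works (illustrative only, not any particular vendor's caching software), the server-side flash tier can be thought of as an LRU read cache: every hit is an I/O that the network, the storage controllers and the spinning disks never have to service. The class name and sizes below are hypothetical.

```python
from collections import OrderedDict

class ServerSideFlashCache:
    """Toy LRU read cache standing in for a server-side SSD tier."""

    def __init__(self, backend_read, capacity_blocks):
        self.backend_read = backend_read   # callable: block_id -> data (primary storage)
        self.capacity = capacity_blocks    # how many blocks fit on the local SSD
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)        # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)          # pays network + array + disk latency
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)          # evict the least recently used block
        return data

# Usage: repeated reads of hot blocks never reach the primary array.
cache = ServerSideFlashCache(backend_read=lambda b: b"\0" * 4096, capacity_blocks=10_000)
for block in [7, 42, 7, 7, 42, 99]:
    cache.read(block)
print(f"hits={cache.hits} misses={cache.misses}")    # hits=3 misses=3
```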
Another critical consideration for cost-effective, scalable solutions is the ability to integrate with existing data center infrastructure without disruption. Adopting new performance-improving hardware and software should not mean upgrading entire architectures or rendering current investments ineffective.
Reliability and total cost of ownership
As infrastructures scale, they often require additional space, consume more power and demand greater cooling capacity, dramatically impacting resource management and overall cost.
It is essential for storage solutions to scale for both capacity and performance, as well as address the pressing business concern of ownership costs. It is equally important that purchased solutions endure the demands placed on them by virtualized environments with minimal failure rate and downtime.
High-endurance enterprise SSDs deliver predictable, sustainable response times, even on data-intensive workloads. Because SSDs work with hot-swappable storage system designs, no expensive “forklift” upgrades to infrastructures are necessary.
By implementing SSDs, IT managers can deliver high performance with minimal disruption to the data center infrastructure while reducing power consumption and cooling costs. Furthermore, flash’s non-volatile memory protects data that would otherwise be lost in flight in the event of a power outage.
The bottom line for IT decision makers is that using SSDs means that their infrastructure investment will support demands and needs over time. SSDs paired with the right software afford high return on investment (ROI), without the hidden costs of component replacement due to failure or lack of endurance.
A new approach to the virtualized cloud: software-defined storage
Server virtualization has become a data center mainstay because of its ability to deliver increased computing efficiency. In fact, more than half of all servers are already virtualized, with that percentage continuing to climb.
With flash storage technology becoming more prevalent and cost-effective, a new approach to consider in virtualized environments is software-defined storage, which employs software as a means for controlling data center storage.
In the coming year, current flash-based storage systems are expected to transition to a software-defined approach, producing a new landscape that will change the industry permanently. A software-defined storage tier has the flexibility to scale up or scale out easily as application and business needs dictate.
Solutions like VMware’s Virtual SAN can help bring about this change, creating a radically simple storage tier for VMware vSphere environments, allowing customers to expand their Virtual SAN environment as computing needs grow.
Coupled with enterprise SSDs, this approach delivers extremely high performance, making data transfers quick and efficient. Combining direct-attached flash and HDDs in a common storage pool shared by multiple servers results in a simple, high-performance, resilient shared storage solution in which flash acts as a cache that accelerates read/write disk I/O. This means more VMs can be supported by each physical server, saving power, reducing cooling costs, and conserving valuable data center space for future expansion.
Optimizing your operations
Enterprise organizations are continuing to look to the cloud to take advantage of the benefits available with virtual data center infrastructures.
When flash SSD hardware and software solutions are deployed in tandem in a virtualized cloud, organizations can achieve the high performance, scalability and TCO required for demanding workloads and applications.
This software-defined storage approach to virtualized cloud environments will enable businesses to transform their data centers to operate with greater agility, speed and value than ever before.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
HP Offers “Lean” Managed Cloud Service
HP announced HP Helion Managed Virtual Private Cloud (VPC) Lean, a managed Infrastructure-as-a-Service offering for lighter workloads, such as application development and test environments and workplace collaboration solutions.
The service is meant to make it easy to get started with HP’s managed cloud. It is a low-cost managed virtual private cloud for entry-level access, providing some of the basic features found in HP’s full managed-cloud service. Pre-packaged configurations mean faster deployment, and customers can add managed services like backup as they go.
The offering lowers the barrier to entry for managed cloud. HP said it will continue to diversify its pre-packaged cloud solutions, targeting specific use cases to make its cloud infrastructure simpler to use. Cloud providers in general continue to diversify the workloads they target and to make managed cloud services affordable for smaller companies.
The service touts better workload performance and better service delivery at significantly lower cost than traditional managed cloud. Pricing starts at $168 per month for a small virtual server configuration. A pilot trial service is also available.
HP Helion Managed VPC Lean offers HP Account Support along with faster OS and application certification for cloud. The service can be bundled with other enterprise services, including HP Management Services, SAP HANA Management Service and Database-as-a-service.
“HP already offers a feature-rich industry-leading managed virtual private cloud offering for enterprise customers,” said Jim Fanella, vice president, Workload and Cloud, HP Enterprise Services. “The new HP Helion Managed VPC Lean now delivers a lower-priced alternative designed to enable clients to further optimize cloud workloads in the enterprise, while still providing superior, enterprise-class service and performance.”
In another bid to make using cloud simpler, HP recently announced the Helion Network. The Helion Network is an attempt to create an ecosystem of independent software vendors, developers, system integrators and value-added resellers that will drive adoption of open standards-based hybrid cloud services.
The network is essentially a meeting ground for those developing cloud solutions that are, to varying degrees, HP-centric. Those that join the Helion Network have a chance to tap into a wider customer base for cloud that harnesses HP and partner technology and expertise.
SQL-on-Hadoop Player Splice Machine Tops Off $15M Round With $3M More
Hadoop relational database management system (RDBMS) provider Splice Machine has topped off an earlier $15 million Series B funding round with an additional $3 million. The top-off signals strong investor interest in the company and provides more capital to support a period of fast growth.
Correlation Ventures led the additional round. The Series B in February was led by Interwest Partners and Mohr Davidow Ventures.
Splice Machine is designed to scale real-time applications on commodity hardware without application rewrites. It replaces traditional RDBMSes, such as Oracle and MySQL, that are running into scaling or cost issues. A transactional SQL-on-Hadoop RDBMS, Splice Machine helps customers power real-time big data analytics and applications.
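Because Splice Machine presents itself as an RDBMS, applications reach it over ordinary SQL with transactional semantics rather than Hadoop-specific APIs. The Python sketch below illustrates that idea through a generic JDBC bridge; the driver class, JDBC URL, port, credentials and jar path are placeholders assumed for illustration, not details taken from Splice Machine's documentation.

```python
import jaydebeapi  # generic JDBC bridge for Python; any JDBC-capable client would do

# Hypothetical connection details -- substitute the values for your own deployment.
conn = jaydebeapi.connect(
    "com.example.SpliceClientDriver",            # placeholder driver class name
    "jdbc:splice://splice-host:1527/splicedb",   # placeholder JDBC URL and port
    ["app_user", "app_password"],
    "/opt/drivers/splice-client.jar",            # placeholder client jar
)

cur = conn.cursor()
# Two statements that must succeed or fail together, exactly as they would
# against Oracle or MySQL -- except the data is spread across a Hadoop cluster.
cur.execute("UPDATE inventory SET qty = qty - 1 WHERE sku = ?", ("SKU-123",))
cur.execute("INSERT INTO orders (sku, qty) VALUES (?, 1)", ("SKU-123",))
conn.commit()

cur.close()
conn.close()
```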
“We provide what we call an affordable scale-out solution,” said Monte Zweben, co-founder and CEO of Splice Machine. “When databases hit the wall, take data off centralized infrastructure and spread across a cluster of commodity machines. We are a software company that is essentially deployed on premise or in the cloud.”
Zweben said that the original $15 million financing was a significant event, allowing the company to transition from development to commercial. “The funding has been going towards the sales force, marketing, consulting force in the field and making us a real enterprise software company.”
The company provides “the scalability of Hadoop and HBase, the ubiquity of SQL, and the transactional integrity of an RDBMS,” according to Zweben. “The most important trend is the fact that companies are now moving to scale out for powering real-time applications. Hadoop is not just a data science.”
Splice Machine is in fast growth mode. In 2012, the company tripled its staff and engaged with about 10 charter customers. Zweben said he expects the company to be around 60-70 strong by the end of the year (from about 35 staff members today).
“If there’s one thing that makes us radically different, it is that we provide a platform that is truly general-purpose and can power apps,” he said. “Most scale-out offerings are focused on data science and analytics. We provide a real-time concurrent platform for transactional applications for users hitting [an app] in real-time. Think of the last time you went shopping – the same time you were doing that, there were a thousand users doing similar things. In the background, inventory levels were changing. This needs to be kept consistent and in real time. We are the only ones on Hadoop that provide that.”
Latest Cumulus Linux OS Release Supports x86 Architecture
Cumulus Networks, a startup with a Linux-based operating system for commodity data center network switches, has added support for a new hardware architecture and expanded the feature set in the latest Cumulus Linux 2.2 release.
The OS now supports x86 CPU architectures, making it simpler to deploy on Dell S6000-ON and Penguin Computing Arctica 4806XP switches. Cumulus now supports 16 hardware platforms, two CPU architectures and five vendors.
Cumulus launched the OS in June 2013, marketing it as an alternative to proprietary network software sold by Cisco and other market incumbents that is inseparable from their hardware. The company has been expanding the ecosystem of partners that support its software since the launch.
Jussi Kukkonen, director of product management at Penguin, said the vendor’s customers running high-performance computing clusters and scale-out data center architectures found the flexibility of the x86 and Linux application environment valuable.
“Cumulus Linux 2.2, powering our new Penguin Arctica 4806XP switch, meets the growing customer demand for an open 10/40 Gigabit Ethernet networking fabric with the ease and flexibility of an x86-Linux application environment,” he said.
In release 2.2, Cumulus also added a suite of solutions for dual-attached servers, overlay solutions for L2 cloud services on bare-metal switches and Lightweight Network Virtualization, among other features.
Here’s the list:
- Improved Linux networking experience, bringing scalable and simplified interface configuration for networking devices (ifupdown2)
- Simplified operations and workflow with Prescriptive Topology Manager (PTM)
- Lightweight, consistent fast link failure detection mechanism with Bidirectional Forwarding Detection (BFD)
- Improved routing table scale leveraging the algorithmic LPM (ALPM) table
- Network traffic visibility through sFlow with InMon’s open source Host sFlow agent
- Turnkey infrastructure as a service (IaaS) integration with MetaCloud OpenStack private cloud
MongoHQ in Open Relationship With MongoDB, Changes Name to Compose
MongoHQ is expanding beyond its MongoDB roots and changing its name to match. Under the new name “Compose,” the company now offers ElasticSearch as a service through its formerly MongoDB-exclusive platform. The ElasticSearch service is now in beta.
The company is evolving into a multi-Database-as-a-Service (DBaaS) vendor. Its mission is to let developers choose the database that is right for their needs and help them get to production quickly. “As-a-service” means the company not only helps deploy databases, but also hosts and scales them for customers.
MongoDB, the company behind the database of Compose’s former namesake, raised $150 million last October. It was one of the largest single funding rounds for a database company at the time. In total, MongoDB has raised more than $231 million.
The first expansion, to ElasticSearch, means customers can combine the speed and scale of MongoDB application data with the real-time search capabilities of ElasticSearch through a single platform.
ElasticSearch has seen growing adoption for both search and analytics in enterprises and web-scale operations. The open-source structured search engine has a wide range of uses, including user-defined, flexible queries across a range of attributes and powerful full-text search. It anchors the ELK stack: ElasticSearch for search, Logstash for centralized log data and Kibana for real-time analysis of streaming data.
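A minimal sketch of the pattern this enables, assuming hypothetical endpoints and index/collection names rather than real Compose connection details: the application writes its system-of-record document to MongoDB, indexes a searchable copy in ElasticSearch, and full-text queries return IDs that map back to the MongoDB documents.

```python
from pymongo import MongoClient
from elasticsearch import Elasticsearch

# Placeholder endpoints -- in practice these would be the hosted instances.
mongo = MongoClient("mongodb://db.example.com:27017")
es = Elasticsearch(["http://search.example.com:9200"])

articles = mongo["appdb"]["articles"]

# MongoDB is the system of record; ElasticSearch holds a searchable copy.
doc = {"title": "Flash in the virtualized cloud",
       "body": "SSD caching and software-defined storage"}
result = articles.insert_one(doc)

es.index(index="articles",
         id=str(result.inserted_id),
         body={"title": doc["title"], "body": doc["body"]})

# Full-text search in ElasticSearch; the hit IDs map back to MongoDB documents.
hits = es.search(index="articles", body={"query": {"match": {"body": "caching"}}})
for hit in hits["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["title"])
```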
The company will continue to add databases to its roster of as-a-service options, evolving into a hosted database specialist rather than a hosted MongoDB specialist. However, as it increases its available options, it should increase adoption of MongoDB as well.
Applications are increasingly being powered by more than one database as developers leverage individual strengths of each offering. The company’s expansion will help developers build new applications powered by multiple databases in one-click production deployments. Users pay based on the amount of data that they actually use.
The service helps to scale automatically, growing as your data grows. It provides automatic daily backups as well as a way to quickly integrate new versions of MongoDB and ElasticSearch. Customers can choose where they host their data.
Compose CEO Kurt Mackey explained the name: “In computer science, ‘composition’ is the process of combining existing functions into a new function that solves new problems. It is a simple and well-understood programming concept that can also be applied to database infrastructure. Modern applications have diverse data problems that are poorly served with a single database technology. Compose helps developers combine multiple open-source tools to solve unique data problems in production applications.”
Experts: Too Early to Tell Whether Open-IX Will Reach End Goal
While it has enjoyed support from many big-name companies, including wholesale data center providers like Digital Realty Trust, CoreSite and DuPont Fabros, and Internet companies like Twitter, Netflix and Google, the future of Open-IX, the initiative aiming to create more competition in the U.S. Internet exchange market, remains uncertain.
Its future will depend on whether or not Open-IX-certified exchanges will attract enough members to truly compete with a handful of data center providers that operate the nation’s largest Internet exchanges. These are providers like Telx and Verizon, but the biggest one is Equinix, which controls nearly all key exchanges in the U.S.
The Open-IX association was formed last year. It has created standards it uses to certify Internet exchanges and the data centers that host them. The idea is to use a common set of standards to create a distributed exchange and in that way reach a critical mass of participants that is not limited to one peering point or one building.
On Tuesday, a conference in Palo Alto, California, called Peer 2.0 hosted a panel discussion on Open-IX, its purpose and its potential effect on the market.
Panelists agreed that Open-IX has so far enjoyed positive momentum, but it is too early to tell whether it will actually attract the number of members necessary to make a significant impact on the market overall. It will take several years before it is possible to tell how big that impact will be.
Good early signs
Keith Mitchell, one of the founding board members of LINX (London Internet Exchange), said two exchanges have been certified so far, and the organization was in discussions with three more. Mitchell is an Open-IX board member.
Al Burgio, founder and CEO of IIX, a company that operates a sort of cloud for global network peering, said one positive effect Open-IX has already had is greater awareness of interconnection and peering in the industry. “Open-IX has brought a lot of visibility to interconnection and peering,” he said.
“I think that helps all market participants. That’s definitely been a positive byproduct of Open-IX coming into existence.”
No company today is actually moving from a big commercially operated exchange to an Open-IX-certified one. Vinay Nagpal, director of carrier relations at DuPont Fabros Technology and an Open-IX committee member, said early adopters of Open-IX were using it for redundancy.
DFT was an early supporter of Open-IX and had data centers in Ashburn, Virginia, and Piscataway, New Jersey, certified to host the exchanges. AMS-IX (Amsterdam Internet Exchange) is already up and running in the Piscataway facility, and LINX NoVA (a U.S. subsidiary of LINX) has set up shop in Ashburn.
For DFT, participation in Open-IX and hosting European Internet exchanges was a way to get into retail colocation. Until recently, the company had only been doing large multi-megawatt wholesale deals. Now, by focusing more on interconnection, it hopes to attract a wider variety of customers, which will increase the value of its properties.
That strategy has already borne some fruit. Nagpal said some early adopters have already taken space with DFT to get access to the peering infrastructure.
‘Jury’s still out’
Robert DeVita, general manager at Cologix, a retail colocation provider, said the company already had multiple member-operated Internet exchanges at its facilities and was still trying to figure out what the value of having Open-IX-certified exchanges would be.
He said he supported the initiative and the standards the organization had come up with for facilities to be good hosts for exchanges. Answering the moderator’s closing question about the point of Open-IX, however, he said, “I don’t know. Jury’s still out.”
Mitchell explained that Open-IX had multiple objectives, the big overarching one being a level playing field. Another goal is to lower the cost of peering, but without actually dictating the costs.
“The objective is very much to lower the costs,” he said.
To achieve both of those objectives, however, Open-IX membership will need to reach a certain critical mass. There have to be enough participants to justify going into an Open-IX-certified facility instead of a massive Equinix-operated exchange.