Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, May 27th, 2015

    12:00p
    Converged Data Appliance Startup Rubrik Raises $41M

    Mere months after raising an oversubscribed $10 million round, converged data management system provider Rubrik has raised an additional $41 million in funding led by Greylock Partners. The company’s r300 appliance has also entered general availability, sold exclusively through channel partners.

    Enterprise storage has long been a pain point for many and is one of the biggest impediments to retooling IT for the web-scale age. For this reason, investment continues to flow into companies tackling this space.

    Two breeds of “all-in-one” functionality companies are emerging: those with appliances packed to the gills with features, such as Rubrik and Nutanix, and those that tackle storage strictly from a software perspective, such as Veeam and Springpath. Rubrik says it’s different from the likes of Nutanix because it controls the secondary environment (think backup, etc.) rather than the primary environment that Nutanix provides.

    Rubrik’s team is made up of several Google and Facebook alums looking to translate their web-scale know-how to the enterprise world.

    Will enterprises choose the appliance route or the commodity hardware route? Both have their advantages.

    Appliance vendors argue that enterprises desire the turnkey simplicity of appliances, while those focusing on the software-defined approach say it is not as disruptive to existing infrastructure.

    However, all converged data management vendors face trust hurdles. The software fabric underlying these platforms needs to be easy to learn and use, and potential integration worries must be addressed.

    Enterprise backup and storage fabric competitor Veeam recently noted explosive growth without the need for funding. The company claims it added close to 11,000 paid customers in the last quarter, compared with a past quarterly average of 3,500. Like Rubrik, Veeam is a channel play.

    Rubrik’s sizable funding will help it expand its channel and customer base. The company has eliminated as many barriers to adoption as it could think of: the system reportedly takes 15 minutes to set up and is extensible with public cloud storage.

    The appliance takes up two rack units and consolidates many functions. The company claims it eliminates backup software by integrating data protection, instant recovery, and DevOps infrastructure into a single fabric.

    It comes pre-configured with the Rubrik Converged Data Management Platform. It’s a storage convergence play, with Rubrik taking care of several storage-related processes in what it calls the secondary environment, such as backup, deduplication, and compression.

    Turning Rubrik into a storage endpoint means developers can rapidly provision data. As a mountable live storage endpoint, the company claims it provides zero-time recovery, allowing applications to be recovered instantly. Rubrik also has strong search functionality the company calls “Google-like,” allowing customers to locate any data with predictive search results based on data stored across private and public clouds.

    Research firm IDC estimates that businesses will spend $47 billion in 2015 on infrastructure to protect data, manage disaster recovery, enable DevOps, and archive for compliance.

    Companies are having to do more with less, whether in people or in budgets. IT convergence vendors are glomming on to the idea of DevOps because, as development and IT operations become more intertwined in people and process, they will need infrastructure tailored to treat both as a single entity. Historically separate functionality is collapsing into platforms that act as a single point of control, and these platforms need to be simple to operate.

    Rubrik customer Power Integrations touted its simplicity.

    “In my twenty years of IT experience, I’ve used pretty much every solution in Gartner’s Magic Quadrant and they’re so complicated that you need a PhD to operate any of them,” said Jaime Huizar, senior architect, Power Integrations. “With Rubrik, it took ten minutes to get up and running. Our upfront software and hardware costs were reduced by 4x.”

    Greylock has made several investments around DevOps, believing the evolution will happen at an increasingly rapid pace.

    “We invest in world-class teams who build category defining companies in large markets,” said Asheem Chandna, general partner, Greylock. “As with our previous investments in Data Domain, Pure Storage, and Docker, we see Rubrik as a game-changing company that radically simplifies data management and bridges data across public and private clouds.”

    As part of the investment, Chandna will join Rubrik’s board of directors.

    3:30p
    The Role of Database-as-a-Service in Future Data Centers

    Ken Rugg is a founder and CEO of Tesora.

    Private clouds are here and delivering real value and ROI for enterprises across the board. From self-service compute to storage resources, private clouds are indispensable tools for IT managers to deliver necessary resources. With success in rolling out these initial core services, IT managers are expanding the scope of the services they provide to the users of these private clouds. Next up is the database tier, a critical function of a private cloud data center and central to enabling the largest and most important workloads in the enterprise.

    As mission critical as they are, database workloads can also be the hardest to accommodate in cloud computing, for a number of reasons. First, it’s well known that database servers are typically resource intensive and perform best on dedicated physical servers rather than on virtual servers and abstracted compute resources. Then there is the issue of maintaining the security and privacy of all the data in those databases once it is stored outside locked-down physical servers. The biggest hurdle of all, however, is probably that databases are “stateful” services, meaning that actions in a sequence depend on one another and a failure at any point could result in the loss of critical information. Trying to scale or build out capacity in a reliable way can therefore become complex very quickly.

    Enter DBaaS

    Here’s the good news: In April 2014, OpenStack introduced its Database-as-a-Service (DBaaS) component, called OpenStack Trove. Simply put, its goal is to make it as quick and easy to deploy and manage relational or non-relational databases as it is to provision simple virtual machines or raw storage. To accomplish this, Trove automates complex administrative tasks including deployment, configuration, patching, backups, restores, and monitoring.

    Shared Service Architecture

    Trove leverages core components and shared services of OpenStack so that enterprises can provision DBaaS in their environment. For example, Trove uses the Nova compute service to create virtual machines on which to run database servers, Cinder block storage to provision database storage, and Swift’s object storage to capture backups.

    This architecture means that Trove can build on OpenStack shared services to take advantage of the latest technology. For example, an enterprise may decide to plug high-performance storage into Cinder through OpenStack’s open APIs, or add Software Defined Networking (SDN) capabilities through Neutron. Since Trove is layered on these core services, its users benefit from such improvements without any special customization.

    Guest Agents and Guest Images – Keys to Multi-Database Support

    One of the most powerful Trove features is the way that database instances are launched and managed. Prepackaged guest images of virtual machine configurations are stored in an OpenStack repository called Glance. When a guest image boots, it unpacks itself and produces a full-service, ready-to-use database instance, eliminating the need to provision and configure the database from scratch.

    The guest image includes a guest agent that manages the database instance on behalf of Trove. The guest agent is a small software module that serves as a proxy for Trove to start, stop and manage the various processes that constitute the data store. The result is an architecturally simple construct for provisioning and managing multiple database technologies.
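The proxy role of the guest agent can be sketched in a few lines of Python. This is an illustrative model only, not Trove's actual implementation: the class and method names are invented, and the real agent receives its commands over a message bus rather than direct calls.

```python
# Illustrative sketch of the guest-agent pattern: a small proxy inside the
# guest VM that the control plane instructs, and that in turn manages the
# local datastore processes. Names here are invented for illustration.

class GuestAgent:
    """Manages a database instance on behalf of the control plane."""

    def __init__(self, datastore: str):
        self.datastore = datastore
        self.state = "stopped"

    def handle_command(self, command: str) -> str:
        # The real Trove agent gets commands over a message queue; here we
        # simply dispatch on a string and return the resulting state.
        if command == "start":
            self.state = "running"   # would exec e.g. `systemctl start mysql`
        elif command == "stop":
            self.state = "stopped"
        elif command != "status":
            raise ValueError(f"unknown command: {command}")
        return self.state


agent = GuestAgent("mysql")
print(agent.handle_command("start"))   # running
print(agent.handle_command("stop"))    # stopped
```

Because the control plane only ever talks to the agent, supporting a new datastore mostly means shipping a new guest image with an agent that knows that datastore's start/stop/configure commands.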

    Advantages of the Trove Architecture

    A common management and provisioning RESTful API provides access to DBaaS functionality in a database-agnostic manner. Using this interface, administrators can perform a variety of functions in unified, simplified ways, such as:

    • Spin up instances
    • Create replicas
    • Resize instances
    • Add users and databases
    • Manage database backups
    • Change the instance configuration

    Applications interact with individual database management systems using native data access APIs that execute functions in the manner specific to that database. By separating provisioning and management from the intricacies of accessing data within individual databases, Trove makes life easy for both operators and developers. Trove gives developers self-service capabilities to provision databases that they can query and update in the manner they are used to, on whatever database is best suited for the task at hand. At the same time, operators can manage all of these database technologies in a consistent way without needing to be experts in any particular database. In effect, the database becomes just another service rather than a time-consuming central focus.
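This split between a database-agnostic management layer and datastore-specific behavior can be sketched as a simple facade over per-datastore drivers. The class and method names below are hypothetical, invented to illustrate the pattern; Trove's real interface is the OpenStack Database REST API.

```python
# Hypothetical sketch of a database-agnostic management facade: one
# interface for operators, with datastore-specific work delegated to
# drivers. Not Trove's actual code.

class MySQLDriver:
    def create(self, name: str, size_gb: int) -> str:
        return f"mysql instance '{name}' with a {size_gb} GB volume"

class MongoDriver:
    def create(self, name: str, size_gb: int) -> str:
        return f"mongodb instance '{name}' with a {size_gb} GB volume"

class DBaaSClient:
    """Single management interface; datastore details live in drivers."""
    drivers = {"mysql": MySQLDriver(), "mongodb": MongoDriver()}

    def create_instance(self, datastore: str, name: str, size_gb: int) -> str:
        # The caller never touches datastore-specific tooling directly.
        return self.drivers[datastore].create(name, size_gb)


client = DBaaSClient()
print(client.create_instance("mysql", "orders-db", 5))
```

Applications, by contrast, bypass this layer entirely and talk to the provisioned database through its own native client libraries.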

    The net result is a new model for how enterprises interact with their databases.

    OpenStack Trove benefits include:

    • Multi-database support and certification: For example, Tesora’s DBaaS platform implementation of Trove currently supports Cassandra, Couchbase, MariaDB, MongoDB, MySQL, Oracle, Percona Server, PostgreSQL and Redis. Support for additional databases is under development.
    • Single management interface for many database technologies: Common administrative tasks including provisioning, deployment, configuration, tuning and monitoring are achieved in a simple, unified way.
    • Automated backup and recovery: Minimizes data loss and protects against hardware failure with redundant backups.
    • Enterprise policy compliance: When OpenStack is deployed as a private cloud inside the data center, it adheres to enterprise best practices and policies, such as data retention, data privacy, encryption and backups.

    Trove has made fast progress in its short life and is currently in production at very large scale in Rackspace’s and HP’s public cloud offerings. eBay and several other major enterprises have also started using Trove in their private clouds. So, as you can see, Trove has become a big part of IT’s transition to the cloud, and it is ready for the real world.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:30p
    Sony Acquires Facebook-Born Startup That Repurposes Blu-Rays for Cold Storage

    Sony has acquired Optical Archive, a Facebook spin-out that uses Blu-Ray technology in an innovative way. Rather than a medium for video, Facebook saw in Blu-Ray a potentially cost-effective data storage medium. Sony will likely develop a similar system that may someday be used in data centers. Terms of the deal were not disclosed.

    Optical Archive developed a storage system that packs 1 petabyte of data into a single cabinet filled with 10,000 Blu-Ray optical discs. The solution also employs a robotic retrieval system similar to those used to pull tape from archival storage units. This wasn’t Blu-Ray’s intended use, but it gave the discs a renewed purpose.
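A quick back-of-the-envelope check on that density: 1 petabyte across 10,000 discs works out to 100 GB per disc, which is BDXL-class capacity rather than the 25-50 GB of consumer Blu-Ray. The arithmetic, using decimal storage units:

```python
# Capacity per disc implied by the cabinet's stated density.
PETABYTE_GB = 1_000_000   # 1 PB expressed in GB (decimal units)
DISCS_PER_CABINET = 10_000

gb_per_disc = PETABYTE_GB / DISCS_PER_CABINET
print(gb_per_disc)  # 100.0
```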

    Facebook has not yet deployed the system, the company told Wired, but is currently testing it.

    Optical Archive was founded by Frank Frankovsky, who was responsible for streamlining data center hardware for the social media giant. The Blu-Rays were used for “ultra-cold” storage. While Blu-Ray is not ideal for primary storage because data can’t be retrieved instantly, it provides a very cheap solution for data that is rarely accessed.

    Using Blu-Ray discs offers savings of up to 50 percent compared with the hard disks Facebook uses in its newly completed cold storage facility at its Oregon data center. Blu-Ray also uses 80 percent less energy than cold storage racks, because a cabinet draws power only when it is writing data during the initial burn.

    Each disc is certified to retain data for 50 years, another plus, and can operate in a wide range of environments. While the battle for the high-end, IOPS-driven side of the market raged on, Facebook innovated at the tape tier.

    Facebook’s Giovanni Coglitore, hardware engineering director, said last year that he believed Blu-Ray could potentially “creep into warmer and warmer storage tiers.”

    Facebook believed this could be of broader use, and Sony – creator of Blu-Ray technology – shares that sentiment.

    Discs are currently in development that blow past Blu-Ray’s 50GB threshold, holding as much as a terabyte. That would mean a company’s entire cold storage could fit on a couple of cheap plastic discs.

    Lateral Thinking With Withered Technology

    The death rattle for Blu-Ray discs began almost as soon as they were released, with the emergence of streaming movies and digital downloads. But this is not the first time a mature technology believed to have one foot in the grave has roared back in a big way.

    Nintendo visionary and Game Boy inventor Gunpei Yokoi called this “lateral thinking with withered technology.” Yokoi was famous for finding ways to use outdated technology to create products that blew past technically superior competitors. The Game Boy used a simple dot-matrix screen and crushed full-color, powerful handhelds like the Atari Lynx and the battery-chewing Sega Game Gear.

    Blu-Ray is a mature technology, cheap and well understood. Lateral thinking means looking at exactly that kind of tech and finding radical new uses for it.

    IT operations people are familiar with the concept of lateral thinking with withered technology. Budget limitations often mean having to make something work within limited parameters – stretching technology as far as it can go.

    Facebook has the benefit of experimenting with unique approaches, and the company has graciously shared many of its innovations through the Open Compute Project. Now, if only somebody could figure out a way to repurpose those AOL Trial CDs from the 90s.

    5:53p
    Liquid Web Names Former Cbeyond Head CEO, Receives Investment from Madison Dearborn Partners


    This article originally appeared at The WHIR

    Liquid Web, a Michigan-based web hosting provider, announced on Wednesday that it has received an investment from Madison Dearborn Partners. Liquid Web has not disclosed the financial details of the investment but says the Chicago-based private equity firm has provided a “substantial investment” in the company.

    Matthew Hill, Liquid Web founder and CEO, will be succeeded as CEO by Jim Geiger as part of the transaction. Madison Dearborn previously worked with Geiger in his role as founder, chairman and CEO of Cbeyond.

    Liquid Web was founded 18 years ago and has grown to serve more than 30,000 clients from data centers in Michigan, Arizona and Amsterdam. The company’s suite of products includes its Storm Platform, which provides VPS and bare-metal cloud servers, as well as dedicated server solutions.

    The company’s headquarters will remain in Lansing, Michigan. It has provided further details about what the transaction will mean for customers on its blog.

    Cale Sauter, Liquid Web public relations, says that the company has been exploring growth opportunities for the past nine months.

    “This is a really positive change for our ability to add new products and grow much faster than the rate we even have been at this point,” Sauter tells the WHIR in a phone interview.

    “This is actually something that our founder and CEO Matthew Hill – he started the company 18 years ago – he had noticed that there were other opportunities for the company to grow a little faster and be competitive with the way the market has changed since we started. He was on the lookout for any kind of investor that could really accelerate this for us. He talked to a number of people over the past nine months and this is the team that we found best suited to the needs of the company going forward.”

    Last year, Liquid Web expanded its space in Ann Arbor and announced hiring efforts to grow its workforce in the area.

    “I am pleased and humbled to join such an impressive company founded by an entrepreneur and supported by employees who share my passion for taking care of customers,” said incoming CEO Jim Geiger in a statement. “Liquid Web is an extraordinary company and, as an entrepreneur myself, I will work tirelessly to ensure that the company continues to thrive. I look forward to working again with Madison Dearborn, and together we will focus our efforts on scaling the Liquid Web business globally while remaining dedicated to its core values of unparalleled customer service and hosting solutions.”

    The transaction is expected to close over the summer, subject to customary closing conditions.

    Disclosure: Liquid Web provides hosting to the WHIR and Data Center Knowledge.

    This first ran at: http://www.thewhir.com/web-hosting-news/liquid-web-names-former-cbeyond-head-ceo-receives-investment-from-madison-dearborn-partners

    6:33p
    CloudBolt Brings Self-Service Capabilities to IT Management Platform

    Looking to make it simpler for IT organizations to manage complex data center environments, CloudBolt Software today announced it has added self-service capabilities to its IT management platform.

    CloudBolt CEO Jon Mittelhauser says the company’s namesake IT management framework is designed to span both legacy and modern IT environments consisting of tens of thousands of servers running on premises or in the cloud. Capabilities of the platform include automated server provisioning and management, unified IT management, chargeback and showback reporting, service catalogs, and license management.

    “Our customers have complex headaches,” says Mittelhauser. “We target what we call brownfield environments where there are a lot of legacy servers that need to be managed alongside new ones.”

    Rather than replace existing management frameworks, Mittelhauser says, CloudBolt is designed to function as a management overlay IT organizations can use to invoke a variety of other IT management environments, including Chef, Puppet, HP Server Automation, and cloud frameworks, such as VMware, Amazon Web Services, Microsoft Azure, OpenStack, and Google Compute Engine.

    Other new features include service catalog blueprints that can be accessed via the CloudBolt web interface or a REST application programming interface and the ability to create snapshots of virtual machines.

    IT administrators can also set rate-based limits on specific classes of deployment environments, along with triggers for turning IT infrastructure on or off and rebooting servers, and the platform supports multiple types of currency units.

    IT environments, notes Mittelhauser, are becoming more heterogeneous not only in the virtual and physical servers they have to support, but also in the management frameworks being deployed. Semi-autonomous groups within the IT organization are likely to have embraced multiple management frameworks. CloudBolt provides a mechanism to manage all those frameworks at scale without necessarily requiring every IT team in the organization to standardize on a specific one.
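The overlay idea can be sketched as a thin front end that routes requests to whichever framework a given team already uses. The backend names and methods below are invented for illustration and are not CloudBolt's actual API.

```python
# Illustrative sketch of a management overlay: one entry point that
# dispatches provisioning requests to existing frameworks, so teams can
# keep their own tooling. All names here are hypothetical.

class ChefBackend:
    def provision(self, server: str) -> str:
        return f"chef: converging {server}"

class AWSBackend:
    def provision(self, server: str) -> str:
        return f"aws: launching {server}"

class ManagementOverlay:
    """Routes requests; it does not replace the underlying frameworks."""

    def __init__(self):
        self.backends = {"chef": ChefBackend(), "aws": AWSBackend()}

    def provision(self, backend: str, server: str) -> str:
        return self.backends[backend].provision(server)


overlay = ManagementOverlay()
print(overlay.provision("chef", "legacy-db-01"))
print(overlay.provision("aws", "web-42"))
```

The design choice is the point: the overlay adds a single point of control without forcing any group to abandon the framework it has already invested in.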

    Mittelhauser recognizes that it tends to be only the largest IT organizations that need to address that level of complexity. But as enterprise IT becomes more distributed in the age of the cloud, the number of IT organizations wrestling with high levels of complexity is steadily increasing.

    As the number of business processes that span multiple systems continues to grow and organizations continue to mature in managing IT as a true service, interest in more comprehensive approaches to managing IT will undoubtedly increase.

    6:47p
    Instart Logic Updates Software-Defined Application Delivery Platform

    Responding to the needs of modern applications and cloud and mobile architectures, application delivery company Instart Logic introduced the next generation of its software-defined application delivery platform with new performance features.

    Earlier in the month, Instart Logic completed a $43 million financing round to continue its push to prove that the future of application delivery is based on software, not networks. Instart Logic says JavaScript Interception and Browser Cache Purge are the primary performance enhancements in the new SDAD platform, delivering speed, security, and scale to customers through advanced programmatic access and control.

    JavaScript Interception extends the company’s JavaScript Streaming capability to intercept both static and dynamic third-party content through its SDAD service, according to the company. This is made possible, Instart Logic notes, by a new Nanovisor 2.0 that includes a client-side interception system to ensure runtime redirection of client-side asset requests.

    Browser Cache Purge is a new performance feature that Instart Logic says changes the current model of web application development by making caching an integral component of application design. The company notes that with this new feature publishers will be able to push changes to the browser via an API and clear browser cache instantaneously, thus eliminating unnecessary network requests to check for current content.

    Instart Logic is initiating an Early Experience Program to recruit application publishers to provide input into how this capability impacts application design.

    For enhanced programmability of the SDAD platform Instart Logic has also added an expanded set of APIs and a new management portal.

    9:21p
    HP Acquires ConteXtream To Bolster NFV

    HP is acquiring Mountain View, California-based ConteXtream to expand its Network Function Virtualization play. ConteXtream is a provider of an OpenDaylight-based, carrier-grade SDN fabric for NFV. At transaction close, ConteXtream will become part of HP’s Communications Solutions Business.

    ConteXtream was already an HP OpenNFV Partner. Its open SDN controller platform complements HP’s NFV expertise and telecommunications and IT experience, and aligns with NFV’s evolution toward an open source-driven architecture.

    NFV is a way to offer functions traditionally handled by proprietary network appliances as a service. Instead of proprietary hardware, functions like firewalls and caching can be carved out of a shared resource pool through software, saving the time and money of buying and physically installing appliances.

    ConteXtream’s technology is based on open standards and delivers capabilities like advanced service function chaining (delivering service functions in a sequential order).
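Service function chaining is simple to illustrate: traffic passes through an ordered list of network functions, each transforming or inspecting it before handing it on. The sketch below is a stand-in for the concept, not ConteXtream's implementation; the function names and packet representation are invented.

```python
# Minimal sketch of service function chaining: network functions are
# applied to a packet in a defined sequence. Functions here are
# illustrative stand-ins (a real chain operates on live traffic flows).

def firewall(packet: dict) -> dict:
    packet["inspected"] = True      # pretend we ran policy checks
    return packet

def cache(packet: dict) -> dict:
    packet["cached"] = True         # pretend we consulted a content cache
    return packet

def chain(packet: dict, functions) -> dict:
    # Order matters: each function sees the output of the previous one.
    for fn in functions:
        packet = fn(packet)
    return packet


result = chain({"dst": "10.0.0.5"}, [firewall, cache])
print(result)  # {'dst': '10.0.0.5', 'inspected': True, 'cached': True}
```

In an SDN fabric, the controller's job is to steer each flow through the right sequence of such functions, wherever in the resource pool they happen to be running.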

    NFV is revolutionizing the Communications Service Provider world. CSPs face exploding traffic on their networks and declining margins, as HP described, and NFV is an opportunity to roll out additional revenue generating services.

    The OpenNFV program is an open approach that allows HP and external partners, such as network equipment providers (NEPs) and independent software vendors (ISVs), to take advantage of the open and standards-based NFV reference architecture, HP OpenNFV Labs, and the HP OpenNFV partner ecosystem of applications and services.

    OpenNFV is based on open standards and leverages the open source project OpenDaylight. It supports several open source controller technologies.

    “We are at a pivotal point for the communications industry,” wrote HP’s Saar Gillai, senior vice president and general manager of HP’s Communications Solutions Business. “CSPs have a significant opportunity to explore new markets and business models. It’s a time of change not only in the application of technology, but within the organization as well. HP will leverage our technology, partners, services, labs and commitment to open standards to help CSPs thrive. As a partner to CSPs, our aim is to make this journey to NFV as efficient as possible.”

