Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, September 16th, 2014

    4:01a
    Basho Advances NoSQL Riak Enterprise 2.0 With Search, Advanced Data Types

    Basho introduced Riak Enterprise 2.0. The company is the creator of Riak, a distributed NoSQL database.

    The two big additions in version 2.0 are enhanced search functionality and advanced data types. The update also comes with some security enhancements and simplified configuration management. The enhancements position Riak 2.0 as a platform for a wider variety of applications.

    The previous search functionality was built in-house, but customers demanded more full-featured capabilities, according to Peter Coppola, vice president of product at Basho.

    Riak 2.0 leverages Apache Solr for search, delivering better performance. “With this powerful search capability, we can now support different apps within the organization,” said Coppola. “It moves us from point solution to platform.”
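
    As a rough illustration, the Solr-backed search might be used from the official Riak Python client as in the sketch below; the index, bucket and field names are hypothetical, and API details vary by client version.

```python
# Hypothetical sketch of Riak 2.0's Solr-backed search, via the official
# Riak Python client. Index, bucket and field names are made-up examples.
from riak import RiakClient

client = RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)

# Create a search index (backed by Apache Solr) and attach it to a bucket.
client.create_search_index('products_idx')
bucket = client.bucket('products')
bucket.set_property('search_index', 'products_idx')

# Store a JSON document; suffixed field names (_s string, _i integer)
# map onto Solr's default dynamic schema.
bucket.new('sku-1001', {'name_s': 'widget', 'price_i': 25}).store()

# Query the index using standard Solr query syntax.
results = client.fulltext_search('products_idx', 'name_s:widget')
for doc in results['docs']:
    print(doc)
```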

    The 2.0 release also includes support for new distributed data types, including sets, flags, registers and maps.

    Here are some use cases:

    • Sets: An e-commerce company may use this to represent items in the shopping cart, or the list of connections on a social network.
    • Flags: Detecting when something is retweeted, identifying whether someone is a premium user on LinkedIn.
    • Registers: A stored value that changes over time, such as what’s trending, a top 10 list or favorites.
    • Maps: A combination of the above data types to create a data structure for a user profile.

    The addition of these data types means users don’t have to write the code for them and brings better conflict resolution for a more consistent database. “We’ve defined a conflict resolution methodology,” said Coppola. “That conflict resolution may differ across data types.”
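
    A minimal sketch of the new data types, using the official Riak Python client, appears below. The bucket-type names ('sets' and 'maps') are hypothetical and would need to be created and activated by an administrator first.

```python
# Sketch of Riak 2.0 distributed data types via the official Python client.
# Bucket-type names are hypothetical; an admin must activate them first.
from riak import RiakClient

client = RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)

# Set: e.g. the items in a shopping cart. Riak merges concurrent adds
# automatically, so no application-side conflict-resolution code is needed.
carts = client.bucket_type('sets').bucket('carts')
cart = carts.new('user-42')
cart.add('book-123')
cart.add('lamp-456')
cart.store()

# Map: a user profile combining registers, flags and nested sets.
profiles = client.bucket_type('maps').bucket('profiles')
profile = profiles.new('user-42')
profile.registers['name'].assign('Ada')    # register: a single value
profile.flags['premium'].enable()          # flag: a boolean
profile.sets['interests'].add('nosql')     # nested set
profile.store()
```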

    Other upgrades include:

    • Security: Riak now enables authorization and authentication to manage users and groups. Security measures can now be applied within Riak itself, including data access and functional permissions.
    • Simplified configuration management: To add further operational simplicity, configuration information has been further consolidated and stored in an easy-to-parse, transparent format.
    • Efficient bucket management: With bucket types, users can create and administer bucket properties and apply them to a collection of buckets, improving overall efficiency.
    • Tiered storage: Riak allows LevelDB users to split data files across two mount points based on access patterns to optimize for low latency of the most frequently accessed data.

     

    11:30a
    Data Center Security Startup vArmour Emerges from Stealth

    Data center security startup vArmour has come out of stealth. The company recently raised a $15 million Series C after proving itself with several enterprises and service providers, but has not, until now, said what it does.

    vArmour says traditional security perimeters have disappeared in the cloud world. While compute, storage and networking have become virtualized, security remains locked in legacy, hardware-centric perimeter models that cannot scale to meet modern business requirements and systems architecture.

    The company said its solution provides visibility, control and threat defense across physical, virtual and cloud applications and can easily scale with the infrastructure.

    CEO Tim Eades said vArmour is not just for discovering attacks that evade the traditional security perimeter, but for defending against them.

    The increasing use of virtualization has benefited IT with cost savings and agility, but it has also opened new avenues of attack that sit outside traditional perimeter-based security models. Advanced attackers exploit these critical gaps in visibility and control inside the data center.

    Virtualization and cloud have changed the nature of traffic flows themselves – 83 percent of traffic now travels “east-west” within the data center, never seen by the traditional perimeter. Attackers often compromise low-profile assets as their initial way into the system.

    vArmour is led by former NetScreen, Juniper Networks, Silver Tail Systems, Citrix, Riverbed and IBM executives.

    The solution provides:

    • Security Visibility into every application, asset, packet and connection in the data center
    • Threat Analytics delivered through real-time detection and visualization of laterally moving threats
    • Attack Remediation policies to contain compromised hosts and prevent exfiltration
    • Policy Control and Enforcement to isolate and control communications between applications, workgroups and tenants

    vArmour says it helps an enterprise understand the nature of an attack’s progression across the entire network, showing intent and path, as well as “patient zero,” the initial point of compromise. Through software it provides distributed sensors and enforcement points in a single logical system that scales horizontally. The system provides insight into data center risk profile, as well as the tools to control and prevent breaches without requiring changes to existing policies or IT infrastructure.

    11:30a
    Docker Raises $40M in Sequoia-Led Series C

    Docker, the startup behind the eponymous open source app container technology, has raised $40 million in a Series C funding round. The round is double the size of the startup’s Series B, closed in January, and significantly increases its valuation.

    In the round, the company, which has enjoyed support from some of the high-tech world’s top startup financiers in the past, adds another heavyweight Silicon Valley VC to the list of backers. Sequoia Capital led the round, joined by Docker’s previous investors Benchmark Capital, Greylock Partners, Insight Ventures, Trinity Ventures and Jerry Yang, co-founder and former CEO of Yahoo.

    Docker launched the first production-ready release of its software in June. Its technology packages Linux applications inside application “containers,” which make the applications portable across a variety of infrastructure environments, on premises or in the cloud.
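
    As a concrete (if anachronistic) illustration, the modern Docker SDK for Python can drive that workflow in a few lines; the image and command below are arbitrary examples, and the 2014-era Python client exposed a different API.

```python
# Minimal sketch of container portability using the Docker SDK for Python
# (the "docker" package). Image and command are arbitrary examples.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The same image runs unchanged wherever a Docker daemon is available:
# a laptop, an on-premises server or a cloud VM.
output = client.containers.run("python:3", 'python -c "print(42)"')
print(output.decode().strip())  # -> 42
```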

    Read a Data Center Knowledge Q&A with Docker CEO Ben Golub

    The company sells an array of tools that help developers take an application through all the stages between development and deployment without worrying about infrastructure configuration requirements. The idea is to speed up the application development process by freeing developers from infrastructure concerns.

    Docker has support from IT infrastructure software heavyweights, such as Google, IBM, Microsoft, Red Hat and more recently VMware and claims to have been enjoying a lot of success in the enterprise IT market. The new funding will go toward expanding its enterprise business and growing its ecosystem.

    David Messina, vice president of enterprise marketing at Docker, said one of the immediate goals was to grow the organization that provides support to enterprise customers. Docker currently provides a “typical support and training agreement traditional enterprise software companies would have,” he said, but wants to grow its support capabilities.

    A number of customers are trying to scale Docker implementations from one team across the entire enterprise and are seeking guidance in doing so from the company, he explained.

    There are currently about 60 people on the company’s team, some working out of its San Francisco headquarters and some working remotely around the world.

    Another short-term goal is to release a private version of Docker Hub, a catalog of Docker tools developers can choose from. The hub is public, but there is demand from enterprises for something similar they can put behind their own firewalls.

    As part of the funding announcement, Docker said Bill Coughran, a partner at Sequoia, will join its board of directors. Coughran spent eight years as senior vice president of engineering at Google before joining the venture capital firm.

    “The velocity at which the Docker team has innovated on product and grown its community is staggering – in 18 months they’ve accomplished what many leading companies take years to build,” Coughran said in a prepared statement.

    12:00p
    MapR Adds Support for Drill, Open Source Version of Google’s Dremel

    MapR, a Google Capital-backed Hadoop distribution provider, announced its software now supports Apache Drill, an open source framework for interactive analysis of large datasets that the company is deeply involved in developing.

    Support for Drill 0.5 comes as part of the latest release of MapR’s software, MapR 4.0.1. The release also includes support for updated Apache Spark and HBase and uses Hadoop 2.4, including YARN.

    Drill is a framework for distributed applications that analyze large datasets. It is an open source version of Dremel, a system Google built for itself and today provides as BigQuery, a service available through its cloud platform. According to the project’s Wiki page, Drill supports a broader range of query languages, data formats and sources.

    Support for Drill brings SQL to MapR’s Hadoop distro, meaning users can run SQL queries against data stored on Hadoop clusters. According to MapR, Drill enables querying of complex data in native formats, including schema-less data, nested data or data with quickly changing schemas, with little involvement from the IT team.
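
    To make that concrete, the sketch below runs a SQL query over raw JSON through Drill’s REST interface. The endpoint (default web port 8047) arrived in releases after the 0.5 discussed here, and the file path and field names are hypothetical.

```python
# Hypothetical sketch: querying schema-less JSON with SQL via Apache
# Drill's REST API. Port, endpoint and data path are illustrative and
# may differ across Drill versions.
import json
import urllib.request

query = {
    "queryType": "SQL",
    # Drill infers the schema at read time, so nested JSON needs no ETL.
    "query": "SELECT t.trans_id, t.user_info.cust_id "
             "FROM dfs.`/data/clicks.json` t LIMIT 10",
}

req = urllib.request.Request(
    "http://localhost:8047/query.json",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for row in json.loads(resp.read())["rows"]:
        print(row)
```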

    Six of the eight people listed as core developers on the Drill Wiki page are MapR employees, including the company’s co-founder and CTO MC Srivas. “We are kind of the lead drivers, in terms of the committers [to the open source code base],” MapR Chief Marketing Officer Jack Norris said.

    But the Drill developer community extends far beyond the MapR team. There are more than 40 contributors to the project total, Norris said.

    “It’s an actual formal Apache Software project, incubated within the Apache Software Foundation and really started there, so the design and the APIs, and basically the architecture, had complete exposure to the open source community,” he said.

    Some of Drill’s design goals are to process “petabytes of data and trillions of records in seconds” and scale to more than 10,000 servers. Norris said these goals have been addressed in the current architecture of Drill, but he indicated it was still too early to offer any solid proof that it can meet them, since the version that’s out now is 0.5, not a solid 1.0 general availability release.

    With support for Hadoop as well as Drill and Spark (a general-purpose distributed data processing engine), MapR has a wide range of data analytics capabilities, providing a variety of tools to choose from. The distribution now includes several batch processing frameworks, five SQL-on-Hadoop technologies, two NoSQL technologies and three machine-learning and graph libraries.

    In June, MapR landed a $110 million financing round, which included an $80 million equity investment by Google Capital.

    12:00p
    AMD and Canonical Unveil OpenStack Cloud in a Box

    AMD and Canonical unveiled a joint solution that provides a private OpenStack cloud in a box.

    The collaboration aims to simplify getting an OpenStack cloud up and running through an integrated plug-and-play solution. The solution includes AMD’s SeaMicro SM15000 server, Canonical’s Ubuntu 14.04 LTS and OpenStack.

    Another recent integrated OpenStack solution came from a collaboration by Cisco and Red Hat.

    “AMD and Canonical have dedicated a tremendous amount of engineering resources to ensure an integrated solution that removes the complexity of an OpenStack technology deployment,” said Dhiraj Mallick, corporate vice president and general manager, AMD data center server solutions.

    The bundle was able to spin up 168,000 virtual machines using the Ubuntu provisioning feature Metal as a Service (MAAS) and the orchestration tool Juju. MAAS set up the hardware, delivering bare-metal servers, storage and networking, while Juju handled deployment.
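
    For flavor, the deployment flow might look like the hedged sketch below, which drives the Juju command-line client of that era from Python. The charm names and relations are illustrative, not the exact bundle AMD and Canonical shipped.

```python
# Hypothetical sketch of a MAAS-backed OpenStack deployment driven through
# the Juju 1.x CLI. Charm names and relations are illustrative only.
import subprocess

def juju(*args):
    """Run a Juju CLI command and fail loudly on error."""
    subprocess.run(["juju", *args], check=True)

# Bootstrap against a MAAS environment; MAAS hands Juju bare-metal nodes.
juju("bootstrap")

# Deploy core OpenStack services as charms, then wire them together.
for charm in ["mysql", "rabbitmq-server", "keystone",
              "nova-cloud-controller", "nova-compute", "glance", "cinder"]:
    juju("deploy", charm)

juju("add-relation", "keystone", "mysql")
juju("add-relation", "nova-cloud-controller", "rabbitmq-server")
```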

    The SM15000 links 512 CPU cores, 160 gigabits of I/O networking and more than five petabytes of storage with a 1.28 terabit-per-second interconnect called Freedom Fabric.

    The hardware in the packaged 10-rack-unit OpenStack bundle includes:

    • 3 Cloud Controllers
    • 57 Nova nodes
    • 3 Cinder nodes
    • 64 GB Object Storage
    • 128 GbE NICs (upgradable to 512)
    • Integrated Layer 2 switching
    • 80 Gbps of I/O.

    Canonical software in the package includes Ubuntu server, MAAS, and Juju.

    12:00p
    Manhattan’s Iconic 60 Hudson Street Gets a Power Boost

    NEW YORK - For a window into the ambitions of DataGryd, look to the rooftop of 60 Hudson Street. High atop the iconic communications hub, more than 370 feet above the streets of Manhattan, a hatch in the roof is unlocking the building’s future.

    That’s the entry point for the diesel generators that are expanding the power capacity at 60 Hudson. The generators must be installed by crane, which lowers the 8,000-pound units into their new home on the 24th floor, a space reinforced with additional steel girders. Two of the massive engines are in place, with four more to come.

    The skyhatch and engine room are powering the expansion of data center space inside 60 Hudson, an art-deco landmark which has been a cornerstone in the development of America’s communications infrastructure. Built in 1929 as the headquarters for Western Union, the building powered the growth of the nation’s telegraph system and then evolved into a key telecom hub for AT&T and others. With the dawn of the Internet age, 60 Hudson became a key meeting place for networks, with fiber optic cable filling conduits that once delivered telegrams in pneumatic tubes.

    But servers, switches and storage units are hungry for power, and 60 Hudson Street needed more of it. Enter DataGryd, a new data center company formed in 2012. The company’s founder, Peter Feldman, saw an opportunity to transform four floors at 60 Hudson into high-density space for service providers. To make it work, Feldman had to devise a way to expand the power and cooling capacity of the 80-year-old historic landmark building, located in a noise-sensitive neighborhood, and to do all this without disturbing the current tenants, many of whom run mission-critical data operations.

    “We had to modernize and future-proof the building,” said Feldman. “We had to turn the building inside out to do it. But now we can meet future market demand.”

    Telx ready to christen new space

    The first phase of that plan comes to fruition this week, as colocation specialist Telx brings a new data center online on a full floor it has leased from DataGryd.

    Expanding a building’s power capacity in an urban setting is a major challenge. The upgrades at 60 Hudson offered additional layers of complexity.

    “This building is a landmark, and anything we wanted to do had to be approved,” Feldman said last week as he showed off the renovations. “People don’t like diesel generators or exhaust or noise. I’ve got to be sensitive to the impact on the neighbors. We worked with the landmarks commission and the city and the mayor on meeting our civic obligations.”

    The plan featured several facets: adding more grid capacity from Con Edison, an on-site cogeneration facility, and additional generators to provide backup electricity. Tying these together is a utility-scale microgrid that can tap any generation source – the grid, the cogen plant or the generators. This would allow DataGryd to create an additional 240,000 square feet of high-density space within 60 Hudson.

    Feldman knows the building well from his days as a co-founder of Telx, one of the largest tenants. He worked closely with the building’s owners and the management firm, Colliers International, to develop the renovation plan.

    12:30p
    DataGryd Opens for Business in NYC, With Telx as First Tenant

    NEW YORK - The newest player in Manhattan’s data center scene is about to open its doors within 60 Hudson Street, the iconic Manhattan connectivity hub in TriBeCa. DataGryd is operating out of four floors in 60 Hudson, representing nearly a quarter million square feet of space in one of New York’s most wired addresses.

    One of those floors houses a brand new data center for Telx, which continues an aggressive expansion in the greater New York market. Telx is the primary interconnection provider within 60 Hudson, making it a natural tenant for DataGryd.

    “Telx has always wanted to go first, because they were already in the building,” said Peter Feldman, the CEO of DataGryd, which has been upgrading the power infrastructure within 60 Hudson to add capacity. Telx and DataGryd are hosting an open house and tour this week at 60 Hudson Street.

    Anchor tenant a key milestone

    Launching a new brand is never easy, particularly in the data center business, which has historically had a high barrier to entry. It’s particularly tough when your first location is one of the most competitive markets in the world. That’s why landing an anchor tenant was a key milestone for DataGryd.

    “Someone’s always waiting for the first person to put their toe in the water,” said Feldman. “Some folks were waiting to see if we’d get all the power and cooling we need.”

    DataGryd launched in 2012, when it leased four floors of the 24-story building, and recently completed major infrastructure upgrades to the building to provide the power needed to fill those floors with data center tenants.

    Expansion continues for Telx

    The first of those tenants will be colocation provider Telx, which will begin operations early next month in 69,000 square feet of build-to-suit space prepared by DataGryd; the space includes about 38,000 square feet of raised-floor area for customer equipment.

    Telx got its start at 60 Hudson Street, and has facilities on three floors of the building. In recent years the company has added space in key hubs throughout the New York market, where it now operates 620,000 square feet of data center space.

    Last year Telx opened NJR3, a 215,000 square foot greenfield data center at the company’s data center campus in Clifton, New Jersey, as well as new space at 32 Avenue of the Americas. Telx continues to operate colo space and the Meet-Me-Room at another major Manhattan data hub at 111 8th Avenue.

    The New York market dynamic

    Telx isn’t the only provider expanding in the New York market. In Manhattan, Sabey has entered the market with Intergate.Manhattan, renovating a former Verizon building at 375 Pearl Street.

    It’s even busier over in New Jersey, where Equinix, Digital Realty, DuPont Fabros, CoreSite and Internap have all brought new space online over the last three years.

    “Everybody built at once,” said Feldman. “You just had a glut, and that creates pricing pressure.”

    This building boom for service providers has resulted in a growing menu of data center options for companies seeking space in the greater New York market. The region has also been impacted by the aftermath of Superstorm Sandy, which flooded the basements of several data centers in lower Manhattan. While many traditional data hubs in the city were unaffected, the Sandy-related flooding prompted some providers to seek higher ground or contemplate sites in other regions.

    Manhattan’s magnetism persists

    Feldman has experience in both the data center and energy industries. He was a co-founder of Telx, and has also worked with cogeneration technologies. He says that although Manhattan can be more expensive than other markets, there are compelling business reasons for companies to have a data center footprint in the city.

    “People would stay in Manhattan if they had the space,” said Feldman, who said 60 Hudson remains a premier address for multi-tenant providers who need to interconnect with other networks. “It’s not the cheapest space in town, but it remains attractive as a long-term (connectivity) hub.”

    While there are several new projects in New York, Feldman believes demand for space in Manhattan will outpace supply.

    “We are seeing some traction here,” he said. “It’s a long sell cycle, but we’re on track. With normal market absorption, there’s plenty of business for us.”

    1:00p
    CDN Startup Fastly Raises $40M Series C

    Content delivery network provider Fastly has raised a $40 million Series C round from previous investors August Capital, Battery Ventures, O’Reilly AlphaTech Ventures and Amplify Partners. New investor IDG Ventures came on board as well.

    Fastly will build more points-of-presence (POPs) in strategic locations around the world, hire executive and engineering talent and develop product partnerships. Over the past year, the company has tripled both its employees and customer base. Customers include Twitter, Github, GOV.UK, popular image sharing site Imgur and U.K. news organization The Guardian.

    The big differentiators for Fastly are a focus on dynamic applications and content, transparency, friendliness to the DevOps community and an emphasis on mobile and APIs.

    “We’re a CDN that’s focused on the content people want to deliver today,” said CEO Artur Bergman. “Dynamic and semi-dynamic, longtail content, API, more application infrastructure in general, not just the traditional content. We have some key features such as instant cache purging to get rid of a piece of info, real-time log files and real-time stats. All of this lets you put content on the edge that you couldn’t previously put on the edge.”

    Fastly is the only CDN built on the open source Varnish, which was designed as an HTTP accelerator. Bergman wrote parts of the Varnish code along with Fastly engineer Rogier Mulhuijzen. “The [Varnish] configuration language is extremely flexible,” said Bergman. “People are used to configuring Varnish inside their own data centers. Now they can do this and deploy across the world. Whatever you can do inside your own data center, you can now do at the edge.”

    Fastly uses a distributed system that synchronizes purges across all of its global caches, so an instant purge updates a customer’s content in about 150 milliseconds. The company claims its Varnish-based caches have up to 12x the performance and capacity of traditional caches.
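
    As a rough sketch, such purging can be driven over plain HTTP in the style of Fastly’s public API; the URL, service ID and API key below are placeholders.

```python
# Sketch of instant purging in the style of Fastly's public API.
# The URL, service ID and API key are placeholders.
import urllib.request

# Purge a single cached object: Fastly accepts an HTTP PURGE request
# issued against the object's own URL.
req = urllib.request.Request("http://www.example.com/logo.png", method="PURGE")
print(urllib.request.urlopen(req).read())

# Purge an entire service's cache via the management API.
req = urllib.request.Request(
    "https://api.fastly.com/service/SERVICE_ID/purge_all",
    method="POST",
    headers={"Fastly-Key": "YOUR_API_KEY"},
)
urllib.request.urlopen(req)
```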

    David Hornik, general partner at August Capital, said, “The team has completely evolved the legacy CDN technology model to give companies unprecedented control over how they serve and monitor content online.”

    A handful of challengers to traditional CDNs have emerged recently and raised funding. Instart Logic raised $26 million for its “CDN Replacement,” Highwinds recapitalized and raised several rounds, and Edgecast Networks raised money shortly before its acquisition by Verizon.

    Fastly is similarly challenging traditional content delivery networks with a different spin. The CDN is built on all solid-state drives (SSDs) with POPs located close to subscriber networks. Through caching at the edge, the CDN is built to accelerate any type of content whether dynamic or static, from HTML and APIs to streaming video.

    It provides API-based tools to allow businesses to control their digital worlds. Fastly also delivers real-time performance analytics that help companies make intelligent decisions about their content in real time.

    Fastly will continue building and optimizing its own custom architecture and strategically expanding POP locations around the world. Future POP locations include Denver, Melbourne, Osaka, São Paulo, Seattle and Stockholm.

    2:30p
    Aligning Business & Technology Strategies

    Ability, agility and readiness to change are attributes that have never been as important as they are today for data center professionals, and making decisions as a team about issues critical to the future of business is crucial.

    Conversely, C-Suite executives also have a responsibility to data center professionals to understand how business decisions impact IT. Staying on the same page requires a give-and-take relationship that fosters respect, communication, common goals and strategies, and proper execution.

    The bottom line is that entire organizations need to work together to meet cost-cutting goals and support profit-making measures by reducing energy usage and carbon emissions and making the data center space more efficient.

    Derek Odegard, president and founder of CentricsIT and an 18-year veteran of the IT industry, is part of a panel that will address “Aligning Business & Technology Strategies to Deliver Value and ROI” at the upcoming Orlando Data Center World conference.

    Data Center Knowledge asked him about the most important element of achieving this alignment goal.

    “The biggest factor here is interdepartmental communication. Obviously technology affects every area of the business. You have the accounting team using software and resources to maintain the books; marketing teams using technology platforms to reach consumers; sales teams using CRMs and other tools to manage appointments and meetings—so it’s important that all of their technology needs are communicated to IT. IT should then vet these needs to ensure they align with the overarching technology strategy currently being deployed,” Odegard explained.

    There are several scenarios that play out each and every day in companies around the world that can sabotage alignment. According to the Data Center World speaker, some of them include:

    • Businesses making technology decisions without considering who is going to support it.
    • Businesses making technology decisions without considering the security requirements.
    • Upgrading hardware before it makes sense, or retiring equipment that can be used in other areas of the business.
    • Spending money on maintenance for machines that are not even in use.
    • Throwing away or storing old machines that have value on the secondary market.

    Odegard believes the biggest disconnect surrounds ongoing support and security. Misalignment occurs when there is a lack of communication between non-IT departments and IT. IT leaders need to have an understanding of the way each team is utilizing technology to ensure they align with the IT goals of the organization.

    On the flip side, business leaders have a responsibility to IT, and Odegard suggested any business leader ask the following five questions:

    1. How do individual teams use technology, and do they have the technical resources they need?
    2. How is the technology budget allocated among all teams within the organization?
    3. What are the security requirements and considerations for the different technology needs?
    4. How will IT provide ongoing support for new technology initiatives?
    5. Finally, what does my data center require to meet the long-term needs of each team?

    Ultimately, any strategy should result in providing as much value as possible. It’s important to distinguish value from cost savings as value means something different to every organization, said Odegard. For some it’s cost, for others it’s security; yet others link value exclusively to revenue. Once your organization defines value, you must take the proper steps to provide as much as possible—something he will expand upon during the two-hour panel.

    One area where CEOs and data center professionals are often at odds is disaster recovery: spending money to prepare for a disaster that may never occur, weighed against unknown potential financial and data losses, is not always a popular or easy choice.

    “With the amount of business-critical data that resides on your infrastructure in this age, not having a disaster recovery plan is inconceivable,” he stressed. “Modern virtualization and cloud backup technologies make it much simpler and cost-effective to implement virtual redundancy plans, so securing your information is easier than ever. Any CEO who doesn’t see the value in this investment isn’t seeing the big picture.”

    Want to explore this topic more? Attend the panel on “Aligning Business & Technology Strategies to Deliver Value and ROI” or dive into any of the other 12 leadership development sessions at Data Center World. Check out the details and register at the Orlando Data Center World conference page.

    5:00p
    Case Study: How to Deliver Millions of Dollars in Annual Power and Cooling Savings

    More than ever organizations are looking to data center and colocation providers to house some of the most critical and complex systems their businesses have.

    One of the fastest-growing industry segments to adopt a colocation strategy has been the financial sector. There are a few specific reasons for this:

    • Financial services are a fast-growing segment – they need fast-evolving technologies
    • The SLAs offered by colocation providers help ensure physical/data security for IT assets
    • Strategic colocation partners help provide a manageable and cost-effective way for organizations to rapidly scale and adjust IT operations
    • Colocation energy efficiency helps ensure that large financial and enterprise organizations are meeting and adhering to regulatory compliance requirements around energy use, carbon emissions, and overall efficiency

    One of the biggest reasons we’re seeing so many large financial organizations move to a colocation environment is that these platforms extend IT operations to an external, independent facility. This ultimately enables the organization to have continuous visibility and insight into the operations at remote facilities to ensure availability, uptime, and cost controls.

    In this case study from RF Code you’ll learn how intelligent power and cooling systems helped data center and colocation provider CenturyLink save millions of dollars through efficiency improvements.

    Download this case study today to learn how, based on its current data center footprint and power costs, CenturyLink identified annual savings of $2.9 million. This figure is expected to rise as power costs increase and as outsourcing drives greater asset density in the data center. The more data the global economy sends to the data center, the more organizations like CenturyLink will see the benefits of optimization. The company’s investment in optimization and power sustainability will continue to yield savings as growth continues.

    Now imagine being able to scale this to dozens of sites all across the world. For financial organizations, this is critical to staying resilient and scalable globally. For all other sectors, having the ability to scale and control efficiency means overall optimization around power, cooling and data center management.

    5:30p
    Data Center Jobs: Webair Internet Development Inc.

    At the Data Center Jobs Board, we have a new job listing from Webair Internet Development Inc, which is seeking a Data Center Operator in Garden City, New York.

    The Data Center Operator is responsible for the day-to-day mechanical and electrical operations in the data center: addressing any emergency facility maintenance issues by performing and/or coordinating repair efforts, maintaining facility well-being and general appearance, preparing and maintaining a preventive maintenance program for all facility power and HVAC plant, running and testing multiple emergency standby power generators, and measuring and monitoring facility and customer power consumption. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    6:22p
    OnApp Buys IaaS Platform for Web Hosts, SolusVM


    This article originally appeared at The WHIR

    OnApp, an Infrastructure-as-a-Service platform provider for the hosting and service provider market, has bought SolusVM, a lightweight virtual server management system used by thousands of service providers to offer Infrastructure-as-a-Service cloud hosting.

    Combined with SolusVM, OnApp now has more than 3,000 customers and a product portfolio that spans the complete spectrum of Infrastructure-as-a-Service. OnApp’s platform has a lot of add-ons for web hosts and service providers, whereas SolusVM is more bare-bones and ideal for offering services similar to those of a hosting provider like DigitalOcean.

    “This is a significant transaction that adds a few million to our revenue, a large number of customers to our community, and a new OnApp product that enables the kind of streamlined, bare-bones cloud that developers love,” OnApp CEO Ditlev Bredahl said. “It’s a perfect complement to the fully integrated cloud, dedicated, CDN and storage services that the core OnApp platform brings to service providers.”

    The deal promises to make OnApp operate more efficiently by bringing supply and demand in line. The OnApp Federation and Market, where companies can buy and sell excess CDN and compute resources, has many members supplying resources – so many that supply outweighs demand. Bredahl told The WHIR that SolusVM, with its large customer base, could help balance supply and demand.

    Service providers will also be able to use the OnApp portfolio to sell services based on other container-based and hardware virtualization platforms such as Xen, KVM, VMware and OpenVZ.

    Following the acquisition, OnApp will extend SolusVM to be able to access infrastructure from the OnApp Federation, which OnApp calls the world’s largest public cloud; the work is expected to be completed around 60 days after the acquisition.

    SolusVM employees will now be working at OnApp’s London office; however, they will continue to operate as a separate business division, and Bredahl said he has no immediate plans to change SolusVM’s aggressive pricing model. An enterprise SolusVM license costs $10 per month and allows an unlimited number of VPSs to be hosted on the master server, while a “Slave Only” license costs $2.50 per month per slave VPS it’s installed on.

    The deal (for an undisclosed amount) was made in cash from OnApp’s operating budget.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/onapp-buys-widely-used-iaas-platform-web-hosts-solusvm

    9:48p
    Rackspace: We’re Not Selling but We Do Have a New CEO

    Rackspace leadership announced they will not be selling the company. After reviewing a number of “alternatives,” the company’s management decided to press on with its strategy to pursue a greater share of the managed cloud market and appointed its president, Taylor Rhodes, as the new CEO.

    The stock market didn’t immediately buy the vision. The Windcrest, Texas-based company’s stock was down nearly 17 percent in after-hours trading on Tuesday following the announcement.

    In May Rackspace disclosed that it had been approached with a number of partnership and acquisition offers and hired consultants (including Morgan Stanley) to help it review the offers. Now that the review is over, the management has decided to continue pursuing its current managed-services-focused strategy, announced earlier this year.

    Rhodes, who’s been with the company for eight years, will replace co-founder Graham Weston, who will continue as non-executive chairman of the board.

    In a conference call with analysts on Tuesday, Rhodes said the company made the decision to preserve flexibility in a market that is large but still in its early stages.

    “This is about preserving our options,” he said. “We have a much greater opportunity to create shareholder value in stand-alone mode.”

    He cited the company’s strong performance this year, during a time when large cloud providers cut their prices drastically, apparently referring to the cloud services price cuts announced by Google, Microsoft and Amazon earlier this year. Rackspace reported a 16-percent year-over-year revenue increase in the first quarter and a 17-percent increase in the second quarter.

    The fact that Rackspace revenue grew at a time when its competitors were slashing prices was proof that “we’re playing in a different market,” Rhodes said.

    But revenue numbers alone are not the best indicator of health. Being a hands-on IT infrastructure service provider, Rackspace is in a low-margin business. Building data centers, populating them with hardware and sophisticated management systems and keeping a big staff of support engineers available around the clock costs a lot and makes it hard to turn a big profit. Rackspace’s net income margin was 6 percent in the first quarter and 5.1 percent in the second quarter.

    Addressing the margin question, Rhodes said the company was now ready to start growing its margin after completing the task of repositioning itself. “We still have many levers to pull,” he said, adding that those levers would not include “draconian measures.” One of the quickest ways to increase margin is to lay off a portion of staff, a step Rhodes appeared to rule out.

    Billing itself as a “managed cloud company,” Rackspace is a lot more hands-on with its customers’ cloud infrastructure than the big public cloud providers are. The company offers some level of managed services even with the most basic of its cloud offerings, and options of having Rackspace engineers manage customers’ infrastructure extend all the way up the application stack.

    But managed cloud is a crowded market with some heavyweight competitors. While Gartner considers Rackspace a leader in the category, it has at least one other leader to contend with in North America – a company called Datapipe – and a slew of leaders in the European market, including Interoute, Colt, Verizon, CenturyLink, Claranet and BT Global Services.

    While Rackspace has rejected recent acquisition and partnership offers, Rhodes did not rule out the possibility of entertaining such options in the future. “It’s on the table,” he said. “It will not be executed at this time.”

    For now, however, he said he was glad to have put acquisition talks aside. “I’m so excited to not have this conversation with prospects and customers anymore,” he said. “We are pumped to put this behind us.”

