Data Center Knowledge | News and analysis for the data center industry

Thursday, February 18th, 2016

    1:00p
    Data Center Stocks: Rackspace Pivot to Cloud Support Fails to Impress Investors

    From an investor’s point of view, Rackspace Hosting is now operating in uncharted territory, and Mr. Market hates uncertainty.

    Fanatical belief in “fanatical support” and anecdotes about the potential of managed services for Amazon Web Services and Microsoft’s Azure, Private Cloud, and Office 365 simply didn’t excite analysts on the Q4 2015 earnings call.

    Rackspace (RAX) investors bid the stock up 3 percent to close at $18.17 prior to the release of Q4 earnings and full-year 2015 results after the bell Tuesday.

    The company reported record revenues of just over $2 billion for full-year 2015, with fourth-quarter revenue of $523 million coming in $1.4 million above consensus estimates. CEO Taylor Rhodes's prepared remarks were upbeat regarding the new direction, key new hires, and new initiatives the cloud hosting company has undertaken.

    But none of this helped the stock. RAX fell 8.6 percent in after-hours trading to $16.70 per share that day on what was widely perceived as soft guidance for 2016. One analyst on the Rackspace earnings call characterized the company's guidance of 6 to 10 percent revenue growth for 2016, or 8 percent at the midpoint, as essentially flat.

    Management also shared that a “seasonal slowdown” expected in Q1 2016 would be a headwind for Rackspace's full-year 2016 results. The global macroeconomic environment was another factor cited for the relatively flat revenue expectations for 2016.

    Little Color on New Initiatives’ Progress

    Rhodes seemed pleased to be able to report that the company had signed 100 customers since the October 2015 launch of Rackspace Fanatical Support for Amazon Web Services. One of these enterprise customers signed up for a “six-figure” monthly services contract, he said. Seattle-based Razorfish, a digital advertising firm, was singled out on the call as a featured customer win. To put those numbers in perspective, however, Rackspace has an existing base of 300,000 customers located in over 120 countries.

    It became clear as the call progressed that the new business plan for supporting Amazon and Microsoft public cloud offerings had many moving parts, with minimal visibility regarding sales, revenues, or margins.

    While Rackspace has made substantial investments in its public OpenStack cloud business, it was also clear on the call that management no longer expects much growth from that business. Executives were candid in sharing that existing customers are deploying new workloads on AWS and Azure.

    Rackspace is now spinning AWS pricing in a positive light, with Rhodes saying that both the Rackspace installed base and new customers will use its services to migrate to the big public clouds and to manage hybrid cloud solutions. He also expects this migration to boost sales of Rackspace Managed Security services.

    Management believes that engineering hybrid solutions for customers should lead to more private cloud hosting opportunities for Rackspace. In turn, this will drive CapEx spending on new equipment to build those IT stacks. This helps explain why the company's initial 2016 CapEx guidance of 20 to 22 percent of revenue did not markedly decrease from the 23.3 percent spent in 2015.

    My concern is that Rackspace is not truly in a growth mode, while the transition over to this new strategy precludes any real hope for steady-state results in 2016.

    [Slide: Rackspace Q3 2015 earnings presentation, slide 18, "Steady State Economics"]

    A slide like this one, from Rackspace's Q3 earnings presentation, was not included in the Q4 deck. In fact, many of last quarter's slides were omitted, highlighting the uncharted nature of the company's current initiatives.

    CFO Karl Pichler clarified toward the end of the call that as long as Rackspace remains in the hosting business, the 10 percent of revenues required for maintenance CapEx would be the absolute spending floor.

    Low Data Center Utilization

    Another factor that may weigh on results moving forward is server decommissioning driven by customer migration, which lowers data center utilization. Rackspace revealed that it is using only 32.2MW of the 62.7MW of data center capacity currently under contract, or roughly 51 percent.

    The company was required to take down space in London in 2MW increments, which mitigated the savings from two decommissioned sites outside of London. Management indicated that there is still a long-term expectation of growth and made no mention of any anticipated reduction in data center capacity.

    Investor Takeaway

    Rackspace has been forced to pivot away from its former bread and butter of managed hosting in order to survive.

    Clearly, the elephant in the room during the earnings call was how long the Fanatical Support managed cloud services will take to ramp up as the company's cloud hosting business slowly deteriorates.

    Meanwhile, Rackspace spent $327 million buying back shares during Q3 and Q4 2015 and expects to complete this $500 million repurchase by May 2016. Additionally, the board has approved another $500 million share repurchase authorization which will run through mid-2017.

    This certainly shows that management is confident in its ability to grow free cash flow as it transitions into a less capital-intensive managed services provider.

    It remains to be seen whether Rhodes and his Rackers will be able to pull a rabbit out of the proverbial Red Hat for investors.

    5:48p
    Building the Next-Gen IT Management Infrastructure

    Deep Bhattacharjee is Head of Product Management at ZeroStack.

    Cloud-native apps are now built using distributed systems, clustering, and built-in fault tolerance so that the failure of any single component cannot bring the application down. Furthermore, the application can be scaled on demand.

    So, why can't we build IT management systems that way? They are nothing but a meta-app that converts bare-metal hardware into a software-driven cloud that can be consumed via APIs.

    In the past I have argued that management systems are like puppies that need special attention. Their installation, maintenance, and upgrades significantly increase the operational expense of running an enterprise data center. Think about how Boeing builds new planes: every new model is better than the previous generation in fuel efficiency, level of automation, and so on. The same cannot be said of IT infrastructure management systems.

    So, how should new infrastructure management systems be designed and built?

    • The system should be highly available by default. In the past, automobiles came with the option of anti-lock brakes. Not anymore. All cars come with these by default. IT management systems that need to take care of serious workloads should be highly available by design. It should no longer be necessary to force customers to configure HA and then manually maintain the HA setup.
    • Management systems should choose their persistent store wisely. These systems generate a lot of data, not all of which is transactional in nature. Out of the millions of stats that are generated, does it really matter if a few are dropped? Customers end up spending a lot of time monitoring, tuning, and adding capacity to these databases, and managing traditional databases incurs huge costs in licenses and operational bottlenecks. Next-generation management systems should be designed with NoSQL databases as the main persistence layer, with pockets of SQL where ACID properties are absolutely essential (see the sketch after this list).
    • Scale must be construed and handled differently than it is now. The problem of distribution often gets misconstrued as one of scale. Customers are more likely to have multiple data centers across geographies than one very large data center where all of their infrastructure lives. IT management systems should be designed to be multi-site aware from the ground up. This also includes scenarios where the customer has a dual-cloud strategy but wants a single management interface.
    • System components should scale linearly. The previous generation of converged infrastructure (VCE, FlexPod, etc.) offered compute, storage, and networking in a single system, but customers had to start big even if they had very few workloads and then grow into the infrastructure. That meant tying up capital in capacity they could not yet use, along with assets that depreciated in the meantime.
    • Modular design should be the norm. A customer may have a large number of clients (developers or operators) making API or CLI calls to the IT management system. The actual number of VMs under management may be low, but the sheer volume of API calls can bog down the system's performance. The API service should be separate from, say, the VM scheduling service, and each should be able to scale independently without any user intervention. This is where real operating-expense savings occur. Without such separation, it could take a month to debug what the real issue is and another year to get your vendor to provide a fix.
    • Upgrades should be painless. Customers upgrade their smartphones often to get the benefit of new features. Why don't they do the same for management systems? Because it is fundamentally hard. Management systems should be designed so that patches and upgrades are automated and do not require a month-long planning process.
    • On-premises vs. running in the cloud. Most enterprises still want their workloads and data to be on premises under their complete control. However, this does not mean all of the management components must also live within the enterprise. By dividing the IT management plane into two components, one can keep the main control plane, including the compute, storage, and SDN layers, on premises, while the operations and consumption layer runs from the cloud. This is a huge advantage for two reasons:
      • Our experience has shown that the core infrastructure piece does not change that often.
      • Customers ask for features that are mostly delivered from the consumption layer. Delivering this layer as a SaaS component makes those features available within weeks or months of customers asking for them, as opposed to years. An agile enterprise can now get management features delivered in an agile way, too.
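
    To make the persistence-layer point above concrete, here is a minimal sketch, not drawn from any ZeroStack product, of how a management system might route high-volume, loss-tolerant telemetry to a non-transactional store while keeping records that need ACID guarantees in SQL. The MetricStore class is a stand-in for a NoSQL database (a real deployment would use something like Cassandra), and the quotas table is an illustrative example of transactional state; all names here are hypothetical.

    ```python
    import sqlite3
    import time
    from collections import defaultdict


    class MetricStore:
        """Stand-in for a NoSQL store: high write volume, no transactions,
        and dropping an occasional sample is acceptable."""

        def __init__(self):
            self._series = defaultdict(list)

        def write(self, key, value, ts=None):
            self._series[key].append((ts or time.time(), value))

        def read(self, key):
            return list(self._series[key])


    class ManagementStore:
        """Routes writes: stats go to the NoSQL-style store, while records
        that must stay consistent (e.g., tenant quotas) use SQL transactions."""

        def __init__(self, db_path=":memory:"):
            self.metrics = MetricStore()
            self.sql = sqlite3.connect(db_path)
            self.sql.execute(
                "CREATE TABLE IF NOT EXISTS quotas (tenant TEXT PRIMARY KEY, vcpus INTEGER)"
            )

        def record_stat(self, host, name, value):
            # Fire-and-forget telemetry; no transaction, no locking overhead.
            self.metrics.write(f"{host}.{name}", value)

        def set_quota(self, tenant, vcpus):
            # ACID matters here: a quota change must never be half-applied.
            with self.sql:
                self.sql.execute(
                    "INSERT OR REPLACE INTO quotas (tenant, vcpus) VALUES (?, ?)",
                    (tenant, vcpus),
                )


    if __name__ == "__main__":
        store = ManagementStore()
        store.record_stat("host-01", "cpu_util", 0.42)
        store.set_quota("team-a", 64)
        print(store.metrics.read("host-01.cpu_util"))
        print(store.sql.execute("SELECT * FROM quotas").fetchall())
    ```

    The point of the split is that the high-volume telemetry path carries no transaction or licensing overhead, while the low-volume path keeps its consistency guarantees.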

    The next-generation IT management system needs to be self-operating as opposed to a set of software components managed by humans.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:48p
    Microsoft Builds on Red Hat Momentum with More Open Source Love


    By Talkin’ Cloud

    Microsoft made a number of cloud-related announcements on Wednesday, one of which builds on its recent partnership with Red Hat.

    Microsoft and Red Hat customers can now deploy Red Hat Enterprise Linux instances from the Azure Marketplace. The Red Hat Enterprise Linux 6.7 and 7.2 images are now live in all regions except the China and US Government regions.

    The instances can be deployed directly from the Azure Marketplace, where more than 60 percent of the images are Linux-based, according to Corey Saunders, Microsoft Director of Project Management, Azure. He said in a blog post that Microsoft has seen “strong interest and momentum from [its] customers looking to bring their Red Hat investments to Azure.”

    On the container end of things, Microsoft has launched the public preview of its Azure Container Service, which is designed to make it easy to create and manage clusters of Azure Virtual Machines pre-configured with open source components. The new service builds on Microsoft's work with Docker and Mesosphere to provision clusters of Azure VMs onto which containerized applications can be deployed and managed, Saunders said.

    Microsoft really seems to want to showcase its love for open source, highlighting a couple of recent partnerships, including an agreement with Walmart and its WalmartLabs team to enable OneOps, an open-source cloud and application lifecycle management platform, on Azure. Walmart uses Azure for its public cloud while using OpenStack for its private cloud.

    Separately, Microsoft gave its seal of approval to a group of Linux images created by Bitnami to give customers “confidence in deploying these open source images into [their] enterprise environment.” It will certify many of the Bitnami-created images over the next few months, Saunders said.

    “Open source continues driving cloud innovation, and Bitnami is helping customers realize that value effectively,” Erica Brescia, co-founder and COO of Bitnami said in a statement. “We’re really excited about the next chapter of our journey with Microsoft as we deliver an extensive catalog of open source applications to Microsoft Azure customers around the globe.”

    This first ran at http://talkincloud.com/cloud-computing/microsoft-builds-red-hat-momentum-more-open-source-love

    8:45p
    Facebook Open Sources Data Center Network Fault Detection Tools

    Several years ago Facebook shut down an entire data center to test the resiliency of its application. According to Jay Parikh, the company’s head of engineering, the test went smoothly. The data center going offline did not disrupt anybody’s ability to mindlessly scroll through their Facebook feed instead of spending time being a contributing member of society.

    Facebook and other web-scale data center operators, companies that built global internet services that make billions upon billions of dollars, have shifted the data center resiliency focus from redundancy and automation of the underlying infrastructure – the power and cooling systems – to software-driven failover. A globally distributed system that consists of so many servers can easily lose some of those servers without any significant impediment to the application’s performance.

    Read more: Facebook Turned Off Entire Data Center to Test Resiliency

    That’s not to say they’ve abandoned backup generators, UPS systems, and automatic transfer switches. You’ll still see all of those things in Facebook data centers; it’s just that they are no longer the only line of defense.

    Today, Facebook open sourced some of the software tools it has built in-house that help its engineers detect the location of an outage within its infrastructure down to a single cluster of servers within a matter of seconds, isolate the fault, and avoid a wider-scale issue.

    Read more: Why Should Data Center Operators Care about Open Source?

    The tools are part of a system called NetNORAD, which constantly monitors the entire Facebook data center infrastructure for packet loss and latency. Using data analytics, it detects abnormal patterns and triggers alarms, usually within 30 seconds of a fault.
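
    Facebook's post does not spell out the alarm logic in detail, but a minimal sketch of the general idea, aggregating recent per-cluster loss samples over a sliding window and flagging clusters whose average loss crosses a threshold, might look like the following. The 30-second window and 2 percent threshold are illustrative assumptions, not values from NetNORAD.

    ```python
    import time
    from collections import defaultdict, deque


    class LossAlarm:
        """Tracks per-cluster packet-loss samples over a sliding time window
        and flags clusters whose average loss exceeds a threshold."""

        def __init__(self, window_seconds=30, loss_threshold=0.02):
            self.window = window_seconds
            self.threshold = loss_threshold
            self.samples = defaultdict(deque)  # cluster -> deque of (timestamp, loss)

        def add_sample(self, cluster, loss_rate, ts=None):
            now = ts or time.time()
            q = self.samples[cluster]
            q.append((now, loss_rate))
            # Drop samples that have fallen out of the window.
            while q and q[0][0] < now - self.window:
                q.popleft()

        def alarming_clusters(self):
            alarms = []
            for cluster, q in self.samples.items():
                if not q:
                    continue
                avg_loss = sum(loss for _, loss in q) / len(q)
                if avg_loss > self.threshold:
                    alarms.append((cluster, avg_loss))
            return alarms


    if __name__ == "__main__":
        alarm = LossAlarm()
        alarm.add_sample("cluster-a", 0.001)   # healthy
        alarm.add_sample("cluster-b", 0.05)    # 5 percent loss: abnormal
        print(alarm.alarming_clusters())       # [('cluster-b', 0.05)]
    ```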

    “Our scale means that equipment failures can and do occur on a daily basis, and we work hard to prevent those inevitable events from impacting any of the people using our services,” Petr Lapukhov, a network engineer at Facebook, wrote in a blog post. “The ultimate goal is to detect network interruptions and automatically mitigate them within seconds. In contrast, a human-driven investigation may take multiple minutes, if not hours.”

    The components of NetNORAD that Facebook is open sourcing are the pinger and responder, a system in which a set of servers (pingers) continuously reaches out to all servers in Facebook data centers and generates packet loss and latency data from the responses it receives, and fbtracert, a tool that automatically determines the exact location of a fault.
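
    As a rough illustration of the pinger's job, probing a set of targets and deriving loss and round-trip time from the echoed responses, here is a heavily simplified sketch. It is not Facebook's implementation: the responder port, probe count, and timeout are hypothetical placeholders, and the real system sends probes at far higher rates and aggregates results across many pingers.

    ```python
    import socket
    import time

    RESPONDER_PORT = 31338   # hypothetical port a responder process listens on
    PROBES_PER_TARGET = 10
    TIMEOUT_SECONDS = 0.5


    def probe_target(host):
        """Send UDP probes to one target; return (loss_rate, avg_rtt_ms)."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(TIMEOUT_SECONDS)
        received, rtts = 0, []
        for seq in range(PROBES_PER_TARGET):
            payload = f"probe-{seq}".encode()
            start = time.monotonic()
            try:
                sock.sendto(payload, (host, RESPONDER_PORT))
                data, _ = sock.recvfrom(1024)   # responder echoes the payload back
                if data == payload:
                    received += 1
                    rtts.append((time.monotonic() - start) * 1000.0)
            except socket.timeout:
                pass                             # counts as a lost probe
        sock.close()
        loss_rate = 1.0 - received / PROBES_PER_TARGET
        avg_rtt = sum(rtts) / len(rtts) if rtts else None
        return loss_rate, avg_rtt


    if __name__ == "__main__":
        for target in ["10.0.0.1", "10.0.0.2"]:   # placeholder addresses
            loss, rtt = probe_target(target)
            print(f"{target}: loss={loss:.0%} avg_rtt_ms={rtt}")
    ```

    Feeding the per-target loss and latency figures from many such pingers into an alarm pipeline like the one sketched above is, in broad strokes, how the data for fault localization gets generated.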

    For more details on how NetNORAD works, read Lapukhov’s blog post.

    10:50p
    Thinking Different: Data Centers and IoT

    The Internet of Things (IoT) has gone from a concept few people grasped clearly to a tangible, living and breathing phenomenon on the verge of changing the way we live, and the way data centers strategize for the future.

    At the very least, data center managers had better develop new strategies for handling the IoT and the flood of data that could overwhelm current systems.

    What does that data volume look like? In the past five years, traffic volume has already increased five-fold, and according to a 2014 Cisco study, annual global IP traffic will pass one zettabyte and reach 1.6 zettabytes by 2018. Non-PC devices, expected to outnumber the global population two to one by that year, will generate more than half of that traffic.

    The global growth of data is giving rise to new information networks and demanding new ways for information to be processed and managed. With no sign of this growth tapering off, new approaches and services are needed to ensure businesses can cope with, and reap the benefits of, all this new information.

    The nature of this data and its processing demands are also giving rise to new potential architectures. While the Googles of the world primarily use large mega-facilities to address these requirements, that approach isn't practical for the vast majority of operators.

    That’s what Chris Crosby, CEO of Compass Datacenters, will address in his upcoming Data Center World session, “Thinking Different: Data Centers and IoT.”

    “From a data center perspective, the IoT translates into billions of tiny packets from billions of devices. Just a few short years ago, we would have referred to these as Denial of Service attacks, and now data center professionals must develop infrastructures that are able to process this information in real time or it loses its value,” Crosby explained.

    By incorporating smart technologies, data centers will be able to track the real-time status of components and environmental measurements to keep operations flowing smoothly. Data centers will also have more platforms available to them, including IoT systems that integrate data from many different sources to keep computing facilities functioning at optimum capacity. That makes it key to keep lag time to a minimum.

    For example, he referred to how a company’s IoT-based, just-in-time inventory system would suffer serious consequences if there were very long delays in its ability to track the location and volume of component parts.

    To prevent such delays, Crosby sees growth in more stratified architectures, in which data and the processing it requires move as close to user groups as possible via edge and (small but growing) micro data centers.

    “IoT is outstripping the capability of many in-place data centers and driving the evolution to more stratified architectures,” he said.

    You will walk away from Crosby's presentation with answers to the following questions:

    • Do we need bigger data centers?
    • What is stratification?
    • Where is the “Edge”?
    • How local is local?
    • What will we see in the next 24 months?

    Data Center World runs from March 14-18 at the Mandalay Bay in Las Vegas. For more information on the event and a detailed look at the educational sessions, visit datacenterworld.com.

    This first ran at http://www.afcom.com/news/iot-changing-way-live-process-data/

    11:56p
    Equinix Mulling Purchase of CenturyLink and Verizon Data Centers

    Equinix is one of the companies involved in talks with Verizon and CenturyLink about potentially buying data center assets the two big US telcos are looking to offload, as they hit the brakes on expansion into the cloud services market.

    “They’re talking to several people; we are one of them,” Equinix CEO Stephen Smith said on the company’s fourth-quarter earnings call Thursday.

    Smith did not name Verizon and CenturyLink specifically but said there were a “couple big telcos” that have said publicly that they were interested in divesting data center assets. The only two big telcos that have said this publicly are Verizon and CenturyLink.

    There are other telcos that have not confirmed publicly such divestiture plans, and Equinix is “in exploratory stages” with them as well, Smith said.

    It’s likely that AT&T is one of those other telcos. The company has been looking to offload about $2 billion worth of data center assets since at least early last year, when Reuters reported on these plans citing anonymous sources. AT&T offloaded at least some of those assets last year, handing its managed hosting business, including equipment and access to data centers, over to IBM.

    Echoing what has been clear for some time now, Smith said telcos that acquired big cloud and data center services companies several years ago in hopes of building out successful cloud businesses have found it difficult to compete with the biggest cloud infrastructure providers: Amazon, Microsoft, and Google.

    “There’s a lot of activity [by telcos looking to offload data center assets], mostly pressured and driven by the big Infrastructure-as-a-Service companies that are going as fast as they are,” he said. “Now, [telcos] are trying to get out of it, and their data centers go with it.”

    Read more: Who May Buy Verizon’s Data Centers?

    No big telco has actually said it was getting out of the cloud services business, but the attempts to get rid of assets that support the bulk of those businesses are indicative of their health.

    CenturyLink executives said the company would continue providing cloud services, but it would look for an alternative data center strategy for hosting them. Verizon has been vague on this front, but its CFO Francis Shammo said in January the company was undertaking an “exploratory exercise” to see if selling its data center assets would make business sense.

    Read more: Why CenturyLink Doesn’t Want to Own Data Centers

    News reports saying that Verizon was looking to sell some or all of its data centers have been coming out since November, but they relied on anonymous sources, and the company has avoided commenting on them.

    Verizon has already killed at least one of its cloud infrastructure services. Last week, the company sent users of its Public Cloud compute service a notice that they had two months to move their data elsewhere, after which it would shut the service down.

    As we reported earlier, while there are some data center assets in Verizon’s massive portfolio that many players in the industry would find very attractive – primarily data centers that came with its $1.4 billion acquisition of Terremark in 2011 – it will be difficult for the telco to find a buyer who would agree to absorb the entire portfolio in bulk.

    “There are certain assets in those portfolios that we would be interested in,” Smith said, referring to the two big telcos’ data center portfolios. “But there are many of those assets that wouldn’t make any sense for us.”

    Equinix reported $2.7 billion in revenue for 2015 – up 12 percent from the previous year. Its net income for the year was $188 million.
