Data Center Knowledge | News and analysis for the data center industry
Wednesday, August 12th, 2015
12:00p
Rackspace to Provide Managed AWS Services Before Year’s End

Rackspace is building a managed services product for Amazon Web Services, its rival in the public cloud market where Rackspace has struggled to grow.
Taylor Rhodes, CEO of the Windcrest, Texas-based company, said during his remarks on its second-quarter earnings call Tuesday that he expected to roll out managed AWS services later this year. Rhodes also said that Rackspace’s earlier expectations of a rebound in its public cloud business in Q2 did not materialize, and that “growth remained slow throughout the quarter.”
There were earlier reports that cited anonymous sources who said the company was working on an AWS offering, but Rackspace hadn’t officially confirmed the plans until Tuesday.
The company, which has a substantial data center fleet across the US and a sizable footprint in Europe, has invested a lot in its public Infrastructure-as-a-Service offerings. Having been one of the original creators of OpenStack, Rackspace based its public cloud infrastructure on the open source cloud software. It has also designed its own servers to support both cloud VM and bare-metal cloud infrastructure services, building on Open Compute server specs.
But competing with AWS and its rivals Microsoft Azure and Google Cloud Platform is difficult without the global scale and deep pockets of the internet giants, who have been continuously slashing prices of their cloud services as they battle for IaaS market share.
In response, Rackspace has pivoted to focus more on managed cloud services, leveraging its extensive services organization called Fanatical Support. The pitch, put simplistically, is while you can easily buy cloud services from the giants online, it takes a lot of expertise to set up a proper cloud infrastructure for an enterprise, and Rackspace has the expertise to do just that.
The company already provides managed services for Azure, including support, management, and monitoring. It offers managed services around VMware-based private clouds. It doesn’t stop at the basic infrastructure level, providing services for cloud software solutions, including Microsoft’s Office 365 and SharePoint, Google Apps for Work, and Skype for Business. Adding managed AWS services to the lineup will serve to expand this part of the business substantially.
Its focus on managed cloud has earned Rackspace the leading spot on Gartner’s Magic Quadrant for cloud-enabled managed hosting in North America and one of the leading spots on the European quadrant. Its main challengers in North America include Datapipe, CenturyLink, IBM, and Verizon. Verizon and CenturyLink are also its main competitors in Europe, in addition to Interoute, BT, Claranet, and Colt.
Rackspace’s public cloud hasn’t done too well, but the company’s overall revenue in the second quarter grew 11 percent year over year.
There were also several key strategic developments during the quarter: Rackspace began providing Fanatical Support for both Azure and Office 365, and it announced a major partnership with Intel around OpenStack to develop the open source cloud technology further and to promote its adoption by enterprises.

3:00p
Equity Group Takes Veritas Private in $8B Deal

After a year of preparation, Symantec has announced that it will spin Veritas out as an independent private company as part of a deal valued at $8 billion.
Soon to be owned by The Carlyle Group, a private equity firm, Veritas will be led by Silicon Valley veterans Bill Coleman as CEO and Bill Krause as chairman. Coleman was one of the original founders of BEA Systems, while Krause is a former CEO of 3Com.
John Gannon, Symantec executive VP and Veritas general manager, said that as a private company Veritas will not only be able to implement its future product strategy more aggressively, but will also more than likely look to acquire companies that complement its core information management business.
“We’ll be out of the spotlight of Wall Street that brings a lot of scrutiny every quarter,” said Gannon. “That should enable us to be more nimble and perhaps even bolder.”
Veritas has previously outlined a product strategy centered on a modern information management platform that starts with its NetBackup software running on-premise and in the cloud, then extends to include analytics and a software-defined data management platform. Once clear of Symantec, Gannon said, Veritas expects to be able to implement that product strategy more aggressively.
From an operations perspective, Veritas has until October 3 to set up its own ERP systems that operate independently of Symantec.
While Symantec had originally hoped to tightly couple security and information management, in reality, the individuals inside IT organizations responsible for acquiring those technologies oversee fundamentally different domains. After acquiring Veritas in 2005, Symantec let it be known last year that it was seeking to spin the unit out as an independent entity.
Of course, competition in the information services space is nothing if not fierce. Starting with backup and recovery, there are more competitors in this space than ever. Meanwhile, a race to develop the next generation of information management systems based on metadata repositories that can span everything from small to Big Data is already on.
On the plus side, Gannon noted that as an independent entity Veritas will be led by an experienced executive management team made up of Veritas veterans and industry luminaries who have remained active as venture capital investors across the information management sector.

3:30p
Four Questions to Ask Prospective Storage Vendors

Darnell Fatigati is Senior Segment and Product Marketing Manager for SolidFire Inc.
Every service provider’s business goals evolve over time. They may shift focus to particular business outcomes, like providing a better application experience for their customers or offering new performance-based services. Or perhaps their goal is to attract new customers and improve customer satisfaction while keeping costs at a minimum and maximizing revenue.
However, when the time comes for service providers to re-evaluate their existing storage vendor or to find a new one, it’s the technical functionality, not the business objectives, that often steals the spotlight. Performance, Quality of Service (QoS), data protection, and other technical specifications are a critical part of an RFI, but few service providers probe vendors to address specific business outcomes, which is just as critical, if not more so. Because storage is at the core of any cloud infrastructure, service providers have the right to know and choose the platform that best meets both their functionality needs and their intended business outcomes. To achieve this, every service provider should ask prospective vendors these four insightful questions:
How Can the Vendor Help a Service Provider Monetize Storage?
If a provider’s main business goal is to maximize the revenue from its data center space while keeping costs to a minimum, each vendor has to be evaluated in terms of its contribution to that end goal. A storage vendor’s features and functionality can help create new revenue streams from new services that customers value. For example, if a customer’s applications need predictable performance, it’s critical to choose a storage product that allows the provider to manage performance instantly. By being able to guarantee performance to customers, a provider can tap into new markets that were previously off-limits because of performance unpredictability, such as mission-critical business applications, analytics, VDI, high-performance computing, e-commerce, and performance databases.
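As a rough illustration of what managing performance instantly can look like in practice, the sketch below sets a per-volume QoS band against a SolidFire cluster’s Element JSON-RPC API from Python. The endpoint path, the ModifyVolume method, and the minIOPS/maxIOPS/burstIOPS field names reflect the Element API as commonly documented and should be verified against the vendor’s current documentation; the management IP, credentials, and volume ID are placeholders.

import requests

# Placeholders: cluster management virtual IP, API version, and admin credentials.
ELEMENT_API = "https://192.0.2.10/json-rpc/8.0"
AUTH = ("admin", "admin-password")

def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Adjust a volume's guaranteed IOPS band so a tenant gets predictable performance."""
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # performance floor the tenant is guaranteed
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst ceiling
            },
        },
        "id": 1,
    }
    resp = requests.post(ELEMENT_API, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Example: guarantee a database volume 1,000 IOPS, cap it at 5,000, allow bursts to 8,000.
    print(set_volume_qos(volume_id=42, min_iops=1000, max_iops=5000, burst_iops=8000))

A provider could expose a call like this behind its own service catalog, so that a customer moving a workload to a higher performance tier triggers the QoS change without any data migration.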
How Can the Vendor Help Streamline Operations?
Enhanced efficiency is a critical aspect of a service provider’s success. Value-accretive services sold to business customers must be based on repeatable operational tasks that can be automated, orchestrated, and integrated wherever possible, while still accommodating each business’s individual needs. When evaluating storage vendors, providers must assess their ability to help deliver standard, repeatable services; their automation and orchestration capabilities; their ability to meet individual application needs and to scale performance and capacity non-disruptively; and, finally, their ability to ease operational pains like performance troubleshooting.
How Will the Vendor Reduce Technology and Business Risk?
When purchasing storage, there are two main areas of risk: financial and technological. To mitigate financial risk, service providers should ask the vendor about its capacity management and scale model. Purchasing too much capacity up front, for example, can threaten a provider’s profitability, so it is critical that the vendor allow capacity to scale up and down as needed. To reduce technological risk, service providers should consider whether the vendor forces migrations and redevelopment of automation, orchestration, and integration when moving from one version to another. Furthermore, it’s essential to assess the vendor’s ability to truly prepare a provider’s business for the next-generation data center and to offer innovative storage features, like QoS, automated self-healing, multiple replication methods, and secure multi-tenancy.
How Will the Vendor Help a Provider Win More Customers and More Service Revenue from Existing Customers?
To help providers win more customers with their technology, storage vendors must understand the wider market trends and opportunities and use that knowledge to help providers shape their storage offerings and go-to-market approach. For example, the right vendor should understand how to successfully sell storage services to both enterprises and SMBs, help the provider assess the competitive landscape, and develop GTM strategies, including how they should price, produce and launch storage services.
When evaluating storage vendors, service providers often ask the typical technical questions, to which prospective vendors respond with cut-and-paste answers. But it’s the unconventional yet meaningful questions that uncover a vendor’s true ability to help the provider meet unique business needs. Only a vendor that intimately understands a provider’s business, its customers, the market in which it operates, and its supply chain can help the provider put its business outcomes back in the spotlight.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

3:58p
Docker Content Trust Expected to Fix Docker Security Issues

Looking to provide IT operations teams with greater confidence in Docker security, Docker, the company, today unveiled a mechanism that promises to make it simpler for developers to attach digital signatures to Docker images.
Available now, Docker Content Trust makes use of Notary and The Update Framework (TUF), frameworks designed to securely distribute content and software updates. The trust ensures that central Docker commands such as “push,” “pull,” “build,” “create,” and “run” will only operate on images that either have content signatures or explicit content hashes.
Docker security has been one of the most often cited barriers to adoption of the popular system for splitting software applications into Linux containers to make those applications easy to deploy on any type of infrastructure.
Diogo Mónica, Docker security lead, said the trust is specifically designed to be both simple to implement and difficult to compromise.
“We think we went above and beyond on this,” said Mónica. “We’ve made an effort to leapfrog everything that is out there.”
Specifically, Docker Content Trust relies on two distinct keys: an offline (root) key and a tagging (per-repository) key, both generated and stored client-side. Each repository has its own unique tagging key, which is invoked any time new content is added to or removed from the repository. Because the tagging key is kept online, Docker acknowledges that key is vulnerable to compromise. With Docker Content Trust, the developer can securely rotate a compromised tagging key by using the root key, which ideally would be stored securely offline.
In addition, the trust can generate a Timestamp key that provides protection against replay attacks, which would allow someone to serve signed but expired content. The Timestamp key can be used to make sure that any content that is older than two weeks does not get served without a new set of keys being generated, said Mónica.
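For a sense of what enabling the trust looks like from the client side, here is a minimal sketch that shells out from Python with the DOCKER_CONTENT_TRUST environment variable set. It assumes a Docker 1.8 or later CLI is installed and logged in to the target registry; the image names and registry address are placeholders.

import os
import subprocess

# Copy the current environment and switch on Docker Content Trust for child processes.
# With DOCKER_CONTENT_TRUST=1, pull/push/build/create/run refuse image tags that lack
# signed trust data.
env = dict(os.environ, DOCKER_CONTENT_TRUST="1")

# Pulling a signed tag succeeds; pulling an unsigned tag fails with a trust error.
subprocess.check_call(["docker", "pull", "alpine:latest"], env=env)

# Pushing with trust enabled signs the tag. On the first signed push, the client
# generates the offline (root) key (once per client) and a per-repository tagging
# key locally, and prompts for passphrases to protect them.
subprocess.check_call(["docker", "push", "registry.example.com/myorg/myimage:1.0"], env=env)

The same environment variable can be exported in a shell profile or a CI job so that every image an operations team deploys has been verified against its publisher’s signature.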
By making the digital signatures simple to generate, Mónica said, Docker is aiming for ubiquitous adoption of digital signatures as a means to make Docker images more secure than any other vehicle for delivering software. From an IT operations perspective, the Docker Content Trust means that IT organizations now have a way to easily validate what Docker images actually get deployed in a production environment.
Whether that capability results in broader adoption of Docker containers and images in production environments naturally remains to be seen. But one thing is certain: anything that reduces the burden of securing production applications in the enterprise is likely to be well received by internal IT operations teams, which are generally held more accountable for security than developers.

4:19p
HotLink Enables Azure, OpenStack Management Through vCenter

As a provider of a tool that extends the reach of VMware vCenter into other IT environments, HotLink has been giving IT organizations a way to manage multiple types of virtual machines running on-premise or in the cloud. The company is now extending that reach to include Microsoft Azure and OpenStack clouds.
HotLink already provides support for Amazon Web Services, VMware’s vCloud Air, and Microsoft Hyper-V, Citrix XenServer, and Red Hat KVM hypervisors. Jerry McLeod, VP of business development for HotLink, said this week’s release of HotLink Cloud Management Express makes it simpler for IT organizations that have standardized on VMware vCenter to manage all those platforms from the same management console.
The solution is designed to make it easier to extend vCenter tools and workflows, such as orchestration, self-service portals, PowerCLI scripts, and automation across hybrid IT environments.
While most of these environments are managed in a semi-autonomous manner today, McLeod said, there has been a sharp pickup in the number of IT organizations starting to embrace truly hybrid cloud computing. As most of those organizations have already standardized on vCenter to manage their on-premise environments, they may benefit from the ability to use the same management system for multiple clouds.
“IT has been siloed around on-premises and the cloud, when it was mainly application development and testing running in the cloud, and there was not a lot of resources being consumed in the cloud,” said McLeod. “Now that production applications are showing up, the internal IT organizations are getting a lot more involved.”
Now that many organizations have adopted a “cloud-first” approach to IT, the pressure to find ways to unify the management of hybrid clouds has increased substantially.
The features include auto-discovery of cloud-based resources running on- and off-premise within the context of a vCenter inventory tree; the ability to continuously monitor and analyze how those resources are being consumed; and the ability to provision VMs and edit their properties. It also provides a single console for managing alarms, alerts, and change management, as well as a way to track who accessed what controls when across hybrid platforms.
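To give a sense of the kind of vCenter inventory access a vCenter-centric management layer builds on, here is a minimal sketch that uses the open source pyVmomi library to walk a vCenter inventory tree and list VMs. This is a generic vSphere API example, not HotLink’s product API; the host name and credentials are placeholders, and the unverified SSL context is only for lab environments with self-signed certificates.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: vCenter address and credentials.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    # Build a container view over the whole inventory tree, filtered to virtual machines.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        summary = vm.summary
        print(summary.config.name, summary.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)

The article’s description suggests HotLink surfaces discovered AWS, Azure, and OpenStack resources inside this same inventory tree, so administrators see cloud workloads alongside their on-premise VMs.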
Of course, it’s not clear to what degree vCenter will be the center of hybrid cloud management. But given the installed base of IT organizations that have already invested heavily in VMware’s technology, it’s a management platform that is sure to be in the enterprise IT mix.

6:16p
Crackdown 3 Uses Cloud to Compute Massive Amounts of Destruction 
This article originally appeared at The WHIR
An upcoming Xbox One game is breaking the limits of onboard hardware by passing it off to cloud servers. Crackdown 3, due for release in 2016, employs Cloudgine middleware and Microsoft Azure to handle compute-intensive game components such as physics and Artificial Intelligence that are crucial to the gaming experience.
Among the most talked-about features of Crackdown 3 is that everything is subject to physics modeling and can be destroyed. For instance, buildings can topple into one another.
Dave Jones, Crackdown 3 creative director and Reagent Games founder, told Ars Technica that on-screen destruction draws server resources needed to compute it. “You can see that the debris falling to the ground is taking up the equivalent of an extra Xbox One worth of power,” he said. “The console takes that extra power from the server when it needs it…We’re doing a lot of destruction for destruction’s sake here, but this is a tremendous technology test bed, which opens up a lot of new areas of multiplayer gaming and makes games much more physical.”
In Jones’ multiplayer demonstration of Crackdown 3, he was able to bolster compute up to nine times the power of a single Xbox One, and the team has been able to increase this to around 13 times.
This isn’t the first time Xbox One games have looked to the cloud to augment their computation capabilities. Xbox One shooter Titanfall used Azure to drive its AI, and Formula 1 racing game Forza 5 used it to develop a sophisticated model of a player’s driving characteristics that’s used to create a realistic AI driver that can be raced against.
However, leveraging the parallel computing capabilities of cloud computing could be the next stage in providing richer gaming experiences that would never be possible on a standalone system.
This first ran at http://www.thewhir.com/web-hosting-news/crackdown-3-uses-cloud-to-compute-massive-amounts-of-destruction

7:00p
Hootsuite Moves App Directory Marketplace from AWS to SoftLayer Cloud 
This article originally appeared at The WHIR
Hootsuite is ditching AWS for IBM SoftLayer for its App Directory social application marketplace as part of an extension of the existing partnership between the two companies, according to an announcement on Tuesday. While Hootsuite and its App Directory currently run on AWS, App Directory’s migration to SoftLayer cloud infrastructure is scheduled for the fourth quarter of this year.
As part of the partnership the companies will also integrate their outreach programs for higher education. According to ZDNet, IBM’s Academic Initiative, which is a program that gives academics 12 months access to Bluemix for research, will combine with Hootsuite’s Higher Education Program. Providing cloud and analytics training alongside social media solutions gives students the chance to develop a practical, marketable skill-set, according to the partners, since both are leveraged together in so many real-world use cases.
Running App Directory on SoftLayer allows Hootsuite to provide clients with dedicated, scalable infrastructure that is provisioned and managed with an easy-to-use toolkit. The announcement also touts the advantages of SoftLayer’s data privacy and localization for international customers, and the enhanced ability of Hootsuite to deliver new features and functions and respond to customers rapidly and reliably.
“IBM Cloud offers high performance, granular control and flexibility. When you couple that with its globally integrated footprint, we will have the ability to move data between data centers efficiently which will provide resiliency, flexibility and control,” said Aaron Budge, vice president of operations and IT at Hootsuite. “We have had a great relationship with IBM for more than two years and are excited about expanding our relationship with new product integrations and the ability to leverage IBM technology.”
Hootsuite was previously integrated with IBM Connections, and it added IBM Silverpop into its App Directory recently. IBM will also promote its 2016 Eighth Global Hackathon Series exclusively on Hootsuite, as each company leverages its partner’s specialty.
SoftLayer introduced NVIDIA Tesla K80 dual-GPU accelerators to its Dallas data center in July to begin offering supercomputing capabilities in the cloud.
This first ran at http://www.thewhir.com/web-hosting-news/hootsuite-moves-app-directory-marketplace-from-aws-to-softlayer-cloud

7:29p
Device42 Integrates DCIM Software With Enlogic PDUs

Device42 has integrated its data center infrastructure management software with power distribution units by Enlogic Systems.
The integration brings power monitoring data into the DCIM software, enabling auto-discovery of Enlogic’s PDUs once they’re installed in a data center, as well as real-time power consumption data, alerting, and outlet control through Device42’s console.
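As a rough sketch of how such DCIM data can also be pulled out programmatically rather than viewed only in the console, the snippet below queries a Device42 appliance over its REST API with the Python requests library. The /api/1.0/pdus/ path and the response fields are assumptions modeled on Device42’s REST API conventions, and the appliance URL and credentials are placeholders; check the product documentation for the exact endpoints.

import requests

# Placeholders: Device42 appliance address and API credentials.
BASE_URL = "https://device42.example.com/api/1.0"
AUTH = ("apiuser", "apipassword")

def list_pdus():
    """Fetch the PDU records the appliance knows about (e.g., auto-discovered Enlogic units)."""
    resp = requests.get(BASE_URL + "/pdus/", auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for pdu in list_pdus().get("pdus", []):
        # Field names here are illustrative; real records may differ.
        print(pdu.get("name"), pdu.get("pdu_model"), pdu.get("rack"))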
To a great extent, the usefulness of a DCIM software product depends on the breadth of devices and management systems in a data center it can talk to. Vendors like Device42 compete by expanding the range of products their software works with, whether by striking partnerships, making acquisitions, or doing internal development work.
Device42 is a niche player in the DCIM software market. According to Gartner, it targets smaller enterprises and data centers (between 1,000 square feet and 3,000 square feet), but its software is scalable to larger facilities.
At its core, the product has been focused on IT asset management. The company prices its power and thermal monitoring modules separately.
Gartner included Device42 in the “niche” player section of its Magic Quadrant for DCIM software in 2014, together with FieldView, ABB, Optimum Path, Geist, Modius, and Rackwise.

8:00p
Data Security to Drive More and More Data Center Strategy Decisions

With the line where one IT service ends and another begins getting blurrier by the day, IT organizations more than ever need to have a deeper understanding of what types of data and application workloads should run on-premise, in a colocation facility, or in the cloud.
Mark Evanko, principal engineer for BRUNS-PAK, an independent designer and builder of data centers, says the biggest issue in data center strategy decisions today is determining the level of security that needs to be applied to data. If the provider of colocation services has to provide data security, the per-square-foot cost of data center space tends to increase by several orders of magnitude. In fact, the cost gets so high that it probably makes more financial sense to keep that data on-premise.
That doesn’t mean that IT organizations shouldn’t make use of colocation services or the cloud, says Evanko; it just means that they need to understand who is actually responsible for securing that data.
He is scheduled to speak at the Data Center World conference in National Harbor, Maryland, this September about the implications of operating what now amount to virtual data centers that can be distributed almost anywhere on the planet.
Making matters even more complex is legislation winding its way through Congress that would make organizations more accountable for breaches of personally identifiable data by requiring them to pay actual damages to individuals affected by such a breach, he says. Once that legislation passes, Evanko predicts that more sensitive data will be heading back into on-premise data centers.
“Liability will soon be extended down to the colocation provider along with everybody else that touches that data,” says Evanko. “Most colocation providers don’t automatically cover customers if their data is either stolen or corrupted.”
In all, there are 16 elements IT organizations should consider when deciding whether to build a data center, rent colocation space, or move data into the cloud, according to Evanko. All three data center strategy options can make sense as long as some consideration is given not only to the criticality of the data that will be housed in those facilities, but also to who ultimately will be held accountable for its security.
For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Mark’s session titled “Data Center Internal Facility, Cloud, Colocation, and Container Working Together.”