Data Center Knowledge | News and analysis for the data center industry
Thursday, July 16th, 2015
1:00p |
Advanced Analytics Helps CenturyLink in Data Center Capacity Planning Bare-metal cloud servers are a best-of-both-worlds alternative to dedicated hosting and VM-based cloud infrastructure services in some cases, but they do complicate data center capacity management for the service provider.
CenturyLink rolled out its bare-metal cloud service this morning, but the IT capacity planning strategy for it will have to be devised over time, as the company gets a clearer picture of demand. The best it could do initially was to create a capacity buffer to absorb a potential spike in demand, Richard Seroter, vice president of cloud product management at CenturyLink, said.
“We did start off with some excess capacity to make sure we have some scale,” he said. “We don’t assume we know everything.”
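As a toy illustration of the buffer idea, with entirely made-up numbers: a provider might size its initial fleet so that a demand spike of a chosen factor still fits under a target utilization.

```python
# Toy capacity-buffer sizing; all numbers are hypothetical.
expected_peak_servers = 200   # forecast steady-state peak demand
spike_factor = 1.5            # demand spike the provider wants to absorb
target_utilization = 0.8      # headroom to keep even during the spike

required_capacity = expected_peak_servers * spike_factor / target_utilization
print("Provision roughly %d bare-metal nodes" % round(required_capacity))
```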
Bare-metal cloud servers are provisioned the same way cloud VMs are, through a web interface, and billed by the hour. They are for applications that need the performance of physical servers or that are simply not built to run in VMs.
Numerous CenturyLink competitors have had bare-metal services for some time now, including IBM SoftLayer and Rackspace.
They provide the performance of dedicated servers but with the elasticity of cloud services. But creating that elastic-capacity service for the customer using physical machines requires a lot of intricate IT capacity planning by the provider.
Home-Grown Analytics Tools Used in Capacity Planning
Future capacity management decisions for CenturyLink’s new service will be based on usage data processed by what Seroter described as fairly sophisticated data analytics tools. CenturyLink has a team of data scientists on staff, and part of their job is creating and refining models for IT capacity planning to support its services.
Their tools are primarily home-grown applications that use open-source tools like Spark, Kafka, and Cassandra. They use these tools to monitor the entire platform, crunching through hundreds of millions of data points per week to understand what is going on in real time and to make infrastructure-management decisions.
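The article does not detail CenturyLink's pipeline, but a minimal sketch of that kind of Spark Streaming job, reading utilization events from Kafka and rolling them up into Cassandra, might look like this (the topic, keyspace, and table names are hypothetical):

```python
from datetime import datetime

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from cassandra.cluster import Cluster

# Read utilization events from Kafka ("capacity-metrics" topic, comma-separated
# cluster_id,cores_used,cores_total), roll them up per minute in Spark
# Streaming, and persist the result to a Cassandra table for later analysis.
sc = SparkContext(appName="capacity-rollup")
ssc = StreamingContext(sc, 60)  # one-minute micro-batches

stream = KafkaUtils.createDirectStream(
    ssc, ["capacity-metrics"], {"metadata.broker.list": "kafka:9092"})

def parse(message):
    cluster_id, used, total = message.split(",")
    return cluster_id, (float(used), float(total))

usage = (stream.map(lambda kv: kv[1])        # drop the Kafka message key
               .map(parse)
               .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])))

def save_batch(rdd):
    session = Cluster(["cassandra"]).connect("capacity")
    for cluster_id, (used, total) in rdd.collect():
        session.execute(
            "INSERT INTO utilization (cluster_id, ts, pct_used) VALUES (%s, %s, %s)",
            (cluster_id, datetime.utcnow(), used / max(total, 1.0)))

usage.foreachRDD(save_batch)
ssc.start()
ssc.awaitTermination()
```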
CenturyLink delivers all of its cloud services through a single platform, be they public or private cloud VMs, bare-metal cloud servers, or AppFog, the Cloud Foundry-based Platform-as-a-Service offering it also launched today.
Faster Server Deployment Through Automation
What also helps is the amount of automation the company has built into standing up new physical servers in its data centers to make the process faster. The only manual labor involved is installing servers into the racks and plugging in the cables. The platform recognizes and configures the hardware automatically.
This approach drives hardware-purchasing decisions. From servers to switches, API maturity is CenturyLink’s number-one priority for working with hardware vendors, Seroter said. The hardware has to be managed by the company’s own software, built to make expanding capacity quicker.
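As a purely hypothetical sketch of that workflow, not CenturyLink's or any vendor's actual API: automation might poll a hardware-management endpoint for newly cabled, unprovisioned machines and hand them to the platform for configuration.

```python
import requests

# Hypothetical hardware-onboarding loop. The endpoint paths, query parameters,
# and fields below are invented for illustration only.
BMC_API = "https://hardware-mgmt.example.com/api/v1"

def onboard_new_nodes():
    discovered = requests.get(BMC_API + "/chassis?state=unprovisioned").json()
    for node in discovered:
        # Hand the freshly racked machine to the provisioning platform.
        requests.post(BMC_API + "/chassis/%s/configure" % node["id"], json={
            "firmware_profile": "baseline-2015.07",
            "network_profile": "bare-metal-pod",
        })
        print("queued %s for automatic configuration" % node["serial"])

if __name__ == "__main__":
    onboard_new_nodes()
```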
The initial deployment of bare-metal cloud servers at its data centers in Sterling, Virginia, and Slough, UK, consists of HP’s Apollo hardware. But the service will not be limited to HP gear.
‘Hardware Matters Less and Less’
While there will still be room for dedicated hosting, CenturyLink expects its bare-metal cloud service to eventually become a “non-trivial part of our revenue,” Seroter said. The company still sells a lot of customized dedicated hardware, but things are generally moving toward standardization.
“The smart CIO is realizing the hardware matters less and less, and the service matters more and more,” he said.
Because service providers charge for bare-metal cloud servers by the hour, customers pay only for capacity they use and may ultimately spend less than they would on a dedicated deployment, Seroter explained. It isn’t a clear-cut comparison, however, since dedicated-server deployments differ from one another, and since there’s usually a discount for signing a long-term hosting contract.
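To make the trade-off concrete, here is a rough comparison with hypothetical prices; it ignores long-term-contract discounts, which is exactly why the comparison is not clear-cut.

```python
# Hypothetical hourly bare-metal rate versus a flat dedicated-server contract.
hourly_rate = 1.20          # bare-metal server, per hour (made-up price)
dedicated_monthly = 600.00  # dedicated server on contract (made-up price)

for hours_used in (200, 450, 730):   # part-time, heavy, and always-on usage
    bare_metal_bill = hours_used * hourly_rate
    cheaper = "bare metal" if bare_metal_bill < dedicated_monthly else "dedicated"
    print("%3d hrs: bare metal $%.0f vs dedicated $%.0f -> %s wins"
          % (hours_used, bare_metal_bill, dedicated_monthly, cheaper))
```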
Beyond commercial software that cannot run on cloud VMs, such as Oracle databases without dynamic licensing models, a new approach to application deployment that can especially benefit from bare metal is on the rise. An application running in Docker containers may not need a hypervisor, and “developers are showing that they’re using bare metal for that,” he said.
“It’s not always what you can’t do in virtual; it’s sometimes what you don’t need to do in virtual. In some cases, it’s a [performance] tax you don’t need to pay.”
3:00p |
Mesosphere Intros SDK for Data Center Operating System Aiming to make its Data Center Operating System easier to use for software developers, Mesosphere launched a software development kit and developer program.
Based on the open source cluster management system Apache Mesos, Mesosphere’s DCOS enables developers to write applications that can run on distributed data center infrastructure without being distributed-systems experts. It mimics the way giants like Google or Twitter use their infrastructure.
“Because it’s built with Apache Mesos at its core, the DCOS takes care of common distributed computing headaches such as job scheduling, high availability, resource isolation and networking — meaning developers don’t have to,” Derrick Harris, senior research analyst at Mesosphere, wrote in a blog post Wednesday announcing the SDK.
DCOS, the company claims, lets you install an application or a service in a data center or a cloud environment “as easily as you install an app on your laptop or smartphone.” Those are services like HDFS (Hadoop Distributed File System), the Kafka messaging system, or Cassandra, the popular open source NoSQL database.
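Neither the announcement nor this post includes code, but a minimal sketch of scripting those installs, assuming the dcos CLI is installed and already pointed at a running DCOS cluster, might look like this:

```python
import subprocess

# Install distributed services from the public Mesosphere package universe.
# Assumes the `dcos` command-line tool is configured against a live cluster.
for package in ("cassandra", "kafka", "hdfs"):
    subprocess.check_call(["dcos", "package", "install", package])
```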
Mesos was born at the University of California, Berkeley, when Benjamin Hindman, one of its creators, was a PhD student there. Hindman now leads engineering efforts at Mesosphere, which he joined last year after four years at Twitter, where he oversaw the implementation of Mesos in the social network’s data centers to reduce downtime.
In another example of a high-profile company using the open source cluster management system, Mesos orchestrates the underlying infrastructure for an internal Platform-as-a-Service Apple developers use to work on Siri, the natural-language interface for the iPhone.
Mesosphere rolled out DCOS through an early-access program in late 2014 and announced general availability only last month. According to Harris, the user base has been growing quickly, and the company hopes the release of the SDK and the developer program will accelerate adoption.
DCOS can be used to add distributed services at companies that already use it. Developers can also use it to write entirely new applications, such as databases, file systems, stream processors, or monitoring tools, all of which automatically gain the ability to scale across clusters of servers.
3:30p |
Top Business Issues When Moving to the Cloud Jeff Aden is Co-founder and EVP at 2nd Watch.
When planning a move to the cloud, CIOs often worry about whether and how legacy applications will migrate successfully, what level of security and archiving they need, whether they have the right internal skills, and which cloud providers and toolsets are best.
There are also a number of business issues to consider, such as the changing nature of contracts and new considerations for budgeting and financial planning. For instance, transitioning from large upfront capital purchases, such as data center investments and software agreements, to monthly service fees, can help lower costs related to the management and maintenance of technology, much of which is underutilized. There’s also no need to deploy capital on unutilized resources – all positive changes. Yet pay-as-you-go pricing also brings a change in how IT is purchased: The CFO will need processes for monitoring usage and spending, to prevent so-called “cloud sprawl.” Here’s our take on the top considerations beyond technology planning for making a smooth move to the cloud.
Working with Existing IT Outsourcers
A recent survey by Gartner noted that approximately 70 percent of CIOs surveyed will be changing IT vendor and sourcing relationships over the next two years. The primary reason for this shift is that most traditional IT service providers aren’t delivering public cloud-related services or products that are suited for the transition to a digital business. Dissolving or changing relationships with longtime IT partners will take some thought around how to structure the right business terms. For instance, when renewing contracts with current vendors, companies may seek to add a clause allowing them to bifurcate between current services (hardware/colocation) and emerging services such as cloud. This will allow the right provider with the right skill sets to manage cloud workloads. If your company is within an existing contract that’s not up for renewal soon, look for a legal out, such as “default” or “negligent” clauses in the contract, which would also allow you to hire a firm with the appropriate skill set and expertise.
Limits of Liability
This contractual provision assures the customer that the vendor will protect it if something goes wrong. Limits of liability are typically calculated based on the number of staff assigned to an account and/or the size of the technology investment. For instance, when a company purchased a data center or entered into a colocation agreement, it required a large CAPEX investment and a large ongoing OPEX cost. For these reasons, the limits of liability would be set at a factor above this investment and the ongoing maintenance costs. With the cloud, you only pay for what you use, which is significantly less but grows over time. Companies can manage this risk by negotiating escalating limits of liability that are pegged to the level of usage. As your cloud usage grows, so does your protection.
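As a worked example of such an escalating cap, with purely illustrative numbers:

```python
# Hypothetical escalating limit of liability pegged to usage: the cap is a
# multiple of trailing annual cloud spend, with a floor so that early,
# low-usage periods are still covered. All figures are illustrative only.
def liability_cap(trailing_annual_spend, multiple=2.0, floor=250000.0):
    return max(multiple * trailing_annual_spend, floor)

for spend in (50000, 400000, 2000000):
    print("annual spend $%d -> liability cap $%d" % (spend, liability_cap(spend)))
```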
Financial Oversight
As mentioned earlier, one advantage of on-premises infrastructure is that the costs are largely stable and predictable. The cloud, which gives companies far more agility to provision IT resources in minutes with a credit card, can run up the bill quickly without somebody keeping a close watch on all the self-service users in your organization. It’s more difficult to predict costs and usage in the cloud, given frequent changes in pricing along with shifts in business strategy that depend upon easy access to cloud infrastructure. Monitoring systems that track activity and usage in real time, across cloud and internal or hosted environments, are critical in this regard. Additionally, tools that allow IT and finance to map cloud spending to business units and projects will help analyze spend, measure business return, and assist with budget planning for the next quarter or year. Cloud expense management tools should integrate with other IT cost management and asset management tools to deliver a quick view of IT investments at any moment. Another way to control spend is to work with a reseller. An authorized reseller will be able to eliminate credit card usage, providing your company with invoicing and billing services, the ability to track spend, and flexible payment terms. This approach can save companies time and money when moving to the cloud.
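A minimal sketch of that kind of tag-based spend mapping, assuming a hypothetical billing export with a business_unit tag on each record:

```python
from collections import defaultdict

# Sample billing records; the field names and values are invented for the example.
billing_records = [
    {"resource": "i-0a1", "tags": {"business_unit": "marketing"}, "cost": 412.50},
    {"resource": "i-0b2", "tags": {"business_unit": "analytics"}, "cost": 1290.00},
    {"resource": "vol-9z", "tags": {}, "cost": 87.30},  # untagged spend to chase down
]

spend_by_unit = defaultdict(float)
for record in billing_records:
    unit = record["tags"].get("business_unit", "untagged")
    spend_by_unit[unit] += record["cost"]

for unit, total in sorted(spend_by_unit.items()):
    print("{:<12} ${:,.2f}".format(unit, total))
```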
Service Catalogue
One way to maintain control while still being agile is to implement a service catalogue, allowing a company’s security and network teams to sign off on a design that can be leveraged across the organization multiple times with the same consistency. Service catalogues control which users have access to which applications or services to enable compliance with business policies, while giving employees an easy way to browse available resources. For instance, IT can create an SAP reference implementation for a test environment. Once this is created, signed off by all groups, and stored in your service catalogue, it can be leveraged the same way, every time and by all approved users. This provides financial control and governance over how the cloud is being deployed in your organization. It can also move your timelines up considerably, saving weeks or months from creation to deployment.
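A toy sketch of what a catalogue entry and its access check might look like; the offering name, template, and groups are invented for illustration:

```python
# A pre-approved template plus the groups allowed to launch it.
CATALOGUE = {
    "sap-test-environment": {
        "template": "sap-reference-implementation-v3",  # signed off by security/network
        "allowed_groups": {"erp-dev", "qa"},
        "instance_size": "m4.xlarge",
    },
}

def can_launch(user_groups, offering):
    entry = CATALOGUE.get(offering)
    return bool(entry) and bool(entry["allowed_groups"] & set(user_groups))

print(can_launch({"erp-dev"}, "sap-test-environment"))   # True
print(can_launch({"finance"}, "sap-test-environment"))   # False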
Staffing/Organizational Changes
Any change in technology requires a shift in staffing and organizational structure. With the cloud, this involves both skills and perspective. Current technologists regarded as subject matter experts, such as SAN engineers, will need to understand business drivers, adopt strategic thinking, and focus on business-centered innovation. The cloud brings tools and services that change the paradigm on where and how time is spent. Instead of spending 40 percent of their time planning the next rack of hardware to install, IT professionals should focus their energy on responding to business needs and providing valuable solutions that were previously cost-prohibitive, such as spinning up a data warehouse for less than $1,000 per year.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:00p |
SAP Takes Own Hana-Powered IT Operations Analytics Tool to Market SAP has developed an IT operations analytics product that was born as a solution to the company’s internal data center management challenges. The tool provides a holistic view of diverse data streams and runs on SAP Hana, the company’s in-memory database technology, for turbo-charged performance.
Angela Harvey, SAP senior director of Advanced Analytics, said the company sees about 0.5 billion events a day in the data center and was using anywhere between seven and 20 different monitoring tools to get a grip on everything. While the team had good experiences with these tools, it recognized that the approach often resulted in different groups inadvertently solving the same problems in parallel, among other inefficiencies.
The result was the creation of a single tool that also hooks into existing tools – an IT operations analytics hub, so to speak.
“We had tools we liked and didn’t want to get rid of them, to throw out the baby with the bathwater,” said Harvey. “Our goal was providing the holistic view and tying in all the diverse tools. The other value that it brings is it allows someone at an executive level to get an understanding or an insight easily.”
The tool shows what’s occurring across the data center. It has real-time streaming of log data and the ability to combine data sources from all of the other tools into a single view, which makes analysis easier.
SAP is also flipping the traditional ITOA billing model on its head: most ITOA providers charge based on the amount of data ingested.
“We were going to do the same thing but found people hate that,” said Harvey. “Instead we charge based on a Hana gigabyte — how much data is in the system.”
That pricing model serves both SAP and the customer. By not charging based on ingestion, SAP makes the tool more likely to be adopted by all the groups in IT; ingestion-based pricing is cost-prohibitive and discourages widespread adoption. SAP’s interest rests in promoting Hana usage, and the tool is priced accordingly, but Hana also supercharges its performance.
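A hypothetical back-of-the-envelope comparison of the two models, with made-up rates and volumes, shows why they diverge:

```python
# All rates and volumes below are invented purely to illustrate the difference
# between charging on ingestion and charging on data retained in Hana.
ingested_gb_per_month = 3000      # raw log/event volume sent in
retained_gb = 400                 # what actually stays resident in Hana
price_per_gb_ingested = 2.00
price_per_gb_retained = 10.00

ingestion_bill = ingested_gb_per_month * price_per_gb_ingested
retention_bill = retained_gb * price_per_gb_retained
print("ingestion-based: $%.0f / month" % ingestion_bill)
print("retention-based: $%.0f / month" % retention_bill)
```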
The result is an agnostic tool. “The other tools on the market are proprietary,” said Harvey. “We wanted to leverage the skillsets already within IT and not to compete on price.”
SAP is currently working on building out the predictive piece of ITOA, said John Schitka, solutions marketing manager at SAP. “The data center of the future is why we decided to put predictive analytics in it,” he said. “Everyone’s talking about going to the cloud. The cloud means while you don’t have a physical presence, somewhere there’s a physical presence, from scattered, small data centers to larger data centers. There’s a need for a holistic view, especially if you have SLAs.”
SAP said it was seeing appetite for ITOA in the OEM space as well. A large share of its OEMs host data centers on behalf of smaller customers, and quite a few have approached SAP about this, said Harvey.
The tool is currently in ramp-up, and the company is looking for five customers to pilot it. Two have signed up so far.
5:38p |
Google to Help Bring Linux Containers (and Cash) to OpenStack Google has become an official sponsor of the governance organization for OpenStack, the family of open-source cloud-infrastructure software projects.
Google engineers will make contributions primarily around integrating OpenStack with Linux containers, a way to package and deploy applications that’s quickly becoming popular thanks to a startup named Docker, which has created developer tools that make it easy to launch containers and has become very popular with developers and DevOps professionals. Google has used Linux containers for years, and the expectation is that OpenStack container efforts will benefit greatly from its expertise.
“Few companies understand cloud-native apps at scale like Google, so I expect big things as Google developers contribute to OpenStack projects like Magnum,” Mark Collier, chief operating officer of the OpenStack Foundation, wrote in a blog post.
Magnum is a project within the OpenStack family that focuses specifically on enabling OpenStack to spin up and manage Docker or other types of containers. Magnum already integrates with Kubernetes, an open source version of the container-orchestration engine Google built for its own use.
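As a heavily hedged illustration only: the Magnum tooling of this era used "baymodel" and "bay" terminology, and spinning up a small Kubernetes cluster looked roughly like the sketch below. The names are placeholders, and required flags for the image, keypair, and external network are omitted; consult the Magnum documentation for the authoritative invocation.

```python
import subprocess

# Rough sketch of driving the magnum CLI of this era; flags beyond those shown
# (image, keypair, external network) are required in practice but omitted here.
subprocess.check_call(["magnum", "baymodel-create",
                       "--name", "k8s-model", "--coe", "kubernetes"])
subprocess.check_call(["magnum", "bay-create",
                       "--name", "k8s-bay",
                       "--baymodel", "k8s-model",
                       "--node-count", "2"])
```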
Hybrid infrastructure deployments that mix private cloud infrastructure and public cloud services are on the rise in enterprise data centers, and OpenStack is emerging as “a standard for the on-premises component of these deployments,” Craig McLuckie, a product manager at Google, wrote in a blog post Thursday.
A recent survey commissioned by Canonical, the company behind the popular Linux distribution Ubuntu that has also made OpenStack an important part of its strategy, found that about half of private clouds deployed in enterprise data centers were powered by OpenStack.
One example is PayPal, which recently replaced a VMware-powered private cloud in a data center with an OpenStack cloud.
The intersection of hybrid-cloud and Linux container trends “is important to businesses everywhere,” McLuckie said.
Google hopes to give enterprise developers “container-native patterns” by joining with OpenStack. The main goal is to improve interoperability between private and public clouds.
Google will be an important ally in the quest to make OpenStack the centralized hub for managing VM-based cloud infrastructure and Linux containers. The open-source cloud software has gained in popularity, but there is still a lot of skepticism about it in the industry, and Google, which has written the book on operating data center infrastructure at massive scale, should add a lot of credibility to the project.
6:32p |
Enterprise Cloud Provider IIJ Europe Taps iomart for Backup Technology Services
This article originally appeared at The WHIR
The European arm of Internet Initiative Japan (IIJ) is making a bid for dominance in enterprise cloud services in Europe and Africa, and to do so it is partnering with iomart to provide services from its Backup Technology division, according to a Wednesday announcement. Cloud backup and disaster recovery from Backup Technology will be incorporated into IIJ Europe’s solutions for larger organizations in the EMEA region.
Backup Technology immediately begins providing IIJ with strategic licensing, sales, and support for its Asigra-based private and public cloud solutions. IIJ Europe was launched by IIJ in 2013 from London head offices and has been ramping up its pitch to enterprises under its principles for cloud success: connectivity, availability, and partnership, according to its blog.
IIJ and IIJ Europe’s flagship product is IIJ GIO, which it says is a comprehensive, modular offering, based on an established platform, without traditional private cloud lock-in or public cloud instability. IIJ GIO featuring Backup Technology will be offered in EMEA through channel partners.
“Adding (Backup Technology’s) proven and robust Asigra-based solution to our competitively priced cloud IIJ GIO provides significant added value to our offering,” said Manabu Yamamoto, Managing Director of IIJ Europe. “Furthermore, this partnership will help us fully satisfy our customers’ service level requirements for mission-critical systems. These added attributes make IIJ GIO even more compelling to customers by catering for their complex needs.”
IIJ has gone from domestic success to expanding geographically, such as to Indonesia in a partnership with Biznet Networks in November.
Backup Technology was acquired by iomart in the fall of 2013 for $37 million. In June, iomart acquired cloud consultancy SystemsUp to boost the public cloud side of its business.
This first ran at http://www.thewhir.com/web-hosting-news/enterprise-cloud-provider-iij-europe-taps-iomart-for-backup-technology-services
8:50p |
Delphix Nets $75M in Funding to Scale Data-as-a-Service Business Data-as-a-Service provider Delphix announced it has raised a $75-million funding round to scale sales, marketing, and operations across global geographies. Delphix said the investment will also allow it to more aggressively invest in cloud, analytics, and data security technologies to further build out its platform.
Fidelity Management and Research Company led the funding round with participation by additional investors Credit Suisse Next and the Kraft Group, and existing investors, including Greylock Partners, Lightspeed Venture Partners, and Icon Ventures.
Data-as-a-Service is a cloud-based model for delivering a variety of data types for consumption by users’ applications.
The Menlo Park, California-based startup has raised almost $120 million since it came out of stealth seven years ago. Delphix has since won numerous industry awards, landed large enterprise accounts, and formed key strategic partnerships with companies such as Amazon Web Services, VMware, SAP, and Dell.
Recently, Gartner named Delphix as a leader in its 2015 Structured Data Archiving and Application Retirement magic quadrant. In May the company purchased Boston-based Axis Technology Software to add integrated masking and enhanced security features to its product.
Delphix CFO Stewart Grierson noted that the company has “grown revenue at a 220 percent CAGR over the last five years while primarily funding operations from cash flow. This investment will give us the flexibility to invest for further scale and cement our leadership position in the DaaS market.”
Last year data virtualization competitor Actifio secured a $100 million funding round to accelerate its product offering. More recently, converged data management system provider Rubrik raised $41 million.
9:22p |
BitTitan Cooks Up Project Cost Estimate Tool for Cloud Providers It’s been a busy week in Orlando, Florida, where the Microsoft Worldwide Partner Conference is taking place. Our sister site Talkin’ Cloud reported on several announcements made at the show by BitTitan, including a new free tool that will help partners and cloud service providers estimate the scope and cost of cloud projects.
Called the Estimator, the tool asks a series of questions about a cloud project, provides details on technical parameters and estimated pricing, and surfaces cross-sell and up-sell opportunities by instantly detecting additional service recommendations based on a customer’s needs.
Along with the free tool, BitTitan unveiled Health Check for DeploymentPro, a cloud-based solution for automated desktop management. The solution helps service providers detect what technology enhancements must take place prior to making the move to Microsoft Office 365, including updates to software, browser, bandwidth, and operating system.
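These are not BitTitan's actual rules, but a pre-flight readiness check of that sort might, in sketch form, look like this (thresholds and field names are invented for the example):

```python
# Hypothetical minimum requirements for an Office 365 move.
MINIMUMS = {"os_build": 7601, "ie_version": 10, "bandwidth_mbps": 2.0}

def readiness_gaps(desktop):
    """Return a list of enhancements needed before migrating this desktop."""
    gaps = []
    if desktop["os_build"] < MINIMUMS["os_build"]:
        gaps.append("operating system needs a service pack or upgrade")
    if desktop["ie_version"] < MINIMUMS["ie_version"]:
        gaps.append("browser must be updated")
    if desktop["bandwidth_mbps"] < MINIMUMS["bandwidth_mbps"]:
        gaps.append("insufficient bandwidth for Office 365")
    return gaps

print(readiness_gaps({"os_build": 7600, "ie_version": 11, "bandwidth_mbps": 5.0}))
```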
To read more on BitTitan’s announcements, visit http://talkincloud.com/cloud-computing/bittitan-launches-new-incentives-tools-cloud-service-providers