Data Center Knowledge | News and analysis for the data center industry
Monday, August 4th, 2014
1:58p |
API Compatibility War Validates Abstraction Approach to Cloud Computing
Pete Johnson is senior director of business development for CliQr Technologies. He is active on Twitter as @nerdguru.
As you may know, the U.S. Court of Appeals for the Federal Circuit ruled that Oracle can copyright the Java API and that Google violated that copyright by creating software that is API compatible. This has a huge impact across the entire software world, and in particular on cloud computing, where APIs are used to provision resources on demand.
This development underscores users' desire to avoid lock-in and to move applications between clouds more freely, which API compatibility makes easier. But it also highlights an alternative: an abstraction approach that frees cloud vendors to innovate beyond the borders imposed by API compatibility while still giving users the portability they seek.
API compatibility has been a hot topic in cloud computing for quite some time. In fact, last summer Cloudscaling CEO Randy Bias and Rackspace Evangelist Robert Scoble had a very heated and public debate around whether or not it was in OpenStack’s best interest to provide full AWS API compatibility.
On the flip side, IBM has invested in the Jumpgate project, making it much easier for non-OpenStack-based clouds to achieve OpenStack API compatibility. The Oracle vs. Google ruling puts those efforts into legal limbo, even as cloud users see API compatibility as a way to move workloads between cloud vendors more freely without having to change their often hand-crafted tooling.
Should cloud compatibility really happen at the API level?
The question this raises is where the enablement of cloud portability should really reside. Cloud customers today, especially enterprise IT departments wary of vendor lock-in, want the ability to run workloads on different clouds as market pricing and features change over time. Many argue that a multi-cloud strategy is best, given the degree of high availability and disaster recovery it makes possible. Different kinds of workloads also run differently on various public and private clouds, so assuming they will all run best on a single target is naive.
One way for end customers to achieve this portability would be to use a common API across all possible vendors. Even before this legal ruling, and whatever its ultimate outcome, a single industry-wide API is unlikely, because cloud vendors want to differentiate on features, and that differentiation gets diluted if features have to be dumbed down to preserve API compatibility. Fortunately, there is another approach that gives users the portability they want without restricting cloud vendors to a common API.
Validation for an abstraction approach
This legal issue is helping point out that the right way for customers to freely move workloads across different clouds isn’t with a common API, but with an abstraction layer that can interpret the strengths and weaknesses of each offering and provision resources that applications need to run optimally on each cloud accordingly.
Many Cloud Management Platforms (CMPs) support this notion as a way to achieve the same portability effect as a common API by acting as an intermediary but without restricting innovation for the cloud vendors.
The technique CMPs use typically involves creating a metadata description of the application's requirements for core primitives such as compute, storage, network and security, without defining cloud-specific infrastructure. This description usually takes the form of an application profile, template or blueprint. Once completed, whether by manual or automated means, the profile can be used to deploy the application natively onto any supported private or public cloud and to move it between them unaltered, keeping it portable and manageable.
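To make that concrete, here is a minimal sketch of what such a cloud-agnostic application profile could look like, expressed as a plain Python data structure. The field names, tiers and values are hypothetical illustrations, not the schema of any particular CMP product.

```python
# A hypothetical, cloud-agnostic application profile: it describes what the
# application needs in terms of core primitives (compute, storage, network,
# security), not how any particular cloud provides them.
application_profile = {
    "name": "order-processing-app",
    "tiers": [
        {
            "name": "web",
            "compute": {"vcpus": 2, "memory_gb": 4, "min_instances": 2},
            "network": {"public": True, "ports": [80, 443]},
        },
        {
            "name": "database",
            "compute": {"vcpus": 4, "memory_gb": 16, "min_instances": 1},
            "storage": {"size_gb": 500, "durable": True},
            "network": {"public": False, "ports": [5432]},
            "security": {"encrypt_at_rest": True},
        },
    ],
}
```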
In addition to the application profile, a key enabler of this abstracted approach is a cloud orchestrator. Often a single, multi-tenant virtual appliance that resides transparently on each supported private or public cloud, the orchestrator is responsible for matching the application's requirements, as captured in the application profile, with the best-practice infrastructure and services of that cloud, and for provisioning the resources needed to deploy the application.
In this model, the onus is on the CMP to constantly update its cloud orchestrators to take advantage of each cloud vendor's innovations, and the vendor no longer needs to worry about breaking API compatibility. Because of the abstraction, the user is insulated from such changes and simply inherits the benefits as the CMP writes them into the cloud orchestrator. Centralizing cloud-specific expertise at the CMP, shielding the end user through the application-profile abstraction, and freeing the cloud vendor from a specific API benefits everyone in the ecosystem.
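Continuing the hypothetical profile sketched above, the following sketch shows the kind of translation a cloud orchestrator performs: cloud-specific knowledge lives in a mapping maintained by the CMP, while the profile itself never changes. The catalog entries, instance types and function names are illustrative assumptions, not any vendor's actual catalog or product API.

```python
# Hypothetical per-cloud mappings maintained by the CMP. When a vendor adds a
# new instance type or storage tier, only this table changes, not the profile.
CLOUD_CATALOG = {
    "aws": {"small": "t2.medium", "large": "m3.xlarge", "durable_storage": "ebs"},
    "openstack": {"small": "m1.medium", "large": "m1.xlarge", "durable_storage": "cinder"},
}

def plan_deployment(profile, cloud):
    """Translate an abstract application profile into cloud-specific resources."""
    catalog = CLOUD_CATALOG[cloud]
    plan = []
    for tier in profile["tiers"]:
        # Pick a cloud-specific instance type based on the abstract compute needs.
        size = "large" if tier["compute"]["vcpus"] >= 4 else "small"
        resource = {
            "tier": tier["name"],
            "instance_type": catalog[size],
            "count": tier["compute"]["min_instances"],
        }
        # Map durable storage onto whatever block-storage service the cloud offers.
        if "storage" in tier and tier["storage"].get("durable"):
            resource["volume_service"] = catalog["durable_storage"]
        plan.append(resource)
    return plan

# The same profile (from the sketch above) yields a cloud-appropriate plan on each target.
print(plan_deployment(application_profile, "aws"))
print(plan_deployment(application_profile, "openstack"))
```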
Abstraction insulates from change
Regardless of how the legal issue plays out (many experts feel the ruling is likely to be overturned on appeal), the question it raises remains the same: where in the technology stack is the best place to ensure customers have choices? Even if they could legally do so, cloud vendors will never be 100 percent API compatible across platforms, because it would limit their ability to innovate. By abstracting functionality above the API in a way that makes the best use of features present in one vendor's cloud and absent in another's, customers get the portability they want while retaining the ability to take advantage of cloud-specific features as needed.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 4:05p |
Report: CIA’s Amazon Private Cloud Now Online
Amazon Web Services and the CIA have brought online the private cloud the provider has built for all 17 agencies of the intelligence community, Defense One reported, citing an anonymous source.
Amazon had to fight for the 10-year $600 million contract in court last year, after competitor IBM challenged the award. Government cloud is a big opportunity for technology providers, since all government agencies are under a White House directive to maximize use of cloud services.
The CIA contract covers a variety of Infrastructure-as-a-Service offerings, including compute, storage and analytics. Agencies will access services on demand and pay for what they use.
AWS has been getting most of the federal IaaS deals, Brian Burns, director at Agile Defense, which does system integration work and cloud consulting for federal agencies, told us in an earlier interview. The company owes this success to its status as “the Kleenex brand” for cloud, he explained.
Amazon has fitted out a data center on the West Coast specifically to serve government clients. The facility hosts an AWS region called GovCloud, which is isolated from non-government customers, satisfies numerous security standards and can only be accessed, physically and virtually, by U.S. persons.
This doesn’t mean U.S. agencies only use GovCloud. Many use the US-East region, according to the AWS website.
Investment in a separate data center for this subset of customers is justified by the size of the opportunity.
The government’s 2015 budget for cloud services is $3 billion, according to an Office of Management and Budget official’s presentation slides for a recent conference. About half of it will be spent on Software-as-a-Service; about 20 percent on Platform-as-a-Service; and the rest on IaaS.
While IBM is a sizable competitor, Burns said VMware was really the cloud provider AWS should be worrying about. VMware has not yet received FedRAMP certification (a government security standard for cloud services), but once it does, its vCloud Government Service will offer instant integration with customers’ existing in-house VMware environments.
VMware is a trusted vendor with government clients. That, combined with how easy it will reportedly be for agencies to stand up hybrid cloud environments with VMware, will make for a substantial challenge to Amazon in this space, Burns said. | 5:30p |
Docker Sells dotCloud PaaS Business to German Provider
The U.S. subsidiary of Berlin-based Platform-as-a-Service provider cloudControl has agreed to buy the dotCloud PaaS business from San Francisco-based Docker for an undisclosed sum.
dotCloud was the original name of the company that later became Docker. The company started as a PaaS provider and developed a precursor for its now famous application containerization technology for its PaaS product.
About one year ago, dotCloud open sourced the technology under the name “Docker” and launched a company under the same name. The sale of the dotCloud business means the company is now focused entirely on Docker.
Ben Golub, Docker CEO, said it was important to the company to give dotCloud customers a smooth transition as it got out of the PaaS business. “cloudControl has years of experience running a world-class PaaS, and distinguished themselves by the depth of their dedication to investing in the dotCloud platform and customer base,” he said in a statement.
Initially, cloudControl plans to continue running the dotCloud PaaS and serving its customers without changes. The company is going to develop a plan for an eventual migration to its own PaaS technology.
“Going forward we’re committed to first and foremost providing a seamless migration path and outstanding support to existing customers while also laying a great foundation for the future growth of the dotCloud brand,” cloudControl founder and CEO Philipp Strube said.
dotCloud launched in 2011 and was one of the first multi-language PaaS offerings. Its Docker project, however, has taken off within only one year of existence.
Docker is an enabling technology for PaaS. Baidu’s PaaS, for example, uses Docker, and so do open-source PaaS options OpenShift Origin and Cloud Foundry and their commercial implementations.
While application containers have been used by other companies, their use is typically restricted to a single environment.
Docker has built containers that are portable across different environments. They can run in different kinds of clouds, on bare-metal servers or on laptops, on top of various Linux distributions.
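As a rough illustration of that portability, the sketch below uses the Docker SDK for Python (not something the article itself mentions) to run a stock public image; the same call works unchanged on any host with a Docker daemon, whether a laptop, a bare-metal server or a cloud VM. The image and command are arbitrary examples.

```python
import docker  # pip install docker; assumes a local Docker daemon is running

# The same image runs identically wherever a Docker daemon is available:
# a developer laptop, a bare-metal server, or a VM in any cloud.
client = docker.from_env()
output = client.containers.run(
    "python:3-slim",                                   # public image pulled from a registry
    ["python", "-c", "print('hello from a container')"],
    remove=True,                                       # clean up the container when it exits
)
print(output.decode().strip())
```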
The company also has built a large ecosystem of vendors around the technology, making it easier for customers to adopt. Docker works with tools above it in the IT stack, including open source DevOps tools Puppet and Chef, and tools below it, such as OpenStack and every major proprietary cloud infrastructure provider.
Companies such as Google, Baidu, Yandex and Groupon have adopted Docker. Golub told us in a recent interview that almost every large bank and a number of large healthcare companies and enterprises have expressed interest in the technology.
“Within every large company, somebody is using Docker in some kind of a pilot project,” he said. | 7:00p |
Report: Microsoft Paid Six Times the Price of Iowa Land for Data Center Expansion
The $6 million price tag Microsoft recently agreed to pay for 100 acres of land in West Des Moines, Iowa, is about six times the property’s value as agricultural land, an Iowa real estate broker told The Des Moines Register.
Steve Bruere, president of the People’s Co., said soil quality on the land made it worth less than $10,000 an acre, had it been used as farm land. Microsoft is paying a premium because it is planning to build a data center on it.
“This property certainly commanded a significant premium because of its use as a data center,” he told The Register.
Considering that Microsoft is planning to spend up to $1.1 billion on the West Des Moines data center expansion, the cost of land is fairly insignificant.
The high markup on land for data center use is normal, and Microsoft actually got a good deal, Bruere said. According to the paper, several past land sales for data center construction in the state also involved higher-than-normal real-estate prices.
Facebook, for example, paid about $25,000 an acre for its nearly 200-acre data center site in Altoona. Microsoft paid about $130,500 an acre in 2008 for the West Des Moines property where it built its first data center in the area.
Yahoo recently bought about 18 acres of land for data center construction in Lockport, New York, at $15,000 an acre, The Buffalo News reported. | 8:00p |
Video: Docker CEO on Why Containers are the Future of Applications
Earlier today, Docker announced it had found a buyer for dotCloud, the Platform-as-a-Service business the company was founded on.
Docker has moved on to focus on the application-containerization technology it originally created to enable the PaaS. The company believes Docker containers and the ecosystem it has built around them are the future of applications.
Its CEO, Ben Golub, made the case at the inaugural Rackspace Solve conference in San Francisco in July. Here’s the video of his presentation:
| 8:30p |
Netmagic Building Reportedly Largest Mumbai Data Center
Netmagic, a subsidiary of the Japanese telecom giant NTT Communications, is building a massive data center in Mumbai, which it says will be the largest in the area.
Netmagic is one of the biggest data center service providers in India. While growth has slowed this year, India’s economy has generally done well in recent years compared to other countries. A strong economy usually means a thriving IT services market, which in turn benefits data center service providers like Netmagic.
NTT bought a 74-percent stake in Netmagic in 2012, when the Indian company was in expansion mode, its management looking at a potential IPO, domestically or on Nasdaq, or an acquisition by a larger player. Sharad Sanghi, founder, managing director and CEO of Netmagic, owns the rest of the company.
The future Mumbai data center, dubbed “Mumbai DC5,” will have 300,000 square feet of data center capacity. “This will be the largest data center in Mumbai,” Netmagic president and executive director Sunil Gupta said.
Currently, the largest data center in the market is a 200,000-square-foot facility by Reliance Communications, he said.
Building in the big city
Netmagic’s fifth Mumbai data center will be especially attractive because it is being built in Mumbai proper, as opposed to the suburb of Navi Mumbai, where most of the region’s data centers are concentrated. Customers want to be in the city, but real estate and power there are expensive (like in any of the world’s big cities), so data center providers prefer to build beyond city limits.
Besides being a convenient location, Mumbai is home to an international fiber landing station and has a reliable power infrastructure. Other Indian data center markets are notorious for having unreliable utility grids, forcing operators to rely on generators for prolonged periods of time.
The new facility will have access to 20 megawatts of power and enough room to support 2,700 racks. If Netmagic needs to upgrade power capacity, it has the option to add up to 8 megawatts.
This is the second time the company is building its own data center from scratch. The first one was its Bangalore DC2 facility. Its other facilities are leased and retrofitted.
NTT will fund the entire project, slated for completion in July 2015. The project is currently in the design phase, but ground has already been broken at the construction site, Gupta said.
Netmagic currently operates eight data centers in India, located in all four of the country’s major IT hubs: four in Mumbai, two in Bangalore, one in Delhi and one in Chennai. It has such a massive presence in Mumbai because that is where 40 to 50 percent of the country’s IT market is, and that is where the company is headquartered.
“Whatever you build in Mumbai gets consumed,” Gupta said. Netmagic opened its fourth Mumbai data center in mid-2011, and by the end of last year the facility was almost full.
E-commerce showdown great for data center providers
Mumbai DC4 filled up so quickly because two large customers came in last year and took almost all the remaining capacity. One of those customers is Flipkart, the Indian equivalent of Amazon, which raised a $1 billion funding round last week.
Amazon followed Flipkart’s funding announcement with an announcement of its own, saying it would invest $2 billion in growing its business in India. Amazon has had a business in India for only a year, and CEO Jeff Bezos said in a statement that the market’s response had surpassed the company’s expectations.
“We see huge potential in the Indian economy and for the growth of e-commerce in India,” Bezos said. “At current scale and growth rates, India is on track to be our fastest country ever to a billion dollars in gross sales. A big ‘thank you’ to our customers in India – we’ve never seen anything like this.”
Gupta declined to say whether Amazon was shopping for data center space in India, but did say that a large unnamed e-commerce company was currently shopping for a massive chunk of capacity in the market (up to 1,000 racks).
Both the online marketplace and Amazon Web Services, the company’s public cloud infrastructure arm, currently serve Indian customers out of data centers in Singapore. Services in what Bezos called the company’s fastest-growing market ever would benefit from the lower latency that infrastructure placed directly in the country would bring.
Flipkart, the company Amazon is spending $2 billion to compete with, is using 250 racks in Netmagic’s DC4 facility in Mumbai. It is one of the top e-commerce companies in India, many of which are the data center provider’s customers, Gupta said.
Full suite of data center services, and nothing else
Netmagic keeps a narrow focus on data center services. “We don’t do any other business but data centers,” Gupta said.
This means the company will provide services like colocation, bandwidth and remote hands, but will not do things like network or system integration. About 70 percent of its revenue comes from managed services.
In the data center, its services run the gamut from power and cooling to managed operating systems and security services. Its staff will manage any enterprise or e-commerce application.
These deep managed services capabilities are what enable Netmagic to compete in the Indian market against numerous deep-pocketed telcos, Gupta said. There is a lot of demand from customers that want to outsource everything and focus on their core applications.
The company also provides a variety of cloud infrastructure services. Its main competitors in this space are Amazon and IBM SoftLayer, Gupta said. |