Data Center Knowledge | News and analysis for the data center industry
Monday, June 30th, 2014
12:30p
Re-thinking Colocation in the Age of Cloud Computing
Keao Caindec is the Chief Marketing Officer for 365 Data Centers, where he is responsible for marketing, communications and product management.
Cloud computing has swept through the IT infrastructure industry and made it easier for companies to use servers, storage and software applications on demand and “as-a-service.”
But what about the data center colocation industry? It has changed very little over the last eight years and remains as inflexible as ever with multi-year term agreements and big minimum space requirements.
Businesses want more flexibility to run temporary workloads, build private clouds and have a less risky path to migrating applications to a hybrid public-private cloud model. What can be done to make colocation easier and more “cloud-like?”

The future of cloud computing
Gartner has predicted that “this is the end of the beginning phase of cloud computing.” While this may be true for large enterprises and startup tech companies, the vast majority of applications still run on IT infrastructure dedicated to a single company and are housed either on premises or within a third-party colocation facility. In 2014, public cloud IaaS will be just one-third of 1 percent of the overall worldwide IT spending, according to Gartner.
Businesses, especially small and medium-sized businesses, will take small steps to migrate out of their own on-premises data center environments into more secure and reliable environments. The majority of businesses will run their applications across a hybrid cloud environment made up of dedicated infrastructure, private cloud, public cloud IaaS and Software-as-a-Service (SaaS).
To support a hybrid cloud environment, we need to re-think colocation. Colocation services need to be more flexible and cloud-like to better align with cloud consumption models.
Keeping up with the cloud
Here are five ways that colocation services need to change to keep up with the cloud.
- Commitment-free contracts: Contracting for colocation services typically requires a multi-year agreement and large minimum space requirements. In order to align with the on-demand nature of the cloud, colocation services agreements should have options for on-demand colocation with no minimum term commitments.
- Online ordering: Pricing, terms and conditions for colocation services should be available online. In order to support temporary workloads and to scale private cloud deployments, organizations should be able to order colocation services online and expect fast fulfillment (see the hypothetical ordering sketch after this list). Combining online ordering and commitment-free contracts will allow companies to realize the benefits of agility and scale in a secure, dedicated colocation environment.
- Direct connect: Colocation should have easy “direct connect” access to public cloud providers as well as local on-demand infrastructure. As much as possible, interconnection to cloud IaaS should be automated.
- Managed services: With cloud IaaS, you’re able to deploy virtual servers, configure virtual firewalls and use cloud-based storage through automated web-based interfaces and application programming interfaces (APIs). In order to replicate this, colocation providers need to offer more than just remote hands. They need to offer basic managed services such as firewall management, server management, backup and recovery services as well as other managed IT operations services for the dedicated infrastructure of each client.
- Customer service: Colocation customer service needs an overhaul. Today, most customers are treated like tenants, especially as colocation providers try to think more like real estate investment trusts (REITs). Cloud users want easy access to information and the ability to speak with a knowledgeable customer service manager or technical support person.
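To make the online-ordering idea concrete, here is a minimal sketch of what a cloud-like colocation ordering API could look like. Everything in it is hypothetical: the provider host, endpoint paths, field names and authentication scheme are invented for illustration, and the only external dependency assumed is the Python requests library.

```python
# Hypothetical illustration only: ordering colocation the way cloud resources
# are ordered today. No real provider API is referenced; host, paths and
# fields are invented.
import requests

API = "https://api.colo-provider.example/v1"  # fictional provider endpoint

order = {
    "facility": "NYC-1",
    "space": {"unit": "rack_units", "quantity": 10},
    "power_kw": 4,
    "term": "month_to_month",                      # no multi-year commitment
    "cross_connects": [{"target": "public_cloud_direct_connect"}],
    "managed_services": ["remote_hands", "firewall_management"],
}

response = requests.post(API + "/orders", json=order, auth=("tenant-id", "api-key"))
response.raise_for_status()
print("Order accepted, provisioning ETA:", response.json().get("eta"))
```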
If colocation services are altered in these five ways, the gap between the way businesses consume infrastructure and the way they have been buying colocation will quickly begin to shrink. By changing colocation as we know it, businesses will realize true added value, especially as they seek to meet regulatory compliance requirements and/or create private or hybrid cloud environments.
Changing an entire industry’s way of doing business isn’t easy. However, for businesses to realize the value of colocation, change has to be a priority.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

12:30p
Russian Government Bets on ARM to Replace Intel and AMD in its Data Centers
The government of Russia is investing in the development of processors based on the ARM architecture, with the goal of replacing chips made by American companies Intel and AMD in government IT systems.
Chips built using technology licensed from UK’s ARM Holdings power most of the world’s smartphones, and a number of companies are also building ARM processors for servers.
By licensing its architecture, ARM has lowered the extremely high barrier to entry into the processor market for newcomer companies, and now apparently also for governments that want to reduce their reliance on American technology.
The chip, codenamed “Baikal,” will be made by a subsidiary of the Russian supercomputer maker T-Platforms using 28-nanometer process technology, Russian newspaper Kommersant (in Russian) reported, citing documents of the Russian Ministry of Industry and Trade (Minpromtorg).
ARM-based switches, routers, servers planned
The development project is supported by Rostec, a non-profit government organization pushing development of the country’s high-tech industry. One of the investors is Rusnano, a government-owned company tasked with commercializing nanotechnology.
Baikal will be based on the 64-bit Cortex-A57 core, licensed from UK’s ARM Holdings. Minpromtorg plans to have an eight-core chip ready by the beginning of 2015 and a 16-core product, using 16nm process technology, by the end of 2016.
T-Nano, the T-Platforms subsidiary developing the chip, has received 1.2 billion rubles (about $35.6 million) from Rusnano, according to Kommersant, although it was not disclosed what portion of that sum will be used for Baikal. The chip is named after Lake Baikal in Siberia, the world’s deepest lake and the largest freshwater lake by volume.
A Rostec subsidiary is planning to build Baikal-based network switches, routers and computers. Other Russian companies, including Depo Computers, Kraftway and Aquarius, have reportedly expressed interest in the upcoming chip.
Baikal will be supplied to government departments and agencies and government-owned companies. Citing Minpromtorg, Kommersant said the size of the Russian government hardware market was $3.5 billion.
T-Platforms blacklisted (briefly) in U.S.
Last year the U.S. government added T-Platforms to the list of entities “acting contrary to the national security or foreign policy interests of the United States.” Companies on this list cannot buy American-made components or any products made using American technology.
Later that year, however, the company was taken off the list. Government records indicate that T-Platforms’ name was removed following the company’s request to do so and review of the information it provided.
The firm ended up on the list because the U.S. Department of Commerce suspected it took part in development of computer systems for military end-users and production of computers for nuclear research, among other reasons.
The sanctions did a lot of damage to T-Platforms. “The company was forced to suspend purchases of components, materials and semiconductors,” it said in a statement. “Sanctions greatly affected production and sales numbers, and the pace of new system development.”
Huge process-tech leap for Russia
The 28nm process technology T-Nano is planning to use in building the first-gen Baikal will be the most advanced process technology in the country. Processors currently made by a subsidiary of the Russian telco Sistema are built using 90nm process technology.
An anonymous industry source told Kommersant that 28nm technology, used by the likes of Qualcomm, Samsung and Altera to build smartphone chips, does not exist in Russia at the moment.

1:00p
Microsoft Apologizes, Details Online Services Outage
Microsoft issued an apology regarding the service outage that left many without email access for most of last Tuesday.
The problem was with the Lync Online and Exchange Online services, with the brunt of the outage coming on Tuesday. Rajesh Jha, corporate vice president of Office 365 Engineering, apologized on behalf of the Office 365 team and detailed the two issues that led to the outage. A post-incident report will also be issued, further analyzing what happened, how the team responded and what it will do to prevent similar issues in the future.
It’s a good step towards restoring faith. While things can be frantic during an outage, a service provider needs to keep customers actively informed while it happens. Adding to the problem, the Service Health Dashboard (SHD) also experienced problems, which meant not all impacted customers were notified in a timely way. Jha said the issue with the SHD has been addressed.
To prevent such problems, service providers often host status dashboards on infrastructure that’s separate from their main services (for Salesforce.com, for example, it was a lesson learned early on).
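One common way to implement that separation is to run a simple external probe on infrastructure independent of the primary platform and publish its results as a static file that cheap, separate hosting can serve. The sketch below illustrates the pattern in Python using only the standard library; the health-check URL is a placeholder, and this is not a description of Microsoft’s or Salesforce.com’s actual dashboards.

```python
# Out-of-band status probe: runs outside the primary platform, checks the
# service from the outside, and writes a static status.json that a separately
# hosted status page can serve even when the main platform is down.
# The endpoint below is a placeholder, not a real service URL.
import json
import time
import urllib.request

SERVICE_URL = "https://mail.example.com/healthcheck"  # placeholder

def probe(url, timeout=5.0):
    started = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            healthy = 200 <= resp.status < 300
    except OSError:  # covers URLError, HTTPError and socket timeouts
        healthy = False
    return {
        "service": "mail",
        "status": "up" if healthy else "degraded",
        "latency_ms": int((time.time() - started) * 1000),
        "checked_at": int(time.time()),
    }

if __name__ == "__main__":
    with open("status.json", "w") as f:
        json.dump(probe(SERVICE_URL), f, indent=2)
```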
Lync Online (instant messaging and voice) saw brief loss of client connectivity in North American data centers due to external network failures. Connectivity was restored, but the ensuing traffic spike caused several network elements to get overloaded.
The Exchange Online issue was triggered by an intermittent failure in a directory role that caused a directory partition to stop responding to authentication requests, which caused “a small set” of customers to lose email access. That failure then exposed a previously unknown code flaw in the broader mail delivery system, and this is when the outage went wide.
Microsoft fought it by partitioning the mail delivery system away from the failed directory partition, then attacked the root cause. Jha says the team is working on further layers of hardening for this pattern.
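The fix Jha describes, walling off a failing dependency so its errors stop cascading into the broader pipeline, is the same idea behind the circuit-breaker pattern. The sketch below is a generic illustration of that pattern, not Microsoft’s implementation; the class name, thresholds and fallback behavior are all illustrative.

```python
# Generic circuit-breaker sketch: after repeated failures the breaker "opens"
# and callers skip the flaky dependency (using a fallback) until a cool-down
# period passes, so one bad partition cannot drag down the whole pipeline.
import time

class CircuitBreaker:
    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def call(self, func, *args, fallback=None, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                return fallback  # still open: bypass the failing dependency
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = func(*args, **kwargs)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # trip: isolate the dependency
            return fallback

# Usage sketch: wrap directory lookups for one partition so authentication
# failures there degrade gracefully instead of stalling mail delivery.
# breaker = CircuitBreaker()
# user = breaker.call(lookup_user, "partition-07", "alice", fallback=None)
```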
“While we have fixed the root causes of the issues, we will learn from this experience and continue improving our proactive monitoring, prevention, recovery and defense in depth systems,” wrote Jha. “I appreciate the trust you have placed in our service. My team and I are committed to continuously earning and maintaining your trust every day. Once again, I apologize for the recent service issues.”
Read Jha’s post in full here.

5:00p
IBM SoftLayer London Data Center Close to Launch
In January, IBM SoftLayer, Big Blue’s cloud services division, laid out a plan to open 15 new data centers as part of a $1.2 billion global investment to strengthen its cloud play around the world. Today the company announced a big step in that plan: a data center in London set to launch in early July that will satisfy customers who care about keeping their data within UK borders.
The facility will have capacity for more than 15,000 physical servers and offer the full range of SoftLayer cloud infrastructure services. It complements an existing SoftLayer Amsterdam facility and a Point of Presence in London.
London is a key cloud market, home to the headquarters of one-third of the world’s largest companies and operations of the majority of the world’s largest financial institutions. There is also a thriving startup, incubator and tech entrepreneur scene.
SoftLayer became part of the IBM Cloud in July 2013. The provider’s infrastructure is now the backbone of IBM’s cloud portfolio.
The $1.2 billion investment will grow SoftLayer’s global cloud footprint to 40 data centers across five continents, doubling the cloud’s current capacity. As part of the $1.2 billion push, the company opened a facility in Hong Kong roughly the same size as the new London facility will be.
IBM SoftLayer is going after U.S. federal government business, pursuing FedRAMP certification for services provided out of facilities in Ashburn and Dallas.
SoftLayer CEO Lance Crosby notes that the company already had a large customer base in London and surrounding areas. “We’re excited to give those customers a full SoftLayer data center right in their backyard, with all the privacy, security and control the SoftLayer platform offers,” he said.
SoftLayer services include bare-metal cloud servers, virtual servers, storage and networking. The London data center will integrate seamlessly, via the company’s private network, with all of SoftLayer’s data centers and network PoPs around the world.
Traditionally, the company has focused on providing bare-metal servers, provisioned the same way virtual cloud servers are provisioned, and has been successful in this space. Competition in the bare-metal cloud space is heating up, however.
Earlier this month, SoftLayer’s historical rival Rackspace unveiled a new bare-metal cloud offering, claiming that it was faster and performed better than SoftLayer.

5:45p
Google Capital Leads $110M Round for Enterprise Hadoop Firm MapR
MapR Technologies, provider of one of the most popular distributions of Apache Hadoop, has completed $110 million in financing, including $80 million in equity led by Google Capital and a $30 million debt facility.
The company said it had tripled bookings in the first quarter of 2014, compared to 2013. The new funding will accelerate its worldwide go-to-market programs and deployment of its software as well as provide resources to continue engineering contributions to open source Hadoop software development.
By leading the round for MapR, Google indicated that its recent announcement of a data analytics system that replaced MapReduce in its data centers would not be a lethal blow to the enterprise Hadoop market.
MapR has customers in financial services, healthcare, media, retail, telecommunications and Web 2.0 verticals and is said to be preparing for an IPO.
Google Capital led the $80 million equity financing, which also included Qualcomm (through its venture investment group, Qualcomm Ventures) and existing investors, including Lightspeed Venture Partners, Mayfield Fund, NEA and Redpoint Ventures. The debt facility was led by Silicon Valley Bank.
No free lunch in enterprise Hadoop space
The Hadoop market is getting heated, with several competitors vying for the enterprise space. MapR’s main competitors include Hortonworks and Cloudera.
Cloudera is backed by Intel and has raised more than $900 million in funding. Hortonworks is a spin-off from Yahoo, which formed the stand-alone company together with Benchmark Capital in 2011. Hortonworks closed a $100 million venture round in March.
MapR’s distribution is arguably less open source than Cloudera’s and Hortonworks’, since it is based on proprietary file system and management technologies. The proprietary parts create some stickiness and help differentiate it through software focused on management and operations.
Cloud Dataflow won’t kill Hadoop
Google Capital invested in the company even as Google, the cloud service provider, has stopped using MapReduce, the technology at the core of Hadoop. Last week at Google I/O in San Francisco, the company announced Cloud Dataflow, a data analytics system it said could scale much better than MapReduce.
Hadoop is open source, however, while Cloud Dataflow is not; the only way a company can use it is to pay Google for the service. If an enterprise customer wants a distributed parallel-processing compute cluster of its own, going with one of the commercial Hadoop distributions, hardened with enterprise-friendly features, is still the best option, especially if it is not running queries against petabytes of data, the scale at which MapReduce starts to get sluggish, according to Google.
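For readers unfamiliar with the programming model under discussion, the canonical MapReduce example is word count. The sketch below is written in the Hadoop Streaming style, where the mapper and reducer are plain scripts that read stdin and emit tab-separated key/value pairs; the exact job-submission syntax and streaming jar location vary by Hadoop distribution, so treat the invocation in the comments as illustrative.

```python
#!/usr/bin/env python
# wordcount.py: canonical MapReduce word count in Hadoop Streaming style.
# Local dry run: cat input.txt | python wordcount.py map | sort \
#                | python wordcount.py reduce
# On a cluster it would be submitted through the Hadoop Streaming jar
# (path and options depend on the distribution).
import sys

def map_phase():
    # Emit "<word>\t1" for every word; the framework shuffles and sorts by key.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word.lower())

def reduce_phase():
    # Input arrives grouped by key; sum the counts for each word.
    current, count = None, 0
    for line in sys.stdin:
        word, _, value = line.rstrip("\n").partition("\t")
        if word != current and current is not None:
            print("%s\t%d" % (current, count))
            count = 0
        current = word
        count += int(value)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    map_phase() if sys.argv[1:] == ["map"] else reduce_phase()
```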
Where Cloud Dataflow may affect MapR is the customer base that is using the Hadoop distribution together with Google Compute Engine, the giant’s Infrastructure-as-a-Service offering. The two are deeply integrated, and customers can deploy MapR clusters in the Compute Engine cloud.
Judging by official statements, however, Google remains optimistic about MapR’s future.
“MapR helps companies around the world deploy Hadoop rapidly and reliably, generating significant business results,” said Gene Frantz, general partner at Google Capital who has also now joined MapR’s board of directors. “We led this round of funding because we believe MapR has a great solution for enterprise customers, and they’ve built a strong and growing business.”
Qualcomm pitches in for Internet of Things
John Schroeder, CEO and co-founder of MapR, said Google had a long-standing commitment to Hadoop. “This investment round recognizes our customers’ rapid adoption, their tremendous results and ROI, and also the capital efficiency of our business model,” he said.
“It’s extremely gratifying to bring these high-caliber strategic investors on board, including Qualcomm who is the leader in the mobile ecosystem and also at the forefront of the Internet of Things, to help us accelerate growth and position the company for global leadership. Our installed base of more than 500 paying licensees provides a strong foundation and we are excited to move forward with the tremendous resources from our new and current financial investors.”
Albert Wang, director at Qualcomm Ventures, indicated that the company’s investment in MapR was about the Internet of Things. “We invested in MapR because of the strength of its technology in leveraging the expanding Internet of Things and providing immediate business benefits.”

6:05p
Cray Joins OpenStack, Wins $54M Supercomputer Contract
Supercomputer manufacturer Cray has joined the OpenStack Foundation as a corporate sponsor. The company made the announcement at the International Supercomputing Conference (ISC14) last week in Leipzig, Germany, where it also announced a $54 million contract to provide the Korea Meteorological Administration (KMA) with two next-generation Cray XC supercomputers.
Prepping for OpenStack integration
As a corporate sponsor of OpenStack, Cray will promote the open source cloud operating system and, over time, provide customers with common APIs, common infrastructure and core frameworks. Sponsorship also opens the door to integration opportunities for Cray supercomputers among OpenStack suppliers and partners.
Joining OpenStack continues an open source theme for Cray at the ISC14 event, where it also announced a new data management and protection solution for Lustre (a popular open source file system for large-scale clusters, typically used in academic research) on Cray Tiered Adaptive Storage. Cray’s previous open source endeavors include participation in organizations such as OpenMP, OpenSFS and OpenACC.
“We want to ensure that Cray customers benefit from interoperability with OpenStack,” said Peg Williams, Cray’s senior vice president of high performance computing systems. “System management capabilities with OpenStack, for example, will help improve total cost of ownership, productivity and ease-of-use in sophisticated data center environments. We support the OpenStack Foundation because it enables our customers to take advantage of the economics and flexibility of open source by leveraging OpenStack capabilities.”
Supercomputer for forecasting and climate research
As a leading operational weather forecasting and climate research center, KMA will use the next-generation Cray XC supercomputers in conjunction with a Cray Sonexion storage system to provide more accurate weather forecasts through increased model resolution, new forecasting models, increased ensemble sizes and the implementation of advanced data assimilation. The storage solution will include 21.7 petabytes of capacity and 270 gigabytes-per-second performance.
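To put those storage figures in perspective, a quick back-of-the-envelope calculation (assuming decimal units, 1 PB = 1,000,000 GB, and that the quoted 270 GB/s is aggregate sequential throughput) shows how long a single pass over the full file system would take:

```python
# Rough estimate: time to stream the full Sonexion capacity once at the
# quoted aggregate throughput. Decimal units assumed (1 PB = 1e6 GB).
capacity_gb = 21.7e6      # 21.7 petabytes
throughput_gbps = 270.0   # gigabytes per second

seconds = capacity_gb / throughput_gbps
print("%.0f seconds, roughly %.1f hours" % (seconds, seconds / 3600))
# -> about 80,000 seconds, or roughly 22 hours
```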
“Weather forecasting and climate research play vital roles in societies across the globe, and we are honored that KMA, one of the premier centers in this important industry, has once again selected a Cray system to meet their demanding requirements for producing an extensive range of meteorological services,” said Andrew Wyatt, Cray vice president, APMEA.
The $54 million multi-year, multi-phase contract consists of products and services and is expected to be completed in 2015.

8:55p
CERN Contributes Identity Federation Code to OpenStack
CERN (European Organization for Nuclear Research) has contributed code to the latest OpenStack release, called Icehouse.
Written for federation of identities, it eases the process of managing multi-cloud environments. Inclusion of CERN’s federation code in Icehouse enables OpenStack service providers to consume the code and build federated services on the OpenStack platform.
Identity federation, which was developed by CERN openlab fellow Marek Denis and other members of the OpenStack community, means a private cloud user can manage a multi-cloud environment using only their private cloud sign-in credentials. It’s an important update to both Icehouse and CERN, as it means taking advantage of compute resources in many different centers using a single set of log-in credentials for hybrid cloud.
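To give a flavor of what a single set of log-in credentials means in practice, the sketch below follows the general shape of Keystone’s OS-FEDERATION flow: an unscoped token obtained through a federation-protected endpoint is exchanged for a project-scoped token on the target cloud. The paths follow the Keystone v3 federation API, but the host, identity provider name, protocol name and project ID are placeholders, and the SAML assertion exchange itself (normally handled by the SAML service provider fronting Keystone, e.g. via ECP) is omitted, so treat this as an illustration rather than a working client.

```python
# Sketch of the Keystone v3 federated-token flow (requests library assumed).
# Placeholders: keystone.example.org, "cern-idp", "saml2", PROJECT_ID.
import requests

KEYSTONE = "https://keystone.example.org:5000/v3"

# Step 1 (deployment-specific): hit the federation-protected auth endpoint.
# In a real deployment the SAML service provider in front of Keystone
# validates the user's home-organization assertion before Keystone returns
# an unscoped token in the X-Subject-Token header.
resp = requests.get(
    KEYSTONE
    + "/OS-FEDERATION/identity_providers/cern-idp/protocols/saml2/auth"
)
unscoped_token = resp.headers["X-Subject-Token"]

# Step 2: exchange the unscoped federated token for a project-scoped token,
# which can then be used against the target cloud's compute, image and other
# services.
scope_request = {
    "auth": {
        "identity": {"methods": ["token"], "token": {"id": unscoped_token}},
        "scope": {"project": {"id": "PROJECT_ID"}},
    }
}
scoped = requests.post(KEYSTONE + "/auth/tokens", json=scope_request)
print("Scoped token:", scoped.headers["X-Subject-Token"])
```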
CERN is a Rackspace cloud customer, relying on the company’s Open Hybrid Cloud to help it discover the origins of the universe. CERN has the largest research environment in the world, as it operates the Large Hadron Collider (LHC), which produces petabytes of data every day.
Rackspace and CERN openlab have been working together on a joint research and development project to federate OpenStack clouds and get them working better together. The CERN production cloud is now being used by 700 physicists for analyzing production data from the LHC recorded over the previous four years.
“People are getting resources in 15 minutes that used to take a week or months to be delivered,” said CERN IT infrastructure manager Tim Bell. “Federation for CERN is a critical requirement looking forward.”
The LHC is a 27-kilometer ring 100 meters underground on the Franco-Swiss border, used to collide beams of particles traveling just below the speed of light. CERN examines these collisions, which produce about one petabyte of data per second to analyze.
The project is trying to find differences between matter and antimatter and has contributed a great deal to the discovery of the Higgs boson.
The identity federation project was initially announced at the OpenStack summit in Hong Kong in November. Rackspace said it will continue to work with CERN openlab to further enhance federation capabilities.
The next steps will be working to enable security validation of the identity federation code with the help of graduate students from the University of Texas at San Antonio, who are conducting research around open cloud computing in academic environments. They will work on developing clients to leverage the federation code in Icehouse, which is based on the SAML identity standard.
Additionally, work is planned within the image management service, called Glance, to leverage federation so that images built in one OpenStack cloud can be imported into other clouds. This planned enhancement would enable a user of the CERN OpenStack cloud to build an image on CERN’s private cloud and import it into the Rackspace public cloud using only their CERN credentials (Rackspace will already know their identity thanks to the federation capabilities built into OpenStack).
Watch an interview with Tim Bell about the joint project, see how CERN is benefiting from OpenStack and read more about it on the Rackspace blog.

9:00p
With SmartDataCenter Joyent Goes After OpenStack’s Market Share
Joyent is releasing its time-tested private and hybrid cloud management platform SmartDataCenter 7, which the company believes has a big edge over OpenStack as a way to build clouds for the enterprise.
SmartDataCenter has been running at scale for years as the backbone of San Francisco-based Joyent’s cloud, and now others can use the same software to build their own cloud infrastructure. It is centered on lightweight, container-based virtualization for portability and is targeted at startups as well as enterprises.
The software works on any hardware. “Our important thesis is that private cloud should pretty much reflect public cloud economics,” said Joyent CTO Bryan Cantrill. “It must build on scale-out commodity hardware. There’s nothing necessary, other than customers can dial up the balance as appropriate.”
Evolving to be more than a service provider
Started as a hosting provider, Joyent has increasingly moved into software. It develops a lot of its hoodoo in house. The firm has open sourced a lot of its technology in the past, and here it continues to move beyond its service provider roots.
Joyent says it recognizes how brutal the public cloud market can be and sees a big opportunity in software. “I came to Joyent to affect that change,” said Cantrill. “Joyent wanted to change that service stack to a software stack. Turning that into a software product is not easy. We rewrote the stack from 2010 to now.”
Focused on private cloud market
SmartDataCenter was first brought to market, somewhat quietly, in 2012. The initial interest was from service providers and telcos wanting to stand up clouds to compete with Amazon. This was not the ultimate vision for Joyent. “These operators were in no position to go to war,” said Cantrill. “Amazon is an extremely voracious competitor. They put Walmart in a pinch. This is an aggressive competitor. These telecoms are monopolies and accustomed to not working very hard.”
The big opportunity is private cloud, which the market wasn’t quite ready for at the time of the initial release. “We pulled back and reemphasized the software. In the last two years, the world has caught up.” Joyent needed an entirely new go-to-market strategy. SmartDataCenter is targeted at everyone from established enterprises that have cloud built out and want to bring it on-premises to startups.
There are different divisions within an enterprise that want private cloud, says Cantrill. “A common trend we’re seeing is the mobile group in a company. Mobile groups have the budget, charter, and it’s all greenfield.”
The big competitor is OpenStack and private clouds built on it. Joyent says its edge is that it is the only one also competing in the public cloud arena. “It guides us in terms of the software we develop,” said Cantrill. “The challenge that OpenStack has is there ends up being a disconnect between those operating a public cloud and decisions made by OpenStack. We are our own biggest customer – we find issues and we know what needs to be done.”
Problems with OpenStack, according to Joyent
When it comes to OpenStack, Cantrill sees problems. “OpenStack has some very fundamental structural challenges,” he said. ”Anyone knows how hard it is to get one company to agree on something. Add multiple companies to the room and it becomes impossible. You have so many different ideas that without a dictator you are not going to get a consensus. With all those competitors and different models, a consortia goes one of two ways: towards the lowest common denominator — what they finally deliver can’t represent anyone’s fundamental innovation — or you end up with all that conflict being exported to the user of the system. OpenStack is achieving both of these outcomes.”
Cantrill says that this leaves the end provider or user left to sort out all of these civil wars. “What kind of storage should they use: Ceph, Swift? They’re not converging. The decision they make now will have ramifications down the line, so you have to research and nothing gets done. They shouldn’t be expected to make that decision.”
Cantrill says that those looking to stand up clouds don’t want to have to worry about it. “There’s a reason I buy a car and not a pallet full of parts,” said Cantrill. “There’s a lot of people that just want to drive a car, not engage in how a car should be built. There’s too many agendas in the room. There was a while where OpenStack had that halo. I’m a huge advocate of open source, but remember, there’s lots of stagnant open source out there. It’s beginning to dawn that there are some major structural headwinds. The only way it can reasonably happen is vendors are going to have to fork off their own effort – Red Hat OpenStack, Piston Cloud OpenStack – and you lose the advantage you once had by doing this.”
SmartDataCenter allows a customer to deploy cloud in hours. Joyent’s open ecosystem approach allows other complementary technologies to integrate and be used in both public and private cloud environments.
The complete cloud management platform also includes critical multi-tenant security components such as SmartLogin secure SSH key management, LDAP directory service, full identity and role based management, and built-in firewall capabilities.
C-level shake-up
Cantrill took the CTO role at Joyent only recently, replacing co-founder Jason Hoffman, who stepped down in September 2013. The company also recently appointed a new CEO, Scott Hammond, who comes from Cisco, where he led the cloud business unit.
Hammond was previously CEO of newScale (which Cisco bought in 2011) and Digital Market. His background is all about scale — growing startups into large companies — and that’s what he’ll be focused on at Joyent.
He replaced Henry Wasik, who joined as CEO in November 2012. Hoffman also used to be Joyent’s CEO, and Wasik took the job replacing the founder.