Data Center Knowledge | News and analysis for the data center industry
Tuesday, September 17th, 2013
11:30a
Netcraft: Chinese Cloud Market Growing Fast

China is seeing solid growth in web-facing computers, up 8.3 percent over the last year, according to Netcraft. This notable growth helps explain reports of active data center construction for major web companies in China.
China's cloud infrastructure is still very much in its infancy, but the growth is starting to kick in. The majority of that growth has occurred within the cloud hosting market. However, global cloud workloads are unlikely to be outsourced to China, thanks to what remains a very closed market.
Those looking to establish cloud offerings in China still need to partner with a local provider to have any hope of achieving traction. Even Microsoft partnered with 21Vianet to launch its Windows Azure cloud platform in China.
Netcraft found particularly solid growth at Aliyun, a hosting provider that is part of the Alibaba group. Aliyun has six times more web-facing computers than a year ago, with a total of 17,934 in September. According to Netcraft, the Alibaba group is the largest hosting provider in China and among the top 30 worldwide, with Aliyun comprising 92 percent of its web-facing computers.
However, most of this growth is with hosted sites aimed at the Chinese market. More than half the websites at Aliyun use the .cn Top Level Domain (TLD), compared to 41 percent in the .com TLD. Its growth is purely in the local market. So while there is significant growth, it's a very insular type of growth. That makes sense, given the general desire to host content as close to end users as possible to increase performance.
There's also the Golden Shield Project, perhaps better known in the West as the "Great Firewall of China," which makes traffic crossing the border slow, unstable or even blocked.
This, among other reasons, hinders the ability and desire of non-Chinese companies to use cloud hosting in China. Traffic can be "patchy" from outside of China – Netcraft notes that packets sent to www.aliyun.com from the UK take almost half a second to make the round trip, and the United States is not much better.
Many hosting services are available only in the Chinese language, and some Chinese hosting companies only accept business from Chinese customers. Netcraft also points out that Aliyun's customers need a Chinese mobile phone number to receive a verification code to complete the signup process. On-demand instances aren't so on-demand at Aliyun – there's an identity verification process.
China has an increasing number of internet users (591 million at the end of June). The growth is strong but self-contained, which explains why there are massive data center projects there – the only way to truly serve the market effectively is from within the country's borders.

12:00p
Google Has Spent $21 Billion on Data Centers

Denise Harwood of Google's Technical Operations team works inside the company's data center in The Dalles, Oregon. (Photo by Connie Zhou for Google)
Google has invested more than $21 billion in its Internet infrastructure since the company began building its own custom data centers in 2006. The company’s spending has intensified in recent quarters as Google has launched a global expansion of its data center footprint, which has led to quarterly spending in excess of $1 billion. The company invested a record $1.6 billion in its data centers in the second quarter of 2013.
The analysis of Google's spending provides some context for the scale of the infrastructure being used to run the cloud computing platforms that power popular web apps and services. Microsoft recently disclosed that it has spent more than $15 billion on its data center infrastructure.
While results may fluctuate from quarter to quarter, the need to invest in infrastructure continues apace. The only time the company has spent more on capital expenditures was the fourth quarter of 2010, when it spent $2 billion on the purchase of 111 8th Avenue, primarily for its office space. Here's a chart of Google's capital spending by quarter:

Google's data center construction will likely continue at high levels in coming quarters. Since November, the company has announced a $200 million expansion in Council Bluffs, Iowa; a $600 million expansion at its campus in Berkeley County, South Carolina; another $600 million to expand its campus in Lenoir, North Carolina; and $390 million to add capacity to its facility in Belgium. Google is also expanding its infrastructure in South America and Asia.
A capital expenditure is an investment in a long-term asset, typically physical assets such as buildings or machinery. Google says the majority of its capital investments are for IT infrastructure, including data centers, servers, and networking equipment. In the past, the company's CapEx spending has closely tracked its data center construction projects, each of which requires between $200 million and $600 million in investment.

12:30p
Cloud Apps Are Getting Smarter

John Ball is CEO of KXEN, which performs predictive analytics. He brings 20 years of experience in enterprise software, deep expertise in business intelligence and CRM applications, and a proven track record of success driving rapid growth at highly innovative companies.
 JOHN BALL
KXEN
Cloud apps are hot. There may have been questions 10 years ago about whether businesses were ready to put their mission-critical business processes outside of their four walls, but today the cloud is mainstream. Whether it's CRM, ERP or HR, cloud apps are bringing huge value to companies of all sizes. So much so that Gartner estimates over 40 percent of CRM revenue was attributable to SaaS in 2012, and that this share will grow to over 50 percent by 2016. Not too shabby.
I was recently at a Salesforce customer event in San Francisco, and Marc Benioff had his usual lineup of leading companies like GE, Omnicom, and some newcomers like Trunk Club all running their customer-facing interactions on salesforce.com. You simply can’t help but get caught up in all the excitement.
Barriers Remain
But amidst all this momentum, some companies are still reluctant to move key business processes to the cloud. The old excuses of security, scalability, latency, and uptime have been resolved for the most part. So why is there still hesitation? The main reason is that these companies cannot give up critical predictive functionality that is baked into their core business processes, such as cross-sell, next best activity, or churn management. The revenue lost by giving up cross-sell capability would be staggering for a medium to large-scale contact center (often one processing hundreds of thousands or millions of calls a month).
From what I can tell, it’s the only remaining advantage that on-premise apps like Oracle, SAS, IBM Unica, SAP, and Pegasystems still have over their cloud peers.
Predictive Analytics Answers the Questions
Customer interaction processes such as cross-sell, churn management, and personalized recommendations are made smarter through predictive analytics. Predictive engines analyze the mountains of data collected from CRM systems, web, mobile traffic, and social media (called Big Data these days) and deliver answers to questions such as “which customer is likely to leave my service?” or “which product is the customer most likely to purchase?” The end result is that the interactions are more profitable for the company and more relevant for the customer.
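To make this concrete, here is a minimal sketch of the churn question in code. It is purely illustrative and not a description of KXEN's or any vendor's product; the file names, feature columns and model choice are all assumptions made for the example.

```python
# Minimal churn-scoring sketch (illustrative only; file and column names are hypothetical).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Historical CRM extract: one row per customer, with a known churn outcome.
history = pd.read_csv("crm_history.csv")
features = ["tenure_months", "monthly_spend", "support_tickets", "logins_last_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["churned"], test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Answer "which customer is likely to leave my service?" for current customers.
current = pd.read_csv("crm_current.csv")
current["churn_risk"] = model.predict_proba(current[features])[:, 1]
print(current.sort_values("churn_risk", ascending=False).head(10))
```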
So if predictive applications are so important, then what’s the hold up?
The reality is that delivering predictive applications isn't easy. Most existing predictive applications have been custom developed, involving teams of PhD data scientists and lots of IT-heavy on-premise plumbing. Delivering predictive applications for cloud apps introduces fundamentally new design requirements. Specifically, there are three major challenges that need to be addressed:
1. Automation: You simply can't have thousands of data scientists sitting in the cloud handcrafting predictive models for every marketing offer, retention program, website recommendation, etc. Traditional predictive modeling is a manual, iterative process that takes days or weeks to produce a single model. In a multi-tenant cloud such as salesforce.com, there are over 150,000 customers, each with different data and a different underlying data model, making any attempt at handcrafted predictive modeling for each customer conversation an impossible task (a rough sketch of what automated, per-tenant modeling might look like follows this list).
2. The Physics Problem: Most companies have distributed data. The data resides in multiple cloud systems (e.g. CRM in salesforce.com, ERP in Netsuite, web traffic in Adobe) and often there are still on-premise data sources (e.g. billing data). The laws of physics still preclude pumping terabytes of data back and forth, so predictive engines need the ability to analyze and score in a distributed environment.
3. Packaged Applications: Cloud customers typically don't know what data science is, so they need packaged applications with simple UIs. These businesses need applications that embed the same best practices for cross-sell, upsell and retention that have been proven in the on-premise world, because cloud customers don't have the time, money or skills to build them themselves.
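To illustrate the automation point above (challenge 1), here is a rough, hypothetical sketch of per-tenant model building with no handcrafting: the same code fits a model for each tenant regardless of that tenant's schema. The tenant names, file layout and pipeline are assumptions made for the example, not how any particular multi-tenant cloud actually does it.

```python
# Illustrative sketch: automated, per-tenant model building (tenant names and files are hypothetical).
import pandas as pd
from sklearn.compose import make_column_selector, make_column_transformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

def build_model_for_tenant(df: pd.DataFrame, target: str = "converted"):
    """Fit a model on whatever columns this tenant happens to have."""
    X, y = df.drop(columns=[target]), df[target]
    preprocess = make_column_transformer(
        (OneHotEncoder(handle_unknown="ignore"),
         make_column_selector(dtype_include=object)),  # categorical columns
        remainder="passthrough")                        # numeric columns pass through
    return make_pipeline(preprocess, RandomForestClassifier()).fit(X, y)

models = {}
for tenant_id in ["acme", "globex", "initech"]:           # hypothetical tenants
    data = pd.read_csv(f"{tenant_id}_interactions.csv")   # hypothetical per-tenant extracts
    models[tenant_id] = build_model_for_tenant(data)      # same code, different schemas
```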
There’s no doubt that the introduction of predictive applications into cloud systems can provide tremendous value. Some of the most successful businesses have relied on similar techniques on-premise to run their business. But for these companies to fully adopt cloud applications, they need the confidence that cloud vendors can provide these same capabilities.
There are many start-ups now focusing on big data and predictive analytics. Some are targeting the traditional analytics market dominated by SAS and IBM with modern approaches, while others are focusing on new distributed infrastructures such as Hadoop. It’s time to take the best of the various new approaches and make cloud systems smarter.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

1:00p
NYI Introduces Peak-Powered Enterprise Cloud

A secured cabinet inside the NYI data center in Manhattan. NYI is launching a cloud computing service powered by Peak. (Photo: NYI)
NYI wasn't in a hurry to be an early adopter of cloud computing technology. The New York-based provider has been offering high-performance, highly secure managed services since 1996. NYI has been closely watching the cloud market, looking for a high-performance, highly secure cloud offering to continue that tradition.
“We wanted to be able to offer cloud services in a way that stayed true to what we do and how we work with our customers,” said Phillip Koblence, the Chief Operating Officer of NYI. “We looked at a lot of cloud services, but what we saw was a lot of smoke and mirrors – a lot of fancy GUIs with not a lot of information about what sat behind it.”
NYI has found the answer in its partnership with Peak (formerly PeakColo). This week marks the launch of NYI Cloud, a new service offering VMware vCloud solutions running on enterprise-class hardware.
Focused on Hybrid
Koblence says NYI Cloud will focus on hybrid cloud solutions running on Peak’s infrastructure within NYI data centers in Lower Manhattan and Bridgewater, New Jersey. He says the hybrid approach is the right solution for NYI’s customer base, which includes many financial services and healthcare companies that place a premium on security and regulatory compliance.
“With this launch, we get beyond the hype to offer our customers a robust, safe and scalable solution that leverages the true value of the cloud within a larger infrastructure strategy,” said Koblence.
NYI Cloud provides a framework in which mission-critical workloads and data are deployed within a traditional IT infrastructure environment that meets compliance requirements, while less sensitive assets can be virtualized in an interconnected vCloud that maps existing IPs and VLANs directly into a high-density layer 2 network topology.
“NYI Cloud Powered by Peak is an enterprise cloud offering that optimizes all the benefits of the hybrid model,” said Luke Norris, CEO and founder of Peak.
Peak is a Boulder-based firm that focuses on white-label cloud solutions packaged for service providers and channel partners. A key differentiator for Peak is that it focuses on running the latest hardware to ensure strong performance, regularly updating server, storage and network infrastructure through close relationships with vendors like Brocade and NetApp.
“We’re able to constantly stay up on the refresh cycles,” said Norris. “We’ve been a leader on rolling out new enterprise infrastructure.”
Reputation for Service
That mattered to NYI, a colocation and managed hosting provider that has made regular appearances in Netcraft’s monthly rankings of the most reliable hosting providers. The company has built a track record of working closely with customers such as RollingStone.com, USMagazine, Meetup.com, Asia Society, Opera Software and countless financial services firms. NYI is also a strong supporter of open source communities.
“From a reputation perspective, NYI is known as a service-oriented data center,” said Norris. “Their customers love them for their service. NYI is a great partner for us because they work closely with channel partners.”
Koblence says Peak is ideal for NYI because it provides an accountable solution in the same data center as NYI’s customers, rather than a remote public cloud.
"We were not striving to be first-to-market," said Koblence. "Our goal is to continue the NYI tradition of excellence by being best-to-market."

1:30p
The Cloud Job Market Update: SDN and APIs Are Hot
We last explored the cloud job market about a year ago. As we know in IT, a year is pretty much an eternity. In the past year, much has changed in the world of the cloud. We began to see more technologies connecting various cloud points, new ways to optimize the platform, and entirely new delivery methods. Through 2013 we have crossed into the zettabyte era of the cloud and, according to Cisco, two-thirds of all traffic moving forward will be delivered via the cloud.
My last article around the cloud computing job arena discussed key considerations for the modern cloud engineer. These included:
- Understand Local Area Network architecture.
- Understand WAN sizing.
- Know how storage plays a role.
- Know and understand cloud computing.
- Virtualization is everywhere – know it.
- Know the applications and workloads.
- Study end-user needs and demands.
- Learn about security.
Although these areas are still important, cloud computing is quickly becoming a normal part of our daily compute process. In fact, trends show that "cloud architect" remains a hot, in-demand role. Another article from late last year clearly shows this trend and the demand for more cloud staff.
There are more services being delivered via the cloud, there are more users using the cloud, and organizations are leveraging more cloud delivery platforms for their users. The digitization of the modern data center has created the need for even more robust services and control methods.
And so, the cloud job market and the demands being placed around today’s cloud architect or engineer must evolve as well. Here’s what’s hot this year:
- Software-Defined Technologies (SDN). Although software-defined networking (SDN) is a huge part of the cloud computing conversation, it's the general idea of software-defined technologies that must be understood. Software-defined platforms aim to take cloud computing, and the process through which it communicates at the data center, user and cloud layers, to the next level. Basically, SDN and related technologies aim to abstract the network, compute, storage and even security components of a modern cloud infrastructure. We're seeing this trend continue to grow as some of the biggest cloud infrastructure shops adopt or invest in the technology. Take VMware, for example. In buying Nicira, a leading player in SDN, for $1.26 billion, VMware positioned itself to boost its place not only within data center networking, but also within the cloud SDN model. This network virtualization platform is a pretty hot technology; current users include AT&T, eBay, Rackspace and several others. Future cloud engineers and architects will need to understand how SDN and software-defined technologies play a role within the cloud model. They'll need to understand the optimizations, benefits and architectural aspects of SDN so that they can help their organization stay ahead of the competition.
- Cloud APIs. More cloud platforms are emerging, and new types of services are being tied to these models. As more organizations and users flock to the cloud, the modern cloud API structure becomes even more important. Beyond that, the actual "stack" platform which ties cloud models together becomes an integral piece as well. In two recent articles we discussed why APIs are so important and who the big leaders are in the industry. For the future cloud engineer or architect, understanding how stack platforms and cloud APIs tie into the overall cloud architecture is absolutely crucial. Platforms such as CloudStack are blazing the way for massive service providers to deliver powerful clouds, and a stack model built around the Eucalyptus cloud allows organizations to create a truly transparent, automated cloud solution. The point is that we're creating an environment which interconnects various cloud components. Whether the actual instance is located onsite or at an outside data center, there is a direct need for visibility and easy data transfer between multiple cloud locations. Cloud engineers and architects must understand how APIs function and how they can help connect the cloud into a logical, easy-to-control cluster (a minimal sketch of a cloud API call follows this list).
- New Cloud Delivery Models (Fog). One of my very recent articles welcomes you to the world of fog computing. It's the idea of delivering heavy content to the user very quickly by storing information at the edge. This approach can be applied to technologies like streaming, content delivery and even big data analytics. By creating powerful, distributed cloud nodes, administrators are able to control the delivery process of new types of services. Aside from fog, cloud computing continues to evolve in how the user consumes data and how this data will be delivered. Because of IT consumerization and the increase of data in the cloud, there is a direct need to optimize the delivery of information. Here's the important part: the trends around the user and cloud utilization are not going away. In fact, they're increasing. The cloud engineer or architect of tomorrow will have to understand content delivery and how to best optimize the user experience. In some cases, this may mean designing a complex edge delivery network. In other cases, new types of delivery models may need to be applied.
- From generalist to expert. Once a technology entrenches itself firmly within our field, we see more engineers and architects go from overall generalists to concept experts. One of the first things to understand is that there will always be a need for a solid overall understanding of how both the cloud and the underlying infrastructure operate. There will also be the requirement that cloud architects and engineers know how to best deploy a cloud model and what the user experience will be like. However, with the evolution of the cloud come some very specific services which now require experts. We are now seeing API, advanced networking, and even optimization experts emerge. Both vendors and organizations are looking for people who can help improve not only the overall cloud experience, but key cloud components as well. With that, future cloud professionals can remain true generalists while also developing specific cloud focus areas.
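As a deliberately generic illustration of the cloud API point above, the sketch below launches and checks a compute instance through a REST-style API. The endpoint, token and payload fields are hypothetical stand-ins rather than the actual CloudStack, Eucalyptus or OpenStack APIs; the point is simply that tomorrow's cloud engineer should be comfortable driving infrastructure programmatically rather than only through a GUI.

```python
# Generic sketch of driving a cloud through its API (endpoint, token and fields are hypothetical).
import requests

API = "https://cloud.example.com/v1"                 # hypothetical provider endpoint
HEADERS = {"Authorization": "Bearer EXAMPLE_TOKEN"}  # placeholder credential

def launch_instance(name: str, flavor: str, image: str) -> str:
    """Request a new virtual machine and return its ID."""
    resp = requests.post(f"{API}/instances", headers=HEADERS,
                         json={"name": name, "flavor": flavor, "image": image})
    resp.raise_for_status()
    return resp.json()["id"]

def get_status(instance_id: str) -> str:
    """Poll the provider for the instance's current state."""
    resp = requests.get(f"{API}/instances/{instance_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["status"]

if __name__ == "__main__":
    vm_id = launch_instance("web-01", "m1.small", "ubuntu-12.04")
    print(vm_id, get_status(vm_id))
```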
The use of consumer devices, the increase of data within the cloud and the way we, as users, consume cloud-based information continuously drive innovation in the cloud field. The modern data center has become home to pretty much all current technological platforms. We've got entire solutions and organizations being born directly within the cloud.
This sort of logical progression around technology will dictate how we shape the future of the cloud. However, a few things are certain – the concept of the cloud is not going anywhere, and the demands of the end-user will only increase. The future holds more options around resources, bandwidth and even better infrastructure – all key ingredients that will further help cloud computing develop.

2:00p
Video: Inefficient Data Center Design

Chris Crosby, CEO of Compass Data Centers, speaking at The Uptime Institute conference in May, challenges common assumptions about what green data center design and efficiency mean in the industry. In this 25-minute video, Crosby says it is a myth that scale is needed to drive out cost, that one must consider the use of natural resources and materials as well as economics, that cooling doesn't need to be adapted for different locations, and more. He calls out the "greenwashing" that happens in the industry – in effect, looking at one element and not others. For example, he notes it's not really green to clear-cut a forest to put up a solar array. This and more challenging commentary from this data center executive.
For additional video content, check out our DCK video archive and the Data Center Videos channel on YouTube.

2:25p
OpenStack Foundation Launches Training Marketplace

Brought to you by The WHIR.

The OpenStack Foundation has launched a new Training Marketplace, designed to make it easy to find training courses offered by OpenStack providers.
Aptia, hastexo, The Linux Foundation, Mirantis, MorphLabs, Piston, Rackspace, Red Hat, SUSE and SwiftStack are among the first companies to make their courses searchable through the marketplace.
The launch of the marketplace comes as demand for OpenStack-based job skills has been widely reported, and OpenStack providers have launched courses to keep up with demand for training. Recently, Red Hat launched extended training and a certificate for its OpenStack technology. Mirantis closed $10 million in funding earlier this year to drive its OpenStack consultation and training services.
“The goal of the Foundation is to eliminate barriers to OpenStack adoption, create more OpenStack experts and ensure that OpenStack has a positive impact on the careers of our community members,” Jonathan Bryce, executive director of the OpenStack Foundation said in a statement. “We want to grow the community, accelerate the availability of training programs worldwide and help close the OpenStack job gap.”
The marketplace will serve as a central portal for those looking for a list of available OpenStack training, and the Foundation has set out requirements that must be fulfilled by companies looking to offer courses in the Marketplace.
The Foundation says that the courses need to “provide a strong understanding of the OpenStack core projects based on a current version of the software, as well as cover community governance and contribution processes.”
Original article was published at: http://www.thewhir.com/web-hosting-news/openstack-foundation-launches-training-marketplace

2:30p
Cologix Acquires JAX Meet Me Room

Cologix has once again boosted its interconnection offerings, acquiring the JAX Meet Me Room (JMMR) located in the carrier hotel at 421 West Church Street in Jacksonville, Florida. The JMMR operates more than 9,000 square feet of space as well as the building's meet me room, which serves over 20 carriers and a range of financial, cloud and enterprise customers.
The acquisition gives Cologix a meet me room and data center in a key, emerging southeastern market.
“The submarine cables slated to land in Jacksonville create an opportunity for networks to deliver traffic directly to South America without the additional cost, latency and hurricane risk that comes with traversing through Miami,” said Grant van Rooyen, President and Chief Executive Officer of Cologix. “We are pleased to bring the JAX Meet Me Room business and customers into the Cologix platform. We have already initiated plans to make significant incremental investments in this facility to enhance infrastructure and expand capacity to support new networks interested in establishing a presence in Jacksonville.”
Cologix now operates 16 network-neutral data centers across Dallas, Jacksonville, Minneapolis, Montreal, Toronto and Vancouver, supporting over 600 customers and offering more than 330 network choices.
The carrier hotel at 421 W. Church Street is located in Jacksonville's central business district, adjacent to the Bellsouth Central Switch.

3:00p
Juniper Launches Controller For SDN

Juniper Networks has launched commercial and open source versions of its Contrail controller for software-defined networks, and has announced technology development partnerships and product integrations as well.
Juniper (JNPR) announced Contrail, a standards-based and highly scalable network virtualization and intelligence solution for software-defined networks (SDN). Contrail is based on proven networking standards and creates a virtual network that enables seamless integration between physical and virtual networks, providing service providers and enterprises with a solution that is simple, open and agile.
Formerly known as JunosV Contrail, the new solution contains an SDN controller, vRouter, and analytics engine. It integrates with a variety of hypervisors, physical networks and cloud orchestration platforms, including both CloudStack and OpenStack. It accelerates the connection of virtual resources and enables the federation of private, public or hybrid cloud environments. It also integrates with Juniper's Firefly Perimeter firewall, the first of many SDN-enabled security services. Easing the migration path to SDN, Contrail will integrate with Juniper MX, EX and QFX Series switches and routers.
"As SunGard continues to expand our cloud service portfolio, it is important for us to ensure that our cloud network is flexible, scalable and dynamic — and SDN technologies like Juniper Networks Contrail solution can bring this agility to the network," said Nik Weidenbacher, principal engineer at SunGard Availability Services. "Juniper's Contrail SDN controller is unique because it brings advanced layer 3 IP/MPLS routing capabilities into the hypervisor, while integration with orchestration platforms like CloudStack makes it simple to manage. This enables us to extend our existing network into the virtual world without the need to train server staff in routing protocols."
Technology Partnerships
Juniper also announced technology partnerships that enable the development of solutions through integration with Contrail to facilitate SDN and cloud deployment, enable dynamic service chaining and improve visibility and insights into network operations. New partners include Cedexis, Check Point Software Technologies, Citrix, Cloudscaling, Dorado Software, Flash Networks, Gencore Systems, Gigamon, Guavus, ISC8, Lumeta, Mirantis, Red Hat, RiverBed, Sandvine, SevOne, Silver Peak, Sonus Networks, and Websense. IBM and Juniper also announced a partnership to integrate Contrail with IBM's SmartCloud Orchestrator.
With a focus on cloud performance, Cedexis Radar and Juniper's Contrail will enable Cedexis data to program the Contrail platform with unprecedented, real-time visibility into the cloud and last-mile ISP performance being experienced by an enterprise's end-user audience. "The integration of Juniper Networks Contrail and Cedexis-optimized end-user data sets will allow our mutual customers to realize the promise of SDN while improving network agility, reducing costs and minimizing risk," said Aruna Ravichandran, vice president, marketing and strategy, Software Solutions Division at Juniper Networks. "The joint solution, built on open and proven standards, will enable our customers to deploy a highly dynamic, business KPI-driven network that better meets their business needs."
Open Source Version
Juniper also introduced a new initiative that makes the source code library for its Contrail platform available through an open source license. OpenContrail is based on proven and stable network protocols, giving developers the opportunity to innovate, adopt and experiment with SDN technology that seamlessly integrates with existing network infrastructures. By making OpenContrail freely available, service providers and enterprises can now rapidly test and build SDNs that fit their unique needs, accelerating the move from lab environments to full-scale production.
Available via an Apache 2.0 license, OpenContrail will integrate with hypervisors, orchestration systems and physical networking equipment, and allow for easy integration with existing infrastructures – because it is based on proven protocols such as XMPP and BGP and leverages the standards work from ETSI and IETF.
"Juniper Networks has a long history of supporting open source and open standards. OpenContrail reinforces that commitment," said Ankur Singla, vice president and general manager, SDN, Software Solutions Division, Juniper Networks. "It provides developers, partners and customers with a solution that creates significant business opportunities and an open source networking framework that will accelerate adoption, foster new innovation, and create a more open and transparent approach to SDN. By providing both a fully supported commercial product, as well as a customizable open source version, we are catering to our customers' needs and encouraging innovation via an open development community."