Data Center Knowledge | News and analysis for the data center industry
Tuesday, February 24th, 2015
| 1:00p |
Study: Emerging Data Center Markets Offset Decline in North America, Europe

Companies and government agencies in developed markets are building fewer data centers overall, but that decline is being offset by growth in emerging data center markets.
That’s according to a recently published survey by market analysts at 451 Research, who said the number of data centers in North America and Europe declined 1 percent and 2 percent, respectively, in 2014, while Asia Pacific, Latin America, and the Middle East and Africa (MEA) region all saw 2 percent year-over-year growth.
Overall, there were about 4.3 million data centers worldwide as of the fourth quarter of 2014, according to the report. That was 0.2 percent higher than at the same time the previous year, making for a weak growth rate.
Analysts at 451 attribute the low growth rate primarily to a decline in demand by traditional enterprises in the established markets. Demand for data center space from multi-tenant data center, cloud and IT service providers, however, is booming, and the boom is enough to offset the decline in the enterprise data center market.
To be sure, the enterprise market is still many times bigger than the service provider data center market. Only 5 percent of all data centers in the world, not counting server closets and server rooms, are service provider data centers. The rest are enterprise facilities.
By square footage, 83 percent of all data center space is controlled by enterprises. Multitenant data centers contribute 12 percent of the total, while cloud providers account for 5 percent.
Here’s a more detailed look at growth trends in the cloud and service provider, multi-tenant, and enterprise data center sectors, courtesy of 451 Research:

[Chart from 451 Research not reproduced here.]
The overall trend, however, is toward fewer but larger facilities. A 2014 study by IDC reached a similar conclusion. The IDC study also predicted that by 2018, mega data centers built by service providers would account for more than 70 percent of all data center construction.
Hyperscale data centers grew 4 percent in 2014, led primarily by demand from cloud and service providers, according to 451.
In addition to the continued outsourcing of in-house data center capacity to specialist providers, another trend working against growth in enterprise data center space is increasing IT hardware efficiency. Companies are able to do more with fewer boxes, so they need less space to house their infrastructure.
“The bright spot for facilities vendors being that those cloud and MTDC (multi-tenant data center) providers will need to accommodate the growing demand for outsourced IT resources with their own facilities, albeit fewer and more efficient ones,” Daniel Harrington, research director for enterprise data centers at 451, said in a statement.

| 4:00p |
IBM Wants to Blur Lines Between Enterprise Data Center and Cloud

IBM unleashed a slew of announcements around its cloud services Monday, all aimed at creating a seamless environment across on-premises enterprise data centers and cloud infrastructure.
The glue that keeps everything together is Bluemix, IBM’s Platform-as-a-Service offering based on the open source PaaS software Cloud Foundry. The vision is to give developers the ability to mix and match application components through APIs, with Bluemix acting as the layer that abstracts the underlying infrastructure, from the on-prem data center to the public cloud.
The company wants to enable control and security of private cloud in this hybrid model, creating an “environment that mirrors existing controls,” according to the news release.
IBM is dedicating a lot of resources to this big hybrid cloud initiative. The company said more than half of its cloud development team will work on hybrid, and “hundreds” will work on open cloud standards.
“This will help break down the barriers between clouds and on premise IT systems, providing clients with control, visibility and security as they utilize the public and private clouds,” Robert LeBlanc, senior vice president for IBM Cloud, said in a statement.
In-House Bluemix Enables Hybrid PaaS
One part of the announcement that brings the vision closer is an in-house version of Bluemix. Deployed in the customer’s own enterprise data center, it integrates with the public PaaS, giving a developer the ability to use a single platform that combines flexibility of public cloud with control of the on-prem data center for pieces of the application that require it.
Public Bluemix services went into general availability in July 2014, also as part of IBM’s big $1.2 billion cloud push announced in 2014.
App Mobility via Docker APIs
Monday’s announcement also had a big application mobility piece, crucial to the heterogeneous-infrastructure vision. That mobility is achieved by employing application containers. IBM has made it possible to use native Linux containers with Docker APIs to move an app built in a cloud to an in-house data center to work on data that must remain on premises. Docker is a logical choice, since it is currently the most popular application container technology.
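IBM did not publish the exact mechanics, but the basic portability idea is straightforward: a containerized application image can be exported from one environment and loaded in another. Below is a minimal, hypothetical sketch using the Docker SDK for Python; the image name, tarball path, and command are placeholders for illustration, not anything IBM-specific.

```python
# Minimal sketch of container portability between a cloud host and an
# on-premises host. Requires the Docker SDK for Python ("pip install docker")
# and a Docker daemon on each side; image name, tarball path, and command
# below are hypothetical placeholders.
import docker


def export_image(image_name, tar_path):
    """On the cloud side: save a built image to a portable tarball."""
    client = docker.from_env()
    image = client.images.get(image_name)
    with open(tar_path, "wb") as f:
        for chunk in image.save():  # stream the image layers to disk
            f.write(chunk)


def import_and_run(tar_path, command):
    """On the on-prem side: load the tarball and run the container next to
    the data that has to stay on premises."""
    client = docker.from_env()
    with open(tar_path, "rb") as f:
        images = client.images.load(f.read())
    return client.containers.run(images[0].id, command, detach=True)


if __name__ == "__main__":
    export_image("my-cloud-app:latest", "my-cloud-app.tar")
    # ...copy the tarball to the on-prem host, then on that host:
    container = import_and_run("my-cloud-app.tar", "python worker.py")
    print(container.status)
```

In practice a private registry push/pull would replace the tarball copy, but the portability property is the same: the image, not the infrastructure, is the unit that moves.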
Additionally, IBM has launched new tools aimed at increasing developer productivity. These include orchestration across hybrid environments, collaboration, and self-service provisioning, among others.
Sydney, Montreal Data Centers Coming Online Shortly
Bluemix and other IBM cloud services are hosted in the company’s SoftLayer data centers, and it has been aggressively expanding the footprint of this infrastructure.
Along with announcing the new hybrid cloud strategy, IBM said it was close to launching new cloud data centers in Sydney and Montreal. The company expects to bring the two new sites online within the next 30 days.
They are part of the $1.2 billion investment in cloud services the company announced in 2014. Three data centers have been launched since the announcement: in Frankfurt, in Tokyo, and in Queretaro, Mexico.
Also this year, IBM plans to launch new cloud data centers in Milan and in Chennai, India.

| 4:30p |
Serving Content and Cloud Providers with New Network Services

Andreas Hipp is the CEO and co-founder of Epsilon.
Data center operators can capture more revenue from cloud and content providers if they’re able to offer complete network-inclusive data center solutions. These kinds of customers want their lives made easier by the seamless integration of data center and network infrastructure, all delivered from a single source. Data center operators, large and small, can meet this new demand but are being challenged to find effective ways to add networking to their service portfolios.
New Customers, New Demands
Cloud and content providers need infrastructure that works harmoniously and is simple to manage. They also require easy scalability so that they can grow their businesses globally without the stress of integrating multiple vendor solutions.
This in turn is driving them to look for a single source for all their infrastructure needs, saving them from having to procure these services separately and work them into an integrated whole. They are looking to data center operators to deliver both network and data center solutions and see them as natural providers of scalability. By offering a simple and cohesive network and data center ecosystem, data center operators can capture new revenue from cloud and content providers and free these customers to concentrate on their core businesses.
Caring for Your Core
Networking, however, goes well beyond the core business of data center operators, challenging them to find efficient and effective ways to add these capabilities to their basic services.
They have the theoretical option of developing the required capability internally. This is an unattractive and impractical choice for most data center businesses, demanding resources and time they can’t spare. As well as acquiring new expertise, they would need to develop relationships with multiple network providers, initially within their home market and then, as customer horizons expand, in any number of other markets around the world. Buying, selling, managing, and maintaining international networks is complex and unlikely to offer a return on investment in the near term.
Developing their own networking capability would also limit their ability to get to market quickly, shifting their attention from developing data center solutions to creating a network offering. Instead they must find a way to reach out and acquire the capability they need, one that matches their own skills in the data center market. In this way they can explore new revenue streams while making sure that their cloud and content customers get the service packages they need.
The Outsourcing Answer
Data center operators can converge flexible networking with multisite data center operations through the right network outsourcing partnership. An outsourced network solution offers a low-capex method of acquiring the new networking capabilities that will help the operator meet the needs of customers in the cloud and content space. When network expertise is tapped in this way, new value can be created for customers and new business models explored.
An outsourced networking solution allows data center operators to integrate network capabilities with ease and break into new services quickly. It leaves others to monitor and deliver the network capabilities that are needed. The right contract allows a network offering to be scaled up or down, without the need to commit to rigid service provider contracts. No new investments or long procurement processes are required when it comes to scaling to meet customer needs.
Data center operator Digital Realty, for example, has facilities in Chessington and Woking with 22 key London-area Internet and metro gateway centers. Through network outsourcing it has been able to gain an application-aware network that supports 100G connectivity and facilitates on-demand switching of cloud-based application workloads between all these locations. It has also gained interconnectivity with hundreds of cloud, content, OTT and carrier names globally.
Racks Central, a Singapore-based data center provider, has also outsourced its network needs and is now able to offer its customers both its core data services as well as global connectivity. It has achieved its aim of allowing customers to purchase both data center and networking solutions, simplifying their procurement processes. By acting to meet these needs, Racks Central has avoided the risk of customers looking elsewhere for the integrated services they need.
A Real Opportunity
The demand from cloud and content providers is growing and the fastest way to serve this new demand for both network and data center infrastructure is to use network outsourcing to add new networking capabilities. From the largest to the smallest data center, operators can capture new revenue without adding complexity to their businesses. With the right partner, there is an opportunity to drive profitability, capture new revenue, and do more for customers.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

| 4:30p |
Logical Abstraction of the Physical Data Center

We’ve got virtualization, cloud computing, and now quite a bit of new buzz around commodity platforms. The reality is that this computing model has taken off faster than many expected. With the help of the modern hypervisor and software-defined technologies (SDx), the powerful systems designed to optimize your data center and make your cloud more agile can potentially run on commodity gear.
Here’s the reality: it’s making some traditional technologies a bit nervous.
I recently wrote that the conversation around custom-built servers, networking components, and now storage has been heating up. The concept of a commodity data center is no longer locked away for mega-data centers or large organizations. Look at Google as an example: here is an organization that builds its own server platform by the thousands. In fact, Google has developed a motherboard using POWER8 server technology from IBM and recently showed it off at the IBM Impact 2014 conference in Las Vegas. DCK’s Rich Miller recently outlined how “POWER could represent an alternative to chips from Intel, which is believed to provide the motherboards for Google’s servers.”
Let’s start here: The logical abstraction of hardware and its associated services is the natural progression of the technological landscape.
That said, how will this impact existing physical systems? Let’s look at the components behind a commodity platform and what can be logically abstracted from the physical environment.
- Data center. Say hello to the logical data center. The abstraction of data center services revolves around visibility and control ranging from the chip to the cooler… and everything in between. This “data center operating system” will be able to intelligently control a truly distributed plane. Some of the leading data center providers are already doing this. IO and its IO.OS platform give administrators next-gen DCIM capabilities within their data centers. Furthermore, we are seeing developments from hypervisor makers, like VMware, that aim to take control of all resources within their cluster. From there, VMware can control all underlying policies and even push workloads into the cloud.
- Storage. There had to come a point where we needed to stop simply adding disks. We needed to make storage smarter. Physical storage vendors need to see the writing on the wall and adapt quickly. Already some are introducing powerful, agnostic, logical storage layers capable of direct cloud and data center interconnectivity. Let me put it this way: what’s stopping a shop from buying its own chassis and populating it with the disks of its choice? From there, point the resources to a virtual appliance like Atlantis USX or VMware vSAN and apply all of the enterprise features there. This includes HA, encryption, deduplication, replication, cloud API extensions, and more. This kind of story is becoming a lot more compelling.
- Security. Virtual appliances, services, and other abstracted features are making their way into the data center. Pretty much all major security technologies now offer both a physical and a virtual appliance for you to work with. For example, security products from Check Point allow you to deploy virtual appliances and services throughout your cloud and localized network infrastructure. These software blades can be deployed on any virtualized system and include Firewall, VPN, IPS, Application Control, URL Filtering, Antivirus, Anti-Bot, Identity Awareness, and Mobile Access capabilities.
- Networking. Not much explanation needed here; SDN is a huge component of cloud computing and the modern data center. But that’s not what we’re talking about today. This little section is all about open source networking. Let me give you an example: Cumulus Networks has its own Linux distribution, Cumulus Linux, which is designed to run on top of industry-standard networking hardware. Basically, it’s a software-only solution that provides the ultimate flexibility for modern data center networking designs and operations with a standard operating system, Linux. Now, go out there and build your own enterprise network infrastructure: all on open source networking.
- Compute. Hardware and service profiles allow large organizations to create “follow-the-sun” data center models. These logical tools are becoming more powerful and much more automated. Hypervisor and virtualization technologies are becoming so thin that differences in performance are almost negligible. VMware, Hyper-V, and the latest XenServer platforms continue to be examples of this. We can now point every compute resource to a hypervisor and let it take over. We can create powerful HA policies that will mirror VMs across a variety of platforms (which can be commodity); a small sketch of this pooled-compute view follows the list below. Yes, you can buy a SuperMicro chassis and create your own data center around it. However, one of the biggest concerns was always around parts, warranties, and maintenance. Now, big compute vendors are creating a “commodity-like” server line without the bells and whistles. The HP Cloudline server models, for example, will be cloud-ready without added software features while still carrying a warranty.
- Taking the future into consideration. Our homes are becoming smarter, we have so much more connectivity into the cloud, and the number of users/devices coming online is growing very rapidly. Future technologies will need to take the user, the cloud, and a variety of delivery methods into consideration. In many cases, new types of logical technologies will be used. Consider these stats from the latest Cisco Visual Networking Index:
- In 2013, the number of mobile-connected tablets increased 2.2-fold to 92 million, and each tablet generated 2.6 times more traffic than the average smartphone.
- By the end of 2014, the number of mobile-connected devices will exceed the number of people on earth, and by 2018 there will be nearly 1.4 mobile devices per capita.
- By 2018, more than half of all traffic from mobile-connected devices (almost 17 exabytes per month) will be offloaded to the fixed network by means of Wi-Fi devices and femtocells.
- By 2018, over half of all devices connected to the mobile network will be “smart” devices.
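Returning to the compute point above: as a small, hedged illustration of treating a pool of hosts as one logical compute resource, the sketch below inventories guests across several hypervisors through the open source libvirt API. The host URIs and the choice of libvirt are assumptions for illustration only; platforms such as vSphere or Hyper-V expose equivalent views and handle HA themselves.

```python
# Minimal sketch: treat a pool of commodity hosts as one logical compute
# resource by querying each hypervisor through libvirt (the open source
# virtualization API). Host URIs are hypothetical; install "libvirt-python".
import libvirt

HOSTS = [
    "qemu+ssh://root@host-a.example.com/system",
    "qemu+ssh://root@host-b.example.com/system",
]


def inventory():
    """Build a {host: [(vm_name, vcpus, memory_mb), ...]} view of the pool."""
    pool = {}
    for uri in HOSTS:
        conn = libvirt.openReadOnly(uri)
        vms = []
        for dom in conn.listAllDomains():
            _state, max_mem_kb, _mem_kb, vcpus, _cpu_time = dom.info()
            vms.append((dom.name(), vcpus, max_mem_kb // 1024))
        pool[uri] = vms
        conn.close()
    return pool


if __name__ == "__main__":
    for host, vms in inventory().items():
        print(host)
        for name, vcpus, mem_mb in vms:
            print(f"  {name}: {vcpus} vCPU, {mem_mb} MB")
```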
We are in the midst of the Internet era as more devices, people, and data points come online. There will be new services, new kinds of compute models, and new ways to deliver rich content and data. The drive of the user and the IoT will create new, complex challenges around resource control. For now, many organizations are beginning to explore commodity systems paired with open-source computing, virtualization, and software-defined technologies.

| 5:00p |
Infosys Acquires ERP Software Firm Panaya

Global IT and software giant Infosys announced it will pay $200 million to acquire enterprise resource planning software company Panaya. Panaya caters to enterprises with its Software-as-a-Service CloudQuality suite of big data analytics-driven software that tests changes to SAP, Oracle EBS, and Salesforce.
Panaya, based in Menlo Park, California, brings a lot of value to Infosys for a relatively small $200 million price. The company claims to have a third of the Fortune 500 as customers. For Infosys, which is sitting on around $5 billion in cash, the acquisition is strategic as it evolves from services to software-driven solutions and builds out its automation and delivery engine.
The acquisition is the first big move that new CEO Vishal Sikka has made since being appointed last July. In a statement, Sikka said that this move is “a key step in renewing and differentiating our service lines. This will help amplify the potential of our people, freeing us from the drudgery of many repetitive tasks, so we may focus more on the important, strategic challenges faced by our clients. At the same time, Panaya’s proven technology helps dramatically simplify the costs and complexities faced by businesses in managing their enterprise application landscapes.”
VC-backed Panaya has received $59 million in funding since it was founded nine years ago.

| 5:30p |
Using Optical Fiber Solutions for 10G to 40/100G Migrations

With the advent of data centers in communication infrastructure, various network protocols have evolved to meet the data rates necessary to handle the required amount of data transfer efficiently. Part of this evolution, of course, was installing fiber optics in more and more network interconnection scenarios in place of copper cable.
With end-user connectivity now possible through various user devices, fiber optic cables have become the ubiquitous transport medium in the data center network. The number and type of fibers, their long-term viability, and value to the user are paramount considerations to handle today’s rapidly increasing data rates and exponential growth of data traffic.
In this white paper from Sumitomo Electric, we examine and consider both how and why an optical fiber ribbon cabling solution should be deployed for new 40G/100G installations and upgrades of existing 1G/10G infrastructures.
Over the past decade, the movement of data has become crucial for businesses and individuals alike, particularly through the rise of social networking, big data, mobile communications, and the infrastructure necessary to support them. As the demands on the network infrastructure continue to increase, more and more servers, routers, and network switches will be added to individual data centers to handle the increased data traffic. As these increase, the amount of installed optical fiber will also grow, though that growth may be mitigated by a shift from multimode to single-mode fiber.
Because of the unique nature of the modern data center, the ultimate choice of which cable to deploy in the data center rests with the person responsible for the execution of a network’s infrastructure build plan. Many factors should be considered prior to purchasing, installing, terminating, and testing a fiber optic cable. When reviewing the major factors that typically are considered for a cable installation, it is clear that ribbon cable designs should be seriously considered for use in data center builds.
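One factor worth quantifying up front is raw fiber count. Parallel optics multiply the number of strands per link: a duplex 10G link uses 2 fibers, while 40GBASE-SR4 uses 8 fibers of a 12-fiber MPO and 100GBASE-SR10 uses 20 fibers of a 24-fiber MPO. The rough back-of-the-envelope sketch below is not from the white paper; it counts lit fibers only and ignores unused MPO positions, but it shows why ribbon density becomes important as link speeds rise.

```python
# Back-of-the-envelope fiber counts for a 10G -> 40G/100G migration.
# Assumptions (IEEE 802.3 parallel optics): duplex 10G = 2 fibers,
# 40GBASE-SR4 = 8 lit fibers of a 12-fiber MPO, 100GBASE-SR10 = 20 lit
# fibers of a 24-fiber MPO. Counts lit fibers only.
import math

FIBERS_PER_LINK = {"10G": 2, "40G-SR4": 8, "100G-SR10": 20}


def fiber_plan(link_counts, fibers_per_ribbon=12):
    """Return (total lit fibers, 12-fiber ribbons needed) for a cabling plan."""
    total = sum(FIBERS_PER_LINK[kind] * count for kind, count in link_counts.items())
    return total, math.ceil(total / fibers_per_ribbon)


if __name__ == "__main__":
    for plan in ({"10G": 48}, {"40G-SR4": 48}, {"100G-SR10": 48}):
        fibers, ribbons = fiber_plan(plan)
        print(f"{plan}: {fibers} lit fibers, at least {ribbons} twelve-fiber ribbons")
```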
Download this white paper today to learn how the overall combination of ruggedness of the ribbon design, fiber density, size, and relative cost points to ribbon as being most suited to both new and retrofit installations in the data center.

| 5:30p |
Analytics Startup RapidMiner Nets $15M Round

Big data analytics startup RapidMiner announced that it has raised $15 million to help it execute on aggressive growth plans. The Series B round was co-led by Ascent Venture Partners and Longworth Venture Partners.
RapidMiner has taken a strong position in the analytics market and describes its offering as code-free predictive analytics, with pre-built models and one-click deployments. Gartner placed the small Cambridge-based startup in the Leaders quadrant for Advanced Analytics Platforms, alongside IBM and SAS. RapidMiner reports that in the last year the company tripled product revenues, grew to over 250,000 active users, and secured dozens of net new customers and strategic partnerships.
“RapidMiner is the only code-free predictive analytics solution on the market that can execute analytical processes in-memory, in-Hadoop, in-cloud, in-stream, and in-database,” Nilanjana Bhowmik, partner at Longworth Venture Partners, said in a statement. “We’re consistently seeing the RapidMiner team push the envelope and look ahead at what its community of users need for accurate business decisions, making it the dominant leader of the next generation of modern analytics platforms.”
The analytics startup offers free and commercial versions of its products.
Just before the funding announcement, RapidMiner launched enhancements to its platform with pushdown analytics computation for big data in Hadoop. The company says that with pushdown Hadoop processing in RapidMiner Radoop, it can push the computation of more than 250 machine learning models directly to the data in the cluster.
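Radoop itself is driven from RapidMiner’s visual interface, so the sketch below is not RapidMiner code; it uses PySpark purely as a stand-in to show the general pushdown pattern the company describes, where the data stays in the cluster and the model training is shipped to it. The HDFS path and column names are hypothetical.

```python
# Illustrative pushdown pattern (a PySpark stand-in, not RapidMiner Radoop):
# feature preparation and model training run in the Hadoop/Spark cluster
# where the data lives, instead of being pulled down to a workstation.
# The HDFS path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("pushdown-sketch").getOrCreate()

# Lazy, distributed read; nothing leaves the cluster.
df = spark.read.parquet("hdfs:///warehouse/transactions")

# Feature assembly and model fitting execute as distributed jobs.
features = VectorAssembler(
    inputCols=["amount", "items", "days_since_last_order"],
    outputCol="features",
).transform(df)

model = LogisticRegression(featuresCol="features", labelCol="churned").fit(features)
print(model.coefficients)

spark.stop()
```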
“Many companies are still deterred by the complexity of building analytics applications on a complicated big data technology stack,” Nik Rouda, senior analyst at ESG, said in a statement. “RapidMiner is differentiated both by offering a solution that is very deep and yet still user-friendly, attributes which will enable faster development in a wide range of environments.”

| 6:25p |
Report: Microsoft Eyeing Phoenix Data Center Build

Local news media are reporting that Microsoft may be considering building a Phoenix data center. Arizona state legislator Jeff Dial told the Phoenix Business Journal that Microsoft is asking for adjustments to the 2013 data center tax breaks.
Local reports suggest the company is contemplating a site at Union Hills Drive, near Interstate 17. The project could total as much as 575,000 square feet, according to some estimates.
If the project comes to fruition, it will likely be listed under a code name during the filing process. The company used the name Project Alluvion for a recent Iowa data center project. Facebook and Google often use code names as well, while Amazon files under subsidiary Vadata.
Local media have not been able to surface any documents associated with the potential Phoenix data center project. However, a secretive approach is characteristic of large data center builds by Microsoft. That the company’s officials may have inquired about data center tax breaks in Arizona does not necessarily mean the decision has been made. Microsoft — and others — usually shops around in different states, weighing a variety of factors, tax incentives being only one of them.
Arizona passed incentives for data centers in 2013. The effort was advanced by the Arizona Data Center Coalition, a group of data center operators, utilities, realtors, and economic development groups that would benefit from increased data center business in the state. Microsoft was part of the coalition.
One big data center user that recently announced it will be building in Arizona is Apple, which said it was spending $2 billion to convert a former manufacturing plant into a data center, to be powered entirely by renewable energy.
Phoenix is home to a data center cluster. The state has attracted solid data center business thanks to a low rate of natural hazards and a friendly business climate, among other attributes.
Microsoft is no stranger to tax law. States desire these projects and are often aggressive when it comes to incentives. Microsoft is a master at the game. In 2008, Microsoft halted a Quincy, Washington, expansion to prompt better tax breaks, which it succeeded in doing.
In 2014, the company announced a $1.1 billion project in Iowa, citing an attractive tax incentive package. The project in Iowa joined an existing data center in West Des Moines, first announced in 2008. The company acquired 200 acres of land in Quincy in December 2013, but a $20 million break on sales tax in Iowa helped move that state up on the priorities list.
Roughly this time last year, the company lined up tax incentives in San Antonio, Texas, for a new $250 million data center. The company has another data center of roughly 470,000 square feet there, opened in 2008.
Microsoft recently announced a major $200 million expansion in Wyoming. Wyoming also offered aggressive incentives.
Currently in Arizona, data center operators and qualified tenants receive an exemption for sales and use taxes attributable to data center equipment purchased for use in a qualified data center, defined by new investment in the state of at least $50 million for urban locations and $25 million for non-urban locations.
Data center operators can benefit from the tax exemption for 10 years. The incentives also reward sustainable redevelopment, adding an enhanced tax benefit for up to 20 years if an owner/operator seeks to redevelop a vacant structure or existing facility using sustainable development practices.
Microsoft is reportedly advocating for adjustments to the 2013 data center tax breaks, namely amendments to the 10- and 20-year tax breaks on equipment.

| 10:49p |
Juniper CIO Bask Iyer to Become Next VMware CIO

VMware did not wait long to appoint a replacement for its former CIO Tony Scott, who left the role earlier this month to join President Barack Obama’s administration as U.S. CIO.
The new VMware CIO is Bask Iyer, who will be leaving his role as senior vice president and CIO of Juniper Networks, another Silicon Valley giant. VMware made the announcement Tuesday afternoon PST.
While networking is not VMware’s primary business, it is becoming an increasingly important play for the company, which has built a formidable presence in the software-defined networking market for data centers and telcos. This focus has made it a major competitor to Juniper, which has been trying to grow its data center networking business and build out SDN capabilities.
Iyer has been at Juniper since July 2011, overseeing the technology and business operations around business transformation, business services, IT, real estate, and workplace services. Prior to that, he was CIO at Honeywell, a U.S.-based multinational conglomerate that produces everything from home appliances to military equipment.
Before Honeywell, Iyer spent nearly 6 years as CIO for consumer healthcare R&D at the British pharmaceuticals giant GlaxoSmithKline.
Once he joins, about one month from now, Iyer will lead the VMware IT team that manages critical systems supporting the Palo Alto, California-based company’s global business operations. He will report to VMware CFO Jonathan Chadwick and join the company’s executive team.
“Bask has extensive experience as a strategic and operational leader,” Chadwick said in a statement. “He will play a pivotal role in leading VMware and helping our customers as we deliver the reality of the software defined enterprise.”
Scott, the previous VMware CIO, was tapped by the White House as a replacement for former U.S. CIO Steve VanRoekel, who left in September 2014 to join the Ebola response team of the U.S. Agency for International Development as chief innovation officer.