Data Center Knowledge | News and analysis for the data center industry
Thursday, April 2nd, 2015
12:00p
Hybrid Cloud’s Need for Speed
As more businesses move data to the cloud and outside of corporate walls, and more data is generated via the Internet of Things (IoT), IT is turning into a hybrid world.
Not only does data need to be put into context across company and geographic boundaries, which makes for a distributed infrastructure, but the window of time in which that work must happen is also shrinking.
Data center provider Interxion, IBM, and big data platform Qubole are three very different providers and architects of the hybrid world. All are betting that agility and real-time needs will be a key driver in the transformation to hybrid.
While cost savings are part of the hybrid story (optimizing which workloads go where based on cost), the bigger picture involves optimizing the user experience (placing workloads where they deliver the most value). Whether it is an employee using a back-office application, a developer who needs compute resources on the fly, or a consumer on a shopping spree, it all needs to happen in or near real time.
Interxion views itself as building the railroad station hubs of the internet, having built data center infrastructure across Europe in all major metropolitan areas. It acts as an aggregation point for clouds, networks, and content. As a result, a new data center edge is arising, according to Ian McVey, director for enterprise and systems integrators for Interxion. As data and consumption become more distributed, data centers need to be closer to the end user and connections must be secure.
IBM sees hybrid as the foundation for smarter apps, tapping richer data sets. Cloud means certain data is exposable. Its role is helping to connect these data clouds and putting that data in proper context with the right tools, as well as leveraging it in new contexts in order to build smarter applications.
“Everything is becoming hybrid,” said IBM cloud CTO Danny Sabbah. “Businesses are looking at context-based, intelligent applications. They want to create a more personalized experience for clients. But there is a three-to-seven second rule.”
Sabbah gives an example of a brick-and-mortar video game store that wants to deliver a targeted experience. “When that client walks in, they want all the screens to fundamentally change based on who walked in and their proclivity to buy something. It needs to happen in three to seven seconds.”
The company is betting on smart applications with smarter contextualization built on hybrid infrastructure. This is the rationale behind IBM’s recently announced multi-billion dollar Internet of Things investment: it’s not the cloud, but the data in that cloud that is important. It was also the rationale for adding Watson functionality into Bluemix so developers could add new Watson-type functionality, and for signing partnerships with The Weather Channel and Twitter so enterprises could tap into those knowledge clouds. An airline can use weather data to optimize its flights and the customer experience, for example.
Finally, Qubole provides a data platform that helps companies leverage smart data sets in public clouds, and CEO Ashish Thusoo says he sees data sharing as a strong emerging use case.
A recent Interxion survey reveals that close to half of enterprises are already employing hybrid cloud, though McVey believes that this is true mainly for static workloads at the present. However, he expects to see dynamic use cases growing.
“Where you build depends on the success,” said Interxion’s McVey. “What happens when someone wants to do real-time? Geolocation? Wants to check against the customer base, check against the store database, and send a text message to see if the customer opted in? The current solution is so spread out that by the time you do all the hops, the consumer is down the street. The answer is to colocate some of the assets. We have the mobile carriers, the retailer master database, and the cloud providers’ marketing-automation platforms in our data centers.”
McVey said this need extends to smart manufacturing, location-based services, the Internet of Things, and pretty much every emerging use case. The world is a hybrid of many clouds, and all the next-generation use cases have a time sensitivity, he said.
“One stat I will flag [from the recent hybrid survey]: When asked ‘if you could solve network issues, security costs and app performance, how much more of your public workloads would you put into the cloud?’ Respondents said they’d double it. There’s a real latent demand for enterprises to solve this network issue.”
Hybrid doesn’t only mean connected, mixed clouds within an enterprise; it also means connected and mixed data clouds. Service providers are building and providing the infrastructure and platform for that to happen.
“We’ve increasingly seen data economies emerge,” said Qubole’s Thusoo. “We’re seeing more data providers providing data to other companies. Our cloud platform becomes ideal for companies to share data across the supply chain.”
While the hybrid world is dependent on diverse and fast connectivity, there are many more factors in play. It might look simple for the consumer, but there is a lot going on behind the curtain.
“There are privacy, auditability, and GRC (governance, risk, and compliance) issues,” said IBM’s Sabbah. “In pharma you have HIPAA rules, and there are rules in finance. You need to provide auditability and traceability for every single piece of data. Traceability is extremely important; customers need to be able to generate the right report.”
The need for security and compliance, in addition to speed, is driving private links and direct connections, according to Interxion.
It’s no secret that everyone across IT infrastructure land is betting on hybrid. While putting the right data in the right cost model and setting up and optimizing IT spend were the initial benefits of hybrid, it’s now ultimately about the customer or consumer of that data.
The IT infrastructure industry provides the framework and cogs to make it a reality. The Internet of Things generates more meaningful data, and APIs drive a hyper-connected world that delivers an optimized experience for individuals in both the consumer and business worlds.

3:00p
How to Approach a Move to Modular or Pre-Fab Data Centers
Data center managers and operators face many challenges today, including cost, capacity, and the increasing use of cloud computing. One situation industry leaders face is what to do when a data center is reaching the limits of its capacity, as many are. The questions revolve around how to increase capacity quickly, easily, and affordably. Modular or pre-fabricated data center tactics are often considered in this scenario.
At the spring Data Center World conference, which convenes in Las Vegas April 19-23, Herb Villa, senior application engineer at Rittal, will present a session titled “The Move To Modular: A Technology Review.” The conference’s educational tracks will include many topical sessions covering issues and new technologies that data center managers, service providers, owners, and operators face.
Container, Modular or Pre-Fabricated
Villa said that there is a need for language clarity in the modular or pre-fabricated part of the industry. “I use the term pre-configured,” he said. “Some use the word modular to mean a 100,000-square-foot data center that is built out in increments of 20,000 square feet, in a modular style.” He said that usage does not correctly describe pre-configured or pre-fabricated units that are factory-built off-site. “We need to first speak a common language,” he added.
For the technology review, while planning for the move to pre-fabricated units, there are simple questions the end user needs to focus upon:
- What are my organization’s priorities?
- Where are we going? Is it our own space, local space, remote space or a shared space?
- What are we putting in there? What kinds of products do we have or need to purchase? E.g. servers, switches, etc.
- What are your organization’s plans for growth? What does the current and future state look like?
Staffing and Resources
Two other key factors in considering modular or pre-fabricated units are the “allocation and availability of resources, both money and people,” Villa said.
These are important considerations in buy-vs.-build discussions, he said.
Villa said that the end user needs to have the answers to these questions in mind when dealing with their vendors. “I am not going to tell you where you are going or what you should be doing,” he said. “I am going to listen to what you are telling me.”
System Thinking
Many companies are moving from a component-based solution, where every piece of equipment is “cherry-picked,” to a solution that comprises an entire system, Villa said. He said the purchase is now more like buying an automobile, with options such as adding an infant seat: one is purchasing an entire transport system branded Ford or Mercedes.
“End users are moving beyond traditional IT space to the industrialization of IT space,” he said. This means systems are manufactured and pre-configured off-site, then deployed in the customer’s space. He said one example of an application of this is when IT equipment lives outside a data space, such as on the factory floor.
German-based Rittal, an enclosure manufacturer with products covering the power, cooling, enclosure, software monitoring, and climate control spaces, is deploying these kinds of pre-configured products globally. For example, Villa said, a DIY room-based kit deployed in Southeast Asia, China, and the Middle East is very popular in regions where there is less environmental and physical security than in the United States. It was launched two years ago by a German-based cloud provider and has gained traction around the world. “It’s a viable solution where appropriate,” he said.
To learn more and discuss the pre-fabricated approach to data centers, attend Villa’s session at Data Center World Global Conference in Las Vegas this month. To register, visit the Data Center World website.

3:30p
Does Backing Up Data to Cloud Make Financial Sense?
With over 28 years of IT data center infrastructure experience, Bill Andrews has proven success in technical sales and marketing and has impacted numerous high-growth companies, including ExaGrid, Pedestal Software, eDial, Adero, Live Vault, Microcom and Bitstream.
Over the past 30 years, data backup has been accomplished by using an application that makes a copy of the data to tape and, more recently, to disk. A copy of the tape is sent offsite, or data is replicated over a WAN to maintain an offsite copy for disaster recovery.
Backup can be cumbersome because some data needs to be backed up on a daily basis and all data on a periodic basis. The dream for most IT professionals is to simply have someone else run the backups and to pay by the month.
The $1 million question is: Can you simply outsource your backups, pay a monthly subscription, and move on? The answer is as complicated as backup itself. The challenge of backup involves data that changes frequently. The goal is to ensure that you have all the most recent changes to databases, email, user files, and other data so that none must be recreated if it is deleted, overwritten, corrupted, or destroyed.
To that end, backups occur every night to make a full copy of all databases, a full copy of all email servers, and any file that has changed since the day before. If an organization has a small amount of data with a low change rate, then not much data needs to be backed up each day or night. However, if the amount of data is large, then the daily changed data will also be large, creating bigger challenges in order to back it up.
To back up to the cloud, data would have to leave the data center, traverse the internet, and land at the cloud storage provider, be it a specific cloud backup provider or a public cloud provider such as Amazon, Microsoft Azure, or Google. The challenge of getting the data to the cloud depends on the amount of data. Small data requires low bandwidth to move to the internet on the way to cloud storage. However, large data requires bandwidth that often becomes cost prohibitive. As a result, here’s what typically occurs.
Consumers (with low amounts of data and low change rates) can use software that runs on a PC, captures the daily changes, and sends them to the cloud for a flat yearly fee.
Small businesses (a few hundred gigabytes to a few terabytes), on the other hand, run software that backs up data to a disk appliance to keep a local set of backups onsite. Copies are then sent to the cloud provider as a second copy or for disaster recovery purposes. Sometimes short-term backups are kept onsite, and longer-term backups called “versions” or “history” are kept offsite. The organization pays by the data amount stored per month. Over three years, this is more expensive than running all backups in-house, but if you don’t have the staff, this can certainly get the backup monkey off your back.
Above a few terabytes, the math does not work due to the amount of bandwidth required from the organization’s data center to the internet. The cost of the bandwidth far exceeds running your own backups. This is true even if you use data deduplication and only move changed bytes or blocks. That’s because backups occur every night and, therefore, you need enough bandwidth to complete the transfer into the cloud before the next backup begins.
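As a rough, hypothetical illustration of the bandwidth arithmetic behind that claim, the sketch below estimates the sustained throughput needed to push one night’s changed data through a fixed backup window. The environment size, change rate, window length, and deduplication ratios are assumptions chosen for illustration, not figures from the article.

```python
# Back-of-the-envelope estimate: what sustained WAN bandwidth is needed to
# finish a nightly cloud backup before the next backup window opens?
# All inputs are illustrative assumptions, not figures from the article.

def required_mbps(changed_tb: float, window_hours: float, dedup_ratio: float = 1.0) -> float:
    """Sustained megabits per second needed to move `changed_tb` terabytes
    of changed data (reduced by `dedup_ratio`) within `window_hours`."""
    bits_to_send = changed_tb * 1e12 * 8 / dedup_ratio   # TB -> bits, after dedup
    window_seconds = window_hours * 3600
    return bits_to_send / window_seconds / 1e6            # bits/s -> Mbit/s

if __name__ == "__main__":
    # Hypothetical 50 TB environment, 5% nightly change rate, 8-hour window.
    changed_tb = 50 * 0.05                                # 2.5 TB of changed data per night
    for ratio in (1, 5, 10):
        mbps = required_mbps(changed_tb, window_hours=8, dedup_ratio=ratio)
        print(f"dedup {ratio:>2}:1 -> ~{mbps:,.0f} Mbps sustained")
```

In this hypothetical, even a 10:1 deduplication ratio still calls for roughly 70 Mbps of dedicated upstream bandwidth every night, which is where the cost argument above starts to bite for larger environments.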
Organizations with more than 5TB, with a few exceptions, either run their own backup application and back up to tape onsite and use tape offsite; back up to disk onsite and use tape offsite; or back up to disk onsite and replicate to disk offsite. For retention shorter than four weeks, straight disk is typically used. If retention spans weeks to months to years, then disk-based backup appliances with data deduplication are deployed. Data deduplication stores only the unique bytes or blocks from backup to backup in order to use the least amount of disk possible, greatly lowering the cost compared with straight disk.
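For readers unfamiliar with the mechanism, here is a minimal, toy sketch of the block-level deduplication idea described above: each backup is recorded as a manifest of block hashes, and a block’s bytes are stored only the first time they are seen. The class and block size are hypothetical; production appliances use variable-length chunking, persistent indexes, and byte-level techniques well beyond this.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for simplicity; real appliances vary chunk sizes

class DedupStore:
    """Toy block-level deduplication store: each unique block is kept exactly once."""

    def __init__(self):
        self.blocks = {}  # sha256 hex digest -> block bytes

    def ingest(self, data: bytes) -> list:
        """Record a backup as a manifest of block hashes, storing only unseen blocks."""
        manifest = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # store each unique block once
            manifest.append(digest)
        return manifest

    def restore(self, manifest) -> bytes:
        """Rebuild the original data stream from its manifest of hashes."""
        return b"".join(self.blocks[d] for d in manifest)

if __name__ == "__main__":
    store = DedupStore()
    monday = b"A" * 8192 + b"B" * 4096
    tuesday = b"A" * 8192 + b"C" * 4096   # mostly unchanged since Monday's backup
    m_mon, m_tue = store.ingest(monday), store.ingest(tuesday)
    assert store.restore(m_tue) == tuesday
    print(f"{len(m_mon) + len(m_tue)} blocks backed up, {len(store.blocks)} unique blocks stored")
```

Because only the blocks that change from night to night add new storage, retention measured in months or years costs far less disk than keeping full copies would.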
So, in summary, the answer to this highly debated topic is: it depends. If you are a consumer or a small business with a few terabytes of data or less, you can absolutely use the cloud if you don’t want to operate your own backups. In a three-year, side-by-side comparison, it will cost more to use the cloud. However, avoiding the aggravation of running your own backups may be worth it.
If your data is multiple terabytes to tens or even hundreds of terabytes, the cost to ramp up internet bandwidth over three years will far exceed the cost of running your own backups. It’s anyone’s guess when backing up data at that scale to the cloud will make financial sense.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

4:33p
Cisco Acquires Data Center SDN Startup Embrane
Cisco said Wednesday it will acquire Embrane, a Santa Clara, California-based software-defined networking firm. Cisco made a strategic investment in the company last year, leading a $14 million Series C funding round. The acquisition is expected to close at the end of the current quarter. Terms of the deal were not disclosed.
Embrane’s platform provides lifecycle management for application-centric network services, delivering Layer 4-7 network services and helping bridge a larger transition to SDN.
Embrane will be integrated into Cisco’s Nexus data center switch portfolio and extend Cisco’s Application Centric Infrastructure capabilities. ACI is Cisco’s proprietary approach to data center SDN.
SDN helps make the network and its parts fluid. ACI is aimed at making the network responsive to application needs automatically. ACI communicates high-level application requirements to intelligent network hardware, which self-configures accordingly.
There has been rising activity in Network Function Virtualization in particular, which replaces physical appliances with virtualized network functions like firewalls, load balancers, intrusion detection, and WAN accelerators. Embrane’s Heleos platform deploys software-based appliances such as firewalls across a pool of commodity servers.
“With agility and automation as persistent drivers for IT teams, the need to simplify application deployment and build the cloud is crucial for the data center,” Hilton Romanski, Cisco’s head of corporate development, wrote in a blog post.
Embrane will join Cisco’s Insieme business unit, the foundation of which was the 2012 acquisition of Insieme, a data center SDN startup Cisco itself founded. Insieme was acquired as part of the ACI effort, which began that same year in response to a growing SDN opportunity and was formally launched in 2014.
Embrane’s technology will feel at home at Cisco, given that the two companies have worked together extensively and Embrane’s founders formerly worked for Cisco. Since the funding round, Embrane has added lifecycle management for a variety of Cisco products and expanded support for other third-party systems.
“With this acquisition, we continue our commitment to open standards through programmable APIs and multi-vendor environments,” wrote Romanski. “More importantly, we remain committed to the rich ecosystem of partners and customers in production through the automation of network services, cloud and system management orchestration, and automation stacks.”

5:04p
Security Startup Tanium Lands $52M from Andreessen Horowitz
Following an initial $90 million financing round last summer, security and systems management company Tanium has secured an additional $52 million investment from Andreessen Horowitz. The round was led by former Microsoft executive and Andreessen Horowitz partner Steven Sinofsky, and the $142 million total investment in Tanium represents the VC firm’s largest single position.
The Berkeley, California-based security startup has taken money from Andreessen Horowitz since launching in 2007. It claims to have quadrupled its total billings year over year in 2014 and lists 50 of the Fortune 100 companies as customers.
Also this week, Tanium announced new enhancements and modules in a version 6.5 release of the Tanium Endpoint Platform.
Sinofsky said Tanium’s “capability to navigate, interrogate, and act on even the largest enterprise network in seconds is the magic that fires up customers – networks comprised of millions of endpoints made up of PCs, Servers, VMs, and embedded devices. This 15-second capability is the foundation of Tanium magic and is unprecedented for large scale environments.”
Tanium co-founder and CTO Orion Hindawi previously ran software company BigFix, which was sold to IBM to become its endpoint manager. Sinofsky boasts about the leadership talent at Tanium and their ambition for innovating the product for modern enterprise needs. He notes that “systems management is now an integral part of incident detection and response.” Sinofsky adds, “conversely, security and protection require full knowledge and control of end-points. Neither set of existing tools deployed in most environments is up to the task.”
As a frontline and first-response platform, he believes, the security startup will meet two modern criteria: providing 15-second information on all endpoints and being able to remedy problematic situations immediately.

5:23p
21Vianet to Operate Azure Cloud in China into 2018
Microsoft and 21Vianet, a provider of carrier-neutral data center services in China, have extended their cloud partnership. Under the new agreement, 21Vianet will continue to offer Microsoft’s cloud services in China, including Azure and Office 365, for an additional four years.
21Vianet acts as the operating entity for Azure, hosting Microsoft’s cloud infrastructure services and Office 365 in its data centers and handling customer relationships. The renewal agreement extends this setup to 2018 and suggests the partnership is bearing fruit for both companies.
Microsoft first partnered with 21Vianet a few years ago in a bid to extend its cloud in China via a local partner. China represents a huge growth market for Chinese and foreign cloud providers, but navigating the market isn’t simple for an outsider.
In November 2012, Microsoft, 21Vianet, and the Shanghai Municipal Government announced a strategic partnership agreement in which Microsoft licensed the technology know-how and rights to operate and provide Office 365 and Windows Azure services in China to 21Vianet.
“Since 2012, teams from Microsoft and 21Vianet have worked diligently and seamlessly in the preparation, public preview, and commercial launch of both Windows Azure and Office 365 services in China,” said Josh Chen, CEO of 21Vianet, in a press release.
Several Chinese tech giants, including software developer Kingsoft, invested $300 million in 21Vianet last year. The investment is going toward expanding its data center footprint in a bid to capture the burgeoning Chinese cloud and data center services market.
Some of those data centers have been built specifically in support of Microsoft. In 2013, it added a 5,000-cabinet facility in support of Azure.
“We remain firmly committed to the Chinese cloud market, and we believe this extended partnership with 21Vianet will serve as a strong foundation for both companies to further contribute to the development of the cloud computing ecosystem throughout China,” said Ralph Haupter, vice president and CEO for Microsoft Greater China, in a press release.
There are several regulatory compliance requirements to meet in order to operate within what many dub “The Great Firewall of China.” Officially called the Golden Shield Project, it is essentially an internet surveillance and censorship system that makes it hard to serve Chinese customers from outside the country.
The market remains a separate entity in need of being served from within, and the typical path for a foreign company has been to partner with a local service provider. Amazon Web Services launched a Chinese region through ChinaNetCenter. Apple stores local iCloud data in China Telecom data centers.
Several new data center builds have been announced in China recently, including by NTT and CenturyLink, which has launched a data center in Shanghai.
In other big recent news, a Distributed Denial of Service (DDoS) attack originating in China hit the popular open source code repository GitHub. The DDoS attack against GitHub followed an attack on Greatfire.org, which came after a Wall Street Journal article about anti-censorship groups using U.S. cloud computing services to circumvent blocking by Chinese authorities.

6:02p
OpenStack Solutions Vendor Mirantis Joins Cloud Foundry Foundation
The leadership team at Mirantis, an OpenStack solutions vendor, sees a lot of simultaneous adoption of the open source cloud infrastructure software and Cloud Foundry, the open source Platform-as-a-Service technology with roots at VMware and, later, Pivotal.
As part of a first step toward establishing closer relationships with different providers of Cloud Foundry distributions, Mirantis today announced that it has joined the Cloud Foundry Foundation, which now oversees the open source PaaS project.
Boris Renski, chief marketing officer for Mirantis, says that Mirantis has taken note of the fact that companies such as IBM and HP are bundling OpenStack and Cloud Foundry together. As a provider of an OpenStack distribution, Renski says Mirantis wants to be in a position to better partner with Cloud Foundry vendors whose solutions will need to be integrated with OpenStack.
Differentiation is more crucial than ever for companies like Mirantis, which has based its entire business on providing OpenStack solutions. As yesterday’s sudden announcement of the demise of Mirantis rival Nebula illustrated, it’s not easy to survive in the active but still nascent market around OpenStack.
For Mirantis, getting better aligned with Cloud Foundry is one way to have more ammunition than OpenStack alone without making a big investment in development or pivoting.
“We don’t want to build a distribution of Cloud Foundry,” says Renski. “Our focus is going to be on OpenStack only.”
As part of that effort, Renski says, Mirantis plans to work toward unifying, for example, the internal authentication schemes used in both OpenStack and Cloud Foundry.
Going forward Renski says that convergence is going to be even more pronounced as IT organizations begin to embrace application containers. While there is some debate over where containers should most optimally run, Renski says it’s probable that most containers in the cloud are going to be deployed in a PaaS environment.
In general, Renski notes that while many organizations that have deployed OpenStack using raw bits have run into issues running OpenStack at scale, Mirantis has hardened its distribution of OpenStack in the form of over two dozen configurations that have been proven to be able to scale.
Just like any other distribution of open source code, Renski says that many organizations fail to appreciate the work that goes into creating a commercial-grade implementation of open source software.
The degree to which IT organizations opt to join OpenStack and PaaS environments at the hip remains to be seen. In fact, Renski notes there are OpenStack projects underway that aim to create a PaaS environment that runs directly on top of OpenStack.
There’s no doubt that both OpenStack and PaaS environments are foundational components of a modern data center environment. But as larger players continue to combine both technologies under a single branding initiative it’s clear that organizations that embrace one technology are likely to be rapidly exposed to the other.
From the perspective of Mirantis, however, that doesn’t necessarily mean that OpenStack and PaaS platforms need to come from the same vendor.

7:00p
US Government Lacks Updated Policy on Disclosing Zero-Day Vulnerabilities 
This article originally appeared at The WHIR
Despite US government claims that it has “reinvigorated” its vulnerability disclosure policies, the newest relevant policy document held by the Office of the Director of National Intelligence (ODNI) dates from 2010. A lawsuit filed by the Electronic Frontier Foundation (EFF) to follow up on a freedom of information request revealed this week that the ODNI holds no Vulnerabilities Equities Process (VEP) documents newer than that.
In an April 2014 blog post, White House Special Assistant to the President and Cybersecurity Coordinator Michael Daniel explained elements of the government’s vulnerability disclosure policy.
“This spring, we re-invigorated our efforts to implement existing policy with respect to disclosing vulnerabilities – so that everyone can have confidence in the integrity of the process we use to make these decisions,” Daniel wrote. Later in the post he added: “We have also established a disciplined, rigorous and high-level decision-making process for vulnerability disclosure. This interagency process helps ensure that all of the pros and cons are properly considered and weighed.”
The wording of the above statements does not necessarily mean that there is a new policy at all, but rather that the implementation of the old policy has been changed. An EFF post relating the results of its fact-finding suit, however, disputes that claim. Reports on the annual CIA hacker “jamboree,” where software vulnerabilities and exploits are shared, suggest that implementation of the VEP is far from vigorous, according to the EFF.
In November, Daniel also told WIRED that the government does not have a large stockpile of undisclosed zero-day vulnerabilities.
An official NSA denial of a Bloomberg report that the NSA was aware of the Heartbleed OpenSSL vulnerability is worded even more strongly than Daniel’s claim.
“In response to the recommendations of the President’s Review Group on Intelligence and Communications Technologies, the White House has reviewed its policies in this area and reinvigorated an interagency process for deciding when to share vulnerabilities. This process is called the Vulnerabilities Equities Process. Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities,” the NSA said.
While the EFF is due to receive documents from the NSA in the next three weeks, an interagency process that had truly been reinvigorated would surely have generated some notice to the ODNI of a change to the VEP. In the absence of any such document, the EFF calls the VEP “vaporware.”
WIRED notes that the ODNI documents provided to the EFF appear also to show that the use of zero-day vulnerabilities to hack Iranian networks came before any vulnerability policy had been created at all.
The US government statements on vulnerability disclosure may have reassured some portion of the public during the height of media attention on software vulnerabilities. However, the EFF and industry stakeholders may now see them more as legalese setting up plausible deniability.
7:30p
CyrusOne Buys its Third and Biggest Austin Data Center
CyrusOne has acquired a large data center in Austin, continuing its expansion in its native Texas. The company has bought an additional powered shell in Austin’s Met Center, which will become its third and largest facility in the Austin market. The data center is located minutes from Austin-Bergstrom International Airport. Financial terms were not disclosed.
The powered shell is close to 175,000 square feet with room for 120,000 square feet of colocation space. At full build-out, it will have up to 12 megawatts of critical power and feature over 25,000 square feet of office space.
Carrollton-based CyrusOne will apply its “Massively Modular” design approach to build out the facility incrementally. The first phase of construction will deliver up to 60,000 square feet with 6 megawatts of critical load.
Last August, the company expanded its second Austin data center with a second data hall. CyrusOne also acquired 22 acres of land in Austin in 2014, following land acquisitions in San Antonio and Houston. Other Texas expansions last year included projects in Dallas and Houston.
Nationally, its recent data center builds are in Chandler, Arizona, and in Northern Virginia.
The new Austin data center will join the company’s National Internet Exchange of connected data centers. The National IX platform provides interconnection to other CyrusOne data centers in Texas and beyond, making it easier for customers to expand across its footprint.
CyrusOne has 25 carrier-neutral data centers worldwide, with a large concentration in Texas. It specializes in high density needs and counts the oil and gas industry as one of its biggest verticals in the state. It has over 650 customers, many of which are Fortune 1000 companies.
“Based on current and projected customer demand, it was essential to expand in this market. We’ve been extremely successful and have seen a tremendous amount of growth in Austin,” said John Hatem, senior vice president, data center design and construction, CyrusOne, in a press release.