Data Center Knowledge | News and analysis for the data center industry
Wednesday, September 14th, 2016
12:41a |
At Data Center World, Colos Covet the Enterprise
While they haven’t had much trouble selling to tech companies, many of the more traditional enterprises – healthcare and biotech firms, banks, universities, and so on – are harder for colocation providers to convince.
The colocation trends panel at Data Center World Tuesday spent a big chunk of the session’s allotted time making the case that owning and operating data centers is something enterprises are better off leaving to the service providers sitting on the panel (Keystone NAP, Vantage, CyrusOne, and ViaWest).
As Liz Cruz, associate director with the market research firm IHS and the panel’s moderator, pointed out, sales of hardware and infrastructure equipment into data centers are declining, while colocation providers’ revenue is growing at double-digit rates, a sign that more and more companies are choosing outsourcing over running their own data centers.
Still, when she asked people in the audience to raise their hands if their companies had at least two-thirds of their IT capacity in colocation data centers, only a handful did. It’s cloud providers, far more than enterprises, who are driving much of the revenue growth for colo companies, even as enterprise spending on in-house data centers slowly wanes. “Cloud providers are now the largest tenant of multitenant data center facilities,” Cruz said.
The question panelists were most consistently asked (and had the most trouble with) was, “What would be the reasons to not outsource to colo providers?”
If your business revolves around your data center – a SaaS company, for example – you may choose to build your own, was one answer (given by ViaWest’s Dave Leonard). It might also make sense if you know you’ll need a number of data centers around the world, he said. Neither of those answers is definitive, of course, which is why the panelists had difficulty with the question. There are plenty of SaaS companies that outsource data centers (heard of Salesforce?), and plenty of data center providers that will gladly set you up with an international footprint.
The one category of users that truly has to build its own facilities is the internet and cloud giants. They provide applications at unprecedented scale and have figured out that at their scale it makes more sense to customize infrastructure to fit their specific needs. Still, most of them supplement the hyperscale facilities they build around the world with leased footprint.
For colocation providers, these hard-nosed enterprise users are not only a big growth opportunity; winning them over is a matter of longevity. The race to capture the hearts and minds of the enterprise is on, but colo providers aren’t only racing each other. They’re also racing Amazon, Microsoft, and a few others.
Most colo providers have embraced public cloud as reality and have been using their ability to provide direct network access to cloud services from their facilities as a way to attract enterprises. The pitch is hybrid cloud: a physical footprint the customer fully controls, supplemented with public cloud services, all under one roof in a colocation facility.
For the big cloud providers, hybrid cloud is a necessary evil. They have found that, at least for now, they can’t simply dismiss the idea, but they would surely love to see a future where hybrid cloud is no longer necessary. That scenario would leave colo providers with a much smaller role to play.
8:49p |
Would You Connect Your Data Center BMS to the Internet?
Whether you believe the Internet of Things is a new phenomenon or simply a new term for something that’s been happening all along, the surge of connected devices will have an effect across big swaths of the data center industry.
Data storage and compute capacity is being deployed in “edge” locations, which are essentially places that didn’t have substantial compute capacity before, and capacity needs are growing in places that already have some – colocation data centers where customers’ IoT applications are creating new workloads. The list also includes telco central offices and even extra data center space that some companies ended up with after big virtualization and consolidation projects and now lease out, Carrie Goetz, global director of data center services at Siemon, said.
Goetz sat on a panel titled “IoT and Its Impact Everywhere” Wednesday at Data Center World, which is taking place this week in New Orleans. Proliferation of edge data center capacity is already happening, much of it driven by IoT applications, she said, and there seems to be little discrimination between the types of environments this capacity is being deployed in. Companies are deploying data centers “kind of anywhere you can stick one,” she said.
To be clear, the meaning of the expression “edge data center” is different in this context from its meaning when we’re talking about data centers where digital content is cached for delivery to end users – the type of data centers EdgeConneX specializes in. In this context, edge data centers are pods of compute and storage capacity of various sizes (usually on the smaller side) that collect and analyze data from nearby devices.
These edge data centers are part of the solution to one of the biggest IoT issues the industry is struggling with: how to design the network that links IoT devices so that each device remains useful as a connected device without becoming a potential entry point for hackers into corporate networks.
Should these devices be connected to the internet? Should they be connected to a company’s WAN? Or should the data they produce have no physical path outside the local network, and if that’s the case, are we still talking about IoT and all its benefits?
Chris Crosby, CEO of Compass Datacenters, who also sat on the panel, doesn’t buy IoT’s newness. “You’re talking about rebranding,” he said, explaining that IoT is a “brand in search of a problem.”
Connected devices have been around all along. Take data center BMS (Building Management System) for example – a primarily software system that monitors and controls a range of devices, including things like chillers and backup generators. Few enterprises will agree to connect this system to the internet, exposing it to cyberattacks, Crosby said. WAN-connected devices “are the easiest access windows into the enterprise,” he said.
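For illustration only, here is a minimal Python sketch of the kind of deliberately internet-isolated polling Crosby is describing. The device inventory, port number, and address ranges are hypothetical, and a real BMS would speak a protocol such as BACnet/IP, Modbus/TCP, or OPC UA rather than doing a bare socket check.

# Hypothetical sketch: a BMS-style poller that only talks to devices on a
# private management network, never to anything reachable from the internet.
import ipaddress
import socket

# Invented inventory: chillers and a generator on an isolated management VLAN.
DEVICES = {
    "chiller-01": "10.20.0.11",
    "chiller-02": "10.20.0.12",
    "genset-01": "10.20.0.21",
}
MGMT_PORT = 4840  # assumed port for the device's status service

def is_private(addr: str) -> bool:
    """Reject any device address that is routable on the public internet."""
    return ipaddress.ip_address(addr).is_private

def poll_status(name: str, addr: str) -> str:
    if not is_private(addr):
        return f"{name}: SKIPPED ({addr} is not on the private management network)"
    try:
        # Placeholder for a real protocol exchange (Modbus/TCP, BACnet/IP, OPC UA, ...).
        with socket.create_connection((addr, MGMT_PORT), timeout=2):
            return f"{name}: reachable on management network"
    except OSError as err:
        return f"{name}: unreachable ({err})"

if __name__ == "__main__":
    for device, address in DEVICES.items():
        print(poll_status(device, address))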
If you define IoT simply as a series of connected devices, regardless of the type of network they are connected to, BMS fits right in, but not everybody defines it that way.
At Siemon, “we define it as internet-connected edge devices. Period,” Goetz said. At some point, data from an IoT device will be transmitted over the internet, she explained.
And if you pay attention to the way industrial giants are talking about IoT – the likes of GE or Schneider Electric – the internet’s role is crucial. GE’s vision for its industrial IoT cloud platform Predix, which Himagiri Mukkamala, head of engineering for Predix, outlined at another recent conference, is a good illustration. Be it locomotives or jet engines, the company wants to be able to collect data from the thousands and thousands of parts that make up every asset deployed around the world, pool the data in a central repository and analyze it using machine learning algorithms to do everything from predicting when a part may fail to optimizing the movement of goods across a fleet of trains. Assets operating in silos have no role in this vision.
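As a rough, hypothetical illustration of that collect-pool-analyze pattern (this is not Predix’s actual API; all names, readings, and thresholds below are invented), a pooled telemetry repository with a simple fleet-wide outlier check might look like this in Python:

# Toy illustration of the pattern described above: readings from many parts
# across many assets are pooled centrally, then a simple statistical check
# flags parts whose behavior drifts far from the fleet norm.
from collections import defaultdict
from statistics import mean, stdev

# Pooled repository: part_id -> list of (asset_id, sensor_reading)
repository = defaultdict(list)

def ingest(asset_id, part_id, reading):
    """Collect one sensor reading from a part on a deployed asset."""
    repository[part_id].append((asset_id, reading))

def flag_failure_risks(z_threshold=2.5):
    """Return (asset, part) pairs whose readings sit far from the fleet average."""
    flagged = []
    for part_id, samples in repository.items():
        values = [v for _, v in samples]
        if len(values) < 3:
            continue  # not enough fleet data to compare against
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue
        for asset_id, value in samples:
            if abs(value - mu) / sigma > z_threshold:
                flagged.append((asset_id, part_id))
    return flagged

# Example: vibration readings (arbitrary units) for the same bearing across a fleet.
for i in range(8):
    ingest(f"locomotive-{i:03d}", "bearing-7", 1.0 + 0.01 * i)
ingest("locomotive-099", "bearing-7", 9.7)  # outlier: likely wear
print(flag_failure_risks())  # -> [('locomotive-099', 'bearing-7')]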
Often referred to as Industrial IoT, this is one extreme case of IoT. There are also connected cars, household appliances, wearable devices, etc. And the answer to how networks will be architected to support these use cases will depend on the individual case, Adnan Akbari, senior marketing manager at Siemens Building Technologies, who was also on Wednesday’s panel, said.
For every organization implementing IoT, it will be important to outline the goal of the initiative and then figure out the best way to reach it. “It’s most important to start with what those goals are,” he said. That is what should guide how exactly those connected devices will be connected. Every organization will have to examine its model and weigh the benefits of connecting a device to the internet against the risks.
In the end, there won’t be a single answer to this question, in Akbari’s opinion. There will be a mix of deployment models, many of them hybrid, where some data will be transferred over WANs while some data will not leave closed local networks, he said.
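A hypothetical sketch of that kind of hybrid split might look like the following: an edge gateway keeps every reading on the local network and forwards only an aggregated, policy-approved subset over the WAN. The metric names and the forwarding policy are invented for the example.

# Hypothetical hybrid-deployment sketch: raw readings stay local; only
# whitelisted, aggregated metrics are queued for transfer over the WAN.
from statistics import mean

# Policy (assumed for the sketch): which metrics may leave the site at all.
WAN_ALLOWED_METRICS = {"energy_kwh", "ambient_temp_c"}

local_store = []   # stays inside the facility's network
wan_outbox = []    # would be shipped to a central/cloud service

def handle_reading(metric, value):
    """Keep every reading locally; queue only policy-approved metrics for the WAN."""
    local_store.append((metric, value))
    if metric in WAN_ALLOWED_METRICS:
        wan_outbox.append((metric, value))

def summarize_outbox():
    """Aggregate before sending, so raw device-level detail never crosses the WAN."""
    summary = {}
    for metric in WAN_ALLOWED_METRICS:
        values = [v for m, v in wan_outbox if m == metric]
        if values:
            summary[metric] = round(mean(values), 2)
    return summary

handle_reading("energy_kwh", 12.4)
handle_reading("ambient_temp_c", 22.1)
handle_reading("door_open", 1.0)          # operational detail: local only
handle_reading("chiller_valve_pos", 0.6)  # local only
print("sent over WAN:", summarize_outbox())
print("kept locally:", len(local_store), "readings")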
10:41p |
Emerson Data Center Unit Execs “Bullish” on Post-Spinoff Freedom
Executives at Emerson Network Power, the business unit Emerson Electric has agreed to sell to a group of investors led by a private equity firm, expect the unit to have more freedom to pursue longer-term plays, such as building entire data centers for customers, once the acquisition is closed.
“We feel as though we’re going to have an opportunity to be more nimble, to be more responsive than we might’ve been in the past, and to take on newer opportunities that might be neutral in the short term but very positive in the long term,” Peter Panfil, VP global power at Emerson Network Power, said in an interview with Data Center Knowledge on the sidelines of Data Center World in New Orleans this week.
Scott Barbour, executive VP of the soon-to-be private Emerson, is “very bullish on the acquisition,” Panfil said. “In many cases, companies sometimes decide against their best interests in the long term in order to make sure that the short-term returns make people happy.”
Emerson Electric announced in August an agreement to sell the unit to Platinum Equity and other investors for $4 billion. The companies expect to close the acquisition by the end of this year. Last year, after a period of weak earnings growth, Emerson said it would get rid of underperforming businesses.
Without the pressure to deliver immediate results, Emerson Network Power will be in a better position to pursue data center projects with a wider scope. Just five years ago, it did not have the capability it has today to build entire data centers for customers from the ground up. Over the past few years, it has built facilities for Facebook in Sweden and for the telco nbn in Australia, and has built dozens of submarine cable landing stations around the world. There is opportunity to do more, which will be a major focus once the spinoff is complete.
While it hasn’t been impossible to pursue such projects in the past, it has been more difficult as a unit of a manufacturing company. Agreeing to build an entire data center means taking on responsibility manufacturers don’t normally accept, Panfil explained. “A manufacturer for the most part wants to build equipment and ship it.”
Builds like the one Emerson did for Facebook in Luleå, Sweden, don’t diverge too much from that inclination. Dubbed the Rapid Deployment Data Center, it was built primarily from components manufactured in factories, including components of the building structure itself, and shipped to the site for assembly.
On most data center projects, however, Emerson’s involvement has been limited to supplying electrical and mechanical infrastructure components, supporting consulting and specifying engineers, construction managers, and commissioning teams. “We had all the pieces except the implementation piece,” Panfil said.
The soon-to-be-independent company wants to become more of a full-solution provider, which is what companies increasingly prefer to piecing together complex data center projects one contract at a time.
Panfil likes to draw a parallel between this evolution and the evolution of the way we get bread. “We used to grow wheat and grind the flour and bake the bread,” he said. “Now we buy the bread, but tomorrow what you’d like to do is … buy the entire sandwich.”