Data Center Knowledge | News and analysis for the data center industry

Monday, September 12th, 2016

    12:00p
    Why Michael Dell is Smiling

    Pat Gelsinger thinks the days of the enterprise data center as we know it are numbered.

    “Increasingly, companies want to get out of the job of building their own data centers, and operating their own data centers, providing a huge opportunity for service providers,” the CEO of VMware said from the stage at VMworld 2016 in Las Vegas late last month, in one fell swoop declaring the principal focus of this very publication obsolete.

    “Highly instrumented, modern, efficient, cloud-scale data centers,” is how he went on to describe service provider data centers of the future: bigger, more flexible, but fewer in number.  “And 2016 is the crossover year when that becomes the dominant way that data centers are built and operated.”

    Certainly the companies that actually build and operate data centers will have something to say about that last point.  Any crossover due to take place has only four months left in the year to make its case.  But Gelsinger — a man who previously spent 30 years as point man for Intel’s strategy to build the x86 architecture into the center of the enterprise — is now betting his career on the notion that most enterprise data centers will be defined digitally, not physically.

    And betting the entire future of his own company to back Gelsinger’s notion is someone else whose life story centers on building x86 boxes into the core of business operations: Michael Dell.  Last week, when the merger between Dell Inc. and VMware parent EMC Corp. finally closed, Mr. Dell became the chief of the company responsible for erasing all the gaps between the boxes whose manufacturing and delivery principles he himself pioneered.

    So why isn’t Dell preparing himself, and the rest of the world, to refashion VMware under what we used to call “the Dell model”?

    “The open ecosystem of VMware is absolutely critical to its success,” said Dell from the same VMworld stage.  “So we’re only going to continue to encourage that.  That hasn’t changed, and won’t change.”

    It was the message that both technologists and investors attending the show wanted to hear most, even as measurable changes in the enterprise data center market place new stresses and constraints on VMware, separate from Dell.  While Dell Technologies will continue to be a private entity, as it has been since 2013, VMware will be the only facet of the post-merger behemoth whose capital is tradable through common stock.  That makes VMware the component of Dell most sensitive to changes in investors’ moods about the infrastructure market.

    And here is where VMware has, of late, failed to prove itself.  Up until the acquisition announcement, VMW was trading at about 40 percent of the value of its 2014 highs.  The vSphere brand is perceived as declining, as organizations of all sizes move more of their workloads into the public cloud, where they end up being managed by the likes of Amazon and Microsoft.

    What has to change is VMware’s perception of what its customers perceive the enterprise data center ecosystem to be.  Put another way, it has to start seeing what its customers are seeing.

    See also: Virtustream, VMware to Vie for Hybrid Cloud After Dell Reorg

    The Variable Footprint

    Infrastructure is not truly scalable infrastructure, in customers’ minds, if they can’t extend their workloads into public clouds.  This little deficiency had grown so large that even investment analysts were noting it in their reports.  So if there’s any single takeaway from just about any five-minute sampling of VMworld 2016, it’s that VMware is redefining NSX from a software component into a service.

    The reformed NSX comes complete with its own Web portal, extending vSphere’s virtual infrastructure footprint into public cloud territory — Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM Cloud (VMware’s new preferred partner), and certainly vCloud Air.

    “We’re taking existing on-prem products, like classic products we have today — and you can still install them on-premises, and many customers want it that way — and we enable them to manage workloads across clouds,” said Guido Appenzeller, VMware’s chief technology strategy officer, during a press conference at VMworld.  vRealize Automation and NSX are being configured so that the same network policies that apply to workloads on-premises travel with those workloads as they’re migrated into public cloud space.
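
    To picture what “policies travel with the workload” means in practice, consider the rough Python sketch below.  It does not use any real VMware, NSX, or vRealize API; the Workload, NetworkPolicy, and migrate names are invented for illustration only.

    ```python
    # Hypothetical illustration of "the network policy travels with the workload."
    # None of these classes correspond to a real VMware or NSX API.
    from dataclasses import dataclass


    @dataclass
    class NetworkPolicy:
        name: str
        allowed_ports: set[int]       # e.g. {443} for HTTPS-only
        allowed_sources: set[str]     # CIDR blocks permitted to reach the workload


    @dataclass
    class Workload:
        name: str
        policy: NetworkPolicy         # the policy is part of the workload's definition
        location: str = "on-prem"


    def migrate(workload: Workload, target_cloud: str) -> Workload:
        """Move a workload; the attached policy moves with it unchanged."""
        workload.location = target_cloud
        return workload


    web = Workload("billing-web",
                   NetworkPolicy("https-only", {443}, {"10.0.0.0/8"}))
    migrate(web, "aws-us-east-1")
    # The enforcement point changes (on-prem virtual network vs. a cloud gateway),
    # but the rules the workload carries do not.
    print(web.location, web.policy.allowed_ports)
    ```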


    “Enterprises don’t need different, competing environments,” Gelsinger told one analyst during the conference.  “They want more homogeneity, so they can do their innovations at higher levels.  Having heterogeneity in their environment means having two stacks; two tools; different, fractured infrastructure.  We’re seeing customers go exactly the opposite direction.”

    More to the point, enterprise data center operators have been doing everything they can to maintain the sanctity of their application environments, as their data centers change around them.

    Disruption or Security: Choose One

    To drive the point home that VMware’s proverbial weather vane has been effectively recalibrated, the company presented reporters and analysts with two major NSX customers at the show.  Both are in the financial services industry; neither utilizes containerization to any degree.  Together, they represent VMware’s ideal customer: an organization with plenty of resources, but also with a boatload of legacy assets, seeking a way to dictate the pace of its own expansion without rendering older assets obsolete too soon.

    “Unlike a traditional, physical firewall model, you need to re-architect the underlying network,” declared Brandon Hahn, a solutions architect with Wisconsin-based West Bend Mutual Insurance.  “And you need to know ahead of time what your final state needs to look like, because otherwise, you’re going to be bringing in additional devices, and it’s very expensive and time consuming to retrofit.”

    Hahn was discussing the challenge of getting network virtualization into a financial services organization, and it’s a significant one.  Management may, at some point, become reasonably convinced of the gains a virtualized network infrastructure can deliver — measurable efficiencies, shortened development cycles, better server utilization.  Nevertheless, the initial argument and the value proposition for essentially gutting the foundation of the organization’s entire IT infrastructure and substituting a kind of nebulous construct are not tailor-made for the CIO level.  The problem is easy enough to frame, but the solution still sounds like science fiction as far as enterprise data centers are concerned.

    “With a technology like NSX, we can go in and say, we’re building a network security platform,” said Hahn, revealing the sugar coating that, at least in his case, ended up making the medicine digestible.  (In, as VMware might say, a most delightful way.)  “Then say, three years from now, we bring in a brand new application with a different architecture that we had not planned on.  We can still apply a security model to that without re-architecting the underlying network — which is huge, especially with Docker, containers, Photon.

    “We don’t know where we’re going to be from a development cycle [standpoint] in three years,” Hahn continued.  “But we know that the framework that we’ve deployed, from a security perspective, allows us to secure that going forward.”

    Brian Irwin, a technical program manager with Seattle-based Washington Federal, told journalists it was security that sold his firm on NSX as well — specifically, the capability it gives administrators to microsegment the network, subdividing it into separate nets.

    Left to right: Brandon Hahn, Solutions Architect, West Bend Mutual Insurance; Dr. Rajiv Ramaswami, EVP/GM, Network & Security Business Unit, VMware; Brian Irwin, Technical Program Manager, Washington Federal

    “In a traditional model, you’re not going to inspect traffic within a Web tier, or within an app tier, or within a [database] tier.  And for us, we were able to microsegment the tiers, so if you had a hacker get in at the Web tier on Web 1, they’re not necessarily going to be able to move laterally to Web 2, Web 3, Web 4.  We’re just trying to make it as hard as possible for an incursion to happen.”

    NSX enables any system of applications to perceive just the network it needs, and nothing more.  That limits any user’s access — including a malicious user’s — to the secluded portion of the network containing the application that granted access in the first place.  That side benefit makes for an easily digestible use case for upper management, who might not otherwise understand the benefit of, say, projecting a pre-existing data warehouse as a persistent storage container for an application running under Docker.
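
    For readers who want the idea in concrete terms, here is a minimal, illustration-only sketch of the microsegmentation rule Irwin describes, written in Python rather than in any NSX configuration language: traffic is allowed only along the web-to-app and app-to-database paths, and hosts within the same tier cannot reach one another.

    ```python
    # Illustration only: a toy microsegmentation rule set, not an NSX configuration.
    ALLOWED_FLOWS = {
        ("web", "app"),   # web tier may call the app tier
        ("app", "db"),    # app tier may call the database tier
    }


    def is_allowed(src_tier: str, dst_tier: str) -> bool:
        """Deny everything not explicitly allowed, including lateral
        movement between hosts inside the same tier."""
        return (src_tier, dst_tier) in ALLOWED_FLOWS


    # A compromised host on Web 1 trying to reach Web 2 is blocked:
    assert not is_allowed("web", "web")
    # The normal request path still works:
    assert is_allowed("web", "app") and is_allowed("app", "db")
    ```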

    The Bridge

    The other hard-to-swallow value proposition is hyperconvergence, with which NSX is also bound up.  It’s easy to declare hyperconvergence a hot technology.  But within real-world, pre-existing data centers today, compute, storage, network fabric, and memory capacities are not being pooled together in homogeneous streams.

    “Look, there’s been tremendous growth in converged and hyperconverged, exactly for the reason that customers want that easy-to-adopt model,” Michael Dell said, speaking perhaps with a note of extended optimism.  “This plays very much into what we see as big pockets of market demand.”

    It’s a phrase we’ve heard from Dell and from his company before, including in the realm of virtualization.  Four years ago, Dell acquired Wyse, with its terminals and its virtual desktop technologies, first with the intention of leveraging VMware technologies, then later leveraging Citrix XenDesktop.  The Wyse partnership had hopes of making cloud-based applications available to thin client machines as a way of compensating for lower enterprise demand for PCs.  It was Dell’s first play at delivering cloud-based workloads, and it’s fair to say in retrospect that it did not move the needle.

    Now, the only needle in the new Dell Technologies hierarchy that is visible to the typical shareholder will belong to VMware.  Yes, VMware has a strong customer base with its ESX and ESXi hypervisors.  And yes, VMware’s base and Dell’s only overlap partway, giving the new Dell new prospects.

    But the principal task in front of the new company is to build a bridge between the physical infrastructure in which Dell Inc. once excelled, and the virtual infrastructure which will become — for better or worse — the face of Dell Technologies.  From an applications perspective, it’s a bridge between traditional workloads and “modern” workloads.

    “The reality is that these two types of IT are not yet integrated,” IDC program director for software development research Al Hilwa wrote in a note to Data Center Knowledge, “and DevOps and continuous delivery workflows which predominate in one realm are still not typically practiced with traditional workloads.”

    Hilwa believes VMware’s recent leveraging of security as an overall theme helps it to better associate its technology evolution with things the executive suite actually thinks its data center needs.  For example, vSphere Integrated Containers is a means of enabling container infrastructures to co-exist with traditional, database-driven architectures.  That’s a bit esoteric, but restating that theme as a security play makes it sell better.

    “This may be just the right ticket for the traditional side of IT to dip its toes in the container world.  It may even help the two sides of IT get closer together,” writes Hilwa.  “What is important to assess is to what degree will VMware bring its customers to the promised land of digital transformation.  The signs are good, but the company has to continue to invest to bridge the two sides of IT.”

    Technology analyst Kurt Marko believes organizations may remain disinclined to invest in such bridge-building exercises, until this period of disruption that Mr. Dell and others believe we’re in produces some casualty tallies.

    “The innovation gap won’t be apparent to many in traditional IT — or VMware execs if that’s all they listen to,” writes Marko for Diginomica, “until we have a chasm-crossing moment when much smaller, cloud- and API-native companies exploiting asymmetrical technological advantages begin regularly killing large incumbent business.”

    VMware’s future — and, in turn, Dell’s — is now staked on the success of a piece of software whose intended purpose is to make data center resources look the same to various generations of applications.  The challenge today is making its own customers recognize the need for it.  When we asked VMware’s customers the extent to which containerization has already impacted their business, West Bend Mutual’s Hahn responded, “Let me know when you find a commercial, off-the-shelf application that’s containerized, and then we’ll start talking.”

    1:00p
    Emerson Intros Mobile App for Managing Tiny Data Centers

    A server closet in the corner of an office or in a hospital may not be the first thing that comes to mind when you think about data centers, but the reality is that an enormous amount of the world’s IT capacity sits in small rooms just like that.

    They may be small, but they have been growing in importance in recent years. Retailers put more and more IT capacity in their stores to support new digital customer experiences, while organizations like hospitals and universities are consolidating their infrastructure, meaning the footprint that’s left behind becomes more critical, according to JP Valiulis, who oversees marketing for thermal management products at Emerson Network Power.

    “These closets are becoming a little bit more strategically important,” he said.

    As such, they present a new market opportunity for Emerson, which has its Liebert cooling units installed in hundreds of thousands of these IT rooms. The opportunity is to give those existing customers a better tool to manage that infrastructure.

    Today, at Data Center World in New Orleans, Emerson launched a mobile app that enables remote management of small data centers. For $600 or so, you can monitor alarms, temperature, and humidity in hundreds if not thousands of IT rooms from your smartphone, and if the rooms happen to be cooled by Liebert units, you get some sophisticated monitoring capabilities specific to those units.

    The biggest concerns of the people overseeing this infrastructure are cooling capacity (these rooms are hardly ever designed with accurate assessments of future IT requirements, according to Valiulis), maintenance, and monitoring. In many cases, owners of IT closets don’t have a way to monitor basic things like alarms, temperature, and humidity.

    Screenshot of Emerson’s iCOM CMS (Image: Emerson)

    The mobile app, called iCOM CMS, provides one-way communication if you’re outside your organization’s firewall, meaning it only sends you temperature, humidity, and alarm info, but you cannot make changes through the app. Once inside the firewall, however, you can do things like changing set points and reconfiguring alarm notifications remotely, as if you were standing in front of the cooling unit.
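
    In effect, the split Valiulis describes is read-only access from outside the firewall and read-write access from inside.  A minimal Python sketch of that access model follows; the class and method names are invented for illustration and are not part of the actual iCOM CMS software.

    ```python
    # Hypothetical model of "read-only outside the firewall, read-write inside."
    # Not an actual Emerson/iCOM CMS API.
    class CoolingUnitSession:
        def __init__(self, inside_firewall: bool):
            self.inside_firewall = inside_firewall
            self._setpoint_c = 22.0

        def read_status(self) -> dict:
            """Monitoring data is available from anywhere."""
            return {"temperature_c": 23.5, "humidity_pct": 45, "alarms": []}

        def change_setpoint(self, new_setpoint_c: float) -> None:
            """Configuration changes are permitted only on the internal network."""
            if not self.inside_firewall:
                raise PermissionError("read-only access from outside the firewall")
            self._setpoint_c = new_setpoint_c


    remote = CoolingUnitSession(inside_firewall=False)
    print(remote.read_status())        # works: one-way monitoring
    # remote.change_setpoint(21.0)     # would raise PermissionError
    ```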

    The application also provides an easier-than-usual way to connect IT rooms to existing building management systems, which is typically a complicated and expensive process, according to Valiulis.

    The application is not currently integrated with Trellis, Emerson’s full data center infrastructure management (DCIM) suite, but that integration is on the roadmap. “The data from this would be fed right into the DCIM,” he said.

    Another feature coming in the future is the capability to collect, store, and analyze the operational data the application gathers, to help users make informed decisions as they manage their distributed IT infrastructure.

    4:46p
    Internet of Things Starts With Hot Dogs for Deutsche Telekom

    (Bloomberg) — Forget about self-driving cars: In Germany, the Internet of Things starts with hot dogs.

    Certuss Dampfautomaten GmbH, a Krefeld-based maker of steam generators used for anything from cooking sausages to sterilizing medical instruments, used to fly repairmen out to far-flung locations to restart broken-down machinery. Now sensors in the machines send data on 60 items including temperature, steam pressure and flame signal via a SIM card into a cloud operated by Deutsche Telekom AG, flagging potential problems even before they occur.

    “The new system helps us reduce downtime for our customers and can cut servicing costs because we can see the problem before having to send someone there,” Certuss Chief Technology Officer Thomas Hamacher said. That improves planning — service appointments can be scheduled during regular production breaks, for example. “In the long-term, we expect to have to run fewer service missions.”
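
    As a rough illustration of how streamed telemetry can flag trouble before a breakdown, the short Python sketch below raises a warning when readings drift toward, but have not yet crossed, their limits.  The field names and thresholds are invented; the actual system reports some 60 measurements per machine.

    ```python
    # Illustration only: flag a steam generator that is trending toward a fault.
    # Field names and limits are invented, not taken from the Certuss system.
    LIMITS = {"temperature_c": 180.0, "steam_pressure_bar": 10.0}
    WARN_FRACTION = 0.9  # warn at 90 percent of the hard limit


    def early_warnings(reading: dict) -> list[str]:
        warnings = []
        for field, limit in LIMITS.items():
            value = reading.get(field)
            if value is not None and value >= WARN_FRACTION * limit:
                warnings.append(f"{field} at {value} is near its limit of {limit}")
        return warnings


    sample = {"temperature_c": 176.0, "steam_pressure_bar": 7.2, "flame_signal": 1}
    for msg in early_warnings(sample):
        print("schedule service:", msg)  # e.g. during a planned production break
    ```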

    While gee-whiz gadgets like drones and self-driving cars steal the headlines, Deutsche Telekom is trying to seize the Internet of Things opportunity in industries like construction and health care, hooking machines to the web so they can work more efficiently. The Bonn-based carrier is vying for customers with the likes of Vodafone Group Plc and AT&T Inc., who all bank on an industrial IoT market set to be worth $330 billion by 2020, according to Research and Markets projections.

    “It’s a growing market everyone wants to enter, and we want to put our foot down,” said Deutsche Telekom’s Dido Blankenburg, who is responsible for the business that tends to corporate customers like Certuss.


    The Internet of Things, an idea long in gestation, refers to a world of connected devices that goes beyond mobile phones and smartwatches, linking everyday objects to send and receive data.

    Companies, benefiting from falling prices of sensors, wireless transmitters and cloud storage, are trying IoT solutions to improve relations with customers, reduce servicing costs, and develop new products and revenue sources. McKinsey & Co. estimates IoT’s potential economic impact on factories will rise to as much as $3.7 trillion a year in 2025, mainly on productivity improvements including as much as 20 percent in energy savings and as much as 25 percent in potential labor efficiency improvement.

    “With the ability to monitor machines that are in use at customer sites, makers of industrial equipment can shift from selling capital goods to selling their products as services,” McKinsey wrote in the study released last year.

    Deutsche Telekom, which cooperates with Huawei Technologies Co. for its IoT offerings, has won clients including shipping company Deutsche Afrika-Linien, which tracks its containers to ensure they arrive undamaged, as well as makers of forklifts and industrial sewing machines to reduce downtime. It’s also selling an IoT “starter kit” that contains all a company needs to outfit a machine — including sensors, a SIM card with a data plan and access to the carrier’s cloud.

    Early Days

    Deutsche Telekom says it can beat rivals to such contracts because it can offer hardware, operation and related telecoms services in one package. Meanwhile, Germany’s so-called Mittelstand — the small and midsize enterprises that form the backbone of Europe’s biggest economy — remain wary of handing corporate data to foreign companies.

    “We operate under the stringent German data privacy law and if our customer wants his data to remain in Germany, we can guarantee him that,” Blankenburg said. The carrier is expanding its German data center in Biere near Magdeburg to boost cloud capacity there by 150 percent, it said Monday in a statement.

    While Deutsche Telekom is strong in Germany, globally there is no one clear IoT leader in connecting machinery and sending data back to customers in real-time. Major wireless providers such as AT&T and Vodafone are also chasing contracts in a market that’s in its early days. Deutsche Telekom also sells access to its cloud services to customers, where it competes with large providers including Amazon.com Inc. and Salesforce.com Inc.

    Vying in that market with established cloud providers will be tough given a fragmented European regulatory framework that lends natural advantage to U.S. players, according to Bloomberg Intelligence analysts Erhan Gurses and Alex Wisch.

    For now, the business is still small for Deutsche Telekom. While the carrier doesn’t break out IoT sales, its overall cloud business generated 1.4 billion euros in revenue last year — about 2 percent of the total. Moreover, it’s still unclear how steep the growth curve will be. Gartner Inc. in November cut its forecast for IoT devices to 20.8 billion by 2020, down from 25 billion a year earlier, with most of them being consumer devices such as smartwatches.

    None of that is a concern to Certuss. The 59-year-old company, which generates about 20 million euros in sales and invested about 100,000 euros in the new IoT technology, will in the coming weeks deliver its first steam generator to the U.S., to be used in a brewery. Certuss is banking on the new IoT system to allay any concerns its far-away clients may have, Hamacher said.

    “The Americans are very happy that we will be able to monitor the unit in real time from Germany,” Hamacher said. “This can help us generate more business abroad.”

    8:54p
    ViaWest: How Cloud Computing Alters Data Center Design

    There are two principal classes of data center customers.  First, there are service providers, whose consumption patterns are relatively rigid and whose requirements are spelled out in their SLAs.  Second, there are enterprises, whose utilization and resource usage patterns — due in large part to the cloud service delivery platforms upon which they rely — can be all over the map.

    Should a data center provider compartmentalize its operations to serve the needs of both customer classes separately?  Or should it instead implement a single design that’s flexible, elastic, and homogenous enough to address both classes — even if it means deploying more sophisticated configuration management and more hands-on administration?

    “In a multi-tenant world, you design for the latter,” responded Dave Leonard, ViaWest’s chief data center officer.  “And even in a single-tenant world, I’m convinced that it’s the wrong answer to go for the former.”

    Leonard will explain in detail how cloud computing consumption patterns have been affecting data center design this Thursday at the Data Center World Conference in New Orleans.

    Many of the major data center providers in today’s market are inclined to center their design efforts on one big template — for instance, a 10,000-square-foot, single-tenant hall with 1,100 kW of UPS power, he said.  Realistically, it isn’t practical for such a provider to make that facility multi-tenant, Leonard argued.


    “So say you get a software-as-a-service company.  They can only buy one thing: 10,000 square feet and 1100 kW.  And on day one, that might fit their needs perfectly, or maybe they can architect their application to where that’s perfect.  But what happens when they re-architect their application and their hardware, and now they consume double the watts per square foot?

    “Well, they’ve just stranded half of that space,” Leonard answers himself.  “Who pays for that space that’s stranded?  Well, they have to pay for it, because there’s no flexibility there.”

    Now, certain well-known data center customers — Leonard cites Akamai as one example — are moving from a 12-15 kW per rack power usage profile down to about 9 kW/rack.  Service providers are capable of making such deliberate changes to their applications to enable this kind of energy efficiency.

    Suppose a hypothetical SP customer of this same data center is inspired by Akamai, re-architects its application, and lowers its power consumption.  “Well, now they can’t use the power that’s in that space,” argues Leonard.

    “Creating space where power and cooling are irretrievably tied to the floor space that is being delivered on is a really bad idea.  When the use of that floor space, power, and cooling changes over time — and there’s a dozen dimensions that can cause it to change — those data centers are rigid and inflexible in their ability to react to those changes.”
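
    Leonard’s stranded-capacity arithmetic is easy to run through.  Assuming the 10,000-square-foot, 1,100 kW hall he describes — about 110 watts per square foot — the hypothetical Python sketch below shows what happens when a tenant’s density doubles, and, conversely, when it drops the way the Akamai-style profile does.

    ```python
    # Back-of-the-envelope version of Leonard's stranded-capacity argument.
    HALL_SQFT = 10_000
    HALL_KW = 1_100
    BASE_DENSITY = HALL_KW * 1000 / HALL_SQFT   # 110 W per square foot


    def usable_space(new_density_w_per_sqft: float) -> float:
        """Square footage the hall's fixed power budget can actually support."""
        return min(HALL_SQFT, HALL_KW * 1000 / new_density_w_per_sqft)


    # Tenant re-architects and doubles its density: half the floor goes dark.
    print(usable_space(220))   # 5000.0 sq ft usable; the other 5,000 sq ft stranded

    # Tenant lowers its density instead: the floor fills up but power is stranded.
    low_density = 80           # W per sq ft, hypothetical
    print(HALL_KW - low_density * HALL_SQFT / 1000)   # 300.0 kW of paid-for power idle
    ```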

    Yes, cloud application architectures have bifurcated the market for data center facilities.  But the phenomenon arising from this shift is essentially a single trend.  Leonard believes a facilities or colocation provider should engineer adaptability into its design, so that what it offers customers can track changes in their consumption profiles.

    Like many data center providers, ViaWest is noticing a sharp uptick in what Leonard calls “Amazon graduates”: SaaS and IaaS customers who were either born in the cloud or migrated to the public cloud when it was cost-effective, but who found themselves moving back off once their consumption profiles evolved past that cost-effective point.

    “They realize, especially as they end up with a lot of data on those clouds,” said Leonard, “that it becomes uneconomic at a certain scale.  It becomes more economic to take that back and move it into a private cloud that is dedicated to them, or move it back onto their own hardware [with] co-location.”

    These enterprises are spinning up applications through first-generation virtual machines, so they’re relying on environments such as VMware vSphere and OpenStack to provide layers of abstraction between their applications and the hardware hosting them.  It’s these abstraction layers that separate the enterprise customers from the service provider customers, said Leonard; the latter may be providing SaaS, PaaS, and IaaS platforms for their own customers in turn, and may need more direct, hands-on tools for optimizing their resource consumption profiles in real time.

    However, in both cases, he explained, the variables that constitute both enterprises’ and service providers’ profiles are identifiable, manageable, and in a best-case scenario, adaptive.

    “I don’t say that there’s a cloud data center,” ViaWest’s CDCO told us, “and you build a cloud data center in a particular way.  There’s data centers that are able to adapt to changing needs — some driven by cloud users, some driven by SaaS or IaaS users, some driven by enterprises as they change over time.  There’s characteristics that all these different users drive into the physical design of their data centers, that are more important to accommodate now than was the case five or ten years ago.”

    Dave Leonard will explain in detail his firm’s methodology for providing adaptable data center facilities platforms at 8:00 a.m. Central Time Thursday, September 15, in Room R209 at Data Center World, presented at the Morial Convention Center in downtown New Orleans.  He’ll also be moderating a panel session.

