Data Center Knowledge | News and analysis for the data center industry

Friday, November 18th, 2016

    5:50p
    Apple Chip Choices May Leave Some iPhone Users in Slow Lane

    BLOOMBERG – It turns out not all iPhone 7s are created equal.

    The latest Apple Inc. smartphones that run on Verizon Communications Inc.’s network are technically capable of downloading data faster than those from AT&T Inc. Yet in testing, the two phones perform about the same, according to researchers at Twin Prime Inc. and Cellular Insights.

    Neither firm is clear on the reason, but Twin Prime says it may be because Apple isn’t using all the potential of a crucial component in the Verizon version.

    “The data indicates that the iPhone 7 is not taking advantage of all of Verizon’s network capabilities,” said Gabriel Tavridis, head of product at Twin Prime. “I doubt that Apple is throttling each bit on the Verizon iPhone, but it could have chosen to not enable certain features of the network chip.”

    “Every iPhone 7 and iPhone 7 Plus meets or exceeds all of Apple’s wireless performance standards, quality metrics, and reliability testing,” Apple spokeswoman Trudy Muller said. “In all of our rigorous lab tests based on wireless industry standards, in thousands of hours of real-world field testing, and in extensive carrier partner testing, the data shows there is no discernible difference in the wireless performance of any of the models.”

    It would be an unusual step for a major phone company to restrain its devices. Normally, companies battle to make the fastest, most reliable handsets. Apple may be doing this because it wants to ensure a uniform iPhone experience, according to analysts.

    “They don’t want one version to get the reputation that it is better,” said Jan Dawson, founder of technology advisory firm Jackdaw Research LLC. “If Apple had a guiding principle it’s that they want to make sure customers were having a consistent performance.”

    But the move could backfire if customers realize their device isn’t performing to its full potential, or that it’s less capable than other handsets, analysts said.

    “This may not impact the fanboys, but it may make other consumers think twice about buying an Apple phone, especially if they think they might be purchasing a sub-standard product,” said Jim McGregor, an analyst at Tirias Research. The firm does paid analysis for smartphone suppliers, including Qualcomm Inc. and Intel Corp.

    The component at the root of the performance gap is the modem, a tiny chip buried deep inside a phone’s innards that turns wireless signals into data and voice. The iPhone 7 is the first Apple phone for several years to have versions with different modems. Verizon users get an iPhone 7 with Qualcomm’s latest X12 modem — capable of downloading data at up to 600 megabits per second. AT&T customers get a handset with an Intel modem that tops out at 450 megabits per second.

    Apple likely went with multiple suppliers to keep component costs in check.

    In field tests by Twin Prime, the Verizon version is a little faster than its AT&T stablemate — but not as fast as it could be. The firm proved this by doing the same tests on the Samsung Galaxy S7, which also runs on Verizon’s network and uses the Qualcomm X12. The S7 was about twice as fast as the iPhone 7 running on the same network with the same modem chip, Twin Prime found. This was based on data from more than 100,000 phones downloading an image in large U.S. cities.

    Qualcomm’s X12 is capable of dealing with more channels of data simultaneously than its Intel rival, according to a report by Milan Milanovic, an analyst at Cellular Insights, which tests phones and networks. Apple didn’t enable this feature “to level the playing field between Qualcomm and Intel,” Milanovic wrote.

    Cellular Insights tests phones in labs with equipment that can vary the strength of the signal, basically approximating a phone moving from good coverage to poor coverage. The differences in performance between the handsets Milanovic tested were greatest when the phone signal was weakest, he said.

    Twin Prime and Cellular Insights have no relationships with chip makers, phone makers, or wireless carriers. Four other network testing firms contacted by Bloomberg News said measuring phone data speeds is difficult because performance can be influenced by weather and other factors beyond the control of wireless providers and phone makers. None of those firms, however, disputed the findings of Twin Prime and Cellular Insights.

    Happy Carriers

    In addition to creating a unified iPhone 7 user experience, leveling the playing field would also keep wireless carriers happy, according to some industry analysts.

    “Apple likely has some incentive to balance the performance of its iPhones across its U.S. operator partners,” said  Jeff Kvaal, an analyst with Instinet LLC. “It would be difficult, for example, to explain to AT&T, which remains the U.S. carrier with the most iPhone subscribers, why Verizon is offering a superior product.”

    Representatives of AT&T, Verizon, Qualcomm and Intel declined to comment.

    Companies that “de-feature or go with a less-advanced modem” may be left behind as the networks offer faster data speeds, Qualcomm CEO Steven Mollenkopf said in a recent conference call, without naming specific firms.

    Qualcomm vs. Intel

    Apple is in this situation because it chose different modems for the latest iPhone. By pitting Intel against Qualcomm, the company may be trying to improve its negotiating position on price. Qualcomm’s modems have for years been almost ubiquitous in high-end smartphones, including iPhones. Intel has struggled to break into this market and has used subsidies to try to win share in mobile.

    Sacrificing performance in return for cheaper components may not go down well with Apple users.

    “It was probably multiple factors that drove that decision. But in the end, it was a bad decision,” said McGregor. “It is really hard to argue for doing this because Qualcomm really does have the best modem technology.”

    Still, many users may not realize their iPhone is different from their friends’ iPhones. And even if they do, they may decide Apple isn’t responsible. Most consumers blame their wireless service provider when a phone doesn’t work well, Milanovic said.

    6:03p
    GE Strengthens Position at Top of the Tech Pyramid With $915 Million Cloud Deal
    By The VAR Guy

    General Electric might officially fall into the category of “diversified industrial company,” but it’s making huge plays to transform into a leader in the tech sector.

    This week, GE agreed to buy ServiceMax, a cloud-based service company, for $915 million. The investment will enhance its digital operations capabilities, further modernize equipment maintenance, and add heft to its tech business unit.

    “It’s no secret our services revenue is the bulk of our earnings and is a key part of what makes us successful,” Bill Ruh, chief executive officer of GE Digital, said in an interview with Bloomberg. “We’re moving away from where it’s all on paper to where it’s all becoming fully automated. Services are becoming a key part of the digital economy.”

    Ruh’s business unit is on track to become a $15 billion business by 2020, GE says, helped in no small part by its proprietary operating system Predix, which is designed to help industrial equipment be more efficient in a digital world. The OS gives users data analysis capabilities for IoT sensors embedded in industrial equipment. That product will go a long way toward GE cementing its place at the top of the industrial servicing food chain. Ruh says that sector could grow to be a $1 trillion global market over the next 10 years, and digitization of those services will be a huge chunk of that revenue.

    GE Digital also announced an expansion of its channel program this week. Its goal is to help a growing number of independent software vendors (ISVs) use Predix as the foundation for applications to enhance its IIoT capabilities.

    “The ISV program enables GE and its partners to make industrial apps more accessible to a wider range of companies, while helping them increase productivity through the use of the Predix platform,” said Denzil Samuels, global head of channels and alliances at GE Digital. “GE is creating a robust marketplace by collaborating with companies that share GE’s vision for digital transformation and bridge the gap between the industrial and IT worlds to drive successful business outcomes.”

    Partners, take note: it isn’t just big companies from outside industries that are benefiting from the opening of the tech industry’s gates. The digital transformation provides huge new opportunities for the channel, too.

    This article originally appeared here at The VAR Guy.

    6:08p
    Russia Blocks LinkedIn, U.S. ‘Deeply Concerned’

    (Hollywood Reporter) – Russian authorities have blocked access to LinkedIn after a court ruled that the business networking site had broken local data storage laws.

    A Moscow district court decision last week, which went into effect on Friday, said that LinkedIn had failed to observe a 2014 federal law stipulating that internet companies that process the personal data of Russian citizens must store that data on servers located in Russia.

    The move marks the first time a social media site has been blocked in Russia. It will see LinkedIn users in Russia gradually lose access as of Friday. In the past, Russia has threatened to block such social media sites as Facebook, but has never done so.

    The decision on LinkedIn prompted alarm at the U.S. embassy in Moscow, which expressed concern, with some observers wondering if the decision could set a precedent justifying the blocking of other sites in the country. “The United States is deeply concerned,” Russian state news agency RIA quoted an embassy representative as saying. “We call on Russia to immediately restore access to LinkedIn.”

    Although none of the other large international social media sites, such as Facebook and Twitter, or Facebook’s WhatsApp service, keep Russian users’ data on Russian servers, observers believe the case, brought by Kremlin media watchdog Roskomnadzor, is designed as a warning that could be used to pressure those companies, which are far more popular among Russians than LinkedIn, into falling into line.

    SEE ALSO: Report: Apple Takes Space in Russian Data Center to Comply With Data Location Law

    LinkedIn has around 400 million registered users worldwide, but only 5 million of those are in Russia. Russian authorities claim the site is vulnerable to hacking, pointing to a massive hack in 2012 when 6.4 million usernames and passwords were stolen.

    “They have a bad track record: Every year there’s a major scandal about the safety of user data,” Roskomnadzor spokesman Vadim Ampelonsky told The Moscow Times.

    In a statement sent out Friday to registered users in Russia, LinkedIn said it would provide refunds for “unused time” for any paid services. Russian users who choose not to close their accounts are also likely to still be able to access them when outside of Russia.

    The move comes at a time when, following Donald Trump’s presidential election victory, Russia’s relations with the U.S. are looking like they could improve.

    Any decision by LinkedIn to comply with the Russian data storage law and restore access to its Russian users is likely to hinge on what Microsoft, which recently agreed to buy the business networking site for $26.2 billion, decides.

    10:42p
    Google Cloud Anticipates Machine Learning Growth with GPU
    Brought to You by The WHIR

    Google is bringing graphics processing unit (GPU) chips to its Compute Engine and Cloud Machine Learning to boost performance for intensive computing tasks like rendering and large-scale simulations. GPUs will be available from Google Cloud Platform worldwide in early 2017, according to an announcement this week.

    Separately, the company announced a new Cloud Machine Learning group and a series of moves related to machine learning and artificial intelligence.

    Google introduced its new Cloud Jobs API, which applies machine learning to the hiring process, as well as significantly reduced prices for its Cloud Vision API, and a premium edition of its Cloud Translation (formerly Google Translate) API.  A blog post outlining Google’s machine learning-related updates also announces the general availability of the Cloud Natural Language API.
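
    As a concrete illustration of one of those services, below is a minimal, hypothetical sketch of a sentiment-analysis request against the Cloud Natural Language v1 REST endpoint. The API key and sample text are placeholders, and the request shape follows Google’s published v1 API rather than anything detailed in this announcement.

        # Hypothetical example: analyze the sentiment of a short passage (YOUR_API_KEY is a placeholder).
        curl -s -H "Content-Type: application/json" \
          "https://language.googleapis.com/v1/documents:analyzeSentiment?key=YOUR_API_KEY" \
          --data '{
            "document": { "type": "PLAIN_TEXT", "content": "The new Tokyo region has been fast and reliable." },
            "encodingType": "UTF8"
          }'

    The response includes a documentSentiment object whose score and magnitude fields summarize the overall tone of the text.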

    The announcements collectively represent a push not just to broaden its compute-intensive cloud services, but to deliver practical services built on them. GPUs are used as accelerators on many clouds, from Peer1 to AWS, though industry players are engaged in an ongoing debate about the scalability and relative efficiency of different approaches.

    Google says the GPUs on its cloud will be AMD’s FirePro S9300 x2 for remote workstations and the Tesla P100 and Tesla K80s from NVIDIA for deep learning, AI, and high-performance computing (HPC) applications. Google is offering the GPUs in passthrough mode for bare metal performance, and up to 8 can be attached to each VM instance.

    “These new instances of GPUs in the Google Cloud offer extraordinary performance advantages over comparable CPU-based systems and underscore the inflection point we are seeing in computing today,” said Todd Mostak, founder and CEO of data exploration startup MapD, which used them as part of an early access program. “Using standard analytical queries on the 1.2 billion row NYC taxi dataset, we found that a single Google n1-highmem-32 instance with 8 attached K80 dies is on average 85 times faster than Impala running on a cluster of 6 nodes each with 32 vCPUs. Further, the innovative SSD storage configuration via NVME further reduced cold load times by a factor of five. This performance offers tremendous flexibility for enterprises interested in millisecond speed at over billions of rows.”

    GPU instances take minutes to set up from Google Cloud Console or the gcloud command line, and are priced per minute.
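
    For readers who want to picture that workflow, the following is a rough, hypothetical sketch of attaching GPUs at instance-creation time with gcloud. The instance name, zone, image, and accelerator count are illustrative placeholders, and the flags shown reflect the gcloud tool as it later shipped GPU support, not syntax confirmed in Google’s announcement; the eight-K80, 32-vCPU shape simply mirrors the configuration MapD describes above.

        # Hypothetical example: request a 32-vCPU VM with eight K80 dies attached in passthrough mode.
        # GPU instances require --maintenance-policy TERMINATE because they cannot live-migrate.
        gcloud compute instances create gpu-test-1 \
          --zone us-east1-c \
          --machine-type n1-highmem-32 \
          --accelerator type=nvidia-tesla-k80,count=8 \
          --maintenance-policy TERMINATE \
          --image-family ubuntu-1604-lts \
          --image-project ubuntu-os-cloud

    Note that the NVIDIA driver typically still has to be installed on the resulting VM before CUDA workloads can see the devices.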

    Google Cloud Platform also extended its geographic reach earlier this month with a new Tokyo region.

    This article was originally posted here at The Whir.

    11:06p
    Is a Retreat from Private Cloud Also Under Way? Cisco Weighs In

    This last fiscal quarter was certainly less than wonderful for Cisco, and there may be plenty of reasons why, enough to fill an entire storage volume.  But a UBS analyst focused on one: a possible lull in the adoption of, or at least the excitement around, private cloud — specifically, the retooling of internal data center infrastructure so that compute, storage, networking, and other resources may be pooled together.

    UBS Managing Director Steven Milunovich, during Wednesday’s conference call, gave the world a peek at what went on in between the various CFO speeches at the banking giant’s three-day technology conference this past week.  One panel discussion, filling some of the blank space between often blank speeches, evidently turned to the status of private cloud.

    “We had a panel on folks who helped companies moved to the cloud,” said Milunovich [our thanks to Seeking Alpha for the transcript], “and the general consensus was that private cloud implementations generally are not working, and many companies that begin on a private cloud path end up going down a public cloud path.”

    The veteran hardware analyst was framing a question with the premise that Cisco is devoting its energies to a business that may be — at least for the present time — declining.  Most importantly, Cisco CEO Chuck Robbins declined to disagree.  He cited “a lot of the complexity in building out private infrastructure,” and laid the blame for that complexity squarely upon the shoulders of OpenStack, the open source, internal cloud platform.

    The focus of the entire industry at this point, said Robbins, is how best to automate operational processes — especially applying security policies to resources.  As you might expect, he believes those solutions will come, and even drew up a timeframe of between one and two years for customers’ pains to be — in his words — “alleviated.”  Then, having clearly framed his idea of the present state and the future state of the data center, he proceeded to shove something very big into the realm of the former.

    “I think your observations are probably valid particularly if you look at like the lot of early OpenStack implementations,” said the CEO.  “But I do think that customers are going to want to have that capability, and I think we as an industry will continue to work on simplifying how that operational capability shows up within our customer base.”

    Squishy

    “‘Private cloud’ is kind of a squishy term,” explained Marko Insights Principal Analyst Kurt Marko, speaking with Data Center Knowledge.  “People are not using virtualized, shared environments the same way they use a public cloud service or an IaaS service.  Part of the problem is the way enterprise users are consuming on-premises resources.”

    During the earnings call, Cisco CEO Robbins did not relent on his company’s push for Application Centric Infrastructure (ACI), its policy-driven framework that incorporates software-defined networking (SDN), letting workloads themselves factor into decisions about deployments.  But Cisco has been unable to wean many customers away from their existing workloads, which is why the company has had to maintain its older infrastructure system, NX-OS, simultaneously.

    Last month, Cisco not only unveiled new capabilities for its Nexus 9000 switches, but outlined a kind of NX-to-ACI migration plan that Nexus would facilitate.

    When the public cloud first became a marketplace a decade ago, many major vendors staked out competitive positions in the space.  HPE has since had to back off, while Oracle has doubled down on its bid.  Not until 2014 did Cisco finally assemble a public cloud strategy, one that relied on partner service providers and involved the company’s entire existing sales channel — as opposed to constructing a public cloud platform to rival Amazon.

    Still, the customer side of that first value proposition relied on a kind of a la carte menu, on the assumption that customers would always want some of both.  Perhaps spurred by these same customers’ reluctance to dive all-in on ACI, Cisco’s strategy was to enable a de facto hybridization, one that accepted the reality that certain workloads would be better suited to public cloud deployment than others.

    Pick and Choose

    That strategy continued to represent Cisco’s point of view as recently as late last October, when at the OpenStack Summit conference in Barcelona, Cisco’s senior solutions marketing manager, Enrico Fuiano, prefaced his session by citing IDC survey data.

    “Is it private cloud or public cloud?  We believe the debate is over,” Fuiano told attendees.  “Organizations want both, and there is no doubt about it.  You can see from the projections that private cloud, at least in the next couple of years, will continue to enjoy growth upwards of 40 percent [over two years].”

    Fuiano was framing the introduction of a consultation service, jointly produced by Cisco and IDC, helping enterprises to determine a cloud strategy for themselves, and then to execute on that strategy without falling into traps.  He reminded attendees that one of the key reasons why enterprises choose OpenStack as an internal cloud platform is to avoid vendor lock-in.

    Next, he pointed to IDC survey data indicating that enterprises lack the tools to effectively monitor, measure, or manage hybrid cloud environments.  Fuiano believes that perception of lacking tools comes from a deficiency of skills needed to manage the tools that enterprises have on hand.

    That was three weeks ago.  Now, Cisco CEO Robbins’ assessment of the state of private cloud points not to a skills deficiency as the culprit, but to a tools deficiency.

    Lessons in Fence Straddling

    Kurt Marko perceives a gap in an under-appreciated part of the spectrum: cloud-native software development, both inside enterprises and among ISVs.

    Marko cites VMware as a case study in a company dealing with present and future platforms — not unlike Cisco’s NX-OS and ACI.  It has a more versatile, software-defined infrastructure to which it would like to move enterprises.  Doing so could open up plenty of new market opportunities, not just for VMware, but for a broader ecosystem around its platform.  And it could make a stronger case for private cloud as a whole.

    “VMware is almost being held back by its customers,” Marko remarked, “because they’re using the VMware stack as a legacy virtualization stack.  It’s still client/server — it’s a bunch of servers, just like in the client/server era, except now we’re running 10 or 20 or 30 on a big piece of digital hardware.  But what we run on them and how we operate them, is no different than what it was 20 years ago.”

    VMware’s platform runs at a higher level than Cisco’s, at least theoretically.  Both depend to a large degree upon software-defined networking, but each may also manage SDN at its own level.  Still, it’s this realization that workloads aren’t changing as fast as they could, or perhaps should, that leads Marko to another realization:

    Private cloud isn’t in a lull, as UBS’ Milunovich implied.  In Marko’s view, “It hasn’t ever really taken off.”

    Specifically, he argued that the strict NIST definition of a private cloud — where resources are pooled together and services are set up for full automation and self-provisioning — is not what enterprises think they have when asked whether they’ve adopted private cloud.

    “It’s kind of a misnomer,” he said.  “Most people will say they have private cloud if they have a virtualization stack, even though they’re not running it like a cloud — they’re not allowing users to self-provision resources.  They’re not dynamically auto-scaling and moving resources around; they’re not providing database and application-level services out in the cloud.  They’re just giving people a virtual machine and a logical volume, period, and they call that a private cloud.”

    Companies with the budget to build hyperscale data centers, including all-internal or mostly-internal designs, are avoiding the legacy vendors’ networking equipment, said Marko, in favor of Open Compute Project or OCP-like bare-metal deployments plus SDN, open virtual switches, and containerization.  They may constitute the true private cloud market.  But this shift is leaving Cisco and other vendors in this perceived legacy space behind, compelling them to adopt broader, looser definitions of private cloud and hybridization — and then to blame technologies like OpenStack when it doesn’t all stack up right.

    Cisco’s strategy, as he describes it, appears to have been to encompass its entire networking ecosystem in the ACI space, and then incrementally shift its customers from one all-Cisco world to a new all-Cisco world.  That doesn’t work well when customers no longer want everything from a single vendor.

    “In Cisco’s defense, networking is a little bit different than server infrastructure,” said Marko, “in that you’re always going to have devices that have to connect to the cloud.  Even if you’re a company with no data center, you’re still going to have a sizable network investment, to connect all your clients, to connect your WAN, and Cisco wants to be there to provide that.”

    In the meantime, however, Cisco finds itself in a strange position.  As Credit Suisse analyst Kulbinder Garcha put it Friday, “The switching business faces increasing pressures, as Cisco continues to lose market share in the data center switching segment.”  Its bedrock business is a technology that may not be sinking, but its returns are less than rewarding.  Its future business may lie with a technology that ensures no single vendor can own it.  It would like its future to be a matter of choosing a bit from both plates — a little from this column, some from that column.

    But in that event, it will be customers who make the choices.  That’s when all the guarantees fall apart.
