Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, December 13th, 2016

    12:30p
    Issues for 2017: Is Compute Power Truly Moving to ‘the Edge?’

    From the point of view of people who run data centers, “the edge” is the area of the network that most directly faces the customer.  From the perspective of people who manage the Internet and IP communications at a very low level, “the edge” is the area of their networks that most directly faces their users.  They’re two different vantage points, but a handful of lucrative new classes of applications — especially the Internet of Things — is compelling people to look toward the edges of their networks once again, to determine whether it now makes more sense, both in terms of efficiency and profitability, to move computing power away from the center.

    It would seem to be the antithesis of the whole “movement to the cloud” phenomenon that used to generate so much traffic on tech news sites.  Cloud dynamics are about centralizing resources, and hyperconvergence is perhaps the most extreme example.

    The Edge Gets Closer to Us

    Last year at this time, hyperconvergence seemed to be the hottest topic in the data center space.  Our indicators tell us that interest in it has not waned.  Assuming that observation is correct, how can hyperconvergence and “the edge” phenomenon be happening at the same time?  Put another way, how can this reportedly relentless spike in the demand for data be causing data centers to converge their resources and to spread them out, simultaneously?

    “I remember when ‘the cloud’ first came out, and we used to talk about, what was the cloud?  And what was going to happen to all the data centers?” remarked Steven Carlini, Schneider Electric’s senior director of global solutions, in a discussion with Data Center Knowledge.

    Carlini pointed to the dire predictions from 2014 and 2015 that the cloud trend would dissolve enterprise data centers as companies moved their information assets into the public cloud.  The “hyperscale” data centers, we were often told at the time, would become larger but fewer in number, swallowing enterprise facilities and leaving behind only smaller sets of components that faced the edge.

    But through 2016, while those hyperscale complexes did grow larger, they refused to diminish in number.  As Data Center Knowledge continues to cover on a day-by-day basis, huge facility projects are still being launched or completed worldwide: for example, just in the past few days, in Hong Kong, in Quebec, and near Washington, D.C.

    The challenges that builders of these “mega-centers” face, Carlini notes, have less to do with marketing trends and much more to do with geography: specifically, whether the sites being considered provide ample electricity and water.  So for years they avoided building in urban areas, siting facilities instead in what he called “the outskirts of society.”

    “What started happening was, as more and more applications went to cloud-based, people started to be more frustrated with these centralized data centers,” he continued.  “So we saw a huge migration out of the enterprise data centers.  All of the applications from small and medium companies, especially, that could be moved to the cloud, were moved to the cloud — the classic ones like payroll, ERP, e-mail, and the ones that weren’t integrated into the operation of manufacturing.”

    Bog-down

    As more users began trusting SaaS applications — especially Microsoft’s Office 365 — Carlini believes that performance once again became a noticeable factor in users’ computing experience.  Files weren’t saving the way they used to with on-premises NAS arrays, or even with local hard drives.

    This was one of the critical factors, he asserted, behind the recent trend among major firms to site their more regional data center projects closer to urban areas and central business districts — for example, Digital Realty’s big move in Chicago.

    That’s the force precipitating the wave Carlini points to: the move to the edge, where the computing power is closer to the customer.  In a way, it’s a backlash against centralization, not so much against its structure as against its geography.  For organizations that did move their general-purpose business and productivity applications into the public cloud, centralization introduced too much latency into the experience of everyday work.

    Perhaps it’s a bit too simplistic to assert that, because enough people twiddled too many of their thumbs waiting for documents to save, a revolution triggered a perfect storm that moved mountains of data back toward downtown Chicago.

    But it may be one symptom of a much larger phenomenon: the introduction of latency into the work process which, when multiplied by the total number of transactions, results in unsustainable intervals of wasted time.  Last summer at the HPE Discover conference in Las Vegas, engineers made the case that sensitive instrumentation used in geological and astronomical surveys is too deterministic in its behavior, and too finely granular in its work detail, to afford the latencies introduced by billions of concurrent transactions with a remote, virtual processor over an asynchronous network.
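    To make the scale of that waste concrete, consider a back-of-the-envelope calculation.  The figures below are illustrative assumptions, not numbers reported by HPE or Carlini, but they show how quickly small delays compound:

        # Illustrative arithmetic only; every input below is an assumption.
        extra_latency_s = 0.5        # extra delay per cloud round trip, in seconds
        round_trips_per_user = 400   # saves, loads, and syncs per workday
        users = 2000                 # employees in the organization

        wasted_seconds = extra_latency_s * round_trips_per_user * users
        print(f"~{wasted_seconds / 3600:.0f} person-hours lost per day")  # ~111 hours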

    Is the Edge in the Wrong Location?

    Content delivery networks (CDNs), which operate some of the largest and most sophisticated data centers anywhere in the world, came to the same conclusion.  Their job has always been to store larger blocks of data in caches that reside closer to the consumer, on behalf of customers whose business viability depends on data delivery.  So yes, CDNs have always been on the edge.

    But it’s where this edge is physically located that may be changing.  In a recent discussion, Ersin Galioglu, vice president at Limelight Networks (by many analysts’ accounts, among the world’s top five CDNs behind Akamai), told Data Center Knowledge his firm has been testing the resilience of its current network by situating extra “edge servers” in the field, generating surpluses of traffic, and conducting stress experiments.

    “The big distinction to us is the distribution of small objects and large objects [of data],” Galioglu explained.  “With large objects, I can perform a lot more efficiently; small objects are a lot harder from a server perspective.”

    Internet of Things applications are producing these small objects: minute signals, as opposed to large blocks of video and multimedia.  The strategy for routing small objects is significantly different from that for large objects — so different that it’s making CDNs such as Limelight rethink their approach to design.

    Limelight’s lab tests, Galioglu told us, begin with real-world sampling that produces representative global traffic patterns.  Those patterns are reduced to a handful for purposes of comparison.  Then Limelight dispatches edge servers into various field locations to generate specific, albeit artificial, transactions that are then monitored for performance.
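    Data Center Knowledge has no visibility into Limelight’s actual tooling, but the general shape of such a stress experiment can be sketched in a few lines of Python: replay a small set of representative request patterns against an edge endpoint and record how response times hold up.  The endpoint, the pattern mix, and the request counts below are all hypothetical placeholders.

        # Hypothetical sketch of an edge stress test; not Limelight's tooling.
        import statistics
        import time
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        EDGE_ENDPOINT = "http://edge.example.net"   # placeholder edge server URL
        # A reduced "handful" of traffic patterns: (object path, requests to send)
        PATTERNS = [("/small/sensor-reading.json", 500),   # many small objects
                    ("/large/video-segment.ts", 20)]       # a few large objects

        def timed_fetch(path):
            start = time.monotonic()
            with urlopen(EDGE_ENDPOINT + path) as resp:
                resp.read()
            return time.monotonic() - start

        for path, count in PATTERNS:
            with ThreadPoolExecutor(max_workers=50) as pool:
                latencies = sorted(pool.map(timed_fetch, [path] * count))
            print(path,
                  "median %.3fs" % statistics.median(latencies),
                  "p95 %.3fs" % latencies[int(0.95 * len(latencies)) - 1])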

    “One of the challenges that the CDN industry is having,” the VP remarked, “is that there is limited capacity.  As much capacity as we have been adding, it gets consumed as soon as we add it.  And the challenge there is [when] there’s a very ‘spike-y’ traffic pattern on the Internet, and a lot of times, the customers themselves cannot anticipate what the demand will be.”

    Existing networks may be too sensitive to sudden changes in customer demand one way or the other.  And those changes may be having bigger impacts with respect to greater quantities of smaller data objects — the products of IoT applications.

    When Edges Collide

    So both enterprises and commercial access providers are rethinking their strategies about the locations of their respective edges.  As a result, a possibility that would have been dismissed sight-unseen just a few years earlier suddenly becomes viable:  Enterprises’ edges and service providers’ edges may be merging into the same physical locations.

    It’s not really a new concept, having been the subject of speculation at IDC at least as early as 2009.  And it’s a possibility that some folks, including at HPE, are at least willing to entertain.  Consider something that’s more like a “Micro Datacenter” than an Ashburn, Virginia, megaplex.  Here, a handful of enterprise tenants share space with a few cross-connect points to major content providers.  The “service provider edge,” to borrow IDC’s phrase, would overlap the enterprise edge at that point.

    “I think there is the opportunity here to kind of rename it all,” admitted Schneider Electric’s Steven Carlini.  “There’s definitely a lot of confusion, a bit like ‘the cloud’ 10 years ago — everyone was all like, ‘Oh, what’s the cloud, what does it mean?’  It’s the same thing right now with ‘the edge.’”

    1:00p
    Monitoring Those Hard-to-Reach Places: Linux, Java, Oracle and MySQL

    Gerardo Dada is Vice President of SolarWinds.

    Today’s IT environments are increasingly heterogeneous, with Linux, Java, Oracle and MySQL considered nearly as common as traditional Windows environments. In many cases, these platforms have been integrated into an organization’s Windows-based IT department by way of an acquisition of a company that leverages one of those platforms. In other cases, the applications may have been part of the IT department for years, but managed by a separate department or a single administrator.

    Still, whether it’s a perception of required specialization, frustration over these platforms’ many version permutations or just general uncertainty and doubt, Linux, Java, Oracle and MySQL create mass monitoring confusion and are routinely considered “hard to reach” even for a seasoned IT professional. This problem goes both ways (when monitoring Windows is actually the unnatural element), but for the most part, IT shops are primarily Windows-based and consistently struggle to monitor these more niche platforms.

    Of course, it’s certainly possible for IT professionals who are proficient in Linux, Java, Oracle or MySQL to successfully monitor these platforms with their own command line scripts or native tools—many even enjoy manually writing code and command scripts to monitor these instances—but this strategy will ultimately increase the organization’s likelihood of performance issues. Why?
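    Before answering, it helps to picture what one of those homegrown checks typically looks like. The sketch below, in Python, reads load and memory figures straight from the Linux /proc filesystem; the thresholds and the alert action are placeholders an administrator would adapt, not a recommended configuration.

        # Minimal homegrown health check of the kind described above (Linux only).
        def read_loadavg():
            with open("/proc/loadavg") as f:
                return float(f.read().split()[0])             # 1-minute load average

        def read_mem_available_mb():
            with open("/proc/meminfo") as f:
                for line in f:
                    if line.startswith("MemAvailable:"):
                        return int(line.split()[1]) // 1024   # kB -> MB
            return None

        LOAD_LIMIT = 8.0       # placeholder thresholds; tune per host
        MEM_FLOOR_MB = 512

        load, mem = read_loadavg(), read_mem_available_mb()
        if load > LOAD_LIMIT or (mem is not None and mem < MEM_FLOOR_MB):
            print(f"ALERT: load={load}, mem_available={mem}MB")   # stand-in for a real alert
        else:
            print(f"OK: load={load}, mem_available={mem}MB")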

    First, should the IT professional tasked with creating unique code and scripts for these specialized instances ever leave the business, the organization will be vulnerable to downtime while another team member is trained on how to operate the native tool or also learns to develop command line scripts for these platforms.

    This isolated technique also exacerbates the biggest challenge of monitoring in any environment: sprawl. It may seem easier to maintain the status quo and continue operating disparate tools for the hard-to-reach platforms, but it ultimately adds another layer of complexity to the overarching monitoring system. Having too many monitoring tools to manage increases the chances that some will be forgotten (especially the tools used to monitor Linux, Java, Oracle and MySQL instances), meaning performance issues or downtime will take much longer to remediate.

    Not only that, but niche monitoring processes lack the type of sophisticated metrics that help create a more effective IT department. Here are several command-line tools and interfaces for each specialized platform that will return basic metrics (a brief scripted example follows the list):

    • Linux command-line tools such as ‘top,’ ‘glances’ and ‘htop,’ for example, will provide basic metrics like CPU and memory usage, load, and any pertinent errors of which to be aware.
    • Querying MySQL’s built-in ‘performance_schema’ database will display tables of wait states and information about sessions (how many executions, how many rows are examined, etc.) to uncover inefficient queries. Rather than run queries on the server, administrators can also use MySQL Workbench, the desktop tool Oracle ships for MySQL, which works in a similar fashion.
    • JConsole, a graphical monitoring console included with the Java Development Kit, provides an overview of Java applications running on the JVM. It tracks heap and memory usage, loaded classes, and threads (important for spotting thread leaks and deadlocks), as well as CPU usage.
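    As a taste of how the MySQL wait-state and statement data mentioned above can be pulled programmatically, the following Python sketch queries the performance_schema statement-digest table for the most expensive queries. The connection details are placeholders, the mysql-connector-python driver is assumed to be installed, and performance_schema is assumed to be enabled (it is by default on modern MySQL versions).

        # Sketch: pull the most expensive statement digests from performance_schema.
        # Requires `pip install mysql-connector-python`; connection details are placeholders.
        import mysql.connector

        conn = mysql.connector.connect(host="db.example.net", user="monitor",
                                       password="change-me", database="performance_schema")
        cur = conn.cursor()
        cur.execute("""
            SELECT DIGEST_TEXT, COUNT_STAR, SUM_ROWS_EXAMINED,
                   SUM_TIMER_WAIT / 1e12 AS total_seconds
            FROM events_statements_summary_by_digest
            ORDER BY SUM_TIMER_WAIT DESC
            LIMIT 5
        """)
        for digest, executions, rows_examined, seconds in cur.fetchall():
            print(f"{seconds:10.1f}s  {executions:10} runs  {rows_examined:12} rows examined  "
                  f"{(digest or '')[:60]}")
        cur.close()
        conn.close()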

    However, more comprehensive features like advanced alerts, dynamic baselines, correlation and detailed application metrics and availability, which ultimately help enable a more proactive and effective method of managing infrastructure and applications, will be missing from these reports.

    At the end of the day, it’s important to remember that administrators should not be spending more time configuring their command line scripts or monitoring tools than they spend taking advantage of the data. Whether an administrator is monitoring Windows or something more specialized like Linux or MySQL, an IT department will always be stuck in reactive (troubleshooting) mode without better visibility into the health and performance of its systems and the ability to get early warnings.

    By establishing monitoring as a core IT function deserving of a strategic approach, businesses can benefit from a more proactive, early-action IT management style, while also streamlining infrastructure performance, cost and security.

    To create a more thoughtful, comprehensive monitoring strategy that incorporates those hard-to-reach places, here are several best practices IT departments should consider:

    •  Create an inventory of what is being monitored. The majority of IT departments have a broad set of monitoring tools covering a number of different things. Are there applications in the cloud that are monitored by one tool? Are workloads being hosted in a different data center that leverage a separate tool? Before standardizing the monitoring process, organizations should create an inventory of everything they are currently monitoring and the tools being used to do so.
    • Implement a set of standards across all systems. This should be done for every workload, independent of what tool is being used, and especially if an IT department is running several different tools. At the end of the day, it’s impossible to optimize what isn’t being measured, so it’s in every IT department’s best interest to create a standard set of monitoring processes, similar to creating runbooks (see the sketch after this list). What are the key metrics needed from each system? What are the situations for which alerts are needed, and how will they be acted on? Having answers to these types of questions will allow even an IT department with distributed workloads and applications to successfully monitor and ensure performance and availability. This approach also helps IT departments avoid having a “weakest link,” where one team may have its own security protocol, its own network and its own firewalls that ultimately leave the organization open to security vulnerabilities.
    • Unify the view. IT departments should have a comprehensive set of unified monitoring and management tools in order to ensure the performance of the entire application stack. Everything is important to monitor, even these hard-to-reach systems, and IT professionals should look to leverage tools that integrate monitoring for these non-standard Linux, Java, Oracle and MySQL instances within a Windows environment into a single dashboard in order to cultivate a holistic view of their infrastructure.
    •  Remember that monitoring should be a discipline. Traditionally, monitoring in the data center—even for more standard, Windows-based applications—has been somewhat of an afterthought. For most organizations, it’s been a “necessary evil,” a resource that the IT department can leverage when there’s a problem that needs solving, and often a job that’s done with just a free tool, either open source or software pre-loaded by the hardware vendor. However, the concept of monitoring as a discipline is designed to help IT professionals escape the short-term, reactive nature of administration, often caused by ineffective, ad hoc practices, and become more proactive and strategic.
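    One lightweight way to capture the kind of cross-platform standard described in the second bullet above is a single, machine-readable definition of metric names, units and alert thresholds that every tool and team is measured against. The Python sketch below is purely illustrative; the metric names and thresholds are made up, not SolarWinds recommendations.

        # Hypothetical monitoring standard: the same metric names, units and alert
        # rules applied to every workload, whatever tool actually collects the data.
        MONITORING_STANDARD = {
            "cpu_utilization_pct":  {"warn": 80,   "critical": 95,   "unit": "%"},
            "memory_available_mb":  {"warn": 1024, "critical": 256,  "unit": "MB", "direction": "below"},
            "query_latency_p95_ms": {"warn": 200,  "critical": 1000, "unit": "ms"},
            "jvm_heap_used_pct":    {"warn": 75,   "critical": 90,   "unit": "%"},
        }

        def evaluate(metric, value):
            """Return 'ok', 'warn' or 'critical' for a reported value."""
            rule = MONITORING_STANDARD[metric]
            below = rule.get("direction") == "below"
            for level in ("critical", "warn"):
                breached = value <= rule[level] if below else value >= rule[level]
                if breached:
                    return level
            return "ok"

        print(evaluate("cpu_utilization_pct", 97))    # critical
        print(evaluate("memory_available_mb", 800))   # warn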

    In sum, despite the perception of Linux, Java, Oracle and MySQL instances as “hard-to-reach places,” there are many ways in which administrators can receive data about the health and performance of systems running these platforms, from running manual command line scripts to leveraging comprehensive tools that integrate these instances in a single dashboard.

    However, as the data center continues to grow in complexity, and especially as hybrid IT increases, IT professionals should look to establish the practice of monitoring as a discipline. With this approach, unified monitoring aims to turn data points from various monitoring tools into more actionable insights by looking at all of them from a holistic vantage point rather than at each disparately. Coupled with the other best practices above, it will ultimately allow IT administrators to increase the overall effectiveness and efficiency of their data centers.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:44p
    Google Quietly Opens Dutch Wind-Powered Facility Amid Scandal

    Just days after making a pledge to power all of its worldwide data centers with 100 percent renewable energy by the end of 2017, Google very quietly opened its Eemshaven, Netherlands, facility last week, according to Dutch online news service NU.nl, which spoke to local union leaders there.

    But the news wasn’t even covered on Google’s own Web page for the facility, and Google has yet to respond to Data Center Knowledge’s request for details.

    When ground was broken on the project in September 2014, it was to be built on 109 acres of prime real estate in the Dutch seaport, with a projected investment of $773 million.  As it stands, the facility was opened on time, although we know nothing more yet about the company’s budget goals.

    In a company blog post published at the time of groundbreaking, William Echikson, who headed Google’s data center community relations then, pledged that the facility would run on 100 percent renewable energy, and would be fully free-cooled, including with the use of “grey water” (unfiltered, untreated, but not yet waste water).

    Indeed, just a few months after that post, Google entered into a 10-year deal with Eneco, one of the Netherlands’ major energy companies.  That company had already been constructing a 19-turbine wind farm in Delfzijl, just a few miles south, using partially offshore turbines built on an artificial reef.  That production facility opened in September 2015, with Eneco reporting its total production capacity at 62.7 MW.

    Although that facility could power as many as 55,000 Dutch households, the industry publication Wind Power Intelligence reported the previous December that Eneco’s 10-year deal with Google was for its entire output.

    So why the quiet attitude, when the project would appear to be another in Google’s string of successes?

    As it turns out, a former Google director and European data center contract negotiator named Simon Tusha pled guilty last May to one count of conspiracy to defraud the U.S.

    Specifically, Tusha admitted to receiving kickbacks from both Dutch and British companies, and then hiding those kickback funds through a series of shell companies.  U.S. investigators stumbled onto Tusha, according to the Pittsburgh Post-Gazette, quite by accident — through a criminal investigation of a Pennsylvania drug ring operated by one of the men involved in Tusha’s shell companies.

    Late last month, Dutch real estate magnate Rudy Stroink, the former head of TCN Group, known in the Netherlands as the “Mr. Clean of Real Estate,” was indicted for allegedly paying out the bribes, totaling some €1.7 million, that Tusha pled guilty to receiving.

    The story is front-page, live-coverage news all over the Netherlands, involving one of its most well-known public figures.

    For its part, Google has distanced itself from the proceedings, issuing a statement last month saying that the Eemshaven project began after Tusha had already left the company, and adding, “We are the victim of the crimes that will be charged.”

    6:30p
    The Cisco-Arista Battle over Networking Tech Continues
    By The VAR Guy

    The long legal battle that originated in a complaint filed by Cisco in 2014 had another development Friday, when the International Trade Commission ruled that Arista Networks illegally used Cisco’s network device technology in its Ethernet switches. The ITC case is one of three fronts in the battle, which is also being fought in federal court in California and at the U.S. Patent and Trademark Office in Alexandria, Virginia.

    The original ITC complaint asserted that Arista infringed six of Cisco’s patents. Friday’s ruling agreed that two of the claims were valid. The ruling still has to be reviewed by the full commission over the next two months but, if upheld, would almost certainly lead to an order banning the import of Arista’s products into the U.S. In a separate case in June, the ITC ordered an import ban on other Arista products that infringed Cisco patents.

    Arista was founded by former Cisco executives who, Cisco says, ripped off its technology before making a break for the land of startups. Cisco’s lawyers say it’s hard to argue the founders weren’t aware of the similarities in the code, since Arista’s founder, chief executive officer and chief technology officer were in charge of the same technology they allegedly stole.

    It’s a bigger deal than many infringement cases we’ve seen recently. Cisco is trying to navigate a huge market shift toward software-based products. The company currently holds 56.5 percent of the switching market, but customers bristle at its expensive hardware and at software designed to eliminate the ability to switch products. Some of the code in question in California supposedly makes it easier for customers to make that switch.

    The California trial boils down to Cisco’s accusation that Arista infringed its copyrighted command line interface terms that control hardware and software functions, down to copying and pasting from Cisco’s manuals, without even bothering to correct typos. When asked in court two weeks ago if he’d reached out to Arista before filing the suit, Cisco’s executive chairman, John Chambers, called the Arista executives in question his friends.

    “In my mind, they knew exactly what they were doing and a phone call wouldn’t have helped,” Chambers said. “It is hard to accuse people who are your friends — and they are still my friends — of stealing from you. But this was so blatant.”

    If true, it would be hard not to take the infringement as a betrayal. Arista CEO Jayshree Ullal, founder and chairman Andy Bechtolsheim, the heads of both its software and hardware engineering, and four members of its board of directors are all former Cisco employees. In fact, one of those board members, Charlie Giancarlo, was once thought to be the eventual successor to Chambers as Cisco CEO.

    If the theft and the charge seem personal, they’re nothing compared to the blood Cisco is out for in the damages it’s seeking. It’s suing Arista, whose revenue last quarter was about $290 million, for $500 million in damages, in addition to pressuring the ITC to block shipments of Arista’s products. It seems to want the startup out of business.

    For its part, Arista is sticking to a line of defense we’ve heard a lot recently. It says that the command codes aren’t subject to copyright protection and are available for use by all developers, adding that other companies use identical commands in their equipment. Its claim is that Cisco wants to eliminate competition and make an example of it.

    Arista has countered with an antitrust complaint that alleges Cisco is using the courts and its market dominance to suppress competition and punish customers by charging them higher prices if they don’t remain Cisco-only shops.

    “This lawsuit is an effort by an older established company with older technology to prevent fair competition by a young new company with new technology and better products,” Arista’s lawyer, Bob Van Nest, told jurors, according to a court transcript.

    Cisco’s general counsel Mark Chandler fired back in a statement released today: “We welcome fair competition, but ‘slavishly copying’ (Arista’s words) is neither fair nor innovative. It causes harm to innovation across the industry.”

    The two companies’ PR teams have been hard at work positioning the ITC ruling in their favor. Arista’s press release on the decision is headed “Arista Favored in ITC Initial Determination on Four out of Six Patents.”

    “We appreciate the tireless work of the [Administrative Law Judge] in this preliminary decision and are pleased that she found in our favor on four of the asserted patents,” said Marc Taxay, Senior Vice President, General Counsel of Arista Networks. “We do, however, strongly believe that our products do not infringe any of the patents under investigation and look forward to presenting our case to the full Commission.”

    For its part, Cisco’s announcement is titled “Protecting Innovation: ITC Confirms Arista Products Violate Additional Cisco Patents.” Chandler writes, “In my two decades at Cisco, we have initiated an action such as this against a competitor on only one other occasion. There is no question that Arista copied from Cisco. There is ample evidence and multiple admissions from Arista confirming they have done so.”

    The trial case is Cisco Systems Inc. v. Arista Networks Inc., 14-5344, U.S. District Court for the Northern District of California (San Jose).

    This article originally appeared here at The VAR Guy.

    6:38p
    How Microsoft and LinkedIn Can Make This Expensive Deal Work

    BLOOMBERG – As Microsoft officially swallows LinkedIn, it should have one goal: make this acquisition different. The company has a track record of big buys gone south, and writedowns have topped $13 billion since 2012. The purchase of Nokia’s handset unit seemed doomed from the start, but buying aQuantive, which made software for selling display ads on the web, seemed like a good idea, yet it ended up being a costly mistake. Here’s what Chief Executive Officer Satya Nadella should do to keep the recently completed LinkedIn deal from joining the ash heap of M&A history.

    1. Keep LinkedIn Chief Executive Officer Jeff Weiner, and for more than the two to three years that acquired executives usually stick around. Weiner is wildly popular among the staff. He talks a lot about “managing compassionately”: when LinkedIn shares plummeted last February after a bad earnings report, he held a company meeting to ease everyone’s fears. Then he gave up his $14 million stock award, distributing it to employees instead. He’s also one of the few Silicon Valley executives who can speak about a corporate mission  — helping people find better jobs — with enough sincerity for listeners to buy it.

    Microsoft has acquired companies with well-regarded leaders in the past, David Sacks’s Yammer and Mike McCue’s Tellme Networks among them. Both of those founders stayed at Microsoft for about two years before departing. Then their companies got subsumed, losing steam and staff. If you can’t remember what Microsoft did with Tellme, a voice-controlled telephone applications company, you’re not alone. And last month Microsoft introduced a new enterprise social service to compete with Slack, leaving many to ask: “Isn’t that what Yammer was supposed to do?”

    Right now, Weiner is saying the right things – “He truly believes this is his dream job,” said LinkedIn spokeswoman Melissa Selcher – but some wonder if he’ll be in it for the long haul. “I have to believe in three years, Jeff is not there,” said Steve Goodman, an investor and entrepreneur who sold his company, Bright.com, to LinkedIn in 2014. “LinkedIn has one of the leading mission-driven cultures in Silicon Valley,” he said. “Microsoft will have to tread carefully to maintain this. Jeff will have to thread a needle.”

    2. Let LinkedIn be LinkedIn. This is hard. Many acquirers don’t try giving the purchased company its independence, because the whole value of the deal comes from effective integration.

    “It’s very difficult to keep the cultures separate,” said Douglas Melsheimer, a partner at investment banking firm Bulger Partners. “I can’t think of an example of a large-scale acquisition like this where the acquired company really maintained any independence. Their fate is sealed, to a degree.”

    Microsoft doesn’t intend for LinkedIn to be run as a fully independent subsidiary, the way Warren Buffett’s Berkshire Hathaway acquisitions operate, said a person familiar with Microsoft’s plans, who didn’t want to be named because the assimilation planning is private. And LinkedIn will report financial results as part of Microsoft’s Productivity and Business Processes unit rather than getting its own line, according to Selcher. But to keep LinkedIn’s special sauce, Microsoft execs shouldn’t impose their will on Weiner, like demanding an integration with Windows that doesn’t make sense for LinkedIn.

    LinkedIn and Microsoft have cited Facebook’s 2012 acquisition of Instagram as a model. Instagram has its own CEO, campus and separate HR system, as well as smaller but important signals like different employee badges. Ryan Roslansky, LinkedIn’s vice president of product, said LinkedIn will retain many of the same privileges, including its own badges.

    Microsoft has tried this before. It said it would not take a heavy hand with Skype when that deal went through in 2011. For a while that worked. Then Microsoft’s approach changed. In 2013, then-CEO Steve Ballmer enacted a massive reorganization, called One Microsoft, combining groups with similar functions. Skype was lumped in with related Office apps. Since the acquisition, Microsoft has almost doubled Skype’s users and moved it into the more lucrative business market, but bumps and competitive battles remain.  In September, Microsoft shuttered Skype’s London office, fired a few hundred workers, and replaced the head of the business. It lags Cisco in execution and vision for corporate use, according to market research firm Gartner, and it’s losing consumers to Apple’s FaceTime and Google’s Hangouts.

    3. Keep talented managers and engineers motivated. As soon as the deal was announced, LinkedIn embarked on an ambitious effort, “20 in 20.” The company picked 20 projects it had planned for the coming nine months and sped up deadlines, aiming to complete them in 20 weeks, said LinkedIn’s Roslansky.

    “When you are acquired, it’s human nature that things tend to slow down a bit. There’s a lot of questions of: ‘What do I do now?’ ” he said. “It was very important to us to make sure everyone was motivated.”

    It’s going to be a challenge for Weiner to maintain that energy, said Wes Miller, an analyst at Directions on Microsoft, a market research firm. “I’ve seen so many companies get acquired by Microsoft and the exodus begins.”

    Danger, a mobile phone company started by Android founder Andy Rubin, was acquired in 2008. Instead of making a breakthrough device, most of the employees left and the unit produced the Kin phone, possibly the biggest flop in Microsoft hardware history. aQuantive saw a similar brain drain, as Microsoft both sold off the ad-agency part of the business and decided to shift focus away from display ads, the whole raison d’être of the acquisition in the first place.

    Ultimately, the real measure of the combination’s success will be how well it performs financially. The companies need to find compelling product integrations that justify the $26 billion deal price tag. LinkedIn generated more than $3.6 billion in sales in the 12 months through September and lost money. For the deal to be deemed a win, money has to be made.

    For now, the two companies are starting with a smaller, more manageable number of joint products where they know they can succeed, said Roslansky. They plan to sync LinkedIn networks with Microsoft Outlook e-mail, so when you get a message and can’t remember who the person is, you’ll get information from LinkedIn, Weiner said in an interview in September. If the person’s not in your LinkedIn network, you can add that person with a click, he said.

    “There’s a tendency when a deal like this happens to say here are 100 ideas of crazy things we can do together. We purposely kept that list short,” Roslansky said. “If many of them go wrong, it sets a bad tone.”

    Weiner also wants to enhance the help function of Microsoft products like PowerPoint. If you need help with a slide deck, clicking help brings up the equivalent of an online product manual. Weiner aims to use this function to connect you to people on LinkedIn with relevant experience. If your presentation is about using artificial intelligence tools for marketing, the help button could point you to experts in the field and to freelancers who can assist for a fee. It could direct you to LinkedIn learning courses in the area, he said.

    Investors should be patient. A person familiar with Microsoft’s thinking said it probably won’t be possible to tell if the acquisition has worked for about five years.

    Miller, the analyst, agreed. “You will never recoup the direct spend to buy LinkedIn,” he said. “But does the deal make LinkedIn better? Does it make Microsoft better? That’s the end game.”

     

