Friday, July 29th, 2016
12:45a
Larry Ellison Accepts the Dare: Oracle Will Purchase NetSuite
“I started NetSuite. NetSuite was my idea,” famously stated Oracle’s founder and then-CEO Larry Ellison during a public appearance in 2012. Ellison was talking about one of the originators of SaaS, a company for which he was indeed the initial backer, reportedly holding more than 40 percent of its stock as recently as yesterday.
Of course, this is the same fellow who discounted the entire concept of cloud computing, only to take credit for having invented it later.
When the always outspoken, never predictable Ellison, now his company’s Chairman and CTO, boasted during Oracle’s earnings call last March, without any prompting from colleagues or analysts, that, hey, Oracle Fusion ERP has 10 times the number of customers — no, make that more than 10 times the customers — of rival ERP producer Workday, any analyst with so much as a single ear knew something was up.
“And ERP has always been a much larger market than CRM,” explained Ellison, perhaps arguing against someone else in the room at the time whom we didn’t hear. “Salesforce.com is missing all of that ERP market opportunity. The breadth of our ERP HCM and CRM SaaS product portfolio, combined with the technical superiority of our underlying SaaS cloud services, should enable us to sustain our rapid cloud growth for a long period of time. And that, in turn, should make it easy for Oracle to pass Salesforce.com and become the largest SaaS and PaaS cloud company in the world.” [Our thanks to Seeking Alpha for the transcript.]
Easy, maybe, with the help of acquiring NetSuite, in a transaction independently valued at $9.3 billion.
ERP: Legacy or Trailblazer?
Enterprise Resource Planning (ERP) is not the typical subject of a Datacenter Knowledge profile. But the market segment is important in our context, because it helped launch the SaaS market, first by providing the tension it needed to create a clear market demand, and then by developing a service that only stronger data centers could fulfill.
The tension part came courtesy of legacy software. The first ERP products were developed for specific job categories — procurement had its own planning tools, as did sales, while marketing was separate again. Eventually, the U.S. Bureau of Labor Statistics sought to simplify matters, in a somewhat desperate effort to whittle the field down, by formally declaring just eight categories of ERP. Even when SAP, the ERP leader at the time, tried to appease the U.S. Government — one of its largest customers — it managed to sneak more categories into its proofs of concept, until finally it was getting away with 15.
NetSuite’s initial appeal was not so much that it was cloud-based, but that its ERP was not modularized. Its main goal has always been to facilitate a single planning tier. However, leading customers to that goal has been such a slow process that it has become an industry in itself. In October 2012, as part of a strategic agreement between the two companies, NetSuite announced the production of a two-tier ERP platform for Oracle. A “two-tier” platform enables an organization to maintain its legacy system — in this case, Oracle Fusion ERP (although Ellison would beg to differ about the characterization) — while working to integrate new processes into the organization through NetSuite.
Integration is the Key
Integration is undoubtedly one of the leading drivers for enterprise cloud adoption today, not so much as a way to move away from old platforms as to effectively “embrace and extend” them. It’s part of what makes NetSuite such an attractive acquisition target for Oracle.
“I see ERP moving to SaaS very, very quickly,” said Paul Hamerman, Forrester’s vice president and principal analyst, in a conversation with Datacenter Knowledge Thursday. “I’ve been studying some of the adoption data in this market for some time, and it’s pretty recently reaching an inflection point where SaaS is becoming a preferred delivery model for ERP — not necessarily among the largest companies, but certainly in the mid-market.”
When Ellison’s March comment touched off rumors of Oracle’s pending NetSuite acquisition, financial analysts warned against the move, saying that the future value of NetSuite’s customer strategy in the ERP market was uncertain. But Hamerman — a technology analyst, not a financial one — strongly disagrees, citing NetSuite’s two-tier integration strategy that began back in 2012.
Because Ellison has always been NetSuite’s largest shareholder, Hamerman believes, NetSuite maintained a kind of policy that intentionally restrained it from aggressively going after Oracle’s customers. This let both Oracle and NetSuite target SAP, arguably the pioneer in that category.
Nevertheless, Hamerman’s data tells him, NetSuite ended up acquiring Oracle’s customers anyway, as the journeys begun by the two-tier strategy started coming to a close. Even though many analysts and observers have noted a general market trend away from all-public cloud deployments and toward hybrid, with respect to ERP, organizations were moving toward NetSuite and its public SaaS.
“The company’s been growing at thirty-plus percent for several years now,” the Forrester analyst told us, “and it was inevitable that the two [Oracle and NetSuite] would be competing with one another more explicitly in those deals.
“So one of the things Oracle will gain in this acquisition is, it won’t have to compete with NetSuite any more. And they can integrate them into an overall strategy where they can position their cloud ERP solution into certain segments of the market, positioning Oracle more in the mid-market.”
What should have been obvious to everyone, though it took Forrester’s Hamerman to point it out, is that NetSuite was already being supported and hosted on Oracle infrastructure. So Oracle does not need to reinvent the wheel to integrate NetSuite into its operations. NetSuite can continue doing business as it has been, including offering its ERP cloud services as an integration platform.
“Oracle’s cloud business is growing very, very quickly,” the analyst pointed out, “but its traditionally on-premise ERP products — including PeopleSoft, E-Business Suite, and JD Edwards — are declining. So it’s an important acquisition for Oracle, in order to push their revenues more towards the subscription side, and really outrun the decline on the traditional revenue side of licenses and maintenance.”
Right now, Oracle could do well with an acquisition capable of running under its own power. When an acquisition requires guidance and direction from Oracle — as has certainly been the case with Sun Microsystems’ assets, especially Java — and the acquired asset begins languishing, its supporters are quite capable of starting a revolt.
Regardless of the outcome, however, we can perhaps expect Larry Ellison to say, I told you so.

4:30p
Google Results Show Signs of Cloud Progress Under Greene
(Bloomberg) — Google has spent years telling Wall Street its investments in non-advertising businesses will eventually pay off. Thursday’s results suggest that’s beginning to happen.
Google parent Alphabet Inc. reported second-quarter earnings and sales that beat analysts’ estimates. The “Other” part of the Google business saw sales jump 33 percent to a record $2.17 billion. Growth in Google’s cloud-computing and corporate software businesses drove the gains. The shares rose 4.6 percent in early trading Friday to $800.83.
Alphabet board member Diane Greene was brought in to run Google’s cloud and work apps businesses in November. Since then, she’s made hires in sales, marketing, global alliances, industry solutions and professional services, Google Chief Executive Officer Sundar Pichai said during a conference call with analysts Thursday.
Read More: Google Launches Its First Cloud Data Center on West Coast
Alphabet added more than 2,000 employees in the second quarter and most were engineers and product managers to support growth in priority areas such as cloud and apps, Chief Financial Officer Ruth Porat said on the call.
Those hires, combined with new products, have helped Google sell cloud-based software and services more effectively to large companies — something it struggled to do before Greene arrived. “We now have key leadership in place and centralized teams,” Pichai said. “I see a shift to a world-class enterprise approach and it’s definitely having an impact on the type of conversations we are having.”
Cloud computing is a strategic growth area for Google, which is seeking to turn the infrastructure it built for its mammoth web operations into services that other businesses can rent. The market for cloud services in 2016 is worth about $204 billion, according to Gartner. Though Google is praised for technical expertise, it is fourth in cloud services behind Amazon.com Inc., Microsoft Corp. and International Business Machines Corp., according to Synergy Research Group.
On Thursday, Amazon Web Services, Amazon’s cloud business, reported a 58 percent jump in quarterly revenue to $2.89 billion. Microsoft’s Azure cloud revenue doubled in the same period. So Google’s Greene still has work to do to catch up.
Google’s “new cloud business could be growing even faster than 33 percent,” said Jitendra Waral, an analyst with Bloomberg Intelligence. “But obviously, it’s very early stages if you look at the scale Amazon is at.”
The cloud business is part of Alphabet’s broader efforts to diversify into other industries and reduce reliance on ads, which were responsible for $19.1 billion of Alphabet’s $21.5 billion in second-quarter sales. The ad business could see margins compress in coming years as more ads are bought automatically through so-called programmatic technology and the company runs more lower-priced mobile ads.
Alphabet’s longer-term “Other Bets” businesses, which include self-driving cars, fast internet services, and health-care ventures, saw second-quarter sales more than double to $185 million from $74 million a year earlier. Operating losses in the division rose as well, but at a slower rate than sales climbed.

5:11p
CORD for Telcos Now a Linux Project with Google’s Backing
The biggest consumer of computing power by volume may very well be the telecommunications industry. In recent years, the world’s major telcos have been desperately urging engineers and developers to build a real cloud that works for them — one that can be staged, provisioned, and orchestrated like a common enterprise data center. Just bigger.
This morning, one of the candidates for that architecture declared itself ready to be thoroughly vetted. What has recently been called Central Office Re-architected as a Datacenter has formally been christened the CORD Project, organized under the oversight of the Linux Foundation.
And the site of the inaugural CORD Summit today tells you everything you’d need to know about who wants to drive this architecture: the big Tech Corner Campus building at Google’s Sunnyvale campus.
“One of the benefits of creating CORD as an open source project by itself is that it can have its own board, governance, steering team, and community,” explained Guru Parulkar, the executive director of the Open Networking Lab, in an interview with Datacenter Knowledge. “They can try to push CORD forward and realize the potential of this platform.”
He’s referring to a specification for ordinary, commodity servers used in telcos’ central offices. That specification includes the architecture and arrangement of server racks, and the access racks contained within them. It also includes a software platform that could be implemented on new servers, as well as those which telcos already own and operate, and those they may purchase in the future.
Homogenization
The move to an agile staging ground for virtual network functions (VNFs, the applications-as-services of the telco world) would be far too expensive to roll out if it required entirely new equipment. Telcos need to be able to utilize at least some of their existing central office equipment, extending their facilities as they can afford to, while deploying new classes of service as readily as possible.
“If you look at the typical central office over the years, it may have 300 different types of closed, proprietary equipment,” explained Parulkar. “What that means is, central offices represent significant CapEx and OpEx for service providers. Their lack of programmability means lack of innovation, and lack of new services. And that is what the service providers are wanting to change.”
OpenStack has accomplished something on the order of what telcos aim to do, only in enterprise data centers. CORD would appropriate OpenStack, along with Docker containers and the open source SDN controller ONOS (which Parulkar also leads), to create a platform that makes a variety of servers with differing specifications look more homogeneous to the VNFs it provisions — to “homogenize” the platform.
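To illustrate the homogenization idea in the abstract (this is a toy Python sketch, not CORD code, and every class and field name in it is invented), a placement layer can reduce differently specced servers to a single pool of capacity that VNFs draw from without knowing what hardware sits underneath:

```python
# Toy sketch only: not CORD code. Shows the "homogenization" idea in the
# abstract: heterogeneous servers are reduced to uniform capacity units so a
# VNF can be placed without caring which box it lands on.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Server:
    name: str        # hypothetical inventory name
    cores: int
    ram_gb: int
    allocated_cores: int = 0


@dataclass
class VNF:
    name: str
    cores_needed: int


class HomogenizedPool:
    """Expose many differently sized servers as one pool of CPU cores."""

    def __init__(self, servers: List[Server]):
        self.servers = servers

    def free_cores(self) -> int:
        return sum(s.cores - s.allocated_cores for s in self.servers)

    def place(self, vnf: VNF) -> Optional[Server]:
        # First-fit placement: the VNF never sees vendor or model differences.
        for s in self.servers:
            if s.cores - s.allocated_cores >= vnf.cores_needed:
                s.allocated_cores += vnf.cores_needed
                return s
        return None


if __name__ == "__main__":
    pool = HomogenizedPool([
        Server("legacy-co-box-1", cores=8, ram_gb=32),
        Server("whitebox-7", cores=48, ram_gb=256),
    ])
    host = pool.place(VNF("virtual-bng", cores_needed=12))
    print(host.name if host else "no capacity", pool.free_cores())
```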
Last March, at the Open Networking Summit in Santa Clara, a cavalcade of CORD’s prospective customers — including AT&T, Verizon, China Unicom, and Korea’s SK Telecom — discussed what were then the emerging issues behind unifying on CORD as a reference platform. Among those addressing their concerns was Ayush Sharma, senior vice president of Huawei, who spoke of the need for an open orchestrator (Open-O).
“When you build those data center-like networks,” said Sharma, “[with] those practices coming from the world of data centers, orchestration is one which was not used in networking. You need to use those practices to orchestrate these [function] slices.”
In recent months, Google has successfully managed to steer the containerization movement toward its open source orchestrator, Kubernetes. Just this week, commercial OpenStack producer Mirantis announced it’s working with Google and Intel to replace OpenStack’s existing deployment and management component, Fuel, with Kubernetes.
As Parulkar confirmed for Datacenter Knowledge, Google has officially joined the CORD Project, and has plans during CORD Summit to discuss a similar brain transplant for CORD. On CORD’s roadmap, he said, are details for implementing Kubernetes, either in addition to or in place of Docker. “That is something we were wanting to do,” said Parulkar, “and with Google, we hope that gets accelerated.”
Could CORD Lead the Way?
Today’s announcement marks the official publication of CORD as an open source project. At the heart of CORD architecture, Guru Parulkar explained to us, is a fabric built with leaf/spine architecture, utilizing white box servers and appliances that host VNFs connected using OpenFlow protocol (which Parulkar and colleagues created at Stanford University).
Unlike the typical enterprise data center, he explained, central offices make use of access racks — one layer of compartmentalization up from server racks.
“A central office may have 10,000 residential subscribers coming in on GPON networks,” he explained, “10,000 mobile customers coming in on some kind of backhaul, and maybe enterprise customers coming in on Metro Ethernet. What we are trying to do is help build those access racks, with the same philosophy and same approach that is merchant silicon, white boxes, and open source.”
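As a rough mental model only (the device names, counts, and attachment points below are invented for illustration and are not drawn from the CORD specification), the leaf/spine fabric and the access side Parulkar describes can be sketched as a small adjacency map:

```python
# Toy sketch only: a miniature leaf/spine fabric with access-side devices
# (GPON aggregation, mobile backhaul, Metro Ethernet) hanging off the leaves,
# loosely mirroring the central-office layout described above. All device
# names and counts are invented for illustration.
from collections import defaultdict


def build_fabric(num_spines=2, num_leaves=4):
    """Return an adjacency map in which every leaf connects to every spine."""
    links = defaultdict(set)
    spines = [f"spine-{i}" for i in range(num_spines)]
    leaves = [f"leaf-{i}" for i in range(num_leaves)]
    for leaf in leaves:
        for spine in spines:
            links[leaf].add(spine)
            links[spine].add(leaf)
    return links, leaves


def attach_access(links, leaves):
    """Attach access aggregation to leaf switches, the way an access rack
    feeds subscriber traffic into the fabric."""
    access = {
        "gpon-olt": leaves[0],          # residential subscribers
        "mobile-backhaul": leaves[1],   # mobile customers
        "metro-ethernet": leaves[2],    # enterprise customers
    }
    for device, leaf in access.items():
        links[device].add(leaf)
        links[leaf].add(device)
    return links


if __name__ == "__main__":
    links, leaves = build_fabric()
    links = attach_access(links, leaves)
    for node in sorted(links):
        print(node, "->", sorted(links[node]))
```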
Are there any lessons from what CORD is learning about orchestrating services for tens of thousands of simultaneous customers that enterprises outside of telecommunications can put to use in their data center architectures? After all, if this is an open source project, what good is openness unless people can see what you’re doing?
Parulkar pointed us to an emerging subset of CORD’s mobile deployment reference platform, which he referred to as “M-CORD Lite,” which he hopes to publish during the next quarter.
“The idea is, right now it’s a big rack, and it’s not replicable and easy to deploy,” he explained. “We are building a portable M-CORD Lite rack, so that it is very easy to replicate and allow people to do experimentation. If they have the organization and the expertise, they can take the M-CORD Lite, deploy it, operate it, and get going. Our plan is that it should cost somewhere between $10,000 and $20,000. By the cellular industry standard, it is cheap.”
Parulkar cited figures from member telco AT&T stating it operates some 4,700 central offices nationwide. AT&T is already a principal contributor to CORD architecture, and began rolling out solutions based on an earlier version of CORD in June of last year.

10:57p
Intel Finally Releases Its Rack Scale Design to Open Source
Just a few weeks ahead of its upcoming IDF conference in San Francisco, Intel announced Thursday that the data center asset utilization platform it has been assembling since 2013 is now ready to be shared with the open source community. Rack Scale Design (RSD, no longer called “Rack Scale Architecture” for obvious reasons) is now being published by Intel.
In sharp contrast with Facebook’s Open Compute Project reference architecture, RSD is an effort to arrive at an acceptable industry standard around the arrangement of disaggregated computing resources in standard-sized racks. At the center of the discussion is software-defined infrastructure. Rather than produce proprietary software that manages virtual assets pooled together from these racks, Intel is publishing libraries of REST-based APIs that exchange JSON, with which developers may produce their own monitoring and management software, or integrate existing software with RSD systems.
This way, other people’s software can define the infrastructure.
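As a sketch of what consuming such an API might look like (the pod manager address, credentials, endpoint path, and JSON field names below are assumptions following generic Redfish-style conventions, not Intel's documented interface), a small monitoring script could poll the published REST resources directly:

```python
# Hypothetical sketch of polling a pod manager's REST interface for composed
# nodes. The base URL, credentials, path, and JSON field names follow generic
# Redfish-style conventions and are assumptions made for illustration; consult
# Intel's published RSD API reference for the actual resource layout.
import requests

PODM_BASE = "https://podm.example.net:8443"   # hypothetical pod manager address
AUTH = ("admin", "change-me")                 # placeholder credentials


def list_composed_nodes():
    """Fetch the node collection and print a one-line summary per node."""
    resp = requests.get(f"{PODM_BASE}/redfish/v1/Nodes", auth=AUTH, timeout=10)
    resp.raise_for_status()
    for member in resp.json().get("Members", []):
        node = requests.get(f"{PODM_BASE}{member['@odata.id']}",
                            auth=AUTH, timeout=10).json()
        print(node.get("Name"), node.get("PowerState"))


if __name__ == "__main__":
    list_composed_nodes()
```

The point is only that anything able to speak REST and parse JSON, whether an in-house dashboard or an existing management suite, can sit on top of the published interface.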
“This is the first step in preparing the broader ecosystem for pooled resources and a path to a software-defined infrastructure,” wrote Charles Wuischpard, Intel’s VP/GM for Scalable Datacenter Solutions, in a company blog post Thursday. “Through its ability to provide a new systems-level architecture that uncouples a system’s resources, Intel Rack Scale Design helps hyperscale operators address the challenges of growing workload complexities and the sheer scale of usage demands.”

According to Intel’s newly published RSD hardware design guide [PDF], software will perceive a data center quite differently than its floor plan would imply. At the center of the RSD data center is a pod manager, represented in the NASA-like documentation as the PODM. It collectively addresses all the servers that reside within its management domain. “Racks” become physical culminations of pooled systems — real, tangible enclosures for virtual components. But the collection of one or more physical racks into a virtually addressable enclosure is what RSD calls a pod (not to be confused with a “pod” in Kubernetes or another orchestration platform).
Each of the virtual pools contained within a pod has a pooled system management engine, the PSME. It runs separately from the control plane (CPP), possibly to enable a variety of open source components to serve the SDN control plane management role. Individual blades within the physical racks interface with the virtual pools by way of module management controllers (MMC). So you can see how the virtual configuration breaks down into the physical configuration by way of layers, rather than a single layer of abstraction.
Intel specifies that each rack should contain at least one PSME; should be capable of sharing a power bus bar, or at least utilize a high-efficiency unit; should be capable of sharing cooling modules or housing a centralized cooling module across the rack; should have at least one Ethernet switch; and should include at least one compute blade node (no storage-only nodes). Physically, each rack is divided into one or more uniquely addressable drawers, each of which connects to the PSME.
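To make the nesting concrete, here is an illustrative Python data model (not Intel's schema; the class and field names are invented) that mirrors the hierarchy above and checks the per-rack minimums as this article summarizes them:

```python
# Illustrative data model only, not Intel's schema: a pod manager (PODM)
# addresses pods, pods group physical racks, racks divide into drawers whose
# blades report through module management controllers, and each rack carries
# at least one PSME. All class and field names are invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Drawer:
    blade_nodes: int            # compute blades reporting via an MMC
    storage_only: bool = False


@dataclass
class Rack:
    psme_count: int
    ethernet_switches: int
    shared_or_efficient_power: bool    # shared bus bar, or a high-efficiency unit
    shared_or_central_cooling: bool    # shared cooling modules, or one central module
    drawers: List[Drawer] = field(default_factory=list)

    def meets_minimums(self) -> bool:
        """Per-rack minimums as this article summarizes them."""
        has_compute = any(d.blade_nodes > 0 and not d.storage_only
                          for d in self.drawers)
        return (self.psme_count >= 1
                and self.ethernet_switches >= 1
                and self.shared_or_efficient_power
                and self.shared_or_central_cooling
                and has_compute)


@dataclass
class Pod:
    racks: List[Rack]


@dataclass
class PodManager:                      # the PODM addresses its whole domain
    pods: List[Pod]

    def compliant_racks(self) -> int:
        return sum(r.meets_minimums() for p in self.pods for r in p.racks)


if __name__ == "__main__":
    rack = Rack(psme_count=1, ethernet_switches=1,
                shared_or_efficient_power=True, shared_or_central_cooling=True,
                drawers=[Drawer(blade_nodes=4)])
    print(PodManager(pods=[Pod(racks=[rack])]).compliant_racks())
```

Nothing here is normative; it only makes the layering in the design guide easier to picture.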
“We expect solutions based on Intel Rack Scale Design to be available from multiple hardware and software vendors by the end of this year,” wrote Intel’s Wuischpard. “To date, numerous nodes have been deployed at multiple telco and cloud service providers via our hardware and software partners.”
Ericsson’s HDS 8000 is perhaps the most visible realization of a server design based on the Intel RSA principles (as they were then known). Although HPE and Dell have both presented their own spins on “hyperscale architecture,” including through poolable resources (e.g., HPE’s “composable infrastructure”), both manufacturers, as well as Lenovo, are partners with Intel in RSD.
“Using Intel® RSD, Intel empowers its partners to separate the Intel-made CPUs from the rest of the components — both physically and functionally,” wrote Ericsson’s Michael Bennett Cohn, in a company blog post Thursday. “Any given process uses the resources it needs; no more, no fewer. When Intel comes out with their next generation of CPUs, those CPUs can be added to the resource pool without disruption. There is no need to throw out the components associated with the old CPUs; they too, will remain part of a resource pool.”
Intel will very likely provide more details on RSD on August 16, when its IDF ’16 conference opens in San Francisco.