Data Center Knowledge | News and analysis for the data center industry
Wednesday, February 1st, 2017
ERP Forecast: Partly Cloudy – Private vs Public vs Hybrid Solutions
Glenn Johnson is a Senior Vice President at Magic Software Americas.
ERP cloud offerings are growing in demand because they enable companies to reduce upfront development costs, scale systems up and down easily and speed up deployment times. Unlike traditional ERP systems, which are installed on dedicated servers on a company’s premises, cloud-based ERP systems run on third-party servers and software and are accessed via the Internet. While businesses can run their ERP in public Software-as-a-Service (SaaS) models or as private, self-managed ERP cloud installations, we are seeing an increasing trend toward a hybrid cloud computing model in which core ERP processes are deployed in the cloud while some best-of-breed solutions are still hosted on-premise in the company data center.
Companies are faced with the challenge of deciding which ERP cloud computing model gives them the right balance between agility and control, and then managing all the ERP data in a consistent and efficient way while having the flexibility to evolve with rapidly changing business needs.
What Goes Where?
In addition to choosing whether to run ERP in a public or private cloud, companies can choose to extend ERP functionality with third-party best-of-breed solutions on-premise or in the cloud. Public clouds deliver services over a network that’s shared by other businesses in a multi-tenant fashion, making the service more cost effective while leveraging investments in advanced technology by giants such as Amazon, Microsoft, Oracle and Salesforce.
The pay-as-you-go scalability of public clouds is ideal for heavy or unpredictable traffic and when there is a need to implement a single set of operational and administrative processes globally across several locations or subsidiaries of a large multi-national corporation.
With a private cloud, services are maintained on a private network protected by a firewall. Private clouds provide enhanced security and ultimate control with more data visibility which can help organizations meet data security regulations for health and financial organizations. An externally hosted private cloud provides the cost advantages of hosted services with more privacy than public clouds.
The Pros and Cons of Hybrid Solutions
A hybrid cloud includes both cloud and on-premise solutions – often from multiple providers. Hybrid clouds offer variety, so companies can pick and choose which aspects of their business are better off in a public or private cloud versus on premise. In addition, if users want to scale computing requirements beyond the private cloud and into the public cloud, they can switch resources quickly – otherwise known as “cloud bursting”.
A hybrid cloud can also involve on-premise ERP systems and integration to cloud-based third-party applications. Stellar, a leader in the Engineering and Construction Industry, uses Magic xpi to integrate their on-premise Oracle JD Edwards ERP system with an F1 Field Service Management system in the cloud.
A hybrid two-tier ERP model in which companies run more than one ERP system, often a primary one at headquarters and additional cloud ERP services at subsidiaries, is popular with many customers of cloud ERP vendors.
The Integration Challenge
However, all of these different cloud computing models mixed with on-premise solutions create confusion as to where integration should reside. For example, it’s possible for one company to run JD Edwards in an Oracle private cloud, SAP ECC on HANA in a private cloud, and a separate Salesforce.com SaaS application in a public cloud that integrates with both ERP environments.
The challenge is finding ways to manage all the different data flows in and out of public and private clouds. The most painless path to cloud integration runs through an integration platform that is fully interoperable, not locked in to one environment or architecture, and able to handle all of the different data types and computing environments.
This is important since the “best” deployment model for today might not be the best model in a few years, or even a few months. Integration platforms provide the flexibility to maintain ERP integration as requirements evolve. For example, an integration platform makes it relatively easy to redirect incoming CRM data from a private ERP cloud to a public cloud ERP offering as public ERP clouds become more commonplace thanks to enhanced functionality, improved security and lower operating costs.
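To make that concrete, the heart of such an adapter is usually a field-mapping step. The sketch below is illustrative only – the field names and the mapping are hypothetical, not Magic xpi or any particular vendor’s API – but it shows the shape of the transformation an integration platform automates when it moves CRM records into an ERP system:

```python
# Illustrative sketch: hypothetical field names, not a specific vendor's API.
from datetime import datetime, timezone

# Example mapping from CRM opportunity fields to ERP sales-order fields.
CRM_TO_ERP_FIELDS = {
    "AccountId": "customer_id",
    "Amount": "order_total",
    "CloseDate": "order_date",
    "CurrencyIsoCode": "currency",
}

def crm_opportunity_to_erp_order(opportunity: dict) -> dict:
    """Translate a CRM opportunity record into an ERP sales-order payload."""
    order = {erp_key: opportunity.get(crm_key)
             for crm_key, erp_key in CRM_TO_ERP_FIELDS.items()}
    # Metadata the ERP side typically wants for auditing.
    order["source_system"] = "CRM"
    order["synced_at"] = datetime.now(timezone.utc).isoformat()
    return order

if __name__ == "__main__":
    sample = {"AccountId": "0015x01", "Amount": 12500.0,
              "CloseDate": "2017-02-01", "CurrencyIsoCode": "USD"}
    print(crm_opportunity_to_erp_order(sample))
```

Because the mapping is declarative, pointing the output at a public cloud ERP endpoint instead of a private one becomes a configuration change rather than a rewrite, which is exactly the flexibility described above.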
Private clouds are easier to control and customize, while public clouds are more resource efficient. Hybrid cloud configurations can provide enterprises with the best of both worlds, with the option to shift resources back and forth as business needs evolve. Integration platforms deliver the agility companies need to provide one unified platform to manage data and business process complexity for today’s and tomorrow’s cloud computing needs.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
This Server’s Uptime Puts Your SLA to Shame
Brought to You by The WHIR
An unusual and noteworthy retirement from the IT industry is scheduled to take place in April, Computerworld reports, when a fault-tolerant server from Stratus Technologies running continuously for 24 years in Dearborn, Michigan, is replaced in a system upgrade.
The server was set up in 1993 by Phil Hogan, an IT application architect for a steel product company now known as Great Lakes Works EGL.
Hogan’s server won a contest held by Stratus to identify its longest-running server in 2010, when Great Lakes Works was called Double Eagle Steel Coating Co. (DESCO). While various redundant hardware components have been replaced over the years, Hogan estimates close to 80 percent of the original system remains.
The server runs on a version of Stratus’ proprietary VOS operating system, which Hogan tells Computerworld has been very stable despite not being updated since the early 2000s. The character-driven interface has proven simple enough for continued use, particularly in light of the reliability advantages.
While the value of fancy interfaces relative to reliability remains an ongoing debate within the industry, running server software which is still supported is generally recommended, though not always followed.
Jason Anderson, vice president of business line management for Stratus, told Computerworld that since the 2010 contest, the company has become aware of other servers that have been operating for over 20 years, though the one Hogan set up may be the oldest.
U.S. Steel acquired DESCO in 2015, and is now planning to upgrade its IT system in April, which will mark the end of a remarkable run of uptime.
This article first ran on The WHIR.
Report: Heartbleed Remains Unpatched on Thousands of Servers
Brought to You by The WHIR
The Heartbleed OpenSSL flaw remains unpatched on nearly 200,000 servers and devices, according to connected-devices search engine Shodan.
The company’s Heartbleed Report, released in January, shows 52,000 Apache HTTPD servers are still vulnerable and exposed to the internet, nearly three years after the vulnerability was disclosed and fixes released in April 2014. The number of servers vulnerable to Heartbleed was halved by June 2014.
The report shows that more than one in five of those vulnerable servers (42,032) are located in the United States, with AWS hosting the highest number, 6,375, followed by Verizon (4,328). The country with the next most vulnerable devices is South Korea with 15,380, almost half of them (6,376) belonging to SK Broadband.
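Shodan exposes this data through its API, so a breakdown like the one above can be reproduced programmatically. The snippet below is a sketch using the official shodan Python client; the API key is a placeholder, and the vuln: search filter assumes an account tier that includes vulnerability data, so treat it as an illustration rather than the method behind the report:

```python
# Illustration only: requires `pip install shodan` and an API key with
# access to the vuln: search filter (placeholder key below).
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder

def heartbleed_breakdown():
    api = shodan.Shodan(API_KEY)
    # CVE-2014-0160 is Heartbleed; facets summarize results without
    # downloading every matching banner.
    results = api.count("vuln:CVE-2014-0160",
                        facets=[("country", 5), ("org", 5)])
    print("Total exposed hosts:", results["total"])
    for facet in ("country", "org"):
        print(f"Top {facet} values:")
        for bucket in results["facets"][facet]:
            print(f"  {bucket['value']}: {bucket['count']}")

if __name__ == "__main__":
    heartbleed_breakdown()
```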
“The initial media blizzard for Heartbleed helped secure hundreds of thousands of devices (from 600,000 down to 200,000) but the subsequent follow-up has been lackluster as the problem keeps lingering,” Shodan founder John Matherly told Threatpost. This is despite most affected devices supporting TLSv1.2. “This means they support good encryption, unfortunately their dependencies are old,” he said.
Veracode senior director of security Tim Jarrett told Threatpost that the number of remaining servers highlights the complications that “forgotten servers” on public cloud services like AWS can bring.
“What used to require a sysadmin and a capital expenditure can now be done with a few lines of code. And we know that both real and virtual servers are easy to forget about, particularly when created outside of normal IT processes. So it’s unsurprising that some of these ‘forgotten servers’ are unpatched and dangerous,” Jarrett said.
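As an illustration of Jarrett’s “few lines of code” point, the sketch below uses the AWS boto3 SDK with a placeholder AMI ID (an assumption for illustration, not anyone’s actual setup). One API call launches a virtual server that, if created outside normal IT processes and left untracked, becomes exactly the kind of forgotten machine that never gets patched:

```python
# Sketch only: placeholder AMI ID; assumes AWS credentials are configured.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# A single call is enough to bring a new virtual server online.
instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",   # placeholder image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    # Tagging an owner at creation time is one simple guard against
    # the "forgotten server" problem described above.
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "owner", "Value": "team-or-person-responsible"}],
    }],
)
print("Launched:", instances[0].id)
```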
See also: Renowned Hacker: People, Not Technology Most Vulnerable Security Link
This article first appeared on The WHIR.
Siemens Taps New Boss With Head in the Cloud as Profits Surge
By Oliver Sachgau and Aaron Ricadela (Bloomberg) — Over the past 170 years, Siemens AG has forged a reputation as a manufacturer of trains, turbines and other huge things that weigh more than a house. So why is it asking a software guy to oversee the place?
Siemens on Wednesday introduced Jim Hagemann Snabe, a 51-year-old veteran of software house SAP, as its next chairman, an appointment scheduled to take effect next January, according to a statement. The move makes sense as Siemens seeks to adapt its 19th-century industrial heritage to the 21st century, says William Mackie, an analyst at Kepler Cheuvreux who rates the company a buy. The shares surged as much as 5 percent on Wednesday after the German manufacturer raised its outlook.
As Siemens products become more automated, they’re throwing off a growing flood of data that requires sophisticated software to really understand. And for Siemens, software and related services offer key advantages: they tend to be more profitable than industrial goods, they have predictable revenue streams, and they’re “sticky,” meaning they make it harder for a customer to defect to a competitor.
Snabe has “the crucial perspective of a man who has spent a large part of his career focused on the software and hardware elements of the technology sector,” Mackie said.
He also represents a stark departure from previous Siemens chairmen: The native of Denmark would be the first non-German in the job, and one of the few foreigners in top management. Snabe would also be the youngest person to ever serve as chairman, by more than a decade.
It’s his job experience, though, that really makes Snabe stand out. The company’s current chairman, Gerhard Cromme, 73, made his name at companies like industrial giant Thyssenkrupp AG and building materials producer Saint-Gobain, and earlier chairmen were mostly Siemens lifers.
Snabe has “in depth industry expertise in software and digitalization,” Cromme said in the statement, adding that the nomination sets the course for “long-term succession planning and continuity” at Siemens.
Snabe’s nomination comes as Siemens’s rivals are spending billions to reinvent themselves for the digital age. General Electric Co. has said that by 2020 it wants to become a top-10 software company, to rival Oracle and Microsoft. Swiss competitor ABB says it plans to boost its software and services revenue from 15 percent of total sales to about a third in coming years.
Siemens on Tuesday raised its full-year outlook after renewable energy projects and digital services led to better-than-expected first-quarter profit even as Europe’s biggest engineering company said orders are starting to weaken. The shares rose as much as 5.75 euros, the most in more than two months, and were up 4.5 percent as of 9:50 a.m. in Frankfurt.
Siemens, a company built on engineers and manufacturing know-how, says it must transform itself into a maker of fully digitized factories. In the past year it has pushed that strategy through major acquisitions like the $4.5 billion purchase of Mentor Graphics, a maker of equipment used to design and produce electronics. But in software, Siemens still gets help: Its flagship cloud offering, Mindsphere, incorporates SAP technology.
Snabe started at SAP as a trainee in 1990 and rose quickly, running by turns the consulting and product development groups. In 2010, he was named co-Chief Executive Officer, alongside Bill McDermott. Then in 2014, McDermott held on as the sole CEO while Snabe moved on to the supervisory board. The two took over SAP’s leadership after the ouster of CEO Leo Apotheker, who had alienated employees by cutting jobs.
Boardroom Tour
Snabe and McDermott reversed a long-standing aversion to big takeovers at SAP. Over their four-year tenure, Snabe and McDermott spent more than $14 billion on acquisitions, including database maker Sybase, sales software house SuccessFactors, and expense account manager Concur Technologies to add mobile and cloud computing skills.
“Jim has broad insight into the field of digital disruption and business opportunity,” McDermott said in an e-mail. He “can build trust with leaders and colleagues at every level.”
For the past couple of years, Snabe has sought to combine his experience in software with the real world, according to a person familiar with his thinking. Snabe has been advocating the view that the next wave of digital goods won’t simply displace physical products — think CDs being pushed out by MP3s and Netflix killing the DVD — this person said. Instead, they’ll serve to augment existing products: driverless cars or automated factories would be examples.
Cromme in 2013 invited Snabe to join the Siemens board. Snabe believes that as chairman he can help accelerate the company’s digital strategy, the person familiar with his thinking said. To better focus on Siemens, Snabe intends to reduce the number of other supervisory board positions he holds over the next year, the company said.
He isn’t likely to push Siemens to build its software capabilities via further buyouts: after a string of acquisitions culminating in the Mentor deal last year, company executives have indicated that they want to hit the pause button on big transactions. That means Snabe’s initial task will be to help CEO Joe Kaeser integrate those purchases and think more like a software house, said Ingo Schachel, an analyst at Commerzbank who rates Siemens a hold.
“He can definitely play an active part,” Schachel said. “He won’t make a 180-degree turnaround, but he will assume a noticeable role at the company.”
Google Ramped Up Data Center Spend in 2016
Google’s capital expenditures were up in 2016, reflecting more investment in data center construction than the year before, while 2017 is poised to see another spike in infrastructure spend.
The Alphabet subsidiary’s 2016 capex was about $10.9 billion, compared to $9.9 billion in 2015 – a 10 percent increase. Not all of Google’s capital spend goes to data centers, but the bulk of it does. On its fourth-quarter earnings call last week, CFO Ruth Porat said the company’s capital expenditures during the quarter reflected “investments in production equipment, facilities, and data center construction.”
The spike in capital spending is also in line with Google’s announcement last March that it would ramp up investment in data centers to support its enterprise cloud services, a business whose growth has been a top priority for the company as it races to catch up with cloud leaders Amazon and Microsoft.
See also: Who Leased the Most Data Center Space in 2016
Google’s share of the Infrastructure-as-a-Service market was 2.5 percent in 2015, according to Structure Research, while Amazon Web Services commanded 70.1 percent of the market; Microsoft Azure’s share was 10.8 percent of what was then an $11 billion market. Google was behind Alibaba, IBM, and Rackspace in terms of IaaS market share.
While Google already operates a huge global network of massive-scale data centers, not all of those facilities support its cloud services. The company said in March it would add 10 new cloud data center locations by the end of 2017. It launched two in 2016 – Tokyo and The Dalles, Oregon – and said it will launch the other eight this year, which means its 2017 data center spend will likely outpace 2016’s.
Read more: Google to Build and Lease Data Centers in Big Cloud Expansion
New regions due to come online this year are:
- Sydney
- Sao Paulo
- Frankfurt
- Mumbai
- Singapore
- London
- Hamina, Finland
- Northern Virginia
Google said it will announce additional regions throughout 2017.
In addition to building data centers for its cloud services, Google is also continuously expanding data center capacity to support its other businesses. Besides the two cloud regions launched last year, the company also brought online a new data center in Dublin, Ireland.
While the increase in capex between 2015 and 2016 was substantial, it was not the biggest spike the company has reported in recent years. There was a much bigger increase between 2013 and 2014 (from $7.4 billion to $11 billion, an increase of roughly 49 percent), which the company attributed to expenditures related to production equipment, data centers, and real estate purchases.
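As a quick back-of-the-envelope check of those year-over-year changes, using only the rounded figures reported above (not Alphabet’s filings):

```python
# Rounded annual capex figures cited above, in billions of US dollars.
capex = {2013: 7.4, 2014: 11.0, 2015: 9.9, 2016: 10.9}

def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

for year in (2014, 2016):
    prev = year - 1
    print(f"{prev}->{year}: {pct_change(capex[prev], capex[year]):+.1f}%")
# Prints roughly +48.6% for 2013->2014 and +10.1% for 2015->2016.
```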
Google reduced capital spending in 2015 before ramping up again last year.
Read more: What Cloud and AI Do and Don’t Mean for Google’s Data Center Strategy