Data Center Knowledge | News and analysis for the data center industry
Monday, July 24th, 2017
6:05a
Top Five Data Center Stories: Week of July 23
Here are the most popular stories that appeared on Data Center Knowledge this week:
10 Things Every CIO Must Know about Their Data Centers – While data centers aren’t necessarily something CIOs think about on a daily basis, there are some essential things every executive in this role must know about their organization’s data center operations.
You Can Now Earn a Bachelor’s in Data Center Facilities Engineering – Developed after lengthy consultations with Google, Facebook, and Microsoft, the program, the Institute of Technology in Ireland said, will focus on traditional enterprise data center practices, in the hope of graduating a class ready and able to help fill the skills gap in technology management and operation of data center facilities.
Report: VMware, AWS Mulling Joint Data Center Software Product – No details about the product are available at this point, but VMware’s value to AWS is its enormous enterprise data center install base, while VMware’s big business focus is currently helping customers create and run environments that combine their on-premises data centers with one or more public cloud providers.
Here’s How Azure Stack Will Integrate into Your Data Center – What it gives you is a system that’s not exactly the same as Azure running in an Azure data center but that’s consistent with it, using the same management API and portal, with many of the same services, giving you a unified development model.
Quantum Computing Could Make Today’s Encryption Obsolete – Quantum computers — which use quantum bits, or qubits — are capable of running computations impossible for existing technology. The technology promises to open up new possibilities in areas like medical research, artificial intelligence, and security. Oh, and they would also easily crack current encryption algorithms.
Stay current on data center industry news by subscribing to our RSS feed and daily e-mail updates, following us on Twitter or Facebook, or joining our LinkedIn Group – Data Center Knowledge.

12:00p
Oracle Wants Its Cloud to Grow Inside Your Data Centers
Taking a different approach to hybrid cloud than some of its biggest competitors, Oracle has substantially beefed up the capabilities of its on-premises cloud product, Oracle Cloud at Customer. It gives companies the ability to use its cloud services but have them run inside their own data centers.
The company launched the Oracle Cloud at Customer service last year. Oracle installs and manages all the converged hardware, software, and networking equipment on premises, essentially providing organizations with a private version of its public cloud through a subscription-based model.
Oracle last week expanded the services it offers this way to include Software-as-a-Service applications, such as ERP and CRM software, and its full suite of Platform-as-a-Service offerings. The company previously offered only Infrastructure-as-a-Service and some PaaS services on Oracle Cloud at Customer.
Giants’ Hybrid Strategies Start to Take Shape
Analysts say the announcement allows Oracle to better compete against cloud rivals such as Microsoft, IBM, Amazon, and Alphabet’s Google. The leading enterprise cloud players’ approaches to hybrid cloud are now starting to take shape, and each is quite different from the others.
Microsoft’s strategy, Azure Stack, is the closest to Oracle’s. Microsoft’s partners Dell EMC, Hewlett Packard Enterprise, and Lenovo started taking orders for the on-premises version of the public Azure cloud earlier this month.
“The update to [Oracle’s] Cloud at Customer is competitive with Azure Stack,” Dave Bartoletti, VP and principal analyst at Forrester Research, told Data Center Knowledge. “The appeal is for customers who want a bit of the Oracle Cloud running on-premises for whatever reason: security concerns, data residency, or simply a desire to control data and apps more directly.”
IBM’s Bluemix Private Cloud Local is another on-premises play similar to Oracle’s and Microsoft’s, while Amazon Web Services and Google Cloud Platform at this point offer only migration and integration services.
“Azure, IBM, and Oracle are all trying to place their clouds in customer data centers, while AWS and Google are trying to make it as easy as possible to connect customer data centers to their clouds. It’s a slightly different approach,” Bartoletti said.
For example, Google this month struck a deal with Nutanix, a hyperconverged infrastructure vendor, to help organizations build hybrid clouds that integrate their Nutanix environments with GCP. Meanwhile, AWS partnered with VMware to help enterprises easily integrate their existing VMware environments with its public cloud. Last week, however, an anonymously sourced report appeared, saying the two may have a joint on-premises data center software product in the works.
IDC research VP Carl Olofson said Oracle’s latest upgrades of Cloud at Customer are important for the company because it wants to move as many existing customers to the Oracle Cloud as possible. The hybrid approach allows IT organizations to take an intermediate step toward the cloud.
“People have complicated data centers with interconnections with Oracle and non-Oracle applications. So moving to the cloud is a big deal for them. They have to plan it out, decide which cloud to move to and how to do integration. Left to their own devices, it could take years to work this out,” he said. “By putting a remotely managed system in their data center, they can still physically connect to the things they need to connect to. It’s not too hard to adopt and digest because they don’t have to do everything at once.”
Every Type of …aaS
Oracle says its on-premises cloud has already seen a lot of success. The company last week announced that large enterprises across 30 countries, including AT&T and Bank of America, have adopted Oracle Cloud at Customer.
New features in the latest version include servers with faster processors, NVMe-based flash storage, and all-flash block storage, which provides faster IaaS performance, Nirav Mehta, VP of product management for Oracle Cloud at Customer, told Data Center Knowledge.
Other new services include improved security through Oracle Identity Cloud, which provides identity and access management services, as well as application development, Big Data and analytics services, he said.
Mehta noted that one big difference between Oracle and Microsoft’s on-premises offerings is that Oracle provides the entire service on its own, while Azure Stack users have to purchase the hardware and get support from a vendor other than Microsoft.
Oracle customers pay the same subscription prices as if they were using the public Oracle Cloud, he added. The equipment that is installed in customer data centers is owned and remotely managed by Oracle – much like a cable company providing home users a digital video recorder or modem for internet access.
“You can now bring the cloud right to your doorstep,” Mehta said. “The important thing is we took the exact same infrastructure in the Oracle Cloud and brought it to the customer. It is tested and proven at scale.”
Read more: Oracle’s Cloud, Built by Former AWS, Microsoft Engineers, Comes Online
What Does It Mean for the Data Center Manager?
While Oracle technicians install and manage the on-premises equipment, data center operators have to provide power, cooling, and network connections. Enterprises can use their own data centers or colocation facilities, whatever they feel most comfortable with, Mehta said.
“We use standard racks and specify what the data center manager must provide us in terms of power, cooling, and network connectivity,” he said. “They look at the spec sheets and work on sizing with us and the positioning of these racks.”
Enterprises that implement Oracle Cloud at Customer typically start with a minimum configuration of 100 compute cores, and as their needs grow, they can add more infrastructure. “They can slap them together like Lego blocks,” he said.
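The sizing exercise Mehta describes is, at its core, simple arithmetic. Below is a minimal sketch of that back-of-the-envelope calculation; the watts-per-core figure is a generic assumption for illustration, not Oracle’s published specification:

```python
# Back-of-the-envelope sizing for an on-premises cloud rack.
# All figures are illustrative assumptions, not Oracle's specs.

WATTS_PER_CORE = 15          # assumed draw per compute core, incl. overhead
BTU_PER_WATT_HOUR = 3.412    # standard conversion factor for cooling load

def rack_requirements(cores: int) -> dict:
    """Estimate the power and cooling a facility must provision."""
    power_kw = cores * WATTS_PER_CORE / 1000
    cooling_btu_hr = power_kw * 1000 * BTU_PER_WATT_HOUR
    return {"cores": cores, "power_kw": power_kw,
            "cooling_btu_hr": round(cooling_btu_hr)}

# Start at the 100-core minimum configuration, then grow "like Lego blocks".
for cores in (100, 200, 400):
    print(rack_requirements(cores))
```

Under these assumptions the 100-core entry point draws about 1.5 kW and needs roughly 5,100 BTU/hr of cooling; the real spec sheets Mehta mentions would supply the actual per-rack numbers.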
4:28p
UK Startup Raises $30M to Build AI Chips for Data Centers
Jeremy Kahn and Ian King (Bloomberg) — Graphcore, a startup that is designing chips specifically for artificial intelligence applications, has secured $30 million in funding, including from leading industry researchers, to fuel its continued growth, the company said.
Atomico, the London-based venture capital firm founded by billionaire Niklas Zennstrom, a co-founder of Skype, is leading the investment round. Graphcore’s existing investors, Amadeus Capital, Robert Bosch Venture Capital, C4 Ventures, Dell Technologies Capital, Draper Esprit Plc, Foundation Capital, Pitango, and Samsung Catalyst Fund, are also participating in the new funding.
In addition, a number of renowned machine learning experts are investing in Bristol, U.K.-based Graphcore as part of the deal. They include Demis Hassabis, the co-founder and chief executive officer of DeepMind, the Alphabet Inc.-owned artificial intelligence company best known for creating software that beat the world’s top players at the ancient strategy game Go.
The increasing use of artificial intelligence is opening the possibility of a shakeup in the data-center industry, which has been almost totally reliant on Intel Corp. processors for more than a decade. Companies such as Alphabet’s Google, Amazon.com Inc. and Microsoft Corp. have started using new types of chips — some they’ve designed themselves — to take on parts of the processing work they think can be done more effectively than by industry-standard microprocessors.
Machine-learning algorithms, which can require large amounts of computing power, are often run on graphics processing units. But these chipsets, made by companies such as Nvidia Corp., were originally designed for rendering images, mostly in video games and design applications. As a result, they are not ideally suited to artificial intelligence software, Siraj Khaliq, the Atomico partner leading the investment, said.
“GPUs are just not a great architectural fit for the problem,” he said.
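One way to see the mismatch Khaliq is pointing to is arithmetic intensity: how many operations a workload performs per byte of data it moves. Large graphics-style kernels reuse data heavily, while many machine-learning workloads, such as small-batch inference, do not, leaving GPU-style designs starved for memory bandwidth. A minimal sketch of that calculation, with made-up matrix sizes rather than Graphcore benchmark data:

```python
# Arithmetic intensity (FLOPs per byte moved) for a dense matrix
# multiply C = A @ B: a rough proxy for how compute- vs memory-bound
# a workload is. Sizes below are illustrative, not from Graphcore.

def matmul_intensity(m: int, n: int, k: int, bytes_per_elem: int = 4) -> float:
    flops = 2 * m * n * k                            # one multiply-add per element pair
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

# Large graphics-style kernel: plenty of data reuse, compute-bound.
print(matmul_intensity(4096, 4096, 4096))   # ~683 FLOPs/byte
# Small-batch inference layer: little reuse, memory-bound.
print(matmul_intensity(1, 4096, 4096))      # ~0.5 FLOPs/byte
```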
Read more: Nvidia CEO Says AI Workloads Will Flood Data Centers
Khaliq said the venture capital firm looked at a number of startups now working to create semiconductors specifically tailored to machine learning and found Graphcore’s “the most elegant.” He noted that while other companies – including Google – were designing computer chips specifically for neural networks, a popular kind of machine learning that mimics the architecture of the human brain, Graphcore’s design was more flexible, able to improve the performance of many different kinds of machine learning algorithms.
Atomico was also impressed with benchmarking tests of Graphcore’s first chip, which the company calls an Intelligence Processing Unit, or IPU, he said. Graphcore’s IPUs are from 10 to 100 times faster than most existing chip designs at critical machine-learning functions, according to the company.
See also: This Data Center is Designed for Deep Learning
Graphcore also is receiving money from Greg Brockman and Ilya Sutskever, who are among the co-founders of OpenAI, a San Francisco-based nonprofit AI research company, as well as Pieter Abbeel and Scott Gray, who are researchers affiliated with OpenAI. Zoubin Ghahramani, the chief scientist at Uber Technologies Inc. and a machine learning researcher at the University of Cambridge, is investing too.
“The key there is to have connections with luminaries in the field, people who give us insight into what people are doing today and what people would like to create going forward,” Nigel Toon, Graphcore’s chief executive officer, said in an interview. “What we have done is take a clean sheet of paper and talked to leading innovators and looked specifically at what they are trying to do and designed a processor to accelerate machine learning.”
The company will begin shipping IPUs to its first test customers by the end of the year and will begin selling them more broadly in 2018, he said.
“Compute is the lifeblood of AI,” Sutskever said. He said the move from general-purpose central processing units, or CPUs, to more powerful GPUs led to big advances in machine learning. A similar move from GPUs to chip architectures designed for deep learning, a kind of machine learning that uses neural networks, “will unlock spectacular and drastic progress in AI.”
Toon said Graphcore, which currently employs 60 people, would look to double or even triple in size by the end of 2018. The company raised an initial $30 million in funding in the summer of 2016.

5:00p
GoDaddy Drops Curtain on Its Cloud Business… Again
GoDaddy is shuttering Cloud Servers, its public cloud service. I know what you’re thinking: “GoDaddy is in the public cloud business?” Therein might lie the problem.
Launched only a year ago, Cloud Servers was never intended to go after the big guys — AWS, Azure, GCP, and the like — and had no dreams of competing for well-heeled, big-business customers. Instead, it was hoping to position itself as a gateway to the cloud for small and medium-sized businesses wanting to test the waters. In other words, it was hoping to take on DigitalOcean and Linode. It was also undoubtedly hoping to leverage the substantial customer base of its hosting business and convince some of those customers that their lives would only improve if they made a move to the cloud.
It based its cloud offering on the open source OpenStack platform that’s widely used in private and hybrid clouds. It also entered into a partnership with Bitnami, which gave potential customers an easy way to install apps in its cloud. Server configuration options and such were rather limited compared to the larger full-service clouds, but that was part of its design, to be oh-so-easy to use. When Cloud Servers rolled out in March of last year, it supported 26 languages, was available in 53 countries, and accepted 44 currencies for payment. GoDaddy was serious about this new cloud endeavor.
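GoDaddy never documented Cloud Servers’ API in the coverage here, but since the service was OpenStack-based, provisioning a server would have looked much like it does on any OpenStack cloud. A hedged sketch using the standard openstacksdk library; the cloud, image, and flavor names are placeholders, not GoDaddy’s actual catalog:

```python
# Minimal OpenStack provisioning sketch using the standard
# openstacksdk library. The cloud name, image, and flavor below
# are placeholders, not GoDaddy's actual catalog entries.
import openstack

# Credentials are read from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud="example-cloud")

server = conn.create_server(
    name="demo-server",
    image="ubuntu-16.04",      # hypothetical image name
    flavor="small-1gb",        # hypothetical flavor name
    wait=True,                 # block until the server is ACTIVE
    auto_ip=True,              # attach a floating IP if one is available
)
print(server.name, server.status)
```

Part of Cloud Servers’ pitch was hiding exactly this kind of plumbing behind a simplified interface, which is why its configuration options were deliberately limited.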
Evidently that didn’t work as well as the company had hoped. Last Thursday, media reports started coming out that GoDaddy was shutting its cloud down. The company made no formal announcement, and news sites only found out about the planned closing when Cloud Servers’ users began receiving notices informing them the service would stop being supported on December 31. Bitnami-supplied apps and development environments will make an earlier departure, losing support on November 15. The notice, of course, prompted a discussion on Twitter.
Eventually, GoDaddy senior VP Raghu Murthi confirmed the closure of Cloud Servers in a statement to ZDNet:
“After serious consideration, we have decided to end-of-life our Cloud Servers product. Our goal from the beginning was to create simple and scalable services for small and medium business owners. We’re proud of what we built, and now we are focusing on building robust and scalable solutions based on OpenStack infrastructure.”
Only two days earlier, on July 18, GoDaddy announced it was selling PlusServer, the Germany-based hosting business it acquired in April as part of its $1.79 billion acquisition of Host Europe Group, for $456 million. That sale wasn’t entirely unexpected, as GoDaddy had indicated when it acquired HEG that keeping PlusServer wasn’t in its plans.
The shuttering of Cloud Servers will likely have no more of an effect on the overall cloud economy than when HPE and Cisco disconnected their OpenStack-based cloud services. Probably even less. GoDaddy’s service appears to have never really been a player to begin with, but merely an idea that didn’t pan out.
It also wasn’t the company’s first try at being a public cloud provider. In May 2012 it launched a cloud service, also called Cloud Servers and also targeting SMBs, which it shut down five months later. At least this time it made it past the year mark. Maybe the third time will be the charm.

5:30p
Facilitating Digital Transformation in One Step
Chuck Rathmann is Senior Marketing Communications Analyst, North America, at IFS.
Business conditions are changing rapidly thanks to digital transformation, which an MIT Sloan Management Review article defines as the use of technology to radically change the performance or reach of enterprises.
Consumer businesses like Uber and Lyft come to mind, as they harness mobile technologies to manage virtual workforces and service offerings that disrupt traditional taxi companies. Industrial organizations may add sensors to their production equipment and automate machines or entire work cells, or reduce downtime by adopting condition-based maintenance. And indeed, this industrial internet of things (IIoT) is about to reach new heights as sensor prices drop, connectivity increases, and new tools streamline the operationalization of IoT data.
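Condition-based maintenance of the kind mentioned above typically reduces to comparing streamed sensor readings against a limit or model and flagging the asset when the limit is crossed. A toy sketch; the sensor values and threshold are hypothetical:

```python
# Toy condition-based maintenance check: flag a machine for service
# when a rolling average of vibration readings crosses a limit.
# Sensor values and the alarm threshold are hypothetical.
from collections import deque

VIBRATION_LIMIT_MM_S = 7.1   # assumed alarm level (RMS velocity, mm/s)
WINDOW = 5                   # readings in the rolling average

def monitor(readings):
    window = deque(maxlen=WINDOW)
    for i, value in enumerate(readings):
        window.append(value)
        avg = sum(window) / len(window)
        if len(window) == WINDOW and avg > VIBRATION_LIMIT_MM_S:
            yield f"reading {i}: rolling avg {avg:.2f} mm/s -> schedule maintenance"

stream = [2.1, 2.3, 2.2, 6.8, 7.5, 8.1, 8.4, 8.9]
for alert in monitor(stream):
    print(alert)
```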
But according to a new study of 200 industrial executives conducted by IFS, mobility represents just as large an opportunity for industrial organizations as it does for Uber. The study suggests that companies where employees access enterprise systems like enterprise resource planning, enterprise asset management or field service management software through mobile devices were more prepared for digital transformation than other companies.
Respondents who said their enterprise software prepared them well for digital transformation were more than twice as likely to access their software from a mobile device as those who said their software did a poor job of preparing them. There is a relationship between enterprise mobility and readiness for digital transformation.
Moreover, increasing mobile access to enterprise software may be an immediate digital transformation opportunity for almost 70 percent of respondents: only 31 percent said they currently access enterprise software through a mobile device.
And the transformative potential of enterprise mobility for industrial companies is real. Mobile access to software including enterprise resource planning (ERP), enterprise asset management (EAM) and field service management:
- Enables accurate and real-time collection of enterprise information for more efficient operation and executive decision support.
- Improves the customer experience in field service environments.
- Increases productive time of technical staff by allowing them to interact with systems like enterprise asset management or computerized maintenance management systems while in the field or at the machine on the plant floor.
- Enables workers and enterprise systems to harness advanced features of mobile devices including geolocation and cameras.
- Improves the amount and quality of information available to those servicing assets or customers, allowing more efficient service and higher first-time-fix rates in field service environments, and more reliable troubleshooting and less downtime in a plant environment.
- Induces users to engage with software systems more frequently, increasing return on investment in enterprise software.
This last benefit, increased engagement with enterprise systems, may be the most significant transformative element of enterprise mobility, according to Rick Veague, IFS’s chief technical officer in North America.
“Mobile is the most obvious manifestation of digital transformation,” Veague said. “It is not the only one or, for that matter, the most important one. But when people use enterprise software from a mobile device, it indicates that the system is the lifeblood of the business. Your employees can connect into those core processes and participate even if they are not sitting at their desk. If you cannot do this, you will struggle with anything in digital transformation.”
The authors of the MIT Sloan Management Review article cite their own study of 157 executives at 50 companies, which identified nine elements of digital transformation. Several of them, including worker enablement and process digitization, suggest that other knowledgeable parties share Veague’s viewpoint on the importance of connected people for digital transformation.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:28p
AT&T to Build Edge Data Center Network for Self-Driving Cars, VR/AR
The next generation of applications, powered by advances in machine learning, autonomous vehicles, and virtual and augmented reality, will require lots of computing power. Since in many cases these applications will need near-real-time response from computing systems, much of that computing power, according to technology companies and analysts, is going to be deployed at network edges: smaller-capacity data centers scattered in and around densely populated areas that will receive and process data from self-driving cars or AR systems and send results back.
AT&T wants to utilize the sprawling infrastructure assets it has accumulated while building out its wireless network — central offices, cell towers, and other types of cell sites — to house its future edge data center network as it prepares for the roll-out of 5G, the wireless technology expected to enable the lightning-fast data transfer those next-generation applications will require.
If this story sounds somewhat familiar, it’s because several years ago AT&T started converting its central offices into data centers to transform its legacy network into one that’s software-defined: network management is automated using software, and network functions, traditionally delivered by dedicated appliances, run as virtual workloads on commodity servers, delivered in a manner similar to the cloud services companies buy from the likes of Amazon Web Services.
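To make “network functions running on commodity servers” concrete: a function that once lived in a dedicated appliance, such as a packet filter, becomes ordinary software that can be deployed wherever the operator needs it. A deliberately simplified toy sketch, not AT&T’s implementation:

```python
# Deliberately simplified "virtual network function": a packet filter
# expressed as plain software instead of a dedicated appliance.
# Rules and packets are toy data structures, not AT&T's stack.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int

BLOCKED_PORTS = {23, 445}     # hypothetical policy: drop telnet and SMB

def virtual_firewall(packet: Packet) -> str:
    """Decide a packet's fate; in NFV this logic runs on commodity servers."""
    if packet.dst_port in BLOCKED_PORTS:
        return "DROP"
    return "FORWARD"

for pkt in (Packet("10.0.0.5", 443), Packet("10.0.0.9", 23)):
    print(pkt, "->", virtual_firewall(pkt))
```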
Read more: Telco Central Offices Get Second Life as Cloud Data Centers
AT&T switching facility (Photo by John W. Adkisson/Getty Images)
AT&T’s goal is to virtualize 75 percent of its network functions by 2020, and the company expects to reach 55 percent virtualization this year. It said the network virtualization effort will continue hand-in-hand with the buildout of the future edge computing network and implementation of 5G. AT&T said in a statement:
“We think 5G and software defined networking will be deeply intertwined technologies. We don’t think you can claim to be preparing for 5G and EC (Edge Computing) if you’re not investing in SDN.”
Because the edge data center networks that will supply computing power to self-driving cars and VR/AR systems will have to be highly distributed, telecommunications companies’ network assets give them a natural advantage. Their networks are already highly distributed and placed with the goal of providing wireless services in close proximity to end users. They have the end points for ingesting device data, and they have the backbone networks in place to interconnect edge sites with remote core data centers, another necessary component of the architecture.
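The case for edge sites over remote core data centers is, at bottom, a speed-of-light argument. A rough round-trip calculation, using the textbook approximation that light in fiber travels at about two-thirds of its vacuum speed; the distances are illustrative:

```python
# Rough round-trip propagation delay over fiber, ignoring queuing,
# processing, and radio-access latency. Distances are illustrative.
SPEED_IN_FIBER_KM_PER_MS = 200   # ~2/3 the speed of light in vacuum

def rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time for a signal over fiber."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(f"edge site 10 km away:  {rtt_ms(10):.2f} ms")    # ~0.10 ms
print(f"core DC 1,500 km away: {rtt_ms(1500):.2f} ms")  # ~15 ms
```

Even before any processing happens, a distant core data center burns most of a near-real-time latency budget on propagation alone, which is why edge placement matters for these workloads.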
See also: Edge Data Centers in the Self-Driving Car Future
Other companies in the network infrastructure business also have kicked off serious efforts to exploit their assets for edge computing. Japan’s NTT Communications, for example, announced in March a partnership with Toyota to research and build a global network of data centers for Internet of Things applications with an emphasis on autonomous vehicles.
Another example is Crown Castle, the largest wireless tower company in the US, which leases towers to all the top carriers, including AT&T, Verizon, and T-Mobile. Crown Castle recently acquired a minority stake in Vapor IO, an Austin-based startup that designs data center infrastructure solutions for edge computing. When Vapor announced the deal, it also rolled out a new edge data center colocation service, offering companies data center space and infrastructure at cell-tower sites, as well as help with selecting and securing the best edge locations for their applications.
See also: GE Bets on LinkedIn’s Data Center Standard for Predix at the Edge