Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, November 4th, 2015
2:00p
Lenovo and Nutanix Partner on Converged Infrastructure Appliance

Taking yet another big step in its foray into the data center market, Chinese IT giant Lenovo has partnered with Nutanix to bring to market a Lenovo-branded converged infrastructure appliance that will compete against comparable solutions from HP, Cisco, Oracle, EMC, and NetApp, among others.
The companies are not revealing technical specs or pricing details of the solution until a full roll-out at Gartner’s data center trade show in Las Vegas next month. The hardware, however, will be based on System x, IBM’s commodity x86 server line Lenovo acquired last year, Brian Cox, director of product marketing at Nutanix, said.
Converged infrastructure refers to full-package scale-out IT infrastructure solutions that combine servers, storage, and virtualization software, freeing users from the need to set up and operate separate Storage Area Network systems. It is a growing and hotly contested market in which San Jose, California-based Nutanix is a leader. Nutanix sells its own converged infrastructure appliances, and the only other hardware vendor with which it has a relationship similar to the Lenovo deal is Dell, which Gartner considers a “niche player” in the converged infrastructure market.
The deal with Lenovo is a broad partnership that includes collaboration on engineering, marketing, and sales. Lenovo’s global sales reach is the biggest reason the relationship was attractive to Nutanix, a six-year-old company whose product has only been on the market since 2011.
Nutanix currently sells in about 70 countries, while Lenovo’s reach stretches across 160 countries, Cox said. “It’s a big expansion for Nutanix.”
“Lenovo will be creating a dedicated sales force to sell this new product,” he said. While many technology alliances are little more than “a handshake and a photograph,” the commitment of an entire sales team that will sell only the Nutanix-Lenovo solution makes this partnership stand out.
The companies expect the appliance to start shipping in the first quarter of 2016.
According to Gartner’s latest Magic Quadrant for integrated systems – which is Gartner-speak for converged infrastructure – Nutanix is the leader in the space in terms of completeness of vision. Cisco, while meaningfully behind Nutanix on vision, leads on the ability-to-execute axis of the quadrant.
Others in the “leaders” quadrant are NetApp, HP, Oracle, and EMC. Notably, Cisco has converged infrastructure partnerships with both NetApp and EMC.
Gartner puts Lenovo in the “challengers” category, which it shares with the Japanese IT giant Fujitsu.
The strength of Nutanix is in its software, which combines Nutanix OS with infrastructure management software that unifies management of storage, networking, and compute. It has built-in storage management features, such as deduplication and compression, and strong availability, performance, and scalability attributes.
Nutanix has its own KVM-based hypervisor, called Acropolis, but also supports VMware and Microsoft’s Hyper-V.
Cox said the companies will try to avoid direct competition between the Lenovo appliance, Dell’s XC appliance, and Nutanix’s own converged infrastructure product. The plan is to establish a “deal orchestration” process, where the three companies will avoid pursuing the same sales leads, he explained.
“This is a fast-growing market,” he said. “There’s enough opportunity for all the companies involved.”

4:45p
Analysts: AWS Cloud Business Worth $160B 
This article originally appeared at The WHIR
Analysts at Deutsche Bank believe Amazon Web Services could be worth $160 billion. On Tuesday, Deutsche Bank analyst Karl Keirstead said in a note that with AWS cloud revenue forecast to reach $16 billion by 2017, and based on a 10x multiple, the business would be valued at $160 billion if it were a standalone company.
To put the number into perspective, IBM currently has a valuation of $160 billion, Oracle is at $170 billion, and Salesforce has a $50 billion market cap. And those are whole companies.
A report earlier this year by analysts at financial firm Robert W. Baird pegged the value of AWS at between $40 and $50 billion prior to AWS reporting its earnings for the first time in April 2015. At the time, Amazon CEO Jeff Bezos said AWS was a $5 billion business.
Deutsche Bank came up with the 10x multiple by comparing AWS to its cloud peers in the enterprise space, according to a report by Business Insider. Salesforce, Workday, and Google had a median 2016 revenue multiple of 9x.
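The multiple-based math behind the note is straightforward; a minimal sketch using the figures quoted above (the function name and rounding are illustrative, not from the analyst note):

```python
# Revenue-multiple valuation as described in the Deutsche Bank note:
# forecast 2017 AWS cloud revenue of $16B, priced at a 10x revenue multiple.
def revenue_multiple_valuation(forecast_revenue_billions: float,
                               multiple: float) -> float:
    """Return the implied standalone valuation, in billions of dollars."""
    return forecast_revenue_billions * multiple

aws_valuation = revenue_multiple_valuation(16.0, 10.0)
print(f"Implied AWS valuation: ${aws_valuation:.0f}B")  # $160B
```

Using the 9x peer-median multiple instead of 10x would imply $144 billion, which shows how sensitive these headline valuations are to the chosen multiple.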
The note said that AWS is about 6x larger than Microsoft Azure based on revenues and “is arguably the greatest disruptive force in the entire enterprise technology market today.” Microsoft recently announced its quarterly earnings with Azure revenue and compute usage doubling year over year.
The analyst note went on to commend AWS head Andy Jassy, who has kept “a relatively low profile.” It said that he may be “the world’s most under-appreciated technology company head.” Jassy was on stage at this year’s AWS re:Invent conference to deliver a keynote which unveiled a number of new services, including cloud-based business intelligence service Amazon QuickSight, physical data transfer service Amazon Snowball, and managed streaming service Amazon Kinesis Firehose.
In its most recent quarter AWS reported $2.1 billion in revenue, growing 78 percent year over year.
This first ran at http://www.thewhir.com/web-hosting-news/aws-cloud-business-worth-160-billion-analysts

5:47p
The 10 Biggest Data Center Stories of October

Here are the 10 most popular stories that ran on Data Center Knowledge in October 2015:
Safe Harbor Ruling Leaves Data Center Operators in Ambiguity – Europe’s annulment of the framework that made it easy for companies to transfer data between data centers in Europe and the US while staying within the limits of European privacy laws has caused a lot of uncertainty for businesses that operate data centers on both sides of the Atlantic.
Which Data Center Skills are in Demand Today – As their approach to infrastructure changes, organizations are looking to invest in good people to support new initiatives.
Equinix Doubles Down in One of Internet’s Most Important Locations – Equinix broke ground on a brand new Ashburn, Virginia, campus that has the potential to grow to 1 million square feet of gross building space – a data center build-out that would cost $1 billion by the company’s estimate.
 Inside an Equinix data center. (Photo: Equinix)
Telx Acquisition Closed, Here’s Digital Realty’s Plan – Interconnecting Telx meet-me rooms with its big wholesale facilities over private network links is Digital’s new value proposition.
 Digital Realty data center in Chessington, U.K. (Photo: Digital Realty)
Growth Continues in Secondary North American Data Center Markets – While top data center markets like New York, Silicon Valley, and Dallas get most of the attention, a lot of growth is taking place in markets considered secondary. Markets like Seattle, Portland, Phoenix, and, more recently, Reno, Nevada, are seeing a lot of multi-tenant data center construction and take-up.
Windstream to Sell Data Center Business for $575M – Windstream has more than 20 data centers, most of them on the East Coast, with some in the Midwest, South, and on the West Coast.
Who is Winning in the DCIM Software Market? – While the very top players are the same, competition for their market share is heating up, as new vendors enter the market, and as previously existing ones step up their game.
The Billions in Data Center Spending behind Cloud Revenue Growth – Every quarter, cloud giants Amazon, Microsoft, IBM, and Google collectively spend billions of dollars on servers and other hardware for their cloud services and data centers around the world to house all that gear, and the quarter that ended September 30 was no different.
 A technician at work in a data hall at Facebook’s Altoona, Iowa, data center. (Photo: Facebook/2014 Jacob Sharp Photography)
Facebook to Build Third $200M North Carolina Data Center – The company has been on a data center construction tear this year, which indicates that its user base continues to grow quickly.
HP Launches Open Source OS for Data Center Networking – The move to open networking is about giving users more control of the configuration of their networks, as well as enabling Software Defined Networking and Network Function Virtualization capabilities.
Stay current on data center news by subscribing to our daily email updates and RSS feed, or by following us on Twitter, Facebook, LinkedIn, and Google+.

6:00p
Realizing the Potential of the Software-Defined Data Center

Patrick Quirk is Vice President and General Manager of Converged Systems for Emerson Network Power.
The data center industry is in the midst of several major pivots that are changing established paradigms. Virtualization has largely replaced hardware-based capacity management. Single-instance data centers in which all data and applications receive the same support are evolving into multi-modal operations where resiliency and security are tailored to the application. Historically closed systems are becoming open.
These disruptions are being driven by the need for IT to move faster, improve application support, increase productivity, reduce risk, and lower costs. But the ramifications extend beyond even these powerful benefits: The changes occurring today will ultimately allow organizations to realize the promise of the software-defined data center.
That promise is a location-agnostic “data center” in which capacity is managed fluidly and securely across an ecosystem that includes on-premise data centers and micro data centers, associated colocation facilities, and multiple clouds. Organizations will access capacity based on application-specific requirements for availability, security and cost, regardless of the physical location of that capacity.
A key enabler of this evolution is gaining visibility into real-time operating parameters across systems. That requires increased connectivity and communication across the various devices within each facility and across facilities, as well as the ability to aggregate, analyze and visualize data to impact operations.
Currently, language differences across devices create data silos within systems that limit visibility and control, perpetuating inefficient operating practices and preventing enterprises from creating a connected ecosystem of IT, infrastructure and applications. Unfortunately, the situation that exists today—millions of legacy servers, switches, storage devices, and supporting infrastructure systems using different native languages—is not easy to overcome.
However, a roadmap for realizing the vision of the software-defined data center has emerged. Fueled by virtualization, holistic management platforms and a common, open-language specification for devices, the software-defined future is now foreseeable.
Virtualization has, of course, changed the way server capacity is utilized, but has plateaued in the face of the challenge of managing virtual environments that include compute, networking, storage and power. Instead of simplifying management, virtualization on the facility level is increasing complexity.
Data Center Infrastructure Management (DCIM) should provide the visibility to address that complexity, but closed DCIM systems simply increase the size of the silo being managed rather than enabling true holistic management. Fortunately, DCIM platforms are increasingly using open APIs to facilitate integration with complementary software suites such as IT management and accounting. Through this integration, organizations can get the real-time visibility into resource utilization, available capacity, and costs required for informed decision making.
The final hurdle, hardware communication, is being addressed through the Redfish specification, now under the management of the Distributed Management Task Force (DMTF). The DMTF is an industry standards organization working to simplify the manageability of network-accessible technologies through open and collaborative efforts by leading technology companies, including HP, Dell, Intel, Emerson Network Power, Microsoft and VMware. Redfish is a common language for IT and infrastructure devices that will facilitate greater connectivity and communication across devices and systems without adding complexity.
Version 1.0 of the Redfish specification was released in August of 2015, and its adoption will be aided by the broad industry support of the DMTF. It will take a number of years for the specification to reach critical mass in terms of installed devices, but organizations can begin to capitalize on the value of Redfish and position themselves for true software-defined management through DCIM systems with open APIs supported by strategic use of Redfish translation engines. These Redfish translators will accelerate the industry’s ability to use the new specification to optimize operations.
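Redfish models each managed device as a tree of JSON resources served over HTTPS, with the service root at /redfish/v1/ linking out to collections such as Systems and Chassis. A minimal sketch of walking a service-root document follows; the payload below is a hand-written illustration of the resource-link pattern, not output from a real device:

```python
import json

# Illustrative Redfish service-root response (abbreviated, not from a real BMC).
service_root = json.loads("""
{
  "@odata.id": "/redfish/v1/",
  "RedfishVersion": "1.0.0",
  "Systems": {"@odata.id": "/redfish/v1/Systems"},
  "Chassis": {"@odata.id": "/redfish/v1/Chassis"},
  "Managers": {"@odata.id": "/redfish/v1/Managers"}
}
""")

def collection_links(root: dict) -> dict:
    """Map each top-level resource collection to the URI a client would GET next."""
    return {name: value["@odata.id"]
            for name, value in root.items()
            if isinstance(value, dict) and "@odata.id" in value}

links = collection_links(service_root)
print(links["Systems"])  # /redfish/v1/Systems
```

Because every device speaks the same JSON resource model, a management tool can discover servers, switches, and infrastructure gear with the same traversal logic, which is the "common language" value the article describes.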
With this foundation in place, an organization can create and maintain a map of data center resources and their real-time operating parameters to achieve a new level of data-driven, real-time efficiency. IT and data center staff can then focus all their energy on delivering what their customers (internal and external) need in the fastest, most efficient, and most secure way possible. The primary factors that currently consume them—geography, security, power, availability, and connectivity —become non-factors in the open, DCIM-enabled data center.
For data center managers who are wrestling with how to identify and decommission ghost servers, or are deploying cloud-based applications simply because they can’t mobilize their own resources fast enough to meet organizational requirements, all of this may sound like marketing hype. It’s not. The core technologies and specifications are now in place to make this vision a reality, and the market—those who rely on the applications and processing data centers deliver—will demand nothing less.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

7:24p
Equinix Closes Bit-isle Deal, Expands Japan Data Center Footprint

Equinix has completed its acquisition of Japanese data center provider Bit-isle, adding five data centers in Tokyo and one in Osaka to its existing footprint in both cities, the Redwood City, California-based company announced today.
Equinix has acquired about 97 percent of equity interests in the Japanese provider, expecting to buy the rest by the end of the year. The company first announced the deal in September, saying it had made an offer for $280 million.
The deal brings the number of Equinix data centers in Tokyo to 10 and in Osaka to two, making it the fourth-largest data center provider in the country, the company said in a statement. Its biggest competitors in Japan are NTT Communications and KDDI Corp.
“For several years Equinix has been evaluating how to accelerate our leadership in this market,” Equinix CEO Steve Smith said in a statement. “Bit-isle facilities are adjacent to our carrier dense sites in downtown Tokyo, giving us customer-ready capacity as well as the opportunity to scale Platform Equinix in this increasingly constrained, but important global market.”
This is the second acquisition Equinix has made this year. In May, the company announced a deal to buy European data center giant TelecityGroup for $3.6 billion. The Telecity deal has not closed yet, but if it does Equinix will become the biggest data center provider in Europe.
With Bit-isle assets Equinix has 27 data centers in Asia Pacific, where countries like China, Hong Kong, and Singapore have rapidly growing data center markets. In total, the company now has 111 data centers in 33 markets around the world.

8:28p
Achieving Cloud-Ready Data Center Scale Securely, Rapidly, and Inexpensively

There are some big changes happening within the modern data center. We’re seeing more users connecting, sharing data, and using more devices to be productive. Consequently, organizations providing cloud-ready services for those users must keep up with the competition. Consider this: global spending on IaaS is expected to reach almost US$16.5 billion in 2015, an increase of 32.8 percent from 2014, with a compound annual growth rate (CAGR) from 2014 to 2019 forecast at 29.1 percent, according to Gartner’s latest forecast. Projections like this one raise the question: how can service providers and organizations keep up with this kind of demand?
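As a back-of-the-envelope check on what that growth rate implies, compounding the 2015 figure forward at the stated CAGR gives a rough end-of-forecast number (a sketch only; Gartner's actual model is more detailed than a constant-rate projection):

```python
def project_spend(base_billions: float, cagr: float, years: int) -> float:
    """Compound a base-year spend figure forward at a constant annual growth rate."""
    return base_billions * (1 + cagr) ** years

# $16.5B in 2015 compounded at 29.1% through 2019 (four years of growth),
# which works out to roughly $45-46B of annual IaaS spend by 2019.
spend_2019 = project_spend(16.5, 0.291, 4)
print(f"Implied 2019 IaaS spend: ${spend_2019:.1f}B")
```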
To truly gain a competitive advantage and create cloud-ready data center scale, you have to partner with a colocation provider that’s capable of keeping up. In this white paper, we learn how a good data center colocation partnership gives organizations and service providers the ability to scale massively and do it with unprecedented speed. This means:
- Having access to strategic markets and locations
- Optimizing the economics of your data center ecosystem
- Taking advantage of interconnection partnerships
In creating this type of colocation partnership, Internet organizations, service providers, and enterprise businesses can deliver resources and content at cloud speeds.
Download this whitepaper to learn how to create a secure, multi-tenant ecosystem that’s capable of keeping up with the modern demands of the market. You’ll see how a flexible fabric architecture can fit a diverse set of business requirements to help optimize the user experience and the overall corporate strategy.

10:08p
CenturyLink Wants to Sell its Data Centers

CenturyLink, the telecommunications company with a substantial data center services business, no longer wants to own its massive data center fleet.
The company has hired financial advisors to help it explore alternatives to ownership of its nearly 60 data centers in the US, Asia, and Europe, totaling more than 180 MW of power and 2.6 million square feet of data center space. It does not own all of those facilities, leasing a lot of its footprint from data center providers.
The company plans to continue providing colocation and other data center services. Potential alternatives to owning the massive portfolio are joint ventures or sale of all or some of the facilities.
CenturyLink gained much of its data center portfolio when it acquired data center service provider Savvis in 2011 for $2.5 billion. The company has been expanding its data center capacity continuously. Earlier this year it announced that it had completed data center expansion in six markets.
In a statement, CenturyLink CEO Glen Post said he was confident in the company’s business strategy around network, hosting, and managed services.
“We expect colocation services to remain part of our service offerings, but we do not believe ownership of the physical data center assets is necessary to effectively deliver those services,” he said. “Therefore, we are exploring all of the strategic alternatives available for our data centers.”