Data Center Knowledge | News and analysis for the data center industry
Friday, September 30th, 2016
12:00p
Equinix to Launch Internet Exchange in Finland
Equinix is preparing to extend its global internet exchange into the Nordics, promising greater international connectivity options to companies in the region.
The company expects to launch the exchange in Helsinki in the first quarter of next year, it announced Thursday. It will be the 23rd exchange-point location and 20th market in the Redwood City, California-based data center services giant’s global exchange fabric.
The volume of global internet traffic is growing extremely fast, and peering with more partners in more IX locations is one way network operators have been addressing this growth, according to Equinix. By striking mutually beneficial peering agreements, operators reduce the cost of data transit.
Internet traffic has been growing at a compound annual rate of more than 33 percent over the last four years, Equinix said, citing market research by PriMetrica. Cisco analysts expect global IP traffic to grow almost three-fold between 2015 and 2020.
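As a quick sanity check (illustrative only, not from the article), the two cited figures are roughly consistent with each other: a 33 percent compound annual rate sustained for four years triples traffic, which is close to Cisco's "almost three-fold" five-year forecast.

```python
# Sanity-checking the growth figures cited above.
# Assumptions: PriMetrica's 33% CAGR over 4 years; Cisco's ~3x growth 2015-2020.

past_cagr = 0.33
years_observed = 4
growth_over_period = (1 + past_cagr) ** years_observed
print(f"33% CAGR over 4 years = {growth_over_period:.2f}x total growth")  # ~3.13x

cisco_growth = 3.0   # "almost three-fold" between 2015 and 2020
years_forecast = 5
implied_cagr = cisco_growth ** (1 / years_forecast) - 1
print(f"3x over 5 years implies a {implied_cagr:.1%} CAGR")  # ~24.6%
```

In other words, Cisco's forecast implies a somewhat slower, but still steep, growth rate than the trailing four-year figure.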
The launch of Equinix’s IX in Finland will help Finland increase its internet connectivity. Namely, it will facilitate exchange of traffic along the new submarine cable system called C-Lion1. Connecting Finland and Germany, the system is a “bridge” between Russia, Baltic countries, and Central Europe, according to the data center provider.
Equinix is a dominant player in the global IX market, operating the world’s third-largest internet exchange by number of members, after Brazil Internet Exchange (IX.br) and Amsterdam Internet Exchange (AMS-IX). The Equinix Exchange currently operates in Paris and Zurich in Europe; New York, Ashburn, Chicago, Dallas, Los Angeles, and San Jose in the US; Tokyo, Hong Kong, Singapore, and Sydney in Asia Pacific; and Rio de Janeiro and Sao Paulo in Brazil.
Once the IX is launched in Finland, customers in all six Equinix data centers in the Helsinki area will have access to it, since all facilities are interconnected by the company’s campus and metro networks.

3:00p
ServiceNow Launches Investment Arm for Enterprise Software
ServiceNow, one of the biggest ITSM (IT service management) software companies, has launched a formal investment unit in hopes of attracting more developers to build applications on its cloud platform, built specifically for IT service delivery.
ITSM plays a growing role in the DCIM software market. Many DCIM (Data Center Infrastructure Management) vendors have integrated with leading ITSM platforms, including ServiceNow, and some believe DCIM will eventually be considered a subcategory of ITSM.
DCIM software vendors that have integrated with ServiceNow include Nlyte Software, Sunbird Software (formerly part of Raritan), and Tier44, a company formed around the intellectual property of Power Assure, which dissolved in 2014.
Nlyte has two connectors featured in ServiceNow’s online store, which is also where applications from future developers funded by ServiceNow Ventures will appear. The unit is kicking off with a contest, announced Wednesday, through which it will provide a total of $500,000 in funding to three winning startups.
ServiceNow Ventures is targeting early-stage companies that develop cloud-based apps on its parent company’s platform, as well as growth-stage vendors that build integrations with ServiceNow. Smaller-size DCIM vendors building connectors for ServiceNow would fall into the second category.
Read more: Who is Winning in the DCIM Software Market?

4:25p
Nutanix Shares Jump in Debut After $238M Increased IPO
(Bloomberg) — Nutanix Inc. surged in its trading debut after the software maker raised $238 million in its initial public offering.
Nutanix rose 86 percent to $29.80 at 11:32 a.m. in New York, giving the company a market value of about $4.1 billion.
The San Jose, California-based company sold 14.87 million shares for $16 apiece, according to a statement Thursday. The terms reflect both an increased number of shares and a price above the marketed range, which the company boosted this week.
Nutanix, which makes software that can store and analyze data on inexpensive, standard servers, had offered 14 million shares for $13 to $15 each, a range it had already lifted this week from the initial $11 to $13 a share. The stock is listed on the Nasdaq Stock Market under the symbol NTNX.
Following the slowest start to a year for IPOs since the financial crisis, and the fewest U.S. technology IPOs in seven years, a flurry of offerings has come to market in the past few weeks.
See also: Spirit of Competition: Nutanix Certifies Cisco UCS Hardware
Advertising-technology company Trade Desk Inc. raised $96.6 million this month, including an over-allotment. Apptio Inc. sold about $110 million in stock and Everbridge Inc. raised $104 million. All three have gained at least 35 percent since their debuts.
Nutanix posted revenue of about $445 million for the year ended July 31, an 84 percent increase from the previous 12 months, according to its prospectus.
The company hasn’t made a profit in at least the past five years, the prospectus shows. The net loss in fiscal 2016 widened to $168.5 million from $126 million the previous year.
Goldman Sachs Group Inc., Morgan Stanley, JPMorgan Chase & Co. and Royal Bank of Canada managed the deal.
Read more: Why Hyperconverged Infrastructure is So Hot

6:18p
Google Will Lend You Its Own Engineers to Keep Your Cloud Apps Running Smoothly
To critics who say Google lacks experience in selling and providing support to enterprise customers, the company says, “We’ll do you one better.”
Unlike at most other companies, the people who operate Google’s global data center infrastructure are software engineers first and IT people second. The company’s philosophy is that software services run better if the infrastructure underneath them is built and operated by people who know software.
“It turns out services run better when people who understand software also run it,” Melissa Binde, Google’s director of Site Reliability Engineering, said during a presentation at a company conference earlier this year.
These Googlers are called Site Reliability Engineers, and soon, enterprise customers of the company’s cloud services will be able to embed Googlers with similar credentials on their own infrastructure teams to ensure critical applications deployed in the Google cloud run smoothly.
Part of a bigger announcement about major changes and upgrades across Google’s entire cloud business – including the announcement of eight new cloud data center locations slated to come online next year – was a note about this unusual model for cloud customer support: Customer Reliability Engineering, or CRE.
Read more: Google Devs Get to Run Google Infrastructure for Six Months
The “Seriousness” of Google Cloud
Google, which has been slow to grow its enterprise cloud business in comparison to Amazon and Microsoft, is often criticized for not being “serious” about its cloud services. One of the criticisms was that the company didn’t really know how to work with enterprise customers, which is something other major cloud players – the likes of Microsoft, VMware, and IBM – have done for many years.
Starting last year, the company has been on a mission to prove those critics wrong. The first big step was hiring VMware founder Diane Greene to lead the cloud unit, and the following steps focused on investing tons of money into a global Google data center expansion to offer more cloud availability regions and improving the feature set around cloud services, including a major focus on enhancing services with machine learning.
Read more: What Cloud and AI Do and Don’t Mean for Google’s Data Center Strategy
“Designed to deepen our partnership with customers, CRE is comprised of Google engineers who integrate with a customer’s operations teams to share the reliability responsibilities for critical cloud applications,” Brian Stevens, VP of Google Cloud, wrote in a blog post. “This integration represents a new model in which we share and apply our nearly two decades of expertise in cloud computing as an embedded part of a customer’s organization.”
Pokémon Go: a Trial by Fire
Google tested the CRE model on Niantic, the company behind the popular mobile “augmented-reality” game Pokémon Go. Niantic originated at Google but was spun out last year. While some past reports have suggested that the game runs on Google’s cloud, this is the first official confirmation by Google, for whom attracting and boasting high-profile cloud customers is another major way to prove its cloud’s worth.
Some might say the outage-ridden roll-out of Pokémon Go in July is not the best customer engagement to boast about, especially as part of the announcement of a new customer reliability team. The game was plagued by downtime throughout its first month on the market, with players around the world frustrated by the frequently appearing message saying the game’s servers were overloaded.
A separate blog post on the launch of Pokémon Go, however, indicates that the rough start was due more than anything else to the game’s unexpected popularity. The worst-case estimate of Pokémon Go traffic on Google’s cloud datastore the team prepared for was five times Niantic’s target traffic. But once the game launched, it quickly outstripped the target fifty-fold.
“Throughout my career as an engineer, I’ve had a hand in numerous product launches that grew to millions of users,” Luke Stone, director of Customer Reliability Engineering, wrote in the post. “User adoption typically happens gradually over several months, with new features and architectural changes scheduled over relatively long periods of time. Never have I taken part in anything close to the growth that Google Cloud customer Niantic experienced with the launch of Pokémon Go.”
Google has not provided much more detail about the new CRE program, saying only that it will have more to share about it “soon.”

10:26p
Amazon, Google Detail Next Round of Cloud Data Center Launches
The global buildout of cloud data centers by internet giants is marching on. The latest move and countermove in the cloud arms race came from Amazon and Google this week, both companies announcing new locations they are adding to their growing lists of cloud availability regions.
They, as well as Microsoft and IBM, have been investing billions of dollars collectively to expand the global reach of their cloud empires by both building data centers and leasing space from data center providers, such as Digital Realty Trust, Equinix, T-Systems, EdgeConneX, and 21Vianet, among others.
Extending physical infrastructure into new regions reduces latency for customers in those regions, gives users more backup location options, reduces data transport costs (for both users and cloud providers themselves), and helps organizations comply with data-location regulations, wherever they apply.
The next outpost of Amazon’s cloud empire, called Amazon Web Services, will be in the Paris metro. The company announced Thursday plans to bring a French cloud region online next year, following recent announcements of upcoming regions in Ohio, Canada, China, and the UK.
The French region will be Amazon’s 10th availability zone in Europe. AWS currently has smaller points of presence in France – two in Paris and one in Marseille – which it uses to serve some high-profile enterprise cloud customers, among others. Those customers include Schneider Electric, Lafarge, Dassault Systemes, and Societe Generale Group, according to a blog post by Amazon CTO Werner Vogels.
Google followed Amazon’s announcement the same day by revealing plans to launch eight new cloud regions over the course of next year: Mumbai, Singapore, Sydney, Northern Virginia, São Paulo, London, Finland, and Frankfurt. Google is behind its biggest rivals in terms of variety of cloud regions but recently has been spending a lot to catch up.
Along with the new locations, the company announced a restructuring and rebranding of its cloud portfolio, putting Google Cloud Platform, its collaboration and productivity apps, machine learning tools and APIs, enterprise Maps APIs, and cloud-connected Android devices and Chromebooks under one unit called Google Cloud.
[Map: Google’s current and upcoming cloud data centers]
In most cases, cloud providers don’t go into new regions by building their own data centers from the outset. The decision whether to build or lease in a new region depends on many factors, the primary ones being cost and the size of the potential market.
“It may not be cost-effective to build your own data center for a small instance in a new region,” Joe Kava, Google’s VP who leads the company’s data center operations, told us in an interview earlier this year. “At some point, that region might be big enough to where having our own data center makes sense.
“It’s just a total-cost-of-ownership analysis, and the same goes for a large enterprise company. If you need a few hundred kilowatts, you wouldn’t necessarily build your own data center, because you’re going to pay a lot of money for that.”