Data Center Knowledge | News and analysis for the data center industry
Tuesday, November 17th, 2015
1:08a
INetU Acquisition Gives ViaWest Data Centers on East Coast, in Europe
US data center service provider ViaWest announced Monday that it has agreed to acquire INetU, a managed cloud hosting company with data centers on the US East Coast and in Europe – both new markets for Denver, Colorado-based ViaWest.
INetU hosts and manages cloud infrastructure for its customers, including private, public, and hybrid cloud environments, as well as cloud infrastructure provided by Amazon Web Services. The company also helps companies set up their cloud environments through its consultative design services.
ViaWest, whose customers include Overstock.com and Pearson Education, is a subsidiary of the Canadian cable company Shaw Communications, which acquired it last year for $1.2 billion.
ViaWest is buying INetU from the private equity firm BV Investment Partners and other shareholders for $162.5 million.
The deal will expand ViaWest’s cloud and managed services capabilities and add data centers in Pennsylvania, Virginia, Washington, the UK, and Netherlands to its portfolio. The Shaw subsidiary currently has nearly 30 data centers in the US and Canada.
This is the latest in a series of acquisitions this year as the data center industry continues to consolidate. Other examples of recent deals include the acquisition of Net Access by Cologix in October, CyrusOne’s Cervalis deal, which also gave it instant East Coast presence, and the acquisition of Carpathia Hosting by QTS.
The biggest deals this year were Equinix’s $3.6 billion purchase of TelecityGroup in Europe and Digital Realty Trust’s Telx acquisition in the US for $1.9 billion.
1:00p
America’s Supercomputer Might Continues Shrinking
While the US continues to have the biggest presence on the list of the world’s most powerful supercomputers, its share is shrinking. Today, America’s share is the lowest it has been since the first Top500 list was created in 1993.
The latest edition of the biannual Top500, released Monday, has fewer US supercomputers than even the previous edition, which came out in July of this year. Meanwhile, China’s presence on the list continues to grow by leaps and bounds. There are three times more Chinese systems on the list today than there were in July.
China also continues to command the top spot on the list. The Tianhe-2 supercomputer, also known as Milky Way-2, built by China’s National University of Defense Technology, has been designated as the world’s most powerful supercomputer for the sixth consecutive time.
Europe’s share of the list is declining as well, while Asia overall commands a growing percentage of the pool of the mightiest supercomputers.
Top500 is compiled by scientists at the US Department of Energy’s Lawrence Berkeley National Laboratory, University of Tennessee, Knoxville, and Prometeus, a German company that organizes the annual International Supercomputing Conference.
The rate of performance growth in supercomputing overall has slowed down. There has been little change at the top of the list in recent years, while performance at the bottom has been increasing but not as quickly as it used to.
Here are the highlights in numbers:
US and China
- 46.4 percent: US share of HPC systems on the first Top500 list, published in June 1993
- 41.3 percent: US share of HPC systems on the latest Top500 list, published in November 2015
- 0: China’s share of the first Top500
- 21.8 percent: China’s share of the latest Top500
- 31: The number of US supercomputers that were on the July 2015 list but were pushed off in the latest edition

Country share of HPC systems on the November 2015 Top500 list (Chart courtesy of Top500)
Europe’s Presence Also Shrinks
- 26.4 percent: Europe’s share of HPC systems on the June 1993 list
- 21.6 percent: Europe’s share on the latest, November 2015, list
- 24.4 percent: Asia’s share on the June 1993 list
- 34.5 percent: Asia’s share on the latest list
The Slowing HPC Performance Growth
- 2: The number of new systems in the Top 10 since July 2015
- 6: The number of systems in the Top 10 that were installed in 2011 or 2012
- 90 percent: Average annual performance growth of the last system on the list between 1994 and 2008
- 55 percent: Average annual performance growth of the last system on the list from 2008 to now
Co-Processors on the Rise
- 104: The number of systems on the latest list that use accelerator/co-processor technology by Nvidia, AMD, or Intel
- 90: The number of systems using accelerators on the July 2015 list
- 66: The number of systems using Nvidia co-processors
- 27: The number of systems using Intel Xeon Phi chips for acceleration
- 3: The number of systems using AMD’s ATI Radeon for acceleration
- 4: The number of systems using a combination of Nvidia and Intel accelerators
2:00p
IIX Raises $26M from Silicon Valley’s VC Heavyweights
IIX, the startup addressing companies’ growing need to interconnect their networks with other companies’ networks, has closed its second funding round, raising $26 million from a group of well-known Silicon Valley venture capitalists, tech entrepreneurs, and executives.
According to IIX and others, such as data center and interconnection giant Equinix, demand is growing for private network links that connect companies’ own servers to the servers of their cloud providers and partners without traversing the public internet. Companies don’t trust the internet with sensitive corporate data and don’t want to rely on it for critical applications because of concerns about both security and performance.
Many cloud service providers, including the biggest ones, such as Amazon, Microsoft, Google, and IBM, offer the option of connecting to their servers directly, and data center providers and network carriers help facilitate those links. But setting up such a private connection is a complex engineering task, and most enterprise users don’t have the in-house expertise to do it themselves, according to service providers.
Equinix recently launched professional services specifically to help customers set up the links, and IIX built a platform that automates interconnection provisioning. The platform, called Console, is a Software-as-a-Service application that the startup claims makes interconnection between any of the 150 data centers around the world where it is available as simple as a click of a button.
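The article doesn’t describe Console’s actual interface, but the general idea of API-driven interconnection provisioning can be sketched as follows. Everything in this example (the endpoint URL, field names, and authentication scheme) is a hypothetical illustration, not Console’s real API:

```python
# Hypothetical sketch of API-driven interconnection provisioning.
# The endpoint, fields, and token below are illustrative assumptions,
# not the real Console API.
import requests

API_BASE = "https://api.example-interconnect.invalid/v1"  # placeholder URL
API_TOKEN = "REPLACE_WITH_TOKEN"

def request_interconnection(src_dc: str, dst_dc: str, bandwidth_mbps: int) -> dict:
    """Ask the (hypothetical) platform to provision a private link between two facilities."""
    resp = requests.post(
        f"{API_BASE}/interconnections",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "source_datacenter": src_dc,
            "destination_datacenter": dst_dc,
            "bandwidth_mbps": bandwidth_mbps,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    link = request_interconnection("NYC1", "LON2", 1000)
    print("Provisioning request accepted:", link)
```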
“This is the biggest platform of its kind today that is focused on direct interconnection,” IIX founder and CEO Al Burgio said.
Among investors that participated in the company’s latest funding round is New Enterprise Associates, one of the biggest and oldest venture capital firms, which recently closed a $2.8 billion fund – the biggest in the history of venture capital. NEA is the only Series A investor that also participated in the Series B round for IIX.
The round was led by Formation 8, one of the hottest newer VC firms, which recently broke up. Its three founders decided to go their separate ways, but the firm said it will continue working with the companies it has already invested in and investing what remains of its most recent fund.
One of Formation 8’s founders, Jim Kim, has joined IIX’s board of directors. Before starting Formation 8, Kim was a general partner at Khosla Ventures, another one of Silicon Valley’s cornerstone VCs.
The group of investors in IIX’s Series B also included Andy Bechtolsheim, co-founder of Sun Microsystems and founder of Arista Networks, and Drew Perkins, co-founder of the optical networking giant Infinera Corp., whose optical interconnect technology links some of the world’s biggest data centers to each other.
Also on board are Yahoo co-founder Jerry Yang’s AME Cloud Ventures and Rajiv Ramaswami, the man in charge of Broadcom’s infrastructure and networking group.
The round brings the total amount of capital IIX has raised to $65 million, including debt and equity. According to Burgio, the company had to turn some willing investors down.
“We were offered quite a significant amount of money, well beyond what we decided to take in,” he said. Asked what the company’s criteria for selecting investors were, he said, “chemistry, obviously. We don’t just want people’s money; we want people that get it.”
4:30p
Decoding DevOps: a Management Primer
Gerardo Dada is Vice President of Product Marketing for SolarWinds.
Businesses can only progress and perform as quickly as IT enables them to, and these days, technology is a major point of differentiation for any type of company. In response, as a growing number of organizations look to increase agility and performance in the IT department, one movement is changing the way two teams have traditionally collaborated: DevOps. Yet despite being one of the industry buzzwords of the year (well, maybe third after “containers” and “big data”), DevOps remains poorly understood: there is substantial confusion over what it means and how organizations can take advantage of the movement.
Why is DevOps so hard to define? In part, confusion stems from the fact that DevOps as an idea is growing organically, and the way it is implemented and leveraged continues to evolve. Perhaps more importantly, organizations continue to perceive DevOps as a specific role or set of tools usually exclusive to cloud deployments, rather than a mentality. At its core, DevOps is simply a modern approach to software development: it aims to dissolve the silos between the development and operations teams and encourage shared accountability and processes in order to better understand software performance.
Typically, when software is developed, one team writes the code, another team tests it, and yet another team deploys or runs the software, all of which translates to conflict between these teams and a much longer update cycle for any piece of software. In a DevOps environment, developers share responsibility for testing and operations; everyone is accountable for performance, and tools and goals are shared. The team is able to manage changes to software more quickly and in smaller pieces, ultimately resulting in a more efficient, effective, and agile IT department with greater quality assurance for the end user.
Still, for all its benefits, DevOps does have its challenges. Notably, the upheaval a traditional data center will undergo to support a DevOps-centric environment can be potentially detrimental to an organization. Foremost, finding IT professionals with the right skills is extremely difficult. CIOs and other business decision makers may remember a similar experience several years ago when cloud computing exploded into the data center: IT pros with suitable cloud experience were few and far between. In the absence of a DevOps expert to hire, organizations must instead invest in training their existing teams and experiment, feeling their way through a newly deployed DevOps environment.
The good news is most organizations are willing to share their own best practices; the bad news is that most development teams are already understaffed and as a result, there is very little time for admins to think about, build, test, optimize and implement all the changes that are required to successfully move to a DevOps process. That includes not only taking the time to learn about the process, but also deciding how to realign existing processes and skills to fit a new DevOps model. One should not underestimate the required change in culture to adopt the new mindset, either.
For most organizations and IT decision makers, these challenges are likely off-putting and intimidating. However, while DevOps may be the operational mentality of the future, its broad implementation within traditional data centers is not urgent; in fact, a successful transition can often take several months. That doesn’t mean organizations can’t start preparing themselves now. To help ensure a more seamless move to a DevOps environment down the road, businesses can start by understanding and adopting the underlying principles that make DevOps great:
End-to-End Monitoring and Automation
A DevOps process requires everything to be monitored and automated. Visibility across the application stack and into everything that drives performance is critical for speed and collaboration. The impact of every change should be known. To move faster, code deployments, tests, monitoring, alerts and more should be automated. IT services, too, should embrace self-service for users and focus on what matters.
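As a concrete, if simplified, illustration of the “monitor everything and automate the response” idea, the sketch below polls a service endpoint and raises an alert when latency crosses a threshold. The endpoint, threshold, and alert mechanism are assumptions made for the example, not any particular vendor’s tooling:

```python
# Minimal monitoring-and-alerting sketch; the endpoint, threshold, and alert
# destination are illustrative assumptions, not a specific product's behavior.
import time
import urllib.request

SERVICE_URL = "http://localhost:8080/health"   # endpoint to watch (assumption)
LATENCY_THRESHOLD_S = 0.5                      # alert if slower than this
CHECK_INTERVAL_S = 30                          # seconds between checks

def check_once(url: str) -> float:
    """Return response time in seconds; raise if the service is unreachable."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return time.monotonic() - start

def alert(message: str) -> None:
    """Stand-in for a pager or chat integration; here it just prints."""
    print(f"[ALERT] {message}")

if __name__ == "__main__":
    while True:
        try:
            latency = check_once(SERVICE_URL)
            if latency > LATENCY_THRESHOLD_S:
                alert(f"{SERVICE_URL} is slow: {latency:.2f}s")
        except Exception as exc:
            alert(f"{SERVICE_URL} is unreachable: {exc}")
        time.sleep(CHECK_INTERVAL_S)
```

In a real DevOps pipeline the same principle extends to code deployments and tests: every change is pushed through an automated pipeline, and monitoring data like the above is what tells the team whether a change helped or hurt.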
Collaboration
At the heart of a successful DevOps mentality and process is collaboration. Because the ultimate objective is to provide the end user with peak application performance, silos do not work. If an application is down, everyone has failed. There is no database team. There is no virtualization team. There is no storage team. There is only the development/operations team, and they are responsible for the performance of applications. This requires transparency, visibility, a consistent set of tools and teamwork. Breaking down silos between traditional data center teams and aligning them behind end user performance goals will help organizations prepare to integrate and manage their development and operations teams down the road.
Speed and Service Orientation
Taking agility one step further, shorter, iterative processes allow teams to move faster, innovate and serve the business more effectively. Development to production cycles go from months to hours. This is a key benefit of DevOps, but it should be noted that despite a focus on speed through “sprint projects,” leveraging a DevOps mentality does not impact an organization’s ability to execute on larger, long-term projects. In addition, there should be no monolithic applications. Everything is a service, from application components to infrastructure. Everything is flexible and ready to scale or change.
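To make “everything is a service” slightly more tangible, here is a deliberately tiny sketch of an application component packaged as its own independently deployable service with a health endpoint. The port and response payload are arbitrary choices for the example (the endpoint matches the monitoring sketch shown earlier):

```python
# Toy "component as a service" sketch using only the Python standard library.
# Real services would add metrics, logging, and deployment automation on top.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Listen on port 8080; the monitoring sketch above polls this endpoint.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```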
Application and End User Focus
For every business preparing to adopt a DevOps mentality, end users and the applications they rely on are the singular focus everyone must align behind. The performance of every line of code and the metrics of each component of the stack are only relevant based on how they affect application performance. Performance needs to be a discipline. The integration of development, operations and QA teams is intended to speed software updates, changes, deployments and time-to-resolution for bugs—all of which deliver a better end-user experience.
There’s no doubt that the concept of DevOps is picking up steam and making its way into the traditional on-premises IT department. Although the transition to a DevOps environment does not take place overnight, and there are significant challenges to be aware of before beginning a transition, by leveraging these principles, businesses can be well on their way to reaping the benefits of an integrated DevOps mentality.
Ultimately, despite being a difficult term to define, DevOps is a positive organizational movement that will help businesses empower IT departments to innovate, and it has the potential to change traditional data centers as we know them.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:46p
Azerbaijan’s Internet Goes Dark after Data Center Fire
Most network connections in Azerbaijan went dark for several hours Monday, following what the country’s communications officials said was a fire in the data center of the country’s main service provider, Delta Telecom.
The former Soviet republic’s internet went down around 4 pm local time, according to reports by the Azeri news agency Trend and BBC Russian Service. The incident disrupted nearly all internet connectivity in the country.
Renesys, a company that tracks global internet connectivity, said 78 percent of Azerbaijan’s networks were affected. All of the 600-plus networks that went dark reached the internet through the same connection: a link between Delta Telecom and Telecom Italia Sparkle.
According to Renesys, Azerbaijan is among the countries whose risk of an internet shutdown is significant because of the small number of networks that link it to the internet. The risk level is similar in its neighbors Iran, Georgia, and Armenia, as well as in Saudi Arabia.
Renesys puts countries with fewer than 10 cross-border service providers in the category of countries under significant risk of internet shutdown. About 70 countries are in this category.
About 60 countries are in the “severe risk” category, including Syria, Tunisia, Turkmenistan, and Libya. These countries have only one or two companies at their “international frontier,” according to Renesys.
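Renesys’s full methodology isn’t spelled out here, but the two thresholds mentioned above can be expressed as a toy classification. Anything outside those two categories is simply labeled “lower risk” in this sketch, and the example countries are made up:

```python
# Toy classification based only on the thresholds described above:
# 1-2 providers at the international frontier -> "severe risk",
# fewer than 10 -> "significant risk". Any finer-grained categories
# Renesys may use are not modeled here.
def shutdown_risk(cross_border_providers: int) -> str:
    if cross_border_providers <= 2:
        return "severe risk"
    if cross_border_providers < 10:
        return "significant risk"
    return "lower risk"

if __name__ == "__main__":
    for country, providers in {"Example A": 1, "Example B": 6, "Example C": 40}.items():
        print(f"{country}: {providers} providers -> {shutdown_risk(providers)}")
```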
Azerbaijan’s internet outage affected one of the country’s three mobile network operators, Azercell, which was connected to Delta Telecom. Two other operators, Azerfon and Bakcell, weren’t affected because they were connected to another telco, AzerTelecom.
Networks used by Azeri banks and the largest oil and gas fields were unaffected, according to Trend.
6:58p
EMC Launches New Core Offerings for Data Center Cloud Integration
This post originally appeared at The Var Guy
EMC unveiled several new hybrid cloud-centric products and solutions on Tuesday as the storage company looks to provide customers with more ways to connect primary storage and data protection systems to private and public clouds.
The offerings, all of which are available immediately, aim to provide customers with the best of both the public and private cloud, according to EMC. Users can access the public cloud for speed and scalability, while simultaneously using their private cloud for control and security in a best of both worlds-type scenario.
Straight from the press release, the new products and solutions include:
- Data tiering to/from the cloud via EMC VMAX and VNX storage platforms, including enhancements to the FAST.X tiering solution for VMAX to allow customers to automatically tier to public clouds from both EMC and non-EMC storage. Those looking to cut costs can also substitute VNX for VMAX in their infrastructure configuration and add EMC VPLEX cloud tiering. Both VMAX and VNX offer expanded support for private and public cloud providers, with support for VMware vCloud Air, Microsoft Azure, Amazon S3 and Google Cloud Platform.
- Users can leverage CloudBoost 2.0 to extend their existing EMC data protection solutions and create more elastic, scale-out storage, according to EMC. CloudBoost also features enhanced performance, scalability and manageability, as well as an estimated 3x faster throughput and 15x more data capacity than previous versions.
- Improved restore and security capabilities for Spanning by EMC
- New features added to EMC’s Data Domain operating system 5.7, including enhanced capacity management, secure multi-tenancy and dense shelf configuration
- EMC announced the next generation of its NetWorker data protection software with NetWorker 9. The latest version includes a universal policy engine and integrates with EMC ProtectPoint to deliver integrated block-level protection for Microsoft and Linux environments, according to the announcement.
“Many businesses have seen huge benefits from strategically moving data and workloads to the cloud. However, this often means sacrificing some control over the data,” said Guy Churchward, president of Core Technologies at EMC, in a statement. “Only EMC has the breadth and depth of portfolio to empower customers to take control of their data and achieve the greatest efficiencies.”
Dell recently agreed to acquire EMC for a record-breaking $67 billion, a deal that would effectively create “the world’s largest privately controlled, integrated technology company.”
This first ran at http://thevarguy.com/secure-cloud-data-storage-news-and-information/emc-launches-new-core-offerings-data-center-cloud-int
10:16p
Canonical Releases OpenStack Autopilot Open Source Cloud Management
This post originally appeared at The Var Guy
Bringing OpenStack private clouds to the masses is the pitch behind Canonical’s latest move in the open source cloud computing market. Today, the company released OpenStack Autopilot, a cloud management tool for Ubuntu Linux.
Autopilot, which has been available in beta form since last year and now enters general availability, is a feature in Landscape, Canonical’s platform for managing deployments of Ubuntu systems. It extends the product to include support for automated OpenStack deployment and administration using Ubuntu servers. It can set up clouds, add new hardware to existing clouds and assist with cloud management.
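Autopilot itself is driven through Landscape’s interface, and the article doesn’t show what that looks like. As a loose illustration of the kind of day-to-day administration that follows deployment, the sketch below uses the openstacksdk Python library (which is separate from Autopilot) to run a basic sanity check against a running OpenStack cloud; the cloud name “mycloud” is assumed to be defined in a clouds.yaml file:

```python
# Sanity-check a deployed OpenStack cloud with openstacksdk.
# Assumes a "mycloud" entry exists in clouds.yaml; openstacksdk is not
# part of Autopilot and is used here purely for illustration.
import openstack

conn = openstack.connect(cloud="mycloud")

# List compute services and their status as a basic health check.
for service in conn.compute.services():
    print(service.binary, service.host, service.status)

# List images registered in the cloud's image service.
for image in conn.image.images():
    print("image:", image.name)
```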
Autopilot also offers the ability to run OpenStack administrative services inside dedicated containers via LXD, Canonical’s home-grown container hypervisor.
In addition, it supports “a range of software-defined storage and networking options,” according to Canonical. Those now include OpenDaylight, an open source SDN controller that has been added to Autopilot with the general-availability release.
The company is pitching Autopilot as a way for organizations to build private clouds using OpenStack without having to break the budget by hiring OpenStack expertise.
“Economics are the prime driver of cloud adoption; for public and also for private cloud,” said Ubuntu founder Mark Shuttleworth. “The OpenStack Autopilot transforms the economics of private cloud, enabling every institution to create its own private cloud without hiring specialist OpenStack skills and without any third-party consulting.”
This first ran at http://thevarguy.com/open-source-application-software-companies/canonical-releases-openstack-autopilot-open-source-cloud-