Data Center Knowledge | News and analysis for the data center industry
Thursday, February 23rd, 2017
4:00p | Diamanti Launches Hyperconverged Infrastructure Appliance, Raises $18M
Diamanti, a data center hardware and software startup founded by former Cisco engineers, has launched its first product, a hyperconverged infrastructure appliance that automates deployment of containerized applications, and closed a Series B funding round, raising $18 million.
The company is up against serious competition in the relatively young hyperconverged infrastructure space. Last year’s IPO by Nutanix, one of the two leading players in the space, was followed this year by Hewlett Packard Enterprise’s acquisition of the other leader, SimpliVity. Diamanti’s focus on containers may serve to differentiate it from these more established brands, both of which have focused on more traditional hypervisor-based virtualization.
Read more: Incumbents are Nervous About Hyperconverged Infrastructure, and They Should be
Application containers are a long-existing technology whose popularity among developers surged several years ago with the advent of Docker, the open source project and company that created a container standard and tools for developers to use it. The technology is still relatively immature, however, and while developers love Docker, much of the existing IT infrastructure in corporate data centers has not been set up for it, according to Diamanti.
“Until now, we found that IT operators have been forced into complex and expensive ‘do-it-yourself’ Docker implementations, since traditional network and storage technology was built for virtual machines rather than containers,” Thorsten Claus, partner at Northgate Capital, a new investor that led the startup’s latest funding round, said in a statement.
See also: Understanding Hyperconverged Infrastructure Use Cases
Diamanti’s D10 appliance offers service-level guarantees for networking and storage infrastructure running containerized applications.
The appliance includes compute, storage, and networking resources necessary to run containers and comes with all the necessary software, according to the announcement. It uses open source Kubernetes as the container orchestration software.
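Diamanti has not published code for its appliance, but as a rough, hypothetical illustration of what handing a containerized application to Kubernetes for orchestration looks like, the Python sketch below uses the official Kubernetes client to submit a small Deployment. The image name, replica count, and cluster access via a local kubeconfig are all assumptions for the example, not details of Diamanti’s product.

from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from the local kubeconfig (assumption)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # ask the orchestrator to keep three copies of the container running
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="demo-app",
                    image="registry.example.com/demo-app:1.0",  # hypothetical image name
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# Submit the desired state; Kubernetes schedules the containers onto cluster nodes.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)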
The startup named NBCUniversal and MemSQL as two of the early adopters of its technology.
One of Diamanti’s founders, Jeffrey Chou, used to be director of engineering at Cisco, according to a Silicon Valley Business Journal report. Another founder, Amitava Guha, worked for a startup acquired by the network technology giant in 2008. A third founder, Luis Robles, comes from the venture capital firm Sequoia Capital.
The latest funding round brings the startup’s total money raised to about $30 million. CRV, DFJ, Translink, and GSR Ventures took part in the round in addition to Northgate.
4:30p | Global Switch Secures £425M Credit Facility as it Pushes into Chinese Data Center Market
Global Switch, the London-based wholesale data center giant focused on European and Asia-Pacific markets, has closed a £425 million credit facility with an international bank syndicate, up £50 million from its previous borrowing arrangement, the company announced this week.
Global Switch, the world’s second-largest wholesale provider by market share, is gearing up for another expansion phase, after a consortium of Chinese companies bought a 49 percent stake in it last December for £2.4 billion.
The consortium, called Elegant Jubilee, is now splitting control of the company (albeit unevenly) with Aldersgate Investments Limited, which is owned by Reuben Brothers, the investment and development firm of the famous British billionaires David and Simon Reuben. Simon Reuben sits on the data center company’s board. Global Switch expects the consortium to give it access to Chinese telcos and internet companies looking to expand outside of China.
Elegant Jubilee was put together by Li Qiang, a Chinese telecommunications and internet entrepreneur who holds a stake in the Chinese data center provider Daily-Tech Beijing. Investors in the consortium include China’s largest privately owned steel company, Jiangsu Sha Steel Group, Singapore-based asset manager AVIC Trust, as well as institutional investors Essence Financial and Ping An Group.
As part of the acquisition announcement in December, Global Switch said China Telecom Global had pre-leased space in Global Switch data centers under construction in Hong Kong and Singapore. That deal was done through Li’s Daily-Tech, which will also act as China Telecom’s service provider in the Singapore data center.
See also: Hong Kong: China’s Data Center Gateway to the World
Daily-Tech and Global Switch are also in talks with China Mobile about services in Global Switch data centers, although no specific deal has been announced.
The bank syndicate behind the new credit facility consists of HSBC, Barclays, Credit Suisse, and Deutsche Bank, as well as a new lender, Bank of China.
Global Switch had a 7.7 percent share of the global wholesale data center colocation market in 2016, making it second only to San Francisco-based Digital Realty Trust, whose market share was 20.5 percent, according to Structure Research.
Read more: Here are the 10 Largest Data Center Providers in the World
5:00p | The App Architecture Revolution: Microservices, Containers and Automation
Scott Davis is EVP of Engineering & Chief Technology Officer for Embotics.
With the explosive growth of cloud and SaaS-based business applications and services, the underlying software architectures used to construct these applications are changing dramatically. Microservices architecture is not a brand-new trend, but it has been picking up momentum as the preferred architecture for constructing cloud-native applications. Microservices provide ways to break apart large monolithic applications into sets of small, discrete components that facilitate independent development and operational scaling. Key to this architecture is making sure that each microservice handles one and only one function with a well-defined API. Microservices must also have no dependencies on each other except for their APIs.
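To make the “one function per service, exposed only through a well-defined API” idea concrete, here is a minimal, hypothetical sketch in Python using only the standard library; the service name, route, and catalog data are invented for illustration and are not from any vendor.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def quote_price(sku: str) -> dict:
    """The one and only function this service performs."""
    catalog = {"widget": 9.99, "gadget": 24.50}  # stand-in for the service's own private state
    return {"sku": sku, "price": catalog.get(sku)}

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The well-defined API: GET /quote/<sku> returns a small JSON document.
        if self.path.startswith("/quote/"):
            body = json.dumps(quote_price(self.path.split("/", 2)[2])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Peers depend only on the HTTP API above, never on this file's internals.
    HTTPServer(("0.0.0.0", 8080), QuoteHandler).serve_forever()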
When automation is mixed in as a dynamic management solution for the individual application components, applications become less limited by the infrastructure they run on. Through automation and infrastructure-as-code technologies, applications can now control their underlying infrastructure, turning it into services to be harnessed on demand and programmatically during application execution. While cloud-native pioneers such as Uber, Netflix, eBay, and Twitter have publicly embraced this method of building and delivering their services, many organizations aren’t sure where to begin when it comes to achieving effective and efficient operations through this app architecture revolution.
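As one hypothetical example of infrastructure being harnessed programmatically during execution, the sketch below has an application claim a block of persistent storage from a Kubernetes cluster through the official Python client, rather than through a manual provisioning request; the claim name and size are assumptions made for the example.

from kubernetes import client, config

config.load_kube_config()  # assumes cluster access via a local kubeconfig

# Desired storage expressed as code; the platform turns it into a real volume.
claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},  # illustrative name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "20Gi"}},  # illustrative size
    },
}

# The application requests infrastructure on demand instead of filing a ticket.
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=claim)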
Before microservices, it would take engineers months or years to build and maintain large monolithic applications; today, the microservices design methodology makes it easier to develop systems with reusable components that can be utilized by multiple applications and services throughout the organization, saving developers valuable time. This enables better continuous delivery, as small units are easier for developers to manage, test, and deploy.
In order to successfully deliver microservices and container solutions cost-effectively and at scale, it’s important to have a proper design framework in mind. Each microservice must have a well-formed, backward- and forward-compatible API and communicate with its peers only through their APIs. Each microservice should perform one and only one dedicated function. Each is ideally stateless and, if needed, typically has its own dedicated persistent state that is not exposed to others. When all of these principles are rigorously followed, each microservice can be deployed and scaled independently, because none requires information about the internal implementation of any other service – all that is required is a well-defined API.
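The backward- and forward-compatibility requirement can be illustrated with a small, hypothetical “tolerant reader” sketch in Python: unknown fields from newer clients are ignored rather than rejected, and fields that older clients omit are filled with defaults. The field names and defaults are invented for the example.

KNOWN_FIELDS = {"sku", "quantity", "currency"}  # fields this API version understands
DEFAULTS = {"quantity": 1, "currency": "USD"}   # values assumed for older clients

def parse_order(request: dict) -> dict:
    # Forward compatibility: silently drop fields added by newer clients.
    order = {k: v for k, v in request.items() if k in KNOWN_FIELDS}
    # Backward compatibility: fill in fields that older clients never sent.
    for field, default in DEFAULTS.items():
        order.setdefault(field, default)
    if "sku" not in order:
        raise ValueError("sku is required in every version of this API")
    return order

# An old client (no currency) and a new client (extra gift_wrap field) both keep working.
print(parse_order({"sku": "widget"}))
print(parse_order({"sku": "widget", "currency": "EUR", "gift_wrap": True}))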
At the same time, microservices are well matched to container technologies and are driving their adoption, as the two often work in conjunction. Each microservice has to run somewhere, and containers are often the preferred choice because they are self-contained, rapidly provisioned or cloned, and usually stateless. Developers can easily construct a container with all the code required to execute the microservice, allowing them to break a problem into smaller pieces, which was not previously possible at this scale. Containers offer developers a way to package each function into a self-contained block of code, creating efficient, isolated, and decoupled execution engines for each app and service.
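As a hypothetical sketch of that packaging model, the snippet below provisions one isolated container for a microservice with the Docker SDK for Python; it assumes a local Docker daemon and an illustratively named, already-built image.

import docker

client = docker.from_env()  # connect to the local Docker daemon (assumption)

container = client.containers.run(
    "registry.example.com/quote-service:1.0",  # hypothetical image: code and runtime in one package
    detach=True,                               # run in the background as its own isolated execution engine
    ports={"8080/tcp": 8080},                  # expose only the service's well-defined API
    environment={"LOG_LEVEL": "info"},
)
print(container.short_id, container.status)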
The problem? This creates many more component parts that need to be dynamically managed to achieve the promise of scalable, cost-effective cloud services. With microservices-based designs, developers and operations staff are left with many more components that need to grow and shrink independently. Automation can provide the dynamic management needed to reduce this complexity and to deliver microservices and container solutions cost-effectively and at scale.
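What that automation can look like in practice is sketched below, hypothetically: a small control loop that grows or shrinks one service’s replica count from a measured backlog via the Kubernetes API. The deployment name, the get_queue_depth() metric, and the scaling policy are all stand-ins, not part of any product described here.

import time
from kubernetes import client, config

def get_queue_depth() -> int:
    return 120  # stand-in for a real signal such as queue length or request rate

config.load_kube_config()
apps = client.AppsV1Api()

while True:
    # Simple illustrative policy: one replica per 25 queued items, between 2 and 20.
    desired = max(2, min(20, get_queue_depth() // 25))
    apps.patch_namespaced_deployment_scale(
        name="quote-service",          # hypothetical deployment name
        namespace="default",
        body={"spec": {"replicas": desired}},
    )
    time.sleep(30)  # re-evaluate every 30 seconds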
Microservices-based designs fundamentally enable faster development and deployment of highly scalable applications, whether for the cloud or on-premises. Flexible automation, via both portals and APIs, is the key ingredient for effectively deploying and managing these next-generation, distributed applications across today’s multi-cloud environments.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
8:37p | Interxion to Build £30M London Data Center Despite Brexit Concerns
While many global banks have devised plans to move operations out of Britain in case of a “hard Brexit,” or the country’s full exit from the European Union’s single market and the end of unrestricted cross-border travel, Interxion is demonstrating faith in the London data center market by committing to a mid-size but nevertheless costly construction project in the city.
The Amsterdam-based service provider announced Thursday a plan to spend £30 million on what will be the company’s third London data center. The facility will add about 20,000 square feet to the service provider’s central London data center campus.
Interxion also announced plans to build new data centers in Frankfurt and Stockholm, citing robust demand in all three of the major European markets. The company made the announcements in the run-up to its fourth-quarter and full-year 2016 earnings report scheduled for March 1.
Construction activity by data center providers in the London market was relatively slow last year, but several major cloud providers launched data centers there during that period. Brexit has created an unusual dynamic for data center operators in Britain: on the one hand, companies need IT infrastructure in the country to serve what will continue to be one of the continent’s biggest markets; on the other, they need to beef up their European capacity outside of the UK to serve what will likely become a separate market with its own regulations.
Interxion CFO Josh Joshi told investors earlier this year that Brexit had a silver lining for the company: it would push enterprises to use more public cloud services, which give them more flexibility over data location and leave them better prepared to deal with the current uncertainty. That, according to Joshi, will boost demand for Interxion’s services as cloud providers expand their capacity to serve those enterprises.
Still, the data center provider appears more bullish on other European markets than it is on the UK. Its recent investments in Germany and France, for example, overshadow the 20,000-square-foot London expansion. It is building two data centers in Frankfurt, which will add 63,500 square feet in total, and is adding 22,600 square feet across two expansion phases in Paris and 15,000 square feet in Marseille.
The new Stockholm data center announced Thursday will be its fifth in Sweden’s capital and add about 23,700 square feet of space. Stockholm is an important network interconnection location for traffic traveling between Western and Eastern Europe.
11:18p | Sabey Launches First Building in Booming N. Virginia Data Center Market
Sabey Data Center Properties, the Seattle-based wholesale data center developer, has completed and tested its first Northern Virginia data center. The facility is the first of three the company is planning to build at its Intergate.Ashburn campus, which at total build-out is expected to reach 900,000 square feet, with capacity to support about 70MW of power.
While Sabey has had an East Coast presence since 2011, when it acquired the Verizon tower in Manhattan, this week’s announcement marks its first foray into Northern Virginia, North America’s biggest and most active data center market. Data center space is in high demand in Ashburn, where much of the region’s data center capacity is located, and Sabey has already signed a tenant for a 1.8MW, 12,000-square-foot quadrant in the newly finished building, whose total capacity is 7.2MW.
The developer did not disclose the tenant’s name. Robert Rockwood, Sabey’s head for the eastern region, told us in an interview last year, while the first building was still under construction, that numerous existing customers using space in Sabey data centers in Washington State and New York were interested in Northern Virginia data center capacity.
Customers leased 84.4MW in total in 2016 from data center providers in Northern Virginia, according to a recent market report by the commercial real estate company CBRE Group. That figure does not count several pre-lease deals that were signed in 2016 but were not due to be delivered until this year; those deals would bring the region’s 2016 total to about 140MW.
The Dallas-Fort Worth market was second in total absorption last year, with 37.6MW of data center capacity leased, and Chicago was third, with 36.2MW, according to the report.
Together, data center providers in major US markets signed leases for a total of 195MW of capacity. Those markets are Atlanta, Chicago, Dallas-Fort Worth, New York-New Jersey, Northern Virginia, Phoenix, and Silicon Valley. That number represents a slight decline from the previous year, when companies leased 200MW of capacity in those markets.
Most of the data center leasing momentum across the country was fueled by hyper-scale cloud service providers, and that was especially true in the Northern Virginia data center market. In a statement, Jamie Jelinek, CBRE’s senior associate for Data Center Solutions, said:
“With the Ashburn area’s prominence as one of the most densely connected fiber-rich areas in the U.S., cloud service providers have dominated a significant portion of leasing in the region. In most cases, cloud provider requirements are being deployed in third-party facilities. Some of this is speed-to-market-driven to meet immediate customer needs. But it is also part of a larger ‘active-active’ cloud infrastructure strategy where redundancy needs are shifted away from each individual data center and spread out to the network and application level.”