Data Center Knowledge | News and analysis for the data center industry
Wednesday, January 18th, 2017
Who Leased the Most Data Center Space in 2016?

The short answer is Microsoft. The second-largest cloud service provider signed six of last year’s largest wholesale data center leases with five landlords in five markets, according to the latest market report by the commercial real estate firm North American Data Centers.
Microsoft and, to a lesser extent, Oracle were together responsible for a 25-percent increase in leasing activity over 2015. According to NADC, that increase represents a “historical high.”
Cloud providers and other tech companies with hyperscale internet platforms have completely changed the dynamics of the data center services market in recent years in the US and beyond. As they race to expand capacity, the likes of Microsoft, Amazon Web Services, Uber, and Oracle have created supply shortages in top US markets, driving unprecedented growth in the wholesale data center business. Wholesale market growth now outpaces growth in retail colocation, according to a recent report by Structure Research.
Here are the 10 biggest data center leases signed in 2016, according to North American Data Centers:
- Microsoft: 35 MW with CloudHQ in Manassas, Virginia
- Microsoft: 30 MW with EdgeConneX in Elk Grove Village, Illinois
- Microsoft: 22 MW with CyrusOne in Ashburn, Virginia
- Microsoft: 16 MW with DuPont Fabros Technology in Santa Clara, California
- Microsoft: 13.5 MW with CyrusOne in Phoenix, Arizona
- Microsoft: 9 MW with CyrusOne in San Antonio, Texas
- Oracle: 6 MW with CyrusOne in Sterling, Virginia
- Salesforce: 6 MW with QTS in Dallas, Texas
- Oracle: 5 MW with CyrusOne in Ashburn, Virginia
- Oracle: 5 MW with Digital Realty Trust in Ashburn, Virginia
Why AWS Isn’t on the List
The company most obviously missing from NADC’s report is AWS, the chief rival to Microsoft (and really anybody in cloud or any other IT infrastructure outsourcing business). That’s because Amazon doesn’t generally lease a typical wholesale data center product, Jim Kerrigan, managing principal at NADC and the report’s author, told Data Center Knowledge in an interview.
Amazon did a lot of deals in 2016, especially in Northern Virginia, the country’s hottest and largest data center market, but the company doesn’t lease data centers that are already fit out with electrical and mechanical infrastructure. It leases powered-base buildings – buildings that are connected to electrical feeds, have access to fiber-optic infrastructure, and have all the planning permissions in place. The company usually fits the space out with all the necessary data center infrastructure on its own.
Since Amazon is extremely secretive about its cloud data centers, it’s impossible to tell with full certainty that any particular lease it signed was for a data center and not, say, for offices or a distribution center, Kerrigan explained. Because the deals are not typical data center leases, the company’s rates are much lower, making it even harder to deduce what any particular building will be used for.
Microsoft Leasing on Hold, for Now
While Microsoft’s leases are clearly for data center use, what is unclear is which of its many services the facilities will support. Whether it’s Azure, Office 365, or Xbox, all of these services require a hyperscale data center platform.
All in all, Microsoft leased 125.5 MW of data center capacity last year across Northern Virginia, Chicago, Silicon Valley, Phoenix, and San Antonio markets. Its biggest deal – and the biggest single data center lease signed in 2016 – was for 35 MW in Manassas, Virginia, with CloudHQ, a new company launched by Hossein Fateh, co-founder and former CEO of DuPont Fabros Technology, a long-time wholesale data center provider and one of the biggest players in Northern Virginia.
Microsoft’s second-largest deal was for 30 MW with EdgeConneX in Elk Grove Village, Illinois, outside of Chicago. This was also the second-largest deal of the year. More on this deal here: With Microsoft Data Center Deal, EdgeConneX Takes on Wholesale Giants
The company is not expected to continue leasing data center space at the same rate as it did in 2016, however. “They did put all of US on hold at the end of the year,” Kerrigan said, adding however that it’s always hard to tell how long such a hold may last.
Oracle Ramps Up Data Center Leasing
If Microsoft has leased so much data center space that it has to take a pause, Oracle, a newcomer to the world of hyperscale cloud data center leasing, is just getting started. Last year saw a burst of leasing activity by Larry Ellison’s enterprise software giant that is now investing a ton of money into its effort to take on Amazon and Microsoft in the cloud services market.
The company hired many of the same people that worked on building Amazon’s, Microsoft’s, and Google’s cloud platforms to design its own platform. Oracle launched the new cloud platform in the second half of last year, starting with a single availability region served out of three data centers in the Phoenix market.
It signed seven wholesale data center leases in 2016, totaling more than 30 MW. The bulk of this capacity is in four Northern Virginia data centers with three different providers: CyrusOne, Digital Realty Trust, and RagingWire. This capacity apparently supports the new Virginia availability region, one of three the company announced Tuesday. The other two are in the UK and Turkey.
More on Oracle’s new cloud platform and data center strategy here: Oracle’s Cloud, Built by Former AWS, Microsoft Engineers, Comes Online
Construction Not Keeping Up With Demand
The rate of wholesale data center leasing by cloud and internet giants is spurring concerns of supply shortages in the four top markets – Northern Virginia, Chicago, Dallas, and Silicon Valley – as construction has not been able to keep up with demand.
The difference in total supply between 2015 and 2016 was incremental, while 2016 saw record-breaking leasing in Chicago and Virginia, Kerrigan said; there was more capacity under construction across the country in 2015 than there was in 2016.
“Lack of product in Virginia and Chicago could really hurt,” he said. “Supply creates demand.”
You can read North American Data Centers’ full 2016 market report and 2017 forecast here.
How the Chinese Data Center Market is Evolving

Oliver Jones is CEO of Chayora.
China, home to more than 1.3 billion people, is the most populous nation in the world and a major contributor to global advancements in science and technology. Representing roughly one-quarter of the world’s online population today and projected to be nearly one-third within five years, China has also rapidly become an influential player in the global internet ecosystem. Most notably, the Chinese data center market is currently on the rise, so much so that research analyst firm Technavio predicts a Compound Annual Growth Rate (CAGR) of 13 percent over the next four years. As an increasing number of multinational and domestic enterprises turn to Cloud Services Providers (CSPs) and colocation solutions, the Chinese data center market must continue to evolve, providing the necessary space, power, redundancy and low latency to meet market demands.
End-user demand for data centers in China now exceeds the available supply as organizations seek enhanced connectivity and scalable solutions for their growing businesses. Compared to the global market, very few of the nation’s data centers provide the space and power density necessary to support the needs of today’s technology-dependent organizations. Government investments, earmarked to stimulate China’s technology development, have led to an increase in the adoption of cloud-based services, Big Data analytics, and the Internet of Things (IoT), while recent government reforms, including the establishment of a free-trade zone in Shanghai, are attracting international investors.
The growing demand for high-density, redundant facilities throughout China is precipitating a shift in the design and development of the country’s data centers. Thanks to emerging cloud technologies, outsourcing of IT infrastructure services has increasingly become commonplace, further fueling growth throughout the colocation industry. In an attempt to match global standards, facility developers are setting their sights on Shanghai and Beijing, as well as the major cities and increasingly understood markets of Tianjin, Nanjing, Hangzhou, Guangzhou and Shenzhen, which have major network hubs and where power, capacity and high-bandwidth connectivity is available.
Those looking to colocate in China have to do their homework before deciding on the appropriate data center provider. So, for the many international investors seeking a facility that is purpose-built to meet their individual needs, where do they begin?
Data center solution providers must first have a license to operate a data center facility in China as well as secure and reliable funding to complete the project. However, the market has many wider challenges: from land or building acquisition processes, to negotiating fiber and power agreements, to achieving international customer service standards, to state security compliance, and more. Some providers exist at the cutting-edge of the market’s transformation, leveraging an experienced management team, operational IDC licenses, sites in key cities as well as secured funding. They do not compete with traditional colocation players, but instead are agents of change, providing solutions for customers looking to access the Chinese market while bypassing under-scaled, expensive and unsuitable solutions.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Creating Efficiency with Converged Infrastructure: How to Reduce Overall Space

Sponsored by: Dell and Intel
Today, converged infrastructure aims to unify powerful data center resources and introduce new levels of economics for the business. It’s worth examining how far data delivery has advanced, how new types of devices are connecting into the data center, and how all of this impacts your business.
Converged systems are powerful platforms that took the industry by storm when introduced. With built-in automation, high-density architecture, and high-performance chassis, converged systems help architect a very robust cloud and storage environment. The idea is to create unparalleled density and allow for resources to be delivered as effectively as possible. Furthermore, industry trends show the pace of converged systems adoption will only continue to grow. According to a recent Gartner report, hyper-converged integrated systems will represent over 35% of total integrated system market revenue by 2019.
Consider this: in a recent Dell EMC | Intel survey looking at the most modern data center trends, we see exactly why so many organizations are deploying hyperconverged and converged infrastructure systems. The #1 reason for adopting hyperconverged infrastructure (HCI), for example, is to help reduce overall data center space, with 55% of respondents indicating that it’s their main concern. Another 30% cited both better density for virtualization and reduced deployment risk.
There are very real reasons we’re seeing this level of growth.
- Converged Infrastructure Enables Enterprise Scalability. With converged infrastructure (CI) you see the integration of core resources and delivery technologies. These are no longer segmented systems sitting in silos within your data center. Because of this tight integration, administrators can quickly deploy more infrastructure to support business use-cases. Most of all, this level of rapid scale helps organizations properly utilize resources as they are delivered to applications, desktops and users. We’ll touch on this later – but CI is deployed in efficient (validated) building blocks. You can effectively forecast your level of scale as your business needs grow.
- Enabling Greater Amounts of Density. A great way to create better ROI and reduce space is to efficiently place more users on less gear. CI gives users this option by combining key resources into one management plane. With converged infrastructure – you create a mechanism which can host apps, desktops, and a variety of other use-related use-cases. This means you can support more users while still reducing your overall data center footprint. In working with CI, you’re not just placing more users onto an infrastructure. With VDI and virtualization, you’re removing legacy end-points and providing even better user experiences than before. Today’s CI architectures combine best-of-breed systems to handle more users, while still optimizing user experiences.
- Reducing Deployment Risk and Size. Converged infrastructure is deployed in pre-validated blocks of architecture which are referenced and tested to work with a variety of deployment scenarios. This means that organizations are working with technologies that have been tested and verified to work in their specific use-case. This reduces deployment complexity, significantly lowers the risk of making a mistake, and ensures that the piece of architecture you have not only deploys properly but can also scale. Risk is the factor that often slows down critical deployments or puts the brakes on great IT projects. With CI, you mitigate that risk with validated designs for your specific IT and business needs. This means the environment is sized, configured, and oftentimes validated before it even goes into your ecosystem.
- Saving on Data Center Real Estate. When working with modern converged infrastructure solutions, you create new levels of scale and density while making the environment easier to deploy. An added benefit supports new IT initiatives around reducing data center footprints. CI allows you to remove legacy infrastructure to enable greater amounts of IT flexibility. Remember, with this reclaimed space, you’re capable of optimizing cooling requirements, power needs, and even management. Finally, CI can be deployed in a wide variety of sizes. This means you can support larger as well as smaller (branch) data center locations. Instead of just putting some heterogeneous gear at a branch location, smaller CI nodes can integrate with the overall infrastructure while still keeping the footprint small.
New types of converged systems are helping define the next-generation data center. Organizations looking to create cloud-scale ecosystems must look to convergence to help them evolve. These kinds of systems help manage resources, reduce IT costs, and create real competitive advantages for the business.
HPE Acquires Hyperconverged Infrastructure Startup SimpliVity for $650M

By The VAR Guy
Hewlett Packard Enterprise (HPE) today announced a $650 million acquisition of hyperconverged infrastructure (HCI) startup SimpliVity. The deal comes after HPE spent much of the last year streamlining its business capabilities, shedding business units that did not align with its emphasis on cloud-based infrastructure, storage and servers.
HPE’s move to acquire SimpliVity, which consistently gained traction last year in a race for market dominance in a small but fierce field of competitors that includes rival Nutanix, falls right in line with its new strategic focus. According to Gartner, the HCI market is projected to reach nearly $5 billion, or 24 percent of the integrated systems market, by 2019, making it the fastest-growing segment of the overall market for integrated systems. HPE clearly wants a big piece of the software-defined infrastructure action, and this deal will help it get there.
“This transaction expands HPE’s software-defined capability and fits squarely within our strategy to make Hybrid IT simple for customers,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise, in a statement. “More and more customers are looking for solutions that bring them secure, highly resilient, on-premises infrastructure at cloud economics. That’s exactly where we’re focused.”
Related: Incumbents are Nervous about Hyperconverged Infrastructure, and They Should Be
Last year, SimpliVity’s big rival Nutanix went public with a splash. SimpliVity had raised over $276 million and at one point was one of the so-called “unicorns,” startups with a valuation of over $1 billion. Rumors circulated last year that it, too, might be considering an IPO.
“Over the past 8 years we’ve been on an incredible journey and joining HPE is the logical next step for SimpliVity,” said Doron Kempel, Chairman and CEO, SimpliVity, in a statement. “HPE’s broad sales reach, extensive partner channel, complementary technology and commitment to innovation will accelerate SimpliVity’s journey and significantly strengthen our ability to deliver the best-in-class hybrid IT solutions our customers are looking for.”
HPE will continue to offer current customers and partners its existing hyperconverged products, the HC 380 and the HC 250, and there will be no immediate change in the product roadmap for SimpliVity customers. The deal is expected to close in the second quarter of HPE’s 2017 fiscal year, ending on June 30, and HPE intends to offer the SimpliVity OmniStack software qualified for its ProLiant DL380 servers within 60 days of closing. In the second half of 2017, the company will offer a range of integrated HPE SimpliVity hyperconverged systems based on HPE ProLiant servers.
This article originally appeared here, on The VAR Guy.
Open Source Serverless Computing Frameworks, and Why They Matter

By The VAR Guy
Serverless computing is fast becoming one of the hottest trends in the channel since the cloud. What is the open source ecosystem doing to keep pace with the serverless trend, and why does it matter? Here’s a look.
Serverless computing is a paradigm in which developers deploy and run code on demand, without having to maintain a backend server at all. The term is a little misleading; serverless computing does not mean there is no server involved. A server still runs your code, but you don’t have to think about the server when deploying it.
When it comes to deploying apps, serverless computing offers some key advantages. It eliminates the need to set up and maintain a virtual server in the cloud. In addition, because your serverless code runs on demand rather than continuously, you only have to pay for computing time when you are actually using it.
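To make the paradigm concrete, here is a minimal sketch of a serverless function in the style of an AWS Lambda Python handler (the `event`/`context` signature is Lambda’s convention; the `name` field and the greeting logic are purely illustrative):

```python
import json

def handler(event, context):
    """Entry point the platform invokes on demand.

    There is no server process for you to manage: the platform calls
    this function per request, passing the request payload in `event`
    and runtime metadata in `context`.
    """
    name = event.get("name", "world")  # illustrative request field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler runs only when invoked, you are billed for execution time rather than for an always-on virtual server.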
AWS Lambda was the first major serverless computing platform to debut, in 2014. Other cloud vendors have followed suit: Microsoft now offers Azure Functions, and Google has introduced Cloud Functions. Expect the list of serverless vendors to keep growing fast as more organizations look to leverage the advantages of the serverless model.
Serverless and Open Source
Most serverless platforms available today — including the big three mentioned above — run on closed-source code. But the open source world is not sitting idly by as more computing becomes serverless.
Fission is the open source ecosystem’s major response to the serverless revolution. Fission works in conjunction with Kubernetes, the open source orchestrator for container clusters. It allows you to run serverless code on a Kubernetes cluster on demand.
That cluster can be one you build yourself on premises or in the cloud. Or, you can use Fission in conjunction with a managed Kubernetes service.
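As a sketch of what running code on Fission looks like: by convention, Fission’s Python environment imports your module and calls its `main()` for each request, returning the result as the response body (a convention taken from the early fission/python-env examples; verify it against your Fission version):

```python
# hello.py -- a minimal function for Fission's Python environment.
# The environment container imports this module and invokes main()
# on each incoming request; the return value becomes the HTTP body.
def main():
    return "Hello from Fission!\n"
```

Deployment is then roughly a matter of registering the environment and the function with the Fission CLI, e.g. `fission env create --name python --image fission/python-env` followed by `fission function create --name hello --env python --code hello.py` (CLI syntax from the early Fission docs; flags may differ in your release).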
The big downside to Fission is that it depends on Kubernetes. It’s not a totally pure-play serverless solution. Still, Fission is an important step forward for organizations that want to take advantage of serverless computing solutions, but don’t want to rely on closed-source platforms.
The Effe project is also working to build an open source serverless computing solution. For now, Effe remains basic, and, like Fission, it is designed to work as part of a containerized environment. But it doesn’t depend strictly on Kubernetes.
And then there is OpenWhisk, which is probably the most influential open source serverless framework to emerge to date. IBM now offers a hosted OpenWhisk service as part of the Bluemix cloud.
So far, the closed-source serverless solutions hosted by the big cloud providers are dominating the market. I have not heard much about anyone shifting production workloads to Fission or other open source serverless frameworks (although OpenWhisk on Bluemix has become an increasingly big deal since it became generally available late last year). But I suspect that will happen as serverless computing becomes more popular.
Why It Matters
Why? What’s the point of adopting an open source serverless solution, rather than using one of those hosted in the public cloud?
That’s a fair question to ask. For many organizations, the primary reason for adopting serverless computing is that you save time and money by not having to set up and maintain infrastructure. You get that benefit whether your serverless platform is powered by closed or open source software.
Still, there are advantages to keeping your serverless backend open. For one, you get more freedom in deciding how to deploy your serverless code. Solutions like AWS Lambda require you to run serverless functions in the AWS cloud, on special servers designed for Lambda. With Fission, in contrast, you can do serverless anywhere you want — in your own cloud, in a virtual server on a public cloud, or on a plain-old bare-metal server in your own office, if you want.
Keeping everything open in the serverless backend will also facilitate more integrations and standardization. Rather than being limited to using only the programming languages that a particular serverless vendor supports, or only the monitoring tools that the vendor makes compatible with its serverless platform, an open source solution will theoretically allow you to customize and extend your serverless computing stack. It maximizes your freedom.
Last but not least, open source serverless platforms may provide some assurance to people who see the cloud as a threat to open source software. As Richard Stallman has noted, cloud computing presents special types of challenges to open source (or, to use Stallman’s preferred term, free software). That’s because when an app is hosted in the cloud, users usually lack the ability to control or modify the app, even if the app is open source.
An open source serverless framework doesn’t nullify that issue. But it at least keeps the server framework that hosts your app fully open. That provides an assurance to your end users, because they will know that they can review and study the source code of the platform that powers the serverless functions that help deliver your app — even if they are unable to control the app itself.
This article originally appeared here, on The VAR Guy.
Oracle Sued by US Over Alleged Discriminatory Pay, Hiring

(Bloomberg) — Oracle America Inc. was sued by the Obama administration over claims it pays white, male workers more than other colleagues doing the same jobs.
The company’s compensation policies discriminate against women and black and Asian employees, the U.S. Labor Department said in an administrative complaint filed Wednesday. The agency also alleged Oracle favors hiring Asian workers for product development and other technical jobs.
If Oracle doesn’t change its practices, the department will seek to cancel the company’s federal government contracts, worth hundreds of millions of dollars, and bar it from winning new ones, according to a government statement.
“The complaint is politically motivated, based on false allegations, and wholly without merit,” Oracle spokeswoman Deborah Hellinger said in an e-mailed statement.
“Oracle values diversity and inclusion, and is a responsible equal opportunity and affirmative action employer,” Hellinger said. “Our hiring and pay decisions are non-discriminatory and made based on legitimate business factors including experience and merit.”
The case against Oracle comes on the same day the Obama administration took two last-minute swipes at JPMorgan Chase & Co., accusing the lender in separate lawsuits of discriminating against minorities in home lending and against its own female employees by paying them less than their male counterparts.
The bank disputed both sets of claims and pledged to fight the gender lawsuit, while agreeing to pay $55 million to settle the race case, according to a person familiar with the matter.