Data Center Knowledge | News and analysis for the data center industry
Wednesday, November 5th, 2014
Equinix to Offer Private Links to Google’s Public Cloud
A new member has joined Equinix’s Cloud Exchange, and its name is Google Cloud Platform. Direct network links to Google’s public cloud are now available out of Equinix data centers in 15 markets, covering nearly all Equinix Cloud Exchange markets worldwide.
Equinix is one of the first partners to offer private, direct links to Google’s public cloud, giving enterprises a high-performance, secure connection that bypasses the public Internet. Google’s is the last of the big three public clouds to join the exchange, the other two being Microsoft Azure and Amazon Web Services.
The partnership between Equinix and Google is an enterprise cloud play. Such connectivity services make colo providers’ data centers more attractive for enterprises that are looking to stand up hybrid environments that combine dedicated servers and public cloud. The cloud providers benefit by making their offerings more palatable for security-conscious enterprise customers.
“We spent a lot of time early on investing in intellectual property in doing it right and doing it global,” said Chris Sharp, vice president of cloud innovation at Equinix. “Google’s endorsement is absolutely critical.”
Equinix will offer connections of up to 10Gb to Google cloud services and will manage multiple IP addressing methods for customers, whether customer-provided LAN addressing, Google-provided IP addressing, or both.
Direct links to Google’s cloud are launching in 15 initial Equinix markets: Amsterdam, Atlanta, Chicago, Dallas, Frankfurt, Hong Kong, London, Los Angeles, New York, Paris, Seattle, Silicon Valley, Singapore, Tokyo and Washington, D.C. The two markets not announced but expected are Sydney and Toronto.
Enterprises are increasingly employing a multi-cloud strategy as they figure out what workloads work best for what clouds. The Cloud Exchange is young, but Equinix is already seeing the multi-cloud trend.
The first group of Equinix customers to use direct links to public clouds combined private clouds sitting on their servers in colocation data centers with a single public cloud provider like AWS. Now customers are increasingly using multiple public clouds in addition to dedicated servers of their own.
“Each of these clouds have different types of customers and customer profiles,” said Sharp. “They all have feature functionality tuned to specific applications.”
“We can remove barriers around connectivity in a dynamic manner,” he said. “The Cloud Exchange removes some of the complexity of multi-cloud. There’s still the hybrid private and public appeal, but as customers dig into public cloud offerings, to have access to multi-cloud is more critical.”
The combination of private connectivity services to public clouds and the high volume of network provider connectivity in Equinix data centers makes them very attractive for enterprises. Other colo providers, such as Telx and CoreSite in the U.S., or Interxion and TelecityGroup in Europe, have similar models, but none has the geographic scale and the volume of customers the Redwood City, California-based giant has.
On the company’s latest earnings call, Equinix execs said connectivity services were now its fastest growing business in terms of revenue, and private links to public clouds were the fastest growing connectivity segment.
Enterprises want multiple clouds and are willing to spend the money
A recent survey, conducted by Dimensional Research, confirmed the multi-cloud trend and revealed that enterprises are now spending more on cloud. Nearly 80 percent of respondents among global enterprise IT leaders said they planned on implementing multi-cloud architectures in the coming year.
Interconnected colocation data center environments offering direct connections to multiple clouds are high on the enterprise cloud want list.
About three-fourths of respondents are budgeting for a multi-cloud migration strategy, and 85 percent said they strongly value direct connections to public clouds.
Driving the desire for private links is security, the top driver for 85 percent of respondents. This figure illustrates that enterprises are still wary of security in the public cloud, and that security concerns easily overshadow concerns about performance and reliability.
Cloud surveys used to be about whether or not an enterprise was considering cloud at all. Now 91 percent of respondents said new cloud-based offerings will be deployed in their organization over the next 12 months, and half said those new cloud-based apps will be deployed at a colocation provider, which makes for a secure near future for a company like Equinix.
Most important of all, the budget for cloud is increasing. About 75 percent of respondents expect a larger budget for cloud services in 2015 than they have now.
“What surprised us about this survey is how quickly multi-cloud strategies are becoming the norm worldwide,” said Ihab Tarazi, CTO of Equinix. “Businesses have discovered that colocation can provide many benefits, including the ability to connect securely to multiple clouds. In the future, we expect these companies will be able to point to quantifiable ROI as a result of these multi-cloud initiatives.”
Bitcoin Mining Data Centers Flock to Central Washington for Cheap Hydropower
As the bitcoin computing network expands, it is growing in the footsteps of the cloud computing industry. Locations with cheap land and power are attracting large bitcoin mines, which are often deployed near major cloud data centers, and sometimes inside their data halls.
A case in point: Central Washington state. Bitcoin ASIC Hosting said it is planning to build out 2.5 megawatts of data center capacity in the Wenatchee area, adding to an existing 1 megawatt of colocation space for bitcoin mining customers. And it’s not alone. MegaBigPower, which operates one of North America’s largest bitcoin mining operations, is planning a major expansion that could add more than 20 megawatts of capacity in Central Washington. Seattle-based HashPlex has also announced plans to deploy capacity in that part of the state.
Bitcoin ASIC Hosting already hosts some customers in leased space in a Dell data center in Quincy, the small farming community in Grant County that has emerged as a magnet for cloud computing infrastructure.
Microsoft and Yahoo have built some of the world’s most advanced data centers in Quincy, which is also home to data centers for Intuit, Sabey Data Centers, and Vantage Data Centers. The region is now emerging as a hotbed of activity for bitcoin hashing centers filled with high-density servers crunching transaction data. The growth of the two industries in Central Washington demonstrates how bitcoin entrepreneurs can benefit from piggybacking on cloud growth.
The lure of cheap, clean power
Why are these power-hungry server farms flocking to Central Washington? The region’s supply of cheap hydropower from dams along the Columbia River is the key to its appeal. Rates for hydro power in Quincy and Wenatchee can run as low as 3 cents per kilowatt hour, placing the region among the cheapest locations in the continental U.S.
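To put the 3-cent rate in perspective, here is a back-of-the-envelope sketch of monthly power bills at the facility sizes mentioned in this article. The 10-cent comparison rate is an assumed “typical” commercial figure, not a number from the article.

```python
# Back-of-the-envelope power costs for megawatt-scale mining facilities.
# Facility sizes (1, 2.5, 20 MW) and the 3-cent hydro rate come from this
# article; the 10-cent comparison rate is an assumed typical figure.
HOURS_PER_MONTH = 24 * 365 / 12  # roughly 730 hours

def monthly_power_cost(megawatts, dollars_per_kwh):
    """Monthly energy bill for a facility drawing `megawatts` continuously."""
    kwh = megawatts * 1000 * HOURS_PER_MONTH
    return kwh * dollars_per_kwh

for mw in (1, 2.5, 20):
    cheap = monthly_power_cost(mw, 0.03)    # Central Washington hydro rate
    typical = monthly_power_cost(mw, 0.10)  # assumed typical commercial rate
    print(f"{mw:>4} MW: ${cheap:,.0f}/month at 3 cents vs ${typical:,.0f}/month at 10 cents")
```

At 3 cents per kilowatt-hour, a 1 megawatt facility running flat out spends roughly $22,000 a month on energy, less than a third of what it would pay at a more typical rate, which is essentially the entire margin story for a mining operation.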
Yahoo and Microsoft each acquired land in Quincy in 2007, a time when their site selection efforts were focused on finding optimal locations for Internet-scale investments in land, power, and servers.
Affordable power was the key attraction for Bitcoin ASIC Hosting, which was launched in 2012 by Seattle-area bitcoin entrepreneurs Allen Oh and Lauren Miehe. They got into the cryptocurrency game by running mining rigs in their homes and garages, but saw an opportunity to build a business offering data center space optimized for virtual currency miners.
“It was amazing to learn that we lived an hour from the place with the cheapest power prices in the country,” said Oh. “We felt like we’d won the lottery. We were among the first ones to go out there and lease space. We’ve helped cultivate some positive experience with the PUD (public utility district) and contractors. Being there and shaking hands is important.”
Location as ‘business opportunity’
Dave Carlson, who runs MegaBigPower, is another Seattle-based entrepreneur who has focused his operations in Central Washington.
“Someone in the industry tipped me off to the extremely cheap power and real estate in Central Washington,” said Carlson. “That was all it took for me to realize I had a significant business opportunity. But it wasn’t easy at all. The difficulty at the time was that if anyone had heard anything about bitcoin, it was probably only bad things. I really had to work hard to find a landlord to make the system work.”
Carlson set up a mining facility in a warehouse near Wenatchee in July of 2013. His operation has grown to 5,000 square feet and more than 3 megawatts of power. There’s plenty more in the pipeline, as MegaBigPower is preparing several new facilities in its Central Washington operations, including a 23 megawatt hashing center and another site that could support 12 megawatts.
Carlson also quickly learned the importance of strong relationships with the local utility. “It became fairly capital-intensive to build out the power,” he said. “I had to work with the local PUD to bring in as much power as they could manage. It was a fresh build.” The expansion required new transmission lines, new poles, two large transformers, and trenches to the building.
Carlson said the utility districts were familiar with data centers, but bitcoin was new territory. “They have realized that it’s for real.”
“The PUDs get a ton of calls,” said Oh. “Last year it was once a week. Now they’re being flooded with calls seeking detailed power information. I think they’re a little more reserved about who they talk with, given the low hit rate (of actual customers).”
Hashing center or data center?
The bitcoin network infrastructure is split between data centers and no-frills hashing centers featuring high-density hardware and low-reliability power infrastructure, often housed in former warehouses. Both types of facilities can be found in Central Washington.
MegaBigPower follows the hashing center design model. Carlson has built out his mining rigs in former warehouses, using commercial shelving and keeping his mining gear cool with large fans that were originally used in the dairy industry. “The only thing we borrowed from data centers was the power distribution and using rack PDUs,” said Carlson.
Bitcoin ASIC Hosting goes with a traditional data center approach, using raised floors, with UPS and generators for backup power. “We want to scale up and provide capacity, but we also want to operate at a standard where we could pivot to hosting other kinds of hardware,” said Oh. “Just like the bitcoin market, the colo market is going to be volatile.”
While other bitcoin firms have expressed interest in expanding in Central Washington to tap its cheap hydro power, Oh said being an early follower of the cloud pioneers has positioned his company for success.
“Word is getting out, and it’s raising the profile of this area,” said Oh. “It helps to have connections. Local officials really do appreciate those who are serious about it.”
Siaras Launches WAN-as-a-Service for Interconnecting Clouds
Siaras is a new company focused on integrating clouds with Wide Area Networks (WANs). Its cloudScape platform is targeted at service providers, enabling them to sell WAN on demand as a service — in similar fashion to cloud services.
Enterprises are increasingly leveraging multiple clouds in their deployments. The problem is that these clouds don’t talk to one another easily, making WAN provisioning a bigger task. Applications are driving WAN requirements, but WAN isn’t as flexible as cloud, which makes it a bottleneck for multi-cloud apps.
Cloud is available on demand while WAN often takes longer to set up and requires long-term contracts. Siaras believes that network service providers are in the position to address this by offering what it calls “WAN-aware cloud.” cloudScape helps service providers orchestrate the WAN between clouds.
cloudScape is a proprietary platform built on OpenStack that extends OpenStack’s network control capabilities. The platform helps a service provider create a logical network from data center to data center, including clouds like Amazon Web Services.
Siaras CEO Vivek Ragavan said enterprises face two radically dissimilar worlds with cloud and WAN. “The service provider is in position to understand both WAN and cloud and resolve the discrepancy,” he said.
The WAN-aware cloud sits atop other clouds and acts as what CTO and co-founder Sig Luft calls an “orchestrator of orchestrators.”
The two current approaches enterprises use with cloud are either trusting the public Internet or setting up direct links to popular public clouds like AWS, Azure, and Google Cloud Platform through colocation providers.
“There are several cloud orchestration platforms, but WAN orchestration, which has been around longer, is less advanced,” said Luft. “These services haven’t been sold on demand yet.”
Siaras believes this is a differentiator and an opportunity for service providers to lift otherwise flat network revenue streams. The company also hopes to extend OpenStack’s ability to accommodate intercloud WANs more generally and has been a contributor to the networking elements of the open source project.
In cloudScape, endpoints determine how constrained WAN resources are shared. The platform can talk to other systems that don’t deal with the WAN and can interconnect them through similar or different clouds.
“The discontinuity between WAN and cloud throws open opportunity for network service providers,” said Ragavan. “In the first mile, you have to have a relationship with somebody. Network service providers are best suited to help enterprise adopt cloud faster. Siaras means network service providers can deliver integrated multi-cloud solutions.”
Data Portability: Shortcomings of Containers and PaaS
Luke Marsden is the founder and CEO of ClusterHQ. Prior to this, he co-founded Digital Circus and was an engineer at TweetDeck Inc.
Support for portable and resilient data volumes is a missing piece of the puzzle for Docker, and a significant challenge for PaaS offerings as a whole. Why should the heart of most applications live outside the platform?
By bringing data volumes back into the center of the application architecture, more and more workloads will be able to take advantage of the portability Linux containers provide.
To do this a production-grade solution would need to do the following:
- Enable applications and their data services to automatically scale (with disk, network, CPU load, etc.)
- Eliminate single points of failure in our architectures, even where those single points of failure are whole data centers or cloud providers
- Reduce total cost of ownership and reduce the complexity of the platforms we’re deploying/building so we can manage them easily
Hidden inside these requirements are some really hard computer science problems. Current PaaS platforms and container frameworks don’t handle these requirements very well. In particular, the current approach to both PaaS and containers is to punt the hard problem of data management out to “services,” which either ends up tying us into a specific cloud provider, or forces the data services to get managed orthogonally to the stateless application tier in an old-fashioned way. What’s more, having different data services in test, dev and production means that we violate the principle that we’re working hard to establish: real consistency of execution environments across test, dev and production.
In practice, the application (as in a distributed application, across multiple containers, across multiple hosts) should include its dependent data services, because they are an integral part of their execution environment. The status quo — with data services in one silo and the scalable app tier in another — is a radically sub-optimal solution.
It’s possible to develop something more like Google’s data management layer (which includes the concept of a number of replicas), and we believe that capability should become embedded firmly within our container platforms (“orchestration frameworks”) in order to capture both the stateful data tier and the stateless app tiers in our applications.
What we have vs. what we need
[Figure: in green, what Docker can capture today versus what we believe the entire application should consist of; in red, the parts that still need to be captured.]
Consistently managing containers: challenge < opportunity
To deliver on the promise of infrastructure as code and portability of entire applications, we need a way of safely and consistently managing the stateful as well as the stateless components of our apps in an ops-friendly way, across dev, staging and production environments. Managing stateful and stateless containers in a consistent way simplifies operations by unifying what would otherwise be at least two systems into one.
In a cloud infrastructure world it’s necessary to do this with unreliable instances and effectively ephemeral “local” storage. EBS volumes have a failure rate which, at scale, you have to plan for. Shared storage, e.g. NFS, doesn’t work well in the cloud and introduces a single point of failure.
We should be able to treat our data in the same way we treat our code: cheaply tag and branch it, and pull and push it around among locations. This allows us to become more agile with our data.
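To make “treat data like code” concrete, here is a minimal sketch of how a volume manager layered on ZFS could expose tag, branch, and push operations for a data volume. The wrapper functions and dataset names are illustrative assumptions rather than ClusterHQ’s actual product API; only the underlying zfs and ssh commands are standard.

```python
# Illustrative sketch: git-style operations on a data volume, implemented
# as a thin wrapper over standard ZFS commands. The dataset names and this
# wrapper API are assumptions for the example, not a real product interface.
import subprocess

def zfs(*args):
    subprocess.run(["zfs", *args], check=True)

def tag(dataset, label):
    # "git tag" for data: an immutable point-in-time snapshot.
    zfs("snapshot", f"{dataset}@{label}")

def branch(dataset, label, new_dataset):
    # "git branch": a writable clone that shares blocks with the snapshot.
    zfs("clone", f"{dataset}@{label}", new_dataset)

def push(dataset, label, remote_host, remote_dataset):
    # "git push": stream the snapshot to a pool on another machine.
    send = subprocess.Popen(["zfs", "send", f"{dataset}@{label}"],
                            stdout=subprocess.PIPE)
    subprocess.run(["ssh", remote_host, "zfs", "recv", remote_dataset],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()

# Example: snapshot the production database volume, branch it for staging,
# and ship the snapshot to a second data center.
# tag("tank/appdata", "v1.2")
# branch("tank/appdata", "v1.2", "tank/appdata-staging")
# push("tank/appdata", "v1.2", "dc2.example.com", "tank/appdata")
```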
Managing containers with persistent state is a significant problem, and one that must be solved for Docker to scale and the full promise of containerization to be realized. Containerized databases are not suitable for production workloads without solutions for data migration, cloning and failover. Developing these solutions presents some of the greatest challenges, and opportunities, in the Docker ecosystem and the PaaS market today.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.
Teradata Intros Massive Scale Big Data Analytics Platform
Teradata has introduced Connection Analytics, a data-driven tool designed to perform against disparate data sets at massive scale. Powered by the Teradata Aster Discovery Platform, the new tool extends advanced analytics to business analysts, letting them look at contextual relationships between people, products, and processes in new ways, the company said.
The legacy data warehouse company has shown it is out to embrace all forms and sources of data, noting that Connection Analytics is also built on MapReduce, graph engines, and more than 100 pre-built algorithms. Teradata Labs president Scott Gnau said the new tool “delivers the advanced analytics data scientists demand and the usability to extend this capability to business analysts across the enterprise.”
Examples of how Connection Analytics can be used include pinpointing influencers in a social media campaign, limiting customer churn, managing cyber threats, and detecting fraud.
Connection Analytics is available immediately as a component of the Teradata Aster Discovery Platform within the vendor’s Unified Data Architecture.
Teradata also introduced a new data fabric enabled by its QueryGrid, which joins analytics, diverse data repositories, and disparate systems to build an orchestrated analytical ecosystem. With the average organization having a myriad of file systems, operating systems, data types, and analytic engines to choose from, the data fabric empowers self-service analytics, fusing those resources together and synthesizing query execution across them.
APIs are Connection Points for Everything in the Cloud
At the center of cloud growth there is one very specific technology (or rather platform) that has been shaping how we communicate over the cloud. The need to enhance the cloud experience and achieve cross-cloud compatibility has helped push the cloud API (Application Programming Interface) model forward. The ability to cross-connect various applications – and even physical platforms – is an important need for many different industries across a number of verticals.
Right now, there are four major areas where cloud computing will need to integrate with another platform (or even another cloud provider):
- PaaS APIs (Service-level): This means integration with databases, messaging systems, portals, and even storage components.
- SaaS APIs (Application-level): CRM and ERP applications are examples of where application APIs can be used to create a cloud application extension for your environment.
- IaaS APIs (Infrastructure-level): The rapid provisioning or de-provisioning of cloud resources is something that an infrastructure API can help with. Furthermore, network configurations and workload (VM) management can also be an area where these APIs are used.
- Cloud provider and cross-platform APIs: Ultimately, this is the really interesting model. Many organizations already don’t use only one cloud provider or even platform. More providers are offering generic HTTP and HTTPS API integration to allow their customers greater cloud versatility. Furthermore, cross-platform APIs allow cloud tenants the ability to access resources not just from their primary cloud provider, but from others as well. Plus, why not have the ability to deliver public cloud features to an organization which only wants to stay private? (A minimal sketch of such an HTTP-level call follows this list.)
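As a concrete illustration of that generic HTTPS integration at the infrastructure level, the sketch below lists workload instances (VMs) through the OpenStack Compute API. The endpoint URL and token are placeholders; in a real deployment the token would come from the identity service.

```python
# Minimal IaaS-level API call: list servers via the OpenStack Compute API.
# The endpoint and token are placeholders, not a real deployment.
import requests

COMPUTE_ENDPOINT = "https://cloud.example.com:8774/v2.1"   # placeholder
AUTH_TOKEN = "replace-with-a-real-keystone-token"          # placeholder

def list_servers():
    """Return the names of the workload instances in the current project."""
    resp = requests.get(
        f"{COMPUTE_ENDPOINT}/servers",
        headers={"X-Auth-Token": AUTH_TOKEN, "Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [server["name"] for server in resp.json()["servers"]]

if __name__ == "__main__":
    print(list_servers())
```

The pattern — an authenticated HTTP GET returning JSON — is what makes cross-platform tooling possible: swap the endpoint and the token and the calling code stays the same.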
Who’s in the API race and what are they doing?
There are a lot of players in the API arena, many of them trying to create better ways to connect into the cloud model. More than ever before, major models are shaping how organizations interact with cloud resources. Platforms like CloudStack and OpenStack are creating open source infrastructure to enhance connectivity. API players already in the field include:
- CloudStack
- OpenStack API
- Nimbus
- Google Compute Engine
- Simple Cloud
- VMware vCloud API
- And lots of others…
There’s one more important API model, which is doing something that we’re going to see a lot more of in the near future. It’s the idea of cloud agnosticism. Or, if your environment wants to deploy a private cloud with the full functionality of a public cloud platform — it should have the ability to do so!
The Eucalyptus cloud platform, with its Amazon Web Services-compatible API (as of version 3.3), is doing just that. In fact, according to a recent Data Center Knowledge article, Eucalyptus 3.3 is also the first private cloud platform to support Netflix’s open source tools (including Chaos Monkey, Asgard, and Edda) through its API fidelity with AWS.
According to Eucalyptus, the new platform includes three pretty major features which help keep any cloud model very agile and robust:
- Auto-scaling – The ability to scale Eucalyptus cloud resources up or down in order to maintain performance and meet SLAs.
- Elastic load balancing – The ability to distribute incoming application traffic and service calls across multiple Eucalyptus workload instances.
- CloudWatch – A monitoring tool similar to Amazon CloudWatch that monitors resources and applications on Eucalyptus clouds.
Basically, users are able to run applications in their existing data centers that are compatible with a variety of Amazon Web Services (EC2 and S3, for example). What does this mean? The future of cloud connectivity will revolve around the direct ability to interface with a variety of cloud resources. Already, we’re able to deploy public cloud solutions within a private cloud setting, and this type of cloud evolution will only continue as more cloud-based services are developed and deployed.
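A hedged sketch of what that compatibility looks like in practice: a standard EC2 client is simply pointed at a private, AWS-compatible endpoint instead of Amazon’s own. The endpoint URL and credentials below are placeholders, and the boto3 library used here post-dates the Eucalyptus 3.3 era, so read it as a present-day illustration of the idea rather than the exact setup described above.

```python
# Run the same EC2 API calls against a private, AWS-compatible cloud by
# overriding the endpoint. Endpoint URL and credentials are placeholders.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://compute.private-cloud.example.com:8773/",  # placeholder
    aws_access_key_id="EXAMPLE-ACCESS-KEY",
    aws_secret_access_key="example-secret-key",
    region_name="us-east-1",
)

# The same DescribeInstances call an application would make against AWS.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```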
Gold Data Centers Buys Las Vegas Facility With a Big Tenant
Gold Data Centers has acquired a 24,000-square-foot data center in Las Vegas. The facility is leased to a publicly traded $23 billion company and acts as its primary data center. Terms of the deal were not disclosed.
Gold Data Centers purchased the facility due to the stability and strength of the tenant and the build itself. The data center has access to 20-plus telecom fiber carriers and dual power feeds from different substations. The buyer also noted recent activity in the Las Vegas market as a key part of its rationale.
The acquisition is another instance of a sale-leaseback, a transaction that’s become common nowadays as enterprises look to shed facility operations responsibilities and free up capital. The company that buys the building in such transactions gets a property with built-in ability to generate revenue.
“Recently, there has been a lot of activity in the Las Vegas market with ViaWest’s Lone Mountain Data Center, Cobalt’s data center and the ever expanding Switch SuperNAP facilities,” Bill Minkle, CEO of Gold Data Centers, said. “We feel that all the energy and activity in this market enhances the value of these technical facilities and is a great fit for Gold Data Centers. We will continue to look for opportunities in this space all over North America and possibly overseas.”
The Las Vegas market has a favorable tax environment and a small chance of natural disasters. It has become home to several “showcase” facilities for companies like the ones Minkle mentioned.
One of them, Switch, recently landed CenturyLink and Shutterfly as customers.
In June, Gold sold a 30,000-square-foot data center it was developing in Sacramento to an undisclosed Internet and networking company. It was initially intended as a multi-tenant facility with four data halls.
Sacramento was chosen for the development due to its growing popularity for those looking for seismically stable space in Northern California. Sacramento sits on a different tectonic plate from the Bay Area, yet remains close enough to appeal to Bay Area companies.
Red Hat and Dell Ship DevOps in a Box for Beginners
Red Hat and Dell have teamed up on a DevOps starter pack. The two companies are trying to get in on the ground floor of new DevOps companies and those starting to adopt the DevOps methodology.
The starter pack consists of Red Hat’s OpenShift Enterprise Platform-as-a-Service, Dell’s PowerEdge R420 server, and partner services.
DevOps is the combination of development and operations. Software development and IT operations integrate into a collaborative entity to streamline IT processes.
Some consider DevOps a threat to traditional outsourcing providers such as Pythian, a company that has moved into DevOps to evolve with the times. The starter pack will help partners embrace the trend, maintaining and enhancing relationships with customers.
PaaS plays a key role in DevOps as it’s a quick way to develop, host, and scale applications in a cloud environment. OpenShift has a cartridge specification that links key technologies and services into applications built on the platform.
Red Hat added a marketplace to OpenShift earlier this year to give PaaS users access to a variety of third-party offerings linking in through the specification.
The DevOps starter pack will be sold through Red Hat channel partners. The first partner is Vizuri.
“OpenShift Enterprise can help organizations to bridge the knowledge gap that accompanies initial setup and configuration of DevOps solutions,” Joe Dickman, senior vice president of Vizuri, said. “The DevOps Solution Starter Pack was created to alleviate these concerns and enable IT organizations to be more agile and responsive as they chart their path to the cloud while converging infrastructure and making modernization faster, easier, and with less risk.”
Cisco: Data Center Traffic Will Triple by 2018
Cisco has published the 2014 installment of its annual Global Cloud Index. The index projects data center traffic to nearly triple over the next five years on the back of cloud growth, reaching 8.6 zettabytes annually by 2018.
Traffic will grow with a compound annual growth rate (CAGR) of 23 percent, the report forecasted. This traffic is equivalent to streaming all the movies (around half a million) and TV shows (3 million) ever made 250,000 times. The number counts traffic from data center to user and data center to data center.
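A quick sanity check on those headline figures (the 2013 baseline below is inferred from the stated numbers, not quoted from the report):

```python
# 23 percent CAGR compounded over the five years from 2013 to 2018.
growth = 1.23 ** 5
print(f"growth factor over five years: {growth:.2f}x")      # ~2.82x, i.e. "nearly triple"
print(f"implied 2013 baseline: {8.6 / growth:.1f} ZB/year")  # ~3.1 zettabytes
```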
The cloud piece of the traffic pie is up significantly (9 percent) from last year’s study, which projected out to 2017. The 2012 results predicted similar trends, including growth of consumer cloud traffic and cloud representing two-thirds of data center traffic by 2016.
Growing Internet access around the world and consumer cloud usage (personal cloud storage) also significantly contribute to this growth. The United Nations predicts the world’s population will be 7.6 billion people in 2018. Cisco predicts half the population will have residential Internet access and half of those users will use personal cloud.
More countries will be “cloud ready,” according to the study. The number of countries whose fixed networks met the criteria to support at least a single advanced application jumped from 79 in 2013 to 109 in 2014.
A recent report by International Data Corporation predicts public cloud revenue to reach more than $127 billion in the same time frame. Cisco’s annual overall traffic growth is in the same ballpark as IDC’s annual cloud growth rate at 23 percent.
Consumer cloud storage is expected to grow significantly, with the average user contributing over 800 megabytes of traffic monthly, about five times the 2013 numbers.
Both public and private cloud usage will grow significantly. “When people discuss cloud, they often focus on public cloud services or public cloud storage services,” said Kelly Ahuja, senior vice president, service provider business at Cisco. “However, a very significant majority of today’s cloud workloads are actually processed in private cloud environments. Even with public cloud workloads having significant growth, by 2018, almost 70% of cloud workloads will still be private cloud-related, requiring the ability of workloads to bridge across a hybrid private/public cloud environment.”
[Figure: the split between public and private cloud workloads]
The countries with the leading fixed network performance in 2014 are (in alphabetical order) Hong Kong, Japan, Korea, Luxembourg, the Netherlands, Romania, Singapore, Sweden, Switzerland and Taiwan.
The countries with the leading mobile network performance in 2014 are (in alphabetical order) Australia, Belgium, China, Denmark, Korea, Luxembourg, New Zealand, Oman, Qatar and Uruguay.