Data Center Knowledge | News and analysis for the data center industry
Thursday, October 30th, 2014
12:00p | Equinix: Private Links to Cloud Now Fastest Growing Business Segment
Revenue growth from interconnection services has outpaced overall revenue growth at Equinix, and providing secure private access to public clouds, such as Amazon Web Services, Microsoft Azure, or IBM SoftLayer, is the fastest-growing segment of the company’s interconnection business.
Even as it reported rapid growth in interconnection services revenue and progress in building out a cloud provider ecosystem in its data centers, the company posted a year-over-year drop in earnings per share, missing analysts’ EPS expectations for the third quarter.
Equinix built its global data center empire by enticing companies to connect to each other’s networks inside its facilities, and its executives now hope that the next stage of growth will come from interconnecting clouds. “The emergence of the cloud ecosystem represents a transformational opportunity,” Equinix CEO Stephen Smith said on the company’s third-quarter earnings call Wednesday afternoon.
Chasing the promise of enterprise cloud
AWS, and later Azure, grew up on individual developers or startup entrepreneurs using their credit cards to stand up cloud compute environments for their applications. Public cloud services gave these developers the ability to deploy big applications without having to buy servers and lease space to house them in data centers like the ones Equinix has built around the world.
Enterprises, however – companies with the budgets not only to lease colo space but to build and operate their own data centers – have been harder to sell on public cloud services. They like the flexibility and the pay-for-what-you-use billing model, but they are wary of connecting servers that often hold their crown jewels to Amazon or Microsoft data centers via the public Internet.
Private cloud connection services like AWS Direct Connect or Azure ExpressRoute were designed to address this problem. Through them, colocation providers like Equinix, CoreSite, TelecityGroup, and Datapipe, among others, can link their enterprise customers’ servers to the cloud data centers privately, bypassing the Internet altogether.
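For a sense of what the customer-facing side of such a service looks like, here is a minimal sketch using the AWS SDK for Python (boto3) to request a Direct Connect port and provision a private virtual interface over it. The location code, VLAN, ASN, and gateway ID are hypothetical placeholders, and the physical cross-connect itself is completed by the colo provider.

```python
# A minimal sketch of ordering an AWS Direct Connect port and a private
# virtual interface with boto3. The location code, VLAN, ASN, and virtual
# private gateway ID below are hypothetical placeholders.
import boto3

dx = boto3.client("directconnect", region_name="eu-central-1")

# Request a dedicated 1 Gbps port at a colocation facility; the colo
# provider then completes the physical cross-connect to AWS.
connection = dx.create_connection(
    location="EqFA5",   # hypothetical Direct Connect location code
    bandwidth="1Gbps",
    connectionName="colo-to-aws-private-link",
)

# Once the port is up, attach a private virtual interface that peers the
# enterprise's routers with a VPC, bypassing the public Internet.
vif = dx.create_private_virtual_interface(
    connectionId=connection["connectionId"],
    newPrivateVirtualInterface={
        "virtualInterfaceName": "enterprise-vpc-vif",
        "vlan": 101,
        "asn": 65000,  # customer-side BGP ASN
        "virtualGatewayId": "vgw-0123456789abcdef0",  # hypothetical
    },
)
print(vif["virtualInterfaceState"])
```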
In addition to colos, the cloud providers also partner with network carriers, which greatly expands the number of data centers around the world that can connect customers to the public clouds privately.
The relationship between colo providers and cloud companies is mutually beneficial: data centers become more attractive as gateways to the cloud, and cloud providers gain access to the colos’ customer bases. Companies in both camps have been racing to grow the number of partner relationships and interconnection locations.
Almost immediately after Amazon announced its new data center in the Frankfurt metro, for example, Equinix followed with an announcement that it would offer AWS Direct Connect services to the facility from its data centers in the area.
Microsoft followed its announcement of two new Azure data centers in Australia with an announcement that Equinix and Telstra, the country’s largest telco, would provide the ExpressRoute cloud connection services to the new facilities.
Revenue up, EPS down
The company reported $620.4 million in revenue for the quarter – up 14 percent year over year. Its net income was $42.8 million, and earnings per share were $0.79 – down 4 percent year over year.
Interconnection services contributed 16 percent of the total revenue Equinix reported for the third quarter, although it did not break out how much of that revenue came from private connectivity to cloud providers. Colocation revenue contributed about 80 percent.
Customers in the network services business contributed more than a quarter of the revenue, matched only by the share contributed by cloud and IT services companies.
Equinix’s third-largest customer segment is financial services (20 percent of revenue in Q3), followed by content and digital media (16 percent), and enterprise customers (11 percent).
4:42p | Microsoft’s Second-Gen Open Compute Server Design Now Open Source
Microsoft has released into the public domain the second-generation server design specification created to support all of its 200-plus cloud services.
This is the second server spec the company has contributed to the Open Compute Project, a Facebook-led initiative to bring the ethos of open source software development to hardware and data center design. Microsoft became the second data center operator to open source its server design specs (the first was Facebook) in January, when it joined OCP and announced its first OCP server.
That announcement was part of a complete overhaul of the company’s server strategy: it moved from letting every product team choose the kind of server that would support its service to a uniform approach, in which all services are supported by OCP machines.
High-volume buyers circumvent middlemen
Web-scale data center operators like Facebook, Microsoft, and Google design their own servers based on their needs and go directly to the same manufacturers in Asia that hardware market incumbents (the likes of HP or Dell) use. The volume of boxes they order gives them the power to bargain for low prices, which has created a whole new business segment for those manufacturers.
Facebook now has OCP designs for servers, storage, top-of-rack network switches, as well as data center racks.
An ecosystem of vendors has grown around OCP, where companies now compete not only for the business of the world’s Internet giants but for deals with other companies that operate large-scale data centers, such as cloud service providers and financial services companies.
Dual CPUs, FPGA support
Kushagra Vaid, general manager of server engineering at Microsoft, announced the second-gen Open CloudServer specification (OCS v2) at the Open Compute EU Summit in Paris Thursday and wrote about it in a blog post. The company has now invested more than $15 billion in its global cloud infrastructure, he said.
The new server design spec is based on the latest Intel Xeon E5-2600v3 CPUs. The dual-processor design enables 28 compute cores per blade (two 14-core chips).
For the first time, Microsoft is incorporating field programmable gate arrays (FPGAs) into its servers to make them adaptable for different workloads. In June, the company’s researchers published a paper on reconfigurable servers for large-scale data center services that featured FPGAs.
At the GigaOm Structure conference in San Francisco, Diane Bryant, general manager of Intel’s data center group, talked about the chipmaker’s plans for Xeon E5 chips that would work coherently with FPGAs. These hybrid chips, she said, would come in a single package with FPGAs and be socket-compatible with regular Xeon E5 chips.
Microsoft’s OCS v2 spec allows for integration of FPGAs but does not require it. The platform overall is highly configurable, allowing for integration of various components and add-on cards.
The spec also incorporates support for high-capacity solid-state disk storage, 40 Gigabit Ethernet networking, and high-capacity (1,600W) power supplies, among other features.
Here’s a list of the key specs:
- A dual-processor design, built on Intel Xeon E5-2600v3 (‘Haswell’) CPUs, enabling 28 cores of compute power per blade, and reflecting Microsoft’s joint engineering collaboration with Intel to develop the next generation board.
- Advanced networking for low-latency, high-bandwidth, highly virtualized environments, based on 40-gigabit Ethernet networking, with support for routable RDMA over Converged Ethernet (RoCEv2).
- Flexibility incorporated into the core design itself. This allows the integration of a variety of components and add-on cards, including FPGA accelerators, which enables customers to tune their servers for their own unique workloads.
- Low-cost, high-bandwidth, flash-based memory support, incorporating the latest form factor for M.2 flash memory. This allows OCS v2-based servers to incorporate higher-capacity SSDs, while ensuring TCO optimization by virtue of using cost-optimized NAND.
- A compact, high-capacity power supply, capable of delivering 1600 watts of power, with a high holdup time of 20 milliseconds.
- Support for high memory configurations, along with flexibility in the amount of memory deployed, by virtue of support for 128GB, 192GB, and 256GB memory capacity configurations.
Microsoft is showcasing OCS v2 servers designed by Quanta QCT, Wiwynn, and ZT Systems at this week’s summit in Paris. The company listed Intel, Mellanox, Seagate, Geist and Hyve Solutions as component vendors that support the spec.
5:10p | Cloudyn’s Cloud Cost Management Tool Works Across AWS, Rackspace, Google
Cloud cost management is a big deal. Spending spread across several business units and several clouds running several types of workloads is hard to pin down and allocate. Cloud monitoring and optimization provider Cloudyn has enhanced its capabilities with a multi-platform cost allocation tool to address this problem. The tool, announced this week, is designed to help allocate and rein in cloud spending.
The difference between its Cost Allocation 360 and other cloud cost management tools, said Cloudyn, is that it takes more than developers into consideration. Accounting can use it to figure out true costs, and the CFO is better able to allocate spending to units. Security groups get a handle on what’s going on with automated security policies, and developers can use it to eliminate a lot of the manual labor associated with properly tagging and categorizing usage. It integrates and analyzes costs across all cloud platforms, including Amazon Web Services, Rackspace, and Google Compute Engine.
Cloudyn recently raised a $4 million round. CEO Sharon Wagner said the company has a technological edge in its ability to work across multiple clouds and strength in hybrid cloud. The company has mentioned before that it would introduce more granular cost visibility, down to the cost of delivering the application itself, and the new tool is a big step in that direction.
The tool calculates all cloud-related costs, including miscategorized, shared, and untaggable resources. Cloudyn said that half of cloud resources in the enterprise are uncategorized or are improperly tagged or allocated. This means that many businesses don’t understand the true cost of their cloud activities and have limited information for making decisions going forward.
While the system automates the process as much as it can, calculating what some workloads truly cost still requires some work. Better categorization and tagging is a start.
Cost Allocation 360 supports business rules for handling complex deployments and shared resources, and helps the enterprise get a grip on “shadow IT” costs and uncategorized cloud consumption. Finance teams will be able to associate costs with specific business units, departments, and regions, getting an idea of where the money is going and how much.
Smarter tagging and cost modeling are the two big draws. Enhanced cost modeling for previously untaggable resources (e.g. DynamoDB, Data Transfer, Reserved Instances) lets businesses apply rules to attribute costs.
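To make the tagging problem concrete, here is a minimal sketch, using the AWS SDK for Python (boto3), of the kind of hygiene such tools automate: finding EC2 instances that lack a cost-center tag and bucketing them under a catch-all value. The tag key and default value are hypothetical, not Cloudyn’s actual schema.

```python
# A minimal sketch of the tagging hygiene cost-allocation tools automate:
# find EC2 instances lacking a "CostCenter" tag and apply a default so
# their spend can be bucketed. Tag names and values are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

untagged = []
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "CostCenter" not in tags:
                untagged.append(instance["InstanceId"])

if untagged:
    # Bucket unattributed instances under a catch-all cost center so the
    # finance team sees them as a line item instead of a blind spot.
    ec2.create_tags(
        Resources=untagged,
        Tags=[{"Key": "CostCenter", "Value": "unallocated"}],
    )
print(f"Tagged {len(untagged)} previously unattributed instances")
```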
The majority of the company’s customers use AWS. Wagner said that 8 percent of AWS workloads are monitored by Cloudyn.
Competitors include RightScale, with its Cost Calculator, and Cloudability. TSO Logic is arguably another competitor, though it centers on energy consumption to deliver cost per transaction, user and other metrics. The cloud providers themselves are arguably competitors as well, as they continue to provide more granular insight into billing. However, their competitive positioning ends at their cloud boundaries, while companies like Cloudyn build their strengths across multiple platforms.
5:20p | NetApp Acquires Riverbed’s Backup Appliance Line for $80M
NetApp announced it has acquired Riverbed’s SteelStore line of storage backup appliances for about $80 million. The move adds a cloud-integrated storage product line for NetApp and rids Riverbed of a non-core offering.
NetApp’s chief strategy officer Jonathan Kissane said the “product, coupled with our broad ecosystem of application and cloud service providers, will give enterprises a heterogeneous backup solution to leverage the flexibility and economics of the cloud for their backup and recovery needs.”
Formerly branded as Whitewater, the Riverbed SteelStore line integrates easily with enterprises’ existing backup software and compresses, deduplicates, encrypts, and streams data to a cloud provider of choice.
Calling the acquisition a logical extension to NetApp, Riverbed chairman and CEO Jerry Kennelly said it “allows NetApp customers to extend existing backup, archive, and disaster recovery to the cloud.”
Activist investor Elliott Management may have had some influence in the sale, as it has been pressuring Riverbed for organizational changes for almost a year.
6:00p | Learn to Boost Data Center Capacity With Public Cloud
Moving to a public cloud environment takes time and consideration. No organization should simply jump in without evaluating ROI and the pros and cons of moving to such a platform. Still, there are powerful reasons to adopt the ever-evolving technology. With denser environments, more WAN capabilities, and better cloud management, data can be delivered faster and more economically across vast distances.
With the cloud movement, organizations should take the time to see how and where their business and IT goals fit in with a public cloud environment.
Public cloud computing extends the data center
Without a doubt, one of the most powerful benefits of cloud computing is the ability to extend the existing environment beyond the current data center walls. Administrators are able to do more with less as cloud computing components have become much more affordable. Now that both unified computing and WAN-based solutions have come down in price, IT environments are quickly seeing the direct benefits that cloud computing can bring to an organization.
So how can an organization extend its data center with a public cloud architecture? Consider the following:
- Adopting the “pay-as-you-go” model. Instead of having servers simply sitting at a colocation facility, administrators can adopt a pay-as-you-go model, in which servers and VMs are provisioned only when needed (see the sketch after this list). This is great for workloads that are not in constant use.
- Using cloud resources. By “borrowing” cloud resources, IT environments don’t have to invest in their internal infrastructure. Whether it is storage, bandwidth, or virtual machines, administrators are able to use these resources as needed instead of buying them outright for an existing data center.
- Evolving disaster recovery. Public cloud environments have taken disaster recovery strategies to a whole new level. With site-to-site replication, emergency resources can be spun up via automated workflows. This can have an entire infrastructure back up and running quickly and efficiently. Many organizations are now working with public cloud providers to add that extra level of redundancy to their infrastructure.
- Applying BYOD initiatives. A big benefit of cloud computing is the ability to use almost any device to access centralized data. With a public cloud environment, an organization can provision servers that specifically handle and deliver workloads for a BYOD initiative. Applications and even desktops can be pushed down to the endpoint from a public cloud environment.
- Creating distributed data centers. Having data in multiple locations not only creates a point of high availability – it also helps with accessibility. Users close to a data center will be able to access their information quickly, with fewer hops in between. Moreover, this type of environment creates data and application redundancy: if any piece of an environment fails, administrators can redirect traffic to a different cloud-based data center and continue operations.
- Evolving testing and development. Public cloud environments have been a boon for administrators looking to test new infrastructure components without actually purchasing any gear. This is where the public cloud can really shine. IT environments can test applications, workloads, delivery methodologies, and a slew of other technologies without incurring local data center infrastructure costs. They use what they need and then de-provision those resources – a very efficient form of development computing.
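As referenced in the pay-as-you-go item above, here is a minimal sketch of that model using the AWS SDK for Python (boto3): a server is provisioned only for the duration of a job and terminated when it finishes, so charges stop. The AMI ID, instance type, and tag values are hypothetical placeholders.

```python
# A minimal sketch of the pay-as-you-go model: spin up a server only when a
# workload needs it, then terminate it so charges stop. The AMI ID,
# instance type, and tag values below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a single instance for a short-lived job.
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "burst-capacity"}],
    }],
)
instance_id = result["Instances"][0]["InstanceId"]

# ... run the workload ...

# De-provision when done; billing stops once the instance terminates.
ec2.terminate_instances(InstanceIds=[instance_id])
```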
And now the interesting part: how are you using the public cloud to extend your data center? Some have begun adopting tools like ShareFile or other corporate file sharing tools. Also, consider this: deploying workloads once bound by compliance and regulation is now a lot easier under updated rules.
E-commerce in the cloud, for example, has always been a bit of a challenge, because passing sensitive information caused serious issues for cloud providers. So providers like Rackspace decided to get creative. By intelligently controlling data flows between the cloud, the organization’s servers, and the payment gateway, you can continuously control the flow of sensitive information. According to Rackspace, when you host your infrastructure in its cloud, you can also sign up with a separate payment processor to provide tokenization, which replaces credit card data with meaningless numbers or “tokens.” When you accept a payment, non-PCI data is routed to your Rackspace-hosted environment, while the tokenized credit card data is routed to your payment processor. Since your customers’ credit card data is not routed to your Rackspace-hosted infrastructure – only to the payment processor – your Rackspace environment stays out of the scope of your PCI requirements.
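To make the tokenization flow concrete, here is a toy sketch of the concept in Python. The token format and in-memory vault are illustrative stand-ins for what a real payment processor operates, not Rackspace’s or any processor’s actual API.

```python
# A toy illustration of payment tokenization: card numbers are swapped for
# meaningless tokens at the payment processor, so the hosted environment
# only ever stores tokens and stays outside PCI scope. The vault here is
# an in-memory dict purely for illustration; a real processor holds the
# token-to-card mapping on its own systems.
import secrets

class TokenVault:
    """Stands in for the payment processor's token service."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, card_number: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = card_number  # stored only at the processor
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

processor = TokenVault()

def accept_payment(order: dict) -> dict:
    # Route the card number to the processor; keep only the token locally.
    token = processor.tokenize(order.pop("card_number"))
    order["payment_token"] = token  # meaningless outside the vault
    return order  # safe to store in the cloud-hosted environment

stored = accept_payment({"order_id": 1001, "card_number": "4111111111111111"})
print(stored)  # {'order_id': 1001, 'payment_token': 'tok_...'}
```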
The point is that cloud computing, through APIs, connection points, and software-defined technologies, has changed the way we replicate and control data. Public cloud technologies aren’t perfect, but they create a powerful new architecture that can improve the elasticity of your data center and business. If your organization needs to boost capacity for a new location, more users, or to support your business, public cloud platforms can help.
7:00p | Cloud Price Cuts Leave Customers Confused, Annoyed: Peer 1 Report
This article originally appeared at The WHIR
Half of IT decision makers in the US and even more in the UK say the price cuts of the so-called cloud price war are “not to be trusted,” “confusing,” or “annoying,” according to a survey released this week by Peer 1. While some respondents said that price cuts could motivate them to change service providers, the majority are not willing to compromise on basic service elements like security and customer service.
The study is composed of answers from 550 IT decision makers, split nearly evenly between the U.S. and UK, and echoes a persistent pushback against what is sometimes called a “race to the bottom.”
Twenty-three percent of respondents said that they are “highly” or “somewhat” likely to consider changing hosting providers as the result of a price cut. Conversely, 43 percent said that a price cut is “unlikely” or “extremely unlikely” to sway them to a new provider.
Most respondents are unwilling to accept a lower level of service in return for a lower price, with 80 percent saying they would not compromise on security for a lower price, over half unwilling to compromise on speed, and over 90 percent unwilling to compromise on customer service.
“In the age of heightened cyber threats, the boom in eCommerce and also the storing and dissemination of data by thousands of organizations, the role of hosting in many businesses has never been more important,” said Toby Owen, VP product at Peer 1. “While cheaper hosting has an appeal for some who are motivated by price, the research shows that speed, service and security are top priority for IT decision makers who are not willing to compromise on their Web hosting business.”
The results indicate that customers agree with Peak CEO Luke Norris that “price is just a weapon” and that meeting the specific service requirements of customers is more beneficial to them than a low sticker price.
Still, IaaS price cuts march on, with ProfitBricks making deep price cuts last week.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/cloud-price-cuts-leave-customers-confused-annoyed-peer-1-report
7:30p | Canadian Execs Want Benefits of Cloud, but Don’t Really Know What Cloud Is: Microsoft Report
This article originally appeared at The WHIR
Most Canadian C-level executives are not familiar with cloud computing, and even half of those who are do not know what it really means, according to a survey released this week by Microsoft Canada. Ninety percent said they are not familiar with what cloud computing means, and only 45 percent of the 10 percent who are familiar with it chose the correct definition from multiple choices.
The survey was conducted by Northstar for Microsoft, and includes responses from 476 C-level executives for private sector companies in the financial and professional services, retail, oil and gas, construction and telecommunications industries.
The result is particularly shocking when the range of industries is considered. Construction and telecommunications companies have not only diverse uses for the cloud, but presumably different levels of familiarity among employees, and major Canadian telecom Rogers is also a cloud services vendor.
“I think the findings reveal a disconnect between what the cloud really is, what it offers, and how it is perceived by Canada’s C-suite decision-makers,” said Microsoft Canada president Janet Kennedy. “To many of them, especially those in smaller businesses, exactly what the cloud is remains unknown but the bottom line benefits are highly valued – bigger profits, better service, lower costs and a more satisfied customer base.”
Security is the main concern cited, with 65 percent saying they do not feel secure sharing business information and data with a cloud provider, and even more (72 percent) fearing for the security of confidential strategic plans. Forty-five percent said their organization’s data would be “unsafe” in the cloud.
“This lack of awareness about cloud-based benefits in general, coupled with persistent concerns about data security should be cause for concern because they are holding Canadian businesses back,” Kennedy said.
Security concern has historically been the greatest barrier to cloud adoption, particularly among SMBs. This is despite a trend among IT decision makers towards the position that data is as secure in the cloud as on-premises, as shown in a 2012 Microsoft report.
Since then, incidents which executives are likely to have read about in the media have influenced general public perception of cloud security, fairly or not.
Almost half of executives both from small businesses (45 percent) and overall (43 percent) believe the cloud benefits only large organizations. Accordingly, 61 percent of small business executives are not involved in or discussing cloud adoption, whereas only 26 percent of medium and large business executives are not.
This stark contrast clearly points out a path to customer growth for cloud service providers, at least in Canada, if not globally: raise confidence in cloud security and awareness of what cloud can do for businesses, particularly small ones.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/canadian-execs-want-benefits-cloud-dont-really-know-cloud-microsoft-report
9:31p | CoreSite Doubles Down on Expansion in Core Markets
CoreSite executives said on the company’s third-quarter earnings call Thursday that the company is focused on maximizing the value of its existing assets around the U.S. rather than expanding into new markets.
CoreSite is one of the largest providers of data center services in the U.S. Its business model combines leasing wholesale data center space, the way the likes of Digital Realty Trust and DuPont Fabros Technology do, with retail colocation focused on network interconnection services, the model of its big competitor Equinix.
The company reported about $70.5 million in revenue for the quarter — up from the $60.6 million it reported for Q3 last year. Net income was about $3.1 million, and earnings per share were $0.14 — flat year over year.
CoreSite is doubling down on growth in Tier I markets where it already has a presence. “We’re investing in depth rather than breadth,” said CFO Jeff Finnin on the call. “No need to get into new markets. It’s nice over time, but no pressure.” The company said it can double the amount of sold net rentable space on the land it already owns.
Growth in edge markets is not independent of core metros
While there is a growing number of edge data centers, and lots of players are building in Tier II markets, CoreSite believes growth there will depend on healthy growth in the core data center markets.
The company is adding about 130,000 square feet in five markets by the end of next year. Boston is in pre-construction for 15,000 square feet and there’s room for another 70,000 square feet of space. Denver, Chicago and New York expansions are coming next year, and a second facility in Virginia is expected to open soon.
Expansion costs about $4 million to $4.5 million per megawatt in its current markets (a 6MW build-out, for example, would run roughly $24 million to $27 million). The cost is low because the company is focused on expanding in existing markets, where it has already built out shells and some of the core infrastructure.
Healthy demand across core markets
CoreSite, a real estate investment trust, saw healthy leasing activity during the quarter, which is a continuation of the trend major REITs reported in Q2.
A significant chunk of the nearly 70 leases the company signed in the quarter were with network and cloud providers, viewed as especially valuable customers because they boost the data center’s connectivity and cloud ecosystem. Big leases came from the digital content vertical.
Interconnection revenue was up 20 percent quarter over quarter. Interconnection revenue is also growing rapidly for Equinix, which said Wednesday that interconnection had become its fastest-growing business segment, and that within it, services providing private connectivity to public clouds were growing the fastest.
Demand in New York and Virginia has kept the abundance of available data center space in those markets from upsetting pricing. The one market where pricing is rising notably is Chicago, as many providers have indicated. A lot of space is anticipated to come online in the future, but near-term supply is limited, and the area will be inventory-constrained over the next 12-18 months.
Rent has solidified in Santa Clara, California, as sublease space has been absorbed. There was some concern over a large amount of capacity coming online quickly as the result of a tenant moving out; the “sublease vacuum” had a few providers worried. CoreSite’s statements echo those of Vantage Data Centers execs, who recently commented that pricing has stabilized following a sublease dash.