Data Center Knowledge | News and analysis for the data center industry
Tuesday, August 9th, 2016
12:00p | Hortonworks: Making Big Data Pay Off Takes Time
The biggest problem facing modern data centers with respect to software is integration — blending new methodologies and technologies with existing assets and methods. Now Hortonworks, one of the first commercial ventures to have sprung from the rise of Hadoop and “big data,” seems to be feeling the sting of data center integration as well. During the company’s quarterly earnings call last week, executives at the only publicly traded big data firm were pressed by analysts to explain a shortfall in billings for the quarter, which they framed as symptomatic of the nature of the business they’re in: driving architectural change in the data portion of the data center.
“We’re clearly seeing the market accelerate from adopting the use cases that Hadoop enables them to bring inside and into the enterprise that they’ve never been able to do before,” said Hortonworks CEO Rob Bearden [our thanks to Seeking Alpha for the transcript], “and they’re clearly in the phase of adopting Hadoop to bring and consolidate their not only traditional but the new. . . data sources.”
So the customers are certainly there, in other words — that’s not the problem. And open source, Bearden continued, helps enterprises choose Hortonworks, because customers can see directly where Hadoop architecture can be applied to their existing data warehouses and data stores. That’s not the problem, either.
But the data-in-motion market is starting to open up, Bearden told another analyst. “[I]t’s very big transformation projects that [enterprises] are enabling that are very strategic to their business models, which generates a very large market. And sometimes the sales cycles are a bit longer than more simplistic platforms. And so we’re balancing how we sell into that environment into the adjustments that we’ve made in current quarter.”
Procuring an open source big data platform is not a retail process, where the enterprise takes the shrink-wrapped box home and plugs it in. The sales cycle involves leading the enterprise through the integration process. And as Hortonworks’ VP of corporate strategy, Shaun Connolly, pointed out during the same call, other players in the market, including Hortonworks’ own partners, have managed to increase the number of steps involved in the integration process.
“Many of our customers have a [hybrid] approach to their ways of connecting their data,” said Connolly. “And so we have solutions across that gamut. In particular, we’re in the market today with [Microsoft] Azure HDInsight. We power that solution. At [Amazon AWS] Summit, we demonstrated the work that we’re doing around AWS, and we even saw Google demonstrating some of their capabilities on stage at Hadoop Summit. So there’s definitely interest there.
“From our perspective, there’s additional workloads,” he continued. “And from an enterprise class capabilities of security and governance and consistent enterprise experience, I think we’re pretty well positioned to sort of monetize both sides of that.”
The company is poised to reap the benefits from a growing variety of integration points, as Connolly characterizes it. But then there’s the danger, as CEO Bearden pointed out (to his credit, with full transparency), of attaining “too thin a territory coverage.”
On the one hand, it’s a good thing that customers feel willing to commit to multi-year relationships with Hortonworks, Bearden explained at one point. It’s just that it causes an undesirable side-effect on paper: Annual contract value (ACV) declines as you add more annuals, if you will, to the equation, leading to what he called “a longer tail, obviously, on the revenue recognition.”
So the challenge for the first wave of big data providers — including private firms such as Cloudera and MapR — is to encourage these necessary, long-term relationships with customers while finding new ways to monetize those relationships closer to the front end of the proverbial elephant. On the NASDAQ exchange Friday, Hortonworks (HDP) stock opened sharply lower and, though it recovered somewhat during the day, lost just over 22% to close at $9.84 per share.
4:29p | AWS Expands CDN CloudFront’s Edge to Canada
Brought to You by The WHIR
Amazon Web Services is adding CloudFront edge locations in Toronto and Montreal to deliver content at high speeds and low latency to the Canadian market, according to a Monday announcement. The expansion brings the AWS CDN to 59 edge locations, including a recently launched location in São Paulo, Brazil.
The Canadian locations will be priced the same as US edge locations, putting them in Price Class 100, and will support the Amazon Route 53 DNS service “in the future,” according to a blog post.
Applications using CloudFront will automatically take advantage of the new locations, and the CloudFront service is included in the AWS Free Tier for up to 2 million HTTP/HTTPS requests and 50 GB of data transfer a month.
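For illustration only, the following minimal boto3 sketch (not part of the AWS announcement) shows what restricting a distribution to Price Class 100 looks like; the bucket name, origin ID, and caller reference are hypothetical placeholders.

```python
# Hypothetical sketch: pin a CloudFront distribution to Price Class 100,
# the least expensive tier of edge locations (US, Canada, Europe).
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "example-2016-08-09",   # any unique string
        "Comment": "Price Class 100 example",
        "Enabled": True,
        "PriceClass": "PriceClass_100",            # US/Canada/Europe edges only
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "example-s3-origin",
                "DomainName": "example-bucket.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "example-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "MinTTL": 0,
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
        },
    }
)
# Requests are then served from the *.cloudfront.net hostname below.
print(response["Distribution"]["DomainName"])
```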
“As a developer, you will find CloudFront to be simple to use as well as cost-effective,” AWS Chief Evangelist Jeff Barr said in the blog post. “Because it is elastic, you don’t need to over-provision in order to handle unpredictable traffic loads.”
See also: Top Cloud Providers Made $11B on IaaS in 2015, But It’s Only the Beginning
AWS seems to be increasingly focusing on developers, with the July acquisition of cloud development environment provider Cloud9 and the launch of its IoT platform and accompanying SDK in October.
Geographic expansion, including new regions, is also a priority for the public cloud leader as it nurtures its market share lead on Azure and others, CEO Andy Jassy told an audience at the AWS Public Sector Summit in June.
This first ran at http://www.thewhir.com/web-hosting-news/aws-adds-cloudfront-locations-in-canada
5:00p | Breaking Down Cloud Infrastructure-as-a-Service Pricing
By IT Pro
Rachel Stephens at the market-research firm RedMonk has some good analysis and charts showing price differences among various cloud Infrastructure-as-a-Service providers, mapping out how pricing wars appear to be pushing service costs generally down even as providers flesh out their offerings.
Her findings also show that providers are starting to be wary of focusing simply on being the cheapest offering, with many vendors aligning closely around one price point rather than undercutting one another.
One interesting exception: Google, which far undercuts the pack in both memory and compute pricing.
There are a lot of caveats to Stephens’ data, as she notes: she compares list prices, not actual prices; true apples-to-apples comparisons between providers are impossible; and a number of non-pricing factors are completely omitted.
See also: Top Cloud Providers Made $11B on IaaS in 2015, but It’s Only the Beginning
It’s also interesting to note that HPE has stopped publishing Helion pricing, so it’s omitted from the survey entirely.
A lot of what Stephens finds reinforces what we already know of the market, but it’s interesting to see who stands out and where. It’s also a good reminder that while pricing is important, it’s not everything, particularly if that pricing is negotiable.
See also: Winners and Losers in Gartner’s Magic Quadrant for IaaS
This first ran at http://windowsitpro.com/cloud/breaking-down-cloud-infrastructure-service-pricing
5:30p | Alibaba Offers to Help Global Tech Companies Navigate China
(Bloomberg) — Alibaba is extending a hand to companies such as SAP keen on operating in China, proffering a window into a market that’s increasingly hostile to foreign technology.
China’s largest e-commerce company is aiming to help them comply with local regulations and sell their products, as it seeks new areas of growth to combat a slowing economy at home. Its new AliLaunch program makes use of its cloud computing platform and can help clients with joint ventures and marketing. Its biggest customer so far is Germany’s SAP, which will sell its Hana data software and services on Alibaba’s cloud.
Securing an influential Chinese partner has become key to cracking the domestic market. China has championed homegrown services over foreign technology, after saying last year that it would block software, servers, and computing equipment. A tightening of regulations on everything from data to content has also threatened the ability of U.S. companies to participate in China’s $465 billion market for information products.
Alibaba Cloud “is able to help its overseas technology partners comply with data security laws in the country,” Alibaba Vice President Yu Sicheng told a conference in Beijing. The company said it aims to sign up 50 partners over the next 12 months.
Alibaba is betting on internet-based computing and big data to boost growth in the next decade. The company is exploring artificial intelligence to help provide real-time commentary for basketball games and to predict traffic or public sentiment. While the cloud division contributed just 4.7 percent of revenue in the March quarter, it’s Alibaba’s fastest-growing business and a primary driver of growth over the longer term.
See also: Top Cloud Providers Made $11B on IaaS in 2015, but It’s Only the Beginning
6:18p | Survey Ship for Amazon’s Transpacific Cable Sets Sail
Last Thursday, a survey vessel set sail to chart the best route for laying a fiber-optic cable system across the Pacific Ocean floor that will carry data between data centers in the US, Australia, and New Zealand.
This is one of the early steps in constructing the system – a project that secured its last bit of the necessary financing earlier this year, when Amazon Web Services agreed to become the future cable’s fourth anchor customer.
It is the cloud giant’s first major investment in a submarine cable system. Its rivals, Microsoft and Google, have also committed substantial sums to boosting trans-oceanic network bandwidth, which is in high demand as cloud services become an increasingly global business.
Read more: Amazon’s Cloud Arm Makes Its First Big Submarine Cable Investment
The launch of the marine route survey follows a survey of landing sites for the 30Tbps, 14,000 km Hawaiki Cable, which will land in the mainland US as well as in Hawaii. Expansion to several South Pacific islands is possible, according to a statement issued by TE SubCom, the company building the cable for Hawaiki Submarine Cable LP.
Internet and cloud giants becoming major investors in submarine cable projects is a fairly recent phenomenon. The consortia financing such projects have traditionally consisted of telecommunications companies.
As their bandwidth needs grow, however, the biggest cloud companies appear to be realizing that they can cut connectivity costs by becoming members of these consortia themselves, rather than paying telcos to use the cables.
Google is a member of a consortium called FASTER, which in June launched a submarine cable system that lands in Oregon, Japan, and Taiwan. Microsoft has invested in several transatlantic and transpacific cable builds, including a joint project with Facebook to lay a cable that will link landing stations in Virginia and Spain.
The Hawaiki cable is expected to go live in 2018.
Read more: Microsoft and Facebook to Build Undersea Cable for Faster Internet
7:39p | Google Gives Facebook’s Open Rack a 48V Makeover
While data center racks designed by Facebook and Google share many common principles – a shared power source for multiple servers in the rack is one example – two aspects have been radically different: server input voltage and physical depth of the rack. Facebook’s racks are 800 mm deep, while Google’s are 660 mm; Facebook’s input voltage is 12V, while Google’s is 48V.
When Google joined Facebook’s Open Compute Project earlier this year, however, the company said it would work on an open source rack design that would fit its needs and, hopefully, become standardized enough that it would be readily available from a variety of vendors.
Last week, Google unveiled the first design document that resulted from those efforts, which it plans to submit for review to the Open Compute Project, the open source data center and hardware design effort founded by Facebook several years ago, whose members today also include Microsoft, Apple, Equinix, numerous major telcos and financial institutions, as well as nearly all major IT and data center infrastructure suppliers.
Google isn’t the only OCP member besides Facebook that has developed its own version of Open Rack. Fidelity Investments, one of OCP’s early backers, submitted a design spec for its Open Bridge Rack in 2014.
See also: How OCP Server Adoption Can Accelerate in the Enterprise
Google’s Open Rack 2.0 builds on the work Facebook and others have done for Open Rack 1.2, the latest version of the data center rack spec that’s been officially adopted by OCP. The biggest changes are of course additional specs for 48V power distribution and shallower racks to accommodate Google’s data center designs.
The latest spec uses modularity to address the fact that most companies using OCP designs probably aren’t going to switch to higher in-rack voltages and shallower racks any time soon.
The standard now includes two depth options: the original 800mm depth and the shallow derivation, which includes a shallow base rack and a modular extension for cable management and security provisions.
See also: Visual Guide to Facebook’s Open Source Data Center Hardware
It accommodates two voltage options through interchangeable bus bars, which distribute power from centralized power shelves to IT devices. A rack with 12V bus bars can be retrofitted with 48V bus bars, but the two types of bus bars aren’t connector-compatible, which prevents someone from accidentally installing 12V IT gear in a 48V rack.
While 48V power distribution is more efficient, according to Google, having both voltage options is nothing new for the company. It deploys 12V server trays into its 48V data center racks occasionally, but tries to avoid it whenever possible.
The reason providing 48V to the motherboard is more efficient is that it requires fewer conversion steps, since each step results in some energy loss. Facebook’s Open Racks are backed up by UPS systems whose output is 48V, which needs to be stepped down to 12V to accommodate its servers’ power requirements.
Google’s design feeds 48V to the motherboard, where power is stepped down individually for each component, such as CPU, memory, or disk. Urs Hölzle, senior VP of technical infrastructure at Google, said this difference makes for a 30 percent improvement in energy efficiency. The company has been using this 48V architecture for several years now.
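To make the arithmetic behind that claim concrete, the short sketch below multiplies per-stage conversion efficiencies, showing how every extra step compounds the loss. The per-stage figures are round numbers assumed purely for illustration, not values published by Google or Facebook.

```python
# Illustrative only: assumed per-stage efficiencies, not published figures.

def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency is the product of each conversion stage's efficiency."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Hypothetical 12V chain: facility AC -> 48V UPS -> 12V bus bar -> point of load
twelve_volt_chain = [0.96, 0.94, 0.93]
# Hypothetical 48V chain: facility AC -> 48V bus bar -> point of load
forty_eight_volt_chain = [0.96, 0.95]

loss_12v = 1 - chain_efficiency(twelve_volt_chain)        # ~16.1%
loss_48v = 1 - chain_efficiency(forty_eight_volt_chain)   # ~8.8%

print(f"12V distribution loss: {loss_12v:.1%}")
print(f"48V distribution loss: {loss_48v:.1%}")
# Dropping one conversion stage roughly halves the cumulative loss in this
# toy example, which is the kind of gain Google attributes to 48V delivery.
```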
Download Google’s preliminary Open Rack 2.0 design document here
See also: What Cloud and AI Do and Don’t Mean for Google’s Data Center Strategy
9:08p | Intel Buys AI Startup Nervana to Bolster Data Center Unit
(Bloomberg) — Intel, which makes chips that run more than 90 percent of the world’s servers, said it’s buying startup Nervana Systems to add software, a cloud service and future hardware in an attempt to better tune its products for artificial intelligence work.
While Intel’s Xeon processors dominate in data centers, they’re not built for the unique workloads of artificial intelligence calculations, according to Gartner analyst Martin Reynolds. Adding Nervana’s products and expertise will help it gain a foothold in a small but growing market and fend off would-be rivals such as Nvidia, if it can rapidly turn its acquisition into products.
“The market isn’t that big yet,” he said. “But it’s potentially a huge opportunity.”
Intel’s data center unit, its most-profitable and fastest-growing business, needs to find products suited to running emerging services such as voice and picture recognition. Such artificial intelligence work is expected to become a bigger portion of the activity of the servers powered by Intel’s chips.
Read more: Intel Wants to Make Machine Learning Scalable
“We believe that bringing together the Intel engineers who create the Intel Xeon and Intel Xeon Phi processors with the talented Nervana Systems’ team, we will be able to advance the industry faster than would have otherwise been possible,” Intel’s Diane Bryant, executive vice president of its data center business, said in a web posting. “We will continue to invest in leading edge technologies that complement and enhance Intel’s AI portfolio.”
San Diego-based Nervana was founded in 2014. Terms of the acquisition weren’t disclosed.
See also: Google Has Built Its Own Custom Chip for AI Servers