Data Center Knowledge | News and analysis for the data center industry

Thursday, May 19th, 2016

    12:00p
    What Cloud and AI Do and Don’t Mean for Google’s Data Center Strategy

    One of Google’s big missions this year has been to prove to the world that it is a serious player in the cloud services market, one capable of taking on Amazon Web Services. The Alphabet subsidiary has been taking big steps to show that it is “dead serious” about its cloud business, to quote Diane Greene, the VMware co-founder whom Google hired last year to lead this charge.

    Hiring Greene was one of the biggest steps. The other was Google’s commitment to make a sizable investment in expanding the data center infrastructure necessary to support a global cloud business. The company said in March it would add cloud data centers in 10 new locations around the world before the end of next year.

    One of the key people executing this expansion is Joseph Kava, a Google VP who leads the company’s data center engineering, design, and operations. This week, before the company kicked off its big annual conference, Google I/O, held in a concert arena next to its headquarters in Mountain View, California, we sat down with Kava to get a better understanding of Google’s data center strategy as its cloud business evolves and to ask him what effects, if any, the rise of the Internet of Things, machine learning, and virtual reality will have on its infrastructure.

    Here are the highlights of our conversation, edited for brevity and easier readability:

    Data Center Knowledge: Cloud computing has changed how companies think about nearly everything that has to do with data centers. What effects has it had on the data center site selection process for Google as a service provider?

    Joe Kava: We are expanding into some other regions that we didn’t have already. We announced that between now and the end of 2017, we’re going into 10 new serving regions for Google Cloud Platform. One of them that is coming up shortly is in Japan, and that is a region we didn’t have a data center in previously.

    If you look at where most of our data centers have been, our campuses are not in major metropolitan areas. They’re not in Chicago and New York; they’re in Council Bluffs, Iowa, or Pryor, Oklahoma, where you can get large pieces of land and build for long periods of time. But for the public cloud, we’re going into a lot of the major metro areas that are going to be the biggest regions for cloud, like Tokyo.

    Google’s data center in Douglas County, Georgia (Photo: Google)

    Google famously likes to keep as many engineering tasks in-house as possible, including data center operations. But company executives recently said the big expansion announced earlier this year would include both Google’s own and leased data centers. Does competing in the public cloud mean Google has to compromise on its traditional strategy of keeping data center operations in-house?

    Joe Kava speaking at Google’s GCP Next event in San Francisco in March, 2016

    It may not be cost-effective to build your own data center for a small instance in a new region. At some point, that region might be big enough to where having our own data center makes sense. It’s just a total-cost-of-ownership analysis, and the same goes for a large enterprise company. If you need a few hundred kilowatts, you wouldn’t necessarily build your own data center, because you’re going to pay a lot of money for that. It doesn’t mean we’re changing strategy.

    Read more: Google to Build and Lease Data Centers in Big Cloud Expansion

    We hear from companies specializing in edge data center markets that demand from big cloud providers in those markets is rising. How is Google thinking about the infrastructure that’s necessary to provide cloud services to users in those regions?

    We have a huge global network of points of presence, and once we can get our customers onto our network, it really doesn’t matter. We can serve them just fine from our cloud regions that we’ve already established, plus the new ones that are coming online over the next year or so. We also have a lot of edge-style data centers [primarily in colocation facilities] already that we’ve been using for caching, so our use case might be a little bit different than others’.

    Digital Realty Trust, one of the biggest data center providers, recently changed its strategy to focus on combining large wholesale-scale data centers with interconnection-rich colocation facilities. The expectation is that cloud providers will take lots of wholesale space near these interconnection points where enterprise customers can access them directly, and hopefully, the enterprises will also take space adjacent to the ecosystem. Do you see a lot of value in being in those big cloud campuses for enterprises and cloud providers?

    We all know that as people move to the public cloud, they are developing a hybrid strategy. They are still keeping some of their apps and some of their systems either on-premise or in their colo, and they’re offloading a tremendous amount of workloads to the public cloud providers. If they were in a large multi-tenant colo and each of the public cloud providers also had a cluster or something in that large colo, then I’m sure it’s very attractive from their perspective, because they have a lot of choice.

    But that’s human behavior. It’s just that comfort level. I think it generally doesn’t matter. Wherever customers are, we have enough points of presence for them to get onto our network and take good advantage of our cloud platform. It will probably take some transition time for people to get used to it.

    Read more: Digital Realty Leans on IBM, AT&T to Hook Enterprises on Hybrid Cloud

    There’s a lot of excitement currently about the Internet of Things. What implications do you think IoT has for Google’s data center strategy?

    We’ve already had the Internet of Things. They’re called smartphones. Android has over a billion registered things that are chatting with our data centers all the time. Having the next billion interconnected things doesn’t really worry me, because those devices, whether they’re your refrigerator at home, or whatever those internet-connected things are going to be, they’re generally not going to be as chatty with data centers as your smartphone is. We’ve already dealt with it.

    Artificial intelligence and machine learning have been a big focus for Google recently. What are the implications of this focus for your data center decisions, especially now that it’s become a core part of the company’s cloud services strategy?

    Google’s Tensor Processing Unit boards fit into server hard drive slots in the company’s data centers. TPU is a custom chip Google designed specifically for machine learning applications. (Photo: Google)

    Machine learning as a service offered through our cloud platform is a huge offering, and I think more and more companies are seeing the benefits of that. It’s going to be a big growing product in our portfolio, but from the infrastructure side of things, not necessarily a big change.

    There are customized hardware platforms that machine learning runs better on. It doesn’t affect the way we design our data centers, because we’ve already been running pretty high-density, high-performance compute systems for many years. We optimize everything from the actual server through the rack and the cooling systems, so it won’t really change our strategy.

    Read more: Google Has Built Its Own Custom Chip for AI Servers

    3:00p
    Report Cards for CIOs Prove School is Back in Session

    Lisa Rhodes has had a long career with Verne Global.

    There are two types of people in this world: those who were excited about report cards at the end of each semester and those who were not. Which camp you fell into was usually contingent on how much value, work, and time you put into your courses and how much consistent attention you paid to the daily grades you received leading up to the end of the semester. All of these things combined were a telltale sign of your own personal success…or failure.

    However, what if there was a way for your teachers to predict your success or failure in a course merely weeks into the class? A CIO at Marist College in Poughkeepsie, NY, figured out that predictive analytics can tell whether a student is likely to fail a course by the third week of the semester. The system analyzes class performance and online activity and collects student data from more than a dozen digital sources, including participation in class-related online forums. By analyzing in-class behavior and the underlying participation and engagement online, teachers are able to identify potential issues and address them before a student is in danger of failing a course.
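
    As a rough illustration of how such an early-warning model might work, here is a minimal sketch that trains a logistic-regression classifier on a few engagement signals. The feature names, weights, and synthetic data are hypothetical stand-ins for illustration only, not a description of the Marist College system.

    ```python
    # Minimal sketch of an early-warning model for at-risk students.
    # Feature names, weights, and data are hypothetical stand-ins; a real
    # system would draw on more than a dozen digital sources.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n_students = 500

    # Synthetic three-week engagement snapshot per student:
    # [assignments submitted, forum posts, LMS logins/week, avg quiz score]
    X = np.column_stack([
        rng.integers(0, 6, n_students),
        rng.integers(0, 15, n_students),
        rng.integers(0, 20, n_students),
        rng.uniform(0.0, 1.0, n_students),
    ])

    # Toy labeling rule: low engagement and low scores tend to mean "at risk".
    risk = 2.0 - 0.3 * X[:, 0] - 0.1 * X[:, 1] - 0.05 * X[:, 2] - 2.0 * X[:, 3]
    y = (risk + rng.normal(0, 0.5, n_students) > 0).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Score a new student early in the semester and flag them if the
    # predicted probability of failing crosses a chosen threshold.
    new_student = np.array([[1, 0, 2, 0.55]])
    p_fail = model.predict_proba(new_student)[0, 1]
    print(f"Predicted probability of failing: {p_fail:.2f}")
    if p_fail > 0.5:
        print("Flag for instructor follow-up")
    ```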

    Let’s apply those same predictive analytics to a company. Based on the data-intensive projects that were planned or underway, like HPC clusters or data analytics, these analytics could determine whether a company was at risk of failing. What would the criteria be? Surely there would be something about server density, application security, and network reliability. And considering the importance of data availability, uptime factors would need to go beyond the software and the network.

    One hidden risk starting to present itself is power, or more precisely, the stability, reliability and availability of power from the electrical grid. Many CIOs might see this as outside of their control, but in reality, it needs to be factored into any decision that relates to power-hungry, data-intensive applications they are implementing for their businesses.

    A Data-Intensive World

    Enterprises worldwide are finding long-term, strategic business benefits by better analyzing and extracting more value from the data they create and gather. With data volume expected to double approximately every two years, there is enormous potential economic value to society, worth trillions of dollars. For this reason, the data center is playing a more strategic role in a company’s IT strategy as quick access to critical information is becoming more important than ever before.
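
    As a quick back-of-the-envelope check on what "doubling approximately every two years" compounds to, the short sketch below prints the implied growth factor over a ten-year horizon; the horizon and starting volume are arbitrary and chosen only for illustration.

    ```python
    # If data volume doubles roughly every two years, total volume after
    # t years is 2 ** (t / 2) times the starting volume.
    for years in (2, 4, 6, 8, 10):
        growth = 2 ** (years / 2)
        print(f"After {years:2d} years: {growth:5.1f}x the original volume")
    # After 10 years, volume is roughly 32x what it was at the start.
    ```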

    The Crippling Cost of a Power Outage

    Data centers rely on a continuous feed of power from their electric utility, which has an immediate financial impact for many companies should the grid go down. Data center outages are no longer just an inconvenience; there is a true business cost to the organization. As a result, the demand and risks on the data center are higher than ever before.

    Consider the Power Grid Profile of a Data Center Location

    While data center power outages may seem like a random phenomenon, there is a direct correlation between the rising electricity demand of data centers and the shortfall in power grid capacity and infrastructure needed to support this growth. Many of these grids are running on aging infrastructure and facing increasing reliability issues and cost pressures, as well as a mandate to decarbonize electricity supply resources. As data centers put more stress on already brittle power systems, it’s time to ask not only ‘will there be enough electricity?’ but ‘will it be there when my data center needs it?’

    So, what is a forward-thinking CIO to do? First, make sure you understand the full scope of the HPC projects currently underway in your company. There may be a special project or two hidden away in a different department, and you need to be aware of these projects because they now likely impact your budget and data center resources.

    Next, get a report card on the power grid for any location where you have data centers. The utility contract may not be part of the CIO’s usual purview, but everything the office of the CIO is responsible for – including business continuity of the organization – relies heavily on the power infrastructure.

    Third, think about the cost of downtime for the applications in those data centers should a grid outage occur, and how that impacts your business in terms of operational and opportunity cost. A 2016 survey of 63 US data centers that reported an outage within the past 12 months found the average cost of a data center outage is over $740,000, up 7 percent from 2013 and 38 percent from 2010.
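
    Working backward from the percentages quoted above gives a rough sense of the earlier averages they imply. The figures below are derived from this article's numbers, not taken from the survey itself.

    ```python
    # Back-of-the-envelope check on the quoted outage-cost increases.
    # Derived from the article's figures, not from the underlying survey.
    cost_2016 = 740_000                 # "over $740,000" average outage cost
    implied_2013 = cost_2016 / 1.07     # quoted as up 7 percent from 2013
    implied_2010 = cost_2016 / 1.38     # quoted as up 38 percent from 2010
    print(f"Implied 2013 average: ~${implied_2013:,.0f}")  # roughly $692,000
    print(f"Implied 2010 average: ~${implied_2010:,.0f}")  # roughly $536,000
    ```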

    Finally, think about the applications you have running at each location. Some applications, like financial trading, will dictate location based on latency, resiliency and other requirements, but many won’t. Other applications have high compute power requirements, but low latency or resiliency needs. Applications such as data analytics, HPC and scientific computing might be ideal to move to a location with a more stable power grid as a way to minimize your total risk exposure from a fragile or limited capacity grid.

    Going back to the original idea of creating a model to predict whether a CIO will succeed or fail with the projects driving the business forward: the model only works if it factors in all the variables, both those specific to the application and those of the underlying support systems (including power) that make it possible. The CIO at Marist College created a system that looked at a wide variety of factors, both overt and underlying, to identify students at risk. Enterprise CIOs need to do the same.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:45p
    Tata Communications Sells 17 Data Centers for $633M

    (Bloomberg) — Tata Communications, part of India’s biggest conglomerate, announced the sale of a 74 percent stake in its data center business to a unit of Temasek Holdings at an enterprise value of $633 million to raise funds for expansion and pare debt.

    Singapore Technologies Telemedia is buying the majority stake in 14 data centers in India and three Singapore facilities while Tata Communications will retain the remaining 26 percent. The transaction is expected to close “in the coming weeks” and the proceeds will go toward new investments, according to Rangu Salgame, Tata Communications’ CEO for growth ventures.

    “The sale will bring in cash which will improve our balance sheet,” Salgame said in a phone interview. “It will also help us invest in newer areas such as cloud and e-commerce. The new partner will co-invest and help in expanding the data center business.”

    Related: The Allure of Singapore, the World’s Second Gateway to China

    Audit Process

    Shares of the Mumbai-based company fell 1.5 percent on Thursday to 444.10 rupees, following a 4.2 percent rally the previous day, when the company had said in an exchange filing that it would update investors about the sale status. The benchmark S&P BSE Sensex dropped 1.2 percent on Thursday. The company will likely report financial results in the next 10 days because the audit process couldn’t be completed, it told bourses today.

    The enterprise value for the Indian unit is 31.3 billion rupees ($465 million) and S$232.4 million ($168 million) for the overseas facilities, according to an exchange filing.

    The transaction is the latest in the asset sale push that the coffee-to-cars conglomerate has been pursuing as Chairman Cyrus Mistry seeks to pare debt across Tata companies, cut costs and boost profit. Tata Steel, Tata Power and Indian Hotels, which wants to sell its Taj Boston hotel, are among group firms looking to dispose of non-core assets.

    Tata Communications, which owns the world’s largest undersea fiber link, had a total debt of 95.95 billion rupees as of Sep. 30, according to data compiled by Bloomberg.

    South Africa

    Tata Communications first informed bourses in July last year that it was exploring “strategic options” for its data centers. The business unit has 45 data centers globally, Salgame said. “We haven’t contemplated selling stakes in data centers in other locations yet.”

    Separately, Tata Communications, which was due to announce its earnings for the quarter ended March, said in a filing that the audit of its financial statements wasn’t completed and therefore not considered by the board at its meeting yesterday.

    The company is also scouting for another buyer for its South African network operator Neotel after a proposed deal with a unit of Vodafone Group fell through in March following almost two years of regulatory tussles.

    5:17p
    Cisco Signals Success in Focus Shift from Hardware to Software

    (Bloomberg) — The shares of Cisco Systems jumped the most in three months after quarterly sales and profit forecasts exceeded analysts’ estimates, an early sign that it’s staying ahead of shifts in the networking industry that threaten its lucrative hardware business.

    Growth is being driven by newer units such as security and conferencing, divisions that Cisco has built up in recent years through acquisitions. The company late Wednesday projected sales growth of as much as 3 percent in the current period, while analysts had predicted revenue would decline. Fiscal third-quarter results also topped estimates, sending the stock up as much as 5.9 percent.

    CEO Chuck Robbins is trying to accelerate growth by shifting the company’s offerings toward software-based networking, security and management products, which customers increasingly prefer because they’re less expensive and more adaptable. Recent acquisitions such as Jasper Technologies, whose software allows companies to connect all sorts of electronic devices, and a new emphasis on security are helping make Cisco less dependent on its expensive, purpose-built hardware, especially as lackluster economic growth means corporate customers are reluctant to spend.

    “We’re in the early days of this transition, but I think we’ve proven in those businesses that we can actually make this transition, and we now have a plan underway to take that methodology across the balance of our portfolio,” Robbins said in an interview Thursday morning.

    The company still has a long way to go to recast itself fully, and earnings are not where they should be, Robbins said Wednesday after the earnings results were released.

    Software Takeover

    Profit before certain costs in the period that ends in July will be 59 cents to 61 cents a share, and revenue may rise as much as 3 percent, the company said in a statement, indicating sales as high as $13.2 billion. That compares with average analyst projections for profit of 58 cents a share on $12.4 billion in sales, according to data compiled by Bloomberg.
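
    For context, those two guidance figures pin down the implied year-ago revenue base for the July quarter. The quick check below is derived from the numbers above, not from Cisco's filings.

    ```python
    # High end of guidance: up to 3 percent growth, implying sales as
    # high as $13.2 billion. Back out the implied year-ago quarter.
    guided_sales_high = 13.2e9   # dollars
    growth_high = 0.03
    implied_prior_year_quarter = guided_sales_high / (1 + growth_high)
    print(f"Implied year-ago quarterly revenue: ${implied_prior_year_quarter / 1e9:.2f}B")
    # Prints roughly $12.82B.
    ```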

    “The market was braced for them to miss and they put up decent results, not great,” said Mike Genovese, an analyst at MKM Partners. “They’re facing up to the reality that hardware is not a growth market. Software is taking over.”

    Cisco’s shares, down 1.6 percent this year through Wednesday, jumped to as high as $28.29 Thursday, the biggest increase since February, and were trading at $27.91 at 9:51 a.m. in New York.

    In the third quarter, which ended April 30, Cisco’s net income fell to $2.35 billion, or 46 cents a share, from $2.44 billion, or 47 cents, a year earlier. Sales fell 1.1 percent to $12 billion. Excluding some costs, profit was 57 cents, compared with an average analyst estimate for profit of 55 cents on revenue of $11.98 billion.

    Cisco’s upbeat forecast, given a month after most other technology companies gave their own quarterly predictions, may indicate that spending on networking improved in April after a weak first three months of the year. Rival Juniper Networks last month said corporate customers and telecommunications providers had cut back on orders in the calendar first quarter, leading to lower-than-predicted profit and revenue. Cavium, a chipmaker that counts Cisco as its biggest customer, reported sales that fell short of its forecast.

    “With Cisco being off by a month, they may be able to call out if things started improving in April,” said David Heger, an analyst at Edward Jones & Co. “That could be a new data point for the market.”

    Hardware Declines

    Cisco’s biggest division, switching, had third-quarter sales of $3.45 billion, a decline of 3 percent from a year earlier. Its second-largest division, routing, suffered a 5 percent drop in sales to $1.89 billion, the company said. Newer units including security, service-provider video and collaboration all posted sales increases of more than 10 percent.

    Gross margin, or the percentage of sales remaining after deducting costs of production, widened to 65.2 percent in the recent quarter from 62.5 percent a year ago.

    Robbins, who took the top job at Cisco last year, is trying to return the company to the double-digit percentage growth the company delivered under his predecessor, John Chambers. Cisco hasn’t achieved that since 2010, and analysts don’t project that rate of expansion in the coming years, as the networking market turns away from the combinations of locked-down custom software and hardware that were once so successful.

    Robbins has said the company is rapidly transforming its product line to fit changing customer requirements ahead of any full-scale shift. Regardless of whether it makes that move fast enough, some analysts are betting that Cisco’s reserves of cash and equivalents, which stood at $63.5 billion at the end of the latest quarter, mean the company can protect itself by acquiring any smaller rival that emerges as a leader in technology that could undermine Cisco’s position in the broader networking industry.

    “The balance sheet they have, they can keep buying themselves into areas that are growing,” said Edward Jones’s Heger. “They can at least buy their way into keeping things stable.”

    5:55p
    Salesforce Plans to Expand Relationship with AWS
    By The WHIR

    The relationship between Salesforce and Amazon Web Services has expanded, and will continue to do so, Salesforce CEO Marc Benioff said in an earnings call following the release of the company’s Q1 2016 results on Wednesday. The shift from its own data centers to the AWS cloud may not be complete and company-wide, but it does appear to be substantial enough to represent a major strategic shift for Salesforce.

    “We’ve got a great relationship with Amazon; they are a huge user of Salesforce and that certainly has been a huge part this quarter as well. We did a very significant and very large transaction with Amazon, and Jeff Bezos and I have a great meeting of the minds, (on) the future of the cloud. I think that it’s been a great relationship and partnership for us,” Salesforce President and COO Keith Block said in the conference call to answer an analyst’s question about expanding Salesforce’s use of AWS beyond Heroku and its IoT cloud. “We want to continue to grow that and expand that strategically. We are definitely exploring ways so we can use AWS more aggressively with Salesforce.”

    Block told analysts that all of Amazon now uses Salesforce, and a source told Fortune that Amazon had reached a deal with Salesforce to extend all Salesforce software to all of its employees. The Fortune report speculates that this deal could be the “nine-figure deal” closed this quarter.

    Features of Salesforce’s marketing tools announced last week also run on AWS, Block said, as does, perhaps tellingly, “a lot” of company research and development. A Salesforce spokesperson also told Fortune that SMB sales software SalesforceIQ runs on AWS.

    Benioff also hinted at more announcements from Salesforce involving AWS at its upcoming developer conference TrailheaDX and customer conference Dreamforce.

    Shifting away from the cloud, or AWS in particular, can reduce costs for companies like Dropbox, but Salesforce appears to be betting heavily on AWS to grow its margins.

    This first ran at http://www.thewhir.com/web-hosting-news/salesforce-plans-to-expand-relationship-with-aws

    8:34p
    Scale-Out Infrastructure Startup DriveScale Raises $15M

    DriveScale, a Silicon Valley startup that sells scale-out IT infrastructure built from commodity hardware, came out of stealth Thursday and announced a $15 million funding round.

    Founded by a group of IT hardware industry veterans, DriveScale differentiates itself by enabling storage resources in a scale-out architecture to be scaled separately from compute. It’s pitching the architecture as a better way to deploy infrastructure for Big Data.

    One of the investors participating in the funding round is Ingrasys, a subsidiary of Foxconn, one of the world’s largest electronics manufacturers. Ingrasys co-developed DriveScale’s hardware and will act as its manufacturer. The Foxconn subsidiary is also one of the startup’s first customers, a group that also includes AppNexus, ClearSense, and DST Systems.

    The other investors in DriveScale are Nautilus Venture Partners and Pelion Venture Partners. Pelion led the Series A round.

    DriveScale is calling its architecture “composable,” a term HPE also used to describe its recently launched product line with many similar aims, including the flexibility to adjust compute or storage capacity independently from each other.

    Read more: HPE Rethinks Enterprise Computing

    The approach is often referred to as “rack-scale architecture,” which is something internet giants like Google and Facebook use in their data centers. Now, the startup is promising enterprises the kind of rack-scale infrastructure the web giants have been enjoying for years.

    DriveScale’s three-person founding team has deep roots in the IT infrastructure industry. Two of the founders, CTO Satya Nishtala and chief scientist Tom Lyon, held key engineering roles at Nuova Systems, a startup acquired by Cisco in 2008 whose technology became the basis of Cisco’s UCS servers and Nexus switches, according to founder bios on DriveScale’s website.

    All three founders have deep ties to Sun Microsystems, the legendary Silicon Valley hardware company whose engineering legacy continues to command respect in the industry, despite its business troubles in the years between the dot-com crash and its acquisition by Oracle in 2009.

    The third founder, VP Duane Northcutt, ended up at Sun after it acquired Kealia in 2004; he had been Kealia’s VP of technology. Kealia was a startup launched by one of Sun’s founders, Andy Bechtolsheim, who had left Sun but rejoined it following the acquisition. The Kealia deal was the basis for Sun’s entry into the x86 server market, according to DriveScale.
