Data Center Knowledge | News and analysis for the data center industry

Monday, October 10th, 2016

    12:00p
    Cloud by the Megawatt: Inside IBM’s Cloud Data Center Strategy

    Whenever Francisco Romero hears a client say they need so many thousand cloud VMs, X amount of storage, and connectivity across the sites that will host them, in his head he converts the requirement into megawatts. They may be asking for 10,000 VMs of various kinds, but to him that means delivering 1MW or 2MW of data center capacity, depending on the kinds of VMs. Little drives home the physical nature of cloud computing better than this.
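
    A minimal back-of-the-envelope sketch of that conversion, with assumed per-VM power figures (not IBM's numbers), might look like this:

        # Rough conversion from a VM request to data center capacity.
        # The watts-per-VM figures are illustrative assumptions, not IBM's.
        ASSUMED_WATTS_PER_VM = {
            "small": 100,   # light general-purpose VM
            "large": 200,   # compute- or memory-heavy VM
        }

        def vms_to_megawatts(request, overhead=1.2):
            """Estimate facility megawatts for a mix of VM types.

            `overhead` approximates losses beyond the IT load (cooling,
            power distribution), in the spirit of a PUE multiplier.
            """
            it_watts = sum(ASSUMED_WATTS_PER_VM[kind] * count
                           for kind, count in request.items())
            return it_watts * overhead / 1_000_000

        # 10,000 VMs of mixed sizes lands in the 1MW-2MW range quoted above.
        print(vms_to_megawatts({"small": 5_000, "large": 5_000}))  # ~1.8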

    As chief operating officer of IBM SoftLayer, Romero oversees the data center strategy that underpins IBM’s entire cloud business, as the company battles in the market dominated by Amazon Web Services and, to a much lesser extent, Microsoft Azure. All major US hardware vendors have tried and failed to become formidable rivals to Amazon in the cutthroat cloud infrastructure market; all except IBM, which continues to expand both technological capabilities of its cloud and the global infrastructure that makes it all possible.

    IBM’s Infrastructure-as-a-Service cloud, its Platform-as-a-Service cloud, called Bluemix, as well as the plethora of Software-as-a-Service offerings, including services enhanced by Watson, its Artificial Intelligence technology, run in SoftLayer data centers. “We are the main delivery mechanism for those,” Romero, who joined SoftLayer four years before IBM acquired the Dallas-based data center provider in 2013, said.

    There are three basic flavors of infrastructure for nearly every cloud service IBM provides: public cloud, hosted private cloud, and on-premises private cloud. SoftLayer data centers host infrastructure for the first two, he explained. This excludes cloud managed services, for things like SAP or Oracle applications, which are hosted in non-SoftLayer facilities IBM either builds or leases around the world.

    Taking into account IaaS alone, IBM had the fourth-largest cloud market share in 2015, according to Structure Research. With an estimated $583 million in revenue that year, its IaaS business was slightly behind Rackspace’s, which turned over $646 million. For the sake of comparison, AWS raked in $7.88 billion, while Microsoft Azure is estimated to have made $1.209 billion, according to Structure.

    Read more: Top Cloud Providers Made $11B on IaaS in 2015, But It’s Only the Beginning

    Since early 2014, IBM has been expanding its cloud data center footprint. That January, it announced a commitment of $1.2 billion to add 15 data centers around the world to the 13 SoftLayer and 12 IBM facilities that existed at the time. These expansion activities focused in part on adding new locations but also on expanding capacity in markets where the company already had cloud infrastructure.

    SoftLayer’s website currently lists 31 data centers in 22 markets across North America, Europe, and Asia Pacific. There are also numerous network Points of Presence in places like Los Angeles, Chicago, New York, Stockholm, and Perth, among others.


    Map of IBM SoftLayer data center locations, as of October 2016. (Image: IBM)

    Growing in Place

    Expansion in existing locations has been an important part of the strategy. As more enterprise customers deploy more critical workloads in the cloud, they are engineering those applications to take advantage of multiple availability zones in each region for load balancing and disaster recovery. In response, Romero’s team has been adding data centers in existing regions to increase the number of availability zones IBM’s cloud offers.

    The European Court of Justice’s annulment of the Safe Harbor framework last year added another incentive to expand capacity in existing regions. When the set of rules that had for 15 years governed cross-border data flows between the US and all EU countries was in place, cloud customers had little concern about having a primary data center in Frankfurt, for example, and a disaster-recovery site in Amsterdam. Once Safe Harbor was struck down and each country was left to its own devices to regulate data flows, cloud users became a lot more interested in multi-site infrastructure within a single country, Romero said. This past August, Europe enacted a replacement for Safe Harbor, a framework called the EU-US Privacy Shield, but companies have been slow to sign up, with many weighing other compliance options.

    Expanding the Empire

    The newest markets IBM has entered with SoftLayer data centers are Seoul and Oslo. The company weighs a number of factors before deciding to extend infrastructure into a new market. The most obvious one is demand: there has to be a critical mass of customers it knows will want cloud capacity in a new geography before it makes the investment. Another big consideration is network infrastructure. India, for example, has lots of demand for cloud services but very limited and expensive connectivity options, Romero said. IBM announced the first SoftLayer data center in India, located in Chennai, one year ago.

    Like other cloud providers, SoftLayer prefers to team up with a local partner when going into a new market. It partnered with local telcos in both Seoul and Oslo on launching data centers in those cities. Besides having a partner familiar with the local market, this strategy also helps in places with subpar infrastructure. Partnering with a telco, for example, means the partner will be incentivized to invest in new infrastructure to ensure the venture succeeds, Romero explained.

    Cloud by the Megawatt

    While emerging markets in Asia are growing fast, demand for cloud services in the US continues to grow faster than anywhere else in the world, according to Romero. “The US for sure is ahead of everybody else,” he said. This is because the US cloud market is the most mature one.

    While even in Europe most cloud customers are still testing the waters with cloud services, US companies are now starting to shift really large critical workloads to the cloud, he said. IBM is now seeing cloud deals in the US that require commitments of 3MW to 5MW of data center capacity, keeping Romero and his team busy working in a whole new paradigm in capacity planning and deployment.

    Unlike Amazon, Microsoft, and Google, which both lease data centers and build their own, SoftLayer has so far relied exclusively on data center providers. Today it uses around six providers, but its biggest one is Digital Realty Trust.

    IBM is Digital Realty’s biggest tenant by annualized rent, occupying close to 1 million square feet in 23 locations and generating more than $115 million in annualized rent for the San Francisco-based data center REIT, according to investor documents available on Digital’s website. These include both leases with SoftLayer and with IBM, which has numerous non-SoftLayer data centers built for various purposes around the world.

    One of the reasons Digital Realty gets so much SoftLayer business is that the cloud provider has a very specific uniform design for all data centers it deploys, and Digital has been delivering on that design for many years. It includes specific cooling and power density requirements, room layouts, strict PUE caps, tight SLAs for temperature and humidity ranges, and infrastructure redundancy, among other things.

    Unlike other cloud giants, who use N+1 redundancy for UPS systems and other electrical infrastructure components in their mega-scale data centers, SoftLayer requires 2N UPS and treats this requirement as a major differentiator, Romero said. Put simply, N+1 means there are enough UPS units to handle the required load and one extra to compensate if one of the primary ones fails. 2N means there is full redundancy for the entire UPS plant.
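
    A minimal sketch of the arithmetic behind those two schemes, using an assumed 500kW module rating rather than any actual SoftLayer figure:

        import math

        def ups_units_needed(load_kw, unit_kw, scheme):
            """Count UPS modules for a given critical load.

            N+1: enough units to carry the load, plus one spare.
            2N:  two independent sets, each able to carry the whole load.
            """
            n = math.ceil(load_kw / unit_kw)  # units needed just to carry the load
            if scheme == "N+1":
                return n + 1
            if scheme == "2N":
                return 2 * n
            raise ValueError(f"unknown scheme: {scheme}")

        # For a 2MW critical load on assumed 500kW modules: N is 4,
        # so N+1 takes 5 units while 2N takes 8.
        print(ups_units_needed(2000, 500, "N+1"))  # 5
        print(ups_units_needed(2000, 500, "2N"))   # 8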

    SoftLayer’s design requires the ability to deliver variable power densities on the data center floor, anywhere between 3kW and 16kW per rack, for example. While power draw by physical hosts for cloud VMs is fairly predictable, power can fluctuate widely for bare-metal cloud servers, Romero said. Those rack-density ranges also change on a regular basis. The team revisits them about every six months, as Intel introduces new chipsets, to evaluate how much power the next generation of servers will need.
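
    To see why that range matters for capacity planning, here is an illustrative sketch of how many racks a fixed block of IT power supports at different densities; the 1MW block is an assumption, not a SoftLayer figure:

        def racks_per_block(rack_kw, block_mw=1.0):
            """How many racks of a given density fit in a block of IT power."""
            return int(block_mw * 1000 // rack_kw)

        # The same megawatt supports very different rack counts across the
        # 3kW-16kW range, which is why the density assumptions get revisited
        # with every new server generation.
        for density_kw in (3, 8, 16):
            print(density_kw, "kW/rack ->", racks_per_block(density_kw), "racks")
        # 3 kW/rack -> 333, 8 kW/rack -> 125, 16 kW/rack -> 62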

    Changes Ahead

    Romero doesn’t expect SoftLayer’s design requirements and its overall data center strategy to remain static. IBM’s cloud business has reached a scale at which investing in building its own data centers is an ongoing discussion for his team.

    They are also evaluating new design innovations, such as modular in-row cooling, and taking an interest in data center providers with new ideas about structuring leases with their clients. Instead of pushing the client to reserve big chunks of capacity ahead of time, whether there is an immediate need for it or not, these companies offer the option to increase the commitment incrementally, while remaining willing to build ahead to ensure capacity is available when the need comes. “Now we’re seeing providers that are willing to be much more creative with those commercial terms,” Romero said.

    5:24p
    NetSuite Shares Fall After Oracle Says Deal Needs More Support

    (Bloomberg) — NetSuite Inc. fell the most since June after its suitor, Oracle Corp., announced it only has 22 percent of the stock needed to close the $9.3 billion acquisition, raising doubts about the deal.

    The stock dropped as much as 4.7 percent Friday and declined 3.5 percent to $105.44 at 3:40 p.m. in New York. That’s below the price of $109 a share that Oracle offered in July for NetSuite, a cloud-based software provider. Oracle extended its tender offer to Nov. 4, saying that would be the final deadline for shareholders to make their decision on the deal. Oracle also said in a statement Friday that it will respect shareholders’ choice if they don’t endorse the agreement.

    Background on the deal: Larry Ellison Accepts the Dare: Oracle Will Purchase NetSuite

    Buying NetSuite, one of the first cloud-services companies, will help Redwood City, California-based Oracle compete against the likes of Salesforce.com Inc., Microsoft Corp. and SAP SE. Oracle responded after at least one NetSuite shareholder expressed concerns about the price of the deal, saying it should be higher. The large software maker believes the agreed-upon per-share price of $109, which was accepted by the board, is the right price for NetSuite, Chief Executive Officer Safra Catz said in an interview last month.

    The “deal is running into roadblocks,” analysts at Cowen & Co. said in a note, adding Oracle could still raise the price tag.

    The deal relies on an unusual arrangement, given that Larry Ellison, chairman of Oracle, has a large minority stake in NetSuite. A group of independent Oracle directors — excluding Ellison — helped evaluate and negotiate the deal with San Mateo, California-based NetSuite. For NetSuite, approving the deal will mean getting clearance that sidesteps the large stockholder. The company has said a majority of “unaffiliated” shares, those not owned by Ellison and his family — or by directors and executive officers — must vote in favor of the acquisition for it to go through.

    NetSuite’s stock decline on Friday was the first time it had fallen much below the offer price of $109 since the deal was announced. Shares of Oracle were little changed.

    Joel Fishbein, an analyst with BTIG, said Oracle is playing hardball.

    “Oracle’s $109 offer was already a 44 percent premium to the ‘clean’ price of $76 when the rumors of a deal started circulating,” he said. “So if it falls through you’re looking at real risk of this thing dropping back to those levels.”

    See also: Oracle Reduces Ellison’s Pay to $41.5 Million, Co-CEOs Cut

    6:52p
    Here’s Google’s Plan for Calming Enterprise Cloud Anxiety

    Enterprises that sign up for Google’s cloud services will now have the option to submit their software development and IT operations teams to the same level of operational rigor Google applies to its own engineers.

    The company on Monday revealed more details about a new approach to cloud customer support it announced last week, created to help alleviate customers’ anxiety about giving up control of their infrastructure to a cloud provider. It will embed its own experts on cloud customers’ teams to help them deploy and run applications in Google’s cloud data centers in the most reliable way possible.

    The services will include shared paging (when things go wrong), auto-creation and escalation of priority-one tickets, participation in customer “war rooms,” and Google review of the customer’s designs and production systems.

    The company will not charge a penny for what amounts to extremely hands-on professional services, but it doesn’t expect every customer to opt for them, given the level of commitment required on the customer’s part.

    Dave Rensin, Google’s director of Customer Reliability Engineering:

    “This program won’t be for everyone. In fact, we expect that the overwhelming majority of customers won’t participate because of the effort involved. We think big enterprises betting multi-billion-dollar businesses on the cloud, however, would be foolish to pass this up. Think of it as a de-risking exercise with a price tag any CFO will love.”

    Google has formed a new team to support this capability, called Customer Reliability Engineering. The title is a variation on Site Reliability Engineering, a concept created at Google years ago to describe software engineers responsible for building and operating Google’s global infrastructure. The company doesn’t differentiate between software development and IT and in fact prefers to have developers run infrastructure, assuming it’s a job handled better by people with deep understanding of software.

    CREs will work with customers’ dev teams the same way SREs work with developers at Google. There is a set of ground rules both sides agree to commit to. SREs accept the responsibility for maintaining uptime and healthy operation of a system if:

    1. The system (as developed) can pass a strict inspection process — known as a Production Readiness Review (PRR)
    2. The development team who built the system agrees to maintain critical support systems (like monitoring) and be active participants in key events like periodic reviews and postmortems
    3. The system does not routinely blow its error budget

    In a way, “error budget” is a different name for availability requirements, or SLAs, such as 99.9 or 99.999 percent uptime. To make Google developers on the product side of things responsible for reliability, SREs give them an error budget, and once they blow it, they have to spend all of their engineering time writing code that fixes the uptime problems they caused and makes the system more stable overall.
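
    A minimal sketch of the arithmetic behind an error budget, under the common reading of the concept (illustrative only, not Google's internal tooling):

        def error_budget_minutes(slo, window_days=30):
            """Downtime allowed in a window before the budget is spent.

            `slo` is the availability target, e.g. 0.999 for three nines.
            """
            total_minutes = window_days * 24 * 60
            return total_minutes * (1 - slo)

        # Roughly 43 minutes of allowed downtime per 30-day window at 99.9
        # percent, but only about 26 seconds at 99.999 percent.
        print(error_budget_minutes(0.999))    # ~43.2
        print(error_budget_minutes(0.99999))  # ~0.43 (about 26 seconds)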

    If the developers don’t hold up their side of the bargain, the SREs are free to “hand back the pagers,” meaning they are no longer committed to, say, coming to the rescue when something goes down at 3 a.m.

    Google cloud customers that opt for working with the CRE team will have to agree to the same “social contract” in exchange for its services. Rensin:

    “When a customer fails to keep up their end of the work with timely bug fixes, participation in joint postmortems, good operational hygiene etc., we’ll ‘hand back the pagers’ too.”

    9:27p
    NFL Linemen Show Business How It’s Done

    Joe Dupree is VP of Marketing for Dupree.

    The start of the NFL season has revealed interesting insights about today’s players. For one, linemen in particular seem to be getting bigger and faster. Gone are the days of just putting the largest players on the line of scrimmage to plug gaps. Now, these linemen need quickness to defend against misdirects and non-traditional formations.

    Something similar is happening with data in the modern enterprise.

    Meaningful Data

    It’s no secret that file sizes are increasing along with the amount of data being generated today. Even the average website has doubled in size in the past three years. Data and files are getting larger, yet still must get from point A to point B in an expedited fashion. It can no longer be the big, slow data it once was, but enterprises still fail to fully embrace the agility component.

    So how do companies embrace data agility? Like the 32 NFL teams, the modern enterprise must adapt to this new era and cultivate assets that can nimbly move to where they can do the most good. For players, this often means new diets, adjustments to strength and cardio training regimens, and even new technology. In business, it often means turning to next-generation technology solutions that can address current and future needs.

    But why even bother? Data only benefits organizations if it means something. Organizations are finding that they must collect data – structured and unstructured – from a variety of disparate sources, and this data must be “in shape” to fit their needs. It cannot sit fat and idle around the office and still be counted upon for results.

    Companies are actually great at this part, the gathering of the data. It’s moving these large data sets to where they can be of benefit that’s the greater battle, and it’s often an afterthought. Whether it’s automated, structured EDI information pouring out of your internal applications or the unstructured customer and social data pulled from a data lake or data warehouse, all of this information must integrate seamlessly to enable faster, better business decisions.

    The Lack Thereof

    So what happens when your big data isn’t agile data? Lots:

    • A big hit: Think of the quarterback getting blind-sided because his left tackle missed a block. Major blows to an organization can add up when a security vulnerability or an industry pivot is missed, something data can indicate ahead of time. Agile data doesn’t get caught flat-footed.
    • Data bloat: Too much information without a way to use it means it is just clutter. If this is true for your organization, then congratulations, your business has become an episode of A&E’s “Hoarders.” That means you’re paying for storage and bandwidth for something you’ll never use. In the business world, there’s no limit to how much data you can have in play (aside from network and storage limits), but throwing more “guys on the field” won’t help your cause if they’re not coordinated.
    • Competitive edge: It is one thing to not gain yardage, but your organization could actually lose ground to the competition if you’re not employing all of the data at hand and in a forward-looking manner. Agile data grants businesses a competitive edge.

    Data Agility

    The paradigm shift we’re witnessing is that size alone doesn’t matter. Data agility is just as important (if not more important), and that trait is what enables massive amounts of (i.e., big) data to be organized, analyzed, and turned into insight. Look into a data integration strategy that includes enterprise-grade scalability and high-speed data transfer as its foundation, so that even the biggest data becomes leaner and more useful.

    The bottom line: Modern enterprises – and NFL teams – have little use for large and slow.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

