Data Center Knowledge | News and analysis for the data center industry
 

Thursday, January 12th, 2017

    1:00p
    IBM’s Leading Data Center Storage Line Gets All-Flash Upgrade

    Like the latter stages of a nicotine patch program, IBM has been steadily weaning data center storage off of rotating, ceramic hard drives and onto all-solid-state memory.  The next step in that program begins this morning, with IBM’s announcement of new models in its DS8880 data storage system series whose storage enclosures are being replaced with all-flash units.

    The company is introducing Models DS8884F, DS8886F, and DS8888F, with the “F” representing the all-flash substitution, compared with their non-“F” counterparts.  But in a departure from its previous policy, as Levi Norman, director of IBM Enterprise Storage, stated in an interview with Data Center Knowledge, the company will be coupling the phrases “Business Class,” “Enterprise Class,” and “Analytic Class,” respectively, to these models.  Its intention, he tells us, is to speak more clearly to data center managers who are more involved with the procurement process today than ever before — more clearly than what he described as IBM’s traditional “alphanumeric soup of nomenclature.”

    “When you look at the overall architecture,” stated Norman, “I think you can’t discount the software stack along with the CPU complex, along with the storage complex itself.  And when you bring all those things together, and they operate in harmony because they were designed to operate in harmony, you can actually get better response times out of these styles of architecture than you could with something that was completely in-memory.”

    What the “F” Adds

    The “F” editions build off of the original DS8884, DS8886, and DS8888 models introduced in October 2015, Norman told us.  The first two in that series were hybrid enclosures containing up to 1,536 HDD units, although the DS8888 was always all-flash.

    The “Analytic Class” model DS8888F, as with the non-“F” model, is based around the Power System E850 server: a 4U, 4-socket component enabling up to 48 cores clocked at 3.02 GHz, or 32 cores clocked at 3.72 GHz.  Its chassis supports up to 128 host ports, 2 TB of DRAM, and over 1.2 PB of solid-state storage on a maximum of 384 flash cards.  (IBM has stopped using the phrase “flash drives,” to draw a sharper distinction.)

    By comparison, the “Enterprise Class” model DS8886F is based around the Power System S824, which enables up to two 8-core 4.15 GHz processors, two 6-core 3.89 GHz processors, or two 12-core processors clocked at 3.52 GHz.  The “Business Class” model DS8884F is based around the Power System S822, whose maximum core-count configuration is two 10-core 3.42 GHz processors, and whose highest clock speed option is two 8-core 4.15 GHz CPUs.  All three new models continue to use the same rack enclosure as their non-“F” counterparts in the DS8880 family.

    Is Analytics Storage Really That Different from Regular Storage?

    It’s obvious that IBM has carved out three clear performance classes, in an adaptation of the classic “Good / Better / Best” marketing scheme that was a hallmark of the old RadioShack catalog.  But with analytics software makers directing their product pitches more toward smaller businesses, and with telcos and connectivity providers using “Business Class” to mean the upper speed tier, isn’t IBM worried that its efforts to address data center managers more directly might end up with a mismatch?

    “The way that this product [line] is segmented,” Norman responded, “is because of the primary audience. . . those mission-critical-style clients that can’t afford any downtime in their business.  Those people are 19 of the top 20 banks, telcos that are easy to recognize, trading desks, healthcare institutions, big healthcare research environments.  When we’re talking to that audience with this product, it’s the higher-end — the ‘best’ end of the ‘good/better/best’ — that they tend to want.”

    For that reason, IBM is also pushing its Analytic Class unit towards so-called “cognitive workloads” — which is a tricky message to craft, especially since it’s the other end of IBM that’s advancing the cause of all-DRAM architectures — as opposed to storage arrays — with its DB2 version 11.1.

    “When you peel back the onion layers of cognitive to what it means,” IBM’s Norman told Data Center Knowledge, “underneath it, it behaves in an analytic manner.  I think the difference is, cognitive beyond analytics is meant to make sense of all the data that it pulls in, and then start to reason through it.  But underneath it, from an analysis perspective, it behaves like analytics.  You have to ingest massive amounts of data, make sense of it, get it back out of storage very quickly, and move it around very quickly, to get to a near-real-time answer that satisfied the question that was posed.”

    Continuity as a Metric

    Since the DS8880 series’ introduction in 2015, Norman has characterized its architecture as enabling “non-stop availability” of stored data, and he repeated that claim for us this time.  But with systems such as Hadoop enabling high availability by way of resilience techniques and high redundancy — essentially assuming the underlying hardware to be unreliable and subject to failure — what is the real business value of consolidating all that storage into one node and paying a premium?

    Norman responded by saying IBM customers apply a higher-order metric, demanding that their storage hardware endow their systems with what he described as business continuity.  So we asked him, how many bare-bones, Open Compute Project-style, x86 servers would it take to provide a data center with the same level of business continuity as one IBM DS8888F?

    For clarity, Norman passed on our question to an IBM technical team, who crafted this answer for us:

    “It is a combination of availability and performance that we sell.  Hardware will always fail and it depends on what your definition of availability really is.

    “We have sub-millisecond response times,” the IBM technical team continued, “and in the event of a hardware failure, we maintain that response time after failover in the individual system.  Error recovery is typically less than 6 seconds.  In a replication case, you are talking about HyperSwap failover times, usually tens of seconds, or, for longer-distance asynchronous replication, the RPO [recovery point objective] is on the order of minutes or more, depending on the client implementation.  X86 single-system error recovery can take many times the 6 seconds; [and] some are in the minutes range.  So it is an apples-to-oranges comparison.”

    Perhaps so.  But IBM has just upgraded its lineup of oranges, so to speak, and is pitching them to customers with a room full of apples (small “a”).  So such comparisons are bound to be made.
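    For readers who want to put rough numbers on that recovery-time argument, here is a minimal sketch of how failure frequency and recovery time combine into an availability figure. The failure rates and the five-minute x86 recovery time below are hypothetical placeholders for illustration, not figures from IBM or the Open Compute Project.

        # Rough sketch: how recovery time and failure frequency feed an availability figure.
        # Failure rates and recovery times below are hypothetical, not vendor data.
        SECONDS_PER_YEAR = 365 * 24 * 3600

        def availability(failures_per_year: float, recovery_seconds: float) -> float:
            """Fraction of the year the system is up, assuming downtime is simply
            failures multiplied by recovery time (no queuing, no correlated faults)."""
            downtime = failures_per_year * recovery_seconds
            return 1.0 - downtime / SECONDS_PER_YEAR

        scenarios = {
            "array with 6-second error recovery": (2, 6),    # 2 failures/yr, 6 s each
            "x86 node with 5-minute recovery": (2, 300),     # 2 failures/yr, 300 s each
        }

        for name, (rate, seconds) in scenarios.items():
            a = availability(rate, seconds)
            print(f"{name}: {a:.7f} available, {(1 - a) * SECONDS_PER_YEAR:.0f} s down/yr")

    Even on these idealized assumptions, the scenarios differ mainly in seconds of downtime per year, which is why the IBM team frames the question around response time and recovery point objectives rather than raw uptime alone.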

    4:30p
    Managed AWS Cloud ‘Fastest Growing Business Rackspace Has Ever Had’, President Says

    Brought to You by Talkin’ Cloud

    Even in 2008, when the recently appointed Rackspace president Jeff Cotten was just starting his career at the company, Amazon Web Services was a leader in the public cloud. It would take years before Rackspace would turn its business model on its head and launch managed cloud services for AWS and Azure.

    In an interview with Talkin’ Cloud, Cotten, who will formally take the role of president on Feb. 1, says managed AWS is the fastest growing area of business that Rackspace has ever had.

    “We sort of see this as the innovator’s dilemma. Rackspace was a disrupter in the industry in the early 2000s, disrupting a lot of the traditional IT service providers at the time,” says Cotten, who previously led Rackspace’s managed AWS business and international teams. “In some regards our business got disrupted with cloud and we’ve now taken advantage of that with a pivot in our strategy to manage these public clouds.”

    As president, Cotten will be taking over some of the duties from Rackspace CEO Taylor Rhodes, who since 2014 has served as both president and CEO. (Cotten and Rhodes actually worked together before Rackspace at EDS in Dallas.)

    “It will sort of serve to divide and conquer here to allow Taylor to focus on our private cloud and traditional managed hosting business which we still see a rather large opportunity there,” he said. “And then he’ll also be focused on a big opportunity we see with top 100 accounts to make sure we serve them effectively across cloud platforms.”

    Jeff Cotten, president, Rackspace (Photo: Rackspace)

    Cotten will oversee Rackspace’s channel organization, direct sellers, engineers and architects that engage with customers, and its professional services organization. Along with that, Cotten’s focus spans across three main areas: ensuring its salespeople have the right specializations and understand customer pain points; growing its international business; and expanding its hypergrowth business, which includes managed AWS and managed security.

    “Trying to understand what customer pain points are and what they’re trying to solve with cloud infrastructure is very complex. It’s certainly complex for our customers, that’s why they come to us, but even for our sales organization,” he says.

    “We’re going to continue to ensure that our sellers have speciality areas that they can go much deeper into a certain cloud platform to solve the pain points.”

    This year will also bring more international growth for Rackspace, specifically the market launch in DACH, which was announced in October along with the opening of its Munich office.

    “That’s a market we’ve been studying for a number of years when we set up our international headquarters in Switzerland,” says Cotten, who himself worked there as managing director of international before joining the managed AWS team.

    Rackspace launched managed AWS services in 2015, and in 2017, it’s about growing this area of the business, Cotten says, including hiring the right people to make it happen.

    “We’re finding that our customers are on multiple points of the journey to cloud adoption,” he says. “It’s very important that we have services that help them capture wherever they are in their journey to cloud adoption and help them further advance.”

    “Finding the talent that understands how the customer behaves, what they need differently is a very difficult thing and not to mention the credentials and certifications and either finding the people that are either already certified or capable of obtaining the certification of some of these cloud platforms is a very big challenge,” he says.

    When asked about the impact of AWS Managed Services, which the cloud giant launched last month to mixed reactions, Cotten doesn’t seem concerned.

    “Our assessment is that it doesn’t have any near-term impact on our business. We actually were involved with AWS around this time last year in the development of that offer,” he says. “We actually are excited about it in some regards because there’s some tooling and systems they will launch that will actually benefit us as a result of what they’re doing with their managed service launch.”

    Beyond that, Cotten says Rackspace and AWS are targeting a very different customer.

    “It’s a very high-end enterprise customer; [AWS’] starting point from a pricing perspective is $100,000 in monthly services alone, that doesn’t count infrastructure. Rackspace’s starting point is $1000 per month,” he says.

    Though the business looks a lot different than it did when he first started at Rackspace, in some ways the company is going back to its roots, Cotten says, describing the evolution of the business in three separate phases. In the first phase, Rackspace was focused on service; though customers often knew what infrastructure they needed, they were looking for managed services. In the second phase, Rackspace’s customers were much more technology-savvy and self-service oriented. They wanted to buy and consume its public cloud, and appreciated OpenStack.

    “Now we’re entering phase three which is getting back to our service roots,” he says. “In phase three it’s a very different interaction because our customers are coming to us seeking our advice and how to get started. The complexity is so much vaster than it was in phase one or even phase two.”

    “Phase three is frankly the most exciting phase because it allows us to have a much more mature discussion with the customer,” Cotten says. “The nature of Fanatical Support I think lends itself to getting engaged earlier in the customer’s buying cycle and developing that trusted relationship.”

    This article originally appeared here, on Talkin’ Cloud.

    6:59p
    New Investors Aboard, CEO Expects Compass to Scale by Order of Magnitude

    Compass Datacenters CEO Chris Crosby expects the company’s first institutional investors to enable it to pursue deals it has not been able to go after in the past due to capital constraints.

    RedBird Capital Partners and Ontario Teachers’ Pension Plan have backed Dallas-based Compass, the company announced Thursday. Between them, the two investors now own a majority stake in the company, with the remaining portion continuing to be owned by its existing management team, which includes Crosby, one of its two co-founders.

    In an interview, Crosby said the deal would fuel Compass’s next stage of growth and see it “scale by an order of magnitude from where we’re at right now.”

    Access to capital will enable the data center developer to go after larger, multi-phase projects and buy real estate in markets where customers need data center capacity. The idea is to give customers the flexibility of going into a market with a relatively small initial footprint but growing that footprint over time to match their demand growth patterns.

    The developer is already eyeing several markets where it can buy land for data center construction. A new property in the Dallas market is already under contract, Crosby said.

    Compass is not after the types of deals we’ve been seeing in Northern Virginia, for example, where a hyperscale internet or cloud service provider takes 10MW at a time from a wholesale developer. There’s been a boom for data center companies willing to build at that scale, but Compass is not competing for those deals.

    Read more: Cloud Fuels Unprecedented Data Center Boom in Northern Virginia

    “We’re not trying to go out and do a mega-data center on day one,” Crosby said. “That’s not our business, but if you’ll get to 20 or 30 [MW] over time, that’s well within our solution set.”

    A growing market opportunity for a company like Compass is to build smaller-capacity data centers in so-called “edge markets,” heavily populated areas that don’t already have much internet infrastructure. Cloud providers and companies with video-streaming, augmented-reality, and Internet of Things applications need servers in those markets to improve performance for users who live there.

    It’s hard to tell in advance how fast demand for data center capacity will grow in a market like that, however, so companies don’t like to risk investing in a large build-out right away. Being able to expand capacity one bite-size chunk at a time while knowing there’s enough land to reach significant scale in the future is an attractive proposition for customers who need infrastructure in such places.

    7:05p
    Who’s Responsible for Airflow Management in Colocation Data Centers?

    Lars Strong is Senior Engineer and Company Science Officer for Upsite Technologies.

    Who is responsible for airflow management in colocation data centers? It wasn’t that long ago that, no matter who you asked, the answer was always, “Not me.” Tenants would say, “Why should I worry about airflow management? I won’t see any savings.” Conversely, owner/landlords would say that they could not make unreasonable demands on customers or prospects and risk losing that tenant to a competitor. Rather than asking who is responsible, perhaps the more appropriate question might be: who should be responsible?

    While the litany of “not my job” responses has somewhat subsided, the stakes are reaching a scale where shrugs and finger-pointing are no longer acceptable. After all, the colocation data center market is forecast to continue growing at approximately 15 percent per year for at least the next four to five years and is currently around $30 billion. A challenge to establishing airflow management best practices is the lack of homogeneity in the industry: market share leaders Equinix and Digital Realty enjoy only 8.4 percent and 5.6 percent market shares, respectively, and more than 70 percent of total market revenue is accounted for by local colocation providers with one to three facilities and less than $500 million in revenue.

    Demand and Cost

    The demand for colocation space and services is primarily driven by cost, and that will eventually push both tenants and owners to be more concerned about data center airflow management. With cost, either directly or indirectly, being such an important part of the colocation value proposition, it is only natural that cost becomes an important competitive factor for providers, which is why PUE and airflow management are critical to the overall conversation.

    PUE and the Impact of Airflow Management

    Despite a few outliers, PUE remains a very valuable predictor of operating costs and thereby another metric for the competitive landscape of colocation data centers. Table 1 illustrates an example of the savings that could be expected for various PUE reductions for a data center with 1 MW of IT load paying $0.10 per kWh for electricity.

    Airflow management contributes to most of these PUE reductions and the resultant energy savings; where all else is equal, airflow management is responsible for 100% of these savings. For a PUE reduction from 2.0 to 1.75, the annual savings will be $219,000. Such substantial savings often mean a simple payback of one year or less on the cost of airflow management solutions.
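    As a quick sanity check on that figure, the arithmetic is simply the avoided facility load (IT load multiplied by the PUE reduction) times the hours in a year and the electricity rate. A minimal sketch in Python, using the stated assumptions of a 1 MW IT load and $0.10 per kWh:

        # Annual savings from a PUE reduction at constant IT load.
        # Inputs follow the example above: 1 MW IT load, $0.10/kWh, PUE 2.00 -> 1.75.
        HOURS_PER_YEAR = 8760

        def annual_savings(it_load_kw: float, pue_before: float, pue_after: float,
                           rate_per_kwh: float) -> float:
            """Energy-cost savings when facility power falls from it_load*pue_before
            to it_load*pue_after while the IT load itself stays constant."""
            saved_kw = it_load_kw * (pue_before - pue_after)
            return saved_kw * HOURS_PER_YEAR * rate_per_kwh

        print(annual_savings(1000, 2.00, 1.75, 0.10))  # 219000.0, matching the figure above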

    In cases where PUE is reduced by installing motion-sensing lighting or more efficient power distribution, those are much smaller factors (see Figure 1), so the savings are far less significant than the mechanical-plant savings that result from airflow management improvements. Some of the savings from airflow management improvements are easily identified, such as fan energy reductions due to dramatic reductions in fan speeds and bypass airflow rates.
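    Those fan savings are usually explained through the fan affinity laws, under which fan power scales roughly with the cube of fan speed. The sketch below is an idealized illustration of that relationship and ignores motor and VFD efficiency curves.

        # Idealized fan affinity law: power varies with the cube of speed.
        # Real-world savings depend on motor and VFD efficiency, not modeled here.

        def fan_power_fraction(speed_fraction: float) -> float:
            """Fraction of full-speed fan power drawn at a given speed fraction."""
            return speed_fraction ** 3

        for speed in (1.0, 0.9, 0.8, 0.7):
            print(f"{speed:.0%} speed -> {fan_power_fraction(speed):.0%} of full fan power")
        # 80% speed draws roughly half the fan power; 70% draws about a third.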

    Table 1: Estimated Energy Savings Resulting from PUE Improvements

    In addition, good airflow management reduces the differential between cooling supply temperature and maximum server inlet temperature. This provides access to the sources of major energy savings in the data center mechanical plant. For example, if the maximum allowable server inlet temperature is 80˚F (26.7˚C), that specification can be met with good airflow management and 75˚-78˚F (23.9˚-25.6˚C) supply air rather than 55˚-60˚F (12.8˚-15.6˚C) supply air. The net effect of this adjustment will be a 30%-50% reduction in chiller energy, or even greater chiller energy savings from not operating the chiller at all, thanks to access to 25%-80% more free cooling hours. Therein lies the connection from airflow management to lower PUE, to lower energy costs, to lower total operating costs. But who is responsible for navigating this path?
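    One way to see the free-cooling effect is to count the hours in a year when outdoor air is cold enough to produce the supply temperature directly. The sketch below uses a synthetic hourly temperature series and a hypothetical 5˚F heat-exchanger approach; a real assessment would substitute actual weather data for the site.

        # Sketch: economizer ("free cooling") hours at two supply-air setpoints.
        # The temperature series is synthetic; use real hourly weather data (e.g. TMY)
        # for an actual site. The 5 F "approach" is a hypothetical coil/exchanger margin.
        import math
        import random

        random.seed(0)
        hourly_temps_f = [
            55 + 22 * math.sin(2 * math.pi * h / 8760) + random.uniform(-8, 8)
            for h in range(8760)
        ]

        def free_cooling_hours(supply_setpoint_f: float, approach_f: float = 5.0) -> int:
            """Hours when outdoor air can deliver the setpoint, allowing for the approach."""
            return sum(t <= supply_setpoint_f - approach_f for t in hourly_temps_f)

        for setpoint in (60, 75):
            print(f"{setpoint} F supply: {free_cooling_hours(setpoint)} economizer hours/yr")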

    Who Should be Responsible?

    Actually, both tenants and landlords have skin in this game and exert different variations of control over airflow management, primarily based on the billing model.

    If billing is by the rack and circuit, it behooves the tenant to shop for space where airflow management is strictly enforced by the landlord so that cabinet densities can be maximized. Good airflow management will allow for full cabinets of either 2U or blade servers. Reasonable airflow management (hot aisle/cold aisle configuration and perhaps some discipline about blanking panels and floor tile grommets) will support rack densities around 6kW.

    By-kW billing can either be metered or a power allocation. If it’s a power allocation billing, the landlord’s interests are obviously served by strictly enforcing airflow management rules to increase profits from billing for unused energy. If it’s metered kW billing with a variable PUE factor, then it is clearly in the tenant’s best interest to execute airflow management best practices, but it is also in the landlord’s interest to enforce airflow management best practices in multi-tenant facilities to avoid losing customers who don’t fancy paying for others’ bad habits.
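    A simplified sketch of the metered-kW-with-PUE-factor case makes the incentive concrete: the tenant’s power bill is its metered IT energy grossed up by the facility PUE, so every improvement in PUE flows straight to the tenant. The loads and rates below are hypothetical, and real colocation contracts add space, demand, and fixed charges not modeled here.

        # Simplified metered-kW billing with a variable PUE factor.
        # Tenant load and electricity rate are hypothetical illustration values.
        HOURS_PER_MONTH = 730

        def monthly_power_bill(it_kw: float, pue: float, rate_per_kwh: float) -> float:
            """Tenant pays for metered IT energy multiplied by the facility PUE."""
            return it_kw * HOURS_PER_MONTH * pue * rate_per_kwh

        for pue in (1.9, 1.6, 1.4):
            bill = monthly_power_bill(it_kw=200, pue=pue, rate_per_kwh=0.10)
            print(f"PUE {pue}: ${bill:,.0f} per month for a 200 kW tenant")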

    Figure 1: Example Energy Distribution at 1.92 PUE

    PUE billing will be most practical in single-tenant buildings and can take several different forms, such as contracted PUE, actual PUE, or a blend of actual and contracted PUE. Where a set PUE maximum threshold has been agreed to between tenant and landlord, it is in the landlord’s best interest to enforce the airflow management practices required to support the PUE agreement.  When billing is based on actual PUE, all the responsibility, capability, and motivation reside with the tenant: the better they do airflow management, the less they pay. Where a blended PUE billing arrangement is in place, it is in the landlord’s best interest to enforce airflow management best-practice rules that will prevent the tenant from blowing usage past the agreed-upon PUE ceiling, while it is in the tenant’s best interest to exceed those best practices to drive actual billing below the contracted PUE cap.

    Both Sides Will Benefit

    Good airflow management in colocation data centers is beneficial to both tenants and landlords, either directly through lower costs and/or increased profit margins, or indirectly through an improved sales value proposition or more cost-effective scalability. Depending on the billing model, the landlord’s role may range from strict enforcement to collaborative consultation. John Sasser, VP of Operations at Sabey Data Center Properties, which has over 3 million square feet in its portfolio, has found: “Some customers come in with a better understanding than others, so facility operators must be prepared to explain the fundamentals of airflow management and the benefits of containment. (Today’s) customers seem to require less explanation.” In simplest terms, in single-tenant facilities airflow management is the tenant’s responsibility, and in multi-tenant facilities it is the landlord’s enforcement responsibility. Conversely, in single-tenant facilities the landlord’s responsibility is to design and maintain a facility that responds efficiently to optimized airflow management practices, and in multi-tenant facilities it is the tenant’s responsibility to cooperate and be a good neighbor.

    Fundamentally, all landlords need to be well educated in airflow management best practices and the science behind them to engage with knowledgeable tenants, or provide guidance to tenants who are less educated on the topic. At the end of the day and regardless of the motive, it is essential for both tenants and landlords to be discussing airflow management best practices to collaboratively maximize the benefits for both parties.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    8:32p
    US May Be Probing Other Targets in Former Autonomy CFO’s Case

    (Bloomberg) — U.S. prosecutors who charged former Autonomy Corp. Chief Financial Officer Sushovan Hussain disclosed for the first time they may be investigating other people and other possible offenses related to Hewlett-Packard’s money-losing acquisition.

    Hussain is scheduled to make his first U.S. court appearance Thursday after arriving from England to face charges he schemed to inflate the price of Autonomy’s $11 billion takeover by Hewlett-Packard in 2011. In a court filing late Tuesday, the U.S. acknowledged its investigation extends beyond Hussain.

    The U.S. “continues to investigate the involvement of other persons and the possibility of other offenses arising from the facts and circumstances of this case,” lawyers wrote in a document signed by Hussain’s lawyer as well as prosecutors. Other possible targets weren’t named in the filing.

    Autonomy co-founder Michael Lynch, along with Hussain, faces a lawsuit filed by Hewlett-Packard in London seeking $5.1 billion. The Palo Alto, California-based company accuses them of making false claims about Autonomy’s performance and financial condition to boost the company’s value. Hussain was charged Nov. 10, five years after Hewlett-Packard admitted that its acquisition of Autonomy was a bust.

    William Portanova, a former federal prosecutor, said previously that it’s unlikely Hussain would be the only participant in conference and video calls described in the indictment in San Francisco federal court.

    David Satterfield, a spokesman representing Hussain’s lawyer John Keker, had no immediate comment on the filing. Abraham Simmons, a spokesman for the U.S. Attorney’s Office in San Francisco, didn’t immediately respond to an e-mail after regular business hours Tuesday seeking comment on it.

    Hussain’s Defense

    Keker has said Hussain was innocent of wrongdoing and that it was a “shame” the U.S. Justice Department was doing Hewlett-Packard’s bidding by charging him. He has also told U.S. District Judge Charles Breyer that his client is eager to go to trial.

    According to the Tuesday filing, the government on Thursday will turn over to Hussain witness interviews done by the Federal Bureau of Investigation as well as “other third party reports of witness interviews.”

    Keker and government prosecutors agreed in the filing that while Hussain is prohibited from sharing that information with subjects of the ongoing investigation, he may ask witnesses or their lawyers about the interviews.

    This case is U.S. v. Hussain, 16-cr-00462, U.S. District Court, Northern District of California (San Francisco).

