Data Center Knowledge | News and analysis for the data center industry
 

Monday, August 29th, 2016

    12:00p
    One Data Center Standard to Rule Them All?

    Mehdi Paryavi says people from every walk of data center life he’s met over the years call him for advice, ranging from operations staff to senior-level execs. “I have chiller technicians call me that know me from 15 years ago,” he says, adding that he’s as likely to get a call from a facilities manager as from someone configuring core switches and routers.

    Paryavi says he started his career as a management and information systems engineer, became an IT manager, then learned about things like power and cooling, and eventually became a businessman. He declines to name the companies he worked for in those roles, however. “Honestly, I don’t want to get into that stuff,” he says.

    He also declines to name customers of his data center consultancy, TechXact, which he co-founded in 2002. “Almost anybody you think about has been a customer of ours at some point,” he says. “I don’t want to name any customers. We have a boutique data center services company, and we don’t disclose references.” More often than not, companies like to keep their data center projects secret, and it’s common for contractors they hire for those projects to be bound by non-disclosure agreements.

    Several years ago, TechXact founded an organization called International Data Center Authority to develop a standard that would give companies a way to assess the performance of their IT infrastructure, starting with power and cooling infrastructure and ending with software applications the infrastructure is built to support.

    The technical committee IDCA put together to develop the standard consists of senior engineering and operations staff from several well-known companies, including eBay, LinkedIn, and AIG, an architecture branch chief who oversees cloud and hosting for US Courts, and TechXact’s own employees, among several others. IDCA’s self-imposed deadline to deliver the standard is about four and a half months away.

    Taking Aim at Tiers

    On its website, IDCA attacks in no uncertain terms the Uptime Institute’s four-tier rating system for defining reliability of data center infrastructure, calling it “outdated.” In interviews, Paryavi, who chairs the IDCA board, and Steve Hambruch, a data center architect at eBay who chairs its standards committee, explain that the Tier system is one of too many standards used in the data center industry, each of them addressing an individual component of the ecosystem without regard for the ecosystem as a whole.

    Uptime, now owned by The 451 Group, developed the tier system and reserves the right to be the only organization that can certify and assign tier levels to facilities. A data center operator can pay hundreds of thousands of dollars and spend weeks on the certification process, which involves working with a team of Uptime experts onsite. Uptime has certified more than 700 data centers around the world; certification of design documents and certification of constructed facilities are separate certifications.

    Read more: Explaining the Uptime Institute’s Tier Classification System

    The problem, according to IDCA, is that the tier system focuses squarely on reliability of facilities infrastructure. “A data center is not a cooling center; it’s not a power center; it’s not a bunch of walls,” Paryavi says. It’s a good standard for what it is, he says – “Uptime has done a good job for their own space” – but it’s not enough if you want to truly rate how well your data center will do the job it was built to do.

    “It’s not our mission to replace existing, well-established standards,” Hambruch says. “It doesn’t mean we don’t recognize where gaps exist.” It’s not enough to rate the level of redundancy in your power and cooling infrastructure using a tier rating or determine the efficiency of your facilities infrastructure using Power Usage Effectiveness (PUE), he explains. “That tells me nothing about whether or not I’m getting the maximum possible compute as a result of the power and cooling I’m consuming.”
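    For reference, PUE is simply the ratio of total facility power to the power delivered to IT equipment. Here is a minimal sketch of Hambruch's point, with the "useful work" figures invented purely for illustration:

    ```python
    # Minimal sketch: PUE captures facility overhead, not useful output.
    # The "useful work" numbers below are hypothetical, for illustration only.

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    # Two sites with an identical, respectable PUE of 1.2...
    site_a = pue(total_facility_kw=1200, it_equipment_kw=1000)
    site_b = pue(total_facility_kw=1200, it_equipment_kw=1000)

    # ...can still deliver very different compute per watt, e.g. if
    # site B's servers sit at 10 percent utilization.
    useful_work_a = 1000 * 0.80  # hypothetical compute units delivered
    useful_work_b = 1000 * 0.10

    print(site_a, site_b)                # 1.2 1.2
    print(useful_work_a, useful_work_b)  # 800.0 100.0
    ```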

    He offers an example: You can have two Tier II-rated data centers – which is the second-lowest availability rating – and a failover system between them, which will provide a higher level of availability than each of them individually.
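    A back-of-the-envelope version of that example, assuming independent failures, instant failover, and Uptime's oft-cited expected availability of 99.741 percent for a Tier II site:

    ```python
    # Rough sketch of Hambruch's example. Assumes independent failures and
    # instant failover, both of which are simplifications.
    tier2_availability = 0.99741  # Uptime's commonly cited Tier II figure

    # Probability that both sites are down at the same time:
    both_down = (1 - tier2_availability) ** 2

    paired_availability = 1 - both_down
    print(f"{paired_availability:.5%}")  # ~99.99933%, above Tier IV's cited 99.995%
    ```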

    Addressing the entire data center ecosystem was never Uptime’s aim, as there is still a big need to address just the underlying infrastructure, Julian Kudritzki, chief operating officer of the Uptime Institute, says. “All of our standards and certifications were never meant to address absolutely everything,” he says. “We focused on what we thought the need was.”

    Just recently he was involved in a data center project that had the same problems as the first project Uptime ever tier-certified: different teams were working in isolation from each other, and there was no coordination even on fundamental things like the order in which design, construction, and commissioning should happen.

    Certification prevents things like that from happening, Kudritzki explains. “It addresses that human foible factor,” he says. “If someone thinks that that’s still not needed, then that person doesn’t understand our business.”

    In his opinion, there really isn’t a need for an all-encompassing standard like the one IDCA is working on. “I don’t think that rolling it all up into one giant hairball is the way to do it. The industry needs simplicity, not further complexity and splintering.”

    The Infinity Paradigm

    To guide its standard development process, IDCA has created a framework it calls Infinity Paradigm. It is represented as a seven-layer pyramid whose bottom layer is Topology (individual data centers and how they interact) and whose tip is Application (the set of software services the company needs). The middle layers cover data center location, facilities infrastructure, physical IT infrastructure, compute resources in the abstract form in which they are presented to applications, and platform, the specific methodology for delivering applications.
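    A quick sketch of the stack as described, bottom to top; the layer names here paraphrase the description above rather than IDCA's normative terminology:

    ```python
    # The seven layers of the Infinity Paradigm pyramid as described above,
    # listed bottom to top. Names are paraphrased for illustration.
    INFINITY_PARADIGM_LAYERS = [
        "Topology",        # individual data centers and how they interact
        "Site",            # data center location
        "Infrastructure",  # facilities: power, cooling, the physical plant
        "IT",              # physical IT infrastructure
        "Compute",         # abstract resources as presented to applications
        "Platform",        # the methodology for delivering applications
        "Application",     # the software services the company needs
    ]
    ```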

    IDCA’s goal is to offer companies guidance in defining the output of this entire stack for themselves (that unit of work will differ from industry to industry, and possibly from company to company within each industry) and in using the standard to fine-tune the stack to maximize that output.

    “When the organization can identify what their work unit is, we can begin to measure the efficiency of the overall IT operation – including the very low-level KPIs at the site layer and so forth – against that work unit,” Hambruch says.
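    A hypothetical illustration of that idea; the metric, function, and numbers below are invented for this sketch, not drawn from IDCA's standard:

    ```python
    # Hypothetical sketch of the "work unit" idea: once an organization
    # defines its unit of work (trades, page views, claims processed...),
    # end-to-end efficiency can be measured against it.

    def work_units_per_kwh(work_units_delivered: float,
                           total_facility_kwh: float) -> float:
        """Useful output per unit of energy consumed by the whole stack,
        not just the facility overhead that PUE captures."""
        return work_units_delivered / total_facility_kwh

    # e.g. an insurer processing 1.2M claims on 350,000 kWh in a month:
    print(work_units_per_kwh(1_200_000, 350_000))  # ~3.43 claims per kWh
    ```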

    Too Many Standards, or Not Enough?

    TechXact is already advertising data center audit and training services based on IDCA’s future standard. But unlike Uptime, which doesn’t allow anyone other than its own staff to certify data centers using its rating system, IDCA isn’t planning to retain the exclusive right to audit and issue certifications based on its standard, according to Paryavi. “We’re not keeping it exclusive to us,” he says, adding that he wouldn’t mind if an Uptime consultant learned the framework and the standard and went on to audit data centers on their own.

    The reason he believes IDCA has a shot at making the data center industry adopt its standard is that he sees a need for it. “We’re addressing everybody’s pain point,” he says. “I never met a person who said [they were] happy with [the standards] they have.”

    See also: Data Center Design: Which Standards to Follow?

    IDCA’s technical committee members are volunteers, doing it out of passion, Paryavi says. “They care about the community. We are not another bunch of consultants sitting in a room making a recipe for everybody else in the world.”

    Regardless of whether you think IDCA is actually capable of creating a universal standard that addresses every layer of the data center, from chillers to software, along with every inter-layer dependency, and then convincing the industry to adopt it, the confusion created by the multitude of standards used in the data center industry is a problem people in the industry often complain about.

    Depending on who you talk to, there are either too many standards or not enough. Whether it’s possible to create a single standard that effectively and elegantly solves this problem by covering the entire ecosystem IDCA’s framework describes is a different question. There is a countdown clock on the organization’s website. Once it gets down to ‘zero,’ we’ll learn what they propose as the answer.

    4:34p
    VMware’s New Cloud Plan: Sell Stuff for Rival Clouds

    (Bloomberg) — VMware unveiled new products in a bid to eke out greater relevance for the company in internet-based cloud computing.

    On Monday, the company announced Cloud Foundation, which combines software for storage, networking, and virtualization into one package, along with the ability to use that product as a service hosted in IBM’s cloud. The company also previewed new Cross-Cloud subscription services that let customers manage and protect applications hosted in clouds from IBM, as well as from market leader Amazon and from Microsoft.

    VMware is shifting its cloud strategy after gaining little traction and losing key executives. It is focusing more on selling products that work with the existing leaders rather than trying to establish itself as an alternative to the Amazons of the world. As more companies run applications on rented servers over the internet rather than in their own data centers, providing services that link the different technologies customers use is a growing opportunity for VMware.

    “CIOs and IT professionals now need to manage a multiple-device, multiple-application and multiple-cloud world, and they don’t have the tools to do that,” said VMware CEO Pat Gelsinger in an interview. “We can uniquely partner with them to give them the ability to run, manage and secure an application across multiple clouds and deliver it on any device.” Gelsinger spoke Monday at the company’s annual VMworld conference in Las Vegas, where it unveiled Cloud Foundation and previewed its Cross-Cloud services.

    The shift comes after a year of upheaval for VMware. The company faced investor concern in the wake of Dell’s announcement in October that it was buying VMware parent EMC. Over in the cloud business, VMware and EMC backed off a plan to jointly own cloud infrastructure provider Virtustream, instead shifting the asset and its losses to EMC. VMware cloud executives Bill Fathers and Simone Brunozzi left, and the company faced poor adoption of its vCloud Air products, which competed with Amazon’s public cloud offerings, said Abhey Lamba, an analyst at Mizuho Securities USA Inc.

    “Now they are taking this approach of becoming the management layer in clouds,” said Lamba, who has a “neutral” rating on VMware shares. “That’s a really tough battle, and I’m not sure VMware has any strategic advantage there over the next guy.” The new services aren’t even on the market yet, so it’s hard to say how they will do until customers can see how they perform, he said.

    VMware created virtualization software that lets companies cram more workloads onto servers, and it dominated that market by building a product that could work across many different operating systems and types of hardware. That market is maturing, so the company is looking for other opportunities but has seen its cloud efforts lag behind established players. The new approach returns VMware to its heritage as a company that lets customers operate across different technologies.

    It’s key for VMware to carve out a lucrative segment of cloud sales as the internet-based approach becomes more pervasive. In his speech, Gelsinger predicted that 50 percent of corporate IT workloads will be in the cloud by 2021.

    While the Cross-Cloud services will address multiple public clouds, Cloud Foundation will initially work with IBM only, though Gelsinger said in the interview that VMware will add other cloud providers in the future. Companies can also buy the software installed on servers from companies like Dell and Hewlett Packard Enterprise and place those in their own data centers.

    Gelsinger was expected to be joined on stage at VMworld by Dell founder Michael Dell and by hotel company Marriott International, which is trying out Cloud Foundation.  The product, which targets Nutanix’s main offerings, will be generally available this quarter.

    5:04p
    Hyperconverged Infrastructure Firm Nutanix Buys Two Startups

    Nutanix, the hottest startup in the emerging hyperconverged infrastructure space, announced two acquisitions Monday. The deals may further delay its IPO, which is expected to be one of the biggest tech IPOs of the year.

    One is PernixData, a storage acceleration and storage management startup whose acquisition by Nutanix has been agreed on but not yet closed. The second deal, the acquisition of Calm.io, has closed. Calm.io is a cloud automation and management company whose customers include the National Stock Exchange of India, which uses Calm.io’s software to manage more than 50,000 servers globally.

    See also: Nutanix Certifies Cisco UCS Hardware

    As it integrates PernixData with its hyperconverged infrastructure environment, Nutanix plans to improve performance of storage environments, accelerating them by introducing storage-class memory and advanced interconnects. PernixData technology will also help Nutanix enable its App Mobility Fabric to run any application in any environment.

    More on this acquisition on our sister site Talkin’ Cloud

    Hyperconverged infrastructure is one of several types of pre-integrated full-stack data center infrastructure solutions. The concept has been around for only three to four years, but Gartner expects it to become a $5 billion market by 2019 (from ‘zero’ in 2012).

    Both hyperconverged infrastructure and converged infrastructure collapse compute, storage, and networking into a single SKU with a unified management layer. Hyperconverged infrastructure is different because it adds a sophisticated software-defined storage layer and doesn’t focus as much on networking, emphasizing data control and management.
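    A rough side-by-side of that distinction; the categories below summarize the paragraph above rather than any vendor's formal taxonomy:

    ```python
    # Converged vs. hyperconverged at a glance, as summarized in this
    # article; an illustration, not a formal taxonomy.
    converged = {
        "compute":    "integrated into the SKU",
        "storage":    "dedicated arrays bundled into the SKU",
        "networking": "first-class, pre-integrated",
        "management": "unified layer",
    }
    hyperconverged = {
        "compute":    "integrated into the SKU",
        "storage":    "software-defined, pooled across nodes",
        "networking": "de-emphasized relative to data services",
        "management": "unified layer, stressing data control and management",
    }
    ```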

    Read more: Why Hyperconverged Infrastructure is so Hot

    6:09p
    Can Data Centers Really Revive Rural Towns?

    That’s the question a recent article in the New York Times asks, and its conclusion is negative. While local officials often tout these projects as a way to boost rural economies that have been suffering from the loss of manufacturing jobs, nobody can really claim that a data center, no matter how large, offers anywhere close to the number of jobs a textile or furniture factory does.

    There are other economic benefits, such as taxes on the enormous electricity and equipment purchases companies make for these facilities. But those benefits are often diminished by the tax breaks local and state governments offer companies to lure big data center construction projects.

    Yes, tax breaks expire over time. It’s also not uncommon for one major data center build to put a rural community on the map and attract construction by other companies. Prineville, Oregon, is a prime example: a data center built by Facebook was followed by one built by Apple.

    These campuses also expand over time. A company like Google may spend several hundred million dollars to build the first data center in a town, spend another several hundred million on an expansion a few years later, and so on. Every phase creates lots of temporary construction and engineering jobs.

    Those long-term benefits, however, don’t seem to be persuasive enough to overshadow the fact that a data center requires very few permanent full-time workers for day-to-day operation. Any economic development initiative is scrutinized by the public first and foremost in terms of how many jobs it will result in, and data centers don’t look very attractive if viewed through the jobs lens alone.

    Read the New York Times feature here

    7:01p
    Is Dell and EMC a Done Deal?
    Brought to you by MSPmentor

    The usual buzz is building to a crescendo for VMworld, VMware’s annual homage to virtualization, which kicks off Sunday at the Mandalay Bay Hotel & Convention Center.

    Perhaps not as widely known, another technology get-together will also be happening just up the Las Vegas strip, where Dell Peak Performance 2016 is being held at the Aria Resort & Casino.

    Hanging over both events is a looming announcement – in the making for the past 10 months – that Dell has completed its $67 billion acquisition of storage giant EMC.

    At the time the purchase was announced, EMC held an 81 percent stake in VMware.

    See also: VMware’s New Cloud Plan: Sell Stuff for Rival Clouds

    “Dell Chief Integration Officer Rory Read has said the companies were prepared to close the transaction – announced last October – within two weeks of receiving anti-trust approval from Chinese regulators,” CRN reported last week.

    The New York Post was among several news outlets reporting Aug. 19 that Chinese regulators have, in fact, granted that critical approval.

    Further fueling the speculation, United Kingdom newspaper The Register reported today that signage at EMC’s London headquarters was being taken down.

    “Corrective brand surgery seems to be taking place at EMC’s London HQ already – it may be that Michael Dell is so keen to get his hands on the storage giant that he can’t wait for the ink to dry on the deal,” the article states. “We can report the EMC logo once emblazoned on the huge clock tower next to its main office was covered in a black plastic sheet with workmen on a crane, er, doing stuff to it.”

    See also: What About Dell’s Own Huge Data Center Portfolio?

    The VMware and Dell conferences in Las Vegas, both scheduled to run through Wednesday, could offer an ideal opportunity for the announcement.

    The eyes of the technology world will be focused on the VMworld trade show for state-of-the-art virtualization tools and strategies.

    At the same time, Dell will be playing host to partners of its security subsidiaries, SonicWALL and One Identity, for a four-day education pow wow.

    Many observers expect formal closing of the deal will unleash synergies among all three entities.

    “With the purchase of EMC, Dell has also gained the grandfather of virtualization in VMware,” one expert opined in an article for TechTarget. “When you combine Dell’s server platforms with Wyse thin clients and EMC storage under a VMware software umbrella, you start to see a complete data center picture.”

    This first ran at http://mspmentor.net/msp-mentor/dell-and-emc-done-deal

    7:24p
    IoT Could Drive Micro Data Centers for Telcos

    The Internet of Things is not, despite what many have hypothesized, trillions of non-sentient devices with unique IPv6 addresses logging onto the Web. And it is this fact — that the Internet of Things would be a more clever architecture than a colossal hub-and-spoke topology — that testifies to its power to change the landscape of data centers.

    IoT could, if it continues to develop the way it has, draw more compute, storage, and bandwidth toward the edge — away from centralized facilities and closer to where the various streams of data are being gathered. Carsten Baumann, who manages North American operations for data center gear manufacturer Schneider Electric, believes this could shift demand for resources away from “the cloud” as we have come to know it toward something more geographically distributed.

    Baumann will moderate a panel discussion on the broader topic of the Internet of Things’ impact on data center architecture, during a September 14 session at the Data Center World 2016 conference in New Orleans.

    “If we look at the Gartner hype cycle of emerging technologies, we are apparently right at the peak [for IoT],” Baumann told us in a recent interview.  “The realism leads to the ‘trough of disillusionment,’ which we will enter.  If the Internet of Things is driving all this data, I believe we will see an increase in the need and demand for edge computing — edge data centers.”

    Traditional data center providers such as Digital Realty and Equinix, in Baumann’s view, could conceivably deploy larger numbers of smaller facilities. Or, alternatively, a completely different class of data center provider could emerge. Consider, if you will, that all those trillions of sensors will not be connected to their respective hubs by wire. Nor, most likely, will their hubs be wired to their local IP hosts.

    “For the Verizons, the AT&Ts, and the Sprints… to me, this might be a very interesting, compelling argument: We’ve got to build edge data centers at every cell tower,” he explained. “Therefore we can aggregate all this information from all these sensors, and then we either apply software analytics at the local level, or we can aggregate information and ship it off to the core data center, where it will then be processed and different analytics will be applied.”
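    A hypothetical sketch of the two paths Baumann describes; all names and thresholds here are invented for illustration:

    ```python
    # Hypothetical sketch of the cell-tower pattern Baumann describes:
    # analyze small batches locally at the edge, otherwise reduce them to
    # aggregates and ship the summary to the core data center.
    from statistics import mean

    def handle_sensor_batch(readings: list[float],
                            local_limit: int = 10_000) -> dict:
        """Route a batch of sensor readings at an edge micro data center."""
        if len(readings) < local_limit:
            # Small batch: run analytics at the tower itself.
            return {"where": "edge", "alert": max(readings) > 100.0}
        # Large batch: aggregate, then forward the summary to the core.
        summary = {"count": len(readings),
                   "mean": mean(readings),
                   "max": max(readings)}
        return {"where": "core", "payload": summary}
    ```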

    Baumann is discussing the very real possibility of “IoT-as-a-Service,” and a way for carriers and communications companies to capitalize on this service before someone else does.  Why should telcos be content to simply provide pipelines for their customers’ data and content?  Perhaps they could be refining the quality of that content, by means of analytics and data extraction tools, so that they can present it to customers in a pre-processed form.

    If there’s a market to be built around the data that sensors collect, then it may be telcos that are in the best position to monetize it.

    “We will see a proliferation of smaller, higher quantity, edge data center infrastructure,” he projected.  “Could it be one rack?  Two racks?  Five?  I don’t know, it’s going to be small.  It’s not going to be a data warehouse like Google, Facebook, and others have.”

    Baumann conceded he may not be the proper person to be making broad economic speculations.  But he notes that public utilities operators, including telcos and electric providers, already own and operate networks that gather data on vast arrays of public infrastructure, using cellular technology to collect it into central databases.  Data that records the variations in foot traffic to and from airport gates, for example, is already being collected.  It may yet be determined, he told us, whether the owners of the sensors in these instances, or the organizations that make use of the data, are the actual drivers of business transactions.

    Whoever does drive the business, however, may determine to what extent the edges of the data communications network are built up, and how soon.  If the incentive is there, then the world’s data center market as a whole could change dramatically.  It’s here where Baumann injects a phrase we’ve heard from Schneider Electric a few times before.

    So while the individual IoT applications may not have a direct impact on physical data center infrastructure design, Baumann told us, “yes, the requirement of creating more compute power at the edge — that will shift, compared to how we’ve seen it in the past.  Maybe an Amazon, a Google, a Microsoft, a Compass [Datacenters], or a Digital Realty — maybe they will go into the market of building micro data centers, geographically dispersed at the edge, in order to accommodate this new data aggregation, and then applying sensors to it.  Maybe there are companies that can create, out of all this noise, value.”

    Baumann’s panel, which will include high-level experts from Compass Datacenters, Siemens Building Technologies, Legrand, and Siemon, takes place at 10:50 a.m. Central Time on Wednesday, September 14, in Room R206 at Data Center World, held at the Morial Convention Center in downtown New Orleans. Data Center World is presented by AFCOM, the association for educating data center and IT infrastructure professionals worldwide. (Baumann is the president of AFCOM’s Southern California chapter.)

    Register for Data Center World today!

