Data Center Knowledge | News and analysis for the data center industry

Thursday, June 8th, 2017

    12:00p
    Switch Backs Away from Uptime’s Tiers, Pushes Own Data Center Standard

    Switch, the Las Vegas-based data center provider that’s been one of the more vocal users of Uptime Institute’s four-tier rating system for data center reliability, will no longer pursue certification by Uptime for the facilities it builds.

    The company is proposing a new standard it calls Tier 5, which includes data center design elements other rating systems cover, as well as elements it says they lack. The “standard” is proprietary, Switch said in a statement, but the company is planning to launch a non-profit organization that will control Tier 5 and the way data center operators use it, Adam Kramer, Switch’s executive VP, said in an interview with Data Center Knowledge.

    The non-profit body, which Switch expects to roll out next year, will also “vigorously defend the certification,” he said. This, according to him, addresses another problem with Uptime’s rating system – its misrepresentation by data center operators for marketing purposes, which in Kramer’s opinion is not policed aggressively enough by Uptime, a subsidiary of The 451 Group.

    Finally, having a non-profit body administer the standard would ensure impartiality, he said. While Uptime is vendor-neutral, it is a commercial entity that provides tier certification as a professional service, its core revenue source. Uptime’s parent company pursues the same customers Uptime sells these services to, raising concerns of “conflict of interest and independence questions,” Switch said in a statement.

    Uptime Sticks to Its Guns

    Commenting on these concerns, Matt Stansberry, a senior director at Uptime, said impartiality is a big part of the value the organization provides. “We collect money for our services, but we are an unbiased, vendor-neutral organization that specifically makes our living on fairness or integrity,” he said in an interview with Data Center Knowledge.

    Addressing the question of misuse of the rating system’s nomenclature by companies, Stansberry said, “I understand there are folks in the industry who misrepresent the tier certifications. We do enforce the tier policy and I think that’s one of the things, you’ll find, that we do pretty well.”

    The biggest step Uptime has taken to address one type of misuse was to stop certifying design documents for colocation providers in North America before they build the facilities. The practice of certifying design docs separately from finished facilities led to some companies certifying design docs and promoting the certification without ever certifying the actual buildings.

    Uptime continues to certify design docs for colocation data centers outside North America, but those certifications now expire after two years.

    Original Tier Rating Authors

    Switch has been collaborating with some of the original authors of Uptime’s tier system on developing Tier 5, including Uptime’s former CTO, Vince Renaud, and Hank Seader, its former managing principal. They will also be involved in the future standards body, called Data Center Standards Foundation, the company said.

    In a statement, Seader said the Uptime rating system was designed for enterprise data centers, and that options available to the industry have since expanded, which means the system has to adjust. He said he expected the system to evolve over time, but that has not happened. “The innovation has stagnated when it comes to evolving the facilities standard,” Seader said.

    Small Portion of Switch’s Footprint Certified

    While Switch has been heavily promoting its existing Tier IV Gold certifications (Uptime’s highest-level reliability rating), the portion of its total footprint that’s actually certified is relatively small. Only the opening sectors of two of its SuperNAP facilities (Las Vegas 8 and Las Vegas 9) have been certified.

    The two data centers are modular, Kramer said, so the design of the initial sectors is replicated as the company adds capacity in the buildings.

    These buildings are massive, together totaling 900,000 square feet, according to Kramer. Switch says on its website that its overall data center space in Las Vegas measures 2.4 million square feet and can provide up to 315MW of power. The company is getting ready to launch yet another data center in the market (Las Vegas 10) this month, Kramer said.

    It also recently launched data centers outside of Reno, Nevada, and in Grand Rapids, Michigan; it entered into joint ventures to build data centers in Italy and Thailand and announced plans to build data centers in the Atlanta market.

    Design is King

    Switch’s overall message is that its data center design – virtually all of it attributed to the company’s founder and CEO, Rob Roy – is more reliable than the design requirements described by Uptime’s Tier IV, the highest reliability rating. The elements that make it more reliable, according to the company, include the ability for the data center to “run forever without water,” detection of and protection from outside air pollutants, energy storage system redundancy, availability of multiple network carriers, individual rack security, and more. (The detailed list is here.)

    Data center design and focus on physical security have played a big role in Switch’s differentiation story, and the long list of differences between Tier 5 and Uptime’s Tier IV requirements highlights many of the key elements in Roy’s design and approach to security.

    5:24p
    Provide Cloud Services to the Feds? Study the Space-Weather Threat

    It’s been nine years since the US government seriously addressed the threat that electromagnetic interference from a solar storm or nuclear event poses to the century-old US electrical grid.

    Congress acted by proposing an upgrade to the grid at a cost of about $2 billion and the development of protections for 5,000 power-generation plants for an additional $250 million. The initiative stalled, and the movement to protect the country against electromagnetic pulses (EMPs) came to an abrupt halt.

    And when the Task Force on National and Homeland Security told Congress two years ago that EMPs pose “existential threats that could kill 9 of 10 Americans through starvation, disease, and societal collapse,” it raised some federal eyebrows but not much else.

    Considering the critical nature of the potential (if obscure) threat, it’s a good thing that Mike Caruso’s crusade to push the government and the IT industry into action hasn’t lost steam. For the past five years, the director of government and specialty business development at ETS-Lindgren has spoken at Data Center World, hoping to shed as much light as possible on the issue.

    See also: Can Space Weather Kill the Cloud?

    His efforts are paying big dividends today, and you might want to listen more closely than usual when Caruso takes the podium again on July 12 at Data Center World Local in Chicago. He will talk about what data center professionals need to do now that the 2017 National Defense Authorization Act (NDAA) has been signed into law.

    More about Data Center World Local, Chicago here

    The NDAA requires that all 16 Critical Infrastructure Segments submit a preliminary study of the impact an EMP would have on them. That includes any public or private company involved in: chemicals, commercial facilities, communications, critical manufacturing, dams, defense industrial base, emergency services, energy, financial services, food and agriculture, government facilities, healthcare and public health, IT, nuclear reactors, transportation, and water and wastewater systems.

    “At this point only data centers with FedRAMP activity will be required to address EMP, but eventually all data centers serving any of the 16 Critical Infrastructure Segments will have to comply with regulations that the Department of Homeland Security (DHS) will create,” Caruso explained.

    That’s required—but not funded—by the government.

    FedRAMP is the certification program for companies that want to provide cloud services to federal agencies. More than 80 products are currently FedRAMP-authorized, with 65 additional ones in progress. They include cloud services by Amazon Web Services, Accenture, Adobe, Google, IBM, Microsoft, Hewlett Packard Enterprise, and Oracle.

    “Because the critical infrastructure is comprised of mostly private/corporate ownership, the US government has not made plans to fund its EMP protection,” Caruso said. “However, in the case of utilities, there is the possibility of consumer rate increases to cover the capital expenses.”

    The first step toward compliance, Caruso suggested, is for data center operators to work with an expert to evaluate either their planned facility (early planning is essential) or their existing one.

    He said costs will vary according to the size and location of the data center. “New construction of an EMP-protected data center would be approximately 10 percent above the cost of a non-protected data center. Retrofitting an existing facility would be significantly more expensive and dependent upon the particular circumstances.”

    The good news for data centers is that the technology that can protect against EMP threats exists and is in use today. Caruso said RF shields and Faraday cages are affordable ways to protect either new facilities or existing ones. Both deflect electromagnetic energy.

    To the naysayers who deny such threats exist, Caruso had this to say:

    “I suggest they research the findings of the US government’s EMP Commission and consensus view of EMP experts who have advanced degrees in physics and electrical engineering along with several decades of experience in the field—with access to classified data throughout that time—and who have conducted EMP tests on a wide variety of electronic systems, beginning in 1963.  Now, in today’s world, with the unfortunate reality of terrorists and rogue-nation activity, it’s not a matter of if a threat will impact the nation’s security, but when. That’s why we’re seeing legislation such as the NDAA enacted by the US government and increased involvement by the Department of Homeland Security.”

    To learn more, come to Data Center World Local, Chicago next month.

    6:00p
    Equinix First Member of CommScope’s New Colocation Data Center Alliance

    With the total operational space of global multi-tenant data centers (MTDC) expected to reach 177 million square feet by the end of next year—with no end in sight to growth—CommScope announced that it has formed an alliance to help standardize customer solutions.

    And Equinix is the Multi-Tenant Data Center Alliance’s first member. The alliance is part of CommScope’s PartnerPRO Network, which serves as a resource for distributors, installers, integrators, and consultants worldwide.

    “With our global network of 179 data centers across 44 markets, Equinix is thrilled to be part of the CommScope MTDC Alliance to offer enterprise customers the optimal data center deployments that best fits their needs,” Greg Adgate, vice president of global technology partners and alliances at Equinix, said in a statement.

    MTDC infrastructure makes advanced technologies such as cloud computing and virtualized data centers available to companies of all shapes and sizes, while also allowing flexible and easy expansion as the business grows. By outsourcing data center services instead of building, hosting, maintaining, and upgrading them, MTDC tenants can realize significant operating and capital expenditure savings.

    “Over the years, a shift has taken place in which companies increasingly outsource IT needs to shared environments in which data centers are viewed as an operating expense,” Stephen Kowal, CommScope’s senior vice president, global partners, said in a statement. “By leasing third-party data center white space, enterprises can remain focused on their core businesses while enjoying optimal data center availability, reliability and cost control.”

    6:30p
    The Cost of Complexity

    David Flynn is CTO of Primary Data.

    It’s no secret that we’re in an era of unprecedented change – and much of that change is being driven by data. Luckily, new storage solutions are helping enterprises simultaneously manage rapid data growth, inputs from new data sources, and new ways to use data. These technologies service a wide variety of application needs. Cloud storage delivers agility and savings, SSDs and NVMe flash address the need for fast response times, web-scale architectures give enterprises the ability to scale performance and capacity quickly, and analytics platforms give businesses actionable insight.

    While each technology provides unique benefits, collectively, they can also introduce significant complexity to the enterprise. Let’s take a closer look at how complexity is increasing enterprise costs and how both costs and complexity can be eliminated by automatically aligning data with storage that meets changing business objectives.

    Solving the Paradox of Storage Choice

    Faced with a diverse storage ecosystem, IT teams often find themselves choosing between purchasing all storage from a single vendor and shopping among different vendors, or even building their own systems with customizable software on commodity servers. They must carefully weigh these options to ensure they make the best choice for both the top and bottom line.

    Sourcing storage through a single vendor is convenient. Procurement is simple, all support calls go to one place, and management interfaces are frequently consistent, making it easier to configure and maintain storage. But vendors with wide product portfolios typically charge premium prices, and single sourcing weakens IT’s ability to negotiate. Further, since it’s unlikely that all of a vendor’s products offer best-in-class capabilities, enterprises might need to compromise on certain features, potentially to the detriment of the business.

    Conversely, sourcing storage through multiple vendors or building storage in-house can reduce upfront costs, but increases labor costs. IT must spend significant time evaluating different products to ensure they purchase the right product for the business. They must then negotiate pricing to ensure they aren’t paying too much. Since each vendor’s software and interfaces are different, they must also invest time to train staff to properly configure and maintain the different systems.

    Given all this complexity, it’s no wonder enterprises are eager to push as much data as they can into the cloud. The problem is that many enterprise workloads aren’t cost-effective to run in the cloud, it’s costly to retrieve data back to on-premises storage, and many enterprise applications need to be modified to use cloud data, which may be impractical. This makes the cloud yet another silo for IT to manage.

    A metadata engine resolves these problems by separating the metadata path from the data path through virtualization. This makes it possible to connect different types of storage within a single namespace, including integrating the cloud as just another storage tier. IT can then assign objectives to data that define its performance and protection requirements, analyze whether those objectives are being met, and automatically move data to maintain compliance, tiering it between different storage devices to meet performance, cost, or reliability requirements, transparently to applications.

    With these capabilities, IT can transition from a storage-centric architecture to a data-centric architecture. Instead of maintaining separate storage silos, IT can deploy storage with specific capabilities, from its vendor of choice, into a global namespace. The metadata engine then automatically places and moves data to meet objectives while maximizing aggregate storage utilization and efficiency.
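
    To make this concrete, below is a minimal sketch of objective-based placement of the kind described above. It is purely illustrative: the tier names, prices, and latency figures are hypothetical, and it does not represent Primary Data’s product or any particular vendor’s API.

    # Illustrative only: a toy objective-based placement policy of the kind a
    # metadata engine might apply behind a single namespace. All tier names,
    # prices, and latency figures below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Tier:
        name: str
        latency_ms: float     # typical read latency
        cost_per_gb: float    # monthly cost per GB
        copies: int           # protection level (number of durable copies)

    @dataclass
    class Objective:
        max_latency_ms: float
        max_cost_per_gb: float
        min_copies: int

    TIERS = [
        Tier("nvme-flash", latency_ms=0.2, cost_per_gb=0.25, copies=2),
        Tier("nas", latency_ms=5.0, cost_per_gb=0.08, copies=2),
        Tier("cloud", latency_ms=50.0, cost_per_gb=0.02, copies=3),
    ]

    def place(objective: Objective) -> Tier:
        """Return the cheapest tier that satisfies the data's objective."""
        candidates = [t for t in TIERS
                      if t.latency_ms <= objective.max_latency_ms
                      and t.cost_per_gb <= objective.max_cost_per_gb
                      and t.copies >= objective.min_copies]
        if not candidates:
            raise ValueError("no tier satisfies this objective")
        return min(candidates, key=lambda t: t.cost_per_gb)

    # A latency-sensitive database volume lands on flash; cold archive data lands in the cloud.
    print(place(Objective(max_latency_ms=1.0, max_cost_per_gb=0.50, min_copies=2)).name)    # nvme-flash
    print(place(Objective(max_latency_ms=100.0, max_cost_per_gb=0.05, min_copies=2)).name)  # cloud

    In this model, adding a new class of storage is just another entry in the tier list, and data whose objectives it satisfies more cheaply can be moved there without touching the application.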

    The Cost of Avoiding Data Migrations and Upgrades

    Many vendors upsell new storage devices to customers every few years. These upgrades usually deliver new features, but few IT teams look forward to a migration. Typically, migrations take months of planning and consume a large portion of IT’s budget and resources. Since it’s so hard to move data without disrupting applications, IT commonly overspends, purchasing capacity well in excess of expected future demand.

    A metadata engine solves common issues with data migrations by making the process of moving data completely transparent to applications. IT no longer has to halt applications, manually copy data to the new storage and reconfigure, then restart the applications. Available performance and capacity can also be seen from a single interface, while alerts and notifications tell admins when they need to deploy additional performance or capacity. Since performance and capacity can be added in minutes or hours instead of days or weeks, IT no longer needs to perform painful sizing exercises or overpurchase storage years in advance of actual need.
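
    As a rough sketch of the kind of capacity alerting described above (the pool names and the 20 percent free-space threshold below are hypothetical, not a specific product feature), the core check can be as simple as:

    # Illustrative only: flag storage pools whose free capacity falls below a
    # threshold so admins can add capacity before applications are affected.
    def check_capacity(pools, warn_fraction=0.20):
        """Return alert messages for pools running low on free capacity."""
        alerts = []
        for name, stats in pools.items():
            free_fraction = stats["free_tb"] / stats["total_tb"]
            if free_fraction < warn_fraction:
                alerts.append(f"{name}: only {free_fraction:.0%} free, consider expanding")
        return alerts

    # Hypothetical pools spanning the global namespace.
    pools = {
        "flash-pool": {"total_tb": 100, "free_tb": 12},
        "nas-pool": {"total_tb": 500, "free_tb": 210},
    }
    for alert in check_capacity(pools):
        print(alert)    # flash-pool: only 12% free, consider expanding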

    The Cost of Downtime

    While IT is always looking to reduce complexity and costs and make life easier, the cost that matters most when it comes to complexity is the increased risk of downtime. The more systems IT is managing, the more human involvement is required. The more human involvement that exists, the greater the risk of unplanned downtime, and this downtime can be disastrous for business.

    Stephen Elliot of IDC released a report that examines the true cost of downtime and infrastructure failure. Some key data points are:

    • For the Fortune 1000, the average total cost of unplanned application downtime per year is $1.25 billion to $2.5 billion
    • The average cost of an infrastructure failure is $100,000 per hour
    • The average cost of a critical application failure per hour is $500,000 to $1 million

    A metadata engine places and moves data across all storage to meet business objectives, without disrupting applications. This ensures applications can always access data at the service levels they require, while greatly reducing or eliminating unplanned downtime.

    Storage diversity introduces complexity, but managing a wide range of systems with diverse capabilities doesn’t have to be a challenge any longer. A metadata engine enables enterprises to transition from a storage-centric architecture, where IT manages each system separately, to a data-centric architecture, where IT deploys the storage features they want, automating the placement and movement of data with software. This enables IT to slash complexity and costs, while freeing staff to focus on projects that deliver more direct value to the business.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

