Data Center Knowledge | News and analysis for the data center industry
 

Thursday, August 6th, 2015

    12:00p
    As Tenants Get Smarter About Data Center Availability, They Want More Options

    You may like the feel of an F-250, but if you don’t need to lug around a giant load on a regular basis, perhaps you’re better off with a sedan, while holding on to some more of that gas money. Big-vehicle aficionados may disagree, but it’s a matter of practicality versus style.

    When it comes to data center services, there’s a lot more room for practicality than there is for style, of course, and for a long time, getting the most redundant, fail-proof data center services you can afford has been considered the practical choice. As data center users get smarter, however, they are starting to think differently about data center availability.

    More and more users are learning to assess the actual reliability needs of individual applications and to demand lower-reliability, lower-cost data center options from providers for workloads that don’t necessarily need guaranteed 24/7 uptime year-round. Many of these workloads are high-performance computing applications used by universities, financial-services firms, and the oil-and-gas sector. Some bitcoin mining companies have also taken lower-redundancy colocation space, and in several cases companies have moved IT labs traditionally housed in office buildings into stripped-down data center facilities.

    And data center providers are responding to the trend, designing lower-redundancy infrastructure and offering it as a service at a much lower cost than traditional N+… products.

    Savings for the customer can be substantial. A traditional 2N colo environment can cost about $140 per kW per month, while a non-redundant solution structured for an HPC deployment can come in at less than $100, said Patrick Lynch, managing director of the Data Center Solutions Group at CBRE, a commercial real estate firm.
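    A rough back-of-the-envelope calculation shows what those per-kW rates mean in practice. The sketch below assumes a hypothetical 500 kW HPC deployment and treats $100 per kW as the non-redundant rate; both figures are assumptions for illustration, not quotes from CBRE.

    ```python
    # Back-of-the-envelope comparison of monthly colo spend at the per-kW rates
    # quoted above. The 500 kW deployment size is an assumed figure.
    REDUNDANT_2N_RATE = 140   # $/kW/month, traditional 2N colo
    NON_REDUNDANT_RATE = 100  # $/kW/month, upper bound for a stripped-down HPC build
    DEPLOYMENT_KW = 500       # assumed HPC deployment size

    monthly_2n = REDUNDANT_2N_RATE * DEPLOYMENT_KW
    monthly_n = NON_REDUNDANT_RATE * DEPLOYMENT_KW
    annual_savings = (monthly_2n - monthly_n) * 12

    print(f"2N colo:        ${monthly_2n:,.0f}/month")
    print(f"Non-redundant:  ${monthly_n:,.0f}/month")
    print(f"Savings:        ${annual_savings:,.0f}/year")  # $240,000/year at these rates
    ```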

    As Big Data analytics becomes more commonplace, the use of HPC is growing in certain industries, and most HPC applications aren’t serving customers around the clock. Many of them tend to run batch processing jobs and can tolerate a graceful switch to hibernation mode from time to time.

    There are HPC deployments that require 2N infrastructure, and Lynch and his colleagues have worked on such deals, but most of the HPC data center requirements he sees are “N with a small 2N component.”

    The typical model for the better part of the last decade was to build out multi-purpose data center environments and put everything in them, he said. As customers became savvier, they started realizing that some workloads could be outsourced to colos or managed service providers. Lynch sees the trend of discerning between different levels of infrastructure redundancy in colos as the next step in the evolution of the user.

    “It’s a part of that further bifurcation of the IT needs,” Lynch said. “It’s not one-size-fits-all,” and users are increasingly aware of the differences.

    One example of a provider responding to this trend is Sentinel Data Centers, which builds massive facilities with on-site electrical substations and specializes in wholesale data center deals. Together with Russo Development, it built a 10 MW data center in Orangetown, New York, for Bloomberg, and it operates multi-tenant campuses in New Jersey and North Carolina, each with tens of megawatts of power capacity.

    Sentinel announced its lower-redundancy solution earlier this year. Its cost is “commensurately lower” than that of the company’s baseline service, said Todd Aaron, Sentinel’s co-founder and co-president.

    Users “have gotten a lot smarter about what they need, and how they contract for it,” he said. “It’s a trend. We think that we’re going to continue to see more of it.”

    While Aaron doesn’t think this kind of solution will displace demand for more traditional high-redundancy data center services, the company did see enough demand for it to design a whole new offering. Adding a lower-redundancy service is not simple, as it has to make economic sense for both the customer and the provider.

    But a more discerning user requires a finer touch from the provider. Besides, the more you save on your data center bill, the more gas money you’ll have to drive that F-250 to the grocery store.

    1:00p
    CoreOS, Mirantis Join Forces to Marry OpenStack and Linux Containers

    In a move that heralds the coming together of OpenStack and Docker containers, CoreOS and Mirantis today unveiled an alliance under which the OpenStack distribution from Mirantis will be integrated with Tectonic, the container-optimized Linux distribution from CoreOS.

    Kamesh Pemmaraju, director of partner marketing for Mirantis, said that a lot of work has recently gone into integrating OpenStack with Google’s open source container orchestration framework Kubernetes, which is now part of the Tectonic platform by CoreOS.

    “There’s not a lot of knowledge about containers in the OpenStack community,” said Pemmaraju. “Since Google joined the OpenStack Foundation there’s been a lot of work done.”

    Because of that integration effort, IT organizations can now make use of OpenStack to manage Linux containers, whether they are deployed on bare-metal servers, virtual machines, or in a Platform-as-a-Service (PaaS) environment. In the case of CoreOS, Kubernetes orchestration software comes bundled with its operating system.
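    For readers unfamiliar with what Kubernetes-driven container management looks like in practice, here is a minimal sketch using the official Kubernetes Python client. It is generic Kubernetes usage rather than anything specific to Tectonic or the Mirantis integration described here, and the image name, labels, and namespace are placeholders.

    ```python
    # Minimal sketch: declaring a containerized workload through the Kubernetes
    # API with the official Python client. Generic Kubernetes usage -- not
    # specific to Tectonic or the Mirantis/CoreOS integration.
    from kubernetes import client, config

    config.load_kube_config()  # reads the local kubeconfig

    container = client.V1Container(
        name="web",
        image="nginx:1.25",  # placeholder image
        ports=[client.V1ContainerPort(container_port=80)],
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web-demo"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
    ```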

    Couple that with the existing ability to manage multiple types of VMs, and OpenStack is now the most flexible and comprehensive IT platform available, Pemmaraju said, because it provides a single API through which all those IT resources can be provisioned and managed.
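    On the VM side of that provisioning story, the sketch below uses the openstacksdk library to launch an instance against an OpenStack cloud. The cloud name, image, flavor, and network are placeholders, and this is standard SDK usage offered only as an illustration of programmatic provisioning, not of the Mirantis distribution specifically.

    ```python
    # Sketch of provisioning a VM through OpenStack's API via openstacksdk.
    # Cloud name, image, flavor, and network below are placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

    image = conn.compute.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    server = conn.compute.create_server(
        name="worker-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)  # ACTIVE once the instance is up
    ```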

    In general, Pemmaraju said, while Linux containers, recently popularized by a startup named Docker, have not been widely deployed in production environments yet, their usage in application development and testing environments is already widespread. For the most part, however, those containers have run on top of VMs, because that’s the simplest way for most organizations to provision them.

    In addition, Pemmaraju noted, networking containers across multiple servers remains challenging.

    As a consequence, Mirantis expects the first production deployments of Linux containers to be inward-facing, non-mission-critical applications, something Pemmaraju expects to change over time.

    Most IT operations teams have little familiarity with containers, which continue to be a developer-led phenomenon in terms of adoption. But as IT operations teams gain more confidence in the approach, it’s only a matter of time before its usage increases on bare-metal servers as an alternative to the bloating VM count in enterprise IT.

    3:00p
    Take Control of Hadoop with a Data-Centric Approach to Security

    Reiner Kappenberger is Global Product Manager for HP Security Voltage.

    I’ve often said that Hadoop is the biggest cybercrime bait ever created. Why? Well, in the past, attackers had to gain intricate knowledge of a network and go through a lot of work and expense to find the data they wanted to retrieve. In a Hadoop environment, an organization consolidates all of its information into a single destination, making it very easy for criminals to find all the information they want – and more.

    It isn’t just the size of the bait that makes Hadoop breaches so treacherous. Hadoop environments are inexpensive to replicate and require no prior knowledge of the data schema used. In just a few days, terabytes of data can be siphoned up and replicated elsewhere.

    Hadoop is ground zero for the battle between the business and security. The business needs the scalable, low-cost Hadoop infrastructure so it can take analytics to the next level—a prospect with myriad efficiency and revenue implications. Yet Hadoop includes few safeguards, leaving it to enterprises to add a security layer.

    Security cannot afford to lose this fight: Implementing Hadoop without robust security in place takes risk to a whole new level. But armed with good information and a few best practices, security leaders can put an end to the standoff.

    By adopting a data-centric security strategy as you plan and implement big data projects or Hadoop deployments, you can neutralize the effects of damaging data breaches and help ensure attackers will glean nothing from attempts to breach Hadoop in the enterprise.

    What do I mean by data-centric? Data exists in three basic states – at rest, in use, and in motion. The data-centric approach stands in contrast to traditional network-based approaches to security, which haven’t responded directly to the emerging need for security that neutralizes the effects of a breach by protecting sensitive data at the field level.

    With data-centric security, sensitive field-level data elements are replaced with usable, but de-identified, equivalents that retain their format, behavior and meaning. This means you modify only the sensitive data elements so they are no longer real values, and thus are no longer sensitive, but they still look like legitimate data.

    The format-preserving approach can be used with both structured and semi-structured data. Also called “end-to-end data protection,” it provides an enterprise-wide solution that extends into Hadoop and beyond that environment. The protected form of the data can then be used in subsequent applications, analytic engines, data transfers and data stores.

    A major benefit is that a majority of analytics can be performed on de-identified data protected with data-centric techniques – data scientists do not need access to live payment cards, protected health information (PHI) or personally identifiable information (PII) in order to achieve the needed business insights.
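    A toy illustration of why that works: as long as de-identification is consistent, equal inputs map to equal protected values, so joins, group-bys and counts still come out right. The HMAC-based pseudonym below is an illustrative stand-in chosen for brevity, not the format-preserving scheme a commercial product would use.

    ```python
    # Toy illustration: consistent de-identification preserves analytics,
    # because equal inputs always map to equal tokens.
    import hashlib, hmac
    from collections import Counter

    KEY = b"demo-key-held-outside-hadoop"  # placeholder secret

    def pseudonymize(value: str) -> str:
        return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

    transactions = [
        {"ssn": "123-45-6789", "amount": 40},
        {"ssn": "123-45-6789", "amount": 60},
        {"ssn": "987-65-4321", "amount": 25},
    ]
    protected = [{"ssn": pseudonymize(t["ssn"]), "amount": t["amount"]} for t in transactions]

    # Spend per customer, computed entirely on de-identified records.
    spend = Counter()
    for t in protected:
        spend[t["ssn"]] += t["amount"]
    print(spend)  # two customers, totals 100 and 25, with no real SSNs in sight
    ```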

    Whether you take advantage of commercially available security solutions, or develop your own proprietary approach, the following five steps will help you to identify what needs protecting so you can apply the right techniques to protect it—before you put Hadoop into production.

    Audit and Understand Your Hadoop Data

    To get started, take inventory of all the data you intend to store in your Hadoop environment. You’ll need to know what’s going in so you can identify and rank the sensitivity of that data. It may seem like a daunting task, but attackers can take your data quickly and sort it at their leisure. If they are willing to put in the time to find what you have, you should be too.
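    A starting point can be as simple as a pattern-based pass over the files headed for the cluster. The sketch below is a rough illustration only: the staging directory is a placeholder, the regexes are deliberately crude, and a real audit would lean on a proper data-discovery or classification tool.

    ```python
    # Rough pattern-based inventory pass over files before they land in Hadoop.
    # Regexes are simplistic and for illustration only.
    import re
    from pathlib import Path

    PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def classify(path: Path) -> dict:
        """Count candidate sensitive values per category in one file."""
        text = path.read_text(errors="ignore")
        return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

    for f in Path("staging/").glob("*.csv"):   # placeholder staging directory
        hits = classify(f)
        if any(hits.values()):
            print(f, hits)
    ```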

    Perform Threat Modeling on Sensitive Data

    The goal of threat modeling is to identify the potential vulnerabilities of at-risk data and to know how the data could be used against you if stolen. This step can be simple. For example, we know that personally identifiable information always has a high black market value. But assessing data vulnerability isn’t always so straightforward. Date of birth may not seem like a sensitive value alone, but when combined with a zip code, a date of birth gives criminals a lot more to go on. Be aware of how various data can be combined for corrupt purposes.
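    The date-of-birth example can be made concrete with a tiny, fabricated dataset: a birth date alone matches several people, but birth date plus ZIP code often narrows the field to one or two.

    ```python
    # Tiny illustration of quasi-identifiers: fabricated records only.
    from collections import Counter

    people = [
        ("1984-03-02", "97058"), ("1984-03-02", "97031"),
        ("1984-03-02", "97058"), ("1990-07-15", "97058"),
    ]

    by_dob = Counter(dob for dob, _ in people)
    by_dob_zip = Counter(people)
    print(by_dob)      # three people share the same birth date
    print(by_dob_zip)  # only two share birth date *and* ZIP -- far more identifying
    ```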

    Identify the Business-Critical Values Within Sensitive Data

    It’s no good to make the data secure if the security tactic also removes its business value. You’ll need to know if data has a characteristic that is critical for downstream business processes. For example, certain digits in a credit card number are critical to identifying the issuing bank, while other digits have no value beyond the transaction. By identifying the digits you need to retain, you can be sure to use data masking and encryption techniques that make re-identification possible.
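    As a minimal sketch of that idea, the function below masks a card number while keeping the issuer-identifying prefix and the last four digits. It is an illustration of retaining business-critical characters, not a PCI-compliant or production technique, and the prefix/suffix lengths are assumptions.

    ```python
    # Toy masking sketch: keep the business-critical digits of a card number
    # (issuer-identifying prefix and last four) and replace the middle.
    def mask_pan(pan: str, keep_prefix: int = 6, keep_suffix: int = 4) -> str:
        digits = [c for c in pan if c.isdigit()]
        middle = ["*"] * (len(digits) - keep_prefix - keep_suffix)
        return "".join(digits[:keep_prefix] + middle + digits[-keep_suffix:])

    print(mask_pan("4111 1111 1111 1234"))  # 411111******1234
    ```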

    Apply Tokenization and Format-Preserving Encryption

    You’ll need to use one of these techniques to protect any data that requires re-identification. While there are other techniques for obscuring data, these are particularly suited for Hadoop because they do not result in collisions that prevent you from analyzing data. Each technique has different use cases; expect to use both, depending on the characteristics of the data being de-identified. Format-preserving technologies enable the majority of your analytics to be performed directly on the de-identified data, securing data-in-motion and data-in-use.
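    To make the tokenization side concrete, here is a toy vault-based tokenizer: each distinct value gets a random, same-length token, tokens never collide, and the vault supports authorized re-identification. A real deployment would use a hardened token vault or a vetted format-preserving encryption algorithm (for example, NIST FF1) rather than anything hand-rolled; this class exists only to show the mechanics.

    ```python
    # Toy vault-based tokenizer: deterministic per value, collision-free,
    # reversible via the vault. Illustration only -- not production-grade.
    import secrets

    class TokenVault:
        def __init__(self):
            self._forward = {}   # real value -> token
            self._reverse = {}   # token -> real value

        def tokenize(self, value: str) -> str:
            if value not in self._forward:
                token = "".join(secrets.choice("0123456789") for _ in value)
                while token in self._reverse:          # keep tokens unique
                    token = "".join(secrets.choice("0123456789") for _ in value)
                self._forward[value] = token
                self._reverse[token] = value
            return self._forward[value]

        def detokenize(self, token: str) -> str:
            return self._reverse[token]

    vault = TokenVault()
    t = vault.tokenize("4111111111111234")
    print(t, len(t) == 16)      # same length as the original value
    print(vault.detokenize(t))  # authorized re-identification
    ```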

    With Hadoop, you must protect sensitive data before it is ingested. Once data enters Hadoop it is immediately replicated inside your cluster, making it impossible to protect after the fact. By applying your tokenization and format-preserving encryption on data during the ingestion process, you’ll ensure no traces of vulnerable data are floating around your environment.
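    The sketch below shows the ingestion-time idea in miniature: sensitive columns are de-identified as raw records are read, and only the protected output is ever handed to the cluster. The file names, column names, and the protect() stand-in are placeholders; in practice this logic would live in whatever pipeline feeds Hadoop.

    ```python
    # Sketch of ingestion-time protection: de-identify sensitive columns before
    # anything reaches the cluster. File and column names are placeholders.
    import csv, hashlib, hmac

    SENSITIVE_COLUMNS = {"ssn", "card_number"}   # assumed schema

    def protect(value: str) -> str:
        """Stand-in for the tokenization / format-preserving encryption step."""
        return hmac.new(b"demo-key", value.encode(), hashlib.sha256).hexdigest()[:len(value)]

    with open("raw_events.csv", newline="") as src, \
         open("protected_events.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for col in SENSITIVE_COLUMNS & set(row):
                row[col] = protect(row[col])
            writer.writerow(row)
    # Only protected_events.csv goes to the Hadoop loader; the raw file never
    # reaches the cluster.
    ```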

    Provide Data-At-Rest Encryption Throughout the Hadoop Cluster

    As just mentioned, Hadoop data is immediately replicated on entering the environment, which means you’ll be unable to trace where it’s gone. When hard drives age out of the system and need replacing, encryption of data-at-rest means you won’t have to worry about what could be found on a discarded drive once it has left your control. This step is often overlooked because it’s not a standard feature offered by Hadoop vendors.
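    As a generic illustration of the at-rest principle, the sketch below encrypts a file before it is handed to storage, using the cryptography package's Fernet recipe. It stands in for whatever at-rest encryption mechanism a given deployment actually uses, and key management, the hardest part in practice, is elided here: in a real cluster the key would be issued and held by a key management service, never stored beside the data.

    ```python
    # Generic sketch of at-rest protection for a file bound for the cluster.
    # Key management is elided; a KMS would hold the key in practice.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice: issued and held by a KMS
    fernet = Fernet(key)

    with open("protected_events.csv", "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open("protected_events.csv.enc", "wb") as f:
        f.write(ciphertext)
    # A drive that leaves the data center holding only the .enc file (and no
    # key) discloses nothing.
    ```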

    The perfect time to undertake this process is after you’ve done a pilot and before you’ve put anything into production. If you’ve done the pre-work, you’ll understand your queries, and adding the format-preserving encryption and tokenization to the relevant fields can be done very easily, taking just a few days to create a proof of concept.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:53p
    Compass Eyeing Dallas, Atlanta Data Center Markets

    Compass Datacenters has acquired parcels of land in Dallas and Atlanta, both new markets for the provider, and added to its existing footprint in the Columbus market. The company plans to build dedicated customer facilities on each of the new sites.

    The sites are available for builds with aggressive timelines, according to Compass CEO Chris Crosby. Pre-permitting, site planning, facility pre-planning, and other steps have already been completed, with all locations ready to go once customers sign. The company said it selected the three sites because of strong interest from potential customers.

    Compass provides made-to-order data centers in whatever market a customer wants. The company uses a single, repeatable design to deliver data centers in short order and usually gets its facilities Tier III certified by the Uptime Institute.

    With its flexibility on location, Compass targets companies wanting to establish data centers in emerging or second-tier data center markets that often have limited supply. Examples of its past builds include data centers for CenturyLink in Minnesota and for Windstream in the Nashville metro.

    The land acquisition is notable because Dallas and Atlanta are both major established data center markets, suggesting the company is more directly competing with other wholesale providers in their backyards.

    “Even in ‘major’ markets like Dallas and Atlanta, businesses want the functionality that we can deliver in a dedicated facility, such as a loading dock that you don’t have to share, dedicated storage and staging space, and total control over the site’s operations—at the same price point that they would find for traditional wholesale colocation,” Crosby said.

    The Lithia Springs, Georgia, parcel is 23.5 acres and can accommodate up to five standard 1.2 MW Compass data centers. Economic incentives are available for data center projects there, and the area has rich, redundant power infrastructure. Services from two utilities are available: GreyStone Power and Georgia Power.

    The Richardson, Texas, parcel is just over 10 acres in the suburban Dallas telecom corridor and has room for two of Compass’ 1.2 MW data centers.

    The 9.5-acre parcel of land in the New Albany Business Park in suburban Columbus is just north of the company’s current customer in Ohio, American Electric Power. There’s room for two Compass data centers there. The location also has robust power and redundancy with dual feeds in a looped system.

    5:17p
    Zscaler Raises $100M to Grow Internet Security as a Service Platform

    Internet security firm Zscaler announced it has raised $100 million to step up global expansion and sales efforts for its platform. The funding round was led by late-stage investor TPG, with existing investors EMC and Lightspeed Ventures contributing. With the total amount raised now at $138 million, the company places its valuation at over $1 billion as it prepares for a potential IPO.

    The seven-year-old Silicon Valley firm provides real-time insights into security issues through a multi-tenant, distributed cloud service and claims to protect over 13 million users across 5,000 global companies and government organizations. Numerous well-known companies are listed as customers, and the company has been recognized as a leader in Gartner’s Magic Quadrant for Secure Web Gateways and in Forrester Research’s Forrester Wave for SaaS Web Content Security.

    Zscaler founder and CEO Jay Chaudhry describes its Security-as-a-Service platform as a disruptor in a field where old appliance-based gateways are no longer effective, saying the startup’s cloud approach will better serve a mobile, cloud-first world.

    Zscaler recently released a cloud-based firewall and said it continues to post record bookings, revenue, and renewals. The company cited a third-quarter win: a multi-million-dollar subscription by one of the world’s largest banks.

    “We see tremendous opportunity in the rapidly-growing cybersecurity industry, and after spending significant time in the space, we found Zscaler to be the leading cloud-based security solution for the world’s largest and most demanding customers—a true SaaS platform like that of Salesforce or Workday,” Nehal Raj, partner at TPG, said in a statement.

    5:34p
    Kim Dotcom Severs Ties to Mega, Plans To Build New Secure Cloud Storage Service


    This article originally appeared at The WHIR

    Kim Dotcom said in an interview on Slashdot that he is no longer involved with Mega, the cloud storage service he founded in 2013. He said he doesn’t own any shares of Mega, and that users shouldn’t trust Mega to host sensitive files anymore.

    Dotcom had become notorious through his file hosting service Megaupload, which the US Department of Justice shut down by seizing its domain names in 2012 in an effort to stop copyright infringement. A year later, while still awaiting trial, he founded Mega, which he saw as a way to provide a truly private cloud service.

    In the Slashdot interview last week, Dotcom explained that he has broken all ties to Mega, which he doesn’t think lives up to the original privacy goals he had for the service.

    “I’m not involved in Mega anymore,” he wrote. “Neither in a managing nor in a shareholder capacity. The company has suffered from a hostile takeover by a Chinese investor who is wanted in China for fraud. He used a number of straw-men and businesses to accumulate more and more Mega shares. Recently his shares have been seized by the NZ government. Which means the NZ government is in control. In addition Hollywood has seized all the Megashares in the family trust that was setup for my children. As a result of this and a number of other confidential issues I don’t trust Mega anymore. I don’t think your data is safe on Mega anymore.”

    He notes that when his non-compete clause runs out at the end of 2015, he “will create a Mega competitor that is completely open source and non-profit, similar to the Wikipedia model. I want to give everyone free, unlimited and encrypted cloud storage with the help of donations from the community to keep things going.”

    Dotcom still faces significant legal challenges. He has avoided facing charges for years, with his lawyers continuing to successfully postpone his extradition from New Zealand to the US.

    According to the DoJ, Megaupload and other related sites had generated more than $175 million in illegal revenue, and caused more than $500 million in harm to copyright owners as of November 2013.

    Dotcom said he and others are victims of “copyright extremism” that he said is “hurting technology companies and the Internet as a whole.” He said the current state of the copyright industry essentially censors the Internet and stifles innovation, and that it needs to evolve to ensure content creators are compensated for their work and investments. Otherwise, technologies will come along that challenge these antiquated notions of copyright.

    While copyright is what got him into trouble, Dotcom’s passion seems to be around creating services that promote privacy.

    Dotcom is optimistic that encryption will become a more widespread technology that helps provide online privacy in the wake of Edward Snowden’s revelations of widespread government surveillance.

    “The booming encryption market has been created by the actions of the US government,” he wrote. “Businesses that offer verifiably safe encryption will outperform those that don’t. Now that the people are aware of what’s going on they will demand more privacy options from the services and products they purchase. Governments will struggle to stop or control encryption and technology will prevail. That is good for all of us. But it’s sad that technology has to safeguard our human rights because our governments failed to do so.”

    This first ran at http://www.thewhir.com/web-hosting-news/kim-dotcom-severs-ties-to-mega-plans-to-build-new-secure-cloud-storage-service

    10:15p
    Report: Officials in Oregon Approve Google Data Center Tax Breaks

    Local officials have approved a new package of tax breaks for a data center Google is considering building in The Dalles, Oregon, The Oregonian reported.

    The company acquired a large parcel of land and is mulling a third data center build about a mile down the road from its existing campus. Google will be exempt from property taxes on its massive fleet of servers, and the approved package could save the company millions over the deal’s lifespan.

    Google values its current investment in The Dalles at $1.2 billion. Oregon was home to Google’s first self-designed and self-built data center in 2006, and a second, $600 million Google data center there came online earlier this year.

    Franchise fees generated by Google’s electricity use account for around 6 percent of The Dalles’ general fund, according to the Oregonian. The new data center could increase that share to 10 percent.

    Google will be required to pay $1.7 million upfront and $1 million annually to local governments if the project goes forward. For the second data center, Google paid $1.2 million to the city, Wasco County, and the local school district in a 2013 tax deal and agreed to pay $800,000 annually starting in 2016.

    Google will also pay $250,000 to the port for brownfield redevelopment. The new tax deal also requires Google to pay its employees in The Dalles at least 150 percent of the local average.

    The data center tax breaks were approved 3-0 by county commissioners following a 2-1 vote on the day prior. The dissenter wanted assurance that Google won’t seek an extension after the 15-year term ends. That sort of thing has happened before.

    Google is constantly expanding its data center footprint. The most recent Google data center expansion projects include construction on the site of a shuttered coal power plant in Alabama, a $300-million build in the Atlanta metro, a $380-million data center in Singapore, and a $66-million project in Taiwan.

    Google’s data center would be one of many coming to Oregon, which is enjoying a data center boom thanks to tax breaks, low-cost hydro power, and, in all likelihood, Portland’s amazing variety of food trucks.

