Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, August 18th, 2015

    4:01a
    Vapor IO and Bloom Energy Partner on Gas-Powered High-Density Data Centers

    Few companies in recent years have tried to challenge conventions of data center design as much as Vapor IO did earlier this year when it announced its Vapor Chamber, a complete rethink of the data center floor layout. The startup is now taking things a step further by partnering with a company that’s challenging the way the industry is used to thinking about data center power.

    Vapor IO has partnered with Bloom Energy to integrate its Vapor Chamber with Bloom’s fuel cells, which use natural gas or biogas to generate energy. The application of Bloom’s Energy Server extends far beyond data centers, but the data center sector has been a particular focus for the company and a market where it’s had some big wins, including deployments at eBay, Apple, AT&T, NTT, and CenturyLink data centers, to name some of its more well-known customers.

    The Vapor Chamber is a cylindrical chamber nine feet in diameter that packs up to 150kW of server power across six wedge-shaped 42U racks. The traditional data center hot aisle is replaced by a “hot column” in the center, where servers exhaust hot air to be removed by a duct. Cold air is supplied from outside of the chamber. The design aims to pack a lot of compute capacity into a relatively small footprint.
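    For scale, the quoted figures work out to roughly 25 kW per rack, well above typical enterprise rack densities. A minimal arithmetic sketch using only the numbers above:

    ```python
    # Density math from the figures in the article: 150 kW across six
    # wedge-shaped 42U racks in a single nine-foot-diameter chamber.
    CHAMBER_POWER_KW = 150
    RACKS_PER_CHAMBER = 6
    RACK_HEIGHT_U = 42

    power_per_rack_kw = CHAMBER_POWER_KW / RACKS_PER_CHAMBER   # 25.0 kW per rack
    power_per_u_kw = power_per_rack_kw / RACK_HEIGHT_U         # ~0.6 kW per rack unit

    print(f"{power_per_rack_kw:.1f} kW per rack, {power_per_u_kw:.2f} kW per U")
    ```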

    Vapor’s founder and CEO is Cole Crawford, who until recently was executive director of the Open Compute Foundation, which oversees the Open Compute Project, the open source hardware and data center design community started by Facebook.

    When it came out of stealth in March, the company also announced its own data center infrastructure management software that applies analytics to traditional DCIM data, such as temperature and humidity, as well as non-traditional data such as air pressure and vibration.
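    As a rough illustration of the kind of rule such analytics software might apply, here is a hypothetical threshold check over those sensor categories; the field names and limits below are invented for the example and are not Vapor’s actual data model:

    ```python
    # Hypothetical DCIM-style reading and threshold check. Field names and
    # limits are illustrative only, not Vapor's actual software.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        rack_id: str
        temperature_c: float
        humidity_pct: float
        pressure_pa: float   # differential air pressure, a "non-traditional" input
        vibration_g: float

    LIMITS = {"temperature_c": 32.0, "humidity_pct": 70.0,
              "pressure_pa": 50.0, "vibration_g": 0.5}

    def alerts(reading: Reading):
        """Yield (metric, value) pairs that exceed their configured limit."""
        for metric, limit in LIMITS.items():
            value = getattr(reading, metric)
            if value > limit:
                yield metric, value

    for metric, value in alerts(Reading("wedge-3", 34.2, 45.0, 38.0, 0.1)):
        print(f"ALERT {metric}={value}")
    ```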

    While Vapor can optimize for efficiency, it has no control over the delivery system for data center power, which is where the Bloom partnership comes in, Crawford said in an interview.

    He first met Peter Gross, who runs Bloom’s data center business, in London last year, he recalled. That’s when the conversation started, and they eventually realized that “we could get to just about a carbon-neutral data center deployment between the two companies,” Crawford said.

    While natural gas is a lower-CO2 fuel than coal, it is not generally considered a renewable source of energy. But Bloom fuel cells can use both natural gas and biogas, which can be a byproduct of waste treatment plants, making for a stronger “green” story.

    Bloom Energy Servers can also use exhaust heat from the Vapor Chamber to supplement their generation capacity, Crawford said.

    Another integration point is with Vapor’s software, which can adjust the amount of power the chambers draw from the fuel cells. While a Bloom Energy Server’s generation capacity is not adjustable, the amount of power it supplies to a particular piece of equipment is, Gross wrote in an email.

    “They still produce a constant level of overall power, and any excess power not consumed by critical IT loads is utilized for mechanical or other non-critical loads at the data center,” he said.

    A single Energy Server can power up to five Vapor Chambers, depending on how they’re configured, Gross said.
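    A minimal sketch of that allocation logic, with made-up numbers (the fuel cell output and chamber draws below are assumptions for illustration, not Bloom or Vapor specifications):

    ```python
    # Illustrative split of a fuel cell's constant output between critical IT
    # load and non-critical loads, per Gross's description above. All numbers
    # are assumptions for the example.
    FUEL_CELL_OUTPUT_KW = 500.0               # assumed constant generation level
    chamber_draws_kw = [120.0, 95.0, 140.0]   # assumed real-time chamber draws

    critical_load_kw = sum(chamber_draws_kw)
    if critical_load_kw > FUEL_CELL_OUTPUT_KW:
        raise RuntimeError("critical IT load exceeds fuel cell output")

    noncritical_budget_kw = FUEL_CELL_OUTPUT_KW - critical_load_kw
    print(f"Critical IT load: {critical_load_kw} kW")
    print(f"Left for mechanical/non-critical loads: {noncritical_budget_kw} kW")
    ```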

    Crawford sees the ideal customer for the integrated solution as one that wants “a very green story.” One of Vapor’s major target markets is the edge data center market, or companies that build data centers close to high-population areas where web content providers cache content to improve performance for their users and to reduce their network costs. A small footprint is important in those densely populated areas where real estate comes at a premium.

    Crawford said one company is building a “Vapor-enabled” data center in the US but declined to name the customer, citing confidentiality agreements. Vapor is also in the final stages of negotiations with customers in Europe and Asia.

    Vapor and Bloom will promote the integrated solution through their respective channels.

    1:00p
    Big Switch Unifies Physical and Virtual Networks for OpenStack

    Big Switch Networks today moved to unify the management of physical and virtual OpenStack networking, while at the same time tightening the level of integration it provides with VMware.

    In addition, the company is making available packet broker software that runs in real time on its switches, thereby eliminating the need to acquire a dedicated appliance to perform the same function.

    Greg Holzrichter, chief marketing officer for Big Switch, said Big Cloud Fabric 3.0 enables IT organizations to deploy a software-defined network that spans both physical switches and the virtual networking services enabled by OpenStack, providing a simpler alternative to addressing Neutron, OpenStack’s virtual networking service, directly.

    “We’re trying to get rid of a lot of the Neutron complexity,” said Holzrichter. “For the first time, we’re unifying physical and virtual networking in OpenStack.”

    Holzrichter said that, via an OpenStack Neutron plug-in, Big Cloud Fabric 3.0 now provides a single pane of glass through which IT organizations can manage both a physical and a virtual networking environment. At the core of that capability is Big Switch’s Switch Light software, which can be deployed on open networking switches (Switch Light OS) and on virtual servers running the Kernel-based Virtual Machine (KVM) hypervisor (Switch Light VX).
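    The practical effect for operators is that an ordinary Neutron API call can drive both layers once such a plug-in is registered. A minimal sketch using the generic openstacksdk library (the cloud profile and network names are placeholders, and the Big Switch plug-in itself is configured on the Neutron side, not shown here):

    ```python
    # Generic OpenStack networking call via openstacksdk. With a fabric
    # mechanism plug-in registered in Neutron, the same call is intended to
    # program both the virtual switches (Switch Light VX on KVM hosts) and
    # the physical leaf/spine switches. Names are placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")   # credentials from clouds.yaml

    network = conn.network.create_network(name="app-tier")
    subnet = conn.network.create_subnet(
        name="app-tier-v4",
        network_id=network.id,
        ip_version=4,
        cidr="10.20.30.0/24",
    )
    print(f"Created {network.name} with subnet {subnet.cidr}")
    ```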

    Priced starting at $49,000, Big Cloud Fabric 3.0 provides the management layer that unifies those two OpenStack networking environments.

    The new fabric also deepens its native support for VMware by providing support for both NSX-v network virtualization software running on top of VMware vSphere 6 and the distribution of OpenStack that VMware supports on top of VMware vSphere 6, known as VMware Integrated OpenStack. In addition, Big Switch is making available a VMware vCenter GUI plugin and integration with VMware vRealize Log Insight software.

    To eliminate the need for separate network monitoring appliances, Big Switch is enhancing its Big Monitoring Fabric software to include a network packet broker (NPB) that makes it possible to monitor network traffic in real time. Holzrichter said this capability removes the need to buy an expensive dedicated appliance, the cost of which, he said, is the primary reason most organizations don’t monitor network packets in real time.

    Holzrichter also disclosed that the company intends to add support for Docker containers in the future and will develop a 100G switch at commodity-level price points.

    Finally, the company rolled out an elastic pricing model under which IT organizations can acquire, for example, eight racks of switches but initially pay for only the four racks they use at first. The excess capacity of the Elastic SDN Fabric would then be consumed on an as-needed basis for $599 per switch per month.
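    A back-of-the-envelope sketch of that model; only the $599-per-switch monthly rate and the rack counts come from the announcement, while the number of switches per rack is an assumption for illustration:

    ```python
    # Rough cost sketch of the elastic pricing model described above.
    SWITCHES_PER_RACK = 2      # assumption, e.g. a redundant pair of leaf switches
    ELASTIC_RATE_USD = 599     # per switch per month, per Big Switch

    racks_installed = 8        # capacity physically deployed
    racks_paid_upfront = 4     # capacity licensed on day one

    elastic_switches = (racks_installed - racks_paid_upfront) * SWITCHES_PER_RACK
    monthly_elastic_cost = elastic_switches * ELASTIC_RATE_USD
    print(f"{elastic_switches} elastic switches -> ${monthly_elastic_cost}/month as used")
    ```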

    Big Switch is aiming to pioneer the adoption of so-called “white box” switches in the enterprise, but Holzrichter concedes that getting in the data center door more often than not requires interoperability with VMware.

    The challenge data center operators will face going forward is to what degree they can strike a balance between open network architectures and the more proprietary legacy networks that still dominate the data center landscape today.

    3:00p
    Platform9 Enables Private OpenStack Clouds on VMware, KVM

    Moving to provide IT organizations with a way to bridge the divide between OpenStack and VMware, Platform9 today announced the general availability of a service through which private OpenStack clouds can be deployed on top of VMware vSphere virtual machines.

    The service is designed to make private cloud deployment simpler. Platform9 CEO Sirish Raghuram said the company is now taking that concept a step further by giving IT organizations the option to deploy an OpenStack-based private cloud on either vSphere or the open source Kernel-based Virtual Machine (KVM) hypervisor most commonly used to run OpenStack, the popular open source cloud infrastructure software.

    Fresh off raising another $10 million round of funding, Platform9 also announced today that it is currently beta testing a version of its service that makes use of Docker containers as the delivery mechanism for a private OpenStack cloud.

    Raghuram said that while IT operations teams tend to favor VMware, developers are increasingly voting with their feet for OpenStack. The reason for this is that OpenStack APIs provide a flexible mechanism through which developers can self-provision their own IT resources, he said.

    “Developers love the API model,” said Raghuram. “What IT operations teams are now discovering is that a lot of those developers work inside the enterprise.”
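    The self-service workflow those developers favor boils down to a handful of calls against the standard OpenStack compute API. A minimal sketch using the generic openstacksdk library (image, flavor, and network names are placeholders; this is ordinary OpenStack usage, not Platform9-specific code):

    ```python
    # Boot a VM through the standard OpenStack compute API; the names and
    # the "mycloud" profile are placeholders for the example.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    image = conn.compute.find_image("ubuntu-14.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("dev-net")

    server = conn.compute.create_server(
        name="dev-box-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)   # block until ACTIVE
    print(f"{server.name} is {server.status}")
    ```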

    Raghuram said Platform9 makes use of a metadata construct to discover changes to the underlying virtual machines every five minutes. As a result, IT operations teams can make changes to them without disrupting applications running in the cloud. Other approaches to running an OpenStack distribution on top of VMware do not provide that same level of deep integration, he claimed.
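    The general shape of that approach is a periodic reconciliation loop; the sketch below is hypothetical (the discovery and metadata-update functions are invented placeholders, not Platform9 APIs), with only the five-minute interval taken from the article:

    ```python
    # Hypothetical reconciliation loop: poll the hypervisor inventory on an
    # interval and fold out-of-band changes into the cloud's metadata instead
    # of assuming the cloud is the only writer. The helper functions are
    # placeholders, not Platform9 code.
    import time

    POLL_INTERVAL_S = 300   # five-minute discovery cycle cited above

    def fetch_inventory():
        """Placeholder: query vSphere/KVM for the current set of VMs."""
        return {"vm-101": {"cpus": 4, "ram_mb": 8192}}

    def update_metadata(vm_id, observed):
        """Placeholder: record the observed state in the cloud's metadata store."""
        print(f"reconciled {vm_id}: {observed}")

    known = {}
    while True:
        for vm_id, observed in fetch_inventory().items():
            if known.get(vm_id) != observed:    # changed outside the cloud
                update_metadata(vm_id, observed)
                known[vm_id] = observed
        time.sleep(POLL_INTERVAL_S)
    ```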

    Once deployed, Platform9 provides IT organizations with a single pane of glass to manage private cloud deployments on vSphere and KVM in a way that enables them to more consistently enforce service level agreement policies, said Raghuram.

    The rise of OpenStack is clearly starting to exacerbate pre-existing tensions between the IT operations team and developers. Many IT teams don’t view OpenStack as mature enough to deploy in production environments.

    Nevertheless, cloud service providers that have lots of internal engineering resources are already deploying OpenStack. Those cloud services give developers a set of agile IT capabilities in the cloud that they now increasingly expect to be available on-premise.

    Regardless of the pace of adoption, what’s clear is that IT operations teams will probably wind up living with some form of OpenStack alongside VMware for years to come. Otherwise, developers will simply continue to push more applications into the public cloud with or without the approval of the IT operations team. The challenge IT operations teams face is finding the most tenable way to enable OpenStack and VMware to coexist peacefully.

    3:30p
    Why IT Automation Matters Today

    Justin Nemmers is the Director of the US Public Sector Group at Ansible.

    The benefits of IT automation are vast. It frees developers and sysadmins from repetitive administrative tasks so they can focus on providing value to the business, and it improves workflow and quality of life. Yet many organizations struggle to adopt IT automation because their environments are too complex.

    For example, consider the following scenario: Your development team has just completed weeks of work, delivering their masterpiece – a ready-to-deploy application – to IT, but it doesn’t work once IT deploys it. Why? The network port used by the development team must be opened on the firewall so end users can communicate with the software, but IT changed the firewall rule and forgot to tell development. No procedure or policy was created to capture all of the changes necessary to successfully deploy the app, and now you’re looking at an unnecessary delay, one that could have been avoided altogether if a better structure had been in place to account for these disparate factors. What’s worse, discovering the error doesn’t mean it has been corrected for all future releases – the same roadblock may happen again and again. It’s a vicious cycle.
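    A simple pre-release check of the kind that would have caught this is to verify the application’s required ports are actually reachable before declaring the deployment done. A minimal sketch, with a placeholder host and ports:

    ```python
    # Verify required ports are reachable before sign-off; the host and port
    # numbers are placeholders for the example.
    import socket

    REQUIRED_PORTS = {"app.example.internal": [443, 8080]}

    def port_open(host, port, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    failures = [(host, port) for host, ports in REQUIRED_PORTS.items()
                for port in ports if not port_open(host, port)]
    if failures:
        raise SystemExit(f"Blocked or closed ports, deployment will fail: {failures}")
    print("All required ports reachable.")
    ```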

    IT departments struggle to manage thousands of configurations and hundreds of applications, often with highly separated teams working in silos. The reasoning behind this is simple: the teams responsible for developing apps are typically not integral parts of the teams tasked with deploying them, and needed changes just don’t get conveyed back to the development team.

    In the past, applications and hardware were closely connected. Apps came from single vendors, complete with their own hardware and software, and were supported as a unit within your environment. Hardware was only loosely standards-based, which meant organizations chose a vendor and were then tied to that vendor for their hardware and software. Even though it was difficult to change vendors, it could be done if you redesigned nearly everything in your environment. As time went on, however, the tight coupling of hardware and software began to loosen.

    Applications for Any Operating System

    Hardware became commoditized, and open standards-based architectures allowed software providers to build their own operating environments. Suddenly, software developers could develop applications for any operating system, regardless of the hardware. At the same time, companies gained more freedom, as they no longer had to rely on a single vendor for their hardware and software needs. However, as is often the case, the introduction of more choices brought further complications to a once simple, straightforward process. Some would deem this the tyranny of choice.

    While hardware could be bought from anyone and organizations could choose their own operating systems and applications, they also now had to manage all of these pieces in-house rather than rely on their hardware providers for support.

    With the rise of virtual environments, it was no longer possible to point to one server and easily identify what it did. In this new landscape, the data center continuously grew, and managing it fell on the IT department’s shoulders.

    Though there are a number of tools available to help manage these more complex and virtual IT environments, they are often incomplete. When these tools were built, applications were easy to configure because a company’s web server, database server and middleware were all in one place.

    But today, application workloads are more widely distributed, and IT applications and configurations are more complex. Single point-in-time configuration management alone is simply no longer adequate.

    Think about it like this: When you come home from the grocery store, there is a precise and specific set of processes – an orchestrated workflow – that needs to happen in order for you to get from inside your car to your sofa.

    First, you pull into your driveway. Then, you stop the car, open the garage door, open your car door, shut the car door, walk to the house, unlock the door, etc. This orchestrated set of events needs to occur the same way, every time (i.e. You can’t open your door before you stop the car.).

    Similarly, in IT there has historically never been a single tool that could accurately describe the end-to-end configuration of each application in a particular environment. Though some tools could describe the driveway, for example, they could not also accurately describe how the car interacts with the driveway, nor how the key opens the door (its height and width, and whether the handle is on the left or right side, etc.). This sequence of seemingly basic tasks is analogous to the process of developing and deploying any application in the modern IT landscape.
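    In code, that orchestrated, order-dependent workflow boils down to running a fixed sequence of steps and halting at the first failure. A toy sketch (the step functions are placeholders, not any particular automation tool’s syntax):

    ```python
    # Toy workflow runner: steps execute in a fixed order and the run halts
    # at the first failure, mirroring the "you can't open the door before you
    # stop the car" ordering. Step bodies are placeholders.
    def stop_services():      print("services stopped")
    def open_firewall_port(): print("port 8080/tcp opened")
    def deploy_artifact():    print("application deployed")
    def smoke_test():         print("smoke test passed")

    WORKFLOW = [stop_services, open_firewall_port, deploy_artifact, smoke_test]

    for step in WORKFLOW:
        try:
            step()
        except Exception as exc:
            raise SystemExit(f"workflow halted at {step.__name__}: {exc}")
    ```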

    The key is helping IT organizations understand the big picture of how these hundreds of configurations, applications, and teams of people can successfully work together. It’s the piece of strategy that separates the IT teams that will successfully transform and adapt to rapidly changing technology from those that will continue to spend too much money struggling just to keep their heads above water.

    Ideally, development teams would create a playbook that they deliver alongside their application so that IT could then use it to deploy and manage said application. When changes to the playbook are made, they are sent back to the development team so that the next time they deploy the application they are not reinventing the wheel.

    This eliminates the massive back and forth and miscommunication between the two teams, which also reduces delays in deployment. By automating this process, there are fewer human errors and better communication and collaboration overall. Companies can save money, compress their deployment time and time between releases, and validate compliance frequently and automatically. It injects some agility into traditional development and operations methodology.

    Once a playbook has been created for the first deployment, IT departments already have a proven roadmap for how to do it right the next time.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:37p
    Leaked Documents Detail Tight NSA-AT&T Partnership


    This article originally appeared at The WHIR

    Newly disclosed National Security Agency documents describe a “highly collaborative” relationship between AT&T and the NSA.

    According to a report over the weekend by The New York Times, AT&T’s cooperation included a range of classified activities from 2003 to 2013 that gave the NSA access to billions of emails as they were transmitted across the carrier’s networks in the US.

    The release of the documents comes as privacy advocates rally against the Cybersecurity Information Sharing Act of 2015 (CISA), which has been widely criticized for allowing companies to hand over user data to the government without a warrant. Voting on CISA has been delayed and is slated for the fall.

    Watch a recording of a Data Center Knowledge webinar on the impact of NSA surveillance disclosures on the data center provider industry

    The division of the NSA that handles corporate partnerships, including that with AT&T, is responsible for more than 80 percent of the information the NSA collects.

    In one example, AT&T participated in an NSA program called Fairview in 2003, turning on a new collection capability that forwarded 400 billion Internet metadata records and one million emails a day to the agency’s keyword selection system in one of its first months of operation, according to the documents.

    A 2012 presentation said that the NSA does not typically have direct access to telecoms’ systems. “Corporate sites are often controlled by the partner, who filters the communications before sending to NSA,” the presentation said. By 2013, AT&T was giving the NSA access to 60 million foreign-to-foreign emails per day.

    Domestic wiretapping laws do not cover foreign-to-foreign emails, according to the report, which means AT&T had provided the information voluntarily, not in response to court orders.

    AT&T denied in a statement provided to the NYT that it voluntarily provides information to investigating authorities “other than if a person’s life is in danger and time is of the essence.”

    Last year, AT&T released a transparency report that showed it had received 301,816 subpoenas, court orders and search warrants for real-time information.

    This first ran at http://www.thewhir.com/web-hosting-news/leaked-documents-detail-highly-collaborative-partnership-between-nsa-and-att

    6:45p
    New Submarine Cable to Connect Equinix Data Centers in New York, London

    Equinix has added its New York and London data centers to the list of sites that will connect to the new transatlantic submarine cable system being built by the Irish operator Aqua Comms.

    Aqua Comms partner TE SubCom earlier this month loaded cable for the America Europe Connect (AEConnect) fiber-optic cable system onto Reliance, one of its cable-laying ships, in preparation for deployment. Microsoft signed on as the first “foundational” customer of the system earlier this year.

    The cable will land in Shirley, New York, and in Killala, Ireland, on the European side. Aqua Comms plans to build backhaul routes from Shirley to New York and New Jersey for data center connectivity in that market. It already owns a backhaul network in Europe, called CeltixConnect, that will carry AEConnect traffic between Killala, Dublin, Wales, Manchester, Slough, and London.

    Here’s a visualization of the future transatlantic route by Aqua Comms:

    [Map: Aqua Comms AEConnect transatlantic route]

    There are currently about 15 submarine cables linking sites on the East Coast of the US to Europe directly, according to a map by the telecom analyst firm TeleGeography. There is also one between Canada and Europe.

    Transatlantic bandwidth is in high demand, according to Equinix. The London-to-New York route is the “second-largest international internet traffic route globally with multiple terabits of peak traffic,” the data center provider said in a statement, citing data by TeleGeography.

    Another notable trend is skyrocketing demand for private transatlantic connectivity outside of the public internet. The analyst firm noted in a 2015 report that private bandwidth on the transatlantic route has surpassed public internet bandwidth for the first time in history. Private connectivity now accounts for 56 percent of used bandwidth on the route.

    With its aggressive global expansion efforts, including in the US and Europe, Equinix needs access to all the bandwidth it can get.

    Just in March, the company announced an initiative to add close to 500,000 square feet of data center space across New York, Toronto, London, Singapore, and Melbourne markets. All five are major densely-populated metros with enormous demand for data center connectivity.

    7:30p
    Linux Foundation to Create Open Object Storage Spec

    Looking to make it simpler to embrace a next generation of storage devices, the Linux Foundation this week launched the Kinetic Open Storage Project, which counts a number of networking, storage, and enterprise software heavyweights among founding members.

    The project is looking to expand the scalability of storage by enabling storage drives to be connected directly to an Ethernet interface in a way that lets software-defined storage applications manage them directly. In effect, the goal is nothing short of decoupling hardware and software to make it easier to truly scale performance across petabytes of storage.

    It assumes that the IT world as a whole will be making a major shift to object-based storage. Generally embraced by cloud service providers as a way to more efficiently manage cloud storage at scale, object-based storage requires developers to either move away from traditional file systems or deploy a file system that can be layered on top of an object-based storage API that makes the object storage system appear as a traditional network-attached storage (NAS) system.

    The foundation made the announcement at its LinuxCon event in Seattle. Jim Zemlin, executive director of the foundation, said the way storage is accessed is facing fundamental changes in the very near future.

    “Object storage means the way storage is accessed is changing,” he said. “We’re going to be able to access the drive directly.”

    Specifically, the Kinetic Open Storage Project will provide common mechanisms for establishing Ethernet connectivity and for defining a key-value store that allows storage applications to access drives directly, without the need to be managed by a storage server. The new project will manage all the associated open source libraries, APIs, and simulators that need to interface with Kinetic-based drives.
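    In practice, that means an application addresses a drive by IP address and key rather than through a file system or block device. The sketch below is hypothetical; the KineticClient class and its methods are invented for illustration and are not the project’s published libraries:

    ```python
    # Hypothetical illustration of the key-value access model: the application
    # talks to a drive at an IP address and stores values under keys, with no
    # storage server in between. This class is an invented stand-in, not the
    # Kinetic project's actual API.
    class KineticClient:
        def __init__(self, drive_ip: str, port: int = 8123):
            self.drive_ip, self.port = drive_ip, port
            self._store = {}                  # stands in for the drive itself

        def put(self, key: bytes, value: bytes) -> None:
            self._store[key] = value          # real client: an Ethernet round trip

        def get(self, key: bytes) -> bytes:
            return self._store[key]

    drive = KineticClient("10.0.0.42")
    drive.put(b"object/0001", b"payload bytes")
    print(drive.get(b"object/0001"))
    ```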

    Developers of storage solutions are expected to use those vendor-agnostic open source libraries and APIs to create applications that invoke Kinetic-based drives rather than traditional storage controllers.

    Cisco, Cleversafe, Dell, DigitalSense, NetApp, Open vStorage, Red Hat, Scality, Seagate, Toshiba, and Western Digital are among the project’s founding members.

    While three of the top manufacturers of drives are among the founders, leading storage vendors, such as EMC, Hitachi Data Systems, and HP, are absent.

    While new applications generally are written directly to object-based APIs, most of the cloud storage invoked by enterprise IT organizations these days still relies on a traditional file system. However, the eventual development of drives that connect directly to Ethernet interfaces and deliver orders of magnitude more IOPS than existing NAS systems may finally force a greater shift toward native object-based storage, both inside and outside the cloud.

    Here’s more LinuxCon coverage by Data Center Knowledge:

    IBM Launches Linux Mainframes, Open Sources Mainframe Software

    PlumGrid, Cisco, Others Launch Open Network Virtualization Project

    And here’s why data center operators should care about open source

