Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, April 13th, 2016

    Report Confirms Large Cloud Providers Drive Q1 Leasing

    Despite record leasing in 2015, cloud service providers continue to have a large appetite for data center space, especially in Northern Virginia.

    Jim Kerrigan, managing principal at North American Data Centers (NADC), a data center-focused commercial real estate firm, confirmed that large cloud providers continue to actively look to lease space from third-party landlords.

    NADC just released its Q1 2016 leasing update, which highlighted that the Northern Virginia hub continues to be the strongest data center market nationally. A steady supply of new projects in Loudoun County has led to the lowest wholesale pricing in the country ($105-$115 per kW), even as tenant demand remains robust.

    Demand in Northern Virginia is largely driven by its network connectivity and cloud-dense ecosystem, which attract content providers and social networks, as well as numerous government contractors and agencies.

    During Q1, Virginia extended sales tax incentives for data centers from 2020 through 2035. Dominion Power and NOVEC provide electricity at $0.045-$0.065 per kWh while trying to keep up with demand from data centers.
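
    For a sense of scale, the two figures above can be combined into a back-of-the-envelope annual cost for a tenant. The sketch below is illustrative only: it assumes the quoted wholesale rate is a monthly figure (a common industry convention, though the report does not say) and a fully utilized 1 MW deployment.

        # Back-of-the-envelope annual cost for a hypothetical 1 MW wholesale
        # deployment in Northern Virginia, using the rates quoted above.
        # Assumptions: the $/kW rent is per month, power is metered separately,
        # and the load runs at full utilization all year.
        it_load_kw = 1_000      # 1 MW of leased critical IT load
        hours_per_year = 8_760

        for rent_kw_mo, power_kwh in [(105, 0.045), (115, 0.065)]:
            annual_rent = it_load_kw * rent_kw_mo * 12
            annual_power = it_load_kw * hours_per_year * power_kwh
            print(f"${rent_kw_mo}/kW and ${power_kwh}/kWh -> "
                  f"${annual_rent + annual_power:,.0f} per year")
        # Low end: $1,260,000 rent + $394,200 power = $1,654,200 per year.
        # High end: $1,380,000 rent + $569,400 power = $1,949,400 per year.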

    In addition to Northern Virginia’s existing 4 million square feet of data centers, the NADC report anticipates that the 125-acre former MCI campus will be marketed for sale within the next three months.

    Cloud Absorption Is Accelerating

    The first quarter of 2016 has seen leasing momentum continue after record absorption in many markets during 2015.

    Read more: Who Leased the Most Data Center Space in 2015?

    Amazon Web Services continues to actively look for space in addition to its build-to-suits, while Microsoft aggressively expands its cloud footprint in order to gain market share. Now Google has officially tossed its hat into the expansion race, announcing its intention to launch 10 more cloud regions by the end of 2016.

    Read more: Diane Greene: Google is “Dead Serious” about Enterprise Cloud

    The NADC report confirmed that it was Microsoft who leased 22 MW in Ashburn, VA, plus another 9 MW in San Antonio from data center REIT CyrusOne. Microsoft also leased 16 MW in Santa Clara, CA from DuPont Fabros, which significantly reduced the amount of available space in that market.

    Even Equinix is getting in on the cloud feeding frenzy in Ashburn, Virginia, having recently leased 2 MW of data center space to Google. Equinix began construction on the first phase of its new 45-acre Ashburn campus in October 2015. Kerrigan told Data Center Knowledge that he would not be surprised to see Equinix sign more 2 MW strategic deals in the future.

    [Chart: North American Data Centers 1Q 2016 Ashburn leasing summary]

    Source: North American Data Centers – April 2016

    He added, “It is ironic that everyone is concerned that the cloud is going to eliminate the need for data center space when in reality the cloud absorbed 40 percent of wholesale space in 2015, and if the first quarter is indicative of the rest of 2016, it will be closer to 75 percent for multi-tenant data centers.”

    Read more: Hybrid Cloud Growth Powers Data Center REITs 19.6 Percent Higher

    This cloud momentum may be critical for landlords in 2016, since the report noted enterprise customers have historically been reluctant to sign new deals prior to a national election.

    National Leasing Trends

    Overall, the colocation and wholesale market is the tightest that NADC’s Kerrigan has ever seen, a direct result of record leasing over the last two quarters.

    However, a handful of markets and properties have not benefited from the surge in leasing activity. Notably, Houston and Minneapolis have seen significant new supply over the last few quarters but limited demand.

    While the CyrusOne sale-leaseback of CME Group’s Chicago Globex data center announced earlier this year was somewhat unique, Kerrigan believes the enterprise data center outsourcing trend is clearly gaining strength. The NADC report noted a $300 million pipeline of data center assets that companies are looking to monetize.

    Another trend in 2016 has been more large users looking for N (non-redundant) solutions rather than N+1 or 2N redundancy.

    Kerrigan’s firm sees the colocation market tightening in Chicago, Dallas, and Ashburn, which should firm up pricing for Equinix and Telx (now part of Digital Realty) at certain key facilities.

    The North American Data Centers April 2016 report is available here.

    Datera, Fresh from Stealth, Moves Intent-Based Config to Data Storage

    You will see much more of this word from now on: decoupling. Data center architects familiar with software-defined infrastructure will recognize it as referring to separating the resource requirements of software from the hardware that hosts it. Software-defined networking accomplishes this by separating the data that makes up network traffic from the instructions that control and manipulate it — the data plane and the control plane, respectively.

    Datera is not the first company to attempt to apply SDN principles to data storage. As it emerged from stealth mode on Tuesday, Datera introduced its key product: a software-defined data storage platform called Elastic Data Fabric. Think of EDF as a kind of “data stack” that provides capacity to applications on an as-needed basis, endeavoring to provide flexibility and responsiveness similar to Amazon’s Elastic Block Store, but with on-premises, commodity x86 hardware.

    As you’ve already surmised, Datera’s methodology is based on decoupling software from infrastructure. The ideal implied here is that no application that consumes data should be responsible for determining the design of the infrastructure that supports it — especially given how rapidly the demands of modern applications change.

    Pomp and Circumstance

    The EDF product is commercially ready now, with Datera boasting of four customers having successfully deployed it during the company’s stealth period: logic and IC engineering firm Cadence Design Systems, bare-metal cloud provider Packet, IT consultant Schuberg Philis, and Germany-based OpenStack hosting firm Teuto.net. Its financial backing, the company announced Tuesday, includes a $40 million round led by Sun Microsystems co-founder Andy Bechtolsheim and Juniper Networks vice chairman and CTO Pradeep Sindhu, with additional investment from Khosla Ventures and Samsung Ventures.

    If that’s not reputation enough, Datera’s founding CEO is Marc Fleischmann. Run a search for “decoupling” in the U.S. Patent Office database, and you’re likely to run into his name.

    In 2001, as an engineer for a firm whose name IT veterans will recall — Transmeta Corporation — Fleischmann led the design of a kind of “dimmer switch” for CPUs. This switching mechanism enabled Transmeta’s Crusoe line of processors to throttle power consumption during periods of light load. It was decoupling, just at another level. And it sent competitor Intel back to the drawing board, changing how its engineers perceived power and helping to trigger the “Tick-Tock” process revolution that remade the company. (Transmeta, you’ll recall, was also the employer of one Linus Torvalds.)

    In 2008, Fleischmann co-founded RisingTide Systems, along with Linux SCSI project maintainer Nicholas Bellinger and Transmeta veteran Claudio Fleiner. It is the heart of RisingTide that descended into stealth for a while, to re-emerge this week as Datera. You could say a huge chunk of Linux’s heart and soul is responsible for powering this new storage venture.

    “The goal of RisingTide was to turn Linux into a viable storage operating system,” wrote Fleischmann in a note to Data Center Knowledge, “and achieve wide distribution for our block storage stack (LIO) to make our stack the industry standard. By 2013, we had achieved our goals, and helped storage startups, including Pure Storage, to become very successful.

    “However, while the industry took great advantage of our open source stack to replace proprietary hardware with less-proprietary hardware (and called it ‘software-defined’),” Fleischmann continued, “we think they missed the point. Mapping the rigid infrastructure of the past into software is simply not enough to deliver the DevOps-centric infrastructure operations model of the future. We imagined a fundamentally different, continuous delivery model for storage, to create a modern, agile data fabric for enterprises and service providers building cloud data centers.”

    This Time, Floating All Boats

    Fleischmann’s reputation, and those of his colleagues and financial backers, may not matter nearly as much to data center architects today as the future operations model the CEO mentions. More to the point, all the stagecraft and marketing wizardry may pale in comparison to the question of whether Datera can scale.

    In today’s hyperscale data centers, where the demands of applications shift and adjust rapidly, configuration management platforms from the 2000s are failing. Decoupling is becoming vitally necessary here, because configuration scripts can no longer be tied to an infrastructure schematic that changes from day to day.

    Now that it’s in the public eye, Datera must quickly work to distinguish itself in a market where “decoupling,” “software-defined,” and “Amazon-like” are skirmishing with each other for the top of the buzzword list. To that end, Fleischmann told us, he aims for Datera to bring to storage infrastructure the counterpart of what containerization and orchestration have brought to workload infrastructure.

    When RisingTide tried this path once before, it met fierce competition — and, in some circles, outright opposition — from Red Hat, which acquired SDS pioneer Gluster back in 2011 and established itself as a dominant force in that market. How will Datera differentiate itself this time around?

    “Red Hat indeed uses our open source block storage software stack LIO successfully, together with other open source software, to create software-defined storage products,” responded Datera CEO Fleischmann. “However, all of these products require extensive manual configuration and substantial support to deploy and operate them. This is by design, as open-source software companies rely on a services and support business model.”

    Here is where Fleischmann’s newly remade company picks up one of the mantles from SDN: intent-based configuration.

    Inside Datera’s Intent

    Datera gave Data Center Knowledge a look at its API User’s Guide, which reveals the company’s EDF management information model (MIM) in full. As with most modern APIs, this one is programmable entirely through calls placed over HTTP.

    [Screenshot: Datera API User’s Guide PDF, April 2016]

    In EDF’s model, there are three principal constructs that relate to the storage process: applications, storage nodes, and volumes. Each construct is divided into two classes of object: one for the construct in general, the other for each specific instance (“an application instance,” for example, as opposed to “the application”). Counterintuitively, in the way EDF breaks down its devices, a volume descriptor is contained by the storage node category, and a storage node descriptor by the application category. This corresponds to what is truly important in this system: the application (or, more accurately, the orchestrator making space and time for the application) sets the rules for how resources are to be utilized and whose user access policies apply to them.

    Rather than being handed a colossal manifest detailing the present state of the data center, EDF listens for API calls. Each of these calls specifies some part of the intent of the object — its “expectations” for how it will utilize resources. EDF responds by adjusting the MIM, which represents the “live” configuration for the entire data storage space.

    “We take the complexity out of storage operations,” wrote Marc Fleischmann. “Users can simply define the goals they desire for their applications (in ‘application intents’), and let our intelligent software do the rest, instantly, automatically, and at any scale.”
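
    Datera has not published its schema beyond the API guide, but the shape of such an intent-driven call is easy to sketch. In the Python sketch below, the endpoint URL, credentials, and every field name are hypothetical stand-ins that mirror the application / storage node / volume containment described above; they are not Datera’s documented API.

        # Hypothetical sketch of declaring an "application intent" against an
        # EDF-style REST API. The endpoint, auth, and all field names are
        # illustrative assumptions, not Datera's documented schema.
        import requests

        EDF_API = "https://edf.example.internal/v2"  # hypothetical endpoint

        # The application instance contains a storage template, which in turn
        # contains volume descriptors -- mirroring the containment hierarchy
        # described above.
        app_intent = {
            "name": "orders-db",
            "storage_templates": [{
                "name": "primary",
                "volumes": [{
                    "name": "data",
                    "size_gb": 500,
                    "replica_count": 3,   # a goal, not device-level config
                    "max_iops": 20_000,   # a goal, not device-level config
                }],
            }],
        }

        # EDF listens for calls like this and folds the declared intent into
        # its live management information model (MIM).
        resp = requests.post(f"{EDF_API}/app_instances", json=app_intent,
                             auth=("admin", "secret"), timeout=10)
        resp.raise_for_status()
        print(resp.json())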

    With Docker, a script can be written to represent the intent of an object, one example being a storage volume. This is especially important for Docker, whose containers are designed to be “ephemeral,” and whose data was originally not supposed to survive the termination of the container (this “stateless” model has since been intentionally circumvented in many ways).

    As the company’s containers solution brief [PDF] shows, an intent template for a volume may specify the maximum number of I/O events per second, or the bandwidth in MB/sec, or the maximum number of replicas to be allowed — elements of the volume’s policy. Since this template is just text, it’s conceivable that a UI can be readily created to compose such a template on the spot. The operator then invokes the Datera volume driver from the Docker command line, and passes the template through that command.
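
    Because the template is just text, composing one programmatically is straightforward. The sketch below is illustrative only: the policy keys and the docker volume option names are assumptions made for the example, not the documented ones from the solution brief.

        # Sketch: compose a volume "intent" template as JSON, then hand it to
        # a Docker volume driver. The policy keys and the --opt flag names are
        # illustrative assumptions, not Datera's documented options.
        import json

        volume_intent = {
            "max_iops": 10_000,        # ceiling on I/O events per second
            "max_bandwidth_mb": 250,   # ceiling on throughput, MB/s
            "replica_count": 2,        # replicas allowed for this volume
        }

        with open("orders-db-volume.json", "w") as f:
            json.dump(volume_intent, f, indent=2)

        # The operator would then invoke the vendor's volume driver from the
        # Docker command line, passing the template through, for example:
        #   docker volume create --driver datera \
        #       --opt template=orders-db-volume.json orders-db-data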

    Datera EDF is not a data lake. However, that doesn’t mean a customer could not use it to build one, said Fleischmann.

    “Our elastic data fabric is designed for both modern data lakes and more traditional storage use models,” the CEO told us. “We contributed our block storage stack LIO to Linux to make it an industry standard connector, behind which we can build an elastic data fabric to unify such a wide spectrum of use cases. We have more storage protocols on our roadmap, to make our data fabric very broadly usable. On top of it, we built a powerful policy-based management plane, to allow an equally broad spectrum of applications to automatically consume the data fabric, while we automatically configure all of its elements and continuously optimize them.”

    It’s an effort to apply two of the SDN field’s most compelling concepts, decoupling and intent-based configuration, to data storage.


    OpenStack Company Mirantis Rekindles Dell Partnership
    By The WHIR


    Mirantis announced this week that it has joined Dell’s Technology Partner Program and is collaborating closely with Dell to produce a reference architecture for Dell PowerEdge servers.

    Mirantis OpenStack is now certified on Dell PowerEdge servers with Dell networking, so customers can deploy a scalable OpenStack environment using the reference architecture and deployment guide as a template.

    The company partnered with Dell a few years ago, working on customer deployments and contributing to the Dell-developed Crowbar software framework, which manages the OpenStack deployment from the initial server boot to the configuration of primary OpenStack components. But about a year-and-a-half into the partnership, Dell partnered with Mirantis competitor (and former investor) Red Hat.

    In an interview with CIO, Mirantis co-founder and CMO Boris Renski said that Dell “took the easy way out” by partnering with Red Hat on OpenStack since the companies had an existing partnership on the analytics side. “Now, a few more years have passed. We’ve moved our business from being 150-person OpenStack service company to now 900-person OpenStack distribution company,” he told CIO.

    The Mirantis-Dell announcement doesn’t appear to be dampening Dell’s partnership with Red Hat. This week Red Hat announced that Japanese mobile services provider C.A. Mobile built an infrastructure based on an OpenStack cloud solution from Red Hat and Dell.

    Read more: Volkswagen Picks Mirantis for OpenStack Private Cloud

    The fully validated deployment runs on Dell PowerEdge R630 and R730xd servers with Dell Networking S3048-ON and S4048-ON top-of-rack switches. It includes a hardware and software deployment and configuration guide, Mirantis said, to help ensure successful deployments from bare metal to a production private cloud.

    Read more: Intel Leads $100M Round for OpenStack Cloud Heavyweight Mirantis

    “In order to accelerate time to value and deliver web-scale applications rapidly, our enterprise customers want a validated and comprehensive blueprint of how to build a scalable private OpenStack cloud with the Dell server and networking components,” Kamesh Pemmaraju, vice president of product marketing at Mirantis, said in a statement. “The combination of validated Mirantis OpenStack along with Dell’s proven server and networking solutions delivers a rapidly deployable, powerful, scalable, rack-based enterprise cloud solution.”

    Original article appeared at http://www.thewhir.com/web-hosting-news/openstack-company-mirantis-rekindles-dell-partnership

    HPE Debuts ProLiant Easy Connect Platform for SMBs
    By The VAR Guy


    HPE is looking to capture a larger share of the SMB market with the launch of the ProLiant Easy Connect Managed Hybrid solution, the first in a new series of plug-and-play devices for small and mid-sized businesses sold exclusively through the channel.

    The Hewlett-Packard Enterprise (HPE) ProLiant Easy Connect is a subscription-based server that combines on-premises storage with the ability to manage files remotely via the cloud. The device is meant to help SMBs quickly and easily spin up a hybrid cloud server that is both secure and manageable from anywhere, according to McLeod Glass, vice president and general manager of SMB solutions and Tower Servers at HPE.

    “Small businesses want to focus on growing their core businesses, not spending their limited resources on deploying and managing IT,” he said. “This new solution is part of a broad HPE initiative, inspired by the unique needs of small and mid-sized businesses, to deliver innovative solutions that are easy for our channel partners to sell and easy for our customers to use.”

    According to a study from AMI Partners, the lack of user-friendly cloud solutions has actually caused many SMB users to switch back to on-premises solutions. To counteract this trend, HPE is working with Zynstra, which specializes in cloud management software and virtualization solutions, to make its latest piece of hardware easier to set up and manage for SMBs without advanced IT capabilities.

    “The cool thing about this is it delivers value to the end customer and the partner as well,” said Glass. “The SMB market is a very important market for us. We’re looking for innovative ways to solve their problems and partners are very important in that [journey].”

    Currently, partners can choose to offer either a 12-month or a three-year subscription plan, according to Glass. A pay-per-month plan will also be made available in the near future.

    HPE plans to make the ProLiant Easy Connect Hybrid solution available to partners in the United States and UK beginning on April 28th. The company did not comment on planned availability for the rest of its global channel partners.

    HPE did not comment on product details regarding the rest of the Easy Connect family of solutions.

    This post was originally published at: http://thevarguy.com/smb/hpe-debuts-proliant-easy-connect-platform-smbs
