Data Center Knowledge | News and analysis for the data center industry

Wednesday, February 24th, 2016

    1:00p
    Docker Makes Docker Easier for Data Center Managers

    Application containers, namely Docker containers, have been heralded as the great liberators of developers from worrying about infrastructure. Package your app in containers, and it will run in your data center or in somebody’s cloud the same way it runs on your laptop.

    That has been the promise of the technology ever since Docker, the San Francisco startup, built its application building, testing, and deployment platform around the long-existing concept of Linux containers. While developers love the concept, the IT managers who oversee the infrastructure those applications eventually run on have processes, policies, requirements, and tools that weren’t necessarily designed to support the way apps in Docker containers are deployed, or the rapid-fire software release cycle those containers are ultimately meant to enable.

    This week, Docker rolled out into general availability its answer to the problem. Docker Datacenter is meant to translate Docker containers and the set of tools for using them for the traditional enterprise IT environment. It is a suite of products that enables the IT organization to stand up an entire Docker container-based application delivery pipeline that is compatible with IT infrastructure, tools, and policies already in place in the enterprise data center.

    Read more: Eight Key Features for IT Managers in Latest Docker Release

    That means developers can use Docker containers, but the applications they build can run on existing dedicated servers, VMs, or cloud infrastructure and comply with existing requirements. “The only dependency that we have is the host running Linux kernel,” Scott Johnston, senior VP of product management and product design at Docker, said.

    The suite consists of three products:

    • Docker Universal Control Plane, the central enterprise IT management dashboard for Dockerized applications;
    • Docker Trusted Registry, which gives data center managers the ability to define for developers the types of container images that comply with their policies and requirements; and
    • Docker Engine, the heart of the Dockerized infrastructure: the runtime that builds and runs Docker containers.

    [Diagram: Docker Datacenter platform components]

    (Source: Docker)
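
    To make the workflow these components manage concrete, here is a minimal sketch using the Docker SDK for Python (the docker package). The registry address and image name are hypothetical placeholders, and Docker Datacenter itself is driven through its own dashboards and APIs rather than this client:

        # Minimal sketch of the engine-level build-ship-run loop the suite
        # manages. The registry address and image name are hypothetical.
        import docker

        client = docker.from_env()  # connects to the local Docker Engine

        # Pull an approved image from a private registry (a stand-in here
        # for Docker Trusted Registry).
        image = client.images.pull("registry.example.com/team/web-app:1.0")

        # Run it; the only host-side dependency is a Linux kernel.
        container = client.containers.run(
            image.id,
            detach=True,
            ports={"8000/tcp": 8000},  # map container port 8000 to the host
        )
        print(container.short_id, container.status)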

    Docker went to great lengths to minimize the pain of bringing in an entirely new system for deploying and managing applications, so Docker Datacenter comes with open interfaces to existing storage, networking, configuration management, logging, monitoring, and content management tools.

    According to Johnston, the platform is easy to set up, and a small IT shop with a few nodes can get up and running in a matter of minutes. For big enterprise data center deployments, Docker partners with the familiar system integrators in that space, companies like Accenture and Booz Allen Hamilton, he said.

    Docker sells the platform as a subscription service: $150 per month per node (physical or virtual) for business-hours support (12 hours a day, five days a week) or $300 per month per node for 24/7 support.

    4:00p
    Cause and Effect: The Hidden Risk in Your Data Center

    We’re constantly hearing about how the lack of rain in much of the Southwest has contributed to the worst drought in the history of the region, but the subject of water doesn’t come up much with respect to data centers.

    However, it should garner just as much attention—specifically water treatment programs—according to Data Center World speaker Robert O’Donnell, managing partner of Aquanomix.

    “The water management program is a huge risk in data centers; one that many facility owners don’t understand or give enough credence to,” he says.

    In O’Donnell’s session, “Cause & Effect: The Hidden Risk in Your Data Center,” he will focus on how a water treatment controller monitors the quality of the water pumping through your chiller plant, where that water has a tendency to corrode, scale, foul, and support microbiological growth in the system.

    “These effects can take your data center down, and quickly. The relationship between your system’s heat exchangers and water management is a vulnerable target that’s easily managed,” says O’Donnell.

    Unlike the drought, the issue isn’t just the quantity of water used, but also the quality of that water. To operate at peak efficiency, facilities must ensure that the water they use is pure and that the piping is free of biofilm and corrosion.

    Wherever water is used, biofilm and rust are concerns. Biofilms are formed by microbes that grow on the inner surfaces of pipes and other water containment vessels. Rust—iron oxide formed by the redox reaction of iron and oxygen—also forms inside pipes and deposits precipitates in the water.

    Mild steel, galvanized iron with copper and zinc coatings, stainless steel, copper alloys and plastic are all used in water systems. Likewise, the water itself has varying properties that affect cooling systems, including hardness, alkalinity, total suspended solids, ammonia and chloride.

    For example, chlorine, a common water purifying agent, corrodes most metals. Ammonia is used to produce chloramine, a less aggressive alternative to chlorine, but it promotes biofilm development in heat exchangers and in the cooling tower. Additionally, with hard water, calcium salts precipitate out as water temperature increases.

    The costs of biofouling and scaling can be significant. For example, a 1,000-ton air conditioner operating 12 hours per day, at a cost of 20 cents per kWh, would incur roughly $29,932 in added operating costs each year with only 0.012 inches of scaling and biofilm. When that scaling and biofilm triples, so does the added cost.
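
    For a sense of the arithmetic behind that figure, here is a back-of-the-envelope sketch; the baseline efficiency of 0.6 kW per ton and the roughly 5.7 percent energy penalty for 0.012 inches of fouling are assumptions for illustration, not numbers from the session:

        # Back-of-the-envelope estimate of the added energy cost of fouling.
        # The efficiency and penalty figures are assumptions, not from the talk.
        TONS = 1_000               # chiller capacity in refrigeration tons
        KW_PER_TON = 0.6           # assumed baseline electrical efficiency
        HOURS_PER_YEAR = 12 * 365  # 12 hours per day
        RATE = 0.20                # dollars per kWh
        PENALTY = 0.057            # assumed extra energy at 0.012 in of fouling

        baseline_kwh = TONS * KW_PER_TON * HOURS_PER_YEAR
        added_cost = baseline_kwh * PENALTY * RATE
        print(f"added cost of fouling: ${added_cost:,.0f}/yr")  # about $29,960
        # Tripling the fouling triples the penalty, and hence the added cost:
        print(f"at 3x fouling: ${added_cost * 3:,.0f}/yr")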

    “The variability of water quality DIRECTLY affects heat exchange, energy use, and water use. You need to recognize the criticality of continuously monitoring and analyzing the water quality and energy use,” O’Donnell says.

    See the more than 60 sessions scheduled for Data Center World, March 14-18, in Las Vegas, and register today!

    This first ran at http://www.afcom.com/news/cause-effect-hidden-risk-data-center/

    4:30p
    The Hybrid Cloud: Your Cloud, Your Way

    Jim Ganthier is Vice President and General Manager for Dell Engineered Solutions and Cloud.

    Cloud computing has become a significant topic of conversation in the technology industry and is seen as a key delivery mechanism for enabling IT services. Today’s reality is that most organizations already use some form of cloud, because it opens up new opportunities and has become ingrained in the fabric of how things are done and how business outcomes are achieved.

    Cloud offers a host of service and deployment models: both on- and off-premises, across public, private, and managed clouds. We see some organizations starting with public cloud because of the perceived ease of entry and lower costs. Some groups, such as test and development teams, use public clouds because they need to quickly stand up infrastructure, test and run their application, and take it down again, and their existing IT team can’t support that pace. Other companies, such as startups, use public clouds because they simply don’t have the resources to build, own, and manage a private cloud infrastructure today. We’re also seeing a rather significant shift back towards private clouds, which are becoming much easier and quicker to deploy and still come with IT control and peace-of-mind security benefits.

    That said, every organization’s cloud is a unique reflection of its business strategies, priorities, and needs, which is why there is great variation in how companies go about implementing their own specific clouds.

    The Cloud Journey

    No matter where the journey begins, one of the first realizations is that there is no single solution or single answer for how to best utilize cloud. The journey typically evolves over time and requires multiple clouds, a combination of public, private, and possibly managed clouds, resulting in a hybrid cloud end state.

    Before deciding on a cloud approach, it is important to understand all of the possibilities cloud technologies provide, and to agree on the business initiatives, priorities, and desired results that support your business needs and intended outcomes. The decision should not focus entirely on which type of cloud to deploy – private, public, managed, or hybrid – but rather on delivering the right cloud or clouds, at the right cost, with the right characteristics (e.g., agility, cost, compliance, security) to achieve your business objectives.

    When evaluating cloud options, look for solutions that enable the following business benefits:

    • Faster Innovation: Does this cloud provide new ways to deliver faster value to customers in current and new markets?
    • More Agility: Does this approach provide a more flexible, modular way to meet ever-changing customer needs and scale up or down as needed, quickly and efficiently?
    • Increased Return on Investment (ROI): How will this cloud generate increased value to end customers? How will it help optimize existing technologies and lower long-term total cost of ownership (TCO)?
    • Range of Choice: Does this cloud enable customized workloads? How does it address compliance and security concerns?
    • More Simplicity: Will this cloud simplify or complicate IT environments? How will it be managed?

    Chances are that using more than one type of cloud will best enable delivery of these benefits.

    Additionally, consider which platform will provide the optimal business results for a given workload. With hybrid cloud, multiple workloads can run across multiple cloud platforms. As demands on workloads ebb and flow (e.g., a major event or opportunity causes a usage spike in your web or IT requirements), those workloads can take advantage of more than one cloud platform, with the platform typically chosen based on which option is most cost-effective and reliable at any given time. Evidence of this can be seen as some workloads originally hosted in a public cloud environment move back into the fold of internally managed IT – a trend known as cloud repatriation.

    Repatriation represents a compelling value proposition in terms of efficiency and cost, and in how to successfully manage a cloud journey while still gaining quick results and doing what is best for the overall organization. More importantly, it points the way to where cloud, and all of IT, is going. It has shifted the debate from “public OR private cloud” to “when and how fast” a company will reach a hybrid cloud end state that truly reflects its needs and the services and support it employs.

    Taking the discussion one step further, beyond repatriation and hybrid cloud, organizations will start looking not only at a hybrid cloud end state but at a multi-cloud strategy. Businesses will be able to combine on-premises infrastructure with a public cloud provider, and they will also be able to choose multiple public cloud providers – the Amazons, the Azures, the Googles of the world – to help achieve their desired business outcomes.

    Ultimately, there are multiple factors that go into deciding which cloud solutions or platforms will enable the best business results from a given workload. One choice does not fit all. A hybrid cloud strategy represents being able to extract the benefits from multiple cloud platforms and “rightsizing” the workloads for your organization. At the end of the day, that is what is going to impact how your infrastructure helps achieve business outcomes.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:23p
    QTS Managing for Long-Term Growth, Expanding in Seven US Markets

    QTS Realty Trust continues to pursue its so-far effective niche strategy targeting network, cloud, and enterprise customers who appreciate the flexibility offered by a one-stop data center shop.

    According to statements by the data center REIT’s top management on its fourth-quarter earnings call Tuesday, it will stay the course, executing on its strategy of purchasing infrastructure-rich facilities at a low cost and repurposing them as state-of-the-art data centers.

    Another QTS trademark has been to offer a full range of solutions, the one-stop-shop play it refers to as the 3Cs: custom wholesale deployments, colocation, and cloud and managed services. Notably, the share of QTS customers contracting for more than one of these service types grew from 40 percent in 2014 to more than 50 percent in 2015.

    Chicago Expansion to Impact a Key Metric

    QTS management has consistently focused on allocating capital to generate a weighted average of 15 percent return on invested capital. During Q4 and full-year 2015, the company rang the bell at 15.8 percent ROIC.

    QTS purchased the former Chicago Sun-Times press facility for $18 million in 2014. The 133,000-square-foot data center under development sits on 30 acres close to downtown. The initial phase of 14,000 square feet is scheduled to be ready for occupancy in mid-2016.

    Management said on the call that the company’s overall portfolio ROIC will dip below 15 percent later this year due to the amount of capital required to launch the Chicago facility. The QTS balance sheet will also carry a bit more leverage during 2016 than management’s targeted 5x net-debt-to-EBITDA ratio.

    Relative to its size, the Chicago market does not have a large supply of modern purpose-built data center space available. The QTS facility is located midway between the suburbs and downtown and is connected by fiber to the downtown carrier hotels.

    Want to know how other data center REITs did in the fourth quarter? Curious about investing in data center REITs in general? Visit the Data Center Knowledge Investing section for everything you need to know about this high-performing sector.

    Strong 2015 Results

    QTS has been working through a backlog of large multi-megawatt contracts, where the deployments are phased over several quarters. This booked-not-billed backlog helps give it better earnings visibility and confidence to move forward with its current development plans.

    [Slide: 2015 overview]

    Source: QTS – Q4 2015 Earnings presentation

    Notably, this backlog results in an uptick of larger wholesale deals in the revenue mix versus a more normalized leasing cadence. Additionally, QTS would be pleased to sign a wholesale tenant to anchor the new Chicago facility, in a similar fashion to the way it kicked off its first site in the Dallas market in 2014.

    Read more: Exclusive: How QTS Plans to Keep Momentum After Two Years of High Growth

    Carpathia Acquisition “Pays Dividends”

    Last year QTS closed on its $326 million acquisition of Carpathia Hosting, which added over 230 cloud and hosting customers.

    The acquisition immediately increased the share of revenue derived from cloud and managed hosting from 10 percent to 25 percent. It also bolstered the QTS security and compliance offerings, including its government offerings.

    The Carpathia purchase, at 9.6x EBITDA, was immediately accretive to operating FFO and AFFO.

    These are important REIT metrics, which add back depreciation, amortization, and other non-cash adjustments to earnings. AFFO is also referred to as cash available for distribution (CAD), which supports the REIT’s dividend.
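
    As a simplified illustration of how these metrics relate (the dollar amounts below are made up, and full NAREIT definitions include further adjustments such as gains on property sales):

        # Simplified FFO/AFFO arithmetic with hypothetical numbers.
        net_income = 20.0          # $M, hypothetical
        depreciation_amort = 35.0  # $M, added back because it is non-cash
        ffo = net_income + depreciation_amort

        recurring_capex = 6.0      # $M, deducted to approximate cash generated
        affo = ffo - recurring_capex  # a.k.a. cash available for distribution

        dividends_paid = 18.0      # $M, hypothetical
        print(f"FFO ${ffo:.0f}M, AFFO ${affo:.0f}M, "
              f"payout ratio {dividends_paid / affo:.0%}")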

    Read more: Why QTS Dished Out $326M on Carpathia Hosting

    Dividend Increase and 2016 Guidance

    QTS announced a 12.5 percent increase in its quarterly distribution, to $0.36 per share from $0.32 per share previously. The forward dividend yield on QTS shares, based on the most recent close of $43.62 per share, is now 3.3 percent. The company also issued the following 2016 guidance:

    • Adjusted EBITDA: 2016E of $177-$185 million, an increase of 29.3 percent vs 2015 at the midpoint
    • Operating FFO: 2016E of $125-$130 million, an increase of 22.6 percent vs 2015 at the midpoint
    • OFFO per share: 2016E of $2.54-$2.64, an increase of 13.1 percent vs 2015 at the midpoint

    Management expects that core organic revenue growth for 2016 will be in the mid-teens. The Carpathia integration is six months ahead of schedule, and the expected $2 million in expense synergies is included in the forecast for this year.

    QTS expects to bring 125,000 square feet online in 2016: 15,000 square feet in Richmond, Virginia; 20,000 square feet in the Atlanta Metro; 38,500 square feet in Dallas-Fort Worth; 19,000 square feet in the Atlanta suburbs; 3,250 square feet in Santa Clara; 15,000 square feet in New Jersey; and 14,000 square feet in Chicago.

    Bottom Line

    One of the reasons data center REITs like QTS and its rivals CyrusOne and CoreSite are able to deliver impressive mid-to-high teens ROIC is that they are building out new raised-floor space in existing powered shells. Additionally, they each can benefit from building out multiple data centers on campuses where they already own the land.

    Currently, 91 percent of the space that QTS has built out and is available for lease is occupied. QTS is sacrificing a bit of short-term performance in order to open its Chicago facility and participate in the growth of this Tier I data center market.

    This is a dilemma CoreSite now faces in some of its key markets due to strong customer demand and recent leasing success.

    Read more: CoreSite Reports Strong Q4 but Shell Capacity in Key Markets Short

    Investor Takeaway

    QTS’s mega-scale approach to infrastructure combined with flexibility for customers to expand in-place from colo to dedicated hosting or wholesale deployments helps set it apart from its peers.

    I remain constructive on QTS shares for 2016. The analyst consensus target price is currently $49 per share. This implies a 12.3 percent upside for QTS from current levels and a 12 month total return of over 15.5 percent, including the dividend.
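
    The dividend and return figures above reduce to simple arithmetic, reproduced here as a quick check:

        # Reproducing the dividend and return math quoted above.
        quarterly_dividend = 0.36   # raised from $0.32 (a 12.5% increase)
        price = 43.62               # most recent close
        target = 49.00              # analyst consensus target

        forward_yield = quarterly_dividend * 4 / price  # ~3.3%
        upside = (target - price) / price               # ~12.3%
        total_return = upside + forward_yield           # ~15.6%
        print(f"yield {forward_yield:.1%}, upside {upside:.1%}, "
              f"12-month total return {total_return:.1%}")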

    7:51p
    AFCO to Take Bloomberg’s Custom Data Center Rack to Market

    Like many of its peers in the world of financial-services giants, Bloomberg hasn’t been shy to experiment with novel approaches to data center infrastructure in hopes of finding ways to cut cost and improve performance of its applications.

    One of the fruits of this labor is a custom data center rack design – a modified version of the Open Rack, a set of designs and specs available through the Open Compute Project, the Facebook-led open source data center and hardware design community.

    Now, AFCO Systems, a manufacturer that developed the rack together with Bloomberg, is taking it to the broader market. The company announced that it will showcase the product, called Bloomberg Open Adaptive Rack, at the big annual Open Compute conference in Silicon Valley next month.

    One of the key distinctive Bloomberg-specific features is a chimney flex duct ring at the top, which carries hot server exhaust air into an overhead plenum to prevent it from mixing with cold supply air on the data center floor. These chimneys play a big role in the containment scheme at Bloomberg’s newest data center in the New York suburbs.

    AFCO’s Bloomberg rack for the general market will come with or without the duct ring.

    Open Compute racks in general follow a Facebook design approach that’s radically different from the norm, with an emphasis on resource sharing by servers in the rack. Instead of having a separate power supply in each individual server, for example, the Open Rack has “power shelves,” which act as shared power supplies for groups of servers in the rack. More detailed specs below.

    Major financial services players Goldman Sachs and Fidelity Investments were involved in the Open Compute Project together with Facebook early on. Both companies have been using OCP hardware in their data centers, and Fidelity also has its own customized version of the Open Rack, called the Open Bridge Rack.

    Read more: Wall Street Rethinking Data Center Hardware

    Bloomberg has three primary data centers in the US and hundreds of nodes in data centers around the world, as well as an extensive global network to provide its market information services to clients.

    More on Bloomberg’s data center strategy here: Bloomberg Data Centers: Where the “Go”s Go

    BOAR rack by AFCO specs:

    Power Zones. The BOAR Rack by AFCO Systems is divided into separate power zones, each of which includes an equipment bay for compute, storage, or other components and a power shelf, which provides power to the components in the equipment bay. In addition, each power zone’s equipment bay can be configured to accommodate different amounts and heights of equipment chassis.

    OpenU Slots. Each height slot in an Open Rack is measured in OpenU. Unlike a traditional rack unit (RU), which is 44.45 mm high, an OpenU is larger at 48 mm, which includes the space between chassis units. The compute chassis are supported on L-shaped brackets that snap directly into the vertical structural rack posts. These brackets, installable without tools, can be mounted at 0.5 OpenU (24 mm) increments, which allows support of any chassis height, regardless of the combination of different heights within the same power zone. The available chassis height in one power zone adds up to a maximum of 10 OpenU.
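
    The unit arithmetic lends itself to a quick sanity check; in this sketch the chassis mix is a made-up example, while the constants come from the spec above:

        # OpenU arithmetic from the spec: 1 OpenU = 48 mm, brackets mount at
        # 0.5 OpenU (24 mm) increments, and one power zone holds at most
        # 10 OpenU of chassis. The chassis mix is a hypothetical example.
        OPENU_MM = 48
        INCREMENT = 0.5
        ZONE_MAX_OPENU = 10

        chassis = [2.0, 1.5, 3.0, 2.5]  # chassis heights in OpenU

        assert all(h % INCREMENT == 0 for h in chassis), \
            "each chassis must land on a 0.5 OpenU bracket increment"

        total = sum(chassis)
        print(f"stack height: {total} OpenU = {total * OPENU_MM:.0f} mm")
        print("fits the power zone" if total <= ZONE_MAX_OPENU
              else "exceeds the power zone")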

    Bus Bars. Bus bars are installed in pairs at the rear of each power zone in the rack. Each pair is connected to the positive and negative output voltage terminations provided by the power shelf installed in the same power zone.

    Cabling. The racks contain cabling runs along the front-left and front-right rack posts, which are stamped sheet metal C channels, each providing 180 mm x 25 mm of room for cables. Flanges built into the vertically routed channels retain cables using simple Velcro straps or cable ties. Horizontal cable troughs (4″x4″) are welded into the rack frame at the top-front and top-rear to support intra-rack cabling.

    Construction and Access. The rack frame itself is constructed of welded tubular steel, with a black powder coat finish applied to comply with OCP standards. The rack’s top panel offers brush-filled cable access openings in all four corners and is fitted with a chimney flex duct ring. The rack has a removable perforated front door and a removable sealed clear plexiglass door; both doors are hinged on the right.

    10:45p
    Oracle Acquires Cloud Migration Startup Ravello
    By The WHIR

    Workload cloud migration startup Ravello Systems was acquired on Monday by Oracle to ease enterprise adoption of its public cloud. Oracle is reported to have paid between $400 million and $500 million for the California-based company, which maintains a research presence in Israel, and Oracle is now expected to open a cloud research and development facility there, according to Ha’aretz.

    Ravello was started in 2011 by the team behind the KVM hypervisor. It offers nested virtualization, allowing KVM and VMware workloads to be developed, tested, and demonstrated in the cloud without migration, and to be moved to new cloud providers and management platforms without rewriting applications. (KVM was surpassed in benchmark tests by LXD, the Canonical-backed Linux container hypervisor, in May.)

    According to Ravello, it will continue operating as-is.

    “Ravello will join in Oracle’s IaaS mission to allow customers to run any type of workload in the cloud, accelerating Oracle’s ability to help customers quickly and simply move complex applications to the cloud without costly and time-consuming application rewrites,” Ravello CEO Rami Tamir said in a statement.

    Current Ravello customers will continue to receive the same service, with additional products and services coming available through the deal. Oracle has confirmed that Ravello’s full team will join Oracle Public Cloud.

    Oracle acquired Docker operationalization company StackEngine in December, when it also announced a cloud campus in Austin, Texas, to support the growth of its public cloud business. Verizon, by contrast, announced the closure of its public cloud earlier in February and gave customers just one month to migrate completely, showing that entry into the growing industry is far from easy, even for tech giants.

    The deal is subject to customary closing conditions.

    This first ran at http://www.thewhir.com/web-hosting-news/oracle-acquires-ravello-systems-for-reported-500-million

