Data Center Knowledge | News and analysis for the data center industry

Monday, February 22nd, 2016

    1:00p
    Reality Check: Can Underwater Data Centers Really Work?


    Most of the time, “underwater” and “data center” appear together only in a sentence about the financial condition of a failed company, not about computers actually covered by liquid. Yet Microsoft drew great attention with the experiment it publicized in January, putting a “capsule” containing computers 30 feet underwater for 105 days. People appear to be fascinated with the idea of underwater data centers, an idea that conjures up images from Jules Verne’s Twenty Thousand Leagues Under the Sea.

    Don’t get me wrong, I like the boldness of the idea and the innovation required to tackle Project Natick. But since we’re in the election season in the US, let’s do some fact checking to see whether this idea can do more than tread water.

    The virtues proposed by Microsoft researchers and industry analysts include reduced cooling costs; the ability to use clean, renewable tidal energy; lower latency and better application performance for the 50 percent of the world’s population that lives within 200km of the ocean; and reduced deployment time of mass-produced capsules, from years to weeks.

    So what are the issues that have not been addressed so far?

    People Hate Putting Things in the Water

    Americans, at least, seem to have a hard time accepting companies putting things in the water near the coast, delaying implementations by years while special interests argue the virtues and detriments.

    One of the most extreme examples is the Cape Wind Project, proposing wind turbines 10 miles off the Massachusetts shore. The project’s first proposal came in 2001, 15 years ago, with everyone from business operators to celebrities opposing it at every step.

    On the West Coast, plans for Seattle’s first tidal-energy project were scrapped in January 2016 after the Snohomish County Public Utility District invested eight years and at least $3.5 million in developing it. These types of delays are commonplace when anybody proposes deploying anything in the ocean or rivers.

    Environmental Impact

    On the good side of the environmental discussion, Microsoft’s engineers noted that sea life around the capsule quickly adapted to its presence. The artificial reef aspect of an underwater cluster of capsules could be beneficial, providing habitat for sea creatures in the local area.

    But what about the warming of the area around the capsules? Reports on Project Natick said the 10-foot-by-8-foot cylinder had “the power of 300 desktop PCs.” Let’s assume the servers were a variation of Microsoft’s Open Compute OCS V2 blades, which consume 300W of power and provide up to 28 cores per blade. Eleven blades would provide roughly the equivalent of 300 desktops of compute capacity and would consume about 3300W of power. For the sake of argument, we’ll assume the production version of the capsule is three times larger, or 14 feet in diameter and 10 feet high. Each capsule would have room for three of the racks used in Project Natick and would reject about 10kW of heat.

    Capsules placed on the ocean floor in an open pattern would allow water to flow around them, drawing away the waste heat. Ten capsules would throw off heat at a rate of 100 kW and might raise the temperature of the water around them by about 1°C every 10 hours if the water is stagnant. The walls of the capsules will be warmer than the surroundings, which will create microclimates around the vessels and may attract unexpected species.
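
    As a rough sketch of that warming claim, the numbers can be run in reverse to see how much stagnant seawater a 1°C rise every 10 hours would imply. The specific heat and density values below are standard approximations, and the “perfectly mixed, stagnant volume” framing is an assumption of mine, not a Project Natick figure.

        # Back-of-envelope check on the warming estimate (assumptions, not Project Natick data):
        # ten capsules rejecting about 10 kW each into a stagnant, well-mixed body of seawater.

        SPECIFIC_HEAT_SEAWATER = 3990    # J/(kg*degC), approximate
        DENSITY_SEAWATER = 1025          # kg/m^3, approximate

        heat_rate_w = 10 * 10_000                  # ten capsules at ~10 kW each
        hours = 10
        energy_j = heat_rate_w * hours * 3600      # joules released over 10 hours

        # Mass (and volume) of seawater that would warm by 1 degC if it absorbed all of that heat
        delta_t = 1.0
        mass_kg = energy_j / (SPECIFIC_HEAT_SEAWATER * delta_t)
        volume_m3 = mass_kg / DENSITY_SEAWATER

        print(f"Energy over {hours} h: {energy_j / 1e9:.1f} GJ")                 # 3.6 GJ
        print(f"Implied stagnant volume warmed 1 degC: ~{volume_m3:,.0f} m^3")   # ~880 m^3
        # Roughly 900,000 kg of seawater, a cube about 9.6 m on a side -- any real current
        # or mixing would spread the same heat over a much larger volume of water.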

    Tidal Power

    According to a New York Times article, “the project [could] harvest electricity from the movement of seawater. This could mean that no new energy is added to the ocean and, as a result, there is no overall heating.” True, the First Law of Thermodynamics says the overall balance of energy in the ocean would stay the same, but energy would be converted from the mechanical energy of moving water into the thermal energy of waste heat, so the water around the capsule farm would likely register some local increase in temperature.

    And what happens at slack tide if the underwater data centers are running on tidal energy? Twice per day, the kinetic energy of the tides would dwindle to zero, and the installation would have to be energized by more conventional power sources.

    Modular Advantage

    Microsoft also estimated it could mass-produce the capsules and deploy them within 90 days of demand for capacity. Land-based modular data centers have been in production since Sun’s Project BlackBox in 2006 and are well established in their ability to deliver capacity in a short time. No advantage over land-based deployment there.

    The cost of water-tight pressure vessels is likely 10 to 20 times higher than that of Microsoft’s ITPAC containers, and the cost of placing capsules underwater is much higher than that of the equivalent operation in Cheyenne or Northlake.

    The company H2OMES makes underwater domiciles and estimates the installed cost of its product at over $3,100 per square foot. Compare that to the average US home at $129 per square foot, and underwater construction comes out roughly 24 times more expensive than land-based. Not compelling.

    No Time Soon

    There are other concerns: cost of operations, security, connectivity, power, and operation of sensitive cooling equipment in a harsh saltwater environment. The reality is we will not see underwater data centers for a very long time, if ever. We’ve spent the last 15 years aggregating compute resources into big, efficient facilities that can be located anywhere the speed of light can reach, and that approach, combined with land-based edge centers, will remain the method for the foreseeable future.

    About the author: Mark Monroe is president at Energetic Consulting. He is a former executive director of The Green Grid and CTO and VP of DLB Associates. His 30 years’ experience in the IT industry includes data center design and operations, application and system design, service level management, process design and analysis, professional services, sales, government and commercial program management, and outsourcing management. He works on sustainability advisory boards with the University of Colorado and local Colorado governments, and is a certified Six Sigma Black Belt and Master Black Belt.

    4:00p
    Why De-Escalation Management is Crucial to IT Infrastructure Health

    Ralph Eck is General Manager at Monitis.

    Sink or swim. This is precisely what it boils down to when system administrators (SysAdmins) are dealing with the influx of data coming from all directions. Do this, drop that, careful there! While IT monitoring is meant to provide some guidance and give direction, it very often does the exact opposite. This is where monitoring de-escalation management comes into play to change things for the better.

    Monitoring is about collecting the data you need in order to keep your crucial IT systems running. And even though this may sound blatantly obvious, there is more to it than first meets the eye. Monitoring may easily leave you with tons of data that means next to nothing – if you do not structure it right.

    The most obvious distinction that needs to be made is whether you are more of a reports or an alerts kind of person. Reports and alerts both help account for the health of a system, yet reports are primarily used to document its overall state. Say, for instance, you are a web hosting provider and want to demonstrate the quality of your service to your clients; a report will serve that purpose just fine, assuming everything is as it should be.

    But then again, a report will not come out right automatically. Too many open issues will drag your overall service quality down to a level where it definitely should not be. So you need to get active as soon as you get the first indication that something is going wrong, and that is precisely where an alert will help you keep matters on track. In other words: alerts allow you to catch an issue before it becomes a problem. Alerts, therefore, are what SysAdmins must tend to so that the reports show a healthy system.

    The Need for Incident Management is Clear

    Today’s monitoring technology lets SysAdmins receive automatic alerts whenever a monitor detects a problem – that is certainly not major news. Even the fact that you can choose whether the alert arrives as an email, text message, or phone call will not necessarily sweep you off your feet.

    Yet there are a couple of crucial factors that need to be recognized and dealt with – including proper incident management. Each incident needs to be handled appropriately, and a proper escalation routine is the first step to ensure an alert is brought to the right person’s attention at the right time. For instance, no one wants to receive a text message alert in the middle of the night while they are away from their desk and presumably sound asleep. That would just be a dead end.

    While this may not be a problem for a minor issue, it may be a totally different story when a vital part of your system is involved. If that critical object is in danger, you want to make sure the alert reaches the right person in the right manner. So incident management certainly matters, but it is still not all that needs to be taken into account.

    Thresholds to Determine Severity Levels

    To determine the correct escalation path, thresholds first need to be defined so each problem state can be assigned a severity level. This helps determine whether an alert is critical or can be handled as a mere warning that something is outside its usual parameters.

    This matters all the more because what is important to one organization may not be as important to another. While some users need to know whether a server fails to respond within a predefined timeout, others care whether page elements fail to load or whether RAM usage crosses a set threshold.
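
    As a minimal sketch of that idea – the metric names, thresholds, and channels below are illustrative assumptions, not the API of any particular monitoring product – a per-organization threshold table can map each reading to a severity level, and the severity level to an escalation channel:

        # Illustrative sketch: per-organization thresholds assign a severity to each
        # metric reading, and the severity decides how the alert is escalated.

        from dataclasses import dataclass

        @dataclass
        class Threshold:
            warning: float   # at or above this value -> warning
            critical: float  # at or above this value -> critical

        # Hypothetical thresholds; every number here is an assumption for illustration.
        THRESHOLDS = {
            "response_time_ms": Threshold(warning=800, critical=3000),
            "ram_used_pct":     Threshold(warning=80,  critical=95),
        }

        def severity(metric: str, value: float) -> str:
            t = THRESHOLDS[metric]
            if value >= t.critical:
                return "critical"
            if value >= t.warning:
                return "warning"
            return "ok"

        def escalation_channel(sev: str) -> str:
            # Critical issues wake someone up; warnings can wait for working hours.
            return {"critical": "phone_call", "warning": "email", "ok": "none"}[sev]

        sev = severity("response_time_ms", 3500)
        print(sev, "->", escalation_channel(sev))   # critical -> phone_call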

    However, as any SysAdmin will tell you, inconsistent or even conflicting alerts can sometimes be almost as frustrating as getting no results at all. The key to preventing that is to monitor from more than one location, so the different locations can cross-check one another’s results.

    Finally, it is certainly critical to make sure that one issue is not being fixed by two people at the same time; that would be a waste of resources. So when an alert is delivered to multiple parties, it is important to inform everyone involved once somebody takes ownership of the issue. Everyone needs to be kept up to date on its status so that unnecessary duplication of work can be avoided.
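
    One way to make that concrete – the class and names below are an assumed design sketch, not any specific product’s feature – is a simple acknowledgement step: the first responder to acknowledge an alert takes ownership, and everyone else on the notification list is told to stand down.

        # Sketch of alert acknowledgement under assumed names: the first acknowledger owns
        # the incident; the other recipients are notified so work is not duplicated.

        import threading
        from typing import List, Optional

        class Alert:
            def __init__(self, alert_id: str, recipients: List[str]) -> None:
                self.alert_id = alert_id
                self.recipients = recipients
                self.owner: Optional[str] = None
                self._lock = threading.Lock()

            def acknowledge(self, who: str) -> bool:
                """Return True if `who` became the owner, False if someone got there first."""
                with self._lock:              # two admins acking at once cannot both win
                    if self.owner is None:
                        self.owner = who
                        self._notify_others(who)
                        return True
                    return False

            def _notify_others(self, owner: str) -> None:
                for person in self.recipients:
                    if person != owner:
                        print(f"[{self.alert_id}] {owner} took ownership; {person} can stand down.")

        alert = Alert("db-latency-critical", recipients=["alice", "bob"])
        alert.acknowledge("alice")        # alice owns the incident
        print(alert.acknowledge("bob"))   # False -- bob is told not to duplicate the work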

    De-escalation Management Can Make a World of Difference

    Monitoring certainly remains a key job for system admins and IT managers alike, yet its success hinges on proper setup, prioritization, and key elements like incident management, thresholds, and alert acknowledgement. If these elements are not in place, monitoring can easily add to the misery of the folks who make IT systems work. But if they are, they can make a world of difference.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:30p
    Equinix: the Complicated Math of a Technology REIT

    While Equinix is now officially planted in the REIT sector, it is still very much a technology company, an aspect its management remains firmly attached to. This creates a dynamic tension between growing funds available for distribution and funding global business initiatives.

    A Complicated Story

    On February 18, Equinix (EQIX) reported results for the fourth quarter and full year 2015. The Q4 earnings call prepared remarks and management’s presentation were densely packed with information.

    In a nutshell, Equinix had a lot of moving parts during 2015, including REIT approval mid-year and the acquisitions of TelecityGroup in Europe and Bit-isle in Japan. Additionally, Equinix made substantial investments in its own IT platform to support hyperscale public cloud providers and in accelerating its sales and marketing initiatives targeting enterprise hybrid cloud customers.

    Unattractive REIT Metrics

    The chart excerpt below shows steadily declining FFO and AFFO, both sequentially and year over year. More traditional REIT investors would simply run for the hills based upon these results, but the Equinix story is more nuanced.

    [Chart: Equinix 4Q 2015 REIT metrics – FFO and AFFO per share]

    Source: Equinix 4Q 2015 Earnings presentation

    Footnote (3) above reads: Full-year diluted AFFO per share calculated on the YTD weighted average diluted share count basis.

    However, calculating the diluted share count is complicated by the Telecity acquisition.

    • The fully diluted EQIX share count at the end of 2015 was 62.1 million.
    • Management guidance for 2016 adds: 0.865 million shares of employee stock awards, 1.9 million shares from convertible securities, and 6.9 million shares associated with the Telecity acquisition, totaling about 71.77 million shares (the arithmetic is sketched below).
    • However, a separate end-of-2016 estimate projected 72.06 million fully diluted shares.
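
    As a quick reconciliation of those components (my arithmetic, using only the figures cited above; amounts in millions of shares):

        # Reconciling the guided 2016 diluted share count from the components above.
        year_end_2015_diluted = 62.1     # fully diluted count at the end of 2015
        employee_stock_awards = 0.865    # shares from employee equity awards
        convertible_securities = 1.9     # shares from convertible securities
        telecity_acquisition = 6.9       # shares issued for the Telecity acquisition

        guided_2016 = (year_end_2015_diluted + employee_stock_awards
                       + convertible_securities + telecity_acquisition)
        print(f"Guided 2016 share count: ~{guided_2016:.3f}M")   # ~71.765M, i.e. the ~71.77M cited
        # The separate end-of-2016 projection of 72.06M implies a bit of further dilution.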

    [Chart: Equinix 2016 financial guidance]

    Source: Equinix 4Q 2015 Earnings presentation

    The financial guidance for 2016 contained a lot of adjustments, including a $50 million foreign exchange loss related to the acquisition of TelecityGroup and $58 million of integration costs. Both of these items hit “normalized” AFFO, taking the number down from $1.08 billion to $970 million.
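
    Putting those guidance figures together gives a rough per-share view. This is back-of-envelope arithmetic using only the numbers above; Equinix’s own AFFO-per-share figure is calculated on a weighted average diluted share count, per footnote (3), so the result below is only an approximation.

        # Rough AFFO-per-share math from the 2016 guidance (dollars in millions,
        # shares in millions); an approximation, not Equinix's own calculation.
        normalized_affo = 1_080                 # "normalized" AFFO, ~$1.08B
        fx_loss = 50                            # Telecity-related foreign exchange loss
        integration_costs = 58                  # Telecity integration costs
        adjusted_affo = normalized_affo - fx_loss - integration_costs   # ~$972M, reported as ~$970M

        for shares in (71.77, 72.06):           # the two 2016 share-count estimates above
            print(f"AFFO per share on {shares}M shares: ${adjusted_affo / shares:.2f}")
        # Roughly $13.5 per share either way: the share count matters less than the
        # acquisition-related adjustments that took AFFO from $1.08B down to about $970M.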

    Dividend Increase

    On February 18, Equinix also announced a quarterly cash dividend of $1.75 per share on its common stock, a 3.6 percent increase over its prior cash dividend per share. Based upon the most recent close of $299.37 per share, this equates to an annual yield of 2.33 percent.

    The ability for Equinix to grow AFFO per share directly impacts investor perception regarding this new REIT’s ability to support and grow its dividend distributions.

    M&A Update

    Telecity: Management guided to a mid-year 2016 closing. Equinix is “deep into the process” of divesting the eight data centers required for EU approval. After closing, Equinix plans to move the Telecity assets into a more efficient tax structure.

    Bit-isle: The good news was that Bit-isle was purchased for less than a 10x EBITDA multiple. However, the acquisition will be dilutive to margins in 2016. CFO Keith Taylor explained that Bit-isle has significant space that will be vacated by larger wholesale users. It will take a couple of years for Equinix to backfill that space with higher-revenue colo/interconnection customers.

    Verizon – Telco Data Centers: CEO Steve Smith explained that Equinix will be taking a “proactive, but highly selective” approach to potential M&A deals. Any acquisitions would be limited to network-dense and cloud-dense assets. However, management would consider a JV arrangement in order to secure one or more trophy data centers.

    Equinix – Operational Highlights

    Same-Store Performance: Cash gross profit grew 10.5 percent, driven by interconnection growth.

    Customer Revenue: Notably, 94 percent of revenues were recurring, consisting of: 78 percent Colocation, 17 percent Interconnection, and 5 percent MIS.

    Key Vertical Markets: Network – 28 percent; Cloud and IT Services – 27 percent; Financial Services – 20 percent; Content/Digital Media – 15 percent; Enterprise 12 percent.

    Public Cloud: Amazon, Microsoft, IBM, Cisco and Oracle now average 17 metro locations each, globally.

    Pricing Escalators: Cross-connects, existing contracts and power density escalations all averaged in the 2 to 5 percent range.

    Investor Takeaway

    Equinix remains at its heart a technology growth story. While its feet are firmly planted in the data center business, its head and vision remain first and foremost “in the clouds.”

    Frankly, Equinix is much harder to model and understand than any of its five data center REIT peers. This makes EQIX a “stranger in a strange land,” hard to “grok” for traditional REIT investors.

    On Friday, Wall Street investors appeared to give Equinix a free pass for 2016. Mr. Market is betting on the ability of CEO Smith and his team to close and integrate the acquisitions and grow EBITDA margins in 2017 and beyond.

