
Thursday, March 2nd, 2017

    1:00p
    After Beating Its Own Leasing Record in 2016, DuPont Fabros Keeps Foot on Gas

    Looking back at 2016, there was very little wholesale data center leasing by enterprises compared to leasing by hyper-scale cloud companies – and even those large deals dried up towards the end of the year. The November election has often been cited as a reason, along with a lack of available large data halls in top markets.

    The 2016 data center REIT results are in the books. However, a nagging question that has yet to be fully answered is: What is the new normal for wholesale leasing in the top US data center markets? Another “new normal” that’s unclear is the cost per megawatt for hyper-scale deals.

    Can Record Leasing Continue?

    DuPont Fabros Technology, one of the biggest data center landlords for hyper-scale companies, has had a lot of success during the past couple of years with its 100-percent focus on wholesale deals. The company surpassed its own 2015 leasing record of 47MW of bookings by signing 51MW across its three active markets in Northern Virginia, Chicago, and Silicon Valley last year.

    Read more: Surge of New Capacity Expected in Top US Data Center Markets This Year

    In the fourth quarter, the data center REIT signed a full-service 2.88MW pre-lease with a new strategic cloud customer. Subsequently, it signed one additional 4.2MW lease this quarter, boosting the occupancy of its ACC7 data center in Ashburn, Virginia (its biggest market) to 100 percent. The company’s operating portfolio of 287.1MW is now 99 percent leased, leaving about 2MW available for leasing.

    While these results may seem to indicate that opportunities are decelerating, the developer’s Q4 earnings call last week made it clear that’s not the case.

    Entering the Phoenix Market

    DuPont Fabros has entered into a contract to purchase 56 acres in the Phoenix market, the company’s CEO, Chris Eldredge, announced on the call. He noted that the land parcel is located in Mesa, a Phoenix suburb, due in part to attractive development sites being difficult to find in Chandler, where much of the Phoenix market’s existing data center capacity is located.

    Notably, Mesa is where Apple is also building a huge, $2 billion data center.

    The current plan is for DuPont Fabros to hold the parcel in its land bank until a suitable pre-lease is executed to initiate development. Meanwhile, Eldredge is searching for sites suitable for more development in Ashburn, Chicago, and Silicon Valley.

    Development Accelerates in 2017

    This year the data center REIT plans to have active development underway across five different projects totaling 64.4MW, with most of this capacity expected to be delivered before the end of the year. The company has already pre-leased 18.9MW of this future capacity.

    It expects to spend $600 million to $650 million in capital this year, which is both a record for the company and ahead of guidance given just 15 months back, at Investor Day 2015.

    This guidance includes delivering the spec shell of ACC10 in Ashburn in hopes of snaring a hyper-scale cloud deal or a similar build-to-suit. DuPont Fabros acknowledged that speed to market has become more important, so it is accelerating development of the shell in Ashburn to be competitive.

    See also: Digital Realty Signals the Gloves are Coming Off in 2017

    Eldredge confirmed that the company’s first Toronto data center is on track to open with 6MW in the former Toronto Star printing facility in this year’s fourth quarter, along with another 12MW of future capacity for Phase I. Meanwhile, the Hillsboro “wheat field” outside of Portland, Oregon, is in pre-development phase, with the first data hall anticipated to be delivered in the second half of 2018.

    If early pre-leasing were to occur at ACC10 or Phase II of CH3 in Chicago, that would result in additional capital expenditures, not currently budgeted for 2017.

    A Few Shots Across the Bow

    Eldredge made it a point to clarify that all of his publicly traded competitors are acting rationally when it comes to pricing massive cloud deals. That was not the case, however, in a recent Chicago deal signed by a private equity-backed provider, which he did not name but which apparently was EdgeConneX, referring to the big lease it signed with Microsoft.

    He underscored that despite a competitive market, “The strong demand from cloud providers has resulted in strong ROIs from our recently completed facilities.” He provided additional color on ACC7 Phase IV, which delivered an unlevered ROI of almost 14.5 percent, “well above our target return.”

    On the call, CFO Jeff Foster didn’t pull any punches when it came to the all-in cost per megawatt for the product that DuPont Fabros is delivering in each market:

    Ashburn: The estimated cost per megawatt is $8.9 million to build the ACC9 N+1 product.

    Chicago: DuPont increased the total megawatts of CH3 from 25.6MW to 27.2MW by lowering the redundancy of the building from N+2 to N+1. This lowers the estimated cost to construct CH3 to $10.25 million per megawatt, which is comparable to CH2.

    Santa Clara, California: Development cost at SC1 Phase III is projected to be $10.2 million per megawatt, compared with the $12 million per megawatt cost for the first two phases of SC1. The decrease was due to Phase III being constructed at a higher density and some unspecified design changes.

    Toronto: DuPont Fabros is building its flexible 4.0 design. Maximum critical load for TOR1 is 46MW, but the final critical load for the building will depend on customer selections. DFT is delivering four rooms at opening, constructed at 1.5MW per room with N+1 redundancy. The initial cost is estimated to be $10 million per megawatt, and Foster is underwriting a 13 percent ROI (a rough reading of these figures follows this list).

    Portland: Notably, the 4.0 design will also be utilized in Portland.
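
    For a rough sense of scale, the arithmetic below is a back-of-the-envelope reading of the disclosed figures, not anything DFT published; in particular, treating “unlevered ROI” as stabilized annual net operating income divided by total development cost is an assumption about the company’s definition.

    ```python
    # Back-of-the-envelope reading of the figures above; my arithmetic, not DFT's.
    # Treating "unlevered ROI" as stabilized annual NOI divided by total
    # development cost is an assumption, not a disclosed formula.

    # Chicago: 27.2MW at an estimated $10.25 million per megawatt
    ch3_total_cost = 27.2 * 10.25e6               # roughly $278.8 million to build out CH3

    # Toronto: about $10 million per megawatt, underwritten at a 13 percent unlevered ROI
    tor1_cost_per_mw = 10e6
    tor1_target_roi = 0.13
    implied_noi_per_mw = tor1_cost_per_mw * tor1_target_roi   # roughly $1.3 million per MW per year

    print(f"CH3 implied total build cost: ${ch3_total_cost:,.0f}")
    print(f"TOR1 implied annual NOI per MW: ${implied_noi_per_mw:,.0f}")
    ```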

    The level of cost detail he offered was unprecedented as far as data center REIT disclosures go and appears to be intentional, an effort to increase transparency in the industry regarding development costs.

    Investor Takeaway

    When it comes to evaluating and entering new markets, DuPont Fabros has been measured and methodical. Eldredge made it clear that the company is considered a partner by its major cloud customers and is part of their long-term planning process.

    Eldredge assured the analysts on the call that DuPont Fabros is competitive on hyper-scale deals and can achieve its targeted 12 percent-plus unlevered returns on invested capital. One area for investors to watch going forward will be the renewals of the Facebook leases in ACC4, ACC5, and ACC6.

    DuPont Fabros revealing its all-in cost per megawatt in each of its top markets once again raises the question: What does it really cost to deliver a megawatt of massive hyper-scale cloud data center capacity? I think the ball has now moved back into CyrusOne’s court to clarify what is included in its $6.3 million per megawatt budget.

    5:52p
    Seeing Clearly with Data Center Simulation

    Dave King is Product Manager at Future Facilities.

    It’s been said that computational fluid dynamics (CFD) provides a historical view of the airflow in a data center, one that is probably out of date by the time the report is produced. This view of CFD as a snapshot of the past misses the real power of the technology, which is prediction, yet it seems to be widely held within the industry. I’ve lost count of the number of conversations I’ve had at conferences with data center operators who have said something along the lines of, “Why do I need someone to perform a CFD study to show me what my facility looked like two weeks ago? I have sensors that can tell me what’s happening right now.” This perception hasn’t come about by accident.

    In the Beginning

    CFD first entered data centers about 10 to 15 years ago when power densities started to rise.  When IT equipment failed due to thermal problems, operators found it very difficult to understand why because they lacked the data to analyze the problem.  This is where CFD came in: Operators engaged engineering consultants to model their facilities and tell them what was going wrong.

    The consultant would return after about three weeks with a report on the environment in the facility, invariably containing temperature planes or temperature maps of the space.

    For many operators, this was the first time that they visualized the facility environment.  Being able to see how conditions varied within the space, often for the first time, offered great value.

    In addition, the CFD simulation allowed the source of issues to be traced, giving deep insight into how the facility was performing.  The consultant would work with the operator to find a solution and then show it working in the model before being implemented, fully using the predictive power of the technology.

    Developing Real-time Data

    As time went on, monitoring systems that gave operators the ability to see what was happening in real time in the data center started to appear on the market. The manufacturers of these systems had to find a way of presenting the data from many (probably at least 100) individual sensors in an easy-to-digest way. They chose to use a process called interpolation to try to join the dots between sensors and create temperature maps, which looked very much like the outputs from the CFD models that operators were used to seeing.
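
    To make “joining the dots” concrete, here is a minimal sketch of the kind of interpolation such systems perform, assuming NumPy and SciPy are available; the sensor positions and readings are invented for illustration, and no particular vendor’s method is implied.

    ```python
    # Minimal sketch: turning scattered sensor readings into a temperature map
    # via interpolation, as monitoring systems do. All positions and readings
    # below are invented for illustration.
    import numpy as np
    from scipy.interpolate import griddata

    # (x, y) positions of temperature sensors on the data hall floor, in metres
    sensor_xy = np.array([[1.0, 1.0], [1.0, 9.0], [5.0, 5.0], [9.0, 1.0], [9.0, 9.0]])
    # Temperature reading at each sensor, in degrees C
    sensor_temp = np.array([21.5, 22.0, 27.5, 24.0, 23.0])

    # Regular grid covering the hall, onto which the readings are interpolated
    grid_x, grid_y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))

    # Linear interpolation joins the dots between sensors; it can only describe
    # what is happening now, not predict what a layout change would do.
    temp_map = griddata(sensor_xy, sensor_temp, (grid_x, grid_y), method="linear")
    ```

    The resulting map looks much like a CFD temperature plane, which is exactly why the two are so often confused, but there is no physics behind it: nothing in the interpolation knows about airflow, so it cannot answer “why” or “what if.”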

    At this point, it’s worth thinking about the primary question operators were really asking when having a CFD analysis performed: What is happening in my data center? They may have received answers to “why is this happening?” and “what will happen if we do this?” as a bonus from the CFD model, but that wasn’t the main thrust of the thought process. As far as the market was concerned, the temperature maps provided by the monitoring systems already in use could answer this question without the need to engage an expensive consultant. They also had the added bonus of showing what was happening right now, rather than three weeks ago.

    Where We are Today

    Operators that were using CFD as a tool to get a snapshot of what was happening in their facility came to the conclusion that they could get almost the same information in real time through modern monitoring technology, without the expense (even though a CFD analysis will always give you more information than a monitoring system). Thus, CFD was often written off as no longer necessary.

    I wouldn’t necessarily disagree.

    CFD is expensive and cumbersome compared to a monitoring system if all you are using it for is getting a snapshot of the conditions in your data center.  But there’s the rub: The real benefits of CFD are in its ability to answer the “whys” and “what ifs.”

    The introduction of monitoring systems allowed massive improvements in data center performance because they showed operators when they were exceeding limits.  Rather than providing the same data, CFD modeling adds new information to the operator’s armory.  Future plans can be stress tested and optimized in a way that simply is not possible with any other technology.  Doing this will allow the data center envelope to be pushed further, utilizing more capacity and squeezing every last drop of efficiency out of the cooling system without risking the IT load.

    Case Study: Financial Institution

    To illustrate what can be achieved, I want to share an example from a project at a financial institution. The goal of the project was to rip out roughly 150 old direct-cooled, glass-fronted cabinets and replace them with a more modern hot aisle/cold aisle arrangement to make better use of the available cooling. This amounted to around 50 percent of the server cabinets in the facility. At the same time, an extra 200kW of load was being migrated into the hall from server rooms in other locations, increasing the total from 900kW to 1.1MW. The work took place over the course of 20 weekends, with the rest of the data center remaining fully functional and resilient.

    To begin, we simulated the end point of each of the 20 stages up front to make sure that the plans were sound. This exercise highlighted a number of cable trays in the floor that would need to be removed as they would sit directly below the new cold aisles, affecting airflow.

    However, the really interesting part was once work had begun.  As was always going to be the case, work quickly deviated from the plan as applications had to be kept running when they had been scheduled to be moved. We worked on-site with the project teams to update the CFD model with the work that had actually been completed each weekend and the new plan for the upcoming weekend. After this, we ran a fresh simulation to give the migration teams safe load limits for each of the new cabinets.  These weekly safe limits were often significantly less than the final design load of each cabinet.
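
    By way of illustration only, the sketch below shows the shape of that weekly exercise: a toy function stands in for the full CFD solve, and one cabinet’s load is stepped toward its design value until the predicted worst-case inlet temperature would exceed a limit. Every name and number is invented; only the idea of deriving interim safe limits from simulation comes from the project.

    ```python
    # Illustrative sketch only: deriving an interim "safe load limit" for one
    # cabinet from repeated simulation runs. simulate_max_inlet_temp() is an
    # invented stand-in for a full CFD solve; the 27 C threshold follows the
    # ASHRAE recommended inlet limit. All numbers are made up.
    INLET_LIMIT_C = 27.0

    def simulate_max_inlet_temp(cabinet_loads_kw):
        """Toy stand-in for the CFD model: pretends the hottest server-inlet
        temperature rises with total hall load. A real solve depends on room
        geometry, cooling units, and airflow paths, not just total load."""
        total_kw = sum(cabinet_loads_kw.values())
        return 20.0 + 0.006 * total_kw

    def safe_load_limit(cabinet_loads_kw, cabinet_id, design_load_kw, step_kw=0.5):
        """Step one cabinet's load toward its design value and return the highest
        load for which the simulated inlet temperature stays within the limit."""
        loads = dict(cabinet_loads_kw)
        safe = loads[cabinet_id]
        while safe + step_kw <= design_load_kw:
            loads[cabinet_id] = safe + step_kw
            if simulate_max_inlet_temp(loads) > INLET_LIMIT_C:
                break
            safe += step_kw
        return safe  # often below the design load mid-migration, as in the project

    # Example: 150 cabinets nominally at 6kW each; how far can cabinet "R01" go?
    loads = {f"R{i:02d}": 6.0 for i in range(1, 151)}
    print(safe_load_limit(loads, "R01", design_load_kw=12.0))
    ```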

    The project was completed within the estimated timeframe and without a single thermal shutdown. This was because the migration teams knew exactly where the limits were and could approach them with confidence due to having previously simulated each situation. Without the use of simulation, this would not have been the case, and limits would either have been exceeded (causing thermal shutdowns) or less equipment would have been installed each week (extending the length of the project).

    Complementary, Not Competitive

    The data that CFD provides can enable the same leaps in data center performance that the addition of monitoring systems has been able to achieve over the past decade. While there are sound reasons for the market to view CFD and monitoring as competing technologies, they are in fact completely complementary. As data center operators are asked to do more with less, they are going to need both working together to achieve their goals.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    7:43p
    AWS Outage that Broke the Internet Caused by Mistyped Command

    This past Tuesday morning Pacific Time, an Amazon Web Services engineer was debugging an issue with the billing system for the company’s popular cloud storage service S3 and accidentally mistyped a command. What followed was a cloud outage lasting several hours that wreaked havoc across the internet and resulted in potentially hundreds of millions of dollars in losses for AWS customers.

    The long list of popular web services that either suffered full blackouts or degraded performance because of the AWS outage includes the likes of Coursera, Medium, Quora, Slack, Docker (which delayed a major news announcement by two days because of the issue), Expedia, and AWS’s own cloud health status dashboard, which as it turned out relied on S3 infrastructure hosted in a single region.

    Cyence, an analytics company that quantifies the economic impact of cyber risk, estimated that the S&P 500 companies impacted by the outage collectively lost between $150 million and $160 million as a result of the incident. That estimate doesn’t include countless other businesses that rely on S3, on other AWS services that rely on S3, or on service providers that built their services on Amazon’s cloud.

    Related: No Shortage of Twitter Snark as AWS Outage Disrupts the Internet

    The engineer who made the expensive mistake meant to execute a command intended to remove only a small number of servers running one of the S3 subsystems. “Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended,” according to a post-mortem Amazon published Thursday, which also included an apology.

    The servers removed supported two other crucial S3 subsystems: one that manages metadata and location information for all S3 objects in Amazon’s largest data center cluster, located in Northern Virginia, and one that manages allocation of new storage and relies on the first subsystem.

    Once the two subsystems lost a big chunk of capacity, they needed to be restarted, which is where another problem occurred. Restarting them took much longer than AWS engineers expected, and while they were being restarted, other services in the Northern Virginia region (US-East-1) that rely on S3 – namely the S3 console, launches of new cloud VMs by the flagship Elastic Compute Cloud service, Elastic Block Store volumes, and Lambda – were malfunctioning.

    Amazon explained the prolonged restart by saying the two subsystems had not been completely restarted for many years. “S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected.”

    To prevent similar issues from occurring in the future, the AWS team modified its tool for removing capacity to prevent it from removing too much capacity too quickly and to prevent capacity from being removed when any subsystem reaches its minimum required capacity.
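
    Amazon has not published the tool itself, so the sketch below is only a generic illustration of the two guards the post-mortem describes: a limit on how much capacity one run may remove, and a floor below which no removal is allowed. The function and parameter names are hypothetical.

    ```python
    # Illustrative sketch only, not Amazon's tooling: the two guards described
    # in the post-mortem, applied before any capacity-removal command runs.
    def servers_safe_to_remove(current_servers: int,
                               requested_removal: int,
                               min_required: int,
                               max_removal_per_run: int) -> int:
        """Return the removal count if it passes both checks, else raise."""
        if requested_removal > max_removal_per_run:
            # Guard 1: refuse to take out too much capacity in a single run
            raise ValueError("removal request exceeds the per-run limit")
        if current_servers - requested_removal < min_required:
            # Guard 2: never let a subsystem drop below its minimum capacity
            raise ValueError("removal would breach minimum required capacity")
        return requested_removal

    # A fat-fingered request for 500 servers instead of 5 is rejected up front
    # (all numbers here are hypothetical).
    try:
        servers_safe_to_remove(current_servers=1200, requested_removal=500,
                               min_required=1000, max_removal_per_run=20)
    except ValueError as err:
        print(err)
    ```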

    The team also reprioritized work to partition one of the affected subsystems into smaller “cells,” which was planned for later this year but will now begin right away.

    Finally, the AWS Service Health Dashboard now runs across multiple AWS regions, so that customers don’t have to rely on Twitter to learn about the health of their cloud infrastructure in case of another outage.

