Data Center Knowledge | News and analysis for the data center industry
 

Monday, May 22nd, 2017

    12:00p
    DCK Investor Edge: CyrusOne — Catch Me If You Can

    CyrusOne has developed a low-cost data center design that also appears to give it a speed-to-market edge when negotiating and landing hyperscale cloud deployments.

    Hyperscale cloud providers cannot reliably forecast their own customer growth, yet they must seamlessly provide those customers with unlimited servers “on demand.” This has created a new playing field for resourceful landlords: the opportunity to co-create wholesale data center solutions on increasingly tight schedules while still delivering on contractual service level agreements, or SLAs.

    Increasingly, the construction of enormous data center shells is being accelerated to meet existing customer expansions and perceived hyperscale demand across several markets. This requires advance land purchases and the fast-tracking of entitlement, design, and pre-development tasks.

    These changes are most notable in Tier 1 data center markets with available land, such as Dallas-Fort Worth and Northern Virginia’s “Data Center Alley,” where the landscape is now dotted with massive building sites. The phrase “must be present to win” comes to mind.

    CyrusOne’s Strong Q1 2017 Results

    CyrusOne was one of the last REITs to report results and Q1’17 was another exceptional leasing quarter for the company. This was in large part due to its “Massively Modular” design/build approach, which continues to shave off time and squeeze money out of each successive project.

    Source: CyrusOne Q1 2017 presentation

    During the past six quarters, CyrusOne has leased an average of 37.2 MW per quarter. In addition to having 30 large wholesale customers with at least 9 MW each, the landlord also leases retail colo space to thousands of enterprise customers. Notably, during Q1’17, a record 480 leases were signed, one-third of them under 500 kW each.

    On the earnings call, CyrusOne management guided analysts to model $20 million of GAAP revenue per quarter going forward. However, this guidance could be considered the “low bar.” CyrusOne will shortly be able to triple the size of its existing 2.5 million-square-foot portfolio due to the rapid pace of powered-shell development underway in major markets, including Chicago, Northern Virginia, Phoenix, Dallas, and San Antonio.

    Design-Build Evolution

    During the past two years, CyrusOne has leapfrogged larger peers when it comes to wholesale leasing in part because of its short delivery times and low cost.

    Back in March, the company hosted an event where management and the design-build team explained to Wall Street analysts how they can build so much faster and cheaper than industry peers. There were prepared remarks, panel discussions, and a lengthy Q&A with the audience.

    The slide below was used by CFO Diane Morefield to illustrate how CyrusOne could deliver a 22.5 MW build-to-suit facility in 180 days for $6.4 million per MW. The example was CyrusOne’s “standard 2N data center product,” recently delivered in Northern Virginia. This data center design has been engineered to five 9s reliability and uses less equipment than competitors, according to the company — a big ingredient in the secret sauce.

    Source: CyrusOne March 2017 Design Build Forum

    However, the gross power capacity of this Sterling, Virginia, data center is 30 MW. This means that if the customer required only an “N” solution, the reduction in mechanical and electrical equipment would result in an eye-poppingly low build cost of $4.83 million per MW.
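
    A quick back-of-the-envelope check shows where that figure comes from. The sketch below (purely illustrative Python) uses only the numbers cited above; the total project cost is derived, not a reported figure.

        # Back-of-the-envelope check of the per-MW figures cited above.
        critical_load_mw = 22.5        # 2N build-to-suit capacity
        cost_per_mw_2n = 6.4e6         # $6.4 million per MW for the 2N design
        gross_capacity_mw = 30.0       # gross power capacity of the same shell

        total_cost = critical_load_mw * cost_per_mw_2n        # ~$144 million (derived)
        implied_n_cost = total_cost / gross_capacity_mw       # spread over the full 30 MW

        print(f"Implied 'N' cost: ${implied_n_cost / 1e6:.2f}M per MW")
        # -> about $4.80M per MW, close to the $4.83M figure quoted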

    Seeking “More Faster”

    Utilizing the same team members and supply chain from project to project has sped up the design phase. The self-described “maniacal” approach to driving cost and time out of the schedule includes running the electrical conduits on top of the slab (rather than trenching them into the ground, as in a typical design). This cuts a huge amount of time out of the schedule, especially in cold climate zones, where frozen ground can be an issue.

    Meanwhile, the value-engineering of electrical conduit fittings saved $75,000 on a 9 MW project. Last year CyrusOne delivered about 100 MW, so this design change alone could save over $750,000 across the portfolio. There is no detail too small when it comes to driving costs down.
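
    The portfolio-wide math is straightforward (a rough sketch; the project count is an approximation based on the delivery figure above).

        # Rough scaling of the $75,000-per-project fitting savings.
        savings_per_project = 75_000       # saved on one 9 MW project
        project_size_mw = 9
        delivered_mw_per_year = 100        # roughly what CyrusOne delivered last year

        projects_per_year = delivered_mw_per_year / project_size_mw   # ~11 projects
        annual_savings = projects_per_year * savings_per_project
        print(f"~${annual_savings:,.0f} per year")   # ~$833,000, i.e. over $750,000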

    CyrusOne isn’t attempting to deliver the same product being built by DuPont Fabros Technology or Digital Realty Trust. In fact, the company’s team suggested that there could be as many as 50 percent fewer components in the CyrusOne mechanical and electrical design. Meanwhile, recent leasing success suggests that engineers who operate some of the largest cloud data centers on the planet are satisfied with the reliability of its design.

    Catch Me if You Can

    The CyrusOne secret sauce is now out of the bottle and sitting in plain view for anyone willing to devote a few hours to watching the webcast, as I did. However, like most things in the real world — especially mission critical environments — innovation is easier said than done. It can be an enormous cultural challenge for most organizations to move away from tried-and-true engineering principles and try a novel approach. The innovations and processes at CyrusOne have evolved over several years.

    CyrusOne believes the standard 2N product can be built for $5.1 million per MW in the Pacific Northwest if there is a hyperscale customer willing to commit to a 45 MW super-scale data center. However, CEO Gary Wojtaszek is still not satisfied. He believes another $100,000 per MW in savings could be achieved to deliver on his stated goal of $5 million per MW. One untapped area would be for CyrusOne to negotiate long-term contracts with key members of its supply chain.

    Investor Takeaway

    The leasing success of CyrusOne during the past two years has often been described by Wojtaszek as “punching above our weight class.” Less than three years ago, CyrusOne had zero top public cloud customers. During the last six quarters, the data center REIT has been able to blow past its historical leasing averages in large part by signing nine of the ten largest cloud providers.

    The slide below was intended to address concerns voiced by some analysts that CyrusOne was “buying the business” at lower returns than competitors were willing to accept. The comparison points out that a lower cost structure more than balances out the ramp-up period for customer rents.

    Source: CyrusOne March 2017 Design Build Forum

    There certainly will be an ongoing debate regarding cost per megawatt and apples-to-apples data center comparisons. However, it has become crystal clear that the ability to deliver large data halls within months of inking a contract has become an edge for CyrusOne in landing hyperscale cloud deals.

    Check back next week for another installment of DCK Investor Edge, our weekly column about investing in data centers.

    In addition to covering investing and business news for Data Center Knowledge, Bill Stoller is an Expert Contributor for REITs on Seeking Alpha. The information contained in this article is not investment advice.

    Disclaimer: Bill Stoller’s REITs 4 Alpha Seeking Alpha Marketplace portfolio includes: COR, CONE and DFT. A member of his household in a retirement account owns: COR, CONE and DFT.

    8:00p
    How Rear-Door Heat Exchangers Help Cool High-Density Data Centers

    When the national weather services for Denmark and Switzerland upgraded their computing capacities, they each turned to supercomputers that are cooled by internal heat exchangers.

    It doesn’t take a supercomputer to justify liquid cooling, however. Heat exchangers have been used inside server cabinets for many years to dissipate heat and reduce the cooling needed from computer room air handler (CRAH) units. Recent advances are causing data center managers who may have dismissed them as risky to take a second look.

    Rear door heat exchangers (RDHx) are being used for dense server environments in any data center where racks draw 20 kW or more of power. “That level of usage is typical of organizations conducting intense research or mining bitcoins,” says John Peter “JP” Valiulis, VP of North America marketing, thermal management, for Vertiv (formerly Emerson Network Power).

    Cooling for data centers with high power density per rack is an especially timely subject today, as machine learning software starts making its way into enterprise and service provider data centers. Training machine learning algorithms requires massive, power-hungry GPU clusters, pushing data center power density far beyond the average 3 kW to 6 kW per rack.

    Read more: Deep Learning Driving Up Data Center Power Density

    RDHx systems target mission critical, high transaction rate work that demands the smallest possible server count. “Education, government and, in particular, the defense sector, are classic candidates for RDHx,” says Mark Simmons, director of enterprise product architecture at Fujitsu. “Industries that don’t want to run massive quantities of water throughout the data center” should be interested, too.

    For racks with less energy usage, RDHx systems may not be cost effective, Simmons continues. “Most data centers use only 3 kW to 6 kW per rack. Even if they used 10 kW per rack, RDHx would be expensive.”

    Heat Removal

    RDHx systems make economic sense for intense computing applications because they excel at heat removal.

    Typical RDHx systems are radiator-like doors attached to the back of racks, with coils or plates that provide direct heat exchange via chilled water or coolant. “This method of heat dissipation is very efficient because it places heat removal very close to the heat source,” Valiulis says. Consequently, it enables a neutral room, without the need for hot or cold aisles.
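
    For a chilled-water door, the heat the door can absorb follows directly from the sensible-heat relation Q = m·cp·ΔT applied to the water loop. The sketch below is illustrative only; the flow rate and temperature rise are assumed values, not specifications for any particular product.

        # Sensible heat removed by a water-cooled rear door: Q = m_dot * c_p * dT.
        # The example flow and temperature rise are assumptions, not vendor specs.
        GPM_TO_KG_PER_S = 0.0631   # 1 US gallon/minute of water is about 0.0631 kg/s
        CP_WATER = 4.186           # specific heat of water, kJ/(kg*K)

        def door_heat_removal_kw(flow_gpm, delta_t_c):
            """Heat absorbed by the door's water loop, in kW."""
            return flow_gpm * GPM_TO_KG_PER_S * CP_WATER * delta_t_c

        # e.g. 10 gal/min with a 10 degree C rise across the coil:
        print(f"{door_heat_removal_kw(10, 10):.1f} kW")   # ~26 kW, enough for a dense rack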

    They are so efficient that Lawrence Berkeley National Laboratory (LBNL) suggests it may be possible to eliminate CRAH units from the data center entirely. In an internal case study 10 years ago, server outlet air temperatures were reduced by 10°F (5.5°C) to 35°F (19.4°C), depending on the server workload, coolant temperature, and flow rate. In that example, 48 percent of the waste heat was removed.

    The technology has improved in the past decade. Today, Simmons says, “RDHx can reduce the energy used for cooling by 80 percent at the racks, and by 50 percent in the data center overall.”

    Technological Advances

    Adding RDHx systems to existing racks is possible.

    Liebert’s XDR door replaces existing back doors on racks by Knurr and other leading manufacturers. This passive door provides up to 20 kW of cooling, using a coolant that changes phase into a gas at room temperature, thus reducing concerns about introducing liquid into the data center.

    Other manufacturers are designing heat exchanger doors for their own racks. The Fujitsu RDHx system, for example, can be retrofitted onto PRIMERGY racks, which contain high performance Fujitsu CX400 servers. “We add a backpack to a standard 19-inch rack, making it deep enough to contain the heat exchanger,” Simmons says.

    That isn’t the only advance. “This field-replaceable system uses liquid-to-liquid heat exchange to dissipate heat directly, rather than by air flow,” Simmons says. “This removes heat quicker and reduces cooling needs. It’s very simple.”

    The backpack also is designed to prevent leaks. Like a double-hulled oil tanker, the shell contains any leak and triggers the patented leak detection system to send an alert.

    Fujitsu is using these backpacks in its European operations and expects to launch the system in the U.S. this fall.

    Run Cool, Run Fast

    As heat builds up inside cabinets, servers run slower. Adding RDHx, however, alleviates the problem for Fujitsu’s own high performance data centers. “They can run at maximum speed all the time,” Simmons says, because the heat is removed.

    Some in the industry have suggested RDHx systems can provide the extra cooling needed to allow them to overclock servers and therefore increase processing speeds.

    Air, Water or Coolant

    Initially, RDHx doors cooled servers passively via large radiators attached to the back of the racks, Valiulis says. “Those doors relied on the fans within the servers to remove the heat. In about the past three years, active doors became available that use their own built-in fans to pull heat through the servers.”

    Early systems, and many current ones, use chilled water to remove heat. Some recent versions use hot water (at 40°C) to remove heat. Others rely on coolants, like the popular R-410A. The next generation of RDHx systems is likely to explore even more efficient refrigerants.

    Liquid-to-liquid heat exchange is considered the most efficient.

    Overall Benefits

    RDHx systems are good solutions for high performance computing centers and dense server racks, but they also add value to less intensive computing environments.

    By efficiently removing heat, these cooling systems support increased density, which helps data centers decrease their footprints. As Simmons, himself a former data center manager, explains, “When RDHx systems are used, data centers can fill up the entire rack with servers. This typically isn’t done with air-cooled systems.”

    Data centers also now have the ability to more easily segment the physical space. For example, high performance computing may be consolidated in one area of the data center, which can be cooled with RDHx systems without adding CRAH units.

    RDHx systems are more efficient, less expensive and easier to install than CRAH systems, and may allow data centers to add capacity in areas where it otherwise would be impractical. “RDHx systems can make a lot of sense for data centers with some high density areas,” Valiulis says.

    This ability adds an important element of flexibility, especially for older data centers struggling to meet today’s power-intensive needs.

    Lawrence Berkeley National Laboratory (LBNL) evaluated passive heat exchangers several years ago. It reports that passive doors don’t require electrical energy and perform well at higher chilled water set points.

    According to its technology bulletin Data Center Rack Cooling with Rear-door Heat Exchanger, “Depending on the climate and piping arrangements, RDHx devices can eliminate chiller energy because they can use treated water from a plate-and-frame heat exchanger connected to a cooling tower.” Maintenance consists of removing dust from the air side of the heat exchanger and maintaining the water system at the chiller.

    Whether RDHx is effective depends on the ability to adjust the system to deliver the right amount of cooling. “The ability to adjust refrigerant offers higher protection and efficiency,” Valiulis says.
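
    One way to picture that adjustment is as a simple feedback loop on the door’s coolant valve. The snippet below is a hypothetical sketch, not any vendor’s control logic; the setpoint, gain, and starting position are invented values.

        # Toy proportional controller for a rear-door coolant valve (hypothetical values).
        SETPOINT_C = 27.0    # target rack outlet air temperature
        GAIN = 0.05          # change in valve opening per degree of error

        def adjust_valve(valve_open, outlet_temp_c):
            """Return a new valve position in [0, 1] based on measured outlet temperature."""
            error = outlet_temp_c - SETPOINT_C
            return max(0.0, min(1.0, valve_open + GAIN * error))   # open more when running hot

        valve = 0.5
        for measured in (32.0, 30.5, 28.0, 26.5):   # rack cooling down over successive readings
            valve = adjust_valve(valve, measured)
            print(f"outlet {measured:.1f} C -> valve {valve:.2f}")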

    RDHx Isn’t for Everybody

    “RDHx systems are not being well-adopted,” Valiulis says.

    This cooling method is best for high performance computing platforms. “The very large, commoditized computing companies like Google, Amazon and Network Appliance aren’t embracing this technology because they don’t have a need for really highly dense, fast infrastructures,” Simmons says. For those applications, “good enough” computing actually is good enough.

    Colocation host Cosentry cites other reasons when it opted not to use RDHx in its facilities. Jason Black, former VP of data center services and solutions engineering, now VP and GM at TierPoint, explains that RDHx systems don’t provide the flexibility Cosentry needs as it lays out the data center floor.

    “Typically,” Black elaborates, “a rear door heat exchanger requires hard piping to each cabinet door. This creates a problem when colocation customers move out and we need to repurpose the space.” Today’s flexible piping could simplify, but not eliminate, the piping issue.

    Black says he also is concerned about introducing liquid into the data hall. The IBM heat eXchanger door, for example, holds six gallons of water and supports a flow rate of 8 to 10 gallons per minute. A catastrophic failure could drench a cabinet and the cabling underneath the raised floor. To avoid that possibility, Black says, “We have specifically designed our data centers with mechanical corridors to eliminate any water/coolant from the data hall space.”

    LBNL, in contrast, piped chilled water for its RDHx system underneath its raised floors using flexible tubing with quick disconnect fittings. Alternatively, overhead piping could have been used.

    RDHx also makes servicing the racks a hassle, Valiulis says. “You have to swing open a door to access each rack, and close it when you’re done.” That’s a minor inconvenience, but it adds two more steps to servicing every rack.

    Ensuring security is another concern. “Mechanical systems need maintenance at least quarterly,” Black points out. Cosentry data centers have a mechanical hall that enables maintenance technicians to do their jobs without coming into contact with customers’ servers, thus enhancing security. “Rear door heat exchangers would negate these security procedures,” Black says.

    Simmons, at Fujitsu, disagrees on two points. He says that once the new RDHx systems are set up, they are virtually maintenance free. “Fujitsu’s backpacks are, essentially, closed loop systems. You can lock the racks and still access the backpacks.”

    Future Cooling

    The practicality of RDHx for routine computer operations is becoming less of a discussion point as the industry develops newer, higher tech cooling solutions. In the relatively near future, server cooling may be performed at the chip level. Chip manufacturers are developing liquid-cooled chips that dissipate heat where it is generated, thus enabling more compact board and server designs.

    For example, Fujitsu’s Cool-Central liquid cooling technology for its PRIMERGY servers dissipates 60 to 80 percent of the heat generated by the servers. This cuts cooling costs by half and allows data center density to increase between 250 percent and 500 percent. The water circulating through these chips routinely reaches 40°C but still provides ample cooling.

    Looking further into the future, university researchers are investigating quantum cooling. A team at the University of Texas at Arlington has developed a computer chip that cools itself to -228°C without using coolant when operating in room temperatures. (Previous chips had to be immersed in coolant to achieve that feat.)

    To achieve this intense cooling, electron filters called quantum wells are designed into the chips. These wells are so tiny that only super-cooled electrons can pass through them, thus cooling the chip. The process is in the early research stage but appears to reduce chip energy usage tenfold.

    Implementation Checklist

    In the meantime, before quantum wells and liquid-cooled chips become commonplace, high performance data centers can improve performance, increase density and reduce cooling costs by installing rear door heat exchangers.

    To help these systems operate at maximum efficiency, LBNL recommends installing blanking panels in server racks to prevent hot exhaust air from short-circuiting back to equipment intakes. It also advises scrutinizing raised floor tile arrangements to ensure air is directed where it is needed and increasing the data center setpoint temperature. To monitor the system and allow adjustments that improve performance, an energy monitoring and control system is important.

    Ensuring hot aisle/cold aisle containment is less important when heat exchangers are used in the server cabinets, although that arrangement may still be valuable. “Using an RDHx can sufficiently reduce server outlet temperatures to the point where hot and cold aisles are no longer relevant,” the LBNL bulletin reports. Typically, however, CRAH units are still in place and are augmented by RDHx systems.

    Once RDHx systems are installed, check for air gaps. LBNL reports that RDHx doors don’t always fit the racks as tightly as they should. Seal any gaps around cabinet doors with tubing to increase efficiency, and measure temperatures at the rack outflows before and after heat exchangers are installed. Also monitor the rate of flow through the system to ensure the RDHx is functioning properly and to correlate liquid flow rates and server temperatures. Ensure that coolant temperatures at each door are above the dew point to prevent condensation, and check the system periodically for leaks.
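
    The dew-point check in particular is easy to automate. The sketch below uses the standard Magnus approximation for dew point; the room conditions and coolant temperatures are example values, not LBNL recommendations.

        import math

        # Dew point via the Magnus approximation; coefficients valid for ordinary room conditions.
        A, B = 17.62, 243.12

        def dew_point_c(temp_c, rel_humidity_pct):
            """Approximate dew point (deg C) from dry-bulb temperature and relative humidity."""
            gamma = math.log(rel_humidity_pct / 100.0) + A * temp_c / (B + temp_c)
            return B * gamma / (A - gamma)

        def condensation_risk(coolant_temp_c, room_temp_c, rh_pct):
            """True if the coolant is at or below the room dew point."""
            return coolant_temp_c <= dew_point_c(room_temp_c, rh_pct)

        # A 24 C room at 50% relative humidity has a dew point near 13 C,
        # so 15 C coolant is safe while 12 C coolant risks condensation.
        print(condensation_risk(15.0, 24.0, 50.0))   # False
        print(condensation_risk(12.0, 24.0, 50.0))   # True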

    Conclusion

    RDHx can be a strategic piece of data center hardware, or an expensive solution to a minor problem, depending upon the data center. Before considering RDHx, think carefully about current and future needs and know what you’re trying to accomplish, Simmons says. That will determine whether RDHx is right for your organization now, or in the future.

    Gail Dutton covers the intersection of business and technology. She is a regular contributor to Penton publications and can be reached at gaildutton@gmail.com.

    8:30p
    Dell Technologies: Apple, or Just Bananas? 

    Michael Beesley is CTO of Skyport Systems.

    Today, Dell Technologies is one of the biggest hardware and software technology companies in the world with estimated annual revenues in the $60 billion range, putting it neck and neck with Huawei for the title of world’s biggest private tech company. This has truly been an amazing journey for a company founded in 1984 on a $1,000 investment to sell IBM PC clones out of a dormitory room.

    Despite more than two decades of incredible success built around cost-effective manufacturing, rapid product introduction, industry-best supply chain management and a direct-to-consumer business model, the company has recently struggled to maintain relevance and to avoid commoditization of its hardware-centric business. In 2013, Michael Dell started taking a set of actions that amount to one of the biggest gambles ever taken in the world of technology; let’s look at what he did, why, and how it might play out going forward.

    2013: Dell is taken private through a private-equity-funded leveraged buyout; at $24 billion it goes down in history as the largest technology buyout ever.

    2016: Dell completes the $67 billion acquisition of EMC Corp, the largest technology merger in history, bringing EMC, VMware, RSA, Pivotal, SecureWorks and Virtustream under the Dell Technologies umbrella.

    So why on earth did Michael Dell and his team take on an eye-watering, terrifying mountain of debt to transform and enlarge the company when others are pursuing the reverse strategy?

    The answer, I believe, has everything to do with the rapid change being experienced in the enterprise data center, which has historically been a major profit center for infrastructure equipment and software vendors. Several things are changing:

    Public Cloud (IaaS and SaaS) Growth

    Public cloud (and the SaaS services hosted therein) continues to grow at a rapid rate and is steadily eating into the enterprise data center, which some predict is doomed.

    Virtualization and ‘Software Defined’

    Technology has evolved rapidly to the point that complex, critical functionality traditionally embedded in specialized storage, networking and security products is now offered as virtualized software running in and on the hypervisor of a server.

    Commoditization of Hardware

    Hardware system innovation has been at a standstill for years, with every vendor having access to the same reference designs from Intel, Broadcom and others. As a result, margins for systems have collapsed to commodity levels.

    Dell’s Success Case: ‘Apple’

    The success case for Dell is that it manages to stabilize the shift to cloud such that the world’s compute and storage remains evenly split between the public cloud and on-premises enterprise deployments. Dell establishes a walled-garden, integrated stack within the enterprise, spanning networking, storage, compute, virtualization and security, and captures 99 percent of global profit share from a much lower market share (as Apple has done in smartphones). It is worth noting that despite years of competition from Microsoft’s System Center and from OpenStack (R.I.P. – enough said), VMware still commands 90-plus percent profit share in virtualization and orchestration. In this scenario, all competitors exit the data center market and Dell enjoys a monopoly in perpetuity within the enterprise, becoming the world’s most valuable company.

    Dell’s Failure Case: ‘Just Bananas’

    The failure case involves endless migration to the public cloud, to the point that the enterprise data center ceases to exist. Without the enterprise market, the global profit pool for vendors shrinks by an order of magnitude even though global compute and storage continue to increase. This is due to the scale effects and buying power of the small number of public cloud providers, combined with their penchant and ability to engineer their own solutions if and when a commercial option becomes too expensive. In short, shops like AWS, Azure and GCE are the worst type of customer for an infrastructure or software vendor. The resulting business for Dell operates at such thin margins that it cannot fund research and development or generate enough cash from operations to pay the interest on its debt. It goes bankrupt.

    It will be fascinating to see how this all plays out over the next several years; it is very hard to predict the outcome, but one thing is for sure: Dell’s recent moves will either go down in history as the bravest, wisest, most profitable ever taken in tech, or as the most foolish, reckless, irresponsible, cash-burning, company-destroying moves ever.

    Only time will tell.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

