Data Center Knowledge | News and analysis for the data center industry
 

Monday, August 21st, 2017

    12:00p
    Uptime in Space and Under the Sea

    The SpaceX Dragon spacecraft that took supplies to the International Space Station last week had something new on board: the first commercial computers headed for space. Usually, computers for space missions are special-purpose hardware hardened to withstand everything from the g-force on take-off to zero-gravity and the cosmic radiation – Earth’s atmosphere is the reason servers in your data center aren’t affected by that last one. Hardening a computer for space travel can take years, said Dr. Eng Goh, CTO of SGI, the supercomputer outfit Hewlett Packard Enterprise acquired last year.

    “They spend so long hardening for the harsh environment of space that the computers they use are several generations old, so there’s a huge gap in performance,” Goh said in an interview with Data Center Knowledge. “For some missions, you could spend more time hardening the system than you use it for.” That specialized one-off hardware is also expensive and doesn’t let you take advantage of the economies of scale technology typically offers.

    Goh is hoping to equip astronauts with the latest available hardware that can be loaded with standard software for general-purpose computing, plus intelligent, adaptive controls that shift the burden of system hardening from hardware to software. The two water-cooled Apollo 40 servers HPE sent to spend a year on the space station came straight from the factory, with no hardware hardening, after passing the battery of NASA tests required to go into orbit; no modification was needed, which means they should do well in other difficult locations too.

    The Spaceborne Computer, as Goh calls it, is an experiment to discover what impact the harsh environment in space actually has on unhardened hardware, and what you can do in software to reduce that impact. The idea is to reduce the servers’ power consumption and operating speed when higher levels of radiation are detected, to see if that’s enough to keep them running. “Can we harden the computer using software? That’s the question we want to answer.”
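
    In practice, “software hardening” of this kind amounts to a feedback loop: watch health indicators such as corrected-memory-error counts and throttle the machine when they spike, then return to full speed once conditions settle. The sketch below is only an illustration of that general idea, not HPE’s actual control software; the polling interval, the power caps, the threshold, and the read_corrected_errors/set_power_limit helpers are hypothetical stand-ins for platform-specific interfaces.

        import time

        # Hypothetical stand-ins: a real implementation might sum the ce_count
        # files under /sys/devices/system/edac/mc/ on Linux and apply a RAPL or
        # firmware power cap instead of printing.
        def read_corrected_errors() -> int:
            return 0

        def set_power_limit(watts: int) -> None:
            print(f"power cap -> {watts} W")

        NORMAL_WATTS, SAFE_WATTS = 300, 150   # illustrative power caps
        ERRORS_PER_MINUTE_LIMIT = 5           # illustrative trip point

        def control_loop(poll_seconds: int = 60) -> None:
            last = read_corrected_errors()
            while True:
                time.sleep(poll_seconds)
                now = read_corrected_errors()
                rate = (now - last) / (poll_seconds / 60.0)  # errors per minute
                last = now
                # Throttle when the error rate suggests an elevated-radiation
                # event; go back to full power once readings look normal again.
                if rate > ERRORS_PER_MINUTE_LIMIT:
                    set_power_limit(SAFE_WATTS)
                else:
                    set_power_limit(NORMAL_WATTS)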

    Hardening systems for difficult environments is becoming a more mainstream issue, Christopher Brown, CTO at the Uptime Institute, a data center industry group, said. As society becomes increasingly dependent on compute and communications technology – satellite communications, GPS, computer-assisted aircraft navigation, and so on — such research grows more relevant in places outside a few niche applications. “It is really moving beyond the fringe of people and groups with very specialized purposes to a point [where] it can impact all people.”

    Lessons to Come for Space and Earth Computers

    The servers that flew up on the SpaceX Dragon sit in a locker connected to power, Ethernet, and the station’s chilled-water system. The locker, by the way, isn’t designed to protect the machines; it’s there just to store them. They have SSDs rather than hard drives, which could be affected by zero gravity and ionizing radiation; there’s a mix of smaller, faster drives and larger, slower drives to see which works better in space; and the interconnects are InfiniBand, because copper connections could be more vulnerable to radiation than fiber. The team did tweak CPU, memory, and SSD parameters more than usual, but the servers are running a standard build of RHEL 6.8.

    General-purpose servers would be useful for future astronauts, so it’s an interesting potential growth market for a company like HPE. “The market isn’t that small if commercial space travel goes the same way air travel has,” Goh pointed out, and space exploration is also where you really need edge computing. If we send an expedition to Mars, the 20-minute latency will mean earthbound systems won’t be suitable for any real-time processing like image recognition or predictive analytics.

    But the lessons from space will also be useful down here on Earth. HPE hopes to apply what it learns to harsh earthbound environments and, more generally, to teach computers to take better care of themselves. “The high-level goal is to give computers a self-care intelligence that tries to adapt to the environment it’s in through sensors and early warning systems,” Goh said. “Today we set aside some compute cycles for anti-virus; we should also set aside cycles for the computer to care for itself and defend itself. If you have, say, a billion operations per second, are you willing to set aside half a percent for anti-virus and maybe five or eight percent for self-care?”
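
    As a back-of-the-envelope illustration of the budget Goh describes (his hypothetical billion-operations-per-second machine, not a measured figure):

        ops_per_second = 1_000_000_000            # Goh's hypothetical machine

        anti_virus = ops_per_second * 0.005       # half a percent
        self_care_low = ops_per_second * 0.05     # five percent
        self_care_high = ops_per_second * 0.08    # eight percent

        print(f"anti-virus: {anti_virus:,.0f} ops/s")    # 5,000,000
        print(f"self-care:  {self_care_low:,.0f} to {self_care_high:,.0f} ops/s")
        # -> 50,000,000 to 80,000,000 ops/s reserved for the machine to tend itself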

    Microsoft Learning in Another Extreme Environment

    Those goals echo some of the goals of Microsoft’s Project Natick. When the software giant’s researchers put a 42U rack of servers retired from Azure data centers inside a sealed enclosure and sank it to the bottom of the ocean half a mile from shore, one aim was to learn how to speed up data center deployment in any environment.

    “Today, it takes a long time to deploy a large data center,” Ben Cutler, one of the Project Natick researchers, told us. “It can take two years, because I’ve got to find some place to put it; I’ve got to get the land; I’ve got to get my permits; I’ve got to build buildings. Even if I have cookie-cutter data centers that I build the same everywhere, I still have to deal with the fact that the land is different, the climate is different, the work rules and the building codes are all different, how the power comes in is different. It just takes a long time.”

    Sometimes there’s a spike in demand for cloud services in an unexpected place, and Microsoft wants to be able to respond as quickly as possible, Cutler went on. “Our motivation was, can we develop the ability to deploy data centers at scale, anywhere in the world, within 90 days from decision to power-on?”

    Project Natick, Microsoft’s experimental underwater data center, being deployed off the coast of California (Photo: Microsoft)

    Microsoft has developed a process for installing fully populated racks straight into new Azure data centers for the same reason. “Something logical that we don’t usually do is to treat buildings as a manufactured product,” Cutler pointed out. “With a laptop or a phone, we pretty much know exactly how that’s going to behave, and how much it will cost before we build it; and you can get one quickly because when you order it, it’s pulled off the shelf and shipped somewhere. We want to get the same thing for data centers.”

    Designing for Hands-Off Operation

    The ocean isn’t as harsh an environment as space, and it can be much milder than dry land, with its hurricanes, temperature swings, and other extreme weather. That means in the long run it could even be cheaper to make a data center reliable under water than on land. For one thing, cooling could cost only 20 percent of what companies spend today, Cutler said. Today’s data centers mostly rely on air cooling, which means they run relatively warm. “Our hypothesis is that if I have something that’s consistently very cold, then the failure rates are lower.”

    Server failure rates take on a whole new level of significance here. Underwater data centers will be sealed units designed to work without maintenance for the life of the servers: five or even ten years. “Historically, failure rates didn’t matter too much if there was going to be a new and better PC every year,” Cutler said. Today, however, hardware isn’t changing as quickly, so the priority shifts to machines that can run reliably for longer periods to keep costs down.

    With no humans going inside the unit to do maintenance, a Project Natick chamber is filled with nitrogen and has virtually no humidity inside. Humidity isn’t just bad for hard drives; one of the main causes of data center failure is corrosion of the connectors in the electronics. Over time, moisture gets between two pieces of metal connecting one device to another, eventually pulling them apart and causing a failure. You also can’t make the air too dry, because some hard-drive makers use motor grease that needs a little moisture. “If you go down below 10-percent humidity, it starts to turn into powder, and then you have another kind of failure.”

    A sealed rack eliminates dust problems, so you don’t need air filters, and the rack can be simpler, without all the quick-release connections for disks and server blades that give technicians the ability to quickly take things apart and put them back together. All that easy access comes with extra cost.

    Doing away with data center staff can prevent many problems on its own. “It’s often possible to tell where maintenance has been happening in a data center; if people work in an area, you’ll start to see increased failure in that area two to three weeks later,” Cutler said. “Whenever you touch something, there’s a risk that something else is affected.”

    For some scenarios where you need edge computing, whether that’s in space, on an oil rig, or down a mine, sealed units look like an obvious choice. After a seismic survey on an oil rig, terabytes of data usually travel back to head office on hard drives for processing. Moving that processing workload to the rig itself could give you quicker results. “It’s possible that rigs on the ocean surface will disappear and become automated platforms on the sea bed,” Cutler noted. “You’ll need a lot more compute power to make that work.”

    What Microsoft and HPE are learning about fully automated, lights-out data centers in space and underwater could help standard data centers too, whether that’s through automation and self-healing software or through sealed units. Teams inside Microsoft are already thinking about ways to apply takeaways from Project Natick to designing the company’s data centers on dry land, Cutler said. “We take all this back to the data center design community inside Microsoft and try to understand if any of these things make sense to deploy on land to give us an economic advantage, whether it’s being more environmentally friendly or having lower costs.”

    3:00p
    DCK Investor Edge: Why CoreSite is REIT Royalty

    If you’re a regular reader of DCK Investor Edge, you know that investors who own data center REIT shares have seen better year-to-date returns than more traditional commercial real estate asset classes like office, retail, apartments, net-lease, and healthcare.

    Data centers sit squarely at the intersection of real estate and technology, providing space, power, and access to networks and cloud service providers that attract coveted enterprise customers.

    When it comes to the various REIT sub-sectors, data centers are at the top, while the broader real estate sector is treading water, down 0.6 percent. Data centers as a sub-sector are chalking up another year of outperformance, following similar gains in 2016.


    Source: Seeking Alpha – Hoya Capital Real Estate Aug. 18, 2017

    Many investors rely on mutual funds and ETFs that own a broad basket of REITs. However, this can leave them overweight in large-cap mall REITs, shopping centers, lodging, and other underperforming sectors leveraged to GDP, employment, and consumer spending.

    Data centers enjoy the tailwinds of cloud computing, exponential wireless data growth, streaming media and OTT content, and, increasingly, the Internet of Things, big data, AI, and virtual reality, all of which help drive outsize returns. The consensus among analysts and research firms is that the shift from legacy on-premises data centers toward hybrid cloud and distributed IT architecture is still in the early innings.

    CoreSite: Top REIT in Top Sector

    Connectivity-focused CoreSite Realty (COR) continues to deliver impressive results quarter after quarter. It is easy to be lulled into thinking that it is just business-as-usual.

    CoreSite’s ability to grow revenues, earnings (FFO per share), and dividends at double-digit compound annual growth rates, despite year-over-year comparisons becoming increasingly difficult, has been stunning.

    Source: CoreSite August 2017 presentation (all slides)

    Any single company, or REIT sector, can have a windfall year, but it’s rare to witness such a string of consistent out-performance for any asset class.

    Field-Leading Returns

    CoreSite in particular, and data centers collectively, have delivered incredible sustained performance for the past few years.

    The August 2017 KeyBanc Capital Markets report titled The Leaderboard shows just how much CoreSite stands out in the field. The report evaluates 178 equity REITs across 15 sectors, including a recap of performance for both three-year and five-year time periods. CoreSite is ranked as the top performing REIT in both time frames, delivering total returns of 241.5 percent and 374.3 percent, respectively.

    The three-year REIT leaderboard also shows data center providers CyrusOne and DuPont Fabros Technology holding down the second and third rankings, with total returns of 156.3 and 151.7 percent, respectively. Notably, Equinix, QTS, and Digital Realty Trust are ranked sixth, ninth, and twelfth, with 137.2, 100.6, and 99.4 percent total returns, respectively.

    It should come as no surprise the data center REIT sector was ranked the top sector, at number five on the list overall, delivering an average total return of 144.5 percent.

    High-Quality Revenues

    Not all data centers follow the same business model. This leads to differences in valuation based upon the quality of the earnings. Wholesale data center providers tend to have less predictable, or “lumpier” leasing, depending on a smaller number of larger leases to grow revenues.

    Read more: DCK Investor Edge: Here’s How Data Center REITs Did in the First Half of 2017

    On the other hand, retail colocation data centers, which focus on network, cloud, and enterprise connectivity and tend to have a larger number of smaller leases, sign leases for shorter terms, generally two or three years. But those leases are quite sticky, as tenants tend to land and expand.

    Retail colocation providers such as CoreSite generally derive 75 to 90 percent of new lease revenues from existing tenants each quarter. Additionally, growing the number of cross-connects creates industry ecosystems which become more valuable over time. The monthly recurring fees charged for interconnection are a high-margin business. After reaching a critical mass, interconnection revenues often grow at a faster rate than colocation revenues from space and power.

    How About Future Trends?

    Investors and analysts tend to be forward-looking, since past performance cannot be relied upon to predict future results; and the future appears to be quite bright for CoreSite and its much larger peer, global interconnection leader Equinix. This past week, Equinix released the findings of the first study intended to quantify the growth of private data exchange between businesses compared with the growth of traffic on the public internet. (Private Data Exchange is Outpacing Internet’s Growth)

    The numbers were astounding:

    “The capacity for private data exchange between businesses is outpacing the public internet, growing at nearly twice the rate and comprising nearly six times the volume of global IP traffic by 2020.”

    The study also found that private networks are growing roughly 10 times faster than Multi-Protocol Label Switching (MPLS), the legacy model for business connectivity (45 percent versus 4 percent).

    These trends play into CoreSite’s strengths, including its multi-cloud capability.

    The ability to connect privately to Amazon Web Services, Microsoft Azure, and Google Cloud Platform reportedly acts as a magnet for financial services, content, social media, healthcare, and traditional enterprise customers. These direct links are faster, more reliable, and more secure than the internet.

    This becomes a virtuous cycle, where the value of each location for CoreSite’s customers continues to increase as the number of networks and cloud providers grows over time.

    REIT Investor Edge: Retirement Accounts

    In addition to price appreciation driven by earnings growth, REIT shareholders love a well-covered and growing dividend. CoreSite’s 35 percent dividend per share compound annual growth rate (CAGR) from 2011 to 2017 is nothing short of stellar.
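
    To put that figure in perspective, here is a quick compounding check. It assumes six full years of annual compounding between 2011 and 2017; the rate is CoreSite’s as cited above, while the year count and compounding frequency are assumptions for the arithmetic.

        cagr = 0.35                         # 35 percent dividend-per-share CAGR
        years = 6                           # 2011 -> 2017, annual compounding assumed
        growth_multiple = (1 + cagr) ** years
        print(f"{growth_multiple:.2f}x")    # ~6.05x: the per-share dividend roughly sextuples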

    All REITs must pay out a minimum of 90 percent of taxable income as dividends to shareholders. In return, qualified REIT earnings are not taxed at the corporate level.

    See also: DCK Investor Edge: Why Money is Pouring Into Data Centers

    However, individuals holding REITs in a taxable account do not get the lower tax rate that applies to qualified C-Corp dividends. This makes REITs attractive to hold in a retirement account, where dividends can compound on a tax-deferred basis and be reinvested quarterly to boost compounded annual returns even higher.

    The Bottom Line

    Many REITs deliver the majority of total return through dividends rather than price appreciation. Data centers have shown the ability to outperform through sustained growth.

    CoreSite’s three-year performance breaks down into 207.3 percent from share-price appreciation and 34.3 percent from dividends. The five-year total return reflects a 296.2 percent increase in share price plus 78.1 percent from dividends.
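
    Those components line up with the KeyBanc totals cited earlier; a quick check, assuming total return is simply price appreciation plus dividend return (the small three-year difference is rounding):

        # Figures quoted above, in percent
        three_year = {"price": 207.3, "dividends": 34.3}   # total cited: 241.5
        five_year = {"price": 296.2, "dividends": 78.1}    # total cited: 374.3

        for label, parts in (("3-year", three_year), ("5-year", five_year)):
            print(f"{label}: {sum(parts.values()):.1f}%")
        # -> 3-year: 241.6%  (vs. 241.5% cited)
        # -> 5-year: 374.3%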

    In other words, owning CoreSite shares in a taxable account and holding them for long-term price appreciation has paid off nicely too.

    3:30p
    In the Face of Ransomware, Is Your Cloud Data Safe?

    Peter Smails is Vice President of Marketing and Business Development for Datos IO.

    I’ve spent much of my career focused on enterprise backup, recovery and disaster recovery. Two big shifts in the market have taken many vendors and IT professionals by surprise: First, new application platforms are not just cloud-first, but often touch multiple clouds. Second, ransomware attacks against these same platforms are emerging as a very significant threat.

    Prevention is a critical part of an overall protection strategy to combat ransomware. But given the rapidly evolving threat, it’s likely that even organizations with strong security technology and policies will be affected.

    While CIOs and IT administrators evaluate the strategies and dangers posed by these attacks, there are additional steps to help ensure protection through data backup. Backup strategies won’t necessarily prevent an attack from occurring, but can serve as a crucial last line of defense enabling organizations to destroy all affected data and then restore it from a backup taken before the data was infected.

    The rapid rise of multi-cloud infrastructure and next-generation cloud applications, in which data resides in multi-cloud and hybrid cloud environments, adds another layer of complexity to the challenge of ransomware. A common misconception among CIOs is that data in the cloud is immune to ransomware attacks, but that is not the case.

    Next-generation cloud applications are quickly becoming a new target for criminals. Cloud applications are running businesses and are big, high-value targets with tens or hundreds of terabytes of sensitive data. Financial institutions, for instance, store years of account or trade records and create unified views of the customer. And retailers have created critical e-commerce capabilities that drive loyalty programs and generate customized offers.

    Compounding the problem, these applications are almost always directly connected to the Internet. The underlying databases are open to query by different systems and are often exposed to attack because of that. These applications have also emerged quickly, driven by business teams rather than IT, and they often lack the robust processes for security, availability, backup, recovery, and other enterprise functions that are standard for on-premises infrastructure.

    Recently, more than 34,000 MongoDB servers were compromised, with attackers demanding $150 to $500 in ransom to restore data. And a 2016 study found that over 5,300 Hadoop installations were exposed to the internet and had insecure configurations.

    Cloud providers recommend maintaining independent backups of databases and applications so organizations can recover applications independent of any specific cloud. Effective backup and recovery is critical, but traditional approaches to backup don’t provide effective recovery or protection from ransomware for these applications, for a variety of reasons. For instance, traditional LUN- or VM-centric backup solutions are not designed for the cloud, making the process incompatible with the distributed, scale-out nature of most modern applications. Traditional backup solutions also do not address the eventually consistent nature of the databases today’s modern applications run on, wherein data is committed to one node and then eventually written to other nodes for redundancy, but may not be present on every node at any given moment.

    Modern data protection solutions must therefore be application-centric in order to create a consistent point-in-time backup of the database. Other issues include lengthy restore processes and unreliable point-in-time recovery, because traditional backup cannot quickly restore distributed databases, especially across or between clouds, to the required point in time prior to infection.
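
    As one concrete, deliberately simplified illustration of database-level (rather than LUN- or VM-level) backup, the sketch below wraps MongoDB’s own mongodump and mongorestore tools to produce and restore timestamped, compressed archives. The connection URI, backup directory, and file-naming convention are assumptions for the example, and a plain mongodump does not by itself solve the cross-node consistency problem described above for large sharded clusters; the sketch only shows the application-centric shape such tooling takes.

        import subprocess
        from datetime import datetime, timezone
        from pathlib import Path

        BACKUP_DIR = Path("/backups/mongodb")               # hypothetical location
        MONGO_URI = "mongodb://db.example.internal:27017"   # hypothetical URI

        def take_backup() -> Path:
            """Dump the database to a timestamped, gzip-compressed archive."""
            BACKUP_DIR.mkdir(parents=True, exist_ok=True)
            stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
            archive = BACKUP_DIR / f"backup-{stamp}.archive.gz"
            subprocess.run(
                ["mongodump", f"--uri={MONGO_URI}", f"--archive={archive}", "--gzip"],
                check=True,
            )
            return archive

        def restore_backup(archive: Path) -> None:
            """Restore a previously taken archive, replacing the live collections."""
            subprocess.run(
                ["mongorestore", f"--uri={MONGO_URI}", f"--archive={archive}", "--gzip", "--drop"],
                check=True,
            )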

    How can you ensure reliable backup and recovery capability for next-generation multi-cloud applications? Here are the best practices and capabilities to put at the top of your list:

    • Focus on applications rather than infrastructure. Infrastructure is quickly becoming invisible, particularly in the cloud. Backup and recovery need to work at the application level to be effective.
    • Stay cloud-independent. Ransomware, cloud outages and common business sense make it desirable to work across major public cloud platforms, as well as hybrid cloud and private cloud platforms.
    • Architect for flexible recovery to any cloud or cluster topology. You never know when you’ll need to use a different cluster, different data center or different cloud for recovery.
    • Support any point-in-time recovery. This is critical for ransomware, and important for any outage in general.
    • Use infrastructure efficiently. The cloud can be cost-effective if used efficiently. Many approaches to backup make copies of already redundant data. You need a solution that provides efficient backup storage for any cloud, NFS, or object storage platform.

    As enterprises continue their journey to build and move applications to the cloud, following security best practices is crucial. It’s important to review these practices and take steps to limit exposure, but ransomware will get through with almost statistical certainty. And when that happens, backup will be the last line of defense that enables you to delete the data under attack and restore from an earlier copy. If backup and recovery is not already part of your strategy against ransomware, it should be.
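
    Tying that back to the earlier backup sketch: recovering from ransomware then comes down to choosing the newest archive taken before the data was encrypted and restoring it. Again an illustrative sketch, assuming the timestamped naming convention used above:

        from datetime import datetime, timezone
        from pathlib import Path

        BACKUP_DIR = Path("/backups/mongodb")   # same hypothetical location as above

        def latest_clean_backup(infected_at: datetime) -> Path:
            """Return the newest archive taken strictly before the infection time."""
            candidates = []
            for archive in BACKUP_DIR.glob("backup-*.archive.gz"):
                stamp = archive.name[len("backup-"):-len(".archive.gz")]
                taken = datetime.strptime(stamp, "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)
                if taken < infected_at:
                    candidates.append((taken, archive))
            if not candidates:
                raise RuntimeError("no backup predates the infection")
            return max(candidates)[1]

        # Usage: restore_backup(latest_clean_backup(infected_at)), with restore_backup
        # taken from the earlier sketch.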

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

