Data Center Knowledge | News and analysis for the data center industry
Monday, November 7th, 2016
1:00p
Data Center REITs Q3 Update – Is the Sky Really Falling?
Data center REITs continue to report solid earnings and robust leasing activity. So, why has Mr. Market sold off the entire sector during the past few weeks? What has changed between the all-time highs in July and the most recent quarter ended Sept. 30, 2016?
The short answer is that REIT share prices tend to fall in a rising interest rate environment. Investors are concerned that a hawkish Fed may not be “one and done,” and that a December rate hike just might be the first in a series of increases in 2017.
However, Wall Street also hates uncertainty. The two U.S. presidential candidates have proposed distinctly different approaches to the U.S. economy, taxation, global trade, and energy policy.
Market Jitters
If you have been feeling anxious regarding your investment portfolio — including your data center REIT shares — you are not alone.
On Friday, Bloomberg reported that US stocks posted their longest losing streak in 36 years, as anxiety surged over the presidential election. The S&P 500 Index has lost 3.1 percent of its value over the past nine sessions.

[Chart. Source: Bloomberg.com]
The Chicago Board Options Exchange (CBOE) SPX Volatility Index has spiked up to 22.90 over the same time period.
Since it is difficult to fight the proverbial ticker tape, real estate investment trusts have seen valuations fall across the board.
Frothy REIT Valuations
During the first half of 2016, share prices for the six data center REITs were bid up 50 percent on average, as shown on the chart below.

[Chart. Source: YChart created by author]
The highest-flying REIT sectors have all been hit particularly hard by this recent sell-off. This includes regional malls, net-lease, multifamily, and industrial REITs — with data centers prominently featured at the very top of that list.
A tremendous amount of good news was baked into the June 30, 2016 share prices — perhaps prematurely, due to record second quarter results.
Read more: Record Second-Quarter Leasing Fuels Data Center Land Rush
Then in early July, fears swirling around the Brexit vote took US REIT shares even higher.
Post-Brexit Price Boost
Data center REITs, which had already been trading at frothy multiples, had prices kicked up another notch because they are levered to exponential data growth and are less dependent upon GDP, jobs growth, and consumer confidence than more traditional REIT sectors.
This defensive aspect of US-based data center REITs has made them even more sought after by investors post-Brexit, including global connectivity leader Equinix and industry blue-chip Digital Realty, which both own significant UK and European assets.
Read more: Report: Data Center Market Trends ‘Strong Demand, Smart Growth’
By July 10, some data center REITs were priced higher than 26x earnings (FFO or AFFO is the earnings measure used for REITs), versus a more normal trading range of 12x to 18x core FFO per share. This last leg up resulted in the entire sector trading at all-time highs.
Tale Of The Tape – Prior To Election
Fast-forward to November 4, just two trading days until the election results are known.
All six data center REITs have now reported decent-to-good Q3 2016 earnings. However, due to the rising wave of fear, good news is no longer good enough to sustain high multiples. It would have taken phenomenal news to overcome investor concerns regarding interest rates and the upcoming election.
In the face of uncertainty, data center REIT prices have pulled back and Price to FFO per share multiples have compressed. In turn, this has created a better value proposition for investors looking to initiate or add to an existing position.

[Chart. Source: YChart created by author]
The “growthy” data center REITs continue to outperform the broader REIT sector, but data centers are only up 15.6 percent on average year-to-date. However, that still beats the stuffing out of the S&P 500, NASDAQ 100, and Dow 30 indices, which are up 2.34, 1.64 and 2.75 percent, respectively.
Here is the sector at a glance as of Nov. 3, 2016 by market cap, Price to FFO, and yield (a quick sketch of how the P/FFO and yield figures are computed follows the list):
- Equinix, Inc. (EQIX) – $24.1B, 23.25x FFO, 2.05% yield.
- Digital Realty (DLR) – $14.3B, 15.0x FFO, 3.97% yield.
- CoreSite Realty (COR) – $3.6B, 17.4x FFO, 3.00% yield.
- CyrusOne (CONE) – $3.5B, 15.0x FFO, 3.61% yield.
- DuPont Fabros (DFT) – $3.5B, 12.8x FFO, 4.81% yield.
- QTS Realty (QTS) – $2.1B, 16.0x FFO, 3.28% yield.
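For readers less familiar with REIT metrics, the following is a minimal sketch of how the two valuation figures above are computed. The share price, FFO per share, and dividend numbers in the example are hypothetical placeholders, not actual figures for any of the companies listed.

```python
# Minimal sketch of the two REIT valuation metrics cited above.
# The inputs below are hypothetical placeholders, not actual company figures.

def price_to_ffo(share_price: float, ffo_per_share: float) -> float:
    """Price-to-FFO multiple: the REIT analogue of a P/E ratio."""
    return share_price / ffo_per_share

def dividend_yield(annual_dividend: float, share_price: float) -> float:
    """Dividend yield as a percentage of the current share price."""
    return 100.0 * annual_dividend / share_price

# Example: a hypothetical REIT trading at $100 with $6.25 of core FFO per share
# and a $3.50 annualized dividend per share.
print(f"P/FFO: {price_to_ffo(100.00, 6.25):.1f}x")    # 16.0x
print(f"Yield: {dividend_yield(3.50, 100.00):.2f}%")  # 3.50%
```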
Notably, investors are still expecting faster growth from interconnection-focused Equinix and CoreSite. CoreSite also retains a bit of an investor halo from being the top-performing REIT in 2015, when it delivered a total return of 50 percent.
Why Good News Doesn’t Matter
I read the Q3 earnings prints, looked through the presentations, and listened to the earnings calls. While some were better than others, none were even close to being bad enough to explain the rout in the market. The recent steep declines were predominantly about valuation.
Frankly, there is little to be gained from looking closely at the operating results this quarter, because record lease signings, mid-teen ROIC on newly deployed capital, and robust future deal pipelines are simply falling on deaf ears.
Read more: CyrusOne Q3 Earnings – Trick or Treat?
Hopefully, after the election results are tallied and digested, FY 2016 results and 2017 guidance will be what investors focus upon, once again.
Investor Takeaway
Right now Mr. Market is viewing the entire equity REIT sector through gloom-and-doom colored glasses.
Data center REITs have simply pulled back to more realistic valuations. They are certainly still not “cheap” by traditional REIT standards, nor should they be. However, data center REIT investors still have a unique challenge to deal with next year. The success enjoyed during the past few quarters has created much tougher year-over-year comparisons for 2017.
This is compounded by the nagging concern that nobody knows how long the hyperscale cloud land grab will last.
Read more: Cloud Fuels Unprecedented Data Center Boom in Northern Virginia
Demand for space in Silicon Valley, Chicago, the Pacific Northwest, and especially Northern Virginia has totally blown away historical averages. Nobody in the industry has a crystal ball to determine if this is a one-off, or the new normal.
However, the sky is certainly not falling. Not even close. Data continues to grow at exponential rates, driven by wireless data, streaming video, big data, cloud computing, and a still-nascent Internet of Things (IoT).
Technology sector investors are better able to understand the strong secular trends behind the recent acceleration in data center leasing. However, it is more challenging to educate “nervous REIT investors” who draw on experience with other sectors and past real estate cycles.
Notably, when there is a sea of red on Wall Street, it can be an excellent time to buy. Legendary investor Warren Buffett is often quoted as a reminder to investors: “Be greedy when others are fearful, and fearful when others are greedy.”

2:00p
Deconstructing VDI – The Trend Toward the Shrinking Stack
Karen Gondoly is CEO of Leostream Corporation.
You may not know this about me, but I used to be a pastry chef. Bear with me, I swear this becomes relevant.
Years ago, after studying rocket science at MIT but before my rise to CEO at Leostream, I toiled away in various kitchens around Boston. From my modest beginnings as a lowly (overworked and underpaid) pastry cook, I rose to a position as the head (overworked and underpaid) pastry chef at a very nice seafood restaurant here in town. (No, not Legal Sea Foods.)
Through all those years, I never caught on to the trend of deconstructing desserts. Part of the reason is simply that I lack the artistic acumen. Part of me wondered what the point was. If I want a s’more, it can look like a s’more.
Well, all these years later, I’m finally embracing the “deconstructing trend”, just not in desserts. Now, I’m applying it to VDI. No, really, here it goes!
The Original Full-Stack S’more of VDI
For years, the key players in the VDI market sold full-stack solutions that included hypervisors to host virtual desktops, connection brokers to handle assignments, display protocols to connect users to their desktops, security gateways to tunnel users into the network, and a host of other components geared at making VDI a roaring success.
The problem? Those full-stack solutions come at a high cost that limits your ROI. They lock you into certain workflows, which may or may not match your business use cases. And they generally don’t future-proof your data center; instead, they make it more difficult for you to try new technologies that come to market. Those three factors all benefit the virtualization vendor (they make more up-front money, develop fewer features, and keep you paying for support), at the expense of your budget and IT department.
VDI Deconstructed, Part 1 – Your Resources Can Be Anywhere
I realize IT doesn’t want to reconstruct a deconstructed solution from a long list of vendors, which is one reason full-stack solutions seem attractive. But, deconstructing VDI isn’t about separating each and every component. It’s about artistically, realistically, and technically separating the components that make sense. So, what makes sense?
IT is always looking for ways to improve business processes, lower cost, and work more efficiently. Virtualization technology provided those things in spades for the server world. Virtualization improved the utilization of the data center and servers, and turned tasks that took days into something that could be done in hours.
Now software-defined data centers, clouds, and hyperconverged hardware are simplifying deployments one step further, bundling compute, storage, and networking resources into easy-to-deploy (or already-deployed-in-someone-else’s-data center) solutions that can host any of your virtual workloads.
And that is deconstructed VDI. It takes the resource layer out of the VDI stack, allowing you to mix and match hosting environments to best meet your needs. Go ahead and place some virtual desktops in AWS, run some RDS sessions in Azure, build a private OpenStack cloud, and wrap it all together with the vSphere servers already in your data center. The key to deconstructed VDI is that you bring all those pieces together to form a single, coherent system. How do you do that? Well, with a smaller VDI stack.
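As a rough illustration of what mixing and matching hosting environments can look like, here is a minimal Python sketch of a resource layer spanning several platforms. The pool names and capacities are invented for illustration and do not reflect any vendor's actual API.

```python
# Hypothetical sketch of a deconstructed resource layer: desktop capacity
# spread across several hosting platforms, tracked in one place.
from dataclasses import dataclass

@dataclass
class DesktopPool:
    name: str       # e.g. "aws-us-east-vdi" (illustrative name)
    platform: str   # "aws", "azure", "openstack", "vsphere", ...
    capacity: int   # maximum desktops this pool may host
    in_use: int = 0

    def available(self) -> int:
        return self.capacity - self.in_use

# The resource layer is simply the collection of pools, wherever they live.
resource_layer = [
    DesktopPool("aws-us-east-vdi", "aws", capacity=200),
    DesktopPool("azure-rds-sessions", "azure", capacity=150),
    DesktopPool("private-openstack", "openstack", capacity=100),
    DesktopPool("onprem-vsphere", "vsphere", capacity=80),
]

total_free = sum(pool.available() for pool in resource_layer)
print(f"Desktops available across all platforms: {total_free}")
```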
VDI Deconstructed, Part 2 – Your Stack Just Got Smaller
A VDI deployment is not made up of the resource layer, alone. At its simplest, you also need to consider two, maybe three, additional components.
First, the connection broker. The connection broker is the brains of the system, ideally managing the capacity in your resource layer (automatically provisioning and terminating virtual machines, as required by your business) and managing user assignments and connections to those resources. Your connection broker should provide the flexibility to use any system to host your resources, including any hypervisor, hyperconverged system, cloud, you name it. And, it should support any display protocol you need. Which brings us to component number two.
You need a display protocol to connect a user’s chosen client device to their remote desktop. Display protocols come in many shapes and sizes, from built-in Microsoft RDP to high-performance HP Remote Graphics Software (RGS). Which protocol you use depends on the types of tasks your users perform. If you have task workers running applications with low graphics loads, then RDP is likely fine. If you have remote workers who need to use a CAD application, you probably need to investigate a protocol with better performance.
That remote workforce leads us to the third component, a gateway. Remote users need a way to tunnel into the network that hosts their desktops. Additionally, if you plan to use a public cloud like AWS or create desktops in OpenStack, the desktops may be on a private network that even users on your LAN must tunnel into.
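To make the smaller stack concrete, the sketch below walks through the decision a connection broker makes when a user logs in: find a pool with capacity, pick a display protocol suited to the workload, and decide whether the session must pass through a gateway. The function names, pool data, and fields are hypothetical; real brokers expose far richer policies than this.

```python
# Hypothetical connection-broker logic: assign a desktop, pick a display
# protocol, and decide whether the session needs a gateway. All names and
# numbers below are illustrative, not any product's real API.
from typing import Optional

POOLS = [  # a toy view of the resource layer: where desktops can be hosted
    {"name": "onprem-vsphere", "platform": "vsphere", "free": 0},
    {"name": "aws-us-east-vdi", "platform": "aws", "free": 12},
]

def choose_protocol(workload: str) -> str:
    """Task workers are fine on RDP; graphics-heavy users need a faster protocol."""
    return "HP RGS" if workload == "graphics" else "RDP"

def broker_connection(user: str, workload: str, on_lan: bool) -> Optional[dict]:
    """Pick the first pool with capacity and describe how the user reaches it."""
    for pool in POOLS:
        if pool["free"] > 0:
            pool["free"] -= 1  # a real broker would provision or power on a VM here
            # Cloud-hosted desktops, and any off-LAN user, get tunneled via a gateway.
            needs_gateway = (not on_lan) or pool["platform"] in ("aws", "azure", "openstack")
            return {"user": user, "pool": pool["name"],
                    "protocol": choose_protocol(workload),
                    "via_gateway": needs_gateway}
    return None  # no capacity anywhere: a real broker would queue or scale out

print(broker_connection("kgondoly", "graphics", on_lan=False))
```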
I had to leave the pastry business to finally find a deconstruction I can support, and it’s deconstructed VDI. It allows you to keep your options open when it comes to your resource layer, and even change where you host your resources over time. It narrows down your VDI stack to a connection broker, display protocol, and gateway. Ultimately, it makes IT more flexible, your data center more future-proof, and your end users more productive. How’s that for having your dessert and eating it too?
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

8:09p
China Adopts Cybersecurity Law Despite Foreign Opposition
(Bloomberg) — China has green-lit a sweeping and controversial law that may grant Beijing unprecedented access to foreign companies’ technology and hamstring their operations in the world’s second-largest economy. The Cyber Security Law was passed by the Standing Committee of the National People’s Congress, China’s top legislature, and will take effect in June, government officials said Monday. Among other things, it requires internet operators to cooperate with investigations involving crime and national security, and imposes mandatory testing and certification of computer equipment. Companies must also give government investigators full access to their data if wrongdoing is suspected.
China’s grown increasingly aggressive about safeguarding its IT systems in the wake of Edward Snowden’s revelations about U.S. spying, and is intent on policing cyberspace as public discourse shifts to online forums such as Tencent Holdings Ltd.’s WeChat. The fear among foreign companies is that requirements to store data locally and employ only technology deemed “secure” mean local firms gain yet another edge over foreign rivals from Microsoft Corp. to Cisco Systems Inc.
“This is a step backwards for innovation in China that won’t do much to improve security,” James Zimmerman, chairman of the American Chamber of Commerce in China, said in an e-mailed statement after the law was passed. “The Chinese government is right in wanting to ensure the security of digital systems and information here, but this law doesn’t achieve that. What it does do is create barriers to trade and innovation.”
The decision on cybersecurity was revealed along with a raft of other announcements, including a ruling that barred a pair of elected Hong Kong localists from office and the surprise replacement of veteran official Lou Jiwei as finance minister.
Companies operating on Chinese soil rarely raise public objections to domestic policy for fear of repercussions, but much is at stake in a Chinese IT market Gartner puts at $340 billion. The draft law prompted more than 40 business groups from the U.S., Europe and Japan to pen a letter to Premier Li Keqiang this summer, arguing it would impede foreign entry and the country’s own growth. Parallel legislation governing the use of data for the insurance industry has also provoked objections.
The measures are part of a sweeping push under President Xi Jinping to control China’s internet, including the passage of a security law establishing “cybersovereignty” and making the spread of rumors and defamatory posts a crime.
“The law fits international trade protocol and its purpose is to safeguard national security,” said Zhao Zeliang, director-general of the bureau of cybersecurity for the Cyberspace Administration of China. “China’s cybersecurity requirements are not being used as a trade barrier.”
China’s campaign to safeguard its infrastructure echoes post-Snowden efforts in Europe and elsewhere. The difference lies in how the vague language affords regulators leeway to expand their scope if needed, critics say. And it’s not just technology providers who are concerned, but also any company that relies on foreign systems to run its business there. Broad or vague language casts uncertainty over the steps required for compliance, for starters, said Xiaoyan Zhang, an attorney with Mayer Brown LLP in Shanghai.
The requirement on certification could mean technology companies will be asked to provide source code, encryption or other critical intellectual property for review by security authorities. This is something Microsoft already does with its software, under controlled conditions.
The law also requires business info and data on Chinese citizens gathered within the country to be kept on domestic servers and not be transferred abroad without permission. That last condition hampers the operations of multinationals accustomed to a global Internet computing environment.
“A number of IT companies have really serious concerns. We don’t want to see barriers put up,” U.S. Deputy Secretary of Commerce Bruce Andrews told reporters during an October visit to Beijing. “Cross-border data flow has become increasingly important to trade and to companies in the way they operate every day.”
Some foreign companies may have already begun to ring-fence their Chinese data. Last week, Airbnb sent an e-mail informing its Chinese users that their personal data will be transferred to servers within the country, “in accordance with Chinese laws and regulations.” It’s not clear if the move was in anticipation of the cybersecurity law. Airbnb didn’t respond to requests for comment.
The law may drive further business to local giants such as Huawei Technologies Co. or Lenovo Group Ltd., the world’s largest PC maker. Alibaba Group Holding Ltd.’s paying cloud customers had already doubled in the September quarter. Alibaba said in an e-mail it will ensure it’s compliant with relevant laws.
Not all see it this way. Advocates say the government will issue future regulations to clarify its scope and intent.
“The new law is to protect China’s cyber security and will not damage the interests and the normal operations of foreign companies,” said Ma Minhu, director of the Information Security Laws Research Center of Xi’an Jiaotong University.
8:19p
Tips and Best Practices for Securing your Cloud Initiative
Brought to you by The WHIR
As organizational IT data centers adopt cloud technologies, they quickly begin to see the benefits of this type of distributed computing. Users are now able to access their applications or corporate desktops from any device, anytime and anywhere. But it’s not just about apps and desktops. New types of cloud services are revolutionizing user experiences and rich content delivery.
This seamless access creates a more sustainable environment and gives end-users a better computing experience.
Still, security will almost always be one of the biggest concerns of the IT business sector. Experience has shown that no matter what technology or platform is implemented, securing that environment is a top priority. As data centers push their cloud infrastructure even further with solutions that include identity federation and single sign-on, the clear challenge and question becomes: How do we secure our cloud initiative?
Understanding and working with cloud security best practices
When analyzing the security concerns associated with cloud computing, an organization may sometimes come up with a long list of challenges.
When evaluating a cloud solution, whether it is private or public, IT administrators must conduct their due diligence in researching their technology and making sure their environment is ready for such a step. Below are some industry tips and best practices when cloud security comes into the equation.
- Plan strategically
- Since every environment is unique, very careful consideration must be given to how the corporate workloads are to be delivered to the end-user. By designing a solution from the start that embraces security, an organization will already be one step ahead in its cloud initiative. Taking a secure approach from the initial phase creates a solid foundation for entering into the cloud. By starting with security first, compliance-conscious organizations are able to deploy both a resilient and audit-ready environment.
- Pick a partner wisely
- Your partner’s ability to protect sensitive cloud-based data will be crucial. There are many cloud providers to choose from. Some will offer private cloud solutions, while others will offer a combination of a public/hybrid cloud deployment. When evaluating a partner that will be set to deliver corporate IT services via the cloud, make sure that partner has a foundation and heritage in both IT and security services. Verify that cloud-ready risk mitigation is part of the provider’s common security practice. Evaluate a partner that has proven experience integrating IT, security, and network services, as well as providing robust and strategic service-performance assurances.
- Identity Management
- Every enterprise environment will likely have some sort of identity management system. This is to control user access to corporate data and computing resources. When looking to move to the cloud, identity management quickly becomes a security concern. One of the last things an IT administrator would want is a user who is forced to remember several sets of credentials. Cloud providers must either integrate the customer’s identity management system into their own infrastructure, using identity federation or single sign-on technology, or provide an identity management solution of their own. Without that, some environments have seen what is known as identity pools, where users have multiple sets of authoritative credentials they must use to access common workloads.
- Protecting corporate data
- For an IT organization to be considered protected, data from one end-user must be properly segmented from that of another. That means that “data at rest” must be stored securely and “data in motion” must be able to securely move from one location to another without interruption. Good cloud partners have solutions like this in place to prevent data leaks or access by unauthorized third parties. As such, it’s important to clearly define roles and responsibilities to ensure that auditing, monitoring and testing cannot be circumvented even by privileged users unless otherwise authorized.
- Develop an active monitoring solution
- Just like information within a data center, data in the cloud must be continuously monitored. If an IT manager needs live data to be pulled from a cloud environment, they must leverage an active monitoring solution (a minimal sketch follows this list). Performance bottlenecks, system instabilities or other issues must be caught proactively to avoid any outages in services. Failure to constantly monitor the health of a cloud environment will result in poor performance, possible data leaks and, sometimes worst of all, an angry end-user. Organizations which are ready for the cloud must plan accordingly as to the monitoring and intervals required based on their data content. From there, it’s advised they implement manual or automated procedures to respond to related events that may occur in their cloud environment.
- Test regularly and establish environmental metrics
- Whether deploying your own private cloud or using a cloud-ready partner, always make sure to test and regularly maintain your environment. When looking at a service provider, make sure they offer a solid Service Level Agreement (SLA) that should include metrics like: availability, notification of a breach, outage notification, service restoration, average time to resolve, and so on. Both in a provider relationship and in a private cloud solution, regular proactive testing should occur. By keeping an environment healthy and tested, we remove quite a bit of risk associated with security or inadvertent data leaks.
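To give a flavor of what active monitoring can mean at its simplest, here is a hedged sketch of a polling health check. The endpoint URLs, latency budget, and interval are placeholders, and a production deployment would rely on a dedicated monitoring platform rather than a standalone script like this.

```python
# Minimal sketch of active monitoring: poll cloud service endpoints on an
# interval and flag anything slow or unreachable. URLs are placeholders.
import time
import urllib.request

ENDPOINTS = {
    "desktop-portal": "https://vdi.example.com/health",
    "storage-api": "https://storage.example.com/health",
}
LATENCY_BUDGET_S = 2.0   # anything slower counts as degraded
POLL_INTERVAL_S = 60

def check(name: str, url: str) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=LATENCY_BUDGET_S) as resp:
            elapsed = time.monotonic() - start
            if resp.status != 200 or elapsed > LATENCY_BUDGET_S:
                print(f"ALERT {name}: status={resp.status} latency={elapsed:.2f}s")
    except Exception as exc:  # timeouts, DNS failures, connection resets...
        print(f"ALERT {name}: unreachable ({exc})")

if __name__ == "__main__":
    while True:
        for name, url in ENDPOINTS.items():
            check(name, url)
        time.sleep(POLL_INTERVAL_S)
```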
Never forget the basics
Since security is always a concern for a conventional data center, it should very much be a top priority in any cloud initiative as well. Third-party organizations, such as the Cloud Security Alliance, regularly publish advice for securing a cloud deployment.
Always try to remember the following for securing SaaS, PaaS and IaaS environments:
- Strong authentication methods are always recommended. Two-factor and even certificate-based authentication methods can be great (a minimal two-factor sketch follows this list). Remember, depending on the risk level of the services being offered, your security architecture will need to match those requirements.
- You must be able to manage user access across the board. User privileges will absolutely vary and you especially need to control the administration of privileged users for all supported authentication methods.
- Incorporate self-service and identity validation. You can deploy powerful tools which analyze lost and orphaned accounts across onsite and remote locations. And, they’ll look at admin accounts as well. You can allow users to request new services and even modify their own permissions (where it makes sense). The key is managing these permissions and creating user controls.
- We’re beyond just enforcing strong passwords, even though that’s still important. Now, new technologies allow for deep interrogation of users, locations, devices, and even specific resource access points. Either way, ensure your users have secure methods of entry depending on the devices they’re using.
- Identity management and federation can help out a lot. For example, federated services can be a means of delegating authentication to the organization that uses the SaaS application. Or, you can tie separate services using federation services to reduce authentication challenges. These are great ways to manage user identities in one spot.
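As one concrete example of the strong-authentication point above, here is a minimal sketch of verifying a time-based one-time password (TOTP, per RFC 6238) as a second factor. The shared secret below is a placeholder, and a real deployment should use a vetted library and hardened secret storage.

```python
# Minimal TOTP (RFC 6238) verification sketch for a second authentication
# factor. The shared secret is a placeholder; use vetted libraries and
# protected secret storage in production.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time: float, step: int = 30, digits: int = 6) -> str:
    """Compute the TOTP code for a base32 secret at a given time."""
    key = base64.b32decode(secret_b32)
    counter = int(for_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_second_factor(secret_b32: str, submitted_code: str) -> bool:
    """Accept the current 30-second window plus one on either side for clock drift."""
    now = time.time()
    candidates = {totp(secret_b32, now + drift) for drift in (-30, 0, 30)}
    return submitted_code in candidates

SECRET = "JBSWY3DPEHPK3PXP"  # placeholder base32 secret, not a real credential
print(verify_second_factor(SECRET, totp(SECRET, time.time())))  # True
```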
As more data centers are pushed into the cloud, security will play an even greater role in maintaining data integrity. Even though the technology is still new, cloud computing offers great benefits to those environments prepared to make the investment. Remember to make wise and well-researched decisions when evaluating cloud data center security options.

9:36p
Chinese Bitcoin Firm Says It’s Building a 135 MW Powerhouse
A data center facility that draws close to 140 megawatts of power had better be housing some very important tenants with some world-shaking projects. In an announcement Saturday, a company called Bitmain Technologies Ltd. said, if the weather holds out, it could complete construction on a colossal (judging from the pictures) 45-building solar-powered data center complex in China’s Xinjiang autonomous region.
Its “major application,” according to the announcement, will be the “mining” of Bitcoin’s virtual currency — a process originally designed to be distributed among Bitcoin stakeholders worldwide through a P2P network. Mining involves a brute-force search for the values that make each new block of transactions hash to a result meeting the network’s difficulty target; there is no analytical shortcut to this proof-of-work, so the only way to mine faster is to apply more computing power.
Not only will this data center’s primary application be self-serving, but much — if not all — of its hardware will be dedicated to the task. Bitmain is the manufacturer of a custom form-factor line of Bitcoin mining-dedicated hardware components called Antminer. Its “home version,” which looks like an outdoor mosquito trap laid on its side and plugged into a 220V power supply using exposed wiring, is said to deliver 2600W of peak power output. Bitmain says each Antminer consumes 845W, making it into one hungry little beast.
Assuming some form of Antminer is used to power this mining operation, we used an ancient mining device called a “calculator” to perform some math, in an effort to fill in the details Bitmain omitted.
A 135 MW complex, operating at full capacity, would provide enough power to fuel 159,763 Antminers. With 45 buildings in the complex (which, from the 3D models, looks somewhat like a solar-powered POW camp), that’s about 3,550 Antminers per building.
Each stripped-down Antminer has an awkward form factor, but a tray that’s 5U tall should still seat eight units side by side. You could conceivably pack 64 Antminers into one rack and leave room for a 2U power supply... if a standard 2U supply could feed them all. A full-size 2U UPS probably delivers no more than 3,200W, for a ratio of one UPS for every three Antminers.
Assuming Bitmain has done its best to optimize each Antminer’s power consumption, it might be able to squeeze eight Ants into a kind of “module.” A full 5U of that module would house the compute units, and you’d need 4U for about 6,400W of power. Each module being 9U tall, you could fit five of them into a 45U rack.
With 3,550 Antminers to distribute throughout each building at 40 per rack, that gives you 89 racks per building. For racks alone, you’d need 2,670 square feet per building, or 120,150 square feet total. That’s not counting the additional space consumption for power distribution, so for safety, let’s round it up to 135,000 square feet. That would give you a power density of about 1 kW/sq. ft.
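For anyone who wants to retrace the back-of-the-envelope math above, here is the same arithmetic in a short script. It bakes in the article's working assumptions: 845W per Antminer, 40 units per rack, and roughly 30 square feet of floor space per rack, none of which come from Bitmain.

```python
# Back-of-the-envelope math from the article, with its assumptions spelled out.
SITE_POWER_W = 135_000_000     # 135 MW complex
WATTS_PER_ANTMINER = 845       # Bitmain's stated draw per unit
BUILDINGS = 45
UNITS_PER_RACK = 40            # five 8-unit "modules" per 45U rack (assumed)
SQFT_PER_RACK = 30             # assumed footprint per rack, including aisle space

antminers = SITE_POWER_W // WATTS_PER_ANTMINER            # ~159,763 units
per_building = round(antminers / BUILDINGS)               # ~3,550 units
racks_per_building = -(-per_building // UNITS_PER_RACK)   # ceiling -> 89 racks
sqft_per_building = racks_per_building * SQFT_PER_RACK    # 2,670 sq ft
total_sqft = sqft_per_building * BUILDINGS                # 120,150 sq ft
padded_sqft = 135_000                                     # article's allowance for power gear
density_kw_per_sqft = SITE_POWER_W / 1000 / padded_sqft   # ~1.0 kW/sq ft

print(antminers, per_building, racks_per_building, total_sqft,
      round(density_kw_per_sqft, 2))
```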
Compare this against the real world where money is made of cash. Switch Communications’ state-of-the-art, 407,000 square foot SuperNAP facility in Las Vegas boasts the capability to deliver power densities of 1.5 kW/sq. ft. So Bitmain’s design, even as power hungry as an Antminer is, may be well within reason.
Among SuperNAP’s tenants is online auction giant eBay. Bitmain’s sole tenant — or so it would appear, for now — is itself.
A June 2016 New York Times article by journalist Nathaniel Popper introduced us to Jihan Wu, the founder of Bitmain. In a virtual currency space whose value is determined in large measure by how many individuals may be “mining” for value (racing to find the proof-of-work solutions that confirm new blocks of transactions), Wu capitalized by creating a system called Antpool. Think of it like a mutual fund in the virtual space, where investors pool their resources, and in so doing pool their mining capacities.
Now you get an idea of what the Xinjiang complex seeks to become: a kind of super-drill for finding proof-of-work solutions as expeditiously as possible. The more solutions it finds, the more virtual currency is produced as its reward.
Popper’s story painted a picture of the Bitcoin ecosystem as a whole as having been suddenly driven away from diversity and wide distribution, and toward consolidation around a handful of China-based companies. Photographs depicted what appear to be the husks of abandoned barns and factory houses commandeered for use as server farms.
Which would make Bitmain’s ambitious plans, at least from one angle, look almost like progress.