Data Center Knowledge | News and analysis for the data center industry
Monday, July 11th, 2016
12:00p
Another Huge Quarter for Data Center REITs: What’s Next?
Data center REIT shares spiked 19 percent during the first three months of 2016, buoyed by record leasing results that fueled high investor expectations.
While it seemed highly unlikely that data centers could top that quarter’s performance, the second quarter proved even stronger, with the data center sector gaining about 50 percent on average through June 30, 2016.

Subsequent to the end of the quarter, all six data center REITs continued to trade at new 52-week highs, including Equinix and Digital Realty Trust.
Despite having significant UK and European data center operations, and recent large European acquisitions, both Equinix and Digital have proven to be “Brexit-resistant.”
Read more: Brexit: Keep Calm and Hold Onto Data Center REITs
This is in stark contrast to the negative post-referendum view of REITs and funds which own traditional office, retail, and industrial real estate located in the UK.
Will the Cloud Land Grab Continue?
The record leasing during Q4 2015 and Q1 2016 was driven in large part by unprecedented demand by the hyperscale cloud providers.
Read more: How Long Will the Cloud Data Center Land Grab Last?
After Q1 2016, North American Data Centers (NADC) published a report highlighting how large cloud service providers drove leasing success for data center REITs in major markets.
The question on the minds of most investors and many data center executives is: What will a normalized leasing picture look like? However, trying to read the tea leaves to determine the “new normal” has been complicated by the uncertainty following the Brexit vote.
See also: Bracing for Brexit: How Your IT Department Can Prepare for the Coming Changes
Issues such as data portability, work permits for IT personnel, and vendor and supplier relationships have the potential to complicate the picture. Investors must also consider a hotly contested US presidential election that may put existing trade agreements at risk.
This may lead many corporations to delay or curtail CapEx spending during the second half of the year.

Source: IDC
However, a potential silver lining for data center REITs might be the appeal of third-party cloud solutions, which offer flexible pricing plans and are treated as operating expenses.
The IT spending trends forecast by IDC continue to favor public and private cloud deployments, and recent headlines appear to be providing more fuel for that fire.
Q2 Earnings Calls Will Be Crucial
The challenge for data center REITs will be turning the uncertainty confronting customers into signed leases.
As a practical matter, the upcoming Q2 earnings releases and conference calls will be the last opportunity to revise guidance or update investors on leasing that will impact full-year 2016 earnings.
Data center REITs with existing shell space available for expansion are able to deliver space in a matter of months, offering a just-in-time solution for CIOs. Therefore, new lease signings announced during the second quarter of 2016 can still boost results for the second half of the year.
Potential Build-to-Suits
Another impact of the cloud land grab being felt in many markets is a lack of inventory for large-scale deployments. This has created a pipeline of build-to-suits on a scale that is unprecedented.
Read more: CoreSite Shares Spike as Cloud Data Center Leasing Accelerates
Build-to-suits give data center landlords earnings visibility that can result in analysts raising earnings estimates for 2017 and beyond. A large build-to-suit requirement can also help seed new phases of existing campuses, as well as provide a catalyst to expand into new geographic markets to serve existing clients.
NADC’s Jim Kerrigan has just released the July 2016 leasing report, which highlights how Microsoft and Oracle are continuing the momentum of hyperscale public cloud providers in the large Northern Virginia data center market.
Read more: Report: Microsoft and Oracle Gobble Up Data Center Space in Virginia
Things are heating up in Chicago for the second half of 2016, with the July opening of the QTS campus in a market that has been supply-constrained. According to Kerrigan’s report, Microsoft may be looking at the Chicago market for an extremely large deployment as well.
Investor Takeaway
When it comes to investing, although the trend is your friend, there is no time to rest on your laurels. If the cloud leasing momentum were to slow, data center REIT price momentum would likely falter as well.
One welcome byproduct of the burst of cloud data center leasing during the past two quarters is the visibility of booked-not-billed space that will be delivered in coming months. However, given the huge run-up in data center REIT shares during the first half of 2016, this good news is already baked into prices continually flirting with all-time highs.
Investors and analysts will be listening eagerly to upcoming earnings calls for clues regarding lease signings and other catalysts not currently reflected in earnings models. Despite all the recent success, management will be under pressure to answer the age-old question: What have you done lately?

3:00p
Oracle’s Jordan, Utah, Data Center … Very Cool
Brought to you by AFCOM
Two years ago, the U.S. Environmental Protection Agency (EPA) and ASHRAE acknowledged Oracle as a leader in data center energy conservation. Shortly after, DatacenterDynamics gave the company an award to recognize its “breakthrough innovations for large data centers in cold climate.”
Both honors were largely a result of Oracle’s approach to cooling its 30,000-square-foot data center in Jordan, Utah, using air-side economization. A three-person panel will present a case study on its design at Data Center World in New Orleans Sept. 12-15.
“The innovation centers on the use of waste heat from the IT equipment for space humidification in the winter; evaporative cooling in summer; reduced primary airflow to the IT equipment; strategic hot air separation with recirculation; and novel controls that enable the data center to achieve very high cooling efficiency at a lower initial investment cost,” according to Oracle’s website.
The premise of economization is that, when ambient conditions are favorable, outside air is filtered and ducted directly into the data center under automatic control. “Free cooling” refers to the periods when outside air is both cool and dry enough to supply the data center without additional mechanical cooling. In colder climates, some of the warm air expelled by the IT equipment may be recirculated so that the supply air is not too cold.
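To make that decision logic concrete, here is a minimal sketch of how an economizer controller might choose a cooling mode from outside-air conditions. The thresholds, mode names, and function are illustrative assumptions, not Oracle’s actual control design.

```python
# Illustrative sketch only -- hypothetical thresholds, not Oracle's actual
# control design. Real systems follow ASHRAE allowable ranges and site specifics.

def select_cooling_mode(outside_temp_c: float, outside_dewpoint_c: float,
                        supply_min_c: float = 18.0, supply_max_c: float = 27.0,
                        dewpoint_max_c: float = 15.0) -> str:
    """Pick a cooling mode from outside-air temperature and dew point."""
    if outside_temp_c < supply_min_c:
        # Too cold for direct supply: mix in warm exhaust air to temper it.
        return "free_cooling_with_recirculation"
    if outside_temp_c <= supply_max_c and outside_dewpoint_c <= dewpoint_max_c:
        # Cool and dry enough to supply the data center directly.
        return "free_cooling"
    if outside_dewpoint_c <= dewpoint_max_c:
        # Warm but dry: evaporative cooling can bring the air into range.
        return "evaporative_cooling"
    # Too warm and too humid: fall back to mechanical (compressor) cooling.
    return "mechanical_cooling"


if __name__ == "__main__":
    print(select_cooling_mode(5.0, 1.0))    # winter: recirculation
    print(select_cooling_mode(22.0, 10.0))  # mild: direct free cooling
    print(select_cooling_mode(33.0, 12.0))  # hot and dry: evaporative cooling
```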
With efficient operations and a low operating Power Usage Effectiveness (PUE), the cooling system is expected to save over 41,000 MWh a year and avoid the equivalent of 37,000 metric tons of carbon dioxide annually compared to a data center of average efficiency, according to Oracle. The data center is supported by a 95,000-square-foot structure housing infrastructure equipment and 44,000 square feet of office space.
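For context, PUE is the ratio of total facility energy to IT equipment energy, so savings of this magnitude follow from running a large IT load at a PUE well below the industry average. The sketch below uses hypothetical numbers (not Oracle’s disclosed figures) to show how the arithmetic works.

```python
# Back-of-the-envelope sketch with hypothetical numbers (not Oracle's disclosed
# figures), showing how a lower PUE translates into annual energy savings.

HOURS_PER_YEAR = 8760.0

def annual_facility_mwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy for a year: IT load scaled by PUE."""
    return it_load_mw * pue * HOURS_PER_YEAR

it_load_mw = 6.0       # assumed continuous IT load
efficient_pue = 1.15   # assumed highly efficient facility
baseline_pue = 1.9     # assumed facility of average efficiency

savings_mwh = (annual_facility_mwh(it_load_mw, baseline_pue)
               - annual_facility_mwh(it_load_mw, efficient_pue))
print(f"Estimated annual savings: {savings_mwh:,.0f} MWh")
```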
Oracle calls the facility “an object lesson in dematerializing the data center, eliminating unnecessary hardware, and leveraging advances in modern design and technology, particularly in electrical infrastructure. The end result: a leading edge data center driving innovation, efficiency, reliability and simplicity.”
Oracle Project Cell 2.1 – A Critical Space Journey: Innovation, Efficiency and Simplicity (Panel – Case Study) will be presented on Thursday, Sept. 15, from 8 a.m. to 9 a.m.
Register for Data Center World today! All AFCOM members will receive a $300 discount.
This first ran at http://afcom.com/Public/Resource_Center/Articles_Public/Oracle’s_Jordan_Utah_Data_Center_Very_Cool

5:01p
HPE Is Said to Consider Selling Some of Its Software Assets
(Bloomberg) — Hewlett Packard Enterprise is considering a sale of some of its software assets as it continues to slim down its operations, according to people familiar with the matter.
The divestitures would come from a portfolio of acquisitions made over roughly the last decade, including Autonomy, Mercury Interactive and Vertica Systems, said the people, who asked not to be identified because the matter is private. A sale process is in the preliminary stages and may not result in any deals, the people said.
CEO Meg Whitman has been pushing the company to reduce its size and become nimbler to help it better take on rivals such as Dell and navigate the changing demands of corporate customers. After HPE split from sister company HP Inc. in November, Whitman announced in May that it will spin off its business-services division and merge it with Computer Sciences Corp. in a deal valued at $8.5 billion for HPE shareholders.
Read more: CSC to Merge With HPE’s Services Unit
Howard Clabo, a spokesman for HPE, declined to comment.
The overall software business has been showing some improvement. While sales declined 13 percent in the quarter ended April 30, they climbed 2 percent in constant currency when adjusted for past divestitures and acquisitions, CFO Timothy Stonesifer said on a call with analysts in May. Operating margins also improved, the company said.
The Autonomy acquisition came under fire soon after the company was purchased for $10.3 billion in 2011. The following year, Hewlett-Packard wrote down $8.8 billion connected to the takeover and said more than $5 billion was the result of accounting practices at the Cambridge, England-based software company. The deal brought legal headaches and is now seen as a key example of an overly aggressive acquisition strategy prior to Whitman’s arrival.
Mercury, which provides tools to measure the effectiveness of clients’ software and technology, was another large acquisition, valued at about $4.5 billion in 2006. It was the largest at the time since the $18.9 billion acquisition of Compaq. HP spent about $350 million on Vertica, a data-analysis company, in 2011.
Whitman left the door open for more divestitures on the conference call in May.
“Over time, we continue to ensure that we’ve got the right set of assets,” she said. “We’re going to continue to optimize the set of assets that we have, but we’re really happy with the current portfolio.”
Read more: Whitman Slims Down HPE, Unwinding ‘IT Supermarket’

7:35p
Google Devs Get to Run Google Infrastructure for Six Months
If you’re a Google software developer working on the company’s products, the company not only wants you to know how its global-scale infrastructure operates, it wants you to run that infrastructure yourself.
In an unusual practice, Google has a program in which engineers who work on product development spend six months on the team that operates its infrastructure, which consists of a global network of company-owned and leased data centers.
Borrowing from NASA, the program is called Mission Control, and its goal is to have more of its engineers understand what it’s like to build and operate a high-reliability service at Google’s massive scale, according to a Monday post on the Google Cloud Platform Blog by one of the engineers who is about to begin his six-month Mission Control stint in Seattle.
When they are embedded on the Google infrastructure team, the engineers find that the people working there speak the same language. As Google has been explaining in conferences in recent years, it doesn’t have sysadmins. The company uses software engineers to design and run the software that operates the infrastructure inside its data centers because it believes they are better at it.
See also: What Cloud and AI Do and Don’t Mean for Google’s Data Center Strategy
“It turns out services run better when people who understand software also run it,” Melissa Binde, Google’s director of Site Reliability Engineering, said during a presentation at the company’s GCP Next conference in San Francisco in March. “They have a deep understanding of what makes it tick; they have a deep understanding of the interactions.”
Their title is Site Reliability Engineer, a concept created by Google to describe a software engineer who designs and runs infrastructure. In a way, SREs are Google’s answer to DevOps, which also seeks to resolve the conflict between the goals of sysadmins and developers in companies with a traditional organizational structure.
The problem with DevOps, Binde said, is that it means different things to different people. Site Reliability Engineering, by contrast, is so precisely defined by Google that the company published a book on it earlier this year, aptly titled Site Reliability Engineering.
See also: Why Google Doesn’t Outsource Data Center Operations
When developers and sysadmins are divided into separate groups, each with its own culture, they are not incentivized to help one another; they’re incentivized to do the opposite.
As Binde put it, developers get “cookies” for releasing new features, while sysadmins get cookies for maintaining uptime. The more frequently new features come out, the harder it is to maintain uptime.
“The sysadmins will get more cookies if they can prevent features from going out, and the developers will get more cookies if they can figure a way around the sysadmin’s rules,” she said.
The opposing camps come up with different ideas, such as calling a new feature “beta,” which often means it can get released faster, without going through a rigorous sysadmin process for testing features before they’re launched in production. Meanwhile, sysadmins demand launch reviews and stretch them out as much as possible to delay deployment. Cumulatively, all these efforts result in stalled progress, or, as Binde put it, long periods of cookie-less sadness for everyone.
Watch Binde’s presentation in full:
9:08p
US IT Sector Adds 32,100 Jobs in June
Brought to you by MSPmentor
The US IT industry added 32,100 jobs last month, paced by positive job growth in all categories except tech manufacturing, according to a CompTIA analysis of federal data.
June’s rebound marked a sharp reversal from May, when the sector shed 28,800 jobs, despite the addition of 7,400 jobs in the IT services category.
Hiring in the IT services sector has remained strong throughout the first half of the year.
“For the first six months of 2016, the IT sector has added 43,900 net new jobs, with the IT services category accounting for 90 percent of those gains,” said Tim Herbert, senior vice president of research and market analysis for CompTIA, the Computing Technology Industry Association.
IT employment across all categories as of June 30 was estimated at 4.39 million, up from 4.35 million at the end of the previous month.
The bulk of May’s job losses were blamed on a telecommunications strike, during which about 35,000 Verizon workers were not on company payrolls.
Those employees returned to work in June, fueling an increase of 28,100 telecom jobs last month.
Hiring gains in IT even outpaced a sharp spike in job creation nationwide.
Overall, the US economy created an estimated 287,000 jobs last month, according to the monthly report from the US Bureau of Labor Statistics.
That figure significantly beat analysts’ estimates and tamped down fears of an economic downturn.
“The IT sector grew at a faster rate than overall national employment,” Herbert said. “Information technology continues to be a vital driver of growth and jobs in the US economy.”
This first ran at http://mspmentor.net/msp-mentor/it-sector-adds-32100-jobs-june