Data Center Knowledge | News and analysis for the data center industry
Friday, April 29th, 2016
12:00p |
Big Cloud Provider Pre-Leases Digital’s Entire First Japan Data Center
Digital Realty has pre-leased the entirety of its first data center in Japan. The anchor tenant that signed the lease is a major hyperscale cloud provider the data center company did not name.
There’s currently a wave of high demand for large chunks of data center space in top markets around the world as the biggest cloud providers race to increase the scale of their infrastructure and win share of the quickly growing enterprise cloud market. This wave has fueled a boom for wholesale data center providers like Digital Realty.
Read more: Report Confirms Large Cloud Providers Drive Q1 Leasing
It’s difficult to deduce which of the hyperscale cloud providers has signed the multi-megawatt lease in Osaka, but the top players in this category are Amazon, Microsoft, and Google, as well as IBM and to a lesser degree Oracle. Some Software-as-a-Service providers, such as Salesforce, could also be considered hyperscale.
Microsoft has had a cloud data center in Osaka since 2014, while Amazon’s physical presence in the market so far has been limited to Direct Connect private network links at an Equinix data center there. Google does not have a data center in Japan, but it is currently on a cloud data center expansion kick, planning to add 12 sites quickly, using both company-built and leased facilities.
Whether or not a company like Microsoft or Google already has a data center in a given market, however, says little about its future plans there.
Digital Realty management mentioned the lease on the company’s first-quarter earnings call Thursday, seemingly eager to demonstrate to analysts that the company is delivering on its promises.
The company announced a land acquisition in Osaka for a data center development in 2013. It paid $10.5 million for the 15,000-square-meter site.
Digital broke ground in Osaka earlier this month, expecting to deliver the data center to the customer next fall.
Digital has also made progress on its plan to enter Germany, the other top priority in its geographic expansion plans. This February, it announced the acquisition of a six-acre parcel in Frankfurt, where it can build a three-building 27MW data center campus.
Company execs said on Thursday’s call that the build-out schedule in Frankfurt will depend on demand, of which there is plenty. Digital is very careful about construction nowadays, usually shying away from building out new data center capacity in any market without a pre-lease with an anchor tenant.
Read more: Digital Realty Takes Foot Off Brake Pedal on Expansion | 4:50p |
IoT Past and Present: The History of IoT, and Where It’s Headed Today
By Talkin’ Cloud
The Internet of Things (IoT) is one of the hottest IT buzzwords of the moment. Yet the term is actually almost two decades old already. If IoT is not actually a new idea, what is the concept’s history? And why is it suddenly trending now? Keep reading for an overview of the history of the Internet of Things, and what makes it a bigger deal today than ever.
Inventing the Internet of Things
Although you probably didn’t hear much about IoT until recently, the terminology dates back to 1999. Kevin Ashton, co-founder of MIT’s Auto-ID Center, is credited by most sources with coining the phrase “Internet of Things.” (The acronym, IoT, appears to be a considerably later innovation; Wikipedia did not start using the abbreviation until 2009, although it has had an entry for the Internet of Things since July 2007.) Once introduced, the term quickly entered widespread use, as this Google Ngram shows. (The result there suggesting the term was used once in 1979 is an anomaly apparently caused by erroneous metadata; the publication in question actually appeared later than Google thinks.)
The history of the phrase is significant because it shows that, although the concept of IoT may only have reached the masses in the last few years, it has had a wide following among experts stretching back to the early 2000s.
Evolving Concepts
Also interesting is the fact that Ashton’s idea of IoT focused on using radio frequency identification (RFID) technology to connect devices together. That vision was related to, but significantly different from, today’s IoT, which relies primarily on IP networking to let devices exchange a broad range of information; RFID tagging allows much more limited functionality.
Of course, Ashton’s concept of an RFID-based IoT was not surprising at the time. In 1999, wireless networking as we know it today was still in its infancy, and cellular networks had not yet switched to a fully IP-based configuration. Under those conditions, it would have been much harder to conceive of an IoT in which every device had a unique IP address. (Plus, in the absence of IPv6, there weren’t enough IP addresses to go around if all devices joined the Internet.) Because RFID required neither an IP address nor direct Internet connectivity for each device, it would have seemed like a much cheaper and more feasible solution.
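To put the addressing constraint in perspective, here is a quick back-of-the-envelope comparison of the two address spaces, written as a minimal Python sketch; the 50-billion device count is an illustrative assumption, not a figure from this article:

```python
# Compare IPv4 and IPv6 address space against a hypothetical device count,
# illustrating why an address-per-device IoT was impractical before IPv6.
ipv4_space = 2 ** 32    # about 4.3 billion addresses, before any reservations
ipv6_space = 2 ** 128   # about 3.4 x 10^38 addresses

projected_devices = 50_000_000_000  # illustrative assumption only

print(f"IPv4 addresses: {ipv4_space:,}")                      # 4,294,967,296
print(f"Devices needed: {projected_devices:,}")
print(f"IPv4 shortfall: {projected_devices - ipv4_space:,}")  # tens of billions
print(f"IPv6 addresses: {ipv6_space:.2e}")                    # effectively inexhaustible
```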
Building the IoT
In the event, device manufacturers put little stock in an RFID-based IoT. Instead, by June 2000, the world’s first Internet-connected refrigerator, the LG Internet Digital DIOS, had appeared, featuring a LAN port for IP connectivity. (The fridge had been under development since 1997, showing that the idea, if not the name, for IoT existed well before Ashton introduced the term in 1999.)
The concept expanded and saw more real-world implementation as the 2000s continued. In 2008 the IPSO Alliance formed as a collaboration of industry partners interested in promoting connected devices. That was a sign that big businesses, not just entrepreneurs and researchers, were growing interested in implementing IoT in production environments.
Today’s IoT: Breaking with the Past, and the Importance of the Cloud
But it has only been in recent years that IoT has really become a reality on a massive scale. The IoT is no longer just about a handful of high-end Internet-connected appliances. Now, it’s common for all types of devices, from TVs to thermostats to cars, to connect to the Internet.
What has changed since the 2000s to make this all possible? There are several key factors. They include the expansion of networking capabilities, the introduction of large-scale data analytics tools (which make it easier to manage and interpret data from IoT devices), and the creation of new standards, such as the AllSeen Alliance’s AllJoyn, which make it simpler for IoT hardware and software from different vendors to interact.
Perhaps more than anything else, however, the growth of the cloud has played a crucial role in making modern IoT possible. That’s because the cloud provides a low-cost, always-on place for storing information and crunching numbers. Cheap, highly available cloud infrastructure makes it easy to offload storage and compute tasks from IoT devices to cloud servers. In turn, IoT devices can be cheaper, leaner and meaner.
Thanks to the cloud, your smart thermostat doesn’t have to do much beyond upload some very basic data to your utility company’s cloud and download the instructions you send it through the cloud for managing your home’s temperature. It doesn’t have to store the data itself. It doesn’t even have to have a local management interface (although most thermostats do) if the manufacturer doesn’t want it to. You can control the device solely through the cloud, provided, of course, that it has Internet connectivity.
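To make that division of labor concrete, here is a minimal sketch of what a cloud-offloaded thermostat’s control loop might look like. The endpoint URL, payload fields, and polling interval are all hypothetical, invented for illustration rather than taken from any real product:

```python
import time

import requests  # widely used third-party HTTP client

# Hypothetical endpoint; a real device would talk to its vendor's cloud API.
CLOUD_URL = "https://utility-cloud.example.invalid/api/thermostat/42"

def read_local_temperature() -> float:
    """Stand-in for a real sensor read; returns degrees Celsius."""
    return 21.5

while True:
    # 1. Upload a tiny telemetry payload. Storage and analysis happen in
    #    the cloud, so the device keeps no history of its own.
    requests.post(CLOUD_URL, json={"temp_c": read_local_temperature()}, timeout=10)

    # 2. Download the latest setpoint the user chose through the cloud;
    #    the device itself needs no local management interface.
    setpoint = requests.get(CLOUD_URL, timeout=10).json().get("setpoint_c")

    # A real controller would now drive the HVAC relay toward the setpoint.
    print(f"Cloud says the target temperature is {setpoint} C")

    time.sleep(60)  # simple duty cycle; real devices are far more power-aware
```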
Things were not so easy when LG was building its Internet-connected refrigerator more than a decade and a half ago. Then, the company could not count on an always-on, hugely scalable cloud to help manage the device. Instead, the fridge had to act more like a traditional computer.
Remaining IoT Challenges
For all the challenges that the cloud and other advances have helped to solve for IoT vendors, issues remain. One is a lack of universal standards. The AllJoyn framework is only one IoT standards framework; competing solutions exist, and without consensus, standards are not very useful.
Another challenge is that bandwidth and networking infrastructure are finite. The more devices you place on the network, the more traffic your network pipes have to handle, and the more connections your switches have to manage. It’s possible to expand network infrastructure, and service providers are doing that all the time. But it’s a slow and costly process. In the absence of faster ways of expanding the network’s capacity, this will remain a limiting factor for IoT’s rate of growth.
Power is a problem, too. Since part of the advantage of IoT is the ability to manage a large number of devices spread over a wide area without building them into traditional infrastructure, being able to untether IoT hardware from a permanent power source is a requirement for realizing IoT’s full potential. But the technology for doing that is not yet here. It’s on its way, but it will take time before batteries in IoT devices can last for years, or local solar cells suffice to power a device indefinitely.
Last but certainly not least, security and privacy remain huge issues for IoT. IoT devices introduce a whole new degree of online privacy concerns for consumers. That’s because these devices not only collect personal information like users’ names and telephone numbers, but can also monitor when you are in your house or what you eat for lunch. Following the never-ending string of disclosures about major data breaches, consumers are wary of placing too much personal data in public or private clouds, with good reason. IoT vendors will need to work through these security issues before IoT devices reach their full potential.
Article by Talkin’ Cloud, via The Var Guy: http://thevarguy.com/var-guy/iot-past-and-present-history-iot-and-where-its-headed-today?page=1
| 8:09p |
CoreSite Shares Spike as Cloud Data Center Leasing Accelerates
CoreSite Realty knocked the cover off the ball last year, and it appears that leasing momentum has continued to accelerate into Q1 2016.
During the past 52 weeks, shares of CoreSite (COR) traded in the range of $44.47 – $73.11 per share. Fast-forward to Thursday’s Q1 earnings release, and COR shares opened at $75.00, well above the recent highs.
The major takeaway from the Q1 conference call Thursday was how quickly the company will be converting signed deals into lease commencements to drive earnings growth in 2016.
Huge Earnings Boost
Continued leasing success resulted in management boosting full-year 2016 FFO guidance to $3.52-$3.60 per share, from $3.37-$3.47 per share. Notably, the prior guidance represented a 20 percent increase at the midpoint over FFO of $2.86 per share for 2015.
Most companies would be pleased to announce a penny or two increase. CoreSite eclipsed its prior guidance by $0.14 at the midpoint, or a 4.1 percent increase in just over 90 days.
In a bit of symmetry, Mr. Market bid CoreSite shares up by 4.19 percent, ending the day at an all-time high of $75.33 per share. There are very few REITs that deserve to trade at 21x 2016e FFO, but CoreSite has continued to deliver the growth to back it up.
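For readers who want to check the math, the guidance raise and the implied multiple work out as follows; a quick sketch using only the figures quoted above:

```python
# Verify the guidance math quoted above.
old_low, old_high = 3.37, 3.47  # prior 2016 FFO guidance, $/share
new_low, new_high = 3.52, 3.60  # raised 2016 FFO guidance, $/share

old_mid = (old_low + old_high) / 2  # $3.42
new_mid = (new_low + new_high) / 2  # $3.56

print(f"Midpoint raise: ${new_mid - old_mid:.2f}")            # $0.14
print(f"Percent raise: {(new_mid - old_mid) / old_mid:.1%}")  # 4.1%

# Implied price-to-FFO multiple at the $75.33 close quoted above.
print(f"P/FFO: {75.33 / new_mid:.1f}x")                       # about 21.2x
```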
Cloud Powers Leasing
Hyperscale public cloud providers have amped up their deployments significantly in 2016 in a digital land grab to compete for market share. This has become a windfall for data center REITs that have powered shell space available and can deliver data halls quickly.
While the paradigm shift of enterprise IT stacks to the cloud is in the early innings, large cloud providers are helping data center REITs like CoreSite hit leasing home runs right off the bat.
Related: Big Cloud Provider Leases Digital’s Entire First Data Center in Japan
During the latest quarter CoreSite executed 119 new and expansion data center leases comprising 102,678 net rentable square feet. Notably, 114 leases were for less than 1,000 SF; four were in the 1,000-5,000 SF range; and one was an 80,000 SF lease (in SV7).
Enterprise customers accounted for 53 percent of the deals, with 24 new logos added during the quarter. CoreSite’s focus on high-performance, low-latency requirements remains its bread and butter.
Fiber interconnects grew 21 percent, led by enterprise customers connecting to cloud providers.
Read more: Report Confirms Large Cloud Providers Drive Q1 Leasing
CoreSite’s ability to deliver large data halls quickly has paid off handsomely for shareholders in 2016. However, management is keenly aware that the “cloud burst” of leasing in Q4 2015 and so far in 2016 could be a one-off anomaly rather than a trend.

(Chart) Source: CoreSite – Q1 2016 Supplemental
Data center REITs now face the challenge of balancing this recent spike in demand against the amount of space already absorbed in key markets over the past couple of years.
Read more: CoreSite Reports Strong Q4 but Shell Capacity in Key Markets Short
CoreSite has 15 percent of its net rentable square feet available for lease, with another 30 percent of portfolio capacity currently being built out.
Key Markets Update
CoreSite will essentially have built out its entire Santa Clara and Northern Virginia inventory after it delivers and leases this latest round of space. However, because existing tenants did not exercise their rights of first refusal on adjacent space, the company appears to have overshot the mark in Los Angeles.
Santa Clara: The CoreSite SV6 facility is a powered shell, 100 percent leased to a build-to-suit customer signed in April 2015, scheduled to be delivered Q2 2016.
Available space at CoreSite’s SV7 in this tight data center market became the epicenter for leasing and construction activity during Q1:
- An 80,000 NRSF pre-lease had been announced previously, and it is scheduled to commence in Q2 2016.
- In response to perceived market demand and lack of large blocks of available powered shell space in Santa Clara, CoreSite has accelerated the SV7 build-out of the entire 230,000 SF facility.
- This will provide over 123,000 NRSF of spec powered shell space in one of the hottest markets in the US by Q3 2016.
Virginia: CoreSite has a small amount of capacity left in the 80-percent-leased VA2 Phase 2, which was completed in Q4 2015.
- VA2 Phase 3 was completed in Q1 2016, with half of the 48,484 NRSF, or one large data hall, remaining available for lease.
- VA2 Phase 4, another 48,484 NRSF of space, will be delivered as 100 percent spec shell space in Q2 2016.
Los Angeles: CoreSite is bringing online another 43,345 NRSF in LA2 in Q2 2016, and still has about 20 percent of the space available from the prior 17,500 SF build-out.
Notably, the LA market is still CoreSite’s largest source of revenue at 27.6 percent, closely followed by San Francisco Bay at 24.4 percent, and Northern Virginia at 21.6 percent.
While the Los Angeles data center market has only been growing ~2 MW per year, CoreSite is a major player in this media/content distribution-focused market.
Read more: Data Center REITs Scored Big in 2015 Despite Weak Markets
During the conference call, CEO Tom Ray said that, in retrospect, the large LA2 expansion may have been premature. However, LA2 is owned by CoreSite, while LA1 is a leased facility. During Q1 2016, leasing at CoreSite’s LA2 eclipsed the iconic One Wilshire (LA1) facility for the first time.
Bottom Line
One of the major dilemmas facing data center REIT investors and management alike is trying to understand how the enterprise paradigm shift to public, private, and hybrid cloud will play out over time.
Read more: Hybrid Cloud Growth Powers Data Center REITs 19.6 Percent Higher
However, it is becoming increasingly clear that data center REITs positioned to deliver space quickly are a valuable piece of the puzzle for cloud providers battling it out for market share.
CoreSite’s connectivity strategy of providing on-ramps to Amazon Web Services and other hyperscale cloud players continues to drive outperformance. The flexibility to deliver high-performance space cabinet by cabinet, along with the capability to selectively lease wholesale space — and deliver it quickly — has paid off handsomely for investors.
On the earnings call CoreSite acknowledged the need to put new pins in the map in Santa Clara, Northern Virginia, and Chicago, but similar to last quarter, management continues to keep those cards close to the vest. | 9:42p |
FBI to Build Data Center in Idaho
The FBI is planning to build a data center in Pocatello, Idaho, a facility that will likely be one of three core data centers the bureau will have once it completes an ongoing data center consolidation project.
The mayor of Pocatello announced the FBI’s decision to build a data center in town this week. The bureau officially revealed its plans in February, when it released a solicitation document seeking contractors interested in taking on the design-build project, potentially worth over $10 million.
The project is to construct a 100,000-square-foot building that will include office space and a 25,000-square-foot data hall that can support 5.4MW of IT. The building will have the potential to add 8,000 square feet of data center space in the future.
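Those numbers imply a relatively dense design. Here is a quick sanity check of the implied power density, using only the figures above:

```python
# Rough power-density check using the figures quoted above.
it_load_watts = 5_400_000  # 5.4MW of IT capacity
data_hall_sqft = 25_000    # 25,000-square-foot data hall

density = it_load_watts / data_hall_sqft
print(f"{density:.0f} W per square foot")  # 216 W/sq ft, on the dense side
```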
This will be an expansion of the FBI’s existing campus in Pocatello. Mayor Brian Blad told local news reporters that the expansion will turn the campus into a major hub for the bureau.
He also said there was a lot of competition for the project from other cities, and that he had been working on luring it to Pocatello for five years. The expansion is expected to bring 300 new permanent jobs and 80 temporary construction jobs to the city.
The FBI is part of the Department of Justice, which, like other federal departments and agencies, has been required to aggressively consolidate its data center footprint.
The Federal Data Center Consolidation Initiative, which started in 2010 and was replaced with a new initiative last month, had resulted in the closure of more than 3,100 data centers as of March, according to the Government Accountability Office, but the departments of Agriculture, Defense, the Interior, and the Treasury were responsible for 84 percent of those closures.
The rest of the agencies “made limited progress,” the GAO said in a report issued last month.
The White House initiative that replaced FDCCI, called the Data Center Optimization Initiative, imposed a virtual freeze on all new data center construction by federal agencies. To get a build approved, an agency has to prove that it has weighed every alternative, such as colocation with other agencies or outsourcing to data center and cloud providers, and found it necessary to build a new data center.
In a January 2015 note, the FBI said its plan was to consolidate into three core enterprise data centers by the end of fiscal year 2019, and that two of them would be operated by the bureau.