Data Center Knowledge | News and analysis for the data center industry

Tuesday, November 15th, 2016

    4:37p
    Wind Power Provider for Microsoft DC: Cost Savings Will Come

    The man who directs resource planning for Black Hills Corp., the owner of two wind farms in Wyoming and the winner of a huge contract to provide power for Microsoft’s Cheyenne data center, told Data Center Knowledge that there will be cost savings for Microsoft in adopting renewable power... down the road a bit, but along the way nonetheless.

    “I think this new structure between ourselves and Microsoft, where we’re able to leverage their generating units behind the meter — ultimately, long-term, that’s going to be a lower cost than normal in today’s market,” said Black Hills’ Chris Kilpatrick.

    Two wind farms — Black Hills’ Happy Jack and Silver Sage projects — are involved in this deal.  Black Hills serves some 1.2 million natural gas and electricity customers in eight states, as far north as Montana and as far south as Arkansas, from its home base in Rapid City, South Dakota.

    Behind the Meter

    When any new customer enters Black Hills’ service territory, Kilpatrick said, it’s his firm’s obligation to provide energy.  Whoever the customer may be, Black Hills may need to build new generation facilities to serve the load, or enter into a long-term power purchase agreement (PPA).

    As part of the Microsoft deal, he explained, the data center provider has agreed to supply generation behind the meter — meaning supplemental power, in this case from natural gas generators, that backs up what wind power may not consistently be able to provide.  Not having to build that backup supply saves not only Microsoft, said Kilpatrick, but also other customers in the area, who are spared the supplemental transmission construction that could otherwise impact service.

    In a corporate blog post Monday, Microsoft President and Chief Legal Officer Brad Smith explained, “Unlike traditional backup generators that run on diesel fuel, these natural gas turbines offer a more efficient solution and, more importantly, ensure the utility avoids building a new power plant.  This is a small step toward a future where other customer-sited resources may help make the grid more efficient, reliable and capable of integrating intermittent energy sources like wind and solar.  And as we recently demonstrated in our pilot with Agder Energi in Norway, this future will be enabled by the application of cloud technologies that enable utilities to visualize and optimize resources, providing the foundation for a low carbon energy future.”

    That’s suggesting that Microsoft may be adding some intelligence on its end of the bargain — again, behind the meter.  That Agder Energi project to which Smith referred is an agreement with the Norway-based renewable energy provider to use resources hosted on Azure to develop new, real-time models for situational awareness on the power grid.

    System Upgrades

    All this is not to say this energy deal is ready to go the moment Microsoft flips the switch.  Black Hills has already constructed one new power substation, and Kilpatrick told us another is in the works, possibly costing some $15 million.  Improvements to some transmission lines will also be necessary, at a cost of about $5 million.

    There’s also this: every time a new customer enters the grid with a power draw that could peak at 237 MW — the number Microsoft touted on Monday — the strategy for serving the entire western grid must be adjusted.

    “It’s another piece of the puzzle that they certainly have to take into account, absolutely,” said Black Hills’ Kilpatrick.

    “We have an electric industry of schedulers that work on the western half of the United States grid, basically scheduling energy through specific transmission lines to make sure that you don’t oversubscribe transmission lines, and make sure that you have enough capacity and enough space in those lines to allow energy to flow through.”

    Black Hills utilizes such a group for its customers in Wyoming, South Dakota, and Colorado, to make certain the infrastructure is capable of supplying those three states.
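
    The constraint those schedulers enforce boils down to a capacity check on each transmission line: the energy already scheduled onto a line, plus any new block, must stay within the line's rating.  A toy sketch of that check in Python, with invented line names and ratings (purely illustrative, not Black Hills' actual planning tooling):

        # Toy illustration of the scheduling check Kilpatrick describes: the total
        # energy scheduled onto a transmission line must not exceed its rated capacity.
        line_capacity_mw = {"line-A": 500, "line-B": 300}   # invented ratings
        scheduled_mw = {"line-A": 410, "line-B": 180}       # invented current schedules

        def can_schedule(line: str, extra_mw: float) -> bool:
            """True if an additional block of energy fits without oversubscribing the line."""
            return scheduled_mw[line] + extra_mw <= line_capacity_mw[line]

        print(can_schedule("line-A", 237))   # False: a 237 MW block would oversubscribe line-A
        print(can_schedule("line-B", 100))   # True: 280 MW stays within the 300 MW rating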

    Perhaps Microsoft could use a bit of that situational awareness today.  Kilpatrick, who told us he’s worked with Microsoft in planning energy consumption at its Cheyenne data center for about four years, admitted the facility does not consume anything close to 200 MW of power today.

    A back-of-the-matchbook estimate for how much power the western U.S. power grid provides, he told us, is about 40,000 MW — making the Cheyenne DC, at peak consumption, about one half of one percent of the total draw.  Still, that’s astonishingly sizable.
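
    For readers who want to check that back-of-the-matchbook figure, a couple of lines of Python reproduce it, using the 237 MW peak Microsoft cited and Kilpatrick's rough 40,000 MW estimate for the western grid (both are the approximations quoted above, not measured values):

        # Cheyenne DC peak draw as a share of the western grid, per the figures in this article.
        cheyenne_peak_mw = 237       # Microsoft's stated peak draw
        western_grid_mw = 40_000     # Kilpatrick's ballpark for the western grid
        print(f"{cheyenne_peak_mw / western_grid_mw:.2%}")   # -> 0.59%, roughly half of one percent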

    “Part of what we do, when we’re figuring out how to serve our customers — especially large ones — is to work extensively with our regional transmission group,” he added.  “We do regional transmission planning to make sure there’s enough transmission for the whole region.  We don’t just look physically only at Cheyenne, or only at Wyoming.  It’s a bigger piece of the pie.  We do studies with partners in the region as well, to make sure we’re not upsetting the apple cart.”

    Today’s agreement marks the fourth renewable PPA for Microsoft in the U.S.  The first, signed in November 2013, was a 20-year PPA with the Keechi Wind Project, led by RES Americas, a Texas-based wind power supplier.  That was followed in July 2014 with an agreement with EDF Renewable Energy, to purchase power from the Pilot Hill Wind Project, a 175 MW facility some 60 miles north of Chicago.  The third, signed last March, involved boosting solar power to the Virginia region by some 20 MW.

    10:17p
    Data Center Experts Weigh In On Trump Economy

    The winds of change are still blowing across the US after the national election results were tabulated last week. There is little doubt that President-elect Donald Trump — working with a Republican majority in both houses of Congress — will usher in a different business environment for 2017, and beyond.

    Commercial real estate landlords, tenants and investors are now focusing on how new leadership in the Oval Office will influence demand for space. Silicon Valley-based technology giants are also grappling with how the new administration will view the digital economy, data sovereignty, visas for workers, and potentially prickly trade agreements.

    Read more: Tech Defanged as Stocks From Amazon to Netflix Left Out of Rally

    Most commercial real estate sectors are leveraged to growth in GDP, jobs and consumer spending. However, data center demand is primarily linked to macro trends, including: growth of cloud computing, big data, distributed or “edge” IT architecture, wireless data and streaming media, IT outsourcing trends, and emerging Internet of Things (IoT) deployments.

    DCK – Exclusive

    Data Center Knowledge reached out to a group of commercial real estate experts who are data center specialists for post-election expectations. Here are insights from industry insiders on leasing trends for 2017, as well as commentary on data center supply and demand in several key markets.

    CBRE

    “We believe the industry will continue to expand based on issues larger than the U.S. election and will not be really affected by the next administration,” said Todd Bateman, North American Agency Practice Leader for CBRE’s Data Center Solutions Group. “However, clearly any administration that plans on investing in infrastructure and mitigating taxes has benefits for the data center industry. Also, if the new administration is looking to create more governmental efficiencies, there may be opportunities for outsourcing that could benefit data center providers as well.”

    Read more: Digital Realty to Build Data Center Tower in Downtown Chicago

    Bateman is actively marketing the proposed 12-story 330 E. Cermak project, planned to be built directly across from the iconic 350 E. Cermak, in the South Loop of the Chicago Central Business District.

    North American Data Centers

    “There is significant pent-up demand,” according to North American Data Centers’ Managing Principal Jim Kerrigan. “The biggest challenge is lack of existing supply in key markets.”

    He pointed to Chicago as a market where many enterprise users are looking for space but inventory available for immediate occupancy is limited.

    “Now that the election is over, there will be an increase in leasing activity, particularly by enterprise users, during the fourth quarter.  This will be consistent with 2012, when most of the enterprise leasing activity took place in Q1 and November/December. Also, the increase in sale-leaseback activity, particularly as it relates to data centers, will carry through to 2017.”

    Jones Lang LaSalle

    JLL Managing Director Allen Tucker said, “I don’t believe data center deals that are in the current pipeline will be affected by either the current administration or by President-elect Trump.”

    Tysons, Virginia-based Tucker operates in the middle of government contracting country. DCK asked: Is the buzz optimistic?  Tucker replied, “Yes, in general, many are optimistic about a change in administration, but change will not happen immediately.  My guess is, after the 100 days of policy decisions are put into motion, the tail wagging will take effect later next year, approximately third quarter 2017.”

    Read more: Cloud Fuels Unprecedented Data Center Boom in Northern Virginia

    Another JLL Managing Director, Dallas-based Bo Bond, also weighed in on the election results as they relate to what he sees in his Texas markets. He replied in an email, “I believe the election results won’t adversely affect the data center momentum we have going into the New Year.  We are working on behalf of a number of providers on their expansion plans (land acquisitions for data center campuses) AND our user clients are desperately trying to get projects booked by year end while gearing up for new 2017 budget cycles. We are very bullish on 2017!”

    Five 9s Digital

    North Carolina-based brokerage and data center developer Five 9s Digital principal Stephen Bollier weighed in on the scramble to get deals done prior to year-end 2016. He said, “I think any deal not baked six months ago will be pushed into 2017, as the deal cycle is too complex to get things done in the next 45 days.”

    Five 9s principal Doug Hollidge added, “The enterprise transition to the hybrid cloud will continue as corporations further understand the value proposition of outsourcing their data center requirements, place more trust in third-party platforms, and gain an increased number of options to evaluate.”

    Newmark Grubb Knight Frank

    Data Center Knowledge asked NGKF’s Bryan Loewen, executive managing director, global lead, Data Center Consulting Group: What are your views for leasing in 2017, given record CSP deployments this year?

    He replied in an email, “I expect 2017 to be on par with 2016 for leasing across the North American markets. I expect Dallas and Chicago to have more volume in 2017, and I expect Northern Virginia and Santa Clara to taper off slightly from 2016 results.  NY/NJ will probably increase slightly for 2017; however, the 2016 leasing volumes for NY/NJ are lower than anticipated.”

    Read more: Report: Data Center Market Trends ‘Strong Demand, Smart Growth’

    Investor Takeaway

    The uncertainty regarding the election results has now been replaced with the unknowns surrounding the business environment and trade policies under a Trump administration.

    There is currently a lot of market power concentrated in a handful of Silicon Valley tech behemoths. It remains to be seen if there will be any regulatory fallout from the ugly campaign rhetoric — particularly for M&A deals, and network/content plays like Comcast and AT&T.

    Read more:  Data Center REITs Q3 Update – Is the Sky Really Falling?

    The good news for long-term investors is that data growth is more or less politically agnostic and should continue unabated for the foreseeable future. However, it is anyone’s guess as to how long the record leasing by the giant cloud providers will continue.

    10:21p
    Chicago Gets Hotter in the Fall: Ascent Adds Power to CH2

    In the latest signal that Chicago may as well be the data center capital of middle America, Ascent Data Centers declared Tuesday it has completed a project to deliver an additional 2.5 MW of critical power capacity for a single tenant in its CH2 facility.

    CH2 is located at 505 North Railroad Avenue in the western suburb of Northlake, just south of O’Hare Airport, a half-block east of Interstate 294, and within a stone’s throw of three local cemeteries.  Prior to the upgrade, it had been a 3.3 MW, 250,000 square-foot facility, more than 16,000 square feet of which were occupied by the one tenant that needed more critical power.

    “This second development of ours, CH2, we really cater to enterprise-class customers,” explained Ascent CEO Phil Horstmann, in an interview with Data Center Knowledge.  “Most of our data center capacity that we’ve developed, delivered, and leased to tenants has been customized and built-to-suit.  This suite was a third-phase expansion for a very large enterprise user, and as their business continues to grow and they need more capacity to serve their customers, it just fell in line for that buildout.”

    Generator equipment inside Ascent Data Centers’ CH2 facility in Chicago. [Courtesy Ascent]

    Ascent boasts 40-gigabit connectivity to CH2 from the Chicago Mercantile Exchange (CME), the Intercontinental Exchange, and the NYSE’s Chicago branch downtown, as well as from the CME branch in Aurora.  That’s important for the CME, which sold its own Aurora data center to CyrusOne last March, and may still be looking for good reasons not to pull out of the state of Illinois.

    As Horstmann told us, Ascent envisioned CH2 to be a showcase for what he calls “dynamic data center suites,” whose tenants’ needs for expansion are anticipated in advance.  It’s capacity planning taken to one logical extreme: effectively pre-equipping growing customers with space they will eventually use, hopefully in a contiguous location.

    So this expansion request was not a surprise but part of the plan.  The bulk utility power for CH2 was already on-site, said Horstmann, along with a high-voltage 138 kV substation with 54 MW capacity.  On the other hand, the extra power draw requirement did call for some work, taking some 150 days from ground-breaking.

    “A lot of times it takes a little time to get utility for a user to just turn on that kind of capacity,” the CEO explained.  “But since that was already there, and we had an existing building and a good number of different reference designs, we were really just ‘rinse-and-repeat’ on a customized program that we had already deployed two of, for this customer.”

    Horstmann credits his engineers with the ability to break down critical infrastructure requirements into logical building blocks of electrical and mechanical capacity.  By setting aside these building blocks in advance, by way of capacity planning, he said, it’s possible to deploy additional capacity without becoming overcommitted.
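
    Horstmann didn’t detail that model, but the idea of reserving discrete building blocks of capacity against forecast growth can be sketched roughly as follows.  This is a hypothetical illustration in Python; the block size, tenant name, and the use of the 54 MW substation figure as a ceiling are assumptions for the example, not Ascent’s actual planning numbers:

        # Hypothetical sketch of block-based capacity planning: reserve discrete
        # blocks of electrical/mechanical capacity for tenants' forecast growth
        # without overcommitting what the site can actually deliver.
        SITE_CAPACITY_MW = 54.0      # substation capacity cited above, used here as the ceiling
        BLOCK_MW = 2.5               # size of one deployable block (illustrative)

        committed_mw = 0.0
        reservations = {}            # tenant -> number of blocks reserved

        def reserve_blocks(tenant: str, blocks: int) -> None:
            """Reserve future capacity blocks for a tenant, refusing to overcommit the site."""
            global committed_mw
            needed = blocks * BLOCK_MW
            if committed_mw + needed > SITE_CAPACITY_MW:
                raise ValueError(f"Reserving {needed} MW for {tenant} would overcommit the site")
            committed_mw += needed
            reservations[tenant] = reservations.get(tenant, 0) + blocks

        reserve_blocks("enterprise-tenant-A", 1)   # e.g., the 2.5 MW expansion described here
        print(f"committed: {committed_mw} MW, headroom: {SITE_CAPACITY_MW - committed_mw} MW")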

    Generators outside Ascent Data Centers’ CH2 facility. [Courtesy Ascent]

    “I think that’s a real advantage we have with the cloud service providers,” he said.  “At the breakneck pace at which that business is growing, they never really know if they need 2 MW today, and 12 or 20 MW by the end of the year.  That’s where a lot of our current discussions are centering, with those types of users — the ability to turn on large blocks of capacity, such as 3 to 6 MW, in an even shorter timeframe like 60, 90, or 110 days.”

    For Ascent, Horstmann explained, this creates a natural but necessary tension between pre-allocating space and holding it off the market on one hand, and meeting customers’ demand to keep pace with their future growth on the other.

    “Our tenants are smart, sophisticated companies,” he said, “so they always have a pretty good sense.  But capacity planning is a challenge.  You can’t just have endless expansion space and hold it off the market forever, but you have to have a good balance of developable space, and a good tenant mix.  That’s what leads to a healthy, growing development.”

    10:29p
    F5 Leverages Equinix to Extend Deployment, Security to Microservices

    In a move intended to ease mid-level businesses’ transition to modern microservices in colocated environments, application delivery provider F5 Networks announced today the addition of application-centric and even container-centric deployment and access control services to its portfolio.  In doing so, the firm is taking advantage of its existing partnership with colocation leader Equinix, providing these new services over Equinix Performance Hub.

    It’s not an easy development to explain, so we’ll go slowly:  Mid-level enterprises are looking for ways to transition their global online presence to a model that works much more fluidly, like Google or Netflix.  This transition involves the use of containerization as a more manageable model for applications, as well as a weaning from traditional, virtual machine-centered hosting environments.

    At the same time, these enterprises are looking to more easily deploy certain of their applications to the public cloud, as necessary, on a per-application basis.  Containerization eliminates the overhead of deploying huge virtual machines just to support them.

    Extend the Plank

    F5 perceives a viable market there, specifically for businesses that want a hassle-free mechanism for deploying, maintaining, and securing new microservices models.  So it’s tapping into the pool of businesses that have already demonstrated their willingness to bypass the public Internet and interface directly with Equinix’s network of high-speed interconnects.

    “Application Connector gives you the ability to put an app out in the public cloud — you want it to have the same security policies,” explained Lori MacVittie, F5’s principal technical evangelist, in an interview with Data Center Knowledge.  “That would be anything from dealing with DDoS at the TCP level and the HTTP level, to having WAF policies [Web Application Firewall] move with that application.  So it allows those applications, when they’re launched in the cloud… to dial home and get the right policies provisioned, so that they’re automatically there.”

    So the same access control policies that apply to an application when it’s hosted on-premises (or within the customer’s leased domain) can be extended to that application when it’s deployed to public cloud — and also, in this case, through Equinix Cloud Exchange.  VMware is offering a similar concept, although it requires the use of its NSX network virtualization platform and also its vSphere operating environment. Plus, it requires partners such as Microsoft Azure and IBM Cloud to be on board.
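
    F5 hasn’t published the wire-level details, but the “dial home” pattern MacVittie describes can be sketched roughly as follows.  This is a hypothetical illustration in Python: the policy-service URL, application ID, and enforcement hook are invented stand-ins, not Application Connector’s actual API:

        # Hypothetical "dial home" pattern: a freshly launched cloud instance pulls the
        # security policies (DDoS thresholds, WAF rules) that govern its on-premises twin.
        import json
        import urllib.request

        POLICY_SERVICE = "https://policy.example.internal/apps"   # invented endpoint

        def fetch_policies(app_id: str) -> dict:
            """Retrieve the access-control and WAF policy bundle for this application."""
            with urllib.request.urlopen(f"{POLICY_SERVICE}/{app_id}/policies") as resp:
                return json.load(resp)

        def apply_policies(policies: dict) -> None:
            # In a real deployment this would configure the local proxy or WAF;
            # here it only shows that the same policy bundle follows the app into the cloud.
            for name, rule in policies.items():
                print(f"enforcing {name}: {rule}")

        if __name__ == "__main__":
            apply_policies(fetch_policies("orders-service"))   # invented application ID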

    Nerve Center

    Similarly, F5’s new Container Connector intends to do much the same thing with the applications hosted within Docker and other containers (e.g., OCI, rkt).  But it’s a much more complex undertaking, especially since managing microservices bears a closer resemblance to herding cats.

    For extending access control policy to containers, F5 needs a go-between.  In this case, it’s the orchestrator — the mechanism that automates the deployment of individual containers to available resources in real-time.  As MacVittie confirmed, Container Connector will interact with Kubernetes — at present, the most prominent open source orchestrator in the space — by means of its native API.

    Using Kubernetes, she told us, Container Connector will facilitate the deployment of BIG-IP, F5’s application delivery controller (ADC), as well as an Application Services Proxy (ASP, not to be confused with Microsoft’s technology of the same name).  These will act as load balancers within the container environment, as well as access controllers.

    “Each service will get its own lightweight proxy,” MacVittie explained, “that might then be dealt with by something upstream.  Every time a container comes up, you need to have either a service registry, or some way to tell these load balancers — whether they’re upstream, or sitting with the containers — that there’s a new one, and please add this to your pool.  Conversely, if you take one out of rotation, you have to get it out of there.  And this can happen very fast; container lifetimes are highly variable, more than we’re used to seeing.”
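
    F5 hasn’t detailed the internals of Container Connector, but the registration pattern MacVittie describes (watching the orchestrator and updating a load-balancer pool as containers come and go) looks roughly like this sketch against the Kubernetes API, using the Kubernetes Python client.  The namespace and the pool bookkeeping are illustrative stand-ins, not F5’s implementation:

        # Rough sketch: watch Kubernetes Endpoints for changes and keep a load-balancer
        # pool in sync as containers appear and disappear.
        from kubernetes import client, config, watch

        config.load_kube_config()        # or load_incluster_config() when running in-cluster
        v1 = client.CoreV1Api()
        pools = {}                       # service name -> set of member IPs (stand-in for the proxy's pool)

        w = watch.Watch()
        for event in w.stream(v1.list_namespaced_endpoints, namespace="default"):
            ep = event["object"]
            name = ep.metadata.name
            members = {
                addr.ip
                for subset in (ep.subsets or [])
                for addr in (subset.addresses or [])
            }
            current = pools.get(name, set())
            for ip in members - current:
                print(f"add {ip} to pool for {name}")       # register a newly started container
            for ip in current - members:
                print(f"remove {ip} from pool for {name}")  # take a terminated container out of rotation
            pools[name] = members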

    Deeper Dive

    What F5 is trying to establish is a kind of communications mechanism for dispatching policy updates throughout a fast-moving system.  It must use the orchestrator as the go-between here; although it’s technically feasible to field calls from active containers directly, that job would be akin to tagging mosquitos, releasing them back into the wild, and expecting regular reports from them.  Since the orchestrator determines the lifespan of containers in a system, several of them could cease to exist whenever the orchestrator makes that determination — right in the midst of a policy operation.

    F5’s choice of interface should avoid this contingency.  However, it does require a concession to the new architectural model, which means that Container Connector must itself become a container, amid all the others in the system.  MacVittie confirmed that F5 will distribute pre-built containers, with its new components included, through its own Web site.

    “We’re also providing visibility data back from the services proxy [ASP],” she added.  “So if that’s sitting in front of containers, we’re able to provide things like uptime intervals, response times, and the metrics around how these things are running and what their status is.

    “We’re trying to make sure we provide not only the basic services of load balancing and making sure things are available and can scale up and down.  Let’s also make sure the DevOps community has the necessary metrics they need, in order to understand what’s going on in their environments, how their applications are performing, and what feedback they need so that they understand what’s going on.”

