Data Center Knowledge | News and Analysis for the Data Center Industry
 

Wednesday, May 25th, 2016

    12:00p
    Vantage to Add 21MW in Supply-Starved Silicon Valley Data Center Market

    Here is some good news for Bay Area technology firms: One of the tightest data center markets in North America is going to have more product available.

    Vantage Data Centers has announced plans to increase its share of the Silicon Valley data center market by expanding its existing 51MW campus in Santa Clara to 72MW. The earliest availability for the V5 and V6 buildings is projected for the second half of 2017.

    The latest expansion is in addition to Vantage’s 6MW V4 data center, which is expected to be delivered in fall 2016. The two-story V4 design is similar to the company’s V1 facility, with 200 watts per square foot on a raised floor and airside economization, or free cooling.
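
    A power density figure like that implies a rough raised-floor footprint. As a back-of-the-envelope sketch (the resulting square footage is our own inference, not a number Vantage has published), assuming the full 6MW of critical load is delivered at the stated density:

    ```python
    # Back-of-the-envelope: raised-floor area implied by a power density spec.
    # Assumption (ours, not Vantage's): all 6MW of V4 critical load is
    # delivered at 200 watts per square foot of raised floor.

    critical_load_watts = 6_000_000   # 6MW V4 data center
    density_w_per_sqft = 200          # 200 watts per square foot

    implied_floor_sqft = critical_load_watts / density_w_per_sqft
    print(f"Implied raised floor: {implied_floor_sqft:,.0f} sq ft")  # ~30,000 sq ft
    ```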

    “Vantage’s original V1 facility contains the only newly built 3MW of modern data center space currently available in the Silicon Valley market,” the company said in a statement.

    Read more: Silicon Valley – a Landlord’s Data Center Market

    Vantage plans to commence construction on the V5 powered shell without an anchor tenant or prelease. Sureel Choksi, the company’s president and CEO, told Data Center Knowledge that he feels confident expanding the existing campus by 40 percent, given the current deal pipeline and lack of supply in the Santa Clara market.

    He pointed out the paucity of suitable expansion sites in the area, which is constrained by flood plains, flight paths, and high-speed rail corridors, even before the entitlement process comes into play. The real trick, however, is avoiding man-made disasters like oversupply.

    Most notably, the large data center REITs are now using a phased approach to allocating capital. This has not always been the case in Santa Clara, which until recently saw periods of significant oversupply.

    Phased and Fully Funded

    Vantage’s 21MW expansion will be done in phases: V5 will be built at the existing Vantage campus, while the V6 data center will be built on newly acquired land immediately adjacent. The Vantage design is flexible and can accommodate wholesale users from 500kW up, offering both N and 2N configurations.
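
    For readers unfamiliar with the shorthand: N means just enough power and cooling components to carry the design load, while 2N duplicates every component for full redundancy. A minimal sketch of what that means for a hypothetical 500kW wholesale deployment (module size and counts are illustrative, not Vantage’s actual design):

    ```python
    import math

    # Illustrative N vs 2N sizing for a hypothetical 500kW wholesale deployment.
    # Assumptions (ours, not Vantage's): 250kW UPS modules, no growth margin.

    it_load_kw = 500
    ups_module_kw = 250

    modules_n = math.ceil(it_load_kw / ups_module_kw)   # N: just enough capacity
    modules_2n = 2 * modules_n                          # 2N: a fully redundant duplicate set

    print(f"N  configuration: {modules_n} modules, {modules_n * ups_module_kw} kW installed")
    print(f"2N configuration: {modules_2n} modules, {modules_2n * ups_module_kw} kW installed")
    ```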

    Notably, this expansion is fully funded by the $295 million increase in the company’s credit facility to $570 million, announced in February and led by RBC Capital Markets.

    Vantage is backed by private equity firm Silver Lake, with $24 billion of assets under management. This gives Vantage the horsepower to compete for customers with the large public REITs active in the Santa Clara market.

    The company’s Santa Clara campus is already home to security software giant Symantec, the enterprise Hadoop company Cloudera, and MarkLogic, an enterprise NoSQL database company.

    Notable Leases in Silicon Valley: 2015

    Jones Lang LaSalle, a brokerage firm with a large data center practice, pegged the total data center absorption in the San Francisco Bay Area/Silicon Valley market at 38MW for 2015. Recently, strong demand estimates for this market have been widely reported by both brokers and landlords.

    Leasing activity was brisk last year, which has exacerbated the shortage of large contiguous data center halls in Santa Clara. According to a North American Data Centers report:

    • Vantage leases in Santa Clara included Microsoft at 10MW and Arista at 3MW, with VMware and Symantec taking 2MW each
    • Alibaba leased 3MW from CenturyLink
    • Amazon Web Services was identified as the tenant of CoreSite’s 130,000 SF SV6 build-to-suit, while Uber also took down 4MW with CoreSite in 2015

    Subsequently, NADC reported Microsoft had leased 16MW in Santa Clara from DuPont Fabros Technology, which significantly impacted the amount of available space in the market.

    Read more: Report Confirms Large Cloud Providers Drive Q1 Leasing

    CoreSite signed an 80,000-square-foot lease in Santa Clara during Q1 2016 and also announced the acceleration of the final 123,000-square-foot phase of SV7.

    Read more: CoreSite Shares Spike as Cloud Data Center Leasing Accelerates

    What Comes Next?

    Vantage is unsure how long this spike in demand from hyperscale public cloud providers will last. Historically, demand for Silicon Valley data center space has come from software firms and content providers that need their data centers close to their IT workforce. Santa Clara has not depended on cloud providers for the vast majority of its data center absorption.

    Choksi also discussed expansion beyond Santa Clara and Quincy, Washington, with Data Center Knowledge. It is essentially a chicken-and-egg proposition: Vantage is looking to leverage existing customer relationships to seed the next location, but only in markets it views as attractive.

    During the past couple of years Vantage has come close but has not been able to consummate a deal with the right party to balance development risks. One advantage of being privately held is not having to meet or exceed analyst estimates for growth each quarter.

    However, Choksi and his team have certainly noticed the recent success of public REIT competitors. Since Vantage has most of its eggs in one basket, an IPO at this time would not be well received. Expansion into other data center markets could give Vantage and Silver Lake more options moving forward.

    3:00p
    CenturyLink Data Center VP Joins RagingWire as COO

    RagingWire Data Centers, the US data center provider majority-owned by Japan’s NTT Communications, has appointed Joel Stone, former head of data center operations at CenturyLink, as senior VP and chief operating officer.

    Stone will lead facilities engineering, design, construction, and data center operations at RagingWire, which recently pivoted from a mixed retail and wholesale data center services model to one focused on wholesale, seeking to take advantage of the current hunger for data center capacity by big cloud providers, such as Amazon, Google, and Microsoft.

    Stone has a lot of experience in both data center services and web-scale data center infrastructure. At CenturyLink, he oversaw a global portfolio of 60 colocation data centers. Prior to joining CenturyLink in 2011, he oversaw global operations at Global Switch, which provides data center services in Europe and Asia. Before that, he spent nine years managing data centers at Microsoft.

    Sacramento, California-based RagingWire was attractive to him because he buys into the company’s new business strategy, Stone said in an interview. “They’re making their mark in the wholesale industry,” he said. “I have a great opportunity to come in and help them with their strategy to grow the business.”

    Joel Stone, COO, RagingWire Data Centers

    He left CenturyLink at a time of uncertainty for the Monroe, Louisiana-based telco’s data center business. Since last year, the company has been exploring alternatives to owning its extensive data center portfolio, which it increased substantially in 2011 when it acquired data center provider Savvis for $2.5 billion.

    The company’s executives have said that while CenturyLink has no plans to get out of the colocation business, it is weighing a potential sale of some or all of its data center assets.

    Stone declined to comment on CenturyLink’s data center plans.

    His focus at RagingWire will be on expanding the company’s existing campuses in Sacramento, California, and Ashburn, Virginia, completing the massive-scale data center construction project the company kicked off last year in Texas, and building in new markets. RagingWire is eyeing expansion into New York, Silicon Valley, Chicago, and another West Coast market – Los Angeles, Phoenix, or eastern Washington – the company’s president, Doug Adams, told DCK earlier.

    There isn’t a single particular type of data center design that works for all cloud providers, Stone said. “Some cloud providers require very strict architecture, and they really don’t want to deviate from a true 2N-type of style. Others have more of a single-cord [approach] and they have geographic redundancy.”

    For RagingWire, the key differentiator will be scale, since that’s what all the major cloud providers are after today: they’re racing to expand data center capacity, and they expand in multi-megawatt chunks.

    “From our perspective, it’s all about scale,” Stone said.

    3:30p
    How Colocation and the Cloud Killed the Data Center

    Laz Vekiarides is Chief Technology Officer for ClearSky Data.

    In speaking with enterprise CIOs and IT managers, I hear a lot of the same stories about successful technology deployments and complicated mistakes. As companies scale, they tend to take separate paths to similar ends, eventually running into the same obstacles.

    One of the most interesting, and not infrequent, stories I’ve heard comes from enterprises that recently built primary or secondary data centers – without considering that in the modern cloud era, there are no circumstances under which a company should build a data center.

    A company telling this story likely bought land and constructed its new data center in a remote part of the country, where real estate and utilities were cheap. It entered into a contract with the single network carrier that served the area. Then, as the organization grew and sought to work with new service providers, the team was surprised to learn that the site’s so-called valuable location prevented the data center from accessing certain services, ultimately putting a cap on the company’s growth.

    Gartner recently noted that the cloud and colocation sites are “natural allies, not competitors,” explaining that an IT strategy combining colocation with cloud can help reduce latency, increase security, and create cloud interconnection opportunities. If the potential to build a data center site is still on the table for your team, consider the ways below that cloud computing and colocation have changed the IT game for good, and how your enterprise can benefit from the shift.

    Carrier Diversity Matters in the Modern Data Center

    It’s clear that the cloud should be part of your IT strategy, even if your team has yet to determine how to leverage it. Many CIOs are stuck, having moved some workloads to the cloud but facing obstacles as they attempt to migrate the rest of their business. According to Gartner, security and IT complexity are the top reasons cloud strategies grind to a halt. For these teams, it’s important to remain educated about their companies’ individual needs, and seek services that can help meet them.

    In any case, when you’re dealing with the cloud, you’re dealing with remote IT resources. These require private networks with high levels of bandwidth and resiliency, and support from a robust data center provider. (To avoid vendor lock-in, you’ll want to make “providers” plural.)

    When you build a data center, you pay a carrier to run fiber to the facility. Depending on the site’s location, the area often lacks competing providers, meaning you need a single major telecom carrier to physically connect your building. However, as reported by CIOs like the one in our example, this agreement can lock you into a relationship with that one carrier and cut you off from connectivity and service options. And if your team ever needs to recover from a physical disaster or hardware failure, a lack of diverse resources in your data center can severely complicate your backup plan.
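
    The risk is easy to put in rough numbers if you assume carrier outages are independent (an optimistic assumption, and the availability figures below are illustrative, not drawn from any carrier’s SLA):

    ```python
    # Why carrier diversity matters: combined availability under the
    # simplifying assumption that carrier outages are independent.

    single_carrier_availability = 0.999   # illustrative figure, not an SLA

    # With one carrier, connectivity is down whenever that carrier is.
    # With two independent carriers, both must fail at the same time.
    dual_carrier_availability = 1 - (1 - single_carrier_availability) ** 2

    hours_per_year = 24 * 365
    for label, a in [("single carrier", single_carrier_availability),
                     ("dual carriers ", dual_carrier_availability)]:
        downtime = (1 - a) * hours_per_year
        print(f"{label}: {a:.6%} available, ~{downtime:.2f} hours down per year")
    ```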

    Connectivity is Unmatched in Colocation Ecosystems

    Colocation environments in major metro areas are quickly becoming the industry standard. The economics of connectivity simply work best in an environment filled with choices. Telecom carriers have a higher incentive to support a facility that includes multiple colocation providers and a wide range of existing customers. In return, those customers – businesses like yours – receive a variety of options that help streamline business in the long run.

    In choosing a colocation site to work with, know that location is a major factor in any site’s value and success. Assess the carriers with a presence in the site, and the service providers that can add additional solutions to your IT strategy. As Gartner recommends, don’t relegate your colocation plans to one-off or siloed projects; instead, make them a focus of your broader hybrid cloud and digital business plans.

    ROI Comes From Cutting Down Infrastructure, Not Building It Up

    In nearly every industry, there’s pressure to downsize IT infrastructure – which contradicts the fundamental premise of building a physical secondary site. Core functions of IT are now available as a service, reducing the need to depend on on-premises hardware. Data center real estate and upkeep are an expensive game, and colocation sites are successful because of the resources they have available: metro locations and service from multiple carriers.

    It’s important to consider your company’s specific needs and roadmap before you make a critical IT decision. However, when you consider the ROI of a colocation site versus the cost and maintenance required to build and maintain a private data center, and combine that with the power and prominence of the cloud, the choice is clear. As you choose a colocation provider, be sure to evaluate service options, carrier selection and your company’s plans for the long run. By getting involved with an active colocation ecosystem, you’ll be supporting the future of both your company and the IT space.
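
    That ROI comparison boils down to simple cash-flow arithmetic. The sketch below shows only the shape of the calculation; every figure is a hypothetical placeholder, not a market rate, and a real analysis would add discounting, staffing, and hardware refresh cycles:

    ```python
    # Hypothetical build-vs-colocate comparison over a planning horizon.
    # All figures are placeholders; substitute quotes from your own market.

    years = 10

    # Build: large upfront capital outlay plus annual operating costs.
    build_capex = 12_000_000          # land, shell, power/cooling fit-out
    build_opex_per_year = 900_000     # staff, maintenance, utilities

    # Colocate: no capex, but a recurring lease that escalates annually.
    colo_lease_year_one = 1_400_000
    colo_escalation = 0.03            # assumed 3% annual rate escalation

    build_total = build_capex + build_opex_per_year * years
    colo_total = sum(colo_lease_year_one * (1 + colo_escalation) ** y
                     for y in range(years))

    print(f"Build, {years}-year total:    ${build_total:,.0f}")
    print(f"Colocate, {years}-year total: ${colo_total:,.0f}")
    ```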

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:35p
    Whitman Slims Down HPE, Unwinding ‘IT Supermarket’

    (Bloomberg) — Meg Whitman is taking further steps to unwind the collection of computer assets she inherited when she took the helm of Hewlett-Packard in 2011, keeping up her efforts to slim down the Silicon Valley giant into a more nimble company.

    Hewlett Packard Enterprise will spin off and merge its enterprise-services division with Computer Sciences Corp. in a deal valued at $8.5 billion for HPE shareholders. The plan is the latest in Whitman’s drive to focus the company, which sells corporate computers and software, on the most promising and faster-growing businesses while hiving off billions in acquisitions and assets, including some built up by her predecessors.

    After breaking off the lackluster personal-computer and printer units into a separate company last year, the chief executive officer is still reshaping HPE to boost growth amid new competition in the cloud from rivals such as Amazon. The deal with CSC means Whitman is now exiting the market for information technology outsourcing, which helps customers manage and upgrade their systems, leaving her to concentrate on selling hardware that covers servers, storage and networking, along with software and some specialized services.

    “Back in the day, I think being an IT supermarket was a good strategy — scale was important,” Whitman said on Tuesday. “But a lot of things that built a competitive moat five, 10, 15 years ago actually don’t today.”

    CSC CEO Mike Lawrie will lead the new company. Tysons, Virginia-based CSC, whose main business is IT services, will serve HPE’s existing customers, and the deal is expected to generate $1 billion in cost savings in its first year, according to a statement Tuesday. Whitman will have a seat on the new company’s board, which will be split evenly between directors nominated by each company.

    Shares of HPE jumped 11 percent in extended trading, and CSC stock surged 23 percent.

    ‘Underperforming’ Unit

    HP built up its services business in 2008, when the company purchased Electronic Data Systems for about $13 billion. Yet enterprise-services division revenue declined to $19.8 billion in fiscal 2015, compared with about $26 billion in 2012.

    “The enterprise services segment for the company had been an area that had been underperforming,” said Shebly Seyrafi, an analyst at FBN Securities, noting that HPE didn’t get a premium price for the services business. “They just haven’t really done well, I would say, with that acquisition.”

    Separately, HPE said fiscal second-quarter profit excluding certain items was 42 cents a share, on revenue of $12.7 billion. Analysts on average had projected profit of 42 cents on sales of $12.3 billion, according to data compiled by Bloomberg. In the current period, which ends in July, profit will be 42 cents to 46 cents, the Palo Alto, California-based company said Tuesday in a statement. That compares with an average estimate of 48 cents.

    Revenue in the Enterprise Group, HPE’s largest unit, posted a 7 percent gain from a year earlier to $7 billion. Sales in the Enterprise Services division being combined with CSC fell 2 percent in the April quarter to $4.7 billion.

    Cusp of Consolidation

    Improvements in the services business make this a good time for the spinoff and merger, Whitman said on a conference call. Perhaps more importantly, the industry is on the cusp of consolidation, and it’s better to be at the forefront than playing catch-up, she said.

    The new company, whose name will be announced later, could also buoy sales of HPE products, because the company didn’t previously have broad access to CSC’s business for its hardware gear, Whitman said.

    There’s not a lot of client overlap, Lawrie said, with shared customers accounting for less than 15 percent of top accounts. He said the new company is well-positioned for growth as a combined entity.

    “Together, as an agile, technology-independent services pure play, we will be better positioned to innovate and compete and win against both emerging and established players,” he said on the call. “We will have substantial scale to serve customers more efficiently and effectively worldwide.”

    The transaction will consist of a tax-free spinoff of the HPE unit and merger with CSC. The $8.5 billion value of the deal for HPE shareholders includes $4.5 billion in stock in the newly combined company, a $1.5 billion cash dividend, and the $2.5 billion transfer of debt and other liabilities.

    Past Acquisitions

    When Whitman took over in 2011, HP was reeling from years of acquisitions that failed to give it an edge in a quickly evolving technology industry — and she has since set about shedding some of the sluggish businesses built up by past CEOs. Carly Fiorina, who was ousted in 2005, led the company’s buyout of Compaq Computer Corp. in 2002 — a deal that made it more dependent on the PC market. Mark Hurd, who was CEO from 2005 to 2010, led the purchase of EDS to broaden the company’s services business.

    Whitman said her company’s November split from PC and printer seller HP Inc. makes the new effort to separate the enterprise services business less daunting. She said many of the people who worked on the earlier split will work on this deal as well.

    “This time we have this thing down to a science,” Whitman said.

    5:36p
    FedRAMP’s Lack of Transparency Irks Government IT Decision Makers
    By The WHIR

    Four out of five federal cloud decision makers are frustrated with FedRAMP, according to a new report from government IT public-private partnership MeriTalk. Federal IT professionals said their chief complaint is a lack of transparency into the process.

    MeriTalk surveyed 150 federal IT decision makers in April for the FedRAMP Fault Lines report and found that 65 percent of respondents at defense agencies, and 55 percent overall, do not believe that FedRAMP has increased security. Perhaps even worse, 41 percent are unfamiliar with the General Services Administration’s (GSA) plans to fix FedRAMP. The GSA announced FedRAMP Accelerated in March.

    “Despite efforts to improve, FedRAMP remains cracked at the foundation,” said MeriTalk founder Steve O’Keeffe. “We need a FedRAMP fix – the PMO must improve guidance, simplify the process, and increase transparency.”

    See also: IBM, HPE: Government Cloud Security Process Broken

    The Authority to Operate (ATO) system, in which an agency completes a security assessment of a system and authorizes its use, is supposed to allow services to be authorized once and used often. However, MeriTalk found 41 percent of Feds have not used another agency’s ATO, and 35 percent of those with an ATO have not allowed others to use it.

    As a result, 17 percent said FedRAMP compliance is not a factor in their cloud decisions, and 59 percent would consider a non-FedRAMP cloud.

    Top suggestions for improvement are accelerating the Cloud Service Provider certification process to increase the number of secure cloud options (49 percent) and creating an ATO clearinghouse that forces sharing (47 percent). Additionally, 37 percent at civilian agencies, and 27 percent overall, suggested a leadership change at the GSA’s Program Management Office.

    The report recommends improved guidance and expanded training to reduce confusion, adopting the ATO clearinghouse idea to promote sharing and reduce duplication of efforts, and increased transparency.

    Industry advocacy group FedRAMP Fast Forward called for improvement to the program in January.

    This first ran at http://www.thewhir.com/web-hosting-news/fedramp-frustration-lack-of-transparency-irks-cloud-decision-makers

    8:49p
    Salesforce to Use AWS and Own Data Centers in Expansion Push

    Salesforce has officially named Amazon Web Services its preferred public cloud infrastructure provider, the two companies announced Wednesday. The San Francisco-based cloud software provider is planning an international expansion, and the deal with AWS is part of the infrastructure strategy for that expansion.

    The announcement follows a report by the Wall Street Journal earlier this month that Salesforce was using AWS for infrastructure that underpins its new Internet of Things service, but until now there had been no official acknowledgment from either company.

    Salesforce’s other services, including Marketing Cloud Social Studio, Heroku, and SalesforceIQ, also run on AWS, according to the announcement, and it is now planning to extend its use of Amazon’s cloud to its core services, including Sales Cloud, Service Cloud, App Cloud, Community Cloud, and Analytics Cloud, among others.

    Salesforce has traditionally used colocation data centers to host the infrastructure that supports its flagship cloud CRM services. The company said it would now use AWS to bring new infrastructure online in some international markets more quickly and efficiently, but it will use the public cloud in combination with its own data centers.

    Last year, for example, Salesforce signed a lease for 2MW of data center capacity in the Chicago metro with DuPont Fabros Technology, according to a report by the commercial real estate firm North American Data Centers.

    The company said it would announce locations and timing for the expansion later this year.

    The cloud software company has been rethinking its infrastructure strategy since at least last year, when its VP of hardware engineering, TJ Kniveton, said it was going to switch to a web-scale infrastructure strategy, as used by the likes of Google, Facebook, Microsoft, and Amazon. Using AWS is one way to take advantage of web-scale infrastructure without actually having to build it.

