Data Center Knowledge | News and analysis for the data center industry
 

Friday, July 31st, 2015

    Time Event
    12:00p
    After Telx Deal, Digital Realty Has Delicate Balance to Maintain

    Its acquisition of Telx has put Digital Realty in a rather precarious position of competing more directly with one of its biggest customers: Equinix.

    More than anything, San Francisco-based Digital is a provider of wholesale data center space, and retail colocation companies like Equinix and Telx lease space from companies like Digital as part of their real estate strategy. Digital has had a retail-colo play of its own for some time now, but the $1.9 billion Telx deal takes that play to a whole new level. Telx is one of the biggest colo companies in the US, managing more than 1.3 million square feet of data center space across 20 facilities, 11 of them within Digital’s buildings.

    Both Equinix, based in Redwood City, California, and Telx have pursued a strategy of stimulating growth of ecosystems within their data centers, where customers and service providers interconnect their networks. The bigger the ecosystem, the more attractive the facility is for others, who can interconnect with a multitude of customers, network providers, or partners in a single building.

    A major part of this ecosystem play is connecting enterprises to cloud service providers, including the biggest cloud companies, such as Amazon, Microsoft, Google, and IBM SoftLayer. Equinix executives have said repeatedly that connecting customers directly to these service providers was the fastest-growing service in its portfolio.

    These services allow enterprises to take advantage of public cloud while bypassing the public internet. The main advantages are in performance and security. While Equinix is far ahead of Telx in building out this cloud-on-ramp business, Telx has pursued it quite aggressively as well.

    Scott Peterson, chief investment officer at Digital who architected the acquisition, said that while there will be some overlap between Digital’s newly expanded colo business and Equinix, there are plenty of customers Equinix is not really after that Digital can target.

    “They’re still a very important customer, and obviously colocation companies [in general] are very important customers of ours,” Peterson said. “They have their IBX, and we’re not going to try to create something that’s going to be a head-to-head competitor with the IBX.”

    IBX stands for International Business Exchange, which is what Equinix calls its data centers that act as interconnection hubs. “They established that and got ahead of everybody,” Peterson said.

    “We’ve renewed all of our leases with Equinix last year on 15-year terms with multiple five-year renewal options,” John Stewart, senior VP of investor relations at Digital, said.

    Equinix, Peterson said, has emphasized fostering an ecosystem of interconnection between content providers and companies that deliver content to consumers. “If you want to connect to those ecosystems, you need to be in an Equinix facility.”

    However, many companies outside of that content-provider-and-eyeball-network world rely on cross-connect services in highly connected buildings, and those are the customers Peterson sees as an opportunity to expand the Telx-charged colo part of Digital’s business without infringing too much on Equinix territory.

    Connectivity to cloud providers is a different matter. “Digital has had a big emphasis on that – and Telx has as well – and this is going to get us further down the road in terms of being a comprehensive provider,” Peterson said. “We’re all going to compete for that. Nobody’s going to get the franchise on it, if you will.”

    Digital reported $420 million in revenue for the second quarter Thursday, up 5 percent year over year. Its net income for the quarter was $138 million.

    Equinix also reported its second-quarter results this week, saying it made about $666 million in revenue, up 3 percent year over year, and about $60 million in net income.

    5:32p
    Friday Funny: Wind Power for Data Center

    Kip and Gary invested in some wind power…

    Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon, and we challenge our readers to submit the funniest, most clever caption they think will be a fit. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.

    Congratulations to Ben, whose caption for the “Bright Aisle” edition of Kip and Gary won the last contest with: “Do you see the light?? I have, and it’s right here!”

    Several submissions came in for last week’s “White Boarding” edition – now all we need is a winner. Help us out by submitting your vote below!


    For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!


    5:38p
    Weekly DCIM News Roundup: July 31

    RF Code unveils a new data center asset management framework and Device42 releases 7.2.0 of its DCIM software with localized language support.

    1. RF Code Unveils data center asset management framework. RF Code unveiled CenterScope, a framework made up of best practices and streamlined methodologies built around the company’s real-time environmental monitoring and asset management software.
    2. Device42 adds localized language in 7.2.0 release. Device42 announced version 7.2.0 of its DCIM software, now with localized language support and many enhancements around IP address management.

    7:42p
    Catching Up with OnApp CEO Ditlev Bredahl at HostingCon Global


    This article originally appeared at The WHIR

    If OnApp CEO Ditlev Bredahl were to start a hosting company today, he would not want to own his infrastructure. Instead, he would use a marketplace like the OnApp Federation, which came out of beta last week.

    “If I could redo UK2Group today, I would have my entire infrastructure with the marketplace,” he said in an interview with the WHIR at HostingCon Global 2015.

    OnApp Federation, which launched in beta several years ago, added 49 new cloud providers this week, contributing more than 50 compute locations in 30 countries and extending its coverage in European locations including London, Cologne, and Oslo, as well as in the US and Canada, South America, Asia-Pacific, and EMEA. The federation brings together cloud providers that can buy and sell cloud infrastructure through a wholesale marketplace.

    Of course, Bredahl launched OnApp in 2010 and can be expected to stand behind the federated model, but he said the message is catching on in the broader industry, as web hosts need a way to give customers the scale and geographic reach possible with AWS.

    “This is my sixth startup and this is the first time I really feel like I’m changing something,” Bredahl said, describing OnApp’s federation as the Uber or AirBnb of infrastructure.

    Taking on AWS’ Amazonian Scale

    With nine data centers around the world and aggressive growth plans, Amazon is entering new markets faster than any other cloud provider, and the earnings for its cloud division are certainly impressive, if a little unsettling for other cloud service providers.

    “Amazon has a better feature set now, but more than that they’ve got amazing access to capital and scale,” Bredahl said. “[As a host] you really can’t afford to have the same scale or level of geographic reach.”

    It seems that bigger companies may understand the buy vs. build proposition better than smaller companies, as Bredahl said OnApp has fewer clients this year but more revenue.

    “I think the industry is consolidating but I also think it’s pluralizing. I think you will end up in a situation where you’re selling infrastructure or you’re building infrastructure,” he said. “If you’re building infrastructure you might not be very good at selling it, but if you’re selling infrastructure you might not be very good at building it.”

    OnApp Acceleration Makes CDN Configuration Easier

    Of the announcements OnApp made this week at HostingCon, Bredahl is most excited about its OnApp Accelerator, a CDN that can be deployed in one click.

    “I really feel that the Internet is broken,” Bredahl said. “If the founders of the Internet back in the day looked at how the Internet is used today, it wouldn’t have been designed the way it was. If you think about it, it’s the people that distribute content that decide how it should be consumed. It should be the consumer that decides where the content is being served because they are the one that is actually buying it.”

    By making a CDN easier to deploy, Bredahl hopes that more websites will actually use one. “Only six percent of all sites are sitting on a CDN. I think that seems really low,” he said.

    “I think the problem with CDN is people don’t quite understand what it actually does and a content delivery network is not really common knowledge for people running small businesses,” he said.

    By using the word “accelerator” rather than “CDN,” Bredahl hopes less technical people will understand the benefit a CDN can bring to their websites.

    This first ran at http://www.thewhir.com/web-hosting-news/catching-up-with-onapp-ceo-ditlev-bredahl-at-hostingcon-global

    9:57p
    Report: Utah Cops Get $1M a Year to Park at NSA Data Center

    The massive, controversial NSA data center in Bluffdale, Utah, has a police presence that’s costing the agency $1 million a year. Under contract with the federal government, State Highway Patrol troopers provide a “perimeter presence” at the facility, which became a center of attention following Edward Snowden’s disclosures about the agency’s mass surveillance practices, a local Fox News affiliate reported.

    In a statement, the NSA (National Security Agency) said the move was to ensure the security of its workforce and the larger community. Public outrage following the Snowden disclosures included protests at the site, putting the secretive facility in the national spotlight.

    The contract is largely for posting police cruisers outside. The NSA said it was not a security contract, but a means to show a local presence of law enforcement. The data center has its own dedicated security measures and personnel.

    The wages, which equate to around $50 an hour, are contracted at no expense to the State of Utah, according to state officials.

    “We contract this; it’s no expense to the state of Utah,” Mike Rapich of Utah Department of Public Safety said to the news organization. “The wages are entirely covered in the contract rate. The mileage for the patrol car is covered as part of the contract, and so troopers do it in addition to their regular duty assignment as overtime, and it works out really well.”

    The NSA said the facilities are used to protect national security networks and provide US authorities with intelligence and warnings about cyber threats. But NSA data centers have become a flash point for controversy. The controversy was around [REDACTED] and the [REDACTED] nature of [REDACTED].

    The NSA data center and any news around it are subject to scrutiny given the history, so the NSA is making sure that people know the contract is aboveboard.

    “NSA routinely partners with federal, state, and local emergency responders at domestic locations,” the NSA said in its statement to Fox, adding that for a variety of operational security reasons, “NSA does not disclose the full range of these relationships.”

    The data center is located on a Utah National Guard camp. Its first 30MW phase was constructed over four years at a cost of more than $1.5 billion, against an anticipated cost of $1 billion.

    It is the largest of four (known) sites, sitting at over 1 million square feet of total space consisting of 100,000 square feet of data center and the rest for technical support and administrative space. The other data centers are in Colorado, Georgia, and Maryland.

    The data center was plagued with electrical problems in 2013, resulting in more than 50,000 man-hours of investigation and troubleshooting. There were 10 arc flash “meltdowns” in the data center’s first year or so. Arc flash is dangerous for both workers and equipment.

    10:17p
    Obama Unveils New Federal Supercomputer Research Initiative

    President Barack Obama this week issued an executive order establishing the National Strategic Computing Initiative to ensure the country stays ahead in high-performance computing (HPC). Supercomputers are used to simulate complex natural and technological systems, such as galaxies, weather and climate, molecular interactions, electric power grids, and aircraft in flight, the president’s office said in a statement.

    The new supercomputer research program is designed to advance core technologies to solve difficult computational problems and to foster increased use of the new capabilities in the public and private sectors. The goal over the next decade is to build supercomputers capable of one exaflop, called exascale computers. An exascale computer can perform 10^18 operations per second, a thousandfold increase over a petascale computer. The first petascale system came into operation in 2008.

    According to the Top500 list of the fastest supercomputers in the world, the most powerful system today is China’s Milky Way 2, at 33.86 petaflops (quadrillions of calculations per second), almost double the performance of the fastest supercomputer in the US, the Department of Energy’s Titan.
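    The scale arithmetic in the two paragraphs above can be made concrete with a few lines of Python. This is purely illustrative, using only the definitions and figures from the article (petaflop = 10^15 and exaflop = 10^18 operations per second, and Milky Way 2’s reported 33.86 petaflops), not benchmark data:

    ```python
    # Performance tiers as orders of magnitude (operations per second).
    PETAFLOP = 10**15  # first petascale system came online in 2008
    EXAFLOP = 10**18   # the initiative's ten-year goal

    # An exascale machine is a thousandfold faster than a petascale one.
    print(EXAFLOP // PETAFLOP)  # → 1000

    # Milky Way 2's reported 33.86 petaflops, in raw operations per second.
    print(f"{33.86 * PETAFLOP:.2e}")  # → 3.39e+16
    ```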

    However, it’s not just about the flops. HPC “must now assume a broader meaning, encompassing not only flops, but also the ability, for example, to efficiently manipulate vast and rapidly increasing quantities of both numerical and non-numerical data,” according to a post from the White House.

    An example of what the government’s new supercomputer research program is trying to accomplish is Computational Fluid Dynamics (CFD) modeling. The aircraft industry has significantly reduced the need for wind tunnel and flight testing, but current technology can only handle simplified models of the airflow around a wing, and only under limited flight conditions. A study commissioned by NASA found that exaflop-level performance would make it possible to incorporate full CFD modeling of turbulence and dynamic flight conditions into simulations.

    “By strategically investing now, we can prepare for increasing computing demands and emerging technological challenges, building the foundation for sustained US leadership for decades to come, while also expanding the role of high-performance computing to address the pressing challenges faced across many sectors,” the president’s representatives wrote.

    Obama budgeted $126 million for exascale computing in the 2012 budget, over $90 million of which went to the Department of Energy’s Office of Science. The DoE has a major ongoing supercomputing research effort focused on reaching exascale computing.

