Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
 

Monday, February 3rd, 2014

    Time Event
    1:42p
    Cervalis Opens New Connecticut Data Center

    The new Cervalis data center in Connecticut. (Photo: Cervalis)

    IT infrastructure solutions provider Cervalis announced it has opened a 168,000-square-foot data center in Fairfield County, Connecticut, offering colocation, managed hosting, business continuity and disaster recovery, and cloud computing services. The project was announced over a year ago; the data center will be one of the largest in the state.

    “I am very pleased Cervalis has established such a unique, outstanding facility in Connecticut,” said Michael Boccardi, President and CEO of Cervalis.  “This unparalleled operations center can serve as a primary production data center, a backup data center or a hybrid solution for firms that rely on active-active computing environments. Many New York City firms have joined our New England-based clients at our other Connecticut facility, and I expect that trend to continue.”

    The new data center features 75,000 square feet of raised-floor data center space (above the 500-year flood zone) and 30,000 square feet of dedicated disaster recovery seating. The facility has 16 megawatts of redundant power, fed by two separate utility companies, and 3,500 tons of cooling capacity, and is just 40 miles from New York City. Catering to financial services companies, Cervalis facilities are SSAE 16 Type 2 and PCI compliant, as well as U.S.-EU Safe Harbor certified.

    “In recent years, many Connecticut businesses have battled not only natural disasters – Snowtober, Hurricane Sandy – but also the significant business interruptions that followed,” Boccardi said. “Cervalis is committed to protecting businesses against these costly interruptions, and we designed our Norwalk facility to allow businesses continuity through access to dedicated work area recovery. During these challenging events, Cervalis maintained 100% uptime and supported thousands of clients in its work area recovery seats and data centers. We expect these unusual weather patterns to continue, and the demand for these offerings remains high.”

    2:03p
    NTT Completes $350 Million RagingWire Investment

    A look inside one of the power rooms at the RagingWire data center in Ashburn, Virginia. NTT Communications has completed a $350 million investment in the company. (Photo: Rich Miller)

    NTT Communications has completed its $350 million investment in RagingWire Data Centers, acquiring an 80 percent ownership stake, the companies said today. RagingWire founders and management will continue to operate the company as a platform under the RagingWire brand and maintain a minority interest in the company.

    The investment, one of the largest data center deals of 2013, dramatically expands NTT’s presence in the U.S. market, more than doubling the company’s footprint with an additional 650,000 square feet of space, not counting numerous RagingWire expansions underway. The deal enables both companies to respond to high demand for data center colocation services worldwide.

    “We are thrilled to welcome RagingWire into the family of NTT Communications group companies,” said Akira Arima, CEO of NTT Com.  “Our combined customer base which includes the top enterprises, government organizations, and Internet companies around the world will benefit from our expanded and enhanced data center and cloud solutions.  RagingWire’s patented data center power delivery design and technology along with their world-class customer service model will be important additions to our global solutions portfolio.”

    NTT has data centers in northern Virginia and Silicon Valley, and globally operates data centers in more than 150 locations, with over 2.5 million square feet of server room floor. RagingWire has campuses in Sacramento, California, and Ashburn, Virginia, both of which are undergoing expansions. RagingWire has begun construction of a new 150,000 square foot data center in Sacramento and will soon break ground on a 78-acre parcel in Ashburn’s “Data Center Alley,” where it plans to build 1,500,000 square feet of space. RagingWire recently detailed the next phase of growth, post-acquisition.

    “RagingWire’s management team and employees are honored to be recognized by NTT with this investment in our business,” said George Macricostas, founder and CEO of RagingWire.  “As part of the NTT family of companies, we plan to accelerate our build plan in North America, extend our data center platform globally, and add even more strategic value to our customers.”

    3:29p
    State of the Data Center Puts IT in the Spotlight

    Jack Pouchet is vice president of Business Development and Director of Energy Initiatives, Emerson Network Power.

    JACK POUCHET
    Emerson Network Power

    One in every nine people on earth is an active Facebook user, and mankind created 1.9 trillion GB of data in 2013. The growth of social sites and the proliferation of information are two trends that Emerson Network Power captures in its “State of the Data Center 2013” infographic. These trends have a huge impact on the communications network, IT department and, most importantly, data centers.

    In 2011, Emerson Network Power introduced our “State of the Data Center” infographic, a scan of major trends that affect data centers. We also researched the number of outages and the cost of downtime. This infographic provided a baseline for comparing future trends.

    We recently completed “State of the Data Center 2013,” which we developed as an infographic that illustrates the facts of the year. To sum up the results in a few words, the global dependence on everything digital is pushing IT to the forefront of the organization. Data centers increasingly are relied upon in areas that were traditionally offline pursuits, and consumers have high expectations of speed and performance. I’ll share trends that support these findings, and I’ll also discuss a significant consequence of IT being in the spotlight.

    Click for full size at: http://www.emersonnetworkpower.com/en-US/About/NewsRoom/Pages/State-of-the-Data-Center-2013-Infographic.aspx

    Our Digital Dependence

    The facts confirm the relentless expansion and acceleration of our society’s digital dependence. For example, global e-commerce spending topped $1.25 trillion in 2013. That’s larger than the gross domestic product of Mexico. Even books are digital. It’s expected that 172 million e-readers were sold in 2013 – nearly six times the number of e-readers sold in 2011.

    Online social sharing also grew astronomically between 2011 and 2013. In 2013, 665 million people used Facebook, which, as stated previously, is equivalent to one in every nine people on earth. In comparison, in 2011, there were 500 million users, or one in every 13 people on earth.

    While sites such as Facebook and Twitter rely on the written (or typed, more accurately) word, photography and other imagery are seemingly becoming the language of the internet. On Instagram – “a fast, beautiful and fun way to share your life with friends and family,” according to the site – more than 40 million photos are uploaded every day – nearly 28,000 each minute.

    Video is also a mainstream medium, with YouTube among the top five most popular social sites. Video files are huge, which can make them slow to load – and slow loading is not acceptable. One-fourth of viewers will abandon an online streaming video if it buffers for five seconds; at 10 seconds, half the viewers are gone. In all respects, online performance must be flawless.

    The Impact: Data Growth

    So, how much data does all this digital dependence generate? In 2011, it was 1.2 trillion GB of data – enough for every person in the world to have 10 16-GB iPods. Two years later, it was estimated mankind would create 1.9 trillion GB of data. That’s equivalent to 118 billion of the now more popular 16-GB iPhones, enough for every person on earth to have 16 iPhones – six more devices per person than in 2011. Put another way, every hour enough information is created to fill 46.1 million DVDs, which, if stacked, would scale Mount Everest 596 times. (Mount Everest stands 29,035 feet above sea level.)
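
    The per-person device arithmetic above can be checked against the article’s own figures. A rough sketch (the world-population value is our assumption; we use the 1.9 trillion GB figure for 2013 cited earlier in the piece, which reproduces the 118 billion iPhone count):

```python
# Sanity check of the "State of the Data Center" device arithmetic.
# Assumption: a world population of roughly 7.1 billion in 2013.
WORLD_POP = 7.1e9
DEVICE_GB = 16              # capacity of the 16-GB iPod / iPhone used as a yardstick

gb_2011 = 1.2e12            # data created in 2011 (from the article)
gb_2013 = 1.9e12            # estimated data created in 2013 (from the article)

ipods_each_2011 = gb_2011 / DEVICE_GB / WORLD_POP      # roughly 10 iPods per person
iphones_total_2013 = gb_2013 / DEVICE_GB               # roughly 118 billion iPhones
iphones_each_2013 = iphones_total_2013 / WORLD_POP     # roughly 16 iPhones per person

print(f"{ipods_each_2011:.1f} iPods per person in 2011")
print(f"{iphones_total_2013 / 1e9:.1f} billion iPhones' worth of data in 2013")
print(f"{iphones_each_2013:.1f} iPhones per person in 2013")
```

    The results land close to the infographic’s round numbers, which suggests the published figures were computed against a population of about seven billion.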

    Enormous numbers of people are creating huge amounts of internet traffic and adding gigantic amounts of information in a variety of formats – some extremely storage-heavy – every second of every day. It’s no wonder IT has become a star. But given consumers’ expectation that the internet will be always-on, fast and simple, it’s a star that can fall hard with something as sudden as a switchgear failure on the feed to a backup generator. Remember last year’s Super Bowl?

    The Price of Fame

    The infographic illustrates what Emerson Network Power hears every day from our customers and partners, that there is a growing reliance on the data center as an indispensable business asset. And, as data centers become more important, businesses are increasingly aware of the risks and costs of downtime.

    While complete outages were down 20 percent, the costs are shooting up – 33 percent since the release of Emerson’s “State of the Data Center 2011” infographic – and businesses are taking steps to avoid those costs. They are investing in their IT systems and infrastructures because the cost of downtime is becoming more and more prohibitive. Businesses average one complete outage every year (compared to 2.5 in 2011) at an average cost of $901,560. That’s a lot of money.

    The Show Will Go On

    The appetite for everything digital shows no signs of being satiated; the data center will continue to be front and center. So it is indeed a wise strategy to invest in systems and infrastructure that whittle away at the chance of downtime. It will be interesting to see where these trends take us by 2015.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:00p
    Open Compute Gets Down to Business, With New License and Certifications

    Frank Frankovsky, chairman and president of the Open Compute Project Foundation, sees the solution provider community as a key enabler of open hardware innovation. The project has announced new licensing and certification programs for participants. (Photo by Colleen Miller.)

    SAN JOSE, Calif. - After a heady week of welcoming large crowds and new members, it’s clear that The Open Compute Project (OCP) has created a vibrant open hardware movement. More than 3,400 attendees participated in last week’s Open Compute Summit, and the project now counts more than 150 member companies. OCP hardware is running in data centers for Facebook, Microsoft, Goldman Sachs, Fidelity Investments, Rackspace, Bloomberg, Riot Games and Orange. New user communities are springing up in New Zealand, Australia, the Philippines, Korea and Europe.

    As it continues to grow, The Open Compute Project is moving closer to a tipping point that creates lasting change in how IT equipment is built and sold. This raises a new set of questions: Can the revolution give birth to successful businesses? Can the passion of Open Compute translate into profits?

    As it nears its third birthday, the project is working to create business opportunities for solution providers and enterprises that are critical to the success of Open Compute as a business ecosystem. Licensing and certification may not seem as exciting as building cool hardware. But they matter when it comes to selling cool hardware.

    Adding a GPL-Style License

    That’s why the project has added a second licensing option. Current contributions to Open Compute are governed by a relatively permissive license modeled on the Apache License, designed to encourage wide sharing and innovation. The project is now adding a second license, modeled on the General Public License (GPL), that will require anyone who modifies an original design and then sells that design to contribute the modified version back to the foundation.

    Frank Frankovsky, the Chairman and President of Open Compute, believes the new licensing option will prompt even broader participation in OCP.

    “When we looked at what happens with a GPL-style license, we saw that the requirement to give derivative works back to the project would further accelerate innovation,” said  Frankovsky. “People would say ‘I would give more, but I don’t want to give my competitor a free ride.’ They have a choice now.”

    Last week’s summit highlighted the role of solution providers building hardware based on Open Compute designs. There are now seven official OCP Solution Providers, including Hyve, Avnet, Penguin Computing, AMAX, Quanta QCT, Rackline and Japan’s CTC.

    These companies “are a very important part of the ecosystem,” Frankovsky said. “The solution providers are becoming key contributors.”

    Two Certification Tiers

    OCP has also created a new certification process for compliance and interoperability, with two levels:  OCP Ready and OCP Certified. The project has established two new labs in Taiwan and San Antonio to review and certify hardware products for compliance. WiWynn and Quanta QCT were the first two providers to gain certification.

    “This is a big achievement for us,” said Mike Yang, VP and General Manager of Quanta QCT. “We’re getting a lot more inquiries from customers asking for OCP racks. Things are very promising.”

    Open Compute has helped accelerate a trend in which large hyperscale computing companies work directly with server vendors – either original equipment manufacturers (OEMs) like HP and Dell or original design manufacturers (ODMs) such as Quanta or WiWynn – to create servers that are optimized for specific customer workloads.

    “The critical role that the solution providers play is that they’re building businesses based on how technology is going to be delivered to customers,” he said. “They approach this in a consultative selling role. In my opinion, that’s the way businesses should work. There are very healthy businesses that can be built atop open source.”

    A Role for the OEMs?

    Notable in their absence from the solution provider list are Dell and HP, the leading incumbents in the OEM sector. Both companies were at last week’s Open Compute Summit – with Dell announcing a reseller agreement with Cumulus Networks – but most of the customer wins seemed to be originating with the OCP solution providers.

    “The OEMs are having a really tough time in figuring out how to work with Open Compute,” said Wesley Jess, Vice President of Supply Chain Operations at Rackspace Hosting. Rackspace has long-standing relationships with both OEMs, and continues to work with them on solutions for managed hosting customers. But the company is shifting its cloud solutions to hardware based on OCP designs, working primarily with Quanta.

    Another growing provider adopting Open Compute gear is IO, which announced a new OpenStack cloud product running on OCP hardware developed with AMAX. During his keynote at the Open Compute Summit, IO CEO George Slessman launched a virtual machine on the IO.Cloud from his iPhone.

    “You know how many licenses we paid on this?” Slessman asked. “None. The barriers are coming down.”

    The scope of Open Compute is growing, as the project pushes into networking hardware and contemplates future initiatives for storage.

    “It’s onward and upward,” said Frankovsky. “If you look at the scope of what Open Compute is involved in now, the scope of it is huge. The thing that’s really impressive is that the community is so passionately engaged. You have these big problems that no one company can solve. You have to create a community to solve that. The future of the industry is wide open for us to invent.”

    6:07p
    Highlights from Open Compute Summit V: Servers, Storage and Networks
    Attendees review gear at the HP booth at the fifth Open Compute Summit in San Jose, CA, last week. The event drew about 3,500 participants, who were eager to see what vendors have produced with OCP designs. (Photo by Colleen Miller.)


    The Open Compute Project attendees have left the exhibit hall and returned to their homes and offices, but the excitement of Open Compute Summit V lingers. With a growing number of companies joining the project and enterprise interest in the efficient hardware generated by the Open Compute movement, the event drew strong interest from the tech and data center community. This year, the project gained tremendous momentum, with tech giants such as Microsoft joining and with moments such as IO CEO George Slessman demonstrating a live cloud (running on OpenStack software and Open Compute hardware) being deployed in a data center via an iPhone.

    Visit our photo essay for highlights from the speakers and in the exhibit hall — Open Compute Summit V Highlights.

    8:00p
    DuPont Fabros to Enter Colocation Market
    The ACC5 data center, one of the northern Virginia properties operated by DuPont Fabros Technology.


    DuPont Fabros Technology is entering the colocation market. The company, which has operated as a “wholesale” provider selling large suites of finished data center space, will begin selling space by the cabinet in the next few months. DuPont Fabros (DFT) made the announcement in its earnings call last Thursday.

    DuPont Fabros will start small, dedicating about 800 kilowatts of capacity in its data centers in northern Virginia and New Jersey to colocation space. This will represent a beachhead for what could become a larger colocation business in the future, said President and CEO Hossein Fateh.

    The move reflects the growing competition in the market for outsourced data center space, which has led both wholesale players and “retail” colocation providers to expand their service offerings. In colocation, a customer leases a smaller chunk of space within a data center, usually in a caged-off area or within a cabinet or rack. In the wholesale data center model, a tenant leases a dedicated, fully-built data center space.

    Open-IX Enables New Strategy

    DFT’s entry into colo is enabled by the emergence of the Open-IX movement, which has led two European Internet exchange operators to launch local peering hubs in the company’s facilities. The London Internet Exchange (LINX) has taken space in DuPont Fabros’ ACC5 data center in Ashburn, while Amsterdam’s AMS-IX will open an Open-IX exchange in the NJ1 data center in Piscataway, N.J.

    The presence of these Internet exchanges will allow colocation customers to easily access a wide range of networks, instead of having to arrange their own connectivity.

    “We believe this is a good opportunity to step into a cabinet-based offer, where customers can lease one or more cabinet, connect to fiber provider of their choice and to one of the Open-IX switches and subscribe to our Level 1 IT support, should they need assistance,” Fateh said on the earnings call. “This initial launch will allow us time to build out the infrastructure needed to support a robust retail product, while taking a prudent approach into this offer. We expect to have our cabinet-based offering on the market in the second quarter of this year.”

    Starting Small, But Scenarios for Growth

    Services offered to colo customers will include “rack and stack” installations, server reboots, shipping and handling of equipment, network cross connects and cabling.

    “It’s going to be a very small operation,” said Fateh. “In Virginia, we’re talking less than 40 racks. We’re hiring a couple of people to do this, but at the moment, we don’t envision it to be a large part of our business at all.”

    But that may not always be the case. In 2017 through 2019, DuPont Fabros will have up to 13.8 megawatts of leases coming up for renewal in its ACC4 data center in Ashburn.

    “We want to be really ready for a retail product so that when and if some of those leases come up in ACC4, we’ll be able to re-lease some of that” as colocation space, Fateh said. Shifting to a colo model would produce a better return than leasing that space to “super wholesale” customers, who have large requirements but can often negotiate for attractive lease rates.

    9:00p
    Google Spent $7.3 Billion on its Data Centers in 2013

    Google invested $7.3 billion in its data center infrastructure in 2013. Here’s a look inside the cold aisle of a Google data center. (Photo: Google)

    Google continues to pump big bucks into its data center operations, investing a massive $7.35 billion in capital expenditures in its Internet infrastructure during 2013. The spending is driven by a massive expansion of Google’s global data center network, which represents perhaps the largest construction effort in the history of the data center industry.

    The company reported capital expenditures of $2.26 billion in the fourth quarter of the year, slightly below the record $2.29 billion it spent in the third quarter. Google spent $1.2 billion and $1.6 billion in the first and second quarters, respectively, bringing the annual total to $7.35 billion. That more than doubles the $3.27 billion in CapEx the company reported in 2012.
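
    As a quick check, the quarterly figures quoted above do reconcile with the annual total and the year-over-year comparison:

```python
# Google's reported 2013 capital expenditures by quarter, in billions of USD
capex_2013 = {"Q1": 1.20, "Q2": 1.60, "Q3": 2.29, "Q4": 2.26}
capex_2012 = 3.27           # full-year 2012 CapEx, for comparison

total_2013 = sum(capex_2013.values())
growth = total_2013 / capex_2012

print(f"2013 total: ${total_2013:.2f} billion")   # sums to $7.35 billion
print(f"Versus 2012: {growth:.2f}x")              # more than double
```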

    Google has been steadily ramping up its spending in each of the last seven quarters, crossing the $1 billion per quarter mark late last year. This unprecedented level of spending reflects the breadth of Google’s server farm construction program. Here’s a look at the data center expansions announced in 2013:

    • A $600 million expansion of its complex in The Dalles, Oregon.
    • An additional $400 million to expand its campus in Council Bluffs, Iowa, where Google has now committed $1.5 billion to its infrastructure.
    • Fresh investment of $600 million in an additional phase for its data center campus in Lenoir, North Carolina.
    • Another $600 million of investment to support new construction at Google’s South Carolina data center campus in Berkeley County.
    • An additional $390 million of new construction at the Google data center in Belgium.
    • Google also purchased a 1 million square foot former Gatorade factory in Pryor, Oklahoma that could support future expansion of the company’s data center campus in town.

    Each of the 2013 expansion projects has represented an additional phase at an existing campus where Google has already built at least one data center. Building multiple facilities at a single site can be cheaper than building in a new site, as basic infrastructure for power and connectivity is typically installed during the buildout of the first facility, leaving less work and expense in subsequent phases.

    A capital expenditure is an investment in a long-term asset, typically physical assets such as buildings or machinery. Google says the majority of its capital investments are for IT infrastructure, including data centers, servers, and networking equipment. In the past, the company’s CapEx spending has closely tracked its data center construction projects, each of which requires between $200 million and $600 million in investment.

