Data Center Knowledge | News and analysis for the data center industry

Monday, June 23rd, 2014

    12:00p
    NTT’s Osaka Data Center Build Illustrates Impact of 2011 Earthquake on Industry

    It has been more than three years since the Great East Japan Earthquake devastated the island country’s northeast. Aftershocks of the disaster are still felt by the country’s data center industry, continuing to shape its dynamics.

    NTT Communications’ expansion of data center capacity in Osaka — far south of the area affected by the tremor — illustrates, in both location and design, how the industry is being shaped by the event. Osaka is a fast-growing data center market, second only to Tokyo.

    The provider has recently kicked off construction of its Osaka 5 data center, which will offer approximately 3,700 square meters (about 40,000 square feet) of usable space (room for about 1,600 racks).

    Quake boosts data center outsourcing in Japan

    NTT Com is responding to growing concern in Japan about unexpectedly powerful natural disasters and rising energy costs, which have driven rapid growth in ICT outsourcing, disaster-recovery countermeasures and use of cloud computing. A growing number of financial and manufacturing companies are relocating headquarters or using facilities in Osaka to back up data centers in Tokyo.

    “After the Great East Japan earthquake (March 11, 2011), many customers in Japan initiated a drastic review of disaster countermeasures and ICT outsourcing,” said Sora Tanaka, who leads product sales and marketing for Nexcenter, an NTT Com data center services brand. “As a first phase, customers started to use colocation services and moved their systems outside of their own buildings. Now, the second phase is starting. Customers are looking for a data center for their BCP site outside the Tokyo metropolitan area.”

    Tanaka says the company’s existing space in Osaka will sell out within a few years, and that now is the perfect time to build. According to a report from Fuji Chimera, Tokyo currently accounts for 68 percent of data center demand in Japan, and Osaka accounts for 26 percent.

    “Osaka area is the second-largest economic center next to Tokyo, and many Tokyo-based companies have branches in the Osaka area,” said Tanaka. “There is significant distance between Tokyo and Osaka, and customers can receive power from power company Kepco.” (Kepco is a different utility from Tepco, which was in the news frequently in the aftermath of the earthquake.)

    Ever more focus on seismic endurance

    Construction and operating costs of the new data center are expected to be 30 percent lower than those of a comparable facility, thanks to reuse of an existing building foundation, an efficient facility layout and what NTT calls a “mega-structure” design, which builds out a larger shell while reducing steel consumption and maintaining rigidity. The building is seismically engineered to withstand earthquakes on the scale of the 1995 Kobe Earthquake or the 2011 disaster.

    The data center is six kilometers from Osaka Bay and three kilometers from the Yodo River, putting it at a safe distance from potential tsunamis, floods and high tides. Electric power equipment, communication facilities, server rooms and other important facilities will be placed on the second floor in the unlikely event that water does reach the site. Backup power and about 430 square meters of office space could serve as a business continuity area if a disaster were to strike Tokyo or another city where a customer has domestic operations.

    NTT is tapping power from two separate power substations for reliability.

    SDN for sophisticated connectivity services

    The newest addition to Nexcenter’s fleet, the data center will provide cloud and colocation services. NTT Com is also incorporating software-defined networking technologies, based on OpenFlow, that let users change network configurations flexibly and on demand.
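
    In rough outline (a plain-Python illustration of the OpenFlow match/action model, not NTT Com’s controller or any real OpenFlow library’s API), an SDN controller reconfigures the network by rewriting prioritized match/action entries in switch flow tables:

        # Illustrative sketch of the OpenFlow match/action model. Field names
        # and actions are simplified stand-ins, not a real controller API.

        class FlowTable:
            def __init__(self):
                self.entries = []  # (priority, match, action) tuples

            def add_flow(self, priority, match, action):
                self.entries.append((priority, match, action))
                self.entries.sort(key=lambda e: -e[0])  # highest priority wins

            def lookup(self, packet):
                for _, match, action in self.entries:
                    if all(packet.get(k) == v for k, v in match.items()):
                        return action
                return "drop"  # table-miss default

        table = FlowTable()
        # Steer a tenant's VLAN toward the cloud interconnect on demand,
        # with a low-priority default route for everything else.
        table.add_flow(100, {"vlan": 42}, "output:cloud_uplink")
        table.add_flow(10, {}, "output:default_gw")

        print(table.lookup({"vlan": 42, "dst": "10.0.0.5"}))  # output:cloud_uplink
        print(table.lookup({"vlan": 7}))                      # output:default_gw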

    The facility will be connected to the company’s Biz Hosting Enterprise Cloud via an SDN-supported colocation connection service, giving access to the company’s wider footprint. NTT will also provide connectivity services to Osaka 1 and 2 and to its data center in the Dojima area, the Kansai region’s major site for Internet exchanges. Direct optical fiber connections will be possible thanks to cable tunnels and proximity to the Dojima area.

    Efficient air conditioning systems will incorporate water cooling, end wall injection air conditioning and outdoor air to reduce power consumption. It’s anticipated to be one of the most efficient facilities in the surrounding Kansai region.

    Specs on par with Tokyo

    The data center will be easily accessible from multiple railway stations and is within walking distance of many principal areas of Osaka.

    “Demand is not only from Tokyo-based companies. It’s growing on a national basis,” said Tanaka. “Osaka-based companies also look for a data center that has the same specification levels as those in Tokyo, so we need to provide a high-quality data center which has high seismic capacity and is directly connected to our secure earthquake-proof cable tunnel.”

    12:30p
    Water-cooled Solutions for High Density Rack Cooling

    Graham Whitmore is the president and CEO of the Motivair Corporation.

    Server densities are steadily increasing for HPC applications in legacy, government and institutional data centers. As densities increase, traditional cooling systems are struggling to keep up. Water cooling is a concern for many data center operators. However, water can carry roughly 3,000 to 4,000 times as much heat as the same volume of air, so the efficiency of water cooling is compelling.
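
    That multiplier comes from volumetric heat capacity, which is easy to sanity-check with round textbook property values (the figures below are generic physical constants, not vendor data):

        # Back-of-the-envelope check: volumetric heat capacity of water vs. air,
        # using round textbook values at roughly room temperature.
        water = 1000 * 4186   # density (kg/m^3) x specific heat (J/(kg*K))
        air = 1.2 * 1005      # density (kg/m^3) x specific heat (J/(kg*K))
        print(f"water stores ~{water / air:,.0f}x more heat per unit volume")
        # -> roughly 3,500x, in line with the 3,000-4,000x rule of thumb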

    A look into cooling solutions

    Air-cooled and water-cooled CRAC units have been the mainstay of data center cooling for many years. Originally designed for mainframe cooling, CRAC units provide bulk cooling of data center air with underfloor air distribution. They are not suited for high density rack cooling because they cannot direct sufficient cooling capacity or airflow through high density racks. Adding fan-powered floor grilles in front of racks helps, but the servers are still dependent on their own fans to draw air over the servers.

    Water-cooled in-row coolers are suitable for low-medium densities up to around 15kW, but they require expensive aisle containment structures in order to function to their full potential, while increasing the total rack footprint by up to 50 percent. Situated between racks and containing a cooling water coil and fan(s), in-row coolers function by drawing air from the hot aisle and discharging cooler air to the cold aisle. The server fans draw this cooler air across the servers, which again is the major limiting factor.

    Passive rear door coolers were the first solution to remove server heat directly at its source, using water as the cooling medium. These are replacement rear doors with a chilled water coil mounted inside. Their limiting factor is that they must rely on the server fans to push the hot server air across the resistance of the cooling coil. Maximum cooling capacity is limited to about 15kW by coil performance with airflow from the server fans. Server fan power is a critical factor in overall server energy consumption, and adding coil resistance increases the fans’ power draw.

    Direct on-chip cooling is a recent water-cooled solution for high density rack cooling but is not widely used due to the high initial cost, invasive installation inside the servers, and highly specialized installation/service. The most attractive aspect is the ability to use warmer water for this system, due to the high contact temperature within the servers. This reduces the cooling cost of outdoor radiator fans or evaporative towers without the need for refrigeration.

    Active rear doors consist of a chilled water coil, variable-speed (EC) fans, a water control valve and PLC controls. They are dynamic, able to provide the exact cooling capacity, airflow and water flow for any changing heat load up to their rated capacity. Active rear doors extend the front-to-rear dimension of the racks by about 9 inches, consuming far less space than in-row coolers. Slightly narrowing the aisles while maintaining rack width saves valuable floor space and allows for future expansion.

    Remove server heat at its source

    A recent independent test by Lawrence Berkeley National Laboratory determined that, even in optimum operating conditions, direct on-chip cooling removes only 70-75 percent of the server heat at the source, leaving at least 25-30 percent of the heat to be removed by the data center’s AC system. Less-than-optimum operating conditions (rack loading, water temperature, etc.) reduce on-chip cooling performance accordingly.

    Active rear door coolers are designed to remove a minimum of 100 percent of server heat at its source. Current cooling capacities range up to 45kW. They can also remove the building, lighting and personnel heat loads, so no other AC is required in the white space of a rack-based system.

    By removing a minimum of 100 percent server heat, active rear doors are able to deliver return air to the room at or below the server entering air temperature. This also allows the servers to operate at a lower temperature than other systems, improving their overall efficiency and life expectancy.
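
    A quick sensible-heat estimate shows how much air a fully loaded rack needs to move (standard air properties and an assumed 20F air-side temperature rise; these are illustrative values, not Motivair specifications):

        # Sensible-heat estimate of the airflow needed to carry away rack heat:
        #   volumetric_flow = power / (air_density * specific_heat * delta_T)
        power_w = 45_000   # a 45kW rack, the top of the stated capacity range
        rho = 1.2          # air density, kg/m^3
        cp = 1005          # specific heat of air, J/(kg*K)
        delta_t_k = 11.1   # assumed ~20F air temperature rise across the servers

        flow_m3s = power_w / (rho * cp * delta_t_k)
        print(f"{flow_m3s:.2f} m^3/s ~= {flow_m3s * 2118.88:,.0f} CFM")
        # -> roughly 7,000 CFM through a single rack, which is why moving
        #    enough air, not coil capacity alone, tends to be the bottleneck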

    eBay recently installed active water-cooled doors in its Phoenix data center after a complete evaluation of alternative systems to remove 35kW per rack.

    A few things to consider

    Active rear door power cost is a combination of EC fan power and external mechanical cooling. The doors are designed to operate with 65-75F entering water, which can come from a chilled water system or be blended from other sources, including aquifers, radiators and towers, to minimize energy usage.

    The running cost of any chiller drops by more than 30 percent when it operates with 65F leaving water instead of the industry-standard 45F. The higher water temperature in the evaporator narrows the difference between the chiller’s evaporating and condensing pressures, so the refrigeration compressors consume proportionally less power.
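
    The trend can be sanity-checked with an idealized Carnot coefficient of performance. Real chillers fall well short of Carnot, and the 105F condensing temperature below is an assumption, but the relative gain from a warmer evaporator comes out in the same range:

        # Idealized Carnot COP: T_evap / (T_cond - T_evap), in kelvin.
        def f_to_k(f):
            return (f - 32) * 5 / 9 + 273.15

        t_cond = f_to_k(105)  # assumed condensing temperature
        for leaving_water in (45, 65):
            t_evap = f_to_k(leaving_water)
            cop = t_evap / (t_cond - t_evap)
            print(f"{leaving_water}F leaving water: ideal COP ~= {cop:.1f}")
        # Ideal COP rises from ~8.4 to ~13.1, i.e. the compressor does the
        # same job on roughly a third less power, consistent with the
        # >30 percent figure above.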

    The growing use of free cooling and adiabatic chillers in Northern and Central states reduces chiller power by 70 percent or more when delivering water at 65F or higher. This is highly advantageous when applied to water-cooled data center cooling systems that can operate at higher temperatures.

    An independent global user has confirmed that the fan power in active doors is offset by the reduction in server fan power due to reduced frictional resistance, effectively eliminating rear door fan power from the energy equation. EC fans in active doors are controlled thermostatically to operate at the minimum power relative to server load. An air pressure switch overrides the controls in the event that server fan output ever exceeds that of the rear door fans.
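
    A minimal sketch of that control scheme (illustrative Python standing in for PLC logic; the setpoint and gain are invented for the example, not Motivair’s values):

        # Illustrative control loop for an active rear door: EC fans track
        # exhaust temperature, and a differential-pressure switch overrides
        # if the server fans ever out-push the door fans.
        SETPOINT_C = 24.0  # assumed return-air target temperature
        GAIN = 0.08        # assumed proportional gain (fan fraction per degC)

        def fan_command(exhaust_temp_c, pressure_switch_tripped):
            if pressure_switch_tripped:
                return 1.0  # full speed: servers are out-pushing the door
            error = exhaust_temp_c - SETPOINT_C
            return min(1.0, max(0.2, 0.2 + GAIN * error))  # 20% floor

        print(fan_command(26.0, False))  # light load  -> partial speed
        print(fan_command(35.0, False))  # heavy load  -> full speed
        print(fan_command(25.0, True))   # override    -> full speed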

    Today, water cooling is a rapidly growing design consideration in high density rack cooling. Active water-cooled rear door coolers offer reduced occupied space, ease of installation, and increased efficiency with lower operating costs when combined with high efficiency chillers. Most importantly they deliver a minimum of 100 percent server heat removal at an attractive initial and operating cost.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    The IPO Road: GoDaddy’s Years-in-the-Making Preparation for Wall Street


    This article originally appeared at The WHIR.

    Once known for its commercials made to appeal more to NASCAR fans than to techies, GoDaddy has adopted a more serious posture in the lead-up to a new initial public offering. It turns out that the tactics that helped it become one of the largest hosting companies in the world needed an update, and the past few years have seen GoDaddy reassess and reinvent itself to become IPO-ready.

    GoDaddy had actually filed for an IPO back in 2006, aiming to raise more than $100 million, but the filing was withdrawn, at least partly because it was a bad time for tech companies to launch IPOs. Instead of going public, the company sought private equity, agreeing to let KKR, Silver Lake Partners and Technology Crossover Ventures buy a 65 percent stake in the company in 2011.

    2011 was also the year GoDaddy founder Bob Parsons, who made a media stir when he circulated a video of himself shooting an elephant in Zimbabwe, stepped down as CEO to become executive chairman. After Blake Irving stepped in as the new CEO in January 2013, Parsons further diminished his role at the company, stepping down as executive chairman in early 2014 while remaining on the board.

    Last week, GoDaddy filed for another IPO with the Securities and Exchange Commission, aiming to raise $100 million.

    But GoDaddy has changed a lot since its first IPO filing. The new filing shows that it has around 12 million customers worldwide and that, while its revenue has steadily increased to more than $1 billion, it has been losing money since 2010.

    The company also has a growing revenue base: average revenue per customer has grown from $94 in 2009 to $104 in 2013, which is significant given the number of customers.


    The Importance of Image

    According to the IPO filing, the company attributes much of its brand visibility to its provocative and controversial ads, celebrity endorsements and Super Bowl commercials, but says it is repositioning itself to appeal to new markets.

    As Slate’s Seth Stevenson wrote regarding GoDaddy’s Super Bowl ads: “I am loath to admit it, but I think their long-game strategy has paid off. Their early, crass, attention-seeking efforts won them brand recognition as a domain name purveyor, and they’re now pivoting their image to something more broadly palatable.”

    GoDaddy states in the filing, “During 2013, we began re-orienting our brand position to focus more specifically on how we help individuals start, grow and run their own ventures.”

    One of its latest Super Bowl ads, for instance, featured a 36-year-old woman being able to quit her job as a machine engineer to pursue her puppet-making business full-time because of her website – hosted by GoDaddy.

    The company is also investing substantial resources to increase its brand awareness, both generally and in specific geographies (including Latin America, Europe and India), and among specific customer groups, such as web professionals. Yet the company notes that successfully repositioning the brand might not necessarily translate into customer or revenue growth, or even greater brand recognition.

    Underlying Changes

    But there have also been some significant changes beneath the surface to revamp GoDaddy’s services.

    One seemingly basic improvement was that GoDaddy recently gave customers access to cPanel administration tools for their web hosting accounts. cPanel allows them to do things like manage MySQL databases, add domain names, do one-click application installs, migrate cPanel sites and schedule cron jobs. This might not be something that appeals to the average customer, but it puts GoDaddy’s services closer in line with what’s common in the hosting industry.

    In recent years, it acquired technology companies including accounting software developer Outright (now GoDaddy’s “Online Bookkeeping” service), local online marketing platform Locu and invoicing software provider Ronin, all of whose technologies are being used to reinforce GoDaddy’s services.

    While many see the company as a provider of domains and web hosting, GoDaddy wants customers to grow their businesses and run their operations with productivity tools such as invoicing, bookkeeping and payment solutions, as well as marketing products.

    Moving Beyond its Core Audience

    GoDaddy was originally positioned as an approachable company that eschewed the techie lingo and concepts that would have intimidated many people in the early days of the Internet.

    Yet to move forward, GoDaddy needs to seek out a more tech-savvy clientele of web designers, developers and startups. One of its steps in this direction was its acquisition of Media Temple, also known as “(mt)”, in October 2013. While (mt) is said to be continuing to operate as an independent and autonomous company, it lends credibility to the GoDaddy brand.

    Increasingly, we’re seeing a company that no longer wants to be known for its outrageous former CEO and raunchy Super Bowl ads. Last year, it hired Elissa Murphy as CTO and sponsored the Close The Gap App, an online tool designed to provide personal strategies for closing the leadership and pay gap.

    Clearly, the formerly unapologetic company is attempting to right the wrongs of its past. And while its former image and capabilities may have suited its legacy customers, GoDaddy is becoming a more innovative company whose culture and technologies are changing with the times.

    As GoDaddy prepares to enter the stock market, it will be interesting to see if shedding its edgy image for a more refined one will ultimately make it more successful – or whether it will be seen as just another web hosting company.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/ipo-road-godaddys-years-making-preparation-wall-street

    2:00p
    Ten of the Strangest Data Center Outages

    Every once in a while, utility power goes out and the backup systems fail, or a technician makes a mistake, and the data center goes down. While outages have become less frequent as the industry’s practices continuously improve, things still occasionally go wrong. But sometimes something strange and completely unexpected causes the dreaded unplanned data center downtime.

    Here is a list of some of the strangest data center downtime causes we’ve seen:

    The Leap Second Bug

    A leap second is a one-second adjustment that is occasionally applied to Universal Time to account for variations in the earth’s rotation speed. The addition of a single second to the world’s atomic clocks caused problems for a number of IT systems in 2012, when several popular web sites, including LinkedIn, Reddit, Mozilla and The Pirate Bay, went down. In Australia, 400 Qantas flights were delayed by two hours as the airline had to switch to manual check-ins.
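
    The failure mode is easy to reproduce: most software assumes seconds run 0 through 59, so the 23:59:60 timestamp a leap second inserts is simply unrepresentable (Python shown as an illustration; the 2012 outages were widely attributed to a Linux kernel timer bug rather than this exact error):

        from datetime import datetime

        # UTC gained an extra second at the end of June 30, 2012: 23:59:60.
        # Code that assumes seconds always run 0-59 cannot even represent it.
        try:
            datetime(2012, 6, 30, 23, 59, 60)
        except ValueError as err:
            print("unrepresentable:", err)  # second must be in 0..59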

    Squirrel takes down Yahoo’s Santa Clara data center

    Squirrels taking a data center down isn’t actually all that rare. They chew everything, including all of those important wires we use to transfer communications. In 2010, “A frying squirrel took out half of our Santa Clara data center,” said Mike Christian, who runs business continuity for Yahoo, during a keynote at the O’Reilly Velocity conference.

    Migration highway

    Moving servers can be a tricky business. NaviSite (now owned by Time Warner) acquired a hosting provider called Alabanza in 2007 and was moving customer accounts from Alabanza’s main data center in Baltimore to a facility in Andover, Massachusetts.

    They literally unplugged the servers, put them on a truck, and drove them more than 420 miles. Many websites hosted by Alabanza were reportedly offline for as long as the drive and reinstallation work took.

    Another move-related problem occurred a few months earlier, when Hostway moved ValueWeb servers from Miami to Tampa. Hostway said later that more than 500 servers suffered hardware failures when they were restarted in the new facility.

    Ship drops anchor on the Internet

    Massive undersea cables carry traffic from continent to continent. These cables are durable, considering where they reside. However, there has been at least one instance of a ship dropping its anchor on one of them. A plague of undersea cable cuts in 2008, while not necessarily a data center outage, did cause downtime for some regions.

    “Every wall is a door.” – Ralph Waldo Emerson

    Nianet, a Danish ISP, went down when thieves cut a hole in the walls of its Taastrup data center to sneak in and steal equipment. They walked off with a bunch of networking cards, according to news reports. How the thieves were able to cut through a data center wall, and why they did so just to steal networking cards, remains a mystery.

    Careful where you throw your cigarette butt

    At least one data center downtime incident was sparked by smoldering mulch. The Perth iX data center in Western Australia shut down for an hour after its VESDA (Very Early Smoke Detection Apparatus) system detected smoke in the data center. The cause was identified as a smoldering mulch-filled garden bed alongside the outside wall of the facility, most likely lit by a burning cigarette butt.

    Keep on truckin’

    In 2007, Rackspace, a company with a phenomenal uptime record, suffered an outage for several hours after a truck drove into a power transformer, which exploded.

    The backup power tried to kick in, but two chillers failed to start. The outage took down some of the biggest sites on the Internet at the time.

    Czech-mate, Internet

    In 2009, a single errant BGP announcement by a small Czech ISP created brief outages at several large hosting companies. Czech provider Supronet “single-handedly caused a global Internet meltdown for upwards of an hour,” said Renesys in a report.
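
    The incident was reportedly triggered by a misconfigured AS-path prepend that produced an abnormally long path. A toy model of BGP’s shortest-AS-path preference (illustrative Python with documentation-range AS numbers, not real router code) shows why such an announcement propagates anyway:

        # Toy model of BGP path selection: prefer the shortest AS path.
        # AS numbers are from the documentation range (RFC 5398), and the
        # prefix is a documentation prefix -- purely illustrative values.
        routes = {}  # prefix -> best AS path seen so far

        def announce(prefix, as_path):
            best = routes.get(prefix)
            if best is None or len(as_path) < len(best):
                routes[prefix] = as_path  # adopt and re-advertise to peers

        announce("192.0.2.0/24", [64496, 64497, 64511])           # normal path
        announce("192.0.2.0/24", [64496, 64511] + [64511] * 250)  # runaway prepend

        print(len(routes["192.0.2.0/24"]), "hops on the chosen path")
        # The bloated path loses the length comparison, but every router
        # still has to parse and carry it; in 2009, paths that long
        # reportedly tripped bugs in some router implementations.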

    Is my server down? Yes, down at the pawn shop

    In 2007, two masked men broke into a Chicago data center and stole a bunch of computer equipment. The data center belonged to an old hosting company called C I Host. The company was eventually acquired and the name doesn’t exist anymore.

    Some reported that the lone employee working the night of the robbery was Tasered; others reported he was pistol-whipped. Around 20 servers were stolen, taking down a number of websites for good.

    Following the robbery, there was rampant speculation as to what happened. Some say the robbers cut into the facility with a high-powered saw (which did happen to Nianet years later); C I Host said the two men hid in a mechanical closet. There was some speculation that the company staged the robbery in order to commit fraud, but this was unsubstantiated.

    But perhaps the strangest thing about it was that it wasn’t even the first time it happened.

    “One of the biggest mistakes is that people are talking about four robberies,” the CEO told theWHIR at the time. “A robbery means that property has been seized through violence or intimidation. C I Host has technically only been robbed twice in two years. The other two were break-ins where things were stolen, but not robberies.”

    Superstorm Sandy

    It’s hard to argue for a stranger event than Superstorm Sandy. The freak, once-in-a-lifetime (we hope) storm caused havoc in New York. Data center outages it caused led to some amazing stories, like the Bucket Brigade.

    Storms usually die down by the time they reach that far north. However, as Sandy moved north, it took on some extra-tropical characteristics and grew to enormous dimensions. A major deviation of the high-altitude jet stream made the storm take a sharp left towards the coast. And it hit at high tide. The moon was full, so it was an even higher tide than normal.

    A storm of this type had never been recorded in the Northeast, making it an even freakier occurrence than a squirrel frying your power, a mulch fire or a truck swerving off the road. There are some things you just can’t predict in this world.

    Did we miss one? We’d love to hear from you.

    2:00p
    Open Compute – Going Beyond the Hype

    How can you create a truly agile and scalable infrastructure? How do you design a platform built around efficiency and growth? What can you do to keep up with the dynamic growth and demand requirements around the modern data center platform?

    Let’s delve in and find out.

    Founded in 2011, the Open Compute Project has been gaining attention from more and more organizations. The promise of lower cost and open standards for IT servers and other hardware seems like a worthwhile endeavor, one that should benefit all users of IT hardware as well as improve the energy efficiency of the entire data center ecosystem.

    The open source concept has proven itself successful for software, as witnessed by the widespread adoption and acceptance of Linux, despite early rejection from enterprise organizations.

    The goal of Open Compute?

    • To develop and share the design for “vanity free” IT hardware which is energy efficient and less expensive.

    In this whitepaper from Intel, HP and Data Center Knowledge contributor Julius Neudorfer, we examine the various developments, industry adopters, and functional deployments of Open Compute technologies, as well as potential advantages and limitations.

    Here’s something to think about: one OCP design philosophy is a “vanity free,” no-frills design, which starts without an OEM-branded faceplate. In fact, the original OCP server had no faceplate at all. It used only the minimal components necessary for a dedicated function, such as a massive web server farm (the server had no video chips or connectors). This created a direct shift in how power and resources were utilized.

    Today’s “mainstream” data center designs are based on the assumption that standardized commodity IT hardware will be used over the facility’s expected design life. The IT equipment may be refreshed relatively frequently, but the facility should remain viable for 15 years or more without major changes. This may also prove true of the Open Compute facility, and perhaps, because of its energy-efficient cooling design, it will be adopted by mainstream data centers.

    There is no question that there are certain types of environments which can benefit economically from lower IT hardware costs and better energy efficiency of the OCP computing paradigm. Download this whitepaper today to learn how data centers will actually adopt the “open” concept and utilize the OCP open hardware “standards” that exist today. Additionally, find out how this type of platform will impact your data center – and business structure – in the near future.

    4:40p
    China’s Milkyway-2 Remains World’s Fastest Supercomputer on Top500

    China’s Milkyway-2, also known as Tianhe-2, has retained the number one ranking on the biannual Top500 list of the world’s most powerful supercomputers for the third consecutive time.

    Recording the same 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark as on the November 2013 list, Milkyway-2 reflects the slowing growth in power of the world’s fastest supercomputers.

    The latest Top500 was announced at the International Supercomputing Conference, which gets underway in Leipzig, Germany, this week.

    The 43rd edition of the Top500 list has only one change in the top 10 systems: a Cray XC30, installed at an undisclosed U.S. government site, enters at number 10.

    Fastest computer 30 petaflop/s faster than number 10

    With a 30 petaflop/s difference between number one and number 10 on the June list, the large installations at the top have stagnated. Combined performance of all 500 systems has grown to 274 petaflop/s, compared to 250 petaflop/s six months ago and 223 petaflop/s one year ago.

    A lack of new large-scale installations may be to blame for the lack of growth in Top500 performance ratings, though many other factors contribute.

    The race to exascale seemed to derail briefly last year, as the industry discussed HPCG (High Performance Conjugate Gradients), an effort to create a more relevant metric for ranking HPC systems. The HPCG metric addresses the fact that HPC system designs are no longer driven by pure computational performance alone.
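
    Where Linpack factors a dense matrix and stresses raw floating-point throughput, HPCG runs a conjugate gradient solve whose sparse operations stress memory bandwidth instead. A textbook CG iteration (a small dense example in Python with NumPy, not HPCG’s actual code) shows the matrix-vector product at its core:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10):
            """Textbook CG for a symmetric positive-definite system Ax = b."""
            x = np.zeros_like(b)
            r = b - A @ x   # residual
            p = r.copy()    # search direction
            rs = r @ r
            while np.sqrt(rs) > tol:
                Ap = A @ p  # the matrix-vector product HPCG hammers
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])  # tiny SPD stand-in
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))  # ~[0.0909, 0.6364]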

    The Green500 list of the most energy-efficient supercomputers and the Graph500 list of data-intensive supercomputers reflect the growing importance of supercomputer power consumption and real-world workloads.

    Other highlights from the June 2014 Top500 list include:

    • U.S. installations, while still the top country overall, are down from 265 to 233. The number of Chinese systems on the list rose from 63 to 76.
    • A total of 62 systems on the list are using accelerator/co-processor technology, up from 53 from November 2013. Milkyway-2 and Stampede (a University of Texas at Austin supercomputer) are using Intel Xeon Phi co-processors, while Titan (Oak Ridge National Lab) and Piz Daint (Swiss National Supercomputing Center) are using NVIDIA GPUs as co-processors.
    • Intel processors power 85.4 percent of the Top500 systems.
    • The number of systems by vendor ranks HP first, IBM second and Cray third.

    New HPC systems and approaches under development may rescue the stagnant Top500 list in the future. Oak Ridge, Argonne and Lawrence Livermore labs are working together on CORAL – a next-generation supercomputer that may reach 100-200 petaflops. Earlier this month HP rolled out Apollo — a new converged HPC system featuring up to 160 servers per rack and 100 percent liquid cooling.

    5:30p
    Red Hat to Acquire French OpenStack Cloud Firm eNovance for €70M

    Red Hat has agreed to acquire eNovance, a privately held OpenStack-oriented company that helps service providers and enterprises build clouds, for approximately €50 million in cash and €20 million in shares of Red Hat common stock.

    The French company, with which Red Hat has a history of collaboration, has contributed extensively to OpenStack, the open source cloud platform, and the acquisition will unite two of the project’s top ten contributors. eNovance brings systems integration capabilities and engineering talent to Red Hat.

    The transaction is expected to close in June 2014.

    OpenStack continues to gain traction as startups and enterprise service providers alike continue to partner, innovate and contribute to the project. Red Hat has made several OpenStack-related moves this year, maintaining a leadership position in the ecosystem.

    eNovance was founded in 2008 to help service providers and large-scale enterprises build and deploy cloud infrastructures. The company helps several organizations manage a multitude of customer web applications on public clouds worldwide.

    It has more than 150 global customers, including Alcatel-Lucent, AXA, Cisco, Cloudwatt and Ericsson, and offices in Paris, Montreal, and Bangalore.

    Red Hat Linux OpenStack-centered partnership

    Red Hat and eNovance first partnered in 2013 to deliver OpenStack implementation and integration services to joint customers, helping accelerate adoption of the Red Hat Enterprise Linux OpenStack Platform.

    The companies expanded their collaboration this year to drive Network Functions Virtualization (NFV) and telecommunications innovations into OpenStack, with the aim of delivering complete carrier-grade telecommunications offerings.

    “eNovance, like Red Hat, understands the transformative power OpenStack can have on the enterprise market when it is both deployed and integrated in the right fashion,” said Raphaël Ferreira, co-founder and CEO of eNovance.

    The acquisition further boosts Red Hat’s OpenStack positioning. In 2014, Red Hat has made several open source infrastructure moves, including the acquisition of Inktank, the company behind the open source distributed object store and file system Ceph and its enterprise support; an OpenStack-oriented partnership with Dell and Telefonica; a strengthened alliance with Hortonworks, provider of the popular enterprise distribution of Apache Hadoop; and innovation in the hot application container space.

    The company has also announced OpenShift Marketplace, a one-stop cloud shop.

    6:09p
    Pure Storage Buys 100 Patents From IBM to Protect Itself From Lawsuits

    Giving a strategic boost to its intellectual property portfolio, flash array vendor Pure Storage has acquired more than 100 storage and related technology patents from IBM. The two companies have also signed a patent cross-license agreement.

    Pure Storage was recently named to CNBC’s Disruptor 50 list, and it’s this disruption the company is looking to protect as the move toward all-flash systems continues and legacy storage vendors try to catch up. After netting a $225 million Series F funding round in April at a $3 billion valuation, CEO Scott Dietzen is looking to keep Pure pure; while the company is confident in its own granted and pending patents, the agreement with IBM will help protect it against hostile lawsuits by competitors.

    Patent litigation is the wrong way to compete

    Dietzen said Pure Storage pledges to not “make first use of these patents, but rather use this IP only to defend against aggression from those competitors who choose litigation over marketplace competition. Our goal is to keep the battle out of the courtroom and in customer data centers, where it belongs.”

    Upon receiving the $225 million round, Dietzen noted that the company was well positioned for long-term independence, as adoption of all-flash arrays over legacy mechanical drives continues to accelerate.

    Joe FitzGerald, a top lawyer at Pure Storage, said, “This transaction significantly increases the number of Pure Storage’s patents, creating a more robust and strategic patent portfolio that will allow our customers to benefit even more from our focus on advancing storage innovation.”

    IBM has led the annual list of U.S. patent recipients for 21 consecutive years. “This agreement with Pure Storage demonstrates the value of IBM’s patented inventions and our dedication to encouraging innovation by licensing access to our extensive patent portfolio,” said William LaFontaine, general manager of intellectual property at IBM. “IBM’s extensive R&D investment and the industry’s largest storage array patent portfolio are key drivers behind our flash storage leadership.”

    8:13p
    DataBank Plans 20MW Data Center in Minneapolis Market

    Colocation specialist DataBank is known for sturdy infrastructure and connectivity. Those two traits will be the building blocks for a major expansion in the Minneapolis market.

    The company today announced plans to build a new data center in Eagan, Minnesota, that will support up to 20 megawatts of power for customer IT equipment. The new facility builds upon DataBank’s 2013 acquisition of VeriSpace, which provided its initial footprint in the Minnesota region.

    It also means more new capacity in the fast-growing Minneapolis-St. Paul market, which has emerged as a hot expansion target for data center and cloud providers.

    The Eagan project is part of DataBank’s strategy to grow beyond its original footprint in Dallas. The company is also expanding into Kansas City, where it acquired Arsalon earlier this year.

    “This announcement represents an important benchmark in DataBank’s regional plan,” said Tim Moore, CEO of DataBank. “The facility infrastructure and service level we can offer will be unparalleled in this market. This will provide a great advantage for the growth of both our current client-base as well as meet demand from a wide diversity of businesses here in the Twin Cities.”

    Former Taystee site to get robust network connectivity

    DataBank has acquired an existing 88,000 square foot building in Eagan that it will convert into a data center, reinforcing the structure and retrofitting some of the infrastructure to support cooling. The former Taystee Bakery building can accommodate up to 48,000 square feet of raised-floor customer space, deployed in 10,000 square foot increments.

    When completed, the facility will include 20 megawatts of utility power using diverse feeds, configured in a 2N (complete redundancy) system with an on-site power generation plant.

    In addition to highly resilient infrastructure, DataBank sees an opportunity for the Eagan facility to serve as a second “carrier hotel,” offering interconnections to providers in the Minneapolis market and serving as a suburban complement to the existing data hub at the “511 building” downtown.

    DataBank scouted many sites in the area but was impressed with Eagan’s combination of robust power availability from Dakota Electric and Great River Energy, as well as its existing AccessEagan fiber optic network.

    Governor: “Great news” for local economy

    “DataBank’s expansion is great news for Eagan and great news for Minnesota,” said Minnesota Governor Mark Dayton. “We welcome this new development and congratulate the company on this important project that will create jobs in our state and provide an important telecommunications service to the entire region.”

    Minnesota has emerged as one of the most promising second-tier markets in the country, with a flurry of service providers deploying new mission-critical space. ViaWest recently opened a 9 megawatt facility; Cologix has been expanding rapidly in the 511 building; Stream Data Centers is building a 75,000 square foot data center in a southwest suburb; and Compass Datacenters has partnered with CenturyLink on a new facility.

    “DataBank’s planned colocation data center facility in Eagan is another sign of our region’s evolving innovation strength,” said Michael Langley, CEO of Greater MSP (Minneapolis Saint Paul Regional Economic Development Partnership). “It’s a win-win for DataBank, the city of Eagan, our region and the state of Minnesota.”

    DataBank currently owns and operates more than 180,000 square feet of data center space in the Dallas area, 15,000 square feet in Kansas City and another 15,000 square feet in the former VeriSpace site in Edina, Minnesota.

    8:44p
    7x24 Exchange Fall Conference

    7x24 Exchange will host its fall conference October 26-29 at the JW Marriott Desert Ridge in Phoenix, Arizona. The theme of the conference is Scaling to the Future.

    7x24 Exchange is aimed at knowledge exchange among those who design, build, operate and maintain mission-critical enterprise information infrastructures. Its goal is to improve end-to-end reliability by promoting dialogue among these groups.

    More details will be published as they become available. Check the 7x24 Exchange site for more information.

    Venue
    JW Marriott Desert Ridge
    5350 Marriott Drive, Phoenix, AZ 85054

    For more events, return to the Data Center Knowledge Events Calendar.

