Data Center Knowledge | News and analysis for the data center industry
 

Monday, July 1st, 2013

    11:30a
    Top 10 Data Center Stories, June 2013

    Here’s a look at Centercore, a multi-story factory-built data center design developed by Fidelity Investments. Fidelity is now commercializing Centercore. (Photo: Fidelity)

    The NSA was the focus of thousands of hot headlines across the web in June, and Data Center Knowledge was no exception. Our coverage of the agency’s new data center in Maryland was the most popular story of the month, followed closely by Microsoft’s expansion in Iowa and Fidelity Investments’ surprise move to commercialize its in-house data center business. Also trending well this month was our coverage of the twice-yearly release of the Top500 list of the world’s most powerful supercomputers. Here are the most viewed stories on Data Center Knowledge for June 2013, ranked by page views. Enjoy!

    Stay current on Data Center Knowledge’s data center news by subscribing to our RSS feed and daily e-mail updates, or by following us on Twitter or Facebook. DCK is now on Google+.

    12:00p
    The Immersion Data Center: The New Frontier of High-Density Computing

    These tanks in the CGG data center in Houston are filled with 42 servers submerged in a liquid coolant, similar to mineral oil, developed by Green Revolution Cooling. (Photo: Rich Miller)

    HOUSTON – As you enter the data center at CGG, the first thing you notice is what’s missing – the noise and the breeze. Instead of rows of air-cooled black cabinets, the room is filled with tanks of liquid coolant, each containing up to 42 servers.

    This is the new frontier of immersion cooling, with servers submerged in a liquid similar to mineral oil. It’s also what a growing number of data centers may look like in coming years.

    The explosion of data we generate every day is creating a need for industrial-strength data crunching. That’s the specialty of CGG, which provides high-end geological and geophysical analysis to customers primarily in the oil and gas industry. One area of its expertise involves using powerful computers to sort through mountains of seismic data to produce images of the earth’s geology, helping identify the best places to find new sources of energy.

    CGG’s Houston data center is one of several hubs in its global network of 43 subsurface imaging centers. The company has shifted an entire data hall to an immersion cooling technology developed by Green Revolution Cooling (GRC). Instead of cool air flowing through a standing cabinet, the GRC system effectively tips the cooling paradigm on its back, with a liquid coolant flowing across servers housed in a tank.
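    To put the efficiency argument in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The IT load, PUE figures and electricity price are illustrative assumptions chosen for the comparison; they are not values reported by CGG or Green Revolution Cooling.

    ```python
    # Back-of-the-envelope comparison of cooling overhead for an air-cooled hall
    # versus an immersion-cooled hall. All figures are illustrative assumptions,
    # not numbers reported by CGG or Green Revolution Cooling.

    IT_LOAD_KW = 1000          # assumed IT load of the data hall
    PUE_AIR = 1.6              # assumed PUE for a conventional air-cooled hall
    PUE_IMMERSION = 1.1        # assumed PUE for an immersion-cooled hall
    HOURS_PER_YEAR = 8760
    PRICE_PER_KWH = 0.07       # assumed industrial electricity price, USD

    def annual_facility_kwh(it_load_kw: float, pue: float) -> float:
        """Total facility energy (IT load plus overhead) for one year, in kWh."""
        return it_load_kw * pue * HOURS_PER_YEAR

    air_kwh = annual_facility_kwh(IT_LOAD_KW, PUE_AIR)
    immersion_kwh = annual_facility_kwh(IT_LOAD_KW, PUE_IMMERSION)
    savings_kwh = air_kwh - immersion_kwh

    print(f"Air-cooled:       {air_kwh:,.0f} kWh/yr")
    print(f"Immersion-cooled: {immersion_kwh:,.0f} kWh/yr")
    print(f"Savings:          {savings_kwh:,.0f} kWh/yr "
          f"(~${savings_kwh * PRICE_PER_KWH:,.0f} at ${PRICE_PER_KWH}/kWh)")
    ```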

    Taking Immersion Cooling to Data Center Scale

    Since its launch in 2009, Green Revolution has seen its technology used by an Austin hosting company, several universities, telecom firms and ultra-scale cloud providers. CGG is the first company to deploy the GRC system at scale, with dozens of tanks in a single facility, providing its large-scale data processing infrastructure with new levels of energy savings, efficiency and performance.

    In this video, the IT team at CGG takes us inside the facility to provide a detailed look at its unique design and its use of immersion cooling.

    Will the appetite for ever-more powerful computing clusters push more users to adopt immersion cooling technologies? Intel recently concluded a year-long test with GRC technology and affirmed that its immersion cooling is highly efficient and safe for servers. The giant chipmaker says it will explore the development of reference designs and custom motherboards optimized for immersion cooling.

    Intel’s research lays the groundwork for end users or server OEMs to deploy oil-based cooling technologies on a broader basis. But a key first step is to see the technology working at scale in a live production data center. CGG’s data center provides a window into that experience – and perhaps, the future of high performance computing.

    12:15p
    Video: Taking the Plunge With Submerged Servers

    Four of the many tanks of servers submerged in liquid coolant at a CGG data center in Houston, Texas. (Photo: Rich Miller)

    Immersing servers in liquid is a significant change from the air-cooled cabinets seen in most traditional data centers. How do you “take the plunge?” In this video conversation, Laurent Clerc of CGG discusses his company’s process in adopting immersion cooling technology from Green Revolution Cooling, and the considerations in implementing the system. For the past two years, CGG has been running servers submerged in a liquid coolant similar to mineral oil. Clerc, who is the VP of Information Technology for CGG, talks with DCK about maintaining an immersion data center, and the savings made possible through this approach. This video runs 6 minutes.

    For a tour of CGG’s liquid-cooled facility, see The Immersion Data Center: The New Frontier of High-Density Computing.

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    1:00p
    Rackspace Bringing Hybrid Cloud to CERN

    A look at the ATLAS particle detector experiment at the Large Hadron Collider (LHC), the huge particle accelerator at CERN near Geneva, Switzerland. (Photo: Image Editor via Flickr)

    Rackspace Hosting has been pushing the hybrid computing message, and CERN is kicking the tires. CERN, the European Organization for Nuclear Research, will be relying on Rackspace’s Open Hybrid Cloud to help it discover the origins of the universe.

    Rackspace has entered into a contributor agreement with CERN openlab, the organizations said today. During the year-long collaboration, Rackspace will deliver a hybrid cloud solution featuring both private and public clouds powered by OpenStack.

    CERN has the largest research environment in the world, producing more than 25 petabytes of data annually. It is leveraging OpenStack software to manage the resources across its two data centers that power the Large Hadron Collider (LHC) and help unlock the mysteries of the universe. As the LHC smashes particles together to discover what makes the universe work, Rackspace is smashing public and private cloud together to discover what makes cloud providers work.

    Could Span 15,000 Servers

    For this project, they eventually expect to reach 15,000 hypervisors running 50,000 virtual machines, a not insignificant chunk of infrastructure. At the outset, the preliminary Rackspace private cloud will consist of just 20 physical nodes in CERN’s data center in Switzerland. By proving the effectiveness of OpenStack, Rackspace has a chance to sell CERN on its federated hybrid cloud capabilities – and through CERN, the world at large.
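    For a sense of the scale behind those figures, the short sketch below works out the average sustained data rate implied by 25 petabytes per year and the VM density implied by 50,000 virtual machines on 15,000 hypervisors. This is illustrative arithmetic only, not an official CERN or Rackspace calculation.

    ```python
    # Rough scale math from the figures in the article: 25 PB of data per year
    # and an eventual target of 50,000 VMs on 15,000 hypervisors.

    PETABYTE = 10**15                 # decimal petabyte, in bytes
    SECONDS_PER_YEAR = 365 * 24 * 3600

    data_per_year_bytes = 25 * PETABYTE
    avg_rate_gbps = data_per_year_bytes * 8 / SECONDS_PER_YEAR / 10**9

    vms = 50_000
    hypervisors = 15_000

    print(f"Average sustained data rate: {avg_rate_gbps:.1f} Gbps")   # ~6.3 Gbps
    print(f"Average VM density: {vms / hypervisors:.1f} VMs per hypervisor")
    ```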

    “There are two things we want to highlight: that we’re very excited to work with a visionary institution like CERN, and how important this is to our overall hybrid story,” said Darrin Hanson, Vice President of Rackspace Private Cloud. “In our research partnership with CERN openlab, both companies have a very tight alignment on being able to federate the cloud platform. It will be a very robust platform that acts as a simple system.”

    CERN is perhaps best known for the awe-inspiring Large Hadron Collider (LHC). CERN already uses the Rackspace public cloud; the new contributor agreement with CERN openlab is where private cloud comes into play. Rackspace will work with CERN openlab to federate CERN’s current managed services into Rackspace’s open public and private cloud environments.

    “For purposes of the project, we’re defining federation as single governance,” said Hanson. “We’ll be setting up a Rackspace private cloud inside, and we’ll test workloads, and be able to move seamlessly from the Rackspace private cloud. CERN is interested in being able to move workloads more easily.”
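    To illustrate the “single governance” idea in miniature, the hypothetical sketch below models a toy scheduler that treats a private region and a public region as one pool, preferring the private cloud and bursting to the public cloud when it is full. The class and method names are invented for illustration; they are not Rackspace or OpenStack APIs.

    ```python
    from dataclasses import dataclass, field

    # Toy model of "federation as single governance": one scheduler spanning a
    # private and a public cloud region. Names and policy are hypothetical.

    @dataclass
    class Region:
        name: str
        kind: str                 # "private" or "public"
        capacity_vms: int
        running: list = field(default_factory=list)

        def has_room(self) -> bool:
            return len(self.running) < self.capacity_vms

    class FederatedScheduler:
        """Places workloads across regions under a single policy:
        prefer the private cloud, burst to public when it is full."""

        def __init__(self, regions):
            self.regions = regions

        def place(self, workload: str) -> str:
            for kind in ("private", "public"):     # private first, then burst
                for region in self.regions:
                    if region.kind == kind and region.has_room():
                        region.running.append(workload)
                        return f"{workload} -> {region.name}"
            raise RuntimeError("no capacity in any region")

    if __name__ == "__main__":
        sched = FederatedScheduler([
            Region("cern-private", "private", capacity_vms=2),
            Region("rackspace-public", "public", capacity_vms=4),
        ])
        for job in ("sim-1", "sim-2", "sim-3"):
            print(sched.place(job))   # sim-3 bursts to the public region
    ```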

    A Future Bursting With Clouds

    This can reasonably be called a test run for something much bigger, with CERN fully testing the federated capabilities before diving in deeper. There’s already a very healthy relationship here. “Our conversation has been around flexible bursting and scaling and capacity planning,” said Hanson. “CERN has been a public cloud customer in the past. What we hope to do with this relationship is open them up to the idea of Rackspace private cloud on their premise and our premise.”

    The expanded relationship consists of certain key elements such as:

    • Federated Cloud Services based on OpenStack Cloud Technologies – Rackspace will work with CERN openlab to federate CERN’s current managed services into Rackspace’s open public and private cloud environments.
    • Personnel Support – Rackspace will fund one full-time member of the CERN personnel team, who will help create cloud federation technologies.

    “This is a landmark moment for Rackspace, as we feel this is an opportunity to take our already mutually beneficial relationship with CERN to new heights,” said Jim Curry, SVP and general manager of Rackspace Private Cloud. “Through ongoing collaboration with CERN openlab, we will broaden the global reach of our hybrid cloud solutions, while simultaneously helping to set the pace of innovation within the field of particle physics.”

    The new agreement is expected to accelerate the pace of innovation within the field of particle physics while broadening the global reach of Rackspace’s hybrid cloud solutions. It will be one of the largest hybrid clouds to date, spanning multiple clouds and data centers in support of a massive research effort.

    This is a marquee customer. At first, CERN will utilize Rackspace for testing and development of applications, with the future to be determined. The story extends beyond the marquee customer – CERN’s thumbs-up will prove the technology for countless businesses contemplating Rackspace’s hybrid cloud.

    “This is for large, even small customers wanting to take advantage of speed but have security performance issues,” said Hanson. “The private cloud story is an important part of our message. We can support it in a Rackspace data center or anywhere in the world.”

    1:15p
    CDNs Rebound, But Data Center Bellwethers Falter

    The second quarter of 2013 was a rough ride for the data center industry, as several of the sector’s bellwethers experienced selloffs and wound up lagging the broader market. The exception was content delivery providers, as Akamai Technologies (AKAM) and Limelight Networks (LLNW) bounced back after several years of rocky performance. Akamai shares rose 20.5 percent in the quarter ending June 30, while Limelight rose about 9 percent.

    Here’s a look at our Data Center Investor performance chart for the second quarter of 2013:

    [Chart: Data Center Investor stock performance, Q2 2013]

    It was another rough quarter for Rackspace Hosting (RAX), which saw a sharp selloff after it announced earnings that fell short of the expectations set by Wall Street analysts. It was the second consecutive earnings disappointment for Rackspace, which said the rate of growth for its cloud computing business has moderated.

    Shares of colocation market leader Equinix (EQIX) took a hit in early June after the Internal Revenue Service (IRS) said it would review its guidelines for real estate investment trusts (REITs). The agency has formed an internal working group to define its standards for REIT status, and will not approve any new applications until the process is completed. Equinix, which has announced plans to convert to a REIT, said it believes data center companies are appropriate candidates for the structure.

    The largest data center REIT, Digital Realty Trust (DLR), made investment headlines when a hedge fund announced that it was shorting the company’s shares, asserting that Digital Realty had not accounted for the full cost of future maintenance and upkeep of existing data centers. Jonathon Jacobson of Highfields Capital Management asserted that Digital Realty should be valued at $18 per share. Shares of Digital declined 8.8 percent for the quarter, but with DLR at $61 a share it seems safe to conclude that the market has rejected Highfields’ valuation of the stock.

    For the second quarter, the Dow Jones Industrials rose 2.3 percent, the S&P 500 gained 2.4 percent and the Nasdaq Composite climbed 4.2 percent.

    1:30p
    Understanding the Value and Scope of Data Center Commissioning

    Michael Donato is a Supervising Engineer, Emerson Network Power, Electrical Reliability Services. He previously wrote on Understanding Data Center Commissioning and Its Benefits.

    MICHAEL DONATO
    Emerson Network Power

    Commissioning is a relatively new discipline in data centers, and as a result, many data center managers do not have a clear picture of the purpose or the value this important process offers.

    One of the biggest challenges for data center managers investing in the commissioning process is the lack of a consistent approach from commissioning firms. Some commissioners primarily provide administrative oversight, creating a pathway for paperwork to flow. Other commissioners adopt a hands-on approach that can include a broad range of phased activities.

    To further complicate matters, a general misconception exists that commissioning and acceptance testing are one and the same. In fact, acceptance testing is a separate testing requirement—often reviewed by the Commissioning Authority (CxA)—that ensures individual components or pieces of equipment are installed properly and will operate according to the manufacturer’s specifications and industry standards. While certainly a critical step, acceptance testing is just one component of a much more comprehensive commissioning process.

    In the same vein, commissioning has also been confused with equipment startup, another individual construction activity that is often overseen by commissioners.

    Because of these inconsistencies and misconceptions, it is not uncommon for data center managers to request proposals from a handful of different commissioners and end up with a set of very different recommendations, accompanied by widely varying price points. It can be difficult to compare one bid to the next.

    The Argument for Commissioning

    In the midst of these discrepancies, how, then, can a data center manager determine the appropriate scope of commissioning activities for their project? To answer this question, it is prudent to consider the reasons why more and more data center owners are investing in commissioning.

    The major impetus behind commissioning data center systems and processes is the increasing complexity of the systems themselves. This complexity presents more opportunities for problems. At the same time, there is less and less tolerance for unplanned downtime. Due to the staggering cost of unplanned outages or failures, today’s data centers must operate reliably 100 percent of the time.

    Appropriate commissioning activities can ensure uptime by identifying the culprits behind data center failures and outages. Nearly 70 percent of early equipment failures can be traced to design, installation, or startup deficiencies. Unnecessary outages are often due to improper coordination and calibration of protective devices, wiring errors, design errors, and human error. Commissioning can help to detect and correct these problems before the failures or outages occur.

    Commissioning is also the answer to a wide variety of other owner concerns. Issues such as ensuring that the operations and maintenance (O&M) staff have adequate resources and training, improving the safety of the data center, and boosting data center efficiency can all be addressed by specifying the right commissioning activities.

    The appropriate scope of commissioning, then, relates directly to specific data center requirements. A three-phase comprehensive approach to commissioning — one that encompasses a wide range of building systems and spans the entire design/build process, from pre-design through occupancy — results in the greatest value to the project owners.

    Phase 1: Pre-Design/Design

    During the pre-design/design phase of a project, the first priority of the CxA is to determine and document the Owner’s Project Requirements (OPR). Based on the OPR, the CxA will develop the written commissioning plan that will identify systems to be commissioned and define the scope and schedule for all commissioning activities.

    Throughout the design process, the CxA will work closely with the design team to complete design reviews and make recommendations on design plans and documents, ensuring that the design of the data center meets the OPR.

    During this phase, the CxA will make sure quality systems and acceptance testing are specified for execution during the construction phase and will also help to establish training guidelines for O&M staff.

    Phase 2: Construction Phase

    Installation, startup, and acceptance testing of systems, equipment, and assemblies within the data center occur during the construction phase of the commissioning process.

    The CxA will review submittals of commissioned equipment and controls, and ensure that all systems and assemblies are properly installed prior to startup. The CxA may witness vendor startup of critical equipment.

    It is the CxA’s responsibility to develop functional and systems testing procedures and conduct all functional and systems tests. The goal of the tests is to ensure that all systems and assemblies operate properly and work together in accordance with the OPR.

    Late in the construction phase, the CxA will help prepare systems operating documentation and ensure that O&M staff receive training based on the requirements established during phase one.

    Phase 3: Occupancy Phase

    After construction of the data center is complete, commissioning activities can continue up to one year post occupancy. During the occupancy phase, the commissioning team will perform any deferred or seasonal testing that could not be completed during construction.

    The occupancy phase also includes an adjustment period during which changes may be made to systems and equipment to ensure optimum operation. During this phase, and prior to the expiration of the original construction warranty, the commissioning team can conduct a warranty review to identify any issues to be corrected by the general contractor.

    Finally, the occupancy phase should include a lessons learned workshop that involves the commissioners, design and construction teams, and O&M staff. The workshop is an opportunity to discuss the project’s successes as well as its challenges, and determine future improvements.
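    One way to keep the three-phase scope visible on a project is to track it as a simple checklist. The sketch below encodes the phases and activities described above as a plain Python structure; the grouping follows this article, and the code is an illustrative aid rather than a formal commissioning standard.

    ```python
    # Checklist structure capturing the three-phase commissioning scope described
    # above. Illustrative only; not a formal commissioning standard.

    COMMISSIONING_PLAN = {
        "pre-design/design": [
            "Document Owner's Project Requirements (OPR)",
            "Write commissioning plan: systems, scope, schedule",
            "Review design documents against the OPR",
            "Specify acceptance testing and O&M training guidelines",
        ],
        "construction": [
            "Review submittals for commissioned equipment and controls",
            "Verify installation prior to startup; witness critical startups",
            "Develop and conduct functional and systems tests",
            "Prepare operating documentation and deliver O&M training",
        ],
        "occupancy": [
            "Perform deferred or seasonal testing (up to one year post-occupancy)",
            "Adjust systems and equipment for optimum operation",
            "Conduct warranty review before the construction warranty expires",
            "Hold a lessons-learned workshop with all teams",
        ],
    }

    def print_plan(plan: dict) -> None:
        """Print each phase with unchecked task boxes."""
        for phase, tasks in plan.items():
            print(phase.upper())
            for task in tasks:
                print(f"  [ ] {task}")

    if __name__ == "__main__":
        print_plan(COMMISSIONING_PLAN)
    ```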

    Commissioning is demonstrably a critical step in the design and build of a new data center facility, system or addition. To glean the greatest value from commissioning, data center managers must first understand the full scope of potential commissioning activities, and consider the fundamental rationale for commissioning to ensure that project requirements are met. For a seamless process and ultimately greater availability, efficiency and reliability, data center managers should then look for a CxA like Emerson Network Power that offers a scope of services broad enough to encompass all potential requirements.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    SugarSync Expands With Equinix in Silicon Valley

    SugarSync renews and expands with Equinix in Silicon Valley, Latisys’ IaaS platform achieves PCI and HIPAA compliance, and NaviSite cloud solutions earn a spot in the UK Government G-Cloud III programme.

    SugarSync expands cloud with Equinix. Colocation specialist Equinix (EQIX) announced that SugarSync, a premium file sharing service, has renewed its contract with Equinix and continues to expand in the Silicon Valley (SV4) data center. Since SugarSync began with Equinix in 2007, the company has experienced significant growth and its infrastructure has doubled across Equinix’s SV4 and SV2 data centers. Equinix provides close proximity to mobile carriers, so SugarSync can ensure an optimal mobile experience for its customers and continued growth in the mobile market. “Thanks to our relationship with Equinix, we’ve had the physical density and flexibility to partner with a number of carriers using different networking options over time,” said Jason Mikami, vice president of Operations at SugarSync. “With high availability and plentiful network options to choose from, Equinix is ideally suited to provide SugarSync the low-latency bandwidth and high uptime we need to continue to grow our business.”

    Latisys achieves PCI and HIPAA compliance. Managed hosting provider Latisys announced that third-party auditors have validated its compliance with industry best practices and regulatory standards for security and reliability. The compliance reports confirm that Latisys’ entire platform aligns with key regulatory standards and government requirements, including the PCI Data Security Standard (DSS) 2.0, the Health Insurance Portability and Accountability Act (HIPAA), and the Gramm-Leach-Bliley Act (GLBA). Additionally, the Latisys IaaS platform is operated under SSAE 16 (SOC 2 Type 2 and SOC 3) audited controls.

    NaviSite placed on UK G-Cloud III program.  NaviSite announced that three of its premier cloud solutions have been accepted onto the new Government G-Cloud III programme. Local UK authorities, Government departments and other public sector organisations will now be able to purchase NaviCloud Dynamic Compute (infrastructure-as-a-service), NaviCloud ONE (desktop-as-a-service), NaviCloud Intelligent Storage: Share and Vault (storage-as-a-service and backup-as-a-service) platforms through the Government’s CloudStore. The third framework iteration of G-Cloud was launched in May 2013 along with a new CloudStore, which has been redesigned to be intuitive and easier to use. “We aspire to build the most secure, robust and intelligent cloud platforms, so to receive this kind of recognition is a real affirmation that we are achieving our goals,” said Sean McAvan, Managing Director, NaviSite Europe Ltd. “Millions of people in the U.K. rely on Government IT to provide critical services. It is very important that these services are delivered in a secure and robust manner, which also delivers value. We look forward to working together as a trusted partner to help Government organisations realise the benefits of scalable, flexible, low-cost cloud solutions.”

    3:00p
    Juniper Networks Selected by Lotus F1 Team

    Network connectivity news around the globe – coming from Level 3 in Monaco, Ciena in Canada and Juniper in the U.K.

    Juniper selected by Lotus F1 team. Juniper Networks (JNPR) announced that Lotus F1 Team, currently standing a close fourth in the Constructors’ Championship after the first seven races, has built a mission-critical network infrastructure using Juniper’s portfolio of switching, security, wireless LAN, routing and application software solutions. The Lotus F1 Team has also deployed Juniper QFabric technology in its two data centers to flatten the network architecture to reduce latency and improve performance, creating a mission-critical, carrier-class private cloud environment. “From the design concept of each season’s car, through component engineering and production to testing, qualifying and competing at each race, we have to deliver innovation and excellence with no margin for failure, error or delay,” said Patrick Louis, CEO, Lotus F1 Team. “Our network underpins the entire operation, so we need a partner who is equally innovative and reliable, and who can secure the highly valuable data we share across the team. Juniper enables Lotus F1 Team to build the best network so we can strive to be the best grand prix team.”

    Level 3 and Monaco Telecom sign agreement. Level 3 Communications (LVLT) and Monaco Telecom announced the signing of a strategic agreement that will connect the Principality of Monaco to Level 3’s global Internet backbone network. Level 3 gains a point of presence in Monaco, and Monaco Telecom can meet increasing demand as well as connect to some of the largest cities in Europe, Latin America, Asia and the United States. Level 3’s redundant infrastructure will be connected in the Principality with direct access to the EIG (Europe India Gateway) submarine cable. “I am extremely pleased that Level 3 Communications and Monaco Telecom have joined forces to offer EIG cable and Monaco Telecom customers access to one of the most connected IP networks,” said Martin Peronnet, CEO of Monaco Telecom. “This new agreement reflects the ever-growing demand for high-speed Internet services and business networks. This is why Monaco Telecom has equipped itself with all of the essential elements, global reach and capabilities needed to make it a major asset in the Principality and to further the development of companies in Monaco.”

    CANARIE expands network with Ciena. Ciena (CIEN) announced that CANARIE, Canada’s Advanced Research and Innovation Network, has deployed Ciena’s 6500 Packet-Optical Platform equipped with third-generation WaveLogic Coherent Optical Processors to support the 100G (100 Gigabits per second) upgrade and expansion of a route connecting Montreal and New York. The new ultra-high-speed network will help satisfy the demands of an increasingly data-intensive, global research environment, including big data projects such as the Large Hadron Collider (LHC) at CERN in Europe. Canada’s TRIUMF facility in Vancouver is a Tier-1 data site for the LHC computing grid and uses the CANARIE network to transfer data to and from the other sites. “With Ciena’s 100G technology, we are able to expand and upgrade the Montreal to New York route of our vast R&E network, while at the same time significantly reducing our operating costs and increasing performance over this busy link,” said Jim Roche, president and CEO of CANARIE. “This is a first step in doing the same at other segments of our network. Ciena has been a critical research and technology partner to CANARIE for more than two decades. CANARIE’s fibre optic networks and Ciena’s optical technologies are invisible to end users, but this infrastructure is absolutely critical to enabling academics, scientists and researchers to leverage global big data resources to create new knowledge and new opportunities for innovation.”
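    For a rough sense of what a 100G research link means in practice, the sketch below estimates how long a petabyte-scale transfer takes at that line rate. The dataset size and link efficiency are assumptions for illustration, not CANARIE or TRIUMF figures.

    ```python
    # How long does it take to move LHC-scale data over a 100G link?
    # Illustrative arithmetic; link efficiency and dataset size are assumptions.

    LINK_GBPS = 100
    EFFICIENCY = 0.9                  # assumed usable fraction of line rate
    DATASET_PB = 1                    # assumed transfer size, decimal petabytes

    bits = DATASET_PB * 10**15 * 8
    seconds = bits / (LINK_GBPS * 10**9 * EFFICIENCY)
    print(f"{DATASET_PB} PB over a {LINK_GBPS}G link: ~{seconds / 3600:.1f} hours")
    ```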

