Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, March 6th, 2013

    12:30p
    Data Center Jobs: CBRE

    At the Data Center Jobs Board, we have a new job listing from CBRE, which is seeking a Senior Critical Facilities Manager – Tier III Data Center in Dayton, Ohio.

    The Senior Critical Facilities Manager is responsible for supervising and managing property and engineering staff, including oversight of priorities, shift staffing, recruiting, training, succession planning, and personnel development. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    1:30p
    Improving Capacity Planning Using Application Performance Management

    Jason Meserve is solutions marketing manager for CA Technologies’ Service Assurance portfolio, which helps ensure the performance, availability and quality of IT services as infrastructure and cloud options evolve.

    JASON MESERVE
    CA Technologies

    The IT groups in most organizations serve multiple “bosses.” First, there are the business owners who rely on business applications to drive revenue and improve productivity. Second, there’s the end user – be it an external customer or an internal employee – who demands an exceptional experience. What these groups have in common is that they simply want the application to work, and work flawlessly; they are not concerned with how it works or how much it costs. But there’s a third boss that does care about costs: the CFO’s office. They too want things to work, but they’d like to keep budgets in check.

    While IT budgets have remained relatively flat, the demand for IT services is growing sharply, driven in part by the increased use of mobile devices and the consumerization of IT. In addition, today’s IT organizations are tasked with managing an increasingly complex infrastructure comprising physical, virtual, cloud and mainframe systems, all of which need to be optimized to deliver today’s business-critical applications.

    Performance Tied to Demand

    Poor performance is often related to increased demand for services. Sometimes it’s a sudden spike in demand caused by a “Black Friday” event, while other times performance problems creep up over time as demand for a service slowly grows until it reaches a tipping point. In either case, the root cause of the performance problem ties back to not understanding and proactively managing computing capacity on an ongoing basis.

    Previously, IT got around this by over-provisioning infrastructure for peak demand. This is a very costly way to manage a data center. No organization can afford to have lots of idle servers sitting around eating into the bottom line. Moreover, while virtualization has helped improve physical server utilization from single-digit rates, the average utilization of a virtualized server is still in the 20 to 30 percent range, meaning systems remain underutilized.
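    As a rough illustration of what 20 to 30 percent average utilization means in practice, the short Python sketch below sums up how much provisioned capacity sits idle across a small virtualized fleet. The host names, core counts and utilization readings are hypothetical, not figures from CA Technologies.

    # Hypothetical illustration: how much provisioned capacity sits idle
    # when average host utilization is in the 20 to 30 percent range.
    # (host name, provisioned CPU cores, average utilization as a fraction)
    hosts = [
        ("esx-01", 32, 0.22),
        ("esx-02", 32, 0.31),
        ("esx-03", 64, 0.18),
        ("esx-04", 64, 0.27),
    ]

    total_cores = sum(cores for _, cores, _ in hosts)
    used_cores = sum(cores * util for _, cores, util in hosts)

    print(f"Provisioned cores: {total_cores}")
    print(f"Cores in use:      {used_cores:.1f}")
    print(f"Idle capacity:     {100 * (1 - used_cores / total_cores):.0f}%")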

    While monitoring tools such as application performance management can warn of system slowdowns and impending disaster as certain thresholds are met or exceeded, IT still needs a way to cost-effectively address the capacity issue without increasing risk to the business. The days of throwing hardware – and therefore money – at the problem are gone for most shops. IT must be able to get the most value for its dollar while minimizing risk and continuing to meet the expectations of end-users and the business.
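    The snippet below is a minimal sketch of the kind of threshold check a monitoring or APM tool applies as metrics approach their limits. The metric names and limits are made up for illustration; real products evaluate rules like these continuously against live telemetry.

    # Minimal sketch of threshold-based alerting; metric names and limits are hypothetical.
    THRESHOLDS = {
        "cpu_utilization_pct": {"warn": 75, "critical": 90},
        "response_time_ms":    {"warn": 500, "critical": 2000},
        "queue_depth":         {"warn": 100, "critical": 500},
    }

    def evaluate(sample):
        """Return alert strings for any metric at or above a threshold."""
        alerts = []
        for metric, value in sample.items():
            limits = THRESHOLDS.get(metric)
            if limits is None:
                continue
            if value >= limits["critical"]:
                alerts.append(f"CRITICAL: {metric}={value}")
            elif value >= limits["warn"]:
                alerts.append(f"WARNING: {metric}={value}")
        return alerts

    print(evaluate({"cpu_utilization_pct": 93, "response_time_ms": 420}))
    # -> ['CRITICAL: cpu_utilization_pct=93']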

    Ensuring Application Performance While Reliably Predicting Future Growth

    In order to keep up with the increasing demand for IT services, and deliver an exceptional end-user experience while keeping within budget constraints, IT organizations must be able to proactively identify, diagnose and resolve performance problems by monitoring all transactions. They must also be able to assess current capacity requirements while reliably predicting future growth, without having to overbuild the system and spend needlessly on hardware and cloud services that may go unused.

    Technologies such as application performance management (APM) and capacity management can help IT organizations reduce risk and keep a close eye on business-critical application performance while ensuring that capacity needs are right-sized for today’s needs and future growth.

    A modern APM system delivers 360-degree visibility into all user transactions across a hybrid-cloud infrastructure – physical, virtual, cloud and mainframe – to understand the health, availability, business impact and end-user experience of critical enterprise, mobile and cloud applications. With a good APM deployment, organizations can proactively identify, diagnose and resolve problems throughout the application lifecycle to put themselves firmly in control of the end-user experience and optimize the performance of critical, revenue-generating services.

    What are the Benefits of Capacity Management?

    Capacity management provides predictive analytics that allow users to simulate changes to application and infrastructure components in order to help ensure application response time goals are met once the application is moved to the production environment. Capacity management also provides prescriptive insight into the infrastructure needed for optimal IT operations, including support for both new workloads and workloads that change over time. Tangibly, this prescriptive insight not only helps to right-size application environments on release, but ultimately helps to reduce the number of performance issues often incurred in the rollout of a new application or release.
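    To make the predictive side of capacity management concrete, here is a rough sketch that fits a linear trend to twelve months of hypothetical utilization samples and estimates when the resource would cross a planning threshold. Commercial capacity-management tools use far richer models and simulate whole application environments, but the underlying question – when will demand outgrow capacity? – is the same.

    # Rough sketch of trend-based capacity forecasting with hypothetical data:
    # fit a linear trend to monthly utilization and estimate when it crosses
    # a planning threshold. Real capacity-management tools use richer models.
    import numpy as np

    months = np.arange(12)                                 # the last 12 months
    utilization = np.array([41, 43, 44, 46, 49, 50,
                            53, 55, 56, 59, 61, 63])       # percent of capacity

    slope, intercept = np.polyfit(months, utilization, 1)  # simple linear fit
    threshold = 80.0                                       # planning limit, percent

    current_fit = slope * months[-1] + intercept
    months_to_threshold = (threshold - current_fit) / slope
    print(f"Growth rate: {slope:.1f} points per month")
    print(f"Estimated months until {threshold:.0f}% utilization: {months_to_threshold:.1f}")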

    2:30p
    European News: Savvis Opens New London Data Center

    The exterior of the Next Generation Data Center, a three-story building that supports 375,000 square feet of wholesale data center space. NGD was selected this week to host business software group UNIT4. (Photo: Next Generation Data)

    Savvis expands to meet European demand, and Interxion and Next Generation Data record customer wins.

    Savvis opens new London data center. Savvis, a CenturyLink (CTL) company, announced the opening of a new data centre for the London metro market. The expansion of Savvis’ operations in Slough, England, boosts the site’s total square footage to 100,000, spreading 8.88 megawatts of power across the site’s two data centers. The new LO5 data center features 35,000 square feet of raised floor space and will have an initial 2.4 megawatts of IT load. “The success of our existing data centre in Slough confirms that businesses view this area as a strategic geographic location,” said Jeff Von Deylen, president of Savvis. “Our hosting portfolio in Slough allows businesses in the financial services, consumer brands and other verticals to benefit from Savvis’ carrier diversity, interconnectivity and cloud services.” LO5 is Savvis’ fifth data center in the London area and sixth in Europe.

    Cancer Research UK expands with Interxion. Interxion (INXN) announced that Cancer Research UK is running all of its primary systems, including CRM systems, finance systems and Citrix virtualised desktops, from its London data centre campus. Cancer Research UK said that Interxion was a part of its project to consolidate eight offices into one and move to a new thin-client environment. Since the initial move the charity has also implemented VNX storage, which replaced all the production storage for its virtualized desktops. “As a charity, our budgets are limited but our needs are increasing all the time,” said Mary Hensher, IT Director at Cancer Research UK. “Not only did the Interxion team make the infrastructure work as hard as possible for us, adding a second smaller cool corridor that didn’t initially look possible, but their response time was incredibly fast. We know our data centre strategy is future-proofed. If we outgrow our current space, Interxion has a second data centre, LON2, on the same campus.”

    NGD selected by UNIT4. Next Generation Data (NGD) announced today that it has been selected by global cloud-focused business software group UNIT4 to host the data the company stores on behalf of UK-based clients. UNIT4 selected NGD to extend its existing services and better meet UK legislative and operational requirements. NGD provides UNIT4 and its customers with IL-3, ISO 27001 and PCI credentials, as well as operational procedures accredited under SSAE 16. “As UNIT4 drives to becoming the foremost cloud software vendor for businesses living in change, it is important that we build on the company’s reputation for secure, reliable and scalable hosting centres,” said Anwen Robinson, Managing Director of UNIT4 Business Software Ltd. “UK-based hosting in particular is crucial to the business as it is a major market for UNIT4. This deal will mean that in both the public and commercial sectors we can more easily meet local legislative and security standards. The selection of NGD as the home for our cloud solutions underlines the quality of its centre, personnel and supporting infrastructure.”

    3:15p
    SGI Launches New Big Data Storage Platform

    SGI today announced the InfiniteStorage 5600 (IS5600), a high-performance storage platform suited for high performance computing (HPC) and Big Data workloads. Built on a modular architecture, the new 5000-series system delivers significantly increased performance.

    “SGI customers are constantly pushing the edge of performance requirements for storage arrays,” said Bill Mannel, vice president of product marketing at SGI. “The flexibility of the InfiniteStorage 5600 platform and the choices it offers enable these users to push the limit without breaking their budget.”

    The new IS5600 is able to intermix multiple drive types and enclosure densities, ranging from 4TB drives to extreme-performance SSDs, in a single, scalable system. Tested against the SPC-2 benchmark, the IS5600 recorded the highest throughput per spindle, at 2.5 times the nearest published result. The IS5600 is available in 12-, 24- and 60-drive enclosures, with each enclosure housing dual controllers. It can also be configured as an expansion unit to mix and match 2.5″ and 3.5″ drives in a variety of choices spanning SSD, 15K SAS, 10K SAS and near-line SAS.
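    Throughput per spindle is simply aggregate benchmark throughput divided by the number of drives in the tested configuration. The figures below are hypothetical placeholders used to show the calculation, not SGI’s published SPC-2 results.

    # Throughput per spindle = aggregate throughput / number of drives.
    # Hypothetical placeholder figures, not SGI's published SPC-2 results.
    aggregate_throughput_mbps = 8000.0   # MB/s for the whole tested configuration
    spindle_count = 120                  # drives in that configuration

    print(f"Throughput per spindle: {aggregate_throughput_mbps / spindle_count:.1f} MB/s")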

    4:15p
    Storage News: Hitachi, EMC, nScaled, NetApp

    Here’s a roundup of some of this week’s headlines from the storage industry:

    Hitachi powers Hollywood visual effects. Hitachi Data Systems (HDS) announced that Toronto-based visual effects post-production studio Soho VFX leverages the Hitachi NAS Platform (HNAS) to store and manage its most complex visual effects. Soho VFX uses the Hitachi platform to efficiently store and manage massive image data and keep it secure and instantly accessible. These capabilities lower the total cost of ownership, speed data migration, and enable more iterations of shots because renders are completed more quickly. Soho VFX has done visual effects for films such as “The Chronicles of Narnia”, “The Twilight Saga: Breaking Dawn – Part 1 and Part 2”, and the new release “Jack the Giant Slayer”. “Hitachi NAS Platform continues to evolve with us, which is imperative due to increased work demands and diminished timelines,” said Allan Magled, co-founder, Soho VFX. “Things are constantly changing and thanks to Hitachi Data Systems technology, we are able to handle it every time.”

    EMC launches Xtrem Family of Flash products. EMC announced the Xtrem Family of Flash-optimized server and storage products and introduced a new line of EMC XtremSF PCIe-based Flash cards. The new XtremSF server flash hardware can be deployed either as direct-attached storage (DAS) that sits within the server, or in combination with EMC XtremSW Cache server caching software to turbocharge network storage array performance. EMC also announced the release of XtremIO to select customers, to deliver higher levels of “functional IOPS” to applications that require high levels of random I/O performance. XtremSF 550GB and 2.2TB eMLC capacities are currently available, with 700GB and 1.4TB capacities available in the second quarter of 2013. “Flash technology is enabling new levels of application performance and is the single biggest consideration in how customers are architecting their data centers today,” said Zahid Hussain, Senior Vice President and General Manager, EMC Flash Products Division. “Today, we are delivering a market-leading and comprehensive portfolio of Flash solutions across a variety of customer use cases and requirements. Going forward, we are dedicated to providing increased value through flash-optimized software and systems to break the barriers of today’s infrastructure silos.”

    nScaled supports NetApp storage systems. Online backup and disaster recovery provider nScaled announced that its enterprise-class recovery-as-a-service (RaaS) solution now supports NetApp (NTAP) storage systems. Using the NetApp Data ONTAP API, the nScaled platform is directly integrated with NetApp storage systems. The new offering aims to provide certified backup, disaster recovery and remote storage for businesses that have, or plan to use, a NetApp storage infrastructure. “Tight integration between NetApp storage systems and the nScaled recovery service keeps data fully protected and makes it possible to meet recovery time objectives, without having to make tradeoffs between cost-efficiency, speed or security,” said Gary Hocking, technology director, service providers, NetApp. “nScaled helps NetApp customers considering cloud-based backup and disaster recovery to have consistent and uninterrupted access to data, even in the event of an unforeseen catastrophe.”

    4:16p
    SeaMicro Powers Massive LAN Party on Wheels

    The interior of the Firefall Mobile Gaming Unit (MGU), a 48-foot bus packed with 20 high-end gaming stations. (Photo: Red 5 Studios)

    Call it the world’s most advanced LAN party on wheels. The Firefall Mobile Gaming Unit (MGU) is a 48-foot bus packed with 20 high-end AMD gaming stations, which can support LANs of up to 3,000 people and connect gamers from any location to millions of others around the world. It’s an achievement that requires packing a lot of server power into a small space.

    The solution was the AMD SeaMicro SM10000-XE server, which packs 256 CPU cores into a 10U chassis. Red 5 Studios, the maker of Firefall, has deployed the SM10000-XE to power the MGU. SeaMicro says the server uses half the power and a third of the space of equivalent computing power in competing rackmount units. It also simplifies installation, management and maintenance by removing the need for top-of-rack switches, terminal servers and networking devices.

    “The mobile gaming unit would not have been possible without AMD’s SeaMicro server,” said Mark Kern, CEO of Red 5 Studios. “It allowed us to install a data center into a closet on a bus, yet achieve performance equal to one of ‘World of Warcraft’s’ original data centers. We found AMD’s SeaMicro technology to be far ahead of the competition with a combination of high performance, low energy use, scalable storage and small footprint. The new server brings us much closer to achieving our vision.”

    The SeaMicro x86 servers allowed the Red 5 Studios team to focus on creating an experience that was worthy of the company’s vision rather than having to constantly tweak and optimize its code so that it would work well. The gaming application is built using kernel-based virtual machine (KVM) technology. It is delivered and managed using OpenStack.
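    The article says only that the stack is built on KVM and managed with OpenStack; it does not describe Red 5’s tooling. As a generic illustration of that pattern, the sketch below boots a KVM-backed instance through the OpenStack SDK. The cloud name, image, flavor and network are assumptions, and the SDK shown here post-dates the 2013 deployment.

    # Generic sketch of launching a KVM-backed instance via the OpenStack SDK.
    # Not Red 5's tooling; cloud name, image, flavor and network are assumptions.
    import openstack

    conn = openstack.connect(cloud="mgu-cloud")            # named entry in clouds.yaml

    image = conn.compute.find_image("game-server-image")   # hypothetical image name
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("private")

    server = conn.compute.create_server(
        name="firefall-zone-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)          # block until the server is ACTIVE
    print(server.name, server.status)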

    Firefall, an upcoming free-to-play MMO shooter, is set in a science fiction universe 200 years in the future. The MGU allows Red 5 Studios to bring the Firefall experience to fans around the country.

    “Power and space requirements exist in every industry,” said Andrew Feldman, corporate vice president and general manager, Server Business Unit, AMD. “With an AMD SeaMicro server, Red 5 Studios was able to shrink what would normally take many racks of compute into a suitcase-size container that could easily be transported. We’re excited to help Red 5 Studios redefine the gaming experience and create a mobile gaming center on demand for its fans.”

    Here’s an exterior view of the MGU:


    (Photo: Red 5 Studios)

    4:30p
    Video: How Facebook Manages Data Centers at Scale

    When you’re adding servers and data centers as fast as Facebook, standardization and automation are your best friends. At the Open Compute Summit IV, held in Santa Clara in January, Facebook’s Delfina Eberly provided an overview of how the company uses standardization and automation to manage data centers at Internet scale. Eberly, the Director of U.S. Data Center Operations for Facebook, said the company has effectively automated all repairs that don’t require hands-on attention. As its growth accelerated, Facebook constantly assessed and updated its tools and workflow, and developed an integrated spare parts portal so that inventory stocking and parts replenishment were built into the workflow. Facebook also developed a custom ticketing system specific to its needs. While the company is known for its freewheeling engineering culture, data center operations is a different matter. “This is one place where we put a lot of rigor into how we operate,” said Eberly. “In the data center, we put a lot of rigidity into the workflow.” This has reduced the need for “tribal knowledge” in the data center, said Eberly, which was important as the company rapidly added new data center techs at various locations. This video runs about 30 minutes.
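    Eberly’s point about automating repairs and folding spare-parts stocking into the workflow can be illustrated with a generic sketch; this is not Facebook’s system, and the component names and stock levels are invented. A detected fault either gets auto-remediated, or it opens a ticket that reserves a spare for the technician.

    # Generic sketch of an automated repair workflow (not Facebook's system).
    from dataclasses import dataclass

    @dataclass
    class Fault:
        server_id: str
        component: str        # e.g. "dimm", "disk", "fan"
        remote_fixable: bool  # can it be resolved without touching the hardware?

    SPARES = {"dimm": 12, "disk": 40, "fan": 7}   # hypothetical on-site stock levels

    def handle_fault(fault):
        if fault.remote_fixable:
            return f"{fault.server_id}: auto-remediated, no ticket needed"
        if SPARES.get(fault.component, 0) == 0:
            return f"{fault.server_id}: ticket opened, {fault.component} on back-order"
        SPARES[fault.component] -= 1              # reserve the spare as part of the ticket
        return f"{fault.server_id}: ticket opened, {fault.component} reserved for technician"

    print(handle_fault(Fault("srv-1042", "disk", remote_fixable=False)))
    print(handle_fault(Fault("srv-2210", "fan", remote_fixable=True)))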

    For more news about Facebook’s data centers, see the Facebook Data Center FAQ or our Facebook Channel. For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    8:30p
    Interxion Uses Sea Water to Cool Stockholm Data Centers

    A cold-aisle containment system in an Interxion data center, viewed from above. (Photo: Interxion)

    European data center provider Interxion is no stranger to innovation. Over the years, the company has been a pioneer in modular design and cold aisle containment, and it is now using seawater to cool a Stockholm data center, generating some serious efficiency benefits. Energy costs have been reduced by 80 percent, the company said, freeing up enough capacity to allow additional customers to colocate in the facility.

    Interxion says the Power Usage Effectiveness (PUE) for its Stockholm facility has dropped to 1.09, making it one of the most efficient data centers in Europe. The kind of efficiency Interxion is achieving in Stockholm is most commonly associated with facilities that use air economization (free cooling), leveraging a cool outside environment to cool servers.

    “We don’t use outside air. We use chilled water, and we achieve 1.2 from this,” said Lex Coors, VP data center technology for Interxion. “With the sea water we can achieve a PUE of 1.1 because we do not have to cool it over time. With sea water, you can take it in and push it out easily.”
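    PUE is total facility power divided by the power delivered to IT equipment, so a rating of 1.09 means only about 9 percent of overhead on top of the IT load. A quick sketch with hypothetical round numbers:

    # PUE = total facility power / IT equipment power.
    # Hypothetical round numbers chosen to illustrate a 1.09 rating.
    it_load_kw = 1000.0        # power delivered to servers, storage and network gear
    overhead_kw = 90.0         # cooling, power-distribution losses, lighting

    pue = (it_load_kw + overhead_kw) / it_load_kw
    print(f"PUE = {pue:.2f}")  # PUE = 1.09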

    Mother Nature as Your Chiller

    Seawater cooling systems pump deep, cold seawater through a data center’s HVAC system. As a result, the air circulating within a facility is cooled, which has the effect of lowering the inside temperature. Although the mechanics of this process are similar to chiller systems, seawater cooling completely eliminates the need to cool water down, which requires high levels of energy.
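    To give a sense of why pumping cold seawater is attractive, the back-of-the-envelope sketch below applies the standard sensible-heat relation (heat removed = mass flow × specific heat × temperature rise) with hypothetical flow and temperature figures. It is an illustration only, not Interxion’s design data.

    # Back-of-the-envelope heat removal by a seawater loop: Q = m_dot * c_p * dT.
    # Flow and temperature figures are hypothetical, not Interxion's design data.
    flow_rate_kg_s = 100.0       # seawater mass flow through the heat exchangers
    c_p_kj_per_kg_k = 3.99       # approximate specific heat of seawater, kJ/(kg*K)
    delta_t_k = 6.0              # temperature rise across the heat exchangers

    heat_removed_kw = flow_rate_kg_s * c_p_kj_per_kg_k * delta_t_k
    print(f"Heat removed: {heat_removed_kw:.0f} kW")   # roughly 2,400 kW of IT heat load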

    Interxion’s seawater cooling system is particularly notable because it runs water through multiple data centers multiple times, instead of the conventional strategy of running water through just one facility. This method also reduces operational and environmental costs, as it requires half the amount of water to cool each of the data centers. Interxion also doubles the use of seawater by reusing the warm water to heat local offices and residential buildings before returning it to the sea.

    There are a number of techniques to tap external sources of cold water, effectively using Mother Earth as your chiller. But some work better than others, Coors said. He noted the challenges of using deep lake cooling systems versus seawater and aquifers.

    “With deep lake and aquifers there is basically a push back on using water,” said Coors. “People are so afraid of legionella that they don’t use water power. We have to look for alternatives. There’s drilling into the ground, but that’s not allowed often. We can use sea water and salt aquifers.”

    Coors noted there are advantages to working in Europe. “In Europe, we do not have to focus on the smart grid pipes, because basically if you are connected to a power grid in Europe, you’re connected to the national and international grid,” he said. “It’s a matter of paying a little bit more and telling them ‘I want this kind of green power.’”

    So will seawater cooling, and its benefits, become a trend in the US? “Depending on the area, it should be a design trend,” said Coors. “It really helps with the environment. If you have to run a pipe a mile, it’s still very beneficial. The Gulf is a bit hot, but in California there are enough opportunities.”

    A Track Record of Innovation

    Interxion has a history of innovating; in addition to seawater cooling, the company has been a pioneer in phased design and cold aisle containment. It has been practicing phased construction in its data centers since 1999, a time when most data centers were built as “barns,” with large open floor plans constructed in their entirety.

    “We had to build data centers in 11 countries and only had a limited amount of capital available,” said Coors. “I came from a shipping company and took that idea of shipping containers back to the data centers. I didn’t want to build a bunch of data centers I’d have to upgrade in three to four years. I did not install the whole infrastructure, nor did I build out the whole data center. Instead, we chopped the building into four phases of 10,000 square feet each and installed infrastructure to support a limited capacity, adding additional infrastructure over time so as not to interrupt operations.”

    9:00p
    eBay’s DSE: One Dashboard to Rule Them All?


    Has eBay developed one dashboard to rule them all? The company took a big step closer to the holy grail of a unified data center productivity metric, unveiling a methodology called Digital Service Efficiency (DSE) at The Green Grid Forum 2013 in Santa Clara, Calif.

    In the conference keynote, eBay’s Dean Nelson outlined a system of metrics to tie data center performance to business and transactional metrics. DSE enables balance within the technology ecosystem by exposing how turning knobs in one dimension affects the others, providing a “miles per gallon” measurement for technical infrastructure. In drawing direct connections between data center performance and cost, the dashboard provides eBay with insights that go directly to its bottom line.

    “We’re making $337 million per megawatt,” said Nelson, the Vice President, Global Foundation Services at eBay. “That’s the productivity of our infrastructure, not the cost overhead. Through the DSE Dashboard, these numbers are laid out in simple terms that are understandable across business roles. This starts conversations at every level about how we achieve goals. It’s that bridge that’s been missing for so long.”

    That data point provides a vivid example of the productivity of data center infrastructure, which typically has construction costs of $5 million to $10 million per megawatt for large users like eBay.

    How To Measure Productivity?

    The Green Grid has spent several years evaluating various metrics that could be used to measure data center productivity. The industry group popularized the use of Power Usage Effectiveness (PUE) as the leading metric for data center energy efficiency. But PUE was primarily a measure of facilities infrastructure, and didn’t address the effectiveness of IT systems within the data center. Various gauges have been proposed to measure productivity, but none has addressed all the objectives for an industry-level metric.

    With Digital Service Efficiency, eBay has developed a methodology it believes can bring these diverse puzzle pieces together. It’s based on eBay’s e-commerce operations, but the company says its approach can be adapted by other data center operators, who can substitute their own business metrics. “While the actual services and variables are specific to eBay, the methodology can be used by any company to make better business decisions,” eBay writes in an overview of its process. “Just as ‘your mileage will vary’ from any MPG rating, DSE provides an introspective view of how well a company has optimized its technical infrastructure.”

    Most importantly, eBay believes it has sorted out a way to integrate the many variables that a data center must serve.

    “Think about this as a Rubik’s cube,” explains Nelson. “On one side it’s performance. You have cost on the other side. You’re going to know cost per transaction. The third dimension is environmental impact. The fourth dimension is revenue; how much revenue is generated per transaction. There’s a balance needed – you can solve one side fairly easily, but solving all four sides is the goal and the true value.”

    During his presentation, Nelson shared some key metrics on eBay’s data center operations. The auction giant has 52,075 servers consuming 18 megawatts of power to support 112.3 million active users. That equates to revenue of $54 per user and $117,000 per server.
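    Those figures hang together arithmetically: revenue per megawatt times total power draw gives the revenue attributed to the infrastructure, which then divides out to roughly the per-user and per-server numbers Nelson quoted. A quick cross-check using only the numbers stated in the talk:

    # Cross-check of the DSE figures quoted in the talk, using only stated numbers.
    revenue_per_mw = 337e6       # dollars of revenue per megawatt
    power_mw = 18                # total power consumed by eBay's servers
    servers = 52075
    active_users = 112.3e6

    revenue = revenue_per_mw * power_mw
    print(f"Implied revenue:    ${revenue / 1e9:.2f} billion")     # about $6.07 billion
    print(f"Revenue per user:   ${revenue / active_users:.0f}")    # about $54
    print(f"Revenue per server: ${revenue / servers:,.0f}")        # about $116,500 (~$117K as quoted)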

    The development of DSE began three years ago, when the company was looking to unify its view of the business and the infrastructure, and to assess its impact in terms of energy, cost and environment. DSE is a dashboard for the company’s technical ecosystem – the data centers, compute equipment and software that combine to deliver its digital services to consumers.

    An MPG Rating for Data Centers

    “Much like a dashboard in a car, DSE offers a straightforward approach to measuring the overall performance of technical infrastructure across four key business priorities: performance, cost, environmental impact, and revenue,” said Nelson. Drawing a parallel to the miles-per-gallon (MPG) measurement for cars, Nelson argues that DSE shows how a company’s “engine” performed under real customer consumption – how the car performed as it was being driven or, in eBay’s case, how the eBay.com engine ran while its users drove it.

    “This is what is being consumed, this is how our customers are driving our car,” he said.

