Data Center Knowledge | News and analysis for the data center industry
 

Monday, August 12th, 2013

    2:40p
    DCIM Integration: Are IT Management Tools Enough?

    Lara Greden is a senior principal, strategy, at CA Technologies.

    LARA GREDEN
    CA Technologies

    When it comes to driving value through data center operations, one of the prevailing challenges for facilities and IT alike is a lack of common management tools. In a survey conducted by IDC, 63 percent of respondents reported that they do not have a common set of management tools covering servers, network, storage, power, cooling, etc.

    For those on the facilities side, what does it mean to not have an integrated tool set covering the complete data center picture? Likewise, what does it mean to not have the areas of power, space, and cooling visible and integrated with the tool sets that IT uses?

    Lack of Tools Can Hold Back Productivity

    It means more time spent on manual tasks. It reduces the ability to be agile. It also means a missed opportunity to provide value to the business, be it maintaining availability when it matters most or providing the data center capacity to meet the needs of new products and services that contribute to revenue generation.

    By bringing together accurate information on power, space, cooling, network, compute, storage, and more, Data Center Infrastructure Management (DCIM) software suites help facilities and IT get an integrated picture of their data center environment. They make information that was previously hidden in somebody else’s “black box” transparent. Furthermore, the analytics that become possible when you have a DCIM system that truly integrates with your environment help you address risk and capacity in new ways.

    Beyond IT Management Tools: Benefits of Integration

    Consider the analytics made possible by integrating with the various physical systems supporting the data center. Integration allows you to see all relevant metrics for a Computer Room Air Conditioner (CRAC), battery, or circuit breaker in one place. For example, one organization that I work with was able, thanks to its DCIM implementation, to identify potentially failing circuit breakers before the Building Management System (BMS) raised an alert. Being able to make the repairs faster put the organization at less risk of a potential incident and provided immediate value.

    DCIM Helps Manage Data Center Capacity

    Providing and meeting the capacity needs of the business is often a major driver for investment in DCIM software suites. Another organization I know of had an extremely lengthy, manual, and costly process for finding space for new servers in the data center. Even with a third party involved, the data was typically inaccurate and out of date. By looking to DCIM software as the technology solution supporting a people and process change, the organization can significantly simplify its processes and greatly improve the agility of its data center operations. Analytics through DCIM can suggest options for where to place devices with sufficient power, space, and cooling. No more tossing information back and forth, or making guesses and dealing with the issues later. This is another tangible area of savings for DCIM software.

    Integration with the equipment, devices, and systems in the data center is key to unlocking black boxes and achieving the benefits of transparency and analytics through DCIM. Integration capabilities play into the timelines of implementing DCIM software, and are one of the key areas to consider when looking under the hood of DCIM software offerings. When you can truly integrate the key data sources required, you will be able to stand behind your decision to invest in DCIM.

    An upcoming blog post will discuss some specific questions that we see forward-looking organizations asking as they build the business case for integrated DCIM software suites.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:15p
    Hurricane Electric Makes Canada IPv6ier

    Hurricane Electric continues to expand the reach of its next-generation IPv6 Internet infrastructure and services. The company has added two new Points of Presence in Canada, one at Global Server Centre in Winnipeg and the other at DataHive in Calgary. These join existing points of presence at Equinix in Toronto and Cologix in Vancouver and Montreal, making the whole country more IPv6ier.

    “We are pleased to offer reliable and cost-effective connectivity options while strengthening our commitment to the Canadian market,” said Mike Leber, President of Hurricane Electric. “Customers of both Global Server Centre in Winnipeg and DataHive in Calgary will have access to the depth and reach of Hurricane Electric’s global network, allowing for reduced router hops and improved fault tolerance.”

    Hurricane Electric first deployed IPv6 on its global backbone in 2001. Its global Internet backbone is one of the few that is IPv6 native and doesn’t rely on internal tunnels for its IPv6 connectivity. IPv6 is a core service: every customer is provided with it alongside classic IPv4 connectivity.
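    Dual-stack service of the kind Hurricane Electric describes can be verified from the client side. Below is a minimal, illustrative sketch (not Hurricane Electric tooling) that uses Python's standard socket module to see which address families a hostname resolves to; the hostname tested here is just a placeholder.

```python
import socket

def stack_support(host):
    """Return the IP address families (IPv4/IPv6) that a hostname resolves to."""
    families = set()
    for family, _, _, _, _ in socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return sorted(families)

# A host advertised over both protocols should report both families.
print(stack_support("localhost"))
```

    A host served over a native dual-stack network should report both families; an IPv4-only host will report just "IPv4".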

    Hurricane Electric has been on an overall expansion spree. The company added a fifth New York location with a Point of Presence at Telx. It also added new PoPs at Interxion Copenhagen and Madrid.

    3:45p
    Amazon’s Customer Win Trinity: Startup, Enterprise, Big Data


    There’s been a flurry of cloud news in the past few months, including continued industry price cuts, heated debates around APIs, IBM’s purchase of SoftLayer, and upcoming cloud exchanges. Through it all, Amazon Web Services (AWS) continues to lead the pack, with staggering growth.

    Amazon (AMZN) doesn’t break out revenue for its cloud business. But in the company’s most recent earnings report, revenue from the “Other” bucket was $844 million, up from $515 million a year ago, for a staggering 64 percent annual growth rate. The “Other” category contains other businesses, but AWS is believed to be driving almost all of the growth. AWS remains the 800-pound gorilla in the room, even as the cloud industry has shifted into hyperdrive, particularly in recent months.
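    The 64 percent figure can be reproduced directly from the two revenue numbers in the report; a quick sanity check in Python (figures in millions of dollars, as quoted above):

```python
def annual_growth_rate(current, prior):
    """Year-over-year growth, as a percentage of the prior-year figure."""
    return (current - prior) / prior * 100

# Amazon "Other" revenue: $844M vs. $515M a year earlier
print(round(annual_growth_rate(844, 515)))  # -> 64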

    The company’s customer win case studies for the month illustrate the range of the company’s appeal.

    One case study showcased the Schumacher Group, the third-largest and fastest-growing physician management company. It’s an enterprise that deals with sensitive medical information. AWS’s appeal to developers and startups has long been documented, so the company is trying to get out some solid enterprise stories, and Schumacher provides an example of this capability.

    The company works with over 200 hospitals and 4 million patients. Schumacher uses Amazon EC2 and S3, backing up a lot of archival images to S3. Its data is stored in SQL environments, and it was one of the early adopters of Redshift.

    Schumacher made the transition because it needed to be agile and respond to business needs. “Healthcare changes at the sign of a signature of a president, a governor – we need to be able to be very responsive,” said Douglas Menefee, Chief Information Officer, Schumacher Group.

    A critical project came up that would have taken seven to eight weeks to provision and procure the equipment, and three to four weeks to get everything configured. On AWS, the project cost $4,000 to $5,000 rather than the anticipated $80,000 for equipment and staffing had they built their own solution.

    Forrester recently published a report that placed AWS ahead of the pack when it comes to enterprise cloud development choices. Windows Azure and Google weren’t far behind, however. Application integration, mobile, and internal Web applications have been the three top uses for cloud environments over the last 12 months.

    The company still touts startup wins, however. Hailo is another AWS customer. It’s venture funded and has global scalability goals. The company provides a virtual hail-a-cab smartphone application. It needed to get up and running quickly and expand with demand. It grew to half a million passengers across Europe and North America in 18 months. The flexibility that AWS, and cloud in general, provides to startups and young companies is immeasurable; the capex hurdles to getting up and running have all but disappeared. It means there’s room for innovation, and the little guy can grow to compete with established competition.

    The third recent customer case study was the Genome Medicine Institute (GMI), which analyzes genomes to better understand human diseases. Faced with inadequate storage and performance coupled with increasing costs, GMI moved to AWS. The move reduced computing time by more than half, doubling its ability to analyze the human genome.

    This is a big data example for AWS, filling the customer trinity for the month: the startup, the enterprise, and the big data use case. Being able to process big jobs was a major early appeal of the cloud, given the capital expenditure it took to deploy similar hardware in-house. Big data work means short, intensive usage of a lot of servers, so the opex appeal of renting that capacity is large.

    5:00p
    Closer Look: The Argonne MIRA Supercomputer

    The Mira supercomputer in Argonne National Laboratory

    A look at the cabling supporting the Mira supercomputer in Argonne National Laboratory (Photo: ANL)


    The U.S. Department of Energy’s Argonne National Laboratory in suburban Chicago recently held a ceremony to commission its new supercomputer, known as MIRA, which is currently the world’s fifth-most powerful supercomputer. But as the Voice of America reports, lawmakers are concerned that the United States is losing ground in international supercomputing, a field the U.S. has dominated for decades. This video from VOA provides a closer look at MIRA and explores its position on supercomputing’s evolving frontier. It runs about 2 minutes, 30 seconds.

    See The Top 10 Supercomputers, Illustrated for a visual guide to the Top 500’s leading systems. For more supercomputing news, see our HPC Channel. For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    8:15p
    S&C Highlights Benefits of Medium Voltage Power Infrastructure

    A modular data center packed with servers at eBay’s Project Mercury facility in Phoenix, which uses an S&C medium voltage power system (Photo: eBay)

    As data centers scale up, should the voltage on your power distribution scale along with it? That’s the question being posed by S&C Electric Company, which has been around for over a century and is now seeing data centers become a growing portion of its business.

    At a time when many data center operators are seeking greater efficiency from their power distribution, S&C says its medium voltage UPS solutions offer up a better total cost of ownership (TCO), particularly in this world of bigger data centers. This approach is finding some believers in Phoenix, where S&C equipment is supporting data centers operated by eBay and Phoenix NAP.

    “For 50, 70, 150 megawatt data centers, you can’t serve it properly at 480 volts,” said Troy Miller, Marketing Manager in the Power Quality Products Division at S&C Electric Company. “At medium voltage, you get the best TCO. You save space.”
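    Miller’s point about serving large loads at 480 volts can be illustrated with the standard three-phase power relationship I = P / (√3 · V · pf). The sketch below is a back-of-envelope calculation, assuming a balanced three-phase load and unity power factor (both simplifications not stated in the article):

```python
import math

def line_current_amps(power_watts, line_voltage, power_factor=1.0):
    """Line current for a balanced three-phase load: I = P / (sqrt(3) * V * pf)."""
    return power_watts / (math.sqrt(3) * line_voltage * power_factor)

load = 50e6  # a hypothetical 50 MW facility, per the sizes Miller mentions
print(round(line_current_amps(load, 480)))     # roughly 60,000 A at 480 V
print(round(line_current_amps(load, 13_800)))  # roughly 2,100 A at 13.8 kV
```

    The roughly 30x reduction in current at medium voltage is what shrinks conductor sizes and switchgear footprint, which is the space savings Miller refers to.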

    Moving the UPS Outdoors

    S&C says its medium voltage design offers several potential advantages. It makes it easier for data centers to use the UPS system to provide backup power to an entire facility, including chillers and other mechanical infrastructure as well as the IT equipment. Because they can work outdoors, S&C’s UPS and distribution switchgear also allow data center operators to move the UPS infrastructure into the equipment yard, leaving more space for rentable cabinets inside the white space.

    There are some considerations and potential tradeoffs. S&C’s PureWave UPS is an “offline” system, meaning the batteries are connected to the energized bus only when the system is supplying power to the load. Most conventional data center UPS systems are online systems.

    Phoenix NAP is one example of a multi-tenant data center leveraging S&C to maximize its footprint. S&C provides the full backup power for that location. Phoenix NAP houses its PureWave UPS outdoors in an equipment yard, allowing it to maximize the data center space within.

    eBay was out of room at Project Mercury. The containerized data center in Phoenix is one of the most efficient data centers ever built. eBay leveraged S&C’s solution to free up some space inside, placing power infrastructure outside.

    Case Study: International Trading Company

    There are also examples of enterprise implementations. A major international trading company needed a reliable power system to keep its new Tier 3 data center running smoothly. The facility’s future load of 200 MVA challenged engineers to develop a highly reliable distribution system on a limited real estate footprint.

    The customer’s engineering firm proposed building a 34-kV distribution system to support the data center. S&C was hired to review the design and found it wasn’t compatible with the facility’s power reliability and redundancy needs. The proposed solution would also have been extremely costly. S&C presented a new design that ensured reliability and redundancy while respecting the space constraints.

    S&C proposed a compact 138-kV/13.8-kV substation that provided redundant power sources using two independent energized buses. Each bus could accommodate up to four 25-MVA transformers. It was more reliable and fit into the company’s small 150-by-350-foot lot at one end of the site.

    “In the case of this customer, we created a 100 megawatt substation expandable to 200 megawatts,” said Miller. “We delivered the substation in 9 months.”
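    The substation figures above are internally consistent. A small sketch, assuming the two independent buses provide 2N redundancy (either bus able to carry the load alone, an assumption not stated explicitly in the article); note the article moves between MVA and MW, which are comparable at a power factor near 1:

```python
def substation_capacity_mva(buses=2, transformers_per_bus=4, mva_each=25):
    """Installed capacity, and capacity remaining with one bus out of service."""
    total = buses * transformers_per_bus * mva_each
    remaining = total - transformers_per_bus * mva_each
    return total, remaining

print(substation_capacity_mva())  # -> (200, 100): 200 MVA installed, 100 MVA redundant
```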

    Maintenance Benefits

    There’s also a potentially unforeseen advantage of moving the power distribution outside. “We worked with a lot of 3-letter agencies,” said Miller. “They were excited about doing the power distribution outside – maintenance people don’t need clearance.”

    “One of our key messages is that we understand medium voltage, we’re on the power side of the house,” said Miller. “From 2 megawatts to 16 megawatts, as you’re expanding, on medium voltage you can double capacity easily.”

    The biggest trend the company is seeing is that data centers are getting bigger and bigger. “When you start talking about 30, 60, 100 megawatts, and you want to do that with diesel generator backup, you have to think about how you do this in a different way,” said Miller. “It doesn’t make sense at 480V. Natural gas turbines are one potential way, and there’s what we’re doing with medium voltage input.”

    8:26p
    Rackspace Adding 50 Servers Per Day

    As Rackspace expands its cloud, it’s adding more than 50 servers a day, filling up wholesale data halls like this one at DuPont Fabros in Ashburn, Virginia. (Photo: DuPont Fabros Technology)

    Growth is back at Rackspace Hosting. The cloud computing specialist added 4,762 physical servers in the second quarter of 2013, an average of 52 servers per day. That’s the equivalent of rolling in at least a full cabinet of servers every day, including weekends.

    The server growth in the three months ending June 30 marked a significant increase from recent reporting periods. Rackspace added 3,598 servers in the first quarter of 2013, and just 1,473 in the final quarter of 2012. The gain brought Rackspace’s total servers under management to 98,884.
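    The per-day figure follows from the quarter’s calendar length; a quick check, assuming the second quarter runs April 1 through June 30:

```python
from datetime import date

def servers_per_day(servers_added, period_start, period_end):
    """Average daily additions over a reporting period, inclusive of both endpoints."""
    days = (period_end - period_start).days + 1
    return servers_added / days

rate = servers_per_day(4762, date(2013, 4, 1), date(2013, 6, 30))
print(round(rate))  # -> 52, matching the reported average
```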

    The results were welcomed by investors, who have scrutinized the pace of growth in Rackspace’s cloud computing business in recent quarters, as the company has reported a lengthening of some enterprise sales cycles during its transition to a new cloud architecture based on the OpenStack cloud platform.

    Cloud Expansion Drives Growth

    So what’s driving growth? Rackspace (RAX) is deploying new cloud infrastructure across several regions.

    “We just completed a new cloud build-out in Australia,” said Rackspace CEO Lanham Napier. “When we built a new cloud, obviously, we create capacity upfront, and so we had a number of servers going online for that. We’re building a new cloud in Virginia, as well as in Hong Kong. So part of what’s driving this number is that we just flat out have capacity expansion and increases as we add these new clouds on a global basis.

    “The other element that drives this number is that it’s basically success-based growth. We did a little bit better growth-wise this quarter. And as we grow a little bit faster, we add more servers.”

    As a result, Rackspace invested $119.8 million in property and equipment in the second quarter, compared to $105.5 million in the prior quarter and $69.3 million in the same quarter in 2012.

    “We’ve increased our investment levels to play for a bigger, long-term outcome,” said Napier. “We feel we have the opportunity to build a really important global company and now is the time to make those investments. We want to make sure we are giving ourselves the proper funding to make the investments necessary to build a great long-term company. This is how we see things right now. We think now is the time to really go for it.”

