Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, November 19th, 2013

    12:30p
    NVIDIA Partnership With IBM Could Widen Use of GPU Accelerators

    NVIDIA’s new Tesla K40 graphics processing unit. (Photo: NVIDIA Corp.)

    In a move that could help expand the market for graphics processing units (GPUs), NVIDIA and IBM will collaborate on GPU-accelerated versions of IBM’s wide portfolio of enterprise software applications on IBM Power Systems. The move, announced Monday at the SC13 conference in Denver, puts NVIDIA’s GPU accelerators outside the high performance computing (HPC) realm for the first time, pairs them with IBM’s POWER8 CPU, and opens the door for use in enterprise-scale data centers. With big data in mind, the collaboration aims to enable IBM customers to more rapidly process, secure and analyze massive volumes of streaming data.

    “This partnership will bring supercomputer performance to the corporate data center, expanding the use of GPU accelerators well beyond the traditional supercomputing and technical computing markets,” said Ian Buck, vice president of Accelerated Computing at NVIDIA. “It will also provide existing supercomputing and high performance computing customers with new choices and technologies to build powerful, energy-efficient systems that drive innovation and scientific discovery.”

    NVIDIA and IBM also plan to integrate the joint-processing capabilities of NVIDIA Tesla GPUs with IBM POWER processors. By combining IBM POWER8 CPUs with energy-efficient GPU accelerators, the companies aim to deliver a new class of technology that maximizes performance and efficiency for all types of scientific, engineering, big data analytics and other high performance computing (HPC) workloads.

    “Companies are looking for new and more efficient ways to drive business value from Big Data and analytics,” said Tom Rosamilia, senior vice president, IBM Systems & Technology Group and Integrated Supply Chain. “The combination of IBM and NVIDIA processor technologies can provide clients with an advanced and efficient foundation to achieve this goal.”

    IBM Power Systems will fully support existing scientific, engineering and visualization applications developed with the NVIDIA CUDA programming model, allowing supercomputing centers and HPC customers to immediately take advantage of groundbreaking performance advantages. IBM also plans to make its Rational brand of enterprise software development tools available to supercomputing developers, making it easier for programmers to develop cutting-edge applications.
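
    For readers unfamiliar with the CUDA programming model referenced above, the sketch below shows the basic offload pattern it enables: a small kernel runs across thousands of GPU threads, one array element per thread. This is an illustrative example only, written in Python with the Numba library rather than CUDA C, and it is not part of the IBM/NVIDIA announcement; the kernel and array sizes are hypothetical.

        # Illustrative GPU offload in the CUDA style, using Python + Numba.
        # Not part of the IBM/NVIDIA announcement; requires a CUDA-capable GPU.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def saxpy(a, x, y, out):
            # Each GPU thread computes one element of out = a*x + y.
            i = cuda.grid(1)
            if i < x.size:
                out[i] = a * x[i] + y[i]

        n = 1_000_000
        x = np.random.rand(n).astype(np.float32)
        y = np.random.rand(n).astype(np.float32)
        out = np.zeros_like(x)

        threads_per_block = 256
        blocks = (n + threads_per_block - 1) // threads_per_block
        # Launching the kernel copies the arrays to the GPU, runs it, and copies results back.
        saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)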

    1:30p
    DCIM Makes A Difference for Colo Providers

    Lara Greden is a senior principal, strategy, at CA Technologies. Her previous post Putting Your DCIM Plan into Action appeared in October 2013.

    LARA GREDEN
    CA Technologies

    DCIM is essential for optimizing the efficiency of data center operations, for mitigating risk and for enabling IT agility. But DCIM can also help IT get maximum value out of its colo providers. And that value is becoming increasingly important as IT looks beyond its four walls for much-needed infrastructure capacity.

    Companies like RagingWire, Datotel, and Logicalis are using DCIM technology to bring highly differentiated value to their customers—while running more efficient data centers and improving the bottom line. But their real strategic reason for using DCIM is to serve their customers and differentiate their offerings. DCIM is helping them achieve market leadership in two fundamental ways: transparency and state-of-the-art data center operations.

    Transparency

    Transparency helps customers make decisions faster when it comes to problem resolution, server provisioning, and capacity planning and growth. For example, accurate data on power consumption helps IT assess key questions, such as the operational cost savings of remapping VMs to increase utilization, or the total cost-of-ownership impact of deploying a new server model or a new mainframe box. Features like visibility into the power chain or 3D visualization of the thermal environment also help IT quickly identify the root causes of problems and respond to them. This transparency also allows the colo provider to demonstrate a quantified value proposition for growing their customers’ use of their facilities and services.
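
    As a rough illustration of the kind of assessment such power data enables, the sketch below estimates the annual energy cost of a group of hosts before and after VM consolidation. All of the figures (power draw, electricity rate, PUE, host counts) are hypothetical assumptions for illustration, not data from any provider mentioned here.

        # Rough illustration of a consolidation estimate from measured power data.
        # All figures are hypothetical assumptions, not vendor or customer data.
        HOURS_PER_YEAR = 8760

        def annual_energy_cost(avg_watts, rate_per_kwh=0.10, pue=1.6):
            """Annual cost of one host's load, including facility overhead via PUE."""
            return avg_watts / 1000.0 * HOURS_PER_YEAR * pue * rate_per_kwh

        # Ten lightly loaded hosts at ~250 W each, versus four consolidated hosts at ~450 W each.
        before = 10 * annual_energy_cost(250)
        after = 4 * annual_energy_cost(450)
        print(f"before: ${before:,.0f}/yr  after: ${after:,.0f}/yr  savings: ${before - after:,.0f}/yr")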

    RagingWire takes the notion of transparency a step further. They offer a 100 percent availability SLA, and they back it up by offering customers visibility into power, cooling, and security. This means that, beyond 100 percent availability, customers can also access operational data on the critical systems that impact the business services that they deliver through a RagingWire data center. Through their DCIM system known as N-Matrix, RagingWire is raising the bar on bringing value to customers by offering monitoring, analysis, asset management and 3-D visualization capabilities.

    State-of-the-Art Data Center Operations

    DCIM is a hallmark of state-of-the-art data center operations. Colo customers want to know how you are helping to ensure uptime and availability. At a minimum, they expect real-time monitoring and intelligent alerting across all data center power and cooling infrastructure, regardless of vendor. This includes batteries, PDUs, UPSs, chiller plants, CRAHs, CRACs, and rack temperature and humidity. For example, Logicalis, a managed service and colo provider, helps ensure uptime and availability for their customers by monitoring more than 1,000 data points covering the power and cooling infrastructure and the thermal environment. Achieving operational monitoring at this scale is possible with DCIM technology.
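
    In simplified form, the intelligent alerting described above amounts to checking each incoming sensor reading against per-sensor limits and flagging anything out of range, as in the sketch below. The sensor names and thresholds are hypothetical and are not taken from Logicalis or any particular DCIM product.

        # Simplified threshold alerting across power/cooling sensor readings.
        # Sensor names and limits are hypothetical, not from any DCIM product.
        THRESHOLDS = {
            "rack12_inlet_temp_c":  (18.0, 27.0),  # ASHRAE-style recommended inlet range
            "ups_a_load_pct":       (0.0, 80.0),
            "crah_3_supply_temp_c": (15.0, 22.0),
        }

        def check_readings(readings):
            """Return (sensor, value, low, high) tuples for out-of-range readings."""
            alerts = []
            for sensor, value in readings.items():
                low, high = THRESHOLDS.get(sensor, (float("-inf"), float("inf")))
                if not low <= value <= high:
                    alerts.append((sensor, value, low, high))
            return alerts

        sample = {"rack12_inlet_temp_c": 29.4, "ups_a_load_pct": 62.0, "crah_3_supply_temp_c": 21.1}
        for sensor, value, low, high in check_readings(sample):
            print(f"ALERT: {sensor}={value} outside [{low}, {high}]")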

    Customers know that early warning prevents disaster. This requires proactive monitoring as enabled by DCIM technology. There is also a growing expectation that the colo provider not only assures the shared infrastructure through these capabilities but extends that benefit to the tenant’s infrastructure within the rack or cage.

    Organizations also expect the provisioning of new servers in racks to be backed by intelligence. They require visibility confirming that a newly placed server will have sufficient power and cooling; they are no longer comfortable simply assuming that sufficient power and cooling are available. The server placement functionality of DCIM software must also meet other challenges, such as finding sufficient rack space and meeting user-specified criteria for provisioning and decommissioning servers.
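
    A minimal sketch of that placement logic, using hypothetical rack data and limits chosen purely for illustration, might filter candidate racks on free rack units, spare power and inlet temperature before a server is placed:

        # Hypothetical rack inventory and a simple placement filter, for illustration only.
        RACKS = [
            {"id": "A01", "free_u": 6,  "spare_kw": 1.2, "inlet_temp_c": 24.0},
            {"id": "A02", "free_u": 1,  "spare_kw": 3.5, "inlet_temp_c": 22.5},
            {"id": "B07", "free_u": 10, "spare_kw": 4.0, "inlet_temp_c": 21.0},
        ]

        def candidate_racks(needed_u, needed_kw, max_inlet_c=27.0):
            """Racks with enough space and spare power whose inlet temperature is acceptable."""
            return [
                r for r in RACKS
                if r["free_u"] >= needed_u
                and r["spare_kw"] >= needed_kw
                and r["inlet_temp_c"] <= max_inlet_c
            ]

        # Placing a 2U server that draws roughly 0.8 kW:
        print([r["id"] for r in candidate_racks(needed_u=2, needed_kw=0.8)])  # ['A01', 'B07']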

    Datotel brings the value of sophisticated power monitoring straight to their customers. In a business defined by change, fast growth, and complexity, Datotel sought to monitor power at each element in the power delivery system to enable customers to proactively manage power consumption and gain insight for growth. By implementing DCIM technology, Datotel now advises customers on energy-efficient devices and computing models, and helps customers achieve growth while controlling energy costs.

    Taking a Portfolio Approach

    If you have come to expect DCIM from your colo provider, what does this mean for your organization’s data center portfolio strategy? Increasingly, IT management and DCIM users will seek a seamless experience for managing data center resources across the portfolio. Again, this is driven by DCIM’s central role in answering the critical question of “do we have the capacity to support business goals?” Organizations will place increasing importance on a DCIM solution’s architecture as it relates to scalability, security, and integration with the physical data center.

    In addition, as enterprises look to deploy and expand their DCIM implementations, they will increasingly require an easy on-boarding path for working with colo providers, ranging from asset data to real-time monitoring to the overall aggregation of data across their digital infrastructure. A portfolio approach to DCIM, across owned and leased data center resources, offers many benefits. It helps IT and data center managers gain visibility into floor and rack space, power capacity, and the thermal environment across all data center locations, and do data-driven capacity planning that improves business results.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:31p
    NetApp Introduces New Flash Storage

    The NetApp EF550 all-Flash storage array. (Photo: NetApp)

    NetApp (NTAP) today introduced two new hardware platforms, the EF550 and E2700, and updated a third, the E5500, to address specialized workloads and the performance requirements of data-intensive applications.

    “Our product innovation strategy is to deliver the industry’s leading portfolio of cloud-integrated and flash-accelerated storage and data management solutions,” said George Kurian, Executive Vice President, Product Operations, NetApp. “These solutions satisfy our customers’ needs for their multiple workloads and deployment models — either dedicated or as part of a shared infrastructure. By expanding our EF-Series all-flash arrays and E-Series platforms we are meeting growing market demand for dense, performance-oriented architectures and delivering superior performance, reliability, efficiency and scale.”

    New platforms

    The new EF550 all-flash array offers sub-millisecond response times to accelerate latency-sensitive applications, along with remote replication capabilities to back up data to a remote site. The EF550 delivers more than 12GB per second of bandwidth, was tested at 400,000 IOPS in a high-availability configuration, and can scale up to 1.5 petabytes per system.

    The new E2700 provides storage for remote and branch offices in a 60-drive, 4U standard rack enclosure. Meanwhile, the E5500 expands enterprise data protection with the SANtricity suite of data replication features and flexible interface offerings, including 10Gb iSCSI and 16Gb Fibre Channel in addition to SAS and InfiniBand. The E5500 scales to 1.5PB per system for data-intensive storage at scale.

    Connecting Clouds

    NetApp previously outlined its strategy for using its clustered Data ONTAP operating system to provide seamless cloud management across any blend of private and public cloud resources. By harnessing the versatility and efficiency of Data ONTAP, NetApp now delivers a universal data platform, enabling dynamic data portability and extensive customer choice across private cloud, public cloud service provider, and hyperscale cloud provider options. More than 175 cloud service providers deliver over 300 cloud services built on Data ONTAP.

    2:30p
    Salesforce.com and HP to Team on Superpods

    Tech giants Salesforce.com (CRM) and HP (HPQ) have joined forces on the Salesforce Superpod, a new dedicated instance in the Salesforce multi-tenant cloud, running on HP’s Converged Infrastructure. The companies plan to jointly develop and market the Salesforce Superpod to the world’s largest enterprises, with HP as the first customer.

    The announcement was made in conjunction with Salesforce.com’s Dreamforce 2013 conference, which is under way in San Francisco this week. The Twitter conversation can be followed at #DF13.

    “The Salesforce.com and HP partnership is a breakthrough in cloud computing,” said Marc Benioff, chairman and CEO, Salesforce.com. “The Salesforce Superpod will allow individual customers to have a dedicated instance in the Salesforce multi-tenant cloud, powered by HP’s technology and fully managed within salesforce.com’s world-class data centers.”

    The Salesforce Superpod will be available for an additional fee to salesforce.com’s largest customers. Salesforce.com CEO Marc Benioff will be joined on stage by HP’s Meg Whitman Tuesday to discuss the new strategic partnership.

    “Trends like cloud, mobility and big data are creating a ‘New Style of IT’ and transforming what enterprise customers expect and need from technology,” said Whitman. “I’m excited to have HP and salesforce.com work together to help customers tackle these exciting challenges. By jointly developing and using each other’s technology, the Salesforce Superpod will deliver the highest standard in performance, reliability and management. HP intends to be the first customer of the Salesforce Superpod.”

    3:00p
    NVIDIA Launches Tesla K40 GPU Accelerator

    NVIDIA’s new Tesla K40 graphics processing unit. (Photo: NVIDIA Corp.)

    NVIDIA (NVDA) unveiled the NVIDIA Tesla K40 GPU accelerator, which the company called its most efficient architecture ever, achieving 4.29 teraflops single-precision and 1.43 teraflops double-precision peak floating point performance. The new K40 features double the memory and up to 40 percent higher performance than the K20X GPU, and 10 times higher performance than today’s fastest CPU.

    “GPU accelerators have gone mainstream in the HPC and supercomputing industries, enabling engineers and researchers to consistently drive innovation and scientific discovery,” said Sumit Gupta, general manager of Tesla Accelerated Computing products at NVIDIA. “With the breakthrough performance and higher memory capacity of the Tesla K40 GPU, enterprise customers can quickly crunch through massive volumes of data generated by their big data analytics applications.”

    Key features of the Tesla K40 GPU include 12GB of GDDR5 memory, 2,880 CUDA parallel processing cores, dynamic parallelism, and PCIe Gen-3 interconnect support. The Tesla K40 GPU accelerates the broadest range of scientific, engineering, commercial and enterprise HPC and data center applications. The NVIDIA Tesla K40 GPU accelerator is available immediately.
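
    The peak figures quoted above follow from the usual cores × clock × 2 (one fused multiply-add per core per cycle) calculation; the quick check below reproduces them, assuming the K40’s published 745 MHz base clock and a double-precision rate of one third the single-precision rate.

        # Back-of-the-envelope check of the quoted Tesla K40 peak figures.
        # Assumes the published 745 MHz base clock and a DP rate of 1/3 the SP rate.
        cuda_cores = 2880
        base_clock_ghz = 0.745
        flops_per_core_per_cycle = 2  # one fused multiply-add counts as two FLOPs

        sp_tflops = cuda_cores * base_clock_ghz * flops_per_core_per_cycle / 1000.0
        dp_tflops = sp_tflops / 3.0
        print(f"single precision: {sp_tflops:.2f} TFLOPS")  # ~4.29
        print(f"double precision: {dp_tflops:.2f} TFLOPS")  # ~1.43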

    TACC, HP, NVIDIA partner for remote visualization project

    The Texas Advanced Computing Center (TACC) at The University of Texas at Austin, along with technology partners HP and NVIDIA, announced that in January 2014 they will deploy Maverick, a powerful high performance visualization and data analytics resource for the open science and engineering community. In addition, TACC this month is deploying Stockyard, a 20-petabyte large-scale global file system. Other systems for storing and analyzing data sets and for hosting web portals and gateways that provide access to scientific data will be announced in 2014.

    The Maverick system will comprise five racks containing 132 HP ProLiant SL250s Gen8 compute nodes and 14 HP ProLiant management, login, and Lustre router servers. Each of the 132 compute nodes will include two ten-core Intel Xeon E5-2680 v2 processors, 256GB of DDR3-1866 memory, a Mellanox ConnectX-3 FDR InfiniBand FlexibleLOM adapter, and one NVIDIA Tesla K40 GPU accelerator. A Mellanox FDR InfiniBand interconnect will provide a high-performance communication platform.
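
    Taken together, and reading the 256GB figure as per-node memory, those specifications work out to roughly 2,640 Xeon cores, about 33TB of RAM and 132 Tesla K40 GPUs across the compute partition, as the quick tally below shows.

        # Quick tally of the Maverick compute partition from the figures above.
        # Assumes the 256GB of memory is per node rather than per processor.
        nodes = 132
        cores = nodes * 2 * 10            # two ten-core Xeon E5-2680 v2 per node
        memory_tb = nodes * 256 / 1024    # 256GB per node
        gpus = nodes * 1                  # one Tesla K40 per node
        print(f"{cores} CPU cores, {memory_tb:.0f} TB RAM, {gpus} GPUs")  # 2640 cores, 33 TB, 132 GPUs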

    “This system will be great for Big Data analysis — every node in Maverick will have large memory, a state-of-the-art GPU accelerator, and be connected to massive data storage,” said Niall Gaffney, TACC’s director of Data Intensive Computing. “Data scientists and all researchers will be able to use visual analysis techniques to explore data.”

    Partner support

    The Tesla K40 GPU will be available in the coming months from a variety of server manufacturers, including Appro, ASUS, Bull, Cray, Dell, Eurotech, HP, IBM, Inspur, SGI, Sugon, Supermicro and Tyan, as well as from NVIDIA reseller partners.

    • SGI announced the availability of NVIDIA Tesla K40 GPU accelerators in fully managed and integrated solutions across its entire server product line. Tesla K40 GPU accelerators are available in UV 2000, Dense Rackable servers, and the Rackable C2108 and UV 20 servers, and they can be hosted in an SGI ICE X fabric via Rackable service nodes. “NVIDIA’s accelerators enable our customers to realize significant improvements in processing performance,” said Bill Mannel, general manager, Compute at SGI. “Accelerator-based HPC solutions feature intelligent NVIDIA GPU Boost technology, which converts power headroom into a user-controlled performance boost, enabling our customers to unlock the untapped performance of a broad range of applications to address compute and Big Data challenges.”
    • Cray announced that its Cray CS300 line of cluster supercomputers and Cray XC30 supercomputers are now available with NVIDIA Tesla K40 GPU accelerators. “The addition of the NVIDIA K40 GPUs furthers our vision for Adaptive Supercomputing, which provides outstanding performance with a computing architecture that accommodates powerful CPUs and highly-advanced accelerators from leading technology companies like NVIDIA,” said Barry Bolding, vice president of marketing at Cray. “We have proven that acceleration can be productive at high scalability with Cray systems such as ‘Titan’, ‘Blue Waters’, and most recently with the delivery of a Cray XC30 system at the Swiss National Supercomputing Centre (CSCS). Together with Cray’s latest OpenACC 2.0 compiler, the new NVIDIA K40 GPUs can process larger datasets, reach higher levels of acceleration and provide more efficient compute performance, and we are pleased these features are now available to customers across our complete portfolio of supercomputing solutions.”
    • Supermicro debuted a new 4U 8x GPU SuperServer that supports new and existing active or passive GPUs (up to 300W), with an advanced cooling architecture that splits the CPU (up to 150W x2) and GPU (up to 300W x8) cooling zones onto separate levels for maximum performance and reliability. In addition, Supermicro has 1U, 2U and 3U SuperServers, FatTwin, SuperWorkstation and SuperBlade platforms ready to support the new K40 GPU accelerator.
    • Cirrascale announced it will offer the NVIDIA Tesla K40 GPU accelerator throughout its GPU-enabled blade server and high-performance workstation product lines. Utilizing a pair of the company’s latest proprietary 80-lane Gen3 PCIe switch-enabled risers, the GB5400 supports up to eight discrete NVIDIA Tesla K40 Accelerator cards in a single blade. “As always, NVIDIA is pushing the performance envelope with its latest GPU accelerator,” said David Driggers, CEO, Cirrascale Corporation. “Our customers and licensed partners in HPC are moving rapidly to take advantage of this increased performance, and want to ensure they can scale the solutions they choose. We’re confident the Tesla K40 GPU with our latest Gen3 switch-enabled riser meets these needs.”
    3:00p
    10 Key Considerations for a Successful DCIM Project

    The modern data center has evolved into a distributed infrastructure utilizing new kinds of resources. The proliferation of cloud computing, IT consumerization and ever-growing volumes of data has created new demands on the data center environment. Administrators are tasked with creating a more proactive environment capable of greater resiliency and better efficiency. To do so, they must turn to intelligent monitoring and management solutions that give them the information they need to make decisions both now and in the future.

    This white paper from Raritan explores 10 key considerations around a very important data center topic: DCIM. The data center requires better ways to monitor and manage key resources. The paper discusses, in detail, the following concepts:

    • Be clear on problems you want solved
    • Understand current processes and desired outcomes
    • Don’t get caught in the feature comparison trap
    • Partner with peers in other functions and gain alignment
    • Ensure there is a team identified to own and maintain the system
    • Start small and expand over time
    • Expect integration – make sure the platform is open
    • Take advantage of expert services
    • Ensure people are trained and regularly monitor usage
    • Partner with a trusted, reliable experienced vendor

    Download this white paper today to learn the key DCIM considerations that can help you create a robust and well-managed data center environment. As more users adopt cloud computing and emerging technologies, the data center will have to evolve to keep up with these demands, and there will always be a requirement to properly manage distributed, complex data center environments. With a solid management platform, your data center infrastructure will see not only functional benefits but financial gains as well.

    3:30p
    HP, Fidelity Say Modular Designs Are Enterprise-Ready

    Jake Ring of GE Critical Power and David Rotheroe, Distinguished Technologist and Strategist at HP, present on the benefits of fully modular data centers at the 7X24 Exchange Conference in San Antonio, Texas. Ring cited the key market drivers for modular designs: data storage growth, speed of development, lower upfront investment, lower operating costs and the ability to fit one’s infrastructure to one’s IT load. (Photo by Colleen Miller.)

    SAN ANTONIO - Modular designs are driving significant cost savings in creating mission-critical data centers at Fidelity Investments and HP, who say their experience demonstrates that modular designs are ready to deliver high availability for enterprise workloads.

    The two companies shared case studies yesterday at the 7×24 Exchange 2013 Fall Conference, which brought together more than 800 data center professionals at the JW Marriott Hill Country Resort in San Antonio.

    The presentations offered the latest data points in an ongoing discussion about the cost of modular data centers, and the types of workloads they should support. Over the past year, the debate has advanced from research reports on modular economics to real-world case studies from marquee enterprise brands.

    It should be noted that HP and Fidelity are motivated to evangelize the merits of modularity for the enterprise, as each is marketing its modular designs to the data center industry. But both companies are conspicuously eating their own dog food, and say it’s delicious, and significantly cheaper than traditional approaches.

    Here’s a look at Monday’s case studies.

    HP and GE

    When it needed to expand a data center in Georgia supporting its internal IT, HP evaluated a range of approaches, including a traditional bricks-and-mortar facility, a hybrid design combining buildings and modules (like HP’s “butterfly” design) and fully containerized modular solutions. The winning design would have to support high-density deployments and deliver high reliability with a 2N power design.

    HP went with a fully modular design using the HP 240a EcoPOD modules, paired with the new PowerMOD containerized power and cooling solution from GE Structured Solutions, a unit of GE Critical Power. The EcoPOD is a double-wide design that uses air cooling and, importantly, looks and functions like a raised-floor data hall.


    A look inside the hot aisle in an HP EcoPOD data center. (Photo: HP)

    “A lot of people believe modular is just for scale-out and low reliability,” said Dave Rotheroe, Distinguished Technologist and Strategist for HP IT. “It’s not true. Modular designs can and do apply in the enterprise. I’m using them today. Nobody seems to believe I can have an enterprise data center at a lower cost using containers.”

    But cost was a key driver in the decision, along with speed-to-market. Rotheroe said the modular designs offered meaningful gains in both areas.

    “Our deployments are 10 to 20 percent cheaper than the traditional brick-and-mortar data center I would have built,” said Rotheroe. “We think in the future, pricing will drop and it will become much, much cheaper (than traditional bricks and mortar). It won’t replace everything, but it will be a major part of the market.”

    The EcoPOD will have about 1 megawatt of power capacity, and be able to house 44 racks of IT gear.
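
    Those figures imply an average design density of roughly 23 kW per rack if the full megawatt were devoted to IT load, as the quick calculation below shows; actual per-rack draw will of course vary.

        # Average design density implied by the figures above.
        # Assumes the full ~1 MW of capacity is available to IT load.
        capacity_kw = 1000  # roughly 1 megawatt of power capacity
        racks = 44
        print(f"average design density: {capacity_kw / racks:.1f} kW per rack")  # ~22.7 kW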

    GE Critical Power has been deploying containerized power solutions for industrial customers for some time. With its PowerMOD solution, which it officially unveiled last week, it has productized its offering for the data center market. It features a transformerless UPS in either 500kW or 1,000kW size, as well as the ability to use free cooling to maintain the environment for the batteries.

    4:00p
    Scenes From the 7×24 Exchange Conference

    Carly Fiorina, former chairman and CEO of HP, gave the keynote address at the 7X24 Exchange 2013 Fall Conference in San Antonio. (Photo by Colleen Miller.)

    The 7X24 Exchange 2013 Fall Conference in San Antonio drew a record attendance of about 820 participants and kicked off Monday with a keynote from former HP chairman and CEO Carly Fiorina. The conference theme, “Turning Vision into Action,” was reflected in a day of presentations on leading change at an organization or business and on deploying innovative data center solutions such as modular prefabricated units and liquid cooling technologies. Check out Photo Highlights: 7×24 Exchange Conference Kicks Off.

    4:30p
    Cloud Building Continues, as CenturyLink Acquires Tier 3

    Telecom provider CenturyLink continues to build its cloud computing operation through acquisitions. After buying up Savvis and AppFog, CenturyLink (CTL) said today that it has now acquired public cloud provider Tier 3 and branded its offerings as CenturyLink Cloud. Terms of the deal were not disclosed.

    CenturyLink said Tier 3’s products will form the foundation of its cloud strategy and anchor the new Seattle-based CenturyLink Cloud Development Center.

    “Our mission is to provide world-class managed services to global businesses on virtual, dedicated and colocation infrastructures,” said Jeff Von Deylen, president of CenturyLink’s Savvis organization. “Tier 3’s innovative automation and self-service platform are game-changing for our global enterprise clients. From greenfield development to mission-critical apps, businesses have a trusted technology partner to seize new market opportunities. This acquisition underscores our continued commitment to delivering the most complete portfolio of cloud services.”

    Jared Wray, the founder and chief technology officer of Tier 3, will now serve as chief technology officer for the CenturyLink Cloud organization, and will help the company embrace DevOps and open source development through the Cloud Development Center.

    “We founded Tier 3 in 2006 with a vision for cloud services that make life easier for enterprise developers and IT alike,” said Wray. “We now have an amazing home at CenturyLink to carry this vision forward. Our platform roadmap will combine with CenturyLink’s global network and data center footprint and managed services team to help change the face of enterprise computing.”

    Headquartered in Monroe, La., CenturyLink is an S&P 500 company and the third largest telecommunications company in the United States. In addition to its cloud and telecom services, the company also offers advanced entertainment services under the CenturyLink Prism TV and DIRECTV brands.

