Data Center Knowledge | News and analysis for the data center industry

Wednesday, November 13th, 2013

    12:30p
    Avere Systems Launches Cloud NAS Storage


    Looking to change the economics and functionality of data storage for the cloud era, Avere Systems announced its new Cloud NAS solution this week at the Amazon Web Services re:Invent event in Las Vegas. A key component of Cloud NAS is the introduction of Avere FlashCloud software that integrates legacy Network Attached Storage (NAS) with Amazon S3 and Glacier services into a single global namespace (GNS) that presents a unified view of all files via familiar NAS protocols.
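    To picture the single-namespace idea, here is a rough sketch of merging a local NAS mount and an S3 bucket into one logical file view. This is illustrative only, not Avere's implementation; the mount point, bucket name and use of the boto3 library are assumptions for the example.

        import os
        import boto3  # assumes AWS credentials are already configured

        NAS_MOUNT = "/mnt/nas"      # hypothetical local NAS mount (NFS/SMB)
        BUCKET = "example-archive"  # hypothetical S3 bucket acting as the cloud tier

        def list_nas(root):
            """Walk the NAS mount and yield paths relative to the mount point."""
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    yield os.path.relpath(os.path.join(dirpath, name), root)

        def list_s3(bucket):
            """Yield object keys from the S3 bucket, page by page."""
            s3 = boto3.client("s3")
            for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
                for obj in page.get("Contents", []):
                    yield obj["Key"]

        # A crude "global namespace": one sorted view of files regardless of tier.
        namespace = {path: "nas" for path in list_nas(NAS_MOUNT)}
        namespace.update({key: "s3" for key in list_s3(BUCKET)})

        for path in sorted(namespace):
            print(f"{namespace[path]:>3}  {path}")

    The point of a product like FlashCloud is that clients never stitch tiers together themselves: the edge filer presents the merged view over familiar NAS protocols.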

    Avere Cloud NAS is designed to help enterprises achieve cloud-scale storage economics without having to adopt an unfamiliar object storage interface. Cloud NAS integrates existing storage systems with the cloud without sacrificing the availability or security of enterprise data.

    “We are currently working with customers who have massive amounts of data who had never thought it was possible to leverage the cloud for more than a fraction of their storage,” said Ron Bianchini, Avere President and CEO. “Avere Cloud NAS eliminates the most serious technological challenges these customers face with moving to the cloud. Moreover, the combination of Avere with Amazon Web Services offerings gives them an enormous cost advantage over traditional storage models and allows them to bend the data center cost curve.”

    Running on Avere FXT Edge filers, FlashCloud combines scalability and clustering for high availability, and ensures that data is always available even in the presence of network outages and other failures.

    “Avere has always been on the forefront of pushing the boundaries of file storage performance and efficiency,” said Terri McClure, Senior Analyst, Enterprise Strategy Group. “Avere Cloud NAS is a natural evolution of the product, advancing the adoption of cloud storage for serious business use cases. This latest offering provides that critical link for enterprise customers to utilize the cloud so they can reduce capital and operational expenses while benefiting from Avere’s scale-out and scale-up performance.”

    In conjunction with Avere FlashCloud, existing Avere products such as FlashMove and FlashMirror provide a fast path for easily moving online, live data to and from the cloud without disruption. Data can also be moved between cloud providers or mirrored to another location to mitigate the risk of a service outage.

    2:00p
    Servergy Raises $20 Million Funding, Adds Partner

    Servergy, Inc., a maker of hyper-efficient servers, has raised $20 million in funding for the fourth-quarter commercial launch of its new class of Cleantech Servers. Servergy is an IBM Business Partner that builds clean and green PowerLinux servers.

    “We feel blessed by the strong confidence the investment community and market has shown in Servergy and our breakthrough new class of servers,” said Bill Mapp, Servergy chairman and CEO. “We have our patents in place and a top-notch team of professionals on board. Together with our top global partners, we look forward to launching our new Cleantech Server line that helps solve some of the toughest power, real estate and cost challenges faced by data centers and companies who are grappling with the explosion of big data, cloud, caching and storage applications, globally.”

    The private funding subscription for the U.S. manufacturer was offered to accredited investors by The Williams Financial Group.

    The funding round allowed the company to gear up with the infrastructure and personnel needed to drive sales and complete engineering for its new hyper-efficient Cleantech Servers. Engineered using Servergy’s patented Cleantech Architecture, Cleantech Servers are positioned as a “clean and green” high-density, high-I/O accelerator for I/O-intensive workloads such as big data, cloud, caching and storage applications.

    The company is accepting pre-orders for expected delivery beginning in late 4Q2013.

    Partner Announced

    This week, Servergy announced that SUSE is now a go-to-market partner for Servergy’s new PowerLinux Cleantech Server line and its new CTS-1000 servers. The partnership will offer Servergy’s customers the advantages of SUSE’s PowerLinux expertise and support, along with its scale-up and scale-out options and the ability to run workloads virtually and in the cloud.

    2:42p
    The Mismatch Between Virtualization and Storage

    Sachin Chheda is the Director of Product and Solution Marketing at Tintri. He has long worked in information technology, with positions at HP, NetApp and Nimble Storage, developing and bringing to market products that power some of the largest enterprises. This is part one of a two-part series about the mismatch between virtualization and storage.

    SACHIN CHHEDA
    Tintri

    The IT world is becoming increasingly virtualized. A recent Enterprise Strategy Group (ESG) survey revealed that one third of respondents have already virtualized more than half of their x86 servers. But that percentage is expected to increase significantly over the next few years. Analysts and experts agree that almost all new IT workloads are being deployed in virtual environments.

    Many enterprises started their virtualization journey by focusing on their tier-two and tier-three applications. Impressed with the results of their first initiatives, these organizations are actively extending virtualization to include key tier-one applications and end-user desktops, taking advantage of the unmatched flexibility, agility, scalability and availability virtualization can bring to business-critical systems.

    This widespread adoption of virtualization is driving a software-defined approach to IT infrastructure — using the flexibility and configurability of software to decide how, when and where virtual machines (VMs) and applications are running and stored. This software-centered design does not tie the data center to any particular configuration, enabling IT to flexibly configure and scale the virtual infrastructure to best serve applications and end users. But true software-defined infrastructure is not possible without a storage platform designed and optimized for the unique needs of virtualized environments.

    Storage Impacts

    Although virtualization has improved the performance and manageability of the servers in the enterprise, it has created extra workload for the storage platform. The majority of enterprises embarked on the path to virtualization using general-purpose storage based on LUNs and volumes. Most were happy with the results they obtained while they still had the initial exuberance to tackle the many challenges general-purpose storage presents in virtualized environments. But now they are suffering from inadequate storage performance and the tremendous strain on their already overloaded IT staff.

    Storage Capabilities vs. Demands of Virtualization

    The storage management burden is due in large part to the significant mismatch between the capabilities of traditional storage and the demands of virtualized environments. General-purpose storage was designed to meet the needs of every system and application in the customer’s environment. But by trying to solve a wide range of problems with just one dated approach, it becomes a “jack of all trades, master of none.” Enterprises face four main challenges when using general-purpose storage in virtual environments: increased management complexity, inadequate storage performance, insufficient data protection, and disappointingly low ROI on virtualization initiatives.

    1. Management complexity: Virtualization has simplified the management of compute infrastructure with VMs, but it has made storage management much more complex. IT administrators spend an excessive amount of time configuring and managing storage to meet the requirements of the virtual environment, and these tasks are complex, error-prone and time-consuming for IT organizations using general-purpose storage. Analyst research and recent VMware surveys agree that in typical IT environments, two-thirds of all IT resources are spent on management, leaving only one-third for more strategic initiatives. The share left for strategic work may actually be even lower, considering the additional burden of storage administration in increasingly virtual environments.

    2. Storage performance: Virtualization places additional demands on storage performance. IT administrators must ensure proper configurations so performance doesn’t suffer when multiple users’ applications need simultaneous access to shared storage, or there are heavy workloads during crunch times. Some IT organizations try to solve the problem by paying lots of money for very fast, flash-only storage solutions. IT can throw any workload at these systems with decent results, but it comes at a significant price and can still be unpredictable in rapidly changing virtual environments.

    Other IT organizations try to improve performance by bolting flash options onto traditional storage systems or by adding disks to existing legacy solutions. With any of these disk-based approaches, enterprises end up significantly overprovisioning capacity, as the storage isn’t intelligent enough to automatically tune itself for various virtualized applications. Scaling storage to meet the growth in virtualization can also be a challenge. Deploying additional storage systems to meet growing performance and capacity needs increases administrative overhead. Using a traditional scale-out storage approach doesn’t solve the problem, as it adds unnecessary complexity and it still requires administrators to manually organize storage for virtualization.

    3. Data protection: VM-level data protection and availability are critical in a virtualized environment. General-purpose storage solutions handle backup and recovery at the volume or LUN level rather than the VM level. This adds complexity, as IT administrators must keep close watch on VM-to-LUN mapping (a short sketch after this list illustrates the problem). There are ways to approach data protection at the application level, but they add significantly more cost and complexity. Replication for business continuity and disaster recovery suffers from the same challenges, except that they extend over the network. Management complexity and bandwidth costs are the top reasons IT organizations shy away from deploying disaster recovery.

    4. TCO and ROI: Virtualization has significantly increased the costs of storage management and underlying storage infrastructure, negatively impacting the ROI of virtualization. Enterprises need storage solutions designed specifically for virtualization to improve TCO and ROI.
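    As a simple illustration of the VM-to-LUN mapping problem described in point 3 (all names here are hypothetical), LUN-granular snapshots mean that recovering a single VM drags its LUN neighbors along:

        # Hypothetical inventory: with LUN-level snapshots, the recovery unit is
        # the LUN, so the admin must track which VMs share which LUN.
        lun_to_vms = {
            "lun-01": ["web-vm-1", "web-vm-2", "db-vm-1"],
            "lun-02": ["vdi-pool-a", "vdi-pool-b"],
        }

        def lun_for_vm(vm_name):
            """Reverse lookup: which LUN holds this VM's virtual disks?"""
            for lun, vms in lun_to_vms.items():
                if vm_name in vms:
                    return lun
            raise KeyError(f"{vm_name} not found on any LUN")

        def restore_vm_from_lun_snapshot(vm_name):
            lun = lun_for_vm(vm_name)
            neighbors = [vm for vm in lun_to_vms[lun] if vm != vm_name]
            print(f"Rolling back snapshot of {lun} to recover {vm_name}; "
                  f"co-resident VMs also affected: {neighbors}")

        restore_vm_from_lun_snapshot("db-vm-1")
        # VM-aware storage removes this bookkeeping by snapshotting and restoring
        # at the granularity of the individual VM.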

    In part two, we explore the evolution of storage to better serve virtualized environments.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:00p
    Red Hat Expands OpenShift Online, Lowers Pricing

    At the AWS re:Invent conference in Las Vegas this week Red Hat (RHT) announced an expansion of its OpenShift Online public Platform-as-a-Service (PaaS) offering and services for its partner ecosystem. Red Hat is now expanding OpenShift Online’s global availability, and introducing new pricing and memory options.

    Red Hat is expanding Silver tier commercial availability of OpenShift Online to 14 new European countries: Greece, Poland, Bulgaria, Romania, the Czech Republic, Hungary, Slovakia, Croatia, Slovenia, Lithuania, Estonia, Latvia, Cyprus and Malta. With this expansion, Red Hat’s technical support and additional platform resources are available to developers and application providers in more than 30 countries. Red Hat is also lowering OpenShift Online’s gear/hour pricing by 50 percent and providing additional gear sizes to host larger applications.

    New gear/hour pricing for OpenShift Online is effective immediately. Additional memory options and expanded global access to OpenShift Online will be available in December 2013.

    Partner Ecosystem

    Red Hat also announced the expansion of its OpenShift partner ecosystem, adding several new partners to help deliver on its vision of giving developers even more choice across public, private and hybrid cloud deployments. The OpenShift suite of open source PaaS solutions provides built-in support for Node.js, Ruby, Python, PHP, Perl and Java, and lets developers add their own runtimes. OpenShift also supports many popular frameworks, including Java EE, Spring, Play and Rails. New OpenShift Ecosystem Partners include AppDynamics, Continuent, Kinvey, Netsource Partners, Phase2, StrongLoop, Vizuri, and Pat V. Mack.
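    For a sense of what actually gets pushed to a Python gear, here is a minimal WSGI “hello” application. This is generic WSGI rather than anything OpenShift-specific; cartridge file-layout conventions vary by platform version, so the filename and port are assumptions.

        # wsgi.py -- a minimal WSGI app of the kind a PaaS Python cartridge serves.
        def application(environ, start_response):
            body = b"Hello from a PaaS gear\n"
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]

        if __name__ == "__main__":
            # Local test server; in production the platform front end invokes
            # the `application` callable directly.
            from wsgiref.simple_server import make_server
            make_server("127.0.0.1", 8080, application).serve_forever()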

    “Whether it’s an expansive portfolio of complementary technologies or a rich network of partners to help end-users reap the benefits of cloud, customers want choice,” said Julio Tapia, director, OpenShift ecosystem, Red Hat. “With the OpenShift Partner Program, Red Hat continues to build upon our goal to bring developers and customers the industry’s broadest ecosystem to meet their evolving needs. Today’s announcement highlights the breadth and depth of partners OpenShift is attracting, and the joint value we can deliver to customers.”

    3:30p
    How to Develop ROI for DCIM Projects

    Let’s create the scenario: You and your IT associates have done a thorough evaluation of DCIM software and are convinced that DCIM will make a major difference in the operation of your data center. Through your research, you’ve identified one or more high-value, chronic problems that DCIM will certainly fix. You also have buy-in from related departments, such as the facilities team. Your next big challenge is to get final approval from the decision makers who control the budgets. Clearly, the best time to budget and get funding for DCIM is in conjunction with a major business-related IT project such as a data center construction, relocation, colocation, expansion or even the deployment of a new, business-critical application.

    The budget for all of these projects will include a significant allocation for additional resources and tools to facilitate successful project execution. Based on Raritan’s experience with organizations like the one in this scenario, several simple lessons can help you build a straightforward ROI model and get approval for your DCIM deployment. This white paper consolidates those key lessons.

    As the modern data center continues to evolve and new demands are placed on current infrastructure, good management solutions become critical. Download this white paper today to learn about key lessons that will help you budget for a successful DCIM deployment. These lessons include:

    • Understanding common, costly data center problems that can be fixed with DCIM
    • Identifying the high-value problems inside your data center
    • Understanding how much these problems are actually costing you
    • Aligning the DCIM project with corporate objectives
    • Building an ROI model and selling it to the decision makers

    Finally, this paper will give you a sample ROI model and even a presentation template to get you ready to create your next big DCIM project budget.
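    The shape of such an ROI model is simple enough to sketch in a few lines. Every figure below is a hypothetical placeholder to be replaced with numbers from your own data center, not a value from the white paper:

        # Hypothetical one-time and recurring costs of the DCIM deployment.
        dcim_license_and_deploy = 150_000    # one-time cost ($)
        dcim_annual_subscription = 30_000    # recurring cost ($/yr)

        # Hypothetical annual benefits identified in the problem analysis.
        stranded_capacity_recovered = 60_000  # deferred build-out / colo spend ($/yr)
        outage_risk_avoided = 40_000          # expected downtime cost avoided ($/yr)
        admin_hours_saved = 1_000             # hours/yr
        loaded_hourly_rate = 75               # $/hr

        annual_benefit = (stranded_capacity_recovered
                          + outage_risk_avoided
                          + admin_hours_saved * loaded_hourly_rate)
        annual_net = annual_benefit - dcim_annual_subscription
        payback_years = dcim_license_and_deploy / annual_net

        print(f"Annual benefit:  ${annual_benefit:,.0f}")
        print(f"Annual net gain: ${annual_net:,.0f}")
        print(f"Payback period:  {payback_years:.1f} years")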

    4:28p
    AMD Empowers APU Software

    During its annual APU13 Developer Summit in San Jose this week, AMD announced it is enabling its accelerated processing units (APUs) for next-generation servers through advancements in software tools developed by AMD in collaboration with technology partners and the open source community.

    Empowering Developers

    “Servers must be efficient, scalable and adaptable to meet the compute characteristics of new and changing workloads. Software applications that leverage server APUs are designed to drive highly efficient, low-power, dense server solutions optimized for highly parallel and multimedia workloads,” said Suresh Gopalakrishnan, corporate vice president and general manager of the Server Business Unit at AMD. “We have evolved our processor roadmap to support this opportunity, and now we are showcasing how the APU software ecosystem is gaining momentum and what developers can do to participate.”

    With server APUs based on Heterogeneous System Architecture (HSA) on the horizon, AMD has developed tools for software developers to take advantage of the benefits that HSA provides. HSA enables the CPU and GPU to work in harmony on a single piece of silicon, seamlessly moving the right tasks to the best-suited processing element with no data transfer penalties, and it makes more memory available to the GPU so that complex processing tasks can fit in a single node.
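    To see what HSA takes away, consider a conventional discrete-GPU sketch written with PyOpenCL (a generic OpenCL binding used here purely for illustration, not one of the AMD tools discussed below). The explicit device buffers and host/device copies are exactly the data-transfer steps a shared-memory APU avoids.

        import numpy as np
        import pyopencl as cl

        ctx = cl.create_some_context()   # pick whatever OpenCL device is available
        queue = cl.CommandQueue(ctx)

        a = np.random.rand(1_000_000).astype(np.float32)
        b = np.random.rand(1_000_000).astype(np.float32)

        # Explicit device buffers and host-to-device copies: the overhead HSA's
        # unified memory is meant to eliminate.
        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
        out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

        prg = cl.Program(ctx, """
        __kernel void vadd(__global const float *a,
                           __global const float *b,
                           __global float *out) {
            int i = get_global_id(0);
            out[i] = a[i] + b[i];
        }
        """).build()

        prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

        out = np.empty_like(a)
        cl.enqueue_copy(queue, out, out_buf)  # explicit device-to-host copy back
        print(out[:4], (a + b)[:4])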

    With both CPU and GPU compute capabilities available in its HSA-based server APUs, AMD is collaborating with technology partners and the open source community to provide tools for developers. AMD and Oracle have joined forces on Project Sumatra, an open source project to enable developers to code in Java and take advantage of GPU compute. The PGI Accelerator Compiler enables developers to add OpenACC directives supporting AMD APUs and discrete GPUs to Fortran, C and C++ programs on Windows and Linux. At APU13, AccelerEyes is showcasing the use of libraries to enable heterogeneous computing, and HP is providing an overview of its HP Moonshot dense server environment for hosted desktop.

    Unified Software Development Kit

    Also at APU13, AMD launched a new unified Software Development Kit (SDK), an improved CodeXL tool suite with added features and support for the latest AMD hardware, and heterogeneous acceleration added in popular open source libraries. “Developers are essential to our mission of realizing the full potential of modern computing technologies,” said Manju Hegde, corporate vice president, Heterogeneous Solutions, AMD. “Enriching the developer experience by harnessing these technologies is a critical part of AMD’s mission to accelerate developer adoption.”

    AMD is pursuing this mission by improving its unified SDK with several new capabilities, adding a Media SDK 1.0 beta, and promoting new heterogeneous acceleration optimizations in several open source libraries, with the goal of making it simple for developers to accelerate applications. It is also improving its CodeXL tool suite with Java support and static kernel analysis capabilities.

    AMD also shared details at APU13 about “Kaveri,” its third-generation performance APU, during a keynote delivered by Dr. Lisa Su, senior vice president and general manager, Global Business Units, AMD.

    “Kaveri” is the first APU to combine HSA features, AMD TrueAudio technology and AMD’s Mantle API, bringing the next level of graphics, compute and efficiency to desktops (FM2+), notebooks, embedded APUs and servers. FM2+ shipments to customers are slated to begin in late 2013, with initial availability in customer desktop offerings scheduled for Jan. 14, 2014. Further details will be announced at CES 2014.

    5:45p
    Oracle Boosts Big Data Appliance, Adds Cloudera

    Oracle gives a storage and Cloudera boost to its big data appliance, Alcatel-Lucent helps uncover mobile analytics, and CA ERwin bridges the big data gap in the enterprise.

    Oracle Boosts Big Data Appliance

    Oracle (ORCL) announced that Big Data Appliance X4-2 is now available, providing enterprises with a comprehensive and secure engineered system optimized to run Cloudera’s entire Platform for Big Data, Cloudera Enterprise, at a low overall total cost of ownership. The new appliance includes the entire Cloudera Enterprise technology stack and 33 percent more storage capacity, for a total of 864 terabytes per rack. Meeting diverse needs, the appliance features Cloudera Distribution for Apache Hadoop, Oracle NoSQL Database, Cloudera Impala and Cloudera Search. Oracle also announced that it is a co-founder of the Apache Sentry project to deliver fine-grained authorization to data stored in Apache Hadoop. “Oracle Big Data Appliance X4-2 continues to raise the Big Data bar, offering the industry’s only comprehensive appliance for Hadoop to securely meet enterprise Big Data challenges,” said Çetin Özbütün, senior vice president, Data Warehousing and Big Data Technologies at Oracle. “Now that Oracle Big Data Appliance comes with the entire Cloudera Enterprise technology stack and a significant increase in storage capacity, enterprises can build an even more cost-effective Big Data platform that can help generate new business value quickly and effectively.”

    Mobile analytics enhanced by Alcatel-Lucent

    Alcatel-Lucent (ALU) announced that its Motive Big Network Analytics solution is enhancing the way mobile operators capture data and extract intelligence from their networks. The solution is the newest addition to the company’s Ultra-Broadband portfolio and will allow operators to better use their greatest asset, the network, unlocking not just data but the intelligence to inform future decisions. The Motive solution comprises Alcatel-Lucent’s Wireless Network Guardian (WNG), its Kindsight Security Analytics, and its Big Network Analytics Data Miner. “Our Motive Big Network Analytics solution combines key characteristics of Big Data and mobile network analytics,” said Andrew McDonald, President, IP Platforms Division at Alcatel-Lucent. “It allows service providers to internally leverage their greatest asset, network data, to make better informed decisions about future deployments, service enhancements and network optimization. Leveraging this data complements operator Big Data projects by allowing them to more easily identify ‘sweet spots’ for market differentiation. In turn, it introduces more exciting services and a better experience for customers.”

    CA ERwin bridges big data gap

    CA Technologies (CA) announced a new release of CA ERwin Data Modeler, the company’s industry-leading solution for collaboratively visualizing and managing business data across the enterprise in support of data governance, Big Data analytics, business intelligence and other initiatives. The new r9.5 release includes support for Big Data technologies such as Apache Hadoop Hive, Cloudera and Google BigQuery, driving integration and a centralized view of both the traditional and newer data sources now impacting business decision-making. In addition, a new report designer in CA ERwin facilitates better visualization and sharing of information across the enterprise and among a broad range of both business and technical users. “Today there are entirely new, non-conventional sources of information, such as Big Data, that factor into the business analysis equation, but determining the relevance and relative value of a given data element is exceptionally difficult,” said Al Hilwa, program director, Application Development Software, IDC. “CA ERwin enables organizations to quickly and easily connect the dots between their disparate data sources, giving them meaningful context that is critical to the success of their data management efforts.”

    6:00p
    Facebook Says Iowa Server Farm Will be Wind-Powered

    Facebook already uses moving air to cool its data centers, as seen with this huge fan wall in its Lulea data center. The company said today that it will use wind-generated power to support its new facility in Iowa. (Photo: Facebook)

    From the time it enters production, Facebook’s data center in Altoona, Iowa, will run exclusively on renewable energy, thanks to a wind project located less than two hours away, the social networking giant announced today.

    Details about Facebook’s use of wind power in Iowa aren’t hugely surprising, as the company’s official confirmation of its work in the state mentioned that Iowa “has an abundance of wind-generated power.”

    The Wellsburg wind farm in Iowa’s Grundy County will be functioning next year, and will produce as much as 138 megawatts for the Iowa grid — more than enough to power up the Altoona facility when it starts handling traffic in 2015 and “for the foreseeable future,” according to Facebook’s announcement.

    The wind farm should be a more significant renewable asset for Facebook than the 100-kilowatt solar array beside Facebook’s data center facilities in Prineville, Ore., which is primarily meant to power the office space.

    Strategically, it will play a role more like that of Facebook’s data center in Lulea, Sweden, which is powered by renewable hydroelectric energy.

    MidAmerican Energy will own the new Iowa wind farm, which Facebook started working on with the wind development company RPM Access. RPM last year received an equity investment from Google, which has a data center of its own in Iowa.

    Facebook once faced strong criticism from Greenpeace about its energy use. The attacks largely died down in 2011, when Facebook declared a goal of running all of its equipment on renewable energy. Companies such as Box have since made similar commitments.

    By the end of 2015, Facebook wants to be using renewable energy sources to run at least a quarter of its data center infrastructure, the company said.

    The superstructure for the Facebook data center in Altoona, Iowa, is under construction. (Photo: Facebook)

    9:00p
    Dunking for Density: New Projects Pursue 3M’s Take on Immersion Cooling

    This immersion cooling project in Hong Kong was created by Allied Control using a two-phase cooling technique called open bath immersion (OBI), with 3M’s Novec fluid.

    We’re continuing to see new examples of immersion cooling at meaningful scale. In July we brought you an update on an immersion cooling system at CGG using technology from Green Revolution Cooling. We’ve also been tracking early projects using “open bath immersion” cooling based on technology developed by 3M.

    Open bath immersion (OBI) is an example of passive two-phase cooling, which uses a boiling liquid to remove heat from a surface and then condenses the liquid for reuse, all without a pump. The servers are immersed in 3M’s Novec, a non-conductive chemical with a very low boiling point, which easily condenses from gas back to liquid. The OBI technique, which we first saw at last year’s Data Center World show, is now in use in a handful of sites. Here’s an overview of some of these projects.
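    A quick back-of-envelope calculation shows why a boiling fluid moves heat so effectively: every kilogram that vaporizes carries away its latent heat of vaporization, and the condenser only has to turn that vapor back into liquid. The latent-heat figure below is an assumed round number for illustration, not a 3M specification.

        # Passive two-phase cooling: heat leaves the hardware as latent heat.
        it_load_kw = 100.0               # heat to remove from one tank (illustrative)
        latent_heat_kj_per_kg = 100.0    # assumed latent heat for a Novec-class fluid

        # Mass of fluid that must boil (and be re-condensed) each second:
        vapor_kg_per_s = it_load_kw / latent_heat_kj_per_kg  # kW / (kJ/kg) = kg/s
        print(f"{vapor_kg_per_s:.1f} kg/s of vapor for a {it_load_kw:.0f} kW load; "
              f"the condenser must reject the same {it_load_kw:.0f} kW.")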

    Allied Control

    Hong Kong-based Allied Control specializes in developing high-density cooling solutions using 3M’s Novec and OBI. The company has recently deployed a 500kW high performance computing (HPC) production installation known as Immersion-2 for a client in Hong Kong. The design uses OBI in standard 19-inch racks and was deployed in less than six months. Allied Control says the system operates at a Power Usage Effectiveness (PUE) of 1.02, which would make it one of the most efficient designs in the world, even though Hong Kong has a hot and humid climate. The facility is located in a high-rise building and fits within the footprint of a standard shipping container.
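    That efficiency claim is easy to sanity-check, since PUE is simply total facility power divided by IT power:

        it_load_kw = 500.0   # Immersion-2's reported IT load
        pue = 1.02           # reported Power Usage Effectiveness

        total_facility_kw = it_load_kw * pue           # PUE = total / IT
        overhead_kw = total_facility_kw - it_load_kw   # cooling, distribution, etc.

        print(f"Total facility power: {total_facility_kw:.0f} kW")
        print(f"Non-IT overhead:      {overhead_kw:.0f} kW")

    At a PUE of 1.02, a 500 kW IT load implies only about 10 kW of overhead for cooling and power distribution, which is the basis for calling it one of the most efficient designs in the world.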

    One issue with a rack-mounted approach to immersion cooling is weight. “The weight is indeed a small challenge for standard sized racks, but actually more due to the increased system density, not really the fluid,” said Alex Kampl, VP of Engineering for Allied Control. “You also remove a lot of weight by not using air cooling. We’ve been working with a rack manufacturer who has been very helpful.” Here’s a look at the facility:

    These racks are filled with tanks containing high-density servers immersed in cooling fluid. The installation in Hong Kong was created by Allied Control using 3M’s Novec fluid. (Photo: Allied Control)

    Inside the rack-mounted tanks, the heat from dozens of servers causes the Novec fluid to boil. The vapor cools when it reaches the condenser at the top of the tank and is then reused. (Photo: Allied Control)

    The company previously built a dedicated immersion-cooled facility called Immersion-1 to cool a unique supercomputer comprised of FPGAs (field programmable gate arrays), which are semiconductor devices that can be programmed after manufacturing.

    Allied Control created Immersion-1, a system using 6,048 FPGA chips with a combined 890 million logic cells, which will encompass up to 24 tanks. The company says that a similar installation using traditional air cooling would require more than 8,500 dual-Xeon 1U servers in more than 200 racks.

    “Immersion-1 has become a massive prototype and proof of concept for a whole new generation of computing,” Allied Control says on its website. “Since the special application tweaks the maximum performance out of each FPGA, they generate much more heat than in traditional FPGA applications. Often, FPGAs have to be throttled down or are not running at maximum performance due to cooling issues. In the case of Immersion-1, the cluster would not be able to run on passive cooling, and the FPGA chip temperature rises above its maximum specifications within seconds. Only by using immersion cooling was it possible to build and run Immersion-1 with its very demanding cooling requirements.”

    The company has also developed a design concept to adapt Intel’s Xeon Phi coprocessor for HPC workloads in immersion cooling, and is interested in developing high-density designs using Intel’s Dense Form Factor (DFF) cards in open bath immersion.

    “OBI is in its early stages, but I am sure we’ll see exciting progress very soon,” said Kampl. “Unfortunately we are wasting a lot of time right now to literally remove unnecessary parts from hardware built for air-cooling, so my hope is that system designers start offering similar hardware like the DFF cards.”

    The Allied Control technology will be on display at the 3M booth at next week’s SC13 conference in Denver.

    Next: Projects by 3M, Lawrence Berkeley Lab

