Data Center Knowledge | News and analysis for the data center industry

Monday, June 2nd, 2014

    12:00p
    Huawei’s Cloud Connect Seeks to Link Multiple Clouds, Integrate Physical and Virtual

    Huawei, the Chinese networking equipment giant, announced Agile Data Center Cloud Connect — a solution designed to simplify building cloud service systems for enterprises, aiming to address challenges associated with mixing physical and virtual devices in cloud infrastructure.

    The main components are Huawei’s CloudEngine series of data center switches and its Agile Controller cloud applications. The solution also ties in cloud platforms and data center network resources to enable simple automation.

    Huawei launched its data center network architecture, called Cloud Fabric Data Center, in 2012. The company says the solution has attracted about 360 global customers to date.

    About 1,800 of its CE12800 switches have been deployed in cloud computing data centers, and the vendor is now looking to integrate and connect all of the moving pieces.

    “We’ve introduced the Agile Data Center Cloud Connect Solution and we want to work with our partners to build a fully integrated cloud service system,” said Liu Shaowei, president of Huawei’s enterprise networking product line. “The solution will integrate network, compute and storage resources in data centers to unify the virtual and physical network worlds, implementing multi-cloud connectivity and cloud-based network automation to make cloud computing simpler.”

    The rapid development of cloud computing, Big Data and mobility has brought unprecedented data center infrastructure challenges as well as opportunities.

    “Currently, enterprise servers, storage devices and switches in data centers are highly virtualized,” said Liu. “These devices form a virtual world to support cloud computing. However, there are still a large number of independent physical network devices in data centers, and the distinction between the virtual and physical worlds makes it difficult to implement fast service deployment, unified resource allocation, fault isolation and diagnosis and automated service optimization.”

    Automatic network configuration

    Cloud Connect is about connecting clouds, so it’s not just a clever name. The aim is to allow IT administrators to ensure the provision of network resources and implement cloud-based network migration more effectively. It helps to define and adjust network requirements.

    The Agile Controller is capable of interpreting three types of perspectives: the application profile perspective, the logical network perspective and the physical network perspective. It automatically converts application profiles into the required logical networks and delivers the associated configurations to physical network devices, allowing network resources to be dynamically migrated or adjusted on-demand and based on service requirements.
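
    To make the conversion concrete, here is a rough sketch in Python. It is a hypothetical illustration, not Huawei’s actual Agile Controller API; the profile fields, VLAN numbering and switch names are invented.

        # Hypothetical sketch, not Huawei's Agile Controller API: map an application
        # profile to a logical network, then fan it out as per-switch configuration.
        from dataclasses import dataclass

        @dataclass
        class AppProfile:
            name: str
            tiers: list            # e.g. ["web", "app", "db"]
            bandwidth_mbps: int

        def to_logical_network(profile):
            """Derive one logical segment (VLAN + QoS) per application tier."""
            return {
                "name": profile.name + "-net",
                "segments": [
                    {"tier": tier, "vlan": 100 + i, "qos_mbps": profile.bandwidth_mbps}
                    for i, tier in enumerate(profile.tiers)
                ],
            }

        def to_device_configs(logical, switches):
            """Translate the logical network into per-switch configuration snippets."""
            vlans = [seg["vlan"] for seg in logical["segments"]]
            return [{"switch": sw, "vlans": vlans} for sw in switches]

        profile = AppProfile(name="crm", tiers=["web", "app", "db"], bandwidth_mbps=500)
        print(to_device_configs(to_logical_network(profile), switches=["leaf-01", "leaf-02"]))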

    Interlinking an ecosystem

    Huawei is building a cloud computing data center ecosystem, which Cloud Connect helps to interconnect. The company wants to link various cloud platforms, unifying ICT resource allocation and allowing seamless integration with popular cloud platforms. It works with:

    • VMware’s vCloud cloud management platform and NSX network virtualization platform, to provide automated network policy migration and VXLAN-based hardware gateway solutions.
    • Microsoft’s Cloud OS, to provide a hybrid overlay fabric network solution, which has been successfully deployed in China Mobile’s cloud computing data center in Guangdong.
    • Cloud platforms such as OpenStack, for centralized management of network and IT resources.
    • Huawei’s own FusionSphere cloud platform, to deliver an end-to-end distributed cloud data center (DC2) solution.

    Cloud Connect displays both physical and virtual network resources in a unified view. It decouples logical networks from physical networks and abstracts the differences and dependencies between them. Decoupling makes it compatible with various physical network models and technologies.

    12:00p
    Survey: Industry Average Data Center PUE Stays Nearly Flat Over Four Years

    While the majority of data center operators who participated in a recent industry survey measure Power Usage Effectiveness, their average efficiency ratio has not changed by much over the past four years.

    This was one of the conclusions of this year’s survey by the Uptime Institute, which released the results at its annual Symposium in Santa Clara, California, in May. Starting with 1.89 in 2011, average PUE among the companies surveyed went down to 1.8 in 2012 and further down to 1.67 the following year. This year, however, it was up to 1.7.

    As Matt Stansberry, Uptime Institute’s content director who presented the survey results at the conference, put it, “We’re not really getting anywhere.”

    [Slide: Uptime Institute 2014 survey, average industry PUE by year]

    PUE is a data center energy efficiency metric created by HP and donated to The Green Grid, a data center industry organization that was able to make it a de facto standard. On its face, the metric is fairly simple, comparing the total amount of power coming into a data center facility to the amount of power used by IT equipment.

    In practice, however, the PUE ratio alone does not tell much about a facility’s efficiency unless information about how it is measured is also available.
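
    The arithmetic itself is simple, as the short Python sketch below shows. The figures are made up; the 1.7 result just mirrors the survey’s 2014 industry average.

        def pue(total_facility_kw, it_equipment_kw):
            """PUE = total facility power / IT equipment power; 1.0 is the theoretical floor."""
            return total_facility_kw / it_equipment_kw

        # Made-up example: a facility drawing 1,700 kW to feed a 1,000 kW IT load
        print(pue(1700, 1000))   # -> 1.7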

    Zombie servers are a big problem

    While mechanical and electrical infrastructure may be extremely efficient, there is very often a lot of idle IT equipment on the data center floor that does nothing but draw power. As a result, an extremely inefficient data center can have a low PUE.

    Uptime has been bringing attention to the issue of “comatose” servers through its annual Server Roundup, where companies compete by identifying and shutting down idle machines in their data centers. Companies that shut down more than others win.

    This year’s winners were Barclays, which decommissioned more than 9,000 servers, and Sun Life, which retired about 440.

    In its survey, Uptime asked participants to estimate how big of a problem this was. Here is what they answered when asked what percentage of their servers were likely comatose:

    [Chart: Uptime Institute 2014 survey, participants’ estimates of comatose server share]

    About 45 percent of enterprises did not have scheduled auditing for comatose servers. Half of them attributed it to lack of management support; one-third said their resources were too limited, and about 20 percent said the return on investment did not justify the effort.
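
    For shops that do audit, a first pass often looks something like the hypothetical Python sketch below: flag servers whose utilization never rose above a threshold during an observation window. The metrics, thresholds and server names are illustrative assumptions, not anything Uptime prescribes.

        def flag_comatose(servers, cpu_pct_max=2.0, net_kbps_max=5.0, min_days=30):
            """Return names of servers that stayed below both thresholds for the whole window."""
            return [s["name"] for s in servers
                    if s["days_observed"] >= min_days
                    and s["peak_cpu_pct"] < cpu_pct_max
                    and s["peak_net_kbps"] < net_kbps_max]

        inventory = [
            {"name": "app-17", "days_observed": 30, "peak_cpu_pct": 0.4, "peak_net_kbps": 1.2},
            {"name": "db-03",  "days_observed": 30, "peak_cpu_pct": 41.0, "peak_net_kbps": 900.0},
        ]
        print(flag_comatose(inventory))   # -> ['app-17']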

    PUE goals lofty

    While optimizing IT assets plays a big role in efficiency, PUE is an important tool for data center operators to measure efficiency of their mechanical and electrical infrastructure and to track efficiency improvements.

    About half of the operators surveyed this year targeted PUE between 1.2 and 1.5. About 40 percent were aiming for 1.5 to 2.0. Those working toward PUE of 1.0 to 1.2 constituted 12 percent, and two percent were targeting PUE between 2.0 and 2.5.

    There were also some participants who were targeting PUE less than 1.0, although they constituted only two percent of the total. This is possible in facilities that have on-site energy generation capabilities.

    12:00p
    Bri Pennington Joins Data Center Knowledge as Content Director

    Data Center Knowledge welcomes Bri Pennington, who is joining as the organization’s new director of content. In this role, she will be responsible for growing the brand’s engagement with its audience.

    Pennington is an experienced communications professional, having held senior communications positions at Ford Motor Company and the American Indian Economic Development Fund, among other roles.

    She is replacing Colleen Miller, who will continue to work with iNET Interactive (parent company of Data Center Knowledge), managing marketing projects to increase social and online outreach to technical and data center audiences.

    Miller is a journalist and social media specialist, with more than two decades of writing, editing and Web experience. She has been part of the Data Center Knowledge team for four years, bringing news and analysis on the data center industry through the website and social channels of Data Center Knowledge.

    12:30p
    Five Signs You Are Outgrowing MySQL

    Tony Barbagallo brings over 25 years of product management, corporate, and channel marketing expertise to Clustrix. Prior to Clustrix, Tony led the launch of Skyera, a new entrant in the enterprise solid state storage market.

    The data management landscape is rapidly evolving. With the emergence of ‘super apps,’ processing millions of user interactions per second, in addition to big data and the cloud, it’s clear organizations are in need of the “next generation” of databases.

    Many have already been making the transition from MySQL to a scale-out or NewSQL database. They’ve benefited from faster analytics, increased scalability, as well as higher performance and availability of their data.

    If you’re unsure your organization is ready for the change, here are five signs to help you identify whether or not you’re outgrowing your MySQL database.

    Difficulty Handling Reads, Writes and Updates

    Many analytics platforms are built on MySQL databases that simply can’t scale to support detailed analytics or advanced feature sets. As your load increases, if you are finding it difficult to handle additional reads and writes, you may need a different database. With a scale-out approach, administrators can easily add extra nodes to process additional demand, making the handling of increased transactions much easier.
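
    As a rough illustration of why extra nodes absorb extra load, the generic Python sketch below (not tied to any particular product) routes rows to nodes by hashing the key.

        # Naive hash-modulo placement: each key lands on one of the available nodes,
        # so reads and writes spread across however many nodes are in the list.
        import hashlib

        def node_for(key, nodes):
            digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
            return nodes[digest % len(nodes)]

        nodes = ["node-1", "node-2", "node-3"]
        for user_id in ["u1001", "u1002", "u1003", "u1004"]:
            print(user_id, "->", node_for(user_id, nodes))

    Real scale-out databases use more careful placement, such as consistent hashing or range-based slices, so that adding a node moves only a fraction of the data instead of remapping most keys.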

    Slow Analytics and Reporting

    MySQL databases struggle to deliver real-time analytics on large, fast-changing data sets. Addressing this requires both Multi-Version Concurrency Control (MVCC), which lets writes and analytic reads proceed without interfering with each other, and Massively Parallel Processing (MPP), which spreads analytic queries across multiple nodes and multiple cores per node so they run faster.
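
    The MPP half of that requirement boils down to a simple idea: each node aggregates only its own slice of the data, and the partial results are combined. The Python sketch below is a conceptual toy, not any vendor’s implementation.

        # Toy MPP-style aggregation: run per-node partial sums in parallel, then combine.
        from concurrent.futures import ThreadPoolExecutor

        def partial_sum(rows):
            """Per-node work: aggregate only the rows stored on that node."""
            return sum(row["amount"] for row in rows)

        node_slices = [                      # pretend each list lives on a different node
            [{"amount": 10}, {"amount": 20}],
            [{"amount": 5}],
            [{"amount": 7}, {"amount": 8}, {"amount": 30}],
        ]

        with ThreadPoolExecutor() as pool:
            partials = list(pool.map(partial_sum, node_slices))

        print(sum(partials))                 # combined result: 80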

    Frequent Downtime

    MySQL databases are built around a single point of failure: if any component – such as a drive, motherboard or memory – fails, the entire database goes down. As a result, you might be experiencing frequent downtime, which can mean lost revenue. Sharding and slave replicas can help, but such setups quickly become fragile. A scale-out database keeps multiple copies of your data, provides built-in fault tolerance and remains fully operational through node and/or disk failures.
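
    Conceptually, the client side of that fault tolerance looks like the hypothetical sketch below: try replicas in order and keep working as long as one copy of the data is reachable. The host names and the connect() helper are stand-ins, not a real driver API.

        def connect(host):
            """Stand-in for a real database connection attempt."""
            if host == "replica-a":          # pretend this node is down
                raise ConnectionError(host + " unreachable")
            return "connection to " + host

        def connect_any(hosts):
            """Return a connection to the first reachable replica."""
            for host in hosts:
                try:
                    return connect(host)
                except ConnectionError:
                    continue
            raise RuntimeError("all replicas down")

        print(connect_any(["replica-a", "replica-b", "replica-c"]))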

    High Developer Costs

    Developers working with MySQL databases often spend a large portion of their time fixing plumbing issues or addressing database failures. Developers who work with a scale-out database are free to spend that time building features instead, shortening time to market and letting companies earn revenue sooner.

    Maxed Out Servers

    Are your servers running out of RAM and starting to page to disk or spill temporary tables to disk? If you see servers maxing out for extended periods, or it happens frequently throughout the day, it’s a key indicator that MySQL can’t keep up with your growth. Adding hardware is the quick fix, but it’s expensive and isn’t a long-term solution. With a scale-out approach, data is replicated across nodes, and as transactions grow in size and number, the workload is spread across other nodes in the database.
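
    One symptom is easy to check from MySQL’s own counters. The sketch below assumes the PyMySQL driver and placeholder connection details; the Created_tmp_tables and Created_tmp_disk_tables status variables are standard MySQL, while the 25 percent threshold is just an example.

        import pymysql

        conn = pymysql.connect(host="db.example.com", user="monitor",
                               password="secret", database="mysql")
        try:
            with conn.cursor() as cur:
                cur.execute("SHOW GLOBAL STATUS LIKE 'Created_tmp%'")
                status = dict(cur.fetchall())
            disk = int(status["Created_tmp_disk_tables"])
            total = int(status["Created_tmp_tables"])
            if total and disk / total > 0.25:
                print("%d of %d temp tables went to disk: likely memory pressure" % (disk, total))
        finally:
            conn.close()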

    As your organization and its customer data grow, consider transitioning to a database that will scale to meet your needs. If you fall into any of the five categories discussed, consider a scale-out approach.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    12:30p
    Top Ten Data Center Stories, May 2014

    Google’s use of artificial intelligence to squeeze efficiencies out of its data centers was the most popular story on Data Center Knowledge this month. Other topics of interest were downtime at Internap and Rackspace seeking acquisition partners. Here are the most viewed stories on Data Center Knowledge for May 2014, ranked by page views. Enjoy!


    2:30p
    How to Save Money and Safely Avoid Overcooling Your Data Center

    Today’s data centers are experiencing more demands from the IT organization. Administrators are continuously tasked with running high-density, multi-tenant data center platforms as efficiently as possible. This means controlling power, workloads and, of course, cooling.

    Worldwide demand for new and more powerful IT-based applications, combined with the economic benefits of consolidation of physical assets, has led to an unprecedented increase in data center density.

    Data center professionals are being asked to be more efficient with their resources including energy and cooling. The methods used to manage your data center energy and cooling may no longer be sufficient. In this eBook from Raritan, we find out how you can improve your efficiency without sacrificing uptime.

    The concept of cooling and data center design revolves around a few key principles. The eBook outlines seven key points to consider:

    • Energy Usage Today
    • Cost of Cooling Today
    • Common Methods of Saving Energy
    • ASHRAE Guidelines Enhanced
    • Monitoring Cooling
    • Potential Savings
    • Getting Started

    The important point to consider is the need for direct visibility into the modern data center. The critical nature of your IT platform necessitates an intelligent DCIM solution. Why? The goal isn’t only to create an efficient environment. Ultimately, a powerful DCIM platform can help a data center become proactive, allowing administrators to catch problems before they become serious challenges.
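
    As a small illustration of that proactive stance, the Python sketch below compares rack inlet temperatures against the ASHRAE-recommended envelope of roughly 18 to 27 degrees Celsius for most air-cooled IT equipment. The rack names, readings and warning margin are made up.

        ASHRAE_LOW_C, ASHRAE_HIGH_C = 18.0, 27.0   # recommended inlet range, roughly
        WARN_MARGIN_C = 1.0                        # arbitrary early-warning margin

        inlet_temps_c = {"rack-a1": 24.0, "rack-a2": 26.6, "rack-b1": 18.4}

        for rack, temp in inlet_temps_c.items():
            if not ASHRAE_LOW_C <= temp <= ASHRAE_HIGH_C:
                print("ALERT %s: %.1f C outside the recommended range" % (rack, temp))
            elif temp > ASHRAE_HIGH_C - WARN_MARGIN_C:
                print("WARN  %s: %.1f C approaching the upper limit" % (rack, temp))
            elif temp < ASHRAE_LOW_C + WARN_MARGIN_C:
                print("NOTE  %s: %.1f C near the lower limit -- likely overcooled" % (rack, temp))

    Inlet readings sitting well below the upper limit are often a sign that setpoints can be raised safely, which is typically where savings from avoiding overcooling come from.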

    Download this eBook today to learn about cooling best practices and where you can create real data center savings. Remember, an efficient data center allows your organization to grow dynamically – while still meeting future business demands.

    5:28p
    HP and SAP Release the Kraken With HANA Appliance, Cloud Services

    HP announced a new infrastructure solution for SAP’s in-memory computing system HANA, which converges database and application platform capabilities for transactions, analytics, text analysis, predictive and spatial processing. The system is a product of the HP and SAP co-innovation initiative called “Project Kraken.”

    This is the latest in HP’s effort to deepen services around HANA. In addition to the converged system, the company also announced a cloud delivery model for HANA and various support and consulting offerings.

    HP is not the only hardware vendor building a business around HANA. Its rivals IBM, Cisco, Dell and Fujitsu also have infrastructure solutions and services for the German business software giant’s in-memory computing system.

    “Businesses are adopting SAP HANA to make real-time business decisions based on massive data sets, but are often constrained by their infrastructure’s scalability and availability,” said Tom Joyce, senior vice president and general manager of HP’s Converged Systems division. “With HP ConvergedSystem 900 for SAP HANA, HP is helping customers achieve business transformation with SAP HANA by delivering a system optimized for the largest, most mission-critical workloads.”

    HP ConvergedSystem 900 for SAP HANA, which follows a previously released ConvergedSystem 500, is a pre-configured, optimized and tested system. It is certified by SAP to deliver up to 12 terabytes of data in a single memory pool.

    HP and SAP announced the converged system in conjunction with the SAP SapphireNow conference taking place this week in Orlando, Florida. At the event, they are demonstrating a test system built on HP servers and optimized for SAP Business Suite applications running on HANA.

    In addition to buying the infrastructure solution, customers can instead choose to make HANA an operational expense by opting for HP’s new managed cloud offering. HP will host the HANA infrastructure and provide SAP’s business applications as a service. This is part of HP’s Helion cloud services initiative, announced in May.

    HP also offers Rapid Deployment Services for SAP solutions in the cloud, which aims to help clients deploy new application functionality optimized for SAP HANA quickly.

    The vendor expects to make ConvergedSystem 900 for SAP HANA available worldwide in the fall. Cloud-based HP Helion Business Applications for SAP are available now.

    8:31p
    IBM Opens SoftLayer Data Center in Hong Kong as Part of $1.2B Cloud Push


    This article originally appeared at The WHIR.

    IBM opened up a new SoftLayer data center in Hong Kong over the weekend, a month later than the company expected after a slight delay in approval from the Chinese government.

    The SoftLayer data center in Hong Kong is the first of 15 data centers IBM will open as part of its $1.2 billion investment to extend its cloud services around the world.

    “Our expansion into Hong Kong gives us a stronger Asian market presence as well as added proximity and access to our growing customer base in region,” Lance Crosby, CEO of SoftLayer said. “This new data center gives the fast-growing, entrepreneurial businesses that Hong Kong is known for a local facility to tap into SoftLayer’s complete portfolio of cloud services.”

    SoftLayer already had a strong presence in Asia through Hong Kong and Singapore. Prior to IBM, SoftLayer’s Asian operations were headquartered in Singapore, and the company opened its Singapore data center in the fall of 2011, expanding its network to Tokyo and Hong Kong. SoftLayer cloud customers in Asia include website security provider Distil Networks, online booking agency Tiket.com and digital advertising platform Simpli.fi.

    SoftLayer’s Hong Kong data center has capacity for more than 15,000 physical servers and network connectivity provided by multiple Tier 1 carriers including NTT, Tata, and Equinix.

    Hong Kong is a popular destination for cloud computing companies looking to serve Asian customers due to its connectivity and location within China. Most recently, Alibaba’s cloud division, Aliyun, opened a data center in Hong Kong, its first outside mainland China.

    As SoftLayer begins taking cloud orders for its new data center in Hong Kong, its parent company IBM is facing some challenges in China, where a new government review is examining whether domestic banks’ reliance on IBM servers compromises national financial security.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/ibm-opens-softlayer-data-center-hong-kong-part-1-2-billion-cloud-push

