Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, July 10th, 2013

    12:00p
    For Emerson, Focus on Cooling Expands into Thermal Management

    At January’s Open Compute Summit, the Emerson Network Power team displayed a rack solution that can support densities of up to 45 kW per rack, just one example of how the company is adapting to pursue the hyperscale market. (Photo: Colleen Miller)

    Cooling isn’t just about CRAC units anymore. For many years, data center cooling involved placing CRAC (computer room air conditioner) units at the perimeter of a data hall to feed cool air into a sub-floor plenum. But managing a data center environment has now become more complex and sophisticated, involving higher power densities and warmer temperatures. These advances have pushed cooling equipment to the row and rack level and introduced a wide range of containment strategies, along with software to tie it all together.

    That’s why Emerson Network Power is adopting a new name for its Liebert precision cooling operation, which will now be known as the Thermal Management business. The new business will have annual revenues of approximately $800 million. While it may seem like a euphemism, Emerson says the updated moniker reflects a new way of thinking about cooling challenges.

    “The data center is an active, always-changing ecosystem where IT needs, geographic location and external weather conditions are connected, and changes in any one area have broad implications,” said John Schneider, who will lead the new Thermal Management business as vice president and general manager. “We are delivering the next generation of data center cooling, with innovative services, software and hardware integrated and optimized to reliably, efficiently and cost-effectively control and manage heat.”

    Focus on Advanced Environments

    So what does this “next generation” approach include? It likely means a bigger focus on specialized environments for hyperscale data centers, high performance computing systems, and environments customized to process “Big Data.” An example could be seen at the recent Open Compute Summit, when a team from Emerson Network Power demonstrated a rack that can support power densities of up to 45 kW, integrating power distribution and back-up into the Open Rack specification. Emerson was the first of the major power and cooling vendors to participate in the Open Compute initiative, which is oriented toward hyperscale data center operators such as Facebook.

    Schneider said the development of Emerson’s Thermal Management unit was influenced by expanded temperature guidelines from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). Raising the baseline temperature inside the data center can save money by reducing the amount of energy used for air conditioning, and can allow expanded use of free cooling (the use of fresh air instead of air conditioners to cool servers).

    But pushing the boundaries on temperature requires more granular monitoring and management of the data center environment, so any thermal problems can be quickly detected and addressed. Emerson says its Thermal Management unit will offer air-side and water-side free cooling solutions and innovative pumped-refrigerant economizers in addition to state-of-the-art controls and wireless sensors.
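
    Granular monitoring of this kind ultimately comes down to comparing sensor readings against an allowable envelope. Below is a minimal sketch of that idea only; the sensor names, readings and the roughly 18-27°C recommended range (based on ASHRAE’s published guidance) are assumptions for illustration, not a description of Emerson’s products.

        # Illustrative only: flag inlet-temperature readings that drift outside an
        # assumed recommended envelope (roughly ASHRAE's 18-27 C recommended range).
        # Sensor names and readings are hypothetical.

        RECOMMENDED_LOW_C = 18.0
        RECOMMENDED_HIGH_C = 27.0

        def check_inlet_temps(readings):
            """Return (sensor, temp) pairs that fall outside the recommended range."""
            return [(sensor, temp) for sensor, temp in readings.items()
                    if temp < RECOMMENDED_LOW_C or temp > RECOMMENDED_HIGH_C]

        if __name__ == "__main__":
            sample = {"rack-a01-inlet": 24.5, "rack-a02-inlet": 28.3, "rack-b01-inlet": 19.0}
            for sensor, temp in check_inlet_temps(sample):
                print(f"ALERT: {sensor} at {temp:.1f} C is outside the 18-27 C range")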

    Building Upon the Liebert Legacy

    “For 48 years, the Liebert brand has been synonymous with IT cooling and innovation, and today’s announcement represents another significant evolution within that space,” said Scott Barbour, executive vice president, Emerson, and business leader, Emerson Network Power (Systems). “The basic need to manage heat is not changing. There are now more ways to do that, and this expansion of what began as our precision cooling business reflects the entire spectrum of possibilities.”

    Introduced in 1965, Liebert CRAC systems were the first self-contained IT cooling units capable of maintaining air temperature, humidity and air quality within precision tolerances. Today’s data center infrastructure systems use networks of wireline and wireless sensors to monitor equipment performance and environmental conditions and are better equipped to act on that data. These capabilities were not available in the past and, even today, the additional intelligence often is lost in floods of data.

    “What we’re doing with our approach to thermal management is helping our customers analyze, understand and act on that data to realize more efficient, more sophisticated real-time environmental control,” Schneider said. That control is realized through innovative hardware, software and services that include everything from more intelligent and versatile cooling technologies to data center infrastructure management systems to remote service delivery programs.

    12:38p
    Points to Consider Before Buying a Data Protection Solution

    Jarrett Potts is director of strategic marketing for STORServer, a provider of data backup solutions for the mid-market. Before joining the STORServer team, Potts spent 15 years working in various capacities for IBM, including Tivoli Storage Manager marketing and technical sales. He has been the evangelist for the TSM family of products since 2000.

    JARRETT POTTS
    STORServer

    In the second part of our series, we discussed the importance of finding a solution that’s easy to use, treating data differently, eliminating the burden of virtual machine backups and using built-in data reduction technologies.

    In part three of our series, we will discuss how making the right licensing decision can save you money, how to scale data protection, why to set different policies for different data and the role of unified recovery management.

    License Correctly and Save Money

    Can two of the same things have two different prices? Absolutely. Not only can they have different prices, but those prices can also be dramatically different.

    In the last few years, data protection solution providers have started to offer options beyond “core-” or “server-”based licensing. When buying software, consider all of the options. One of the newest is capacity-based licensing.

    Look for a company that offers pricing options that allow users to pay for solutions in the manner that makes the most financial sense. In the past, licensing models were based on the number and power of processor cores in the servers being protected. These models have cost advantages for organizations with relatively large amounts of data and a small number of servers, or for organizations with other software products licensed this way.

    But some vendors also offer a capacity-based licensing option that allows organizations to pay for the software based on the amount of data being protected. This model has cost advantages for organizations with a relatively large number of servers. It also eliminates licensing cost surprises when servers are added or cores are upgraded. The software should include tools to help the organization make accurate budget forecasts.

    The capacity-based model has particular value in infrastructures with multiple applications that require data protection solutions. Under the capacity-based model, these solutions are included at no additional cost. Also, using advanced features, such as data deduplication, which reduces the amount of data being protected, can decrease the amount of data being measured against the license cost.
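
    As a rough illustration of why the model matters, the sketch below compares a hypothetical per-core price against a hypothetical per-terabyte price for two very different environments. All figures are invented for the example and are not vendor list prices.

        # Hypothetical comparison of core-based vs. capacity-based licensing.
        # All prices and environment sizes are invented for illustration.

        def core_based_cost(total_cores, price_per_core):
            return total_cores * price_per_core

        def capacity_based_cost(protected_tb, price_per_tb):
            return protected_tb * price_per_tb

        if __name__ == "__main__":
            # Environment A: few servers, lots of protected data
            print("A (16 cores, 500 TB): core-based",
                  core_based_cost(16, 1500), "vs. capacity-based", capacity_based_cost(500, 100))
            # Environment B: many servers, modest amount of protected data
            print("B (400 cores, 60 TB): core-based",
                  core_based_cost(400, 1500), "vs. capacity-based", capacity_based_cost(60, 100))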

    When looking for a backup and recovery or archive and retrieve solution, ask the vendor to show all the different ways it can be licensed. Ask for a price for all options, and then make an informed decision.

    Scalability–You Grow, It Grows

    When it comes to keeping pace with growing data, a major concern for IT organizations in terms of storage and data protection is how the solution will handle that growth.

    If a user’s business has grown capacity by 40 to 60 percent in each of the past three years and now supports billions of data objects, a solution is needed that will grow with it. That capacity growth may be outpacing the data protection solution, creating a need to scale the protection.
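
    To put that growth rate in perspective, the short sketch below projects how protected capacity compounds at 40 and 60 percent per year; the 100 TB starting point is an arbitrary example.

        # Project protected capacity under compound annual growth.
        # The 100 TB starting point and the growth rates are example values.

        def project_capacity(start_tb, annual_growth, years):
            capacity = start_tb
            for _ in range(years):
                capacity *= 1 + annual_growth
            return capacity

        if __name__ == "__main__":
            for pct in (40, 60):
                projections = [round(project_capacity(100, pct / 100, year)) for year in range(1, 6)]
                print(f"{pct}% annual growth, years 1-5:", projections, "TB")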

    This growth can be handled, but the scaling must be done in a logical manner. There are three ways to do this:

    • Scale out: This usually means adding new hardware and software to handle the load from growth, and it involves a significant investment in new resources.
    • Scale up: This is where users add new software on existing servers, such as creating a second copy of an application on existing hardware. While this cuts the cost of new hardware, it assumes that the existing hardware can handle the load.
    • Scale in: If users can find a data protection solution that grows as they grow without additional resources, they have hit pay dirt. There is usually no additional investment involved; however, users must have 20/20 foresight.

    A solution should run on hardware that can grow into the future and include software with a proven ability to grow at the same pace as the company, or faster.

    When choosing a product for data protection, decide up front if you want to scale up, out or in. That up-front decision will dramatically change the amount spent down the road in years three and five.

    Not All Data is Created Equal. Stop Treating It That Way.

    IT organizations can drive up the cost of storage unnecessarily by treating all data the same and storing it all on the same media. Let’s face the fact: a resume is not as important as the payroll database or even the email database. So, why do IT folks use the same storage policy for both?

    Stop using one policy to rule all data. It might be simple, but it will kill the bottom line. Find a data protection solution that allows policies to be set to treat data differently.

    Important data should be prioritized as “tier one” and get backed up the most often and most quickly. Perhaps that data can stay on disk for fast restore.

    Everything else is considered “tier two” or “junk data.” Since tier two data is not business critical, it should go directly to tape for storage, and true junk data, like photos or temp files, can simply be deleted.

    Tier two data is a great target for hierarchical storage management (HSM), which allows organizations to store data on different tiers based on specific policies and enables administrators to migrate and store data on the most appropriate tier. For example, older and less-frequently accessed data can be moved to a slower, less-expensive storage platform, such as tape, leaving more expensive disk storage available for more high-value data.
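
    The sketch below illustrates the general shape of such a policy; the 90-day threshold and tier names are assumptions for the example, not any particular product’s defaults.

        # Illustrative policy-based tiering: route data to disk or tape based on
        # simple rules. The threshold and tier names are assumptions for the example.
        from datetime import datetime, timedelta

        def choose_tier(last_accessed, business_critical, now=None):
            """Return the storage tier a simple HSM-style policy would pick."""
            now = now or datetime.now()
            if business_critical:
                return "tier1-disk"      # keep critical data on disk for fast restore
            if now - last_accessed > timedelta(days=90):
                return "tier2-tape"      # older, rarely accessed data migrates to tape
            return "tier2-disk"

        if __name__ == "__main__":
            print(choose_tier(datetime.now() - timedelta(days=5), business_critical=True))
            print(choose_tier(datetime.now() - timedelta(days=200), business_critical=False))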

    A data protection solution should help reduce costs by providing automated, policy-based data life-cycle management, moving data to the most cost-effective tier of storage while still meeting service level requirements. This helps ensure recovery objectives are met and transparent data access is achieved. Automated data archiving also helps organizations ensure compliance with data retention policies and reduces associated costs.

    Recovery: A Unified Approach

    Is your organization using different products to protect different types of data or different systems? If so, start thinking about standardizing on a single product. Think of all the time, training and resources that will be saved.

    Unified recovery management (URM) brings under one user interface the ability to manage data protection throughout the business, supporting different applications and types of data on multiple operating systems in various locations and with diverse policies and backup requirements. From a single point, administrators can manage multiple data protection and recovery tools, including diverse solutions that are dedicated to different tasks. It helps eliminate the costs and complexities associated with deploying and managing multiple point solutions.
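
    Conceptually, URM puts one control point in front of otherwise separate tools. The sketch below is a generic illustration of that idea only; the class and method names are invented and do not represent any vendor’s actual API.

        # Generic illustration of a single front end over separate backup tools.
        # Class and method names are invented for the example.

        class BackupTool:
            """Minimal interface each underlying data protection tool would expose."""
            def __init__(self, name):
                self.name = name
            def backup(self, target):
                return f"{self.name}: backed up {target}"
            def restore(self, target):
                return f"{self.name}: restored {target}"

        class UnifiedRecoveryManager:
            """Single point of control that dispatches to the right tool per workload."""
            def __init__(self):
                self.tools = {}
            def register(self, workload_type, tool):
                self.tools[workload_type] = tool
            def backup(self, workload_type, target):
                return self.tools[workload_type].backup(target)
            def restore(self, workload_type, target):
                return self.tools[workload_type].restore(target)

        if __name__ == "__main__":
            urm = UnifiedRecoveryManager()
            urm.register("vm", BackupTool("vm-backup"))
            urm.register("database", BackupTool("db-backup"))
            print(urm.backup("database", "payroll-db"))
            print(urm.restore("vm", "web-server-01"))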

    When looking for a company that provides data protection, look for one that simplifies and streamlines storage management, helping organizations control both the risks and costs of data protection and recovery. With fewer “moving parts” to manage across the various solutions in operation, administrators can ensure faster, more reliable backup and recovery processes. The solution should also provide built-in replication for highly available disaster recovery, helping to reduce downtime and the business costs that can result. These process improvements contribute to higher levels of service, making it easier for organizations to meet service level agreements.

    The ability for a single person with limited specialized knowledge to manage an entire business’s data protection solution is important. With a unified approach, IT staff gain the ability to be nimble and forward-thinking. No longer are they in “reactive mode”; they start to operate in “proactive mode.” This is important because it saves time and money.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:01p
    New Web Tool Highlights Hurricane Risk for Energy Infrastructure

    Screen capture of the interactive map from U.S. Energy Information Administration (EIA) shows current predicted path of tropical storm Chantal as well as the nation’s energy resources.

    As the most active part of the hurricane season approaches, the U.S. Energy Information Administration (EIA) has rolled out helpful online resources that show not only the projected path of storms but also how a storm might impact energy sources and infrastructure. Hurricanes can affect the United States’ energy infrastructure, especially when storm paths traverse offshore production rigs and pipelines in the Gulf of Mexico, coastal refineries, power plants, and energy import and export sites.

    The agency’s interactive maps use real-time data feeds from the National Hurricane Center and combine that data with more than 20 map layers, which display the United States’ energy-producing sources – gas and oil plants – as well as electric utilities. This new tool allows users to better see and understand the potential impact of a storm.

    “This new mapping capability combines detailed energy infrastructure information with real-time tropical storm information from the National Hurricane Center,” said EIA Administrator Adam Sieminski.

    The new maps are now online at http://www.eia.gov/special/disruptions/. Presently, the public can see the current predicted path of tropical storm Chantal, moving from the Caribbean’s Leeward Islands toward the Atlantic coast of Florida. As the National Hurricane Center revises its predictions, the maps will be updated immediately.

    EIA, which is the statistical and analytical agency within the U.S. Department of Energy, collects, analyzes, and disseminates independent and impartial energy information.

    Bookmark our Disaster Recovery channel for more stories on disasters and disaster planning.

    2:30p
    Video: A Cloud Conversation with Interxion

    In this video recorded at GigaOm Structure 2013, Jelle Frank van der Zwet, cloud marketing manager at carrier-neutral colocation provider Interxion, speaks about cloud migration in Europe and Interxion’s footprint across the continent. Interxion, known for its rich network of interconnects, recently announced it is housing the London Internet Exchange’s IT hosting infrastructure in its London data center. Video runs 4:14.

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    3:00p
    At WPC 2013, Microsoft Points Partners to the Cloud

    At its annual Worldwide Partner Conference (WPC) this week in Houston, Microsoft (MSFT) is focusing all attention on its transformation to a devices and services company – emphasizing the underpinning trends of cloud, mobility, big data and enterprise social. Microsoft CEO Steve Ballmer opened the conference by talking about this transformation to the more than 16,000 attendees, made up of value-added resellers and channel partners. The event conversation can be followed on Twitter via the hashtag #WPC2013.

    Cloud

    With 3,265 software piracy cases settled around the world in the past year, Microsoft gladly embraces the cloud computing advantages that can benefit its partners. A Microsoft-sponsored IDC study revealed that “partners with more than 50 percent of their revenue related to the cloud have been benefiting from higher gross profit, more new customers, increased revenue per employee and faster overall business growth.”

    “Cloud alone hasn’t caused these impressive numbers, though that is absolutely part of it; top-performing partners were visionaries that took on cloud technologies before their peers,” said Darren Bibby, program vice president of Channels and Alliances Research, IDC. “We’re at the point in the industry’s overall cloud transition where partners that don’t move some of their business to the cloud likely won’t survive. And some partners that are getting ready to sell their business or retire may be OK with that. Most won’t be.”

    Microsoft launched several new programs and services to help partners embrace the challenges and opportunities associated with cloud computing. Cloud OS Accelerate is a new program for key partners Cisco, NetApp, Hitachi Data Systems, HP and Dell, where Microsoft will invest more than $100 million to help put thousands of new private and hybrid cloud solutions into the hands of customers. A new self-service business intelligence (BI) solution, Power BI for Office 365, combines the data analysis and visualization capabilities of Excel with the power of collaboration, scale and trusted cloud environment of Office 365. New Windows Azure Active Directory capabilities will make it possible for ISVs, CSVs and other third parties to leverage Windows Azure’s directory to enable a single sign-on (SSO) experience for their users, at no cost.

    Independent Software Vendors

    Microsoft announced new partner agreements with four leading global independent software vendors (GISVs). Microsoft’s R&D and worldwide reach help GISVs stay ahead of the curve and achieve new economies of scale. At the conference, the Microsoft Dynamics group announced that it would bring aboard four new GISVs, which have signed deals to participate in the exclusive program: distribution and manufacturing provider I.B.I.S. Inc., automotive software maker Incadea, food supply-chain consultancy Anglia Business Solutions, and retail automation vendor Escher Group.

    “As technology marches on and business needs evolve, the demand for software with specialized functionality is an ever-growing industry,” said Neil Holloway, corporate vice president, Microsoft Business Solutions (MBS) Sales and Operations. “We’re here this week to show how Microsoft is investing in its partners, how focused we are on helping deliver the best experiences to enterprise customers, and how the biggest, best opportunities for ISVs still lie ahead.”

    Recognizing its top-performing partners at WPC, Microsoft honored four Microsoft Dynamics partners for their innovative use of Microsoft Dynamics to deliver strategic and valuable solutions that meet diverse customer needs. Tribridge was also named the Microsoft Dynamics Outstanding Reseller of the Year.

    Organization Structure Changes Coming?

    Amid these transformational changes, Microsoft is continuously under pressure to adapt and evolve its business strategies. Just last week, Microsoft Interactive Entertainment executive Don Mattrick departed to become Zynga’s new CEO. Several sources have noted that Steve Ballmer will unveil plans on Thursday to drastically restructure the $286 billion company.

    3:00p
    Important Geographic and Risk Mitigation Factors in Selecting a Data Center Site

    The modern data center is no longer a singular location. Rather, new demands require the data center to be a geographically distributed network of resources. As new services, cloud components and users find their way into the data center model, there will be greater reliance on the services that the data center provides. The selection process that goes into choosing the right data center can be tedious, with numerous factors to weigh. However, geographical factors are often overlooked in site selection activities, or at best are incompletely examined. Many data centers publish information about hardware reliability or facility security, but geography, as a measure of a facility’s ability to competently serve its clients, is often neglected.


    (Image Source: FORTRUST via Federal Emergency Management Agency)

    According to this whitepaper, companies in the process of data center site selection use various criteria to determine the best facility to entrust with their information. The prevalence of natural disasters in U.S. regions is another factor by which companies can measure data center operations. Enterprises that outsource data center operations can mitigate certain risks by choosing locations in areas deemed low risk by historical and analytical data.

    To truly understand the magnitude of the geographical selection process, this whitepaper covers several core points:

    • Seismic zone data and fault line analysis
    • Defined flood zones
    • Weather and oceanic patterns
    • Natural disasters

    Download this whitepaper today to learn about other core data center selection factors including:

    • Access to more than one grid
    • Power grid maturity
    • On-site power infrastructure
    • Fiber backbone routes and their proximity to the data center
    • Type of fiber in proximity
    • Carrier presence
    • Carrier type

    As reliance on the data center continues to grow, it’s important to work with a provider that is capable of not only meeting your needs today, but also planning with you for the future.
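
    One generic way to compare candidate sites against criteria like those above is a simple weighted score. The sketch below is purely illustrative; the factors, weights and ratings are invented and are not drawn from the whitepaper.

        # Generic weighted scoring of candidate data center sites.
        # Factors, weights and ratings are invented for illustration.

        WEIGHTS = {
            "seismic_risk": 0.25,       # higher rating = lower risk
            "flood_risk": 0.20,
            "grid_maturity": 0.20,
            "fiber_proximity": 0.20,
            "carrier_presence": 0.15,
        }

        def site_score(ratings):
            """Combine 0-10 ratings per factor into a single weighted score."""
            return sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())

        if __name__ == "__main__":
            candidates = {
                "site-a": {"seismic_risk": 9, "flood_risk": 7, "grid_maturity": 8,
                           "fiber_proximity": 6, "carrier_presence": 7},
                "site-b": {"seismic_risk": 5, "flood_risk": 9, "grid_maturity": 7,
                           "fiber_proximity": 9, "carrier_presence": 8},
            }
            for name, ratings in candidates.items():
                print(name, round(site_score(ratings), 2))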

    5:31p
    Colt Adds 1.65MW to Netherlands Data Center

    The inside of Colt’s data center in the Netherlands, filled with double-stacked rows of modular data centers. Colt has expanded the facility’s capacity by 1.65 megawatts. (Photo: Colt)

    Colt is expanding its data center in the Netherlands with an additional 1.65 megawatts of power and 10,764 square feet of space (1000 square meters). The company initially deployed 3.3MW there earlier this year, and the additional capacity is telling of the company’s growth in the Netherlands.

    “Over the coming years, we plan to continue to expand the facility, which has an ultimate capacity of 10,000 square meters (107,639 square feet) providing secure and scalable solutions that allow businesses to remain competitive as their data center and IT requirements evolve,” said Adriaan Oosthoek, Executive Vice President of Colt Data Centre Services.

    Colt’s facility is strategically located in Roosendaal, between the major cities of Rotterdam and Antwerp, and acts as a key hub for the delivery of Colt’s low-latency services in both the Netherlands and Belgium. The data center has a design Power Usage Effectiveness (PUE) of 1.21 and uses fresh air cooling for most of the year. The facility has a 32MVA power connection, meaning the energy supply can scale right along with the data center as it grows.
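
    For reference, PUE is the ratio of total facility energy to the energy delivered to IT equipment, so a design PUE of 1.21 implies roughly 0.21 watts of cooling and other overhead for every watt of IT load. The quick sketch below works through the arithmetic; the IT load figure is an arbitrary example, not Colt’s actual load.

        # PUE = total facility energy / IT equipment energy.
        # The IT load below is an arbitrary example, not Colt's actual load.

        def total_facility_power_kw(it_load_kw, pue):
            return it_load_kw * pue

        if __name__ == "__main__":
            it_load_kw = 1650        # e.g. the newly added 1.65 MW expressed in kW
            pue = 1.21               # design PUE cited for the Roosendaal facility
            total = total_facility_power_kw(it_load_kw, pue)
            print(f"Total facility draw: {total:.0f} kW "
                  f"({total - it_load_kw:.0f} kW of cooling and other overhead)")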

    The new capacity targets the region’s growing colocation market. CIOs are under pressure to reduce costs, and colocation is a great opportunity to minimize upfront capital expenditures by moving toward an Opex model. Colocation also provides these companies the ability to scale that isn’t found (at least not easily so) with building an enterprise data center.

    “Demand for greater cost efficiency continues to drive IT investment decisions for many businesses in the region, and this is translating into increased demand for colocation services of all sizes,” said Oosthoek. “By putting flexibility right at the heart of our data center solutions, we are able to address this demand quickly and cost effectively from our facility, which is strategically located in Roosendaal between two of Europe’s major economic and transportation hubs.”

    For insight into Colt’s deployment process, check out this time-lapse video of the initial phase in Roosendaal.

    5:43p
    Latest AWS Price Cuts Target the Enterprise – and the Competition

    Amazon Web Services announced it is dropping the prices of dedicated instances on EC2 cloud computing by up to 80 percent. Before talk of price wars and commoditization kicks in, there’s a very strategic reason for Amazon’s moves.

    So what’s the impetus for price cuts for this particular product? Given that IBM strengthened its position with “born on cloud” companies through its acquisition of SoftLayer, it makes sense that Amazon would continue to try to push upmarket, towards the enterprise, given its strengths with startups. Both companies are targeting one another’s strongholds.

    Amazon’s announcement might have impacted the financial sentiment around at least one of its competitors. Rackspace stock dipped as much as 8.5 percent today, perhaps amid concerns that the AWS price cuts might affect Rackspace’s cloud business. Rackspace has a lot of dedicated and managed cloud customers, so the cuts in AWS’ dedicated EC2 offerings could potentially be seen as a threat to this business.

    The latest price drops are for dedicated instances, which are different from regular EC2 instances in that they run on single-tenant hardware. These instances are ideal for enterprise workloads, especially those under the umbrella of corporate policies or industry regulations. They are isolated from other instances belonging to other customers at the host hardware level.

    Here are the price cuts:

    • Dedicated Per Region Fee - An 80% price reduction from $10 per hour to $2 per hour in any Region where at least one Dedicated Instance of any type is running.
    • Dedicated On-Demand Instances - A reduction of up to 37% in hourly costs. For example, the price of an m1.xlarge Dedicated Instance in the US East (Northern Virginia) Region will drop from $0.840 per hour to $0.528 per hour.
    • Dedicated Reserved Instances - A reduction of up to 57% on the Reserved Instance upfront fee and the hourly instance usage fee. Dedicated Reserved Instances also provide additional savings of up to 65% compared to Dedicated On-Demand instances.

    There have been several price cuts this year at AWS (as well as at Google and Microsoft), prompting many to declare a price war. A RightScale survey in March of this year documented the price reductions, with AWS leading the pack with 29 price cuts over a 14-month period. Comparing cloud prices is often like comparing apples to oranges; however, these particular cuts appear to be very strategically driven.
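
    To put the dedicated on-demand and per-region figures above in concrete terms, the sketch below compares the monthly cost of running a single dedicated m1.xlarge instance in US East, plus the per-region fee, before and after the cuts; the 730-hour month is an approximation.

        # Monthly cost of one dedicated m1.xlarge in US East plus the per-region fee,
        # before and after the announced cuts. 730 hours per month is an approximation.

        HOURS_PER_MONTH = 730

        def monthly_cost(instance_rate, region_fee_rate, hours=HOURS_PER_MONTH):
            return (instance_rate + region_fee_rate) * hours

        if __name__ == "__main__":
            before = monthly_cost(0.840, 10.00)   # old rates from the announcement
            after = monthly_cost(0.528, 2.00)     # new rates from the announcement
            print(f"Before: ${before:,.2f}/month, after: ${after:,.2f}/month "
                  f"({(1 - after / before) * 100:.0f}% lower)")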

    AWS wants to win over the enterprise. The company released its data warehouse service, Redshift, last February. There was also AWS Glacier, a low-cost cold storage and archive service, and new high-memory instances. The company is clearly expanding its product and feature sets to appeal to the enterprise, and with these particular price cuts, to appeal to the pocketbooks of the enterprise.

    There most likely will be matching cuts by competitors, and more price cuts down the line. AWS is the clear leader in terms of cloud market share, but it desperately wants to win the enterprise customer and build on its strengths with “born on cloud” and internet-centric companies.

