Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 20th, 2013
12:00p
SunGard Seeks to Make Business Continuity User-Friendly
A close look at the customer cabinets inside SunGard Availability Services’ data center located at 1500 Spring Garden, Philadelphia, PA.
SunGard Availability Services believes business continuity services should be simpler for its customers to manage. This isn’t a hunch: SunGard engaged more than 100 customers every three weeks in its effort to design and develop a better way to deliver business continuity services.
The result is SunGard Assurance CM, a continuity management software-as-a-service offering. The product is more user-friendly – “Facebook easy,” one might say. It also serves as a platform for democratizing the entire process, allowing less technical stakeholders to provide valuable input.
SunGard Assurance CM is a secure SaaS solution available anytime from any device, including mobile devices, with a service level agreement (SLA) guaranteeing 99.9 percent uptime. Its interface is simplified and designed to be accessible even for those who aren’t disaster recovery experts. The solution also incorporates dynamic plan templates that dramatically reduce the amount of data entry needed to create plans that meet the test of disasters. There is also integration with Configuration Management Databases (CMDBs) to provide a real-time view of data center infrastructure configuration.
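For context on what that service level means in practice, here is a quick worked calculation of the downtime a 99.9 percent uptime commitment allows. The calendar math is generic; the actual measurement window and exclusions would be defined by the SLA itself.

```python
# What a 99.9 percent uptime SLA allows in practice: a quick worked
# calculation (calendar assumptions only; the SLA's exact measurement
# window and exclusions are defined in the contract, not here).

SLA = 0.999
downtime_fraction = 1 - SLA

periods = {
    "per week":          7 * 24 * 60,
    "per 30-day month": 30 * 24 * 60,
    "per 365-day year": 365 * 24 * 60,
}

for label, minutes in periods.items():
    allowed = minutes * downtime_fraction
    print(f"Allowed downtime {label}: {allowed:.1f} minutes")
# -> roughly 10 minutes a week, 43 minutes a month, 8.8 hours a year
```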
“This solution is designed for all people involved in business continuity planning and execution,” said Louis Grosskopf, general manager, Business Continuity Software, for SunGard, who said there’s a good reason to involve more personnel in the planning and execution. “There’s a common misperception that the team of business continuity (BC) planners are the only ones responsible for BC. However, these teams – usually five to 10 people, depending on the size of an organization – must rely on others in their organization to understand and be prepared to act on their BC plan.
“The ‘novice planners,’ also called the ‘innocent bystanders,’ are the people who outnumber the BC plan team and usually have no disaster recovery (DR) experience,” Grosskopf added. “However, they’re expected to provide the information critical for recovery (e.g. the manager of accounts payable, or a bank branch manager; someone with no DR experience).”
Business continuity assurance helps customers deliver on the core business benefits of disaster recovery and business continuity planning at the time of need:
- Providing service to customers with less interruption
- Safeguarding customers and employees before, during and after disaster scenarios
- Protecting corporate reputation
- Enhancing shareholder value
Since the 1980s, business continuity management (BCM) has seen numerous shifts in regulatory pressure, and each issue forced customers to react swiftly, according to Grosskopf.
“First it was data center recovery, then Y2K and next was terrorism,” he said. “The current issue disrupting BCM is state-sponsored cyber threats. As the world of business continuity stays focused on ever-increasing pressures from regulators, business leaders today demand broader participation in the planning process and increased confidence that today’s plans will lead to better outcomes. These changes in market dynamics are driving a need for a new business continuity approach.”
The company gives the example of natural disasters like Superstorm Sandy and the business risks they pose to customers.
“The outcome we all seek is to keep the business running, our employees safe and our shareholders protected from risk,” said Grosskopf. “With higher engagement from the whole organization, Assurance aids customers by increasing confidence at the time of need that the plan meets the test of disruption.”
12:30p
Mapping a Course to Data Center Efficiency
Jack Pouchet is vice president of business development and director of energy initiatives for Emerson Network Power.
Data center energy efficiency has been an increasing focus since the issue emerged in 2007. We believe dramatic energy savings can be realized without heroic measures that compromise availability. The key is to focus on the core IT systems, rather than just support systems. This is based on the cascade effect, which shows that focusing first on saving energy at the server-component level will drive energy savings throughout the data center.
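To make the cascade effect concrete, here is a minimal sketch of how a watt saved at the server component level compounds upstream. The stage efficiencies and cooling overhead below are illustrative assumptions, not Emerson's published figures; the point is simply that every upstream system has to deliver less energy once the load below it shrinks.

```python
# Illustrative sketch of the cascade effect: 1 W saved at the server
# component level avoids additional losses in every upstream system.
# All efficiency/overhead figures are assumptions for illustration,
# not Emerson's published Energy Logic numbers.

STAGES = [
    ("server power supply", 0.90),   # assumed PSU efficiency
    ("power distribution",  0.97),   # assumed PDU/wiring efficiency
    ("UPS",                 0.94),   # assumed double-conversion UPS efficiency
]
COOLING_OVERHEAD = 0.7  # assumed watts of cooling per watt of IT load removed

def upstream_savings(watts_saved_at_component: float) -> float:
    """Watts no longer drawn from the utility when the component load drops."""
    load = watts_saved_at_component
    for _stage, efficiency in STAGES:
        load /= efficiency  # each stage had to supply more than it delivered
    return load + load * COOLING_OVERHEAD

if __name__ == "__main__":
    saved = upstream_savings(1.0)
    print(f"1 W saved at the component level avoids ~{saved:.2f} W at the utility")
```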
In 2007, Emerson Network Power introduced a free, vendor-neutral roadmap to saving 50 percent of your data center energy use. While many of the roadmap’s core principles, such as the cascade effect, still hold true, the industry has evolved at a rapid rate over the past five years. The need to maintain or build highly available data centers remains the same, but IT and critical infrastructure technologies have changed, creating new opportunities to optimize efficiency and capacity strategies.
As a result, we’ve updated the approach to incorporate advances in technology and new best practices that have emerged since 2007.
Ten updated strategies serve as a roadmap. In total, they have the potential to reduce energy use by up to 74 percent in a typical 5,000 square-foot data center with a PUE of 1.9 and energy consumption of 1.5 MW (a quick tally of the per-strategy figures appears after the list).
- Low-Power Components: The cascade effect rewards energy savings at the server component level, which is why low-power components, such as high-efficiency processors, represent the first step. [Save 172 kW or 11.2%]
- High-Efficiency Power Supplies: Power supply efficiency has improved since our original approach in 2007, but power supplies continue to consume more energy than necessary. The average power supply efficiency is now estimated at 86.6 percent, well below the 93 percent that is available. [Save 110 kW or 7.1%]
- Server Power Management: Server power management can significantly reduce the energy consumption of idle servers. Data center infrastructure management systems that collect and consolidate real-time operating data from rack power distribution systems can track server utilization, aiding in the effective use of power management. [Save 146 kW or 9.4%]
- ICT Architecture: Unoptimized network architectures can compromise efficiency and performance. Implementing a cohesive ICT architecture involves establishing policies and rules to guide design and deployment of the networking infrastructure, ensuring all data center systems fall under the same rules and management policies. [Save 53 kW or 3.5%]
- Server Virtualization and Consolidation: Virtualization is facilitating the consolidation of older, power-wasting servers onto much less hardware. It also increases the ability of IT staff to respond to changing business needs and computing requirements. Most data centers have already discovered the benefits of virtualization, but there is often opportunity to go further. [Save 448 kW or 29%]
- Power Architecture: Historically, data center designers and managers have had to choose between availability and efficiency in the data center power system. Now, new advances in double-conversion UPS technology have closed the gap in efficiency, and new features enable double-conversion UPS systems to reach efficiencies on par with line-interactive systems. [Save 63 kW or 4.1%]
- Temperature and Airflow Management: Take temperature, humidity and airflow management to the next level through containment, intelligent controls and economization. From an efficiency standpoint, one of the primary goals of preventing hot and cold air from mixing is to maximize the temperature of the return air to the cooling unit. [Save 80 kW or 5.2%]
- Variable-Capacity Cooling: Cooling must be sized to handle peak load conditions, which occur rarely in the typical data center. Cooling systems that can adapt to changing conditions and operate efficiently at partial loads save energy. [Save 40 kW or 2.6%]
- High-Density Cooling: Optimizing data center energy efficiency requires moving from traditional data center densities to an environment that can support much higher densities. High-density cooling makes that possible. [Save 23 kW or 1.5%]
- Data Center Infrastructure Management: Data center infrastructure management (DCIM) technology can collect, consolidate and integrate data across IT and facilities systems to provide a centralized, real-time view of operations that can help optimize data center efficiency, capacity and availability. DCIM also delivers significant operational efficiencies by providing auto-discovery of data center systems and simplifying the process of planning for and implementing new systems. [Because DCIM is integral to many Energy Logic 2.0 strategies, it isn’t possible in this model to attribute an isolated savings percentage to DCIM.]
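As a rough check on that 74 percent figure, the per-strategy percentages quoted above can simply be summed. The sketch below does exactly that; note that the quoted kW figures imply a modeled baseline slightly above the 1.5 MW headline number, so the exact denominator is treated here as an assumption.

```python
# Quick tally of the per-strategy savings quoted in the list above.
# The kW and percentage figures come straight from the article; the
# underlying baseline is an assumption inferred from them (the kW
# values imply a total draw slightly above the quoted 1.5 MW).

savings = {
    "low-power components":             (172, 11.2),
    "high-efficiency power supplies":   (110, 7.1),
    "server power management":          (146, 9.4),
    "ICT architecture":                 (53, 3.5),
    "virtualization and consolidation": (448, 29.0),
    "power architecture":               (63, 4.1),
    "temperature and airflow mgmt":     (80, 5.2),
    "variable-capacity cooling":        (40, 2.6),
    "high-density cooling":             (23, 1.5),
}

total_kw = sum(kw for kw, _ in savings.values())
total_pct = sum(pct for _, pct in savings.values())
print(f"Total savings: {total_kw} kW ({total_pct:.1f}% of the modeled facility)")
# -> roughly 1,135 kW and ~73.6%, i.e. the "up to 74 percent" in the roadmap
```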
This new process demonstrates the potential that still exists to optimize the data center. The introduction of a new generation of management systems that provide greater visibility and control of data center systems, and a continued emphasis on efficiency, serve as proof that there is no time like the present for the industry to begin taking significant actions to reduce the overall energy consumption of data centers.
Organizations need a clear roadmap for driving dramatic reductions in energy consumption without jeopardizing data center performance. But just how far can a data center efficiency approach drive you? Take a look at how far each of the 10 energy-saving steps could take you via electric car. The cumulative result can literally drive you around the world.
To see how much each strategy can save your data center, visit the Cascading Savings Calculator. This online tool lets you explore the impact of each strategy by entering information that is specific to your data center, such as the load and facility PUE.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:00p
Inside ViaWest’s Tier IV Data Center
ViaWest’s new Lone Mountain data center features a cooling system that can alternate between three methodologies, depending on weather conditions. The “Super-CRAC” unit in front manages a chilled water loop. (Photo: ViaWest)
ViaWest recently opened the doors at its new Lone Mountain Data Center in north Las Vegas. The facility will offer 74,000 square feet of raised floor space within a 110,000 square foot building. The Lone Mountain facility is the first Tier IV designed colocation facility in North America, according to the Uptime Institute. Check out our photo feature, Closer Look: ViaWest’s Tier IV Las Vegas Data Center, for details.
1:30p
Siemens Brings Clarity to Crowded DCIM Market
With more than 75 companies now offering tools under the wide umbrella of DCIM, it isn’t easy for a new player to make a splash. Unless that new player is global electronics and electrical engineering powerhouse Siemens, which has focused its ambitions on the data center and is heading into DCIM in a big way.
Datacenter Clarity LC is the company’s foray into the world of DCIM (data center infrastructure management), a suite that combines IT management and facilities management functions. The company has thrown its muscle into this effort, boosted by a broad existing portfolio of data center solutions, a history in efficiency and a global talent pool of engineers in support.
Meeting point between IT and Facilities
The DCIM solution unveiled last month, Datacenter Clarity LC, consists of engineering and lifecycle management software tools that ensure uptime while optimizing energy and operational efficiencies to accommodate the rapidly changing needs of today’s data centers. It integrates information from both IT and facility assets, workflows and work orders, and conducts “what if” analyses.
“Datacenter Clarity LC can help you optimize capacity planning while driving operation and energy efficiencies,” said John Kovach, Siemens’ new Global Head of Data Centers.
Datacenter Clarity has an open API architecture that facilitates interoperability with other systems. “Our vendor neutral solution supports more than 400 protocols from both the IT and facility perspective, giving customers total visibility of their data centers,” said Kovach.
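To illustrate what "total visibility" across IT and facility protocols involves in practice, here is a minimal, hypothetical sketch of the kind of adapter layer a DCIM suite uses to normalize readings from different sources into one data model. None of this reflects Siemens' actual API; the protocol choices, class names and fields are assumptions for illustration only.

```python
# Hypothetical sketch of a DCIM-style adapter layer that normalizes
# readings from IT and facility sources into a single data model.
# Protocols, classes and fields are illustrative assumptions, not the
# Datacenter Clarity LC API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    asset_id: str      # rack, CRAC, UPS, server, etc.
    metric: str        # e.g. "power_kw", "supply_temp_c"
    value: float
    source: str        # which protocol/adapter produced it
    timestamp: datetime

class SnmpAdapter:
    """Pretend IT-side source, e.g. a rack PDU polled over SNMP."""
    def poll(self) -> list[Reading]:
        now = datetime.now(timezone.utc)
        return [Reading("rack-12-pdu-a", "power_kw", 4.2, "snmp", now)]

class ModbusAdapter:
    """Pretend facility-side source, e.g. a CRAC unit on Modbus."""
    def poll(self) -> list[Reading]:
        now = datetime.now(timezone.utc)
        return [Reading("crac-03", "supply_temp_c", 18.5, "modbus", now)]

def collect(adapters) -> list[Reading]:
    """One normalized view across IT and facility systems."""
    readings = []
    for adapter in adapters:
        readings.extend(adapter.poll())
    return readings

if __name__ == "__main__":
    for r in collect([SnmpAdapter(), ModbusAdapter()]):
        print(f"{r.timestamp:%H:%M:%S} {r.asset_id:<14} {r.metric:<14} {r.value} ({r.source})")
```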
Siemens Ripe for DCIM Play
Siemens’ DCIM power play wasn’t out of the blue. Siemens has an established track record in facility/enterprise infrastructure development, separately providing different aspects of data center infrastructure to customers over the years. The list of what the company provides isn’t short:
- Medium Voltage gear that connects the building to the utility grid.
- Low Voltage gear that distributes the power throughout the building.
- All the interconnecting controls to enable the Emergency Generators and UPS equipment.
- Complete Power Monitoring system that provides detail on the usage, power balance and consumption of the entire facility.
- Building Automation and temperature controls for the cooling of the facility.
- Fire and Life Safety systems to protect the people and equipment within the building.
- Perimeter and physical security systems to control access to the building, and CCTV systems to provide visual images of critical locations within the facility.
The company sees a formal DCIM play as the logical evolution of its data center strategy, and is setting out to bridge the divide of IT and facilities management.
“The exponential growth and importance of data centers was leading to a need to bridge the growing ‘silos’ of IT and facilities’ management of data centers,” said Kovach. “Having the two areas collaborate and work together was a constant challenge requiring a central system that would eliminate the inefficiencies that were developing from those separate silos. This is the purpose of DCIM – and Siemens’ existing expertise in the different infrastructure areas, coupled with our established leadership in energy and operational efficiency, seemed a perfect fit.”
2:00p
Behind Blue Waters: Assembly of Cray XE6 Blades
What goes into making a supercomputer? For Blue Waters, the National Center for Supercomputing Applications (NCSA) project at the University of Illinois, there will be more than 235 Cray XE6 cabinets, each with 24 blade assemblies. In this video, Cray’s Director of Manufacture Logistics Group Steve Samse demonstrates the assembly of the board and components that make up each blade. The work is being performed at the Cray facility in Chippewa Falls, Wisconsin. The video runs 2:49.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.
2:30p
Custom Data Centers: Responsibilities of the Stakeholders
This is the third article in a series on the DCK Executive Guide to Custom Data Centers.
As with any large-scale project, when commissioning a data center design, whether standard or custom, responsibilities must be clearly understood, and the points of contact (POC) and/or project managers (PM) need to be carefully selected and agreed to by all involved parties. It is highly recommended that the POC or PM for the organization that is purchasing or leasing the data center be generally familiar with, and have some experience in, the operation and basic technologies of a data center. This is especially important for a custom design, and simply appointing an “all purpose” internal POC or PM without any specific data center experience should be avoided if at all possible. If such a qualified person is not available internally, consider utilizing a qualified independent consultant to act as the POC or PM, or at the very least as a trusted advisor. While they do not have to be an engineer, they do need to be able to fully understand what is being asked of the bidding data center design and build firms, and the implications of their responses, questions or change requests as the designs are developed.
Before delving into the details, let’s first clarify the general data center categories and terms: standard, build to suit and, of course, custom design.
Standard Data Center
While there really is no such thing as a generic “standard” data center, the term generally refers to a design that follows common industry standards and best practices. This usually covers the layout of the rows of cabinets, typically capable of supporting a moderate power density, then selecting the tier level of infrastructure redundancy and a total facility size commensurate with your organization’s immediate and future growth expectations. This type of data center is readily available for lease or purchase (please see part 1 of this series, “Build vs Buy”) and is built using standard equipment and straightforward designs.
Build to Suit Data Center
The “Build to Suit” term and other similar marketing names, such as “Turn Key” and “Move-In Ready,” are used by some data center builders and providers in the industry. While the name sounds like, and would seem to imply, a completely custom design, it generally offers a somewhat lower level of customization within certain limits of a basic standard design. This should be given serious consideration, since in many cases it may meet some, most or even all of your specialized requirements with a minimal cost impact. Also, by keeping within the basic framework of a standard design, it would be less likely to face early obsolescence should a normal technology refresh occur.
Custom Design Data Center
Like a custom built race car, designed and built for performance, a custom data center should represent a technically leading edge, tour de force design. In the case of a data center, the extreme performance is typically manifested in the form of higher flexibility, reliability, energy efficiency and power density, or some combination thereof.
Hardly a week goes by without headlines in the data center publications announcing a new custom built data center based on a radical new design, most commonly by a high profile firm in the Internet search, social media and cloud services arena, such as Google, Facebook or Microsoft. It is important to understand that these are typically based on very large scale dedicated applications and may involve specialized custom built hardware for use in so-called hyper-scale computing. As an example, Facebook and Google utilize unique custom built servers (each has its own server design), which do not have standard enclosures and require special matching cabinets, as well as specialized power and cooling systems.
This results in some technical and financial advantages, primarily related to lower cost per server and better overall data center energy efficiency. However, before embarking down the path of a highly customized data center design, it is important to understand that it requires a sufficiently large scale and IT architecture. It also may limit the general ability to support standardized racks and IT equipment. Let’s look at some emerging trends in custom data center designs.
Hybrid and Multiple Tier Levels
Tier levels generally refer to the level of redundancy and fault tolerance in a data center, which translates into a projected availability rating (Tier 1 being the lowest, Tier 4 the highest).
One area of customization that is becoming more popular is the incorporation of multiple tier levels of infrastructure redundancy within the data center. This can lower costs and may increase energy efficiency by creating a lower tier level (i.e. Tier 2) zone for less critical applications, while still providing a higher-redundancy (Tier 3-4) area for the most critical systems and applications.
There are also data center operators and owners who do not feel they have to follow all the requirements of the tier level system exactly, but may prefer to use selected concepts and have a hybrid design. This allows them the flexibility to provide a greater level of redundancy in the electrical systems (i.e. a 2x[N+1] dual-path system, comparable to a Tier 4 design), while using a less complex and lower cost cooling system with only N+1 cooling components (for more details on tier levels, please refer to the “Uptime” section in part 1, “Build vs Buy”).
Of course, once you have begun to explore a custom design, you may choose to mix the multiple and hybrid design schemes to match your organization’s various application and system requirements, which may also lower your CapEx and OpEx costs.
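As a simple illustration of what those redundancy notations mean in practice, the sketch below counts the units needed to serve a given load under N, N+1, 2N and 2x(N+1) schemes. The load and unit-capacity figures are made-up assumptions; the formulas simply restate the standard definitions.

```python
import math

# Illustrative sketch: how many power or cooling units different
# redundancy schemes require for a given load. The 750 kW load and
# 300 kW unit size are made-up assumptions for illustration.

def units_needed(load_kw: float, unit_kw: float, scheme: str) -> int:
    n = math.ceil(load_kw / unit_kw)      # N: bare minimum to carry the load
    return {
        "N":       n,
        "N+1":     n + 1,                 # one spare unit
        "2N":      2 * n,                 # a full duplicate path
        "2x(N+1)": 2 * (n + 1),           # two paths, each with a spare
    }[scheme]

if __name__ == "__main__":
    for scheme in ("N", "N+1", "2N", "2x(N+1)"):
        count = units_needed(load_kw=750, unit_kw=300, scheme=scheme)
        print(f"{scheme:>8}: {count} units of 300 kW")
```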
There is also a growing trend to segregate hardware by environmental requirements. Systems such as tape backup equipment require tight environmental control, yet do not require much actual cooling or power density. By isolating them from other hardware such as servers, you are able to properly support and maintain the reliability of more sensitive disk-based storage and tape library equipment by tightly controlling temperature and humidity. This also improves the energy efficiency of the cooling system for other, more robust hardware, such as servers or the new solid-state storage systems, by allowing for raised temperatures and expanded humidity ranges (for more on this please refer to part 3, “Energy Efficiency”).
Containerized Data Center
The data center in a container is an alternative that is beginning to find some traction in the data center industry. These can be either an add-on to a traditional facility or the basis for an entire “data center” built primarily from containerized or modular prefabricated units. Some designs are weather-proof units that can be placed on a prepared slab and then connected to a core power and cooling infrastructure built to support them, while other containers may require a warehouse-type building to shelter them and, again, need to be connected to the core support systems.
Although similar in concept, it is important to distinguish between actual container units and modular data center systems. It is also important to note that containerized solutions or modular systems are not necessarily an inexpensive alternative to a traditional brick and mortar data center facility. They are typically best suited to very high density applications of tightly packed, mostly identical hardware, typically thousands of small servers or several hundred blade servers, configured to deliver hyper-scale computing. Their main attraction is for large organizations that require the ability to respond quickly to rapid growth in computing power and, to a certain degree, to minimize initial capital expense by adding containers or modules on an as-needed basis.
Regardless of whether you consider a container or a modular system, it still has to be installed at a data center facility that will support and secure it, and the overall facility infrastructure must be pre-designed and pre-built for the total amount of utility power, generator back-up capacity and power conditioning (i.e. the UPS typically required for most containers), and in some, but not all, cases a centralized cooling plant.
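As a rough illustration of that pre-design exercise, the sketch below sizes utility and generator capacity for a planned number of containers. The per-container load, PUE, container count and generator rating are made-up assumptions for illustration only.

```python
import math

# Illustrative sketch of pre-sizing facility infrastructure for a
# containerized build-out. All figures (container IT load, PUE,
# planned count, generator size) are made-up assumptions.

CONTAINER_IT_KW = 400      # assumed IT load per container
FACILITY_PUE = 1.4         # assumed overall PUE including central cooling
PLANNED_CONTAINERS = 10    # containers the site must eventually support
GENERATOR_KW = 2000        # assumed rating of each backup generator

total_it_kw = CONTAINER_IT_KW * PLANNED_CONTAINERS
utility_kw = total_it_kw * FACILITY_PUE                  # power to provision up front
generators = math.ceil(utility_kw / GENERATOR_KW) + 1    # N+1 backup

print(f"IT load at full build-out: {total_it_kw} kW")
print(f"Utility capacity to pre-provision: {utility_kw:.0f} kW")
print(f"Generators (N+1): {generators} x {GENERATOR_KW} kW")
```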
Containers can also be part of a hybrid custom design, in which a relatively standard, more traditional building serves as the core primary data center. In that case the overall facility infrastructure has pre-allocated space, as well as power and cooling infrastructure, for containerized systems that can then be easily added as needed for rapid expansion.
Open Compute Project
There are also some resources for “non-standard” or leading edge “outside of the box” designs. One in particular is the Open Compute Project (OCP), which has published its highly energy efficient basic designs and specialized IT equipment specifications. While not every organization is an ideal candidate for all the elements disclosed in the OCP designs, some aspects of the designs can be chosen selectively and incorporated into a custom data center. Some data center providers offer to build a data center based on the OCP designs.
You can download a complete PDF of this article series on the DCK Executive Guide to Custom Data Centers, courtesy of Digital Realty.
3:00p
Cloud Computing: A CFO’s Perspective
Cloud computing and the technologies surrounding the platform have made a big impact on the modern business. Now, more organizations are looking for ways to leverage the cloud and see where it can create cost savings. Although many IT professionals will fight for a cloud model, in many cases the CFO needs to make a good recommendation as well.
The IT infrastructure is an absolutely vital part of any company. In fact, IT is now at the top of the CFO’s agenda. According to Gartner’s The CFO’s Role in Technology Investments, 26 percent of IT investments require the direct authorization of the CFO and 42 percent of IT organizations now report to the CFO. This is why, in recent years, the IT department and the business organizations have become much closer in terms of the technologies the entire unit wants to deploy. Just like any new technology, the cloud can have very positive results for a company. However, these results only come about after thorough planning around cloud computing.
In this whitepaper, HP takes a deeper look at the cloud, but directly from a CFO’s viewpoint. This means analyzing the key benefits of moving toward a cloud platform, which include:
- Moving capex to opex (a simple comparison is sketched after this list).
- Adding speed and flexibility.
- Creating instant access to innovation.
- Creating a better and more resilient environment.
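To make the capex-to-opex point concrete, here is a minimal sketch comparing an up-front hardware purchase, amortized over its useful life, with an equivalent pay-as-you-go monthly charge. Every figure is a made-up assumption for illustration, not a number from the HP whitepaper.

```python
# Illustrative capex-vs-opex comparison. All figures are made-up
# assumptions, not numbers from the HP whitepaper.

CAPEX_PURCHASE = 600_000        # assumed up-front spend on servers/storage
USEFUL_LIFE_MONTHS = 36         # assumed depreciation period
CAPEX_OPERATING_COST = 8_000    # assumed monthly power/space/admin on-prem

CLOUD_MONTHLY_FEE = 22_000      # assumed monthly cloud charge (pure opex)

capex_monthly = CAPEX_PURCHASE / USEFUL_LIFE_MONTHS + CAPEX_OPERATING_COST
print(f"On-premises, amortized: ${capex_monthly:,.0f}/month (plus ${CAPEX_PURCHASE:,} paid up front)")
print(f"Cloud (opex only):      ${CLOUD_MONTHLY_FEE:,.0f}/month, no up-front outlay")
# The cash-flow difference, not just the monthly totals, is what shifts
# spending from the capital budget to the operating budget.
```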
Download HP’s whitepaper to see how a CFO should view the cloud and where the key benefits are located. Remember, there are a lot of uses for cloud computing. Many organizations can leverage a cloud model to reduce legacy systems or create a private infrastructure capable of agile growth. However, the key here is ensuring that the entire business entity can see the direct benefits of cloud computing. That means IT departments, other business units and, of course, the CFO.
9:52p
Power Outage Knocks DreamHost Customers Offline
Web hosting provider DreamHost experienced an extended outage when power systems failed at its data center in Irvine, Calif. The incident created hours of downtime across Tuesday and Wednesday for many of DreamHost’s more than 350,000 customers, who host 1.2 million blogs, websites and apps with the company.
The problems started at 3 p.m. Pacific time on Tuesday, when the uninterruptible power supply (UPS) system failed suddenly at the Irvine facility operated by DreamHost’s data center provider, Alchemy Communications. When the UPS systems died, the emergency backup generators also failed to start properly.
“The power failure lasted just a few minutes, however it created a number of major issues with our network and systems in the Irvine DC that took many hours for our operations teams to recover from,” wrote DreamHost CEO Simon Anderson on the DreamHost status blog. “Not the least of which was the loss of several critical pieces of networking hardware which did not survive the power event.”
Second UPS Issue Prompts More Downtime
Anderson said Alchemy has a “good track record” in maintaining uptime at the Irvine facility, but may have been conducting unannounced UPS maintenance. At 4:30 a.m. Pacific on Wednesday, the UPS systems failed again.
“This resulted in another complete power outage and another intense period of reboots, restores and system checks from our team,” said Anderson. “The time to restore most services in the wake of this second power outage was much quicker, mainly because there were no resulting hardware failures and we had learned from the first failure. Alchemy has opted to run the Irvine DC on generators until the UPS issues are fully identified and resolved.”
Anderson apologized to customers for the outage and said that DreamHost would offer service credits to affected customers.
“I fully recognize that any disruption to services can affect important production environments and projects,” the CEO said. “Our team will work diligently to ensure that we mitigate the power issues going forward, including a full audit of all facilities that house DreamHost customer data. We will learn from this event and continuously improve our operations and services.”
DreamHost has been involved in a number of high-profile outages over the years, but those incidents never seemed to slow the growth of the company, which has focused on affordable web hosting accounts.
Last year DreamHost expanded its data center footprint to the East Coast, adding a presence in Ashburn, Virginia as part of a broader effort to improve its reliability and boost network performance.