Data Center Knowledge | News and analysis for the data center industry
Tuesday, May 28th, 2013
11:30a
Data Center Jobs: McKinstry
At the Data Center Jobs Board, we have a new job listing from McKinstry, which is seeking a Senior Data Technician (HVAC Controls/Critical Facility) in Denver, Colorado.
The Senior Data Technician (HVAC Controls/Critical Facility) is responsible for the successful and timely completion of assigned projects by using appropriate resources effectively, balancing customer requirements with the company's agreed-upon strategies, defining customer project requirements, communicating with other McKinstry departments to ensure agreements are successfully managed, opportunities are maximized and customers are satisfied, evaluating industry standards as new best practices emerge, and closely coordinating potential opportunities with clients while sharing all applicable information broadly. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
12:30p
How Design Can Save the Average Data Center More than $1M
Peter Panfil is Vice President, Global Power Sales, Emerson Network Power. With more than 30 years of experience in embedded controls and power, he leads global market and product development for Emerson’s Liebert AC Power business.
There are many options to consider in the area of data center power system design, and every choice has an impact on data center efficiency and availability. The data center is directly dependent on the critical power system, and a poorly designed system can result in unplanned downtime, excessive energy consumption and constrained growth.
When making choices, consider the UPS system configuration, UPS module design and efficiency options, and the design of the power distribution system.

Increase Utilization Rate to Improve Efficiency
Most businesses need to consider having some level of redundancy in their UPS system to mitigate the cost of downtime, eliminate single points of failure and provide for concurrent maintenance.
A concern often raised in discussions about redundancy is utilization rate. A 2N UPS system has the highest availability but unfortunately offers the lowest utilization. Each bus of a 2N system can only be loaded to 50 percent so that one bus can carry the full load if the other bus is not available. Many business-critical data centers use 40 percent as the peak loading factor on each bus in this configuration to allow for variations in IT power draw and to provide a cushion for immediate expansion. Customers have also expressed concern that not every UPS supplier can be trusted to support 100 percent load.
Find a UPS supplier you can trust, one whose UPS can deliver full load across the range of high and low line conditions, temperatures up to 40°C, a blocked filter, a fan failure and altitude. Potential cost savings from moving utilization to 45 percent: $2k/yr.
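To see what the loading factor means in installed-capacity terms, here is a minimal sketch of the 2N arithmetic. The 1,000 kW bus rating is an illustrative assumption, not a figure from this article; the point is that in a 2N pair the utilization of total installed capacity equals the per-bus loading factor, and the loss of one bus pushes the survivor to twice that figure.

#include <stdio.h>

int main(void) {
    /* Illustrative figures only -- actual ratings vary by site */
    double bus_kw = 1000.0;                 /* rating of each bus in the 2N pair */
    double installed_kw = 2.0 * bus_kw;     /* total installed UPS capacity      */
    double loadings[] = {0.40, 0.45, 0.50}; /* peak loading factor on each bus   */

    for (int i = 0; i < 3; i++) {
        double it_load_kw = 2.0 * bus_kw * loadings[i]; /* load is split across both buses */
        printf("per-bus loading %.0f%%: IT load %.0f kW, "
               "utilization of installed capacity %.0f%%, "
               "surviving-bus loading after a failure %.0f%%\n",
               100.0 * loadings[i], it_load_kw,
               100.0 * it_load_kw / installed_kw, 200.0 * loadings[i]);
    }
    return 0;
}

Raising the per-bus loading from 40 to 45 percent lifts utilization of the installed plant by the same five points, while still leaving the surviving bus below its full rating after a failure.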
Don’t Gamble on Availability – Fault Isolation Matters
Transformers play a critical role in the power system by providing circuit isolation, localized neutral and grounding points for fault current return paths, and voltage transformation.
Removing the transformers can result in a smaller, lighter footprint that is well suited for installation in the row of racks. Removing the transformers also exposes the UPS system to faults that could reduce the availability or push the critical load over onto utility power more often.
One very common fault that has this effect is a DC ground fault. Shorting the positive or negative battery terminal to ground in a transformer-based system results in an alarm, but the UPS continues to provide protected power. Shorting the positive or negative battery terminal to ground in a transformer-less architecture at best results in a transfer to bypass, leaving the load exposed to unprotected power, and at worst drops the critical load.
One manufacturer even filmed the performance of its transformer-less UPS during a battery ground fault. The UPS output waveform went through severe gyrations, the UPS groaned, cables shook and the UPS transferred to bypass. That manufacturer touted this as robust performance.
Do you consider transferring to bypass during one of the most common UPS system faults robust? Don’t bet your career on it. Potential Cost Savings of increased availability: $505k per occurrence
Modern Transformer-Based Topologies + Advanced Energy Optimization = State of the Art Technology
There is a misperception that transformer-based UPS systems are “old technology.” This myth is spread for the most part by UPS manufacturers who only offer transformer-less UPS. Modern transformer-based UPS systems deploy the latest DSP-based controls and energy optimization features to offer the best availability for business-critical applications, at efficiencies that meet or exceed transformer-less offerings.
One such energy optimization mode is Intelligent Eco-Mode, which provides the majority of the critical bus power through the continuous-duty bypass. This technology keeps the inverter active and always ready to assume the load in the event of an outage, a dramatic improvement over energy optimization modes that do not keep the inverter active. UPS systems that do not deploy the latest active-inverter Intelligent Eco-Mode often have a notch in the output waveform going in and out of eco mode, because they have to perform an interrupted transfer to turn off the bypass before turning on the inverter. Notch in the output? Interrupted transfer? Gulp! Increased cost savings using Intelligent Eco-Mode: $20,350/yr.
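For a sense of where savings of that order come from, here is a back-of-the-envelope sketch. The 500 kW load, the efficiency figures and the electricity price are illustrative assumptions, not the basis of the $20,350 figure above; the point is simply that a few points of UPS efficiency translate into real money at data center scale.

#include <stdio.h>

int main(void) {
    /* Illustrative assumptions only -- not the basis for the article's figure */
    double it_load_kw = 500.0;   /* protected IT load                  */
    double eff_double = 0.94;    /* double-conversion efficiency       */
    double eff_eco    = 0.99;    /* eco-mode (bypass path) efficiency  */
    double price_kwh  = 0.10;    /* electricity price, $/kWh           */
    double hours_year = 8760.0;

    double input_double = it_load_kw / eff_double;  /* UPS input power, normal mode */
    double input_eco    = it_load_kw / eff_eco;     /* UPS input power, eco mode    */
    double savings = (input_double - input_eco) * hours_year * price_kwh;

    printf("UPS input power: %.1f kW double-conversion vs %.1f kW eco-mode\n",
           input_double, input_eco);
    printf("Estimated annual savings: $%.0f\n", savings);
    return 0;
}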
Weigh Safety Risks and Hidden Costs of Alternative Distribution Voltages
480V 3-wire distribution is the norm in enterprise data centers. There has been a lot of discussion about going 400/230V 4-wire directly from the UPS to the server. While this configuration looks good on paper, it has some significant limitations. Fault current can be much higher in these direct-to-the-server configurations, which poses an equipment and personnel risk. This configuration also strands capacity in the gear, requires higher-ampacity buses and can increase wiring costs. Before going to this extreme, consult with your data center trusted adviser to understand the costs and risks associated with this architecture. Potential cost savings using higher distribution voltages: $3,500/yr.
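To illustrate why the ampacity point matters, here is a minimal sketch of the three-phase arithmetic. The 300 kW load and 0.95 power factor are illustrative assumptions; the takeaway is that 400/230V distribution does put roughly 230V line-to-neutral at the server, but moving the same power at the lower line-to-line voltage draws roughly 20 percent more current per phase than 480V distribution.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Illustrative assumptions only */
    double load_kw = 300.0;          /* three-phase load on the distribution path */
    double pf = 0.95;                /* power factor                              */
    double v_ll[] = {480.0, 400.0};  /* line-to-line distribution voltages        */

    for (int i = 0; i < 2; i++) {
        double v_ln = v_ll[i] / sqrt(3.0);                            /* line-to-neutral voltage */
        double amps = load_kw * 1000.0 / (sqrt(3.0) * v_ll[i] * pf);  /* line current per phase  */
        printf("%.0fV line-to-line: %.0fV line-to-neutral, %.0f A per phase for %.0f kW\n",
               v_ll[i], v_ln, amps, load_kw);
    }
    return 0;
}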
Do Your Research
Due diligence on the latest UPS technology and efficiency optimization modes will help you choose improved critical power systems with the highest availability and new levels of utilization and efficiency.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:17p
TACC Gets 100 Gigabit Connection for Supercomputers
The Stampede supercomputer, pictured above, is one of the systems at the Texas Advanced Computing Center in Austin, which will benefit from a new 100 Gigabit connection to Internet2. (Photo: TACC)
The Texas Advanced Computing Center (TACC), home to the Stampede supercomputer, is preparing to accelerate its Internet2 connectivity to 100 Gbps this summer, while NASA Ames has selected an SGI system to support research.
TACC Upgrades to 100 Gbps
TACC at the University of Texas at Austin announced that it will upgrade its Internet connectivity from 10 gigabits per second (Gbps) to 100 Gbps with the help of Internet2. The upgrade will empower scientists to reach TACC using Internet2’s new 100 Gigabit Ethernet and 8.8-terabit-per-second optical network, platform, services and technologies. It will also enable the University of Texas Research Cyberinfrastructure (UTRC) to have 100 Gbps connectivity between UT System institutions and other research universities throughout Texas.
“TACC’s world-class computational, visualization and storage systems enable users to create and manipulate petabytes of data, and we’ll add new systems focusing on data intensive computing starting this summer,” said TACC Director Jay Boisseau. “This Internet2 bandwidth upgrade will enable researchers to achieve a tenfold increase in moving data to/from TACC’s supercomputing, visualization and data storage systems, greatly increasing their productivity and their ability to make new discoveries.”
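To put the tenfold increase in perspective, here is a rough transfer-time sketch. It assumes a 1-petabyte dataset and full, sustained link utilization, both illustrative simplifications; real transfers are limited by protocols, storage and competing traffic.

#include <stdio.h>

int main(void) {
    /* Back-of-the-envelope only: assumes the link runs flat out */
    double dataset_bits = 1.0e15 * 8.0;   /* 1 petabyte expressed in bits */
    double rates_gbps[] = {10.0, 100.0};  /* old and new Internet2 links  */

    for (int i = 0; i < 2; i++) {
        double seconds = dataset_bits / (rates_gbps[i] * 1.0e9);
        printf("%.0f Gbps: about %.0f hours to move 1 PB\n",
               rates_gbps[i], seconds / 3600.0);
    }
    return 0;
}

Under those assumptions, a petabyte drops from roughly nine days of transfer time at 10 Gbps to under a day at 100 Gbps.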
Internet2 comprises more than 220 U.S. universities, 60 corporations, 70 government agencies, and more than 100 research and education partners. Internet2 recently established direct peering with Microsoft Cloud Services, enabling improved access to infrastructure and application services that support virtual learning environments and large-scale, data-intensive research projects.
“The 100 Gbps Internet2 Innovation Platform serves as an accelerator to scientific research and provides increased access to the most advanced computational resources for faculty members at institutions around the country,” said Jim Bottum, Internet2 Inaugural Presidential Fellow and chief information officer at Clemson University. “By streamlining access to the latest advanced national cyberinfrastructure systems like Stampede at TACC, researchers are afforded a lightweight avenue for conducting transformational research.”
SGI Selected by NASA
SGI announced that NASA’s Ames Research Center has selected an SGI UV 2000 shared memory system to support more than a thousand active users around the country who are doing research for earth, space and aeronautics missions. The new Endeavour system, named in honor of the Space Shuttle Endeavour, is based on the latest Intel Xeon processor E5-4600 product family and has a total of 1,536 cores and 6TB of global shared memory. It will provide large, shared-memory capability and will enable solutions for many NASA science and engineering applications, including simulation and modeling of global ocean circulation, galaxy and planet formation, and aerodynamic design for air and space vehicles.
“A portion of our current code base requires either large memory within a node or utilizes OpenMP as the communication software between tens to hundreds of processors,” said William Thigpen, high-end computing project manager at the NAS facility. “The largest portion of Endeavour is able to meet the large shared memory requirement with 4 terabytes of addressable memory and can apply over 1,000 cores against an OpenMP application.”
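For readers unfamiliar with the programming model Thigpen describes, here is a minimal OpenMP sketch, a generic example rather than NASA code, of the kind of shared-memory parallel loop that a large UV 2000 node can spread across hundreds of cores within a single address space.

#include <omp.h>
#include <stdio.h>

int main(void) {
    long n = 100000000L;
    double sum = 0.0;

    /* Every thread works in one shared address space; OpenMP splits the
       loop iterations across however many cores the node exposes. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++) {
        sum += 1.0 / (double)i;
    }

    printf("threads available: %d, partial harmonic sum: %f\n",
           omp_get_max_threads(), sum);
    return 0;
}

Built with OpenMP support (for example, gcc -fopenmp), the same source runs unchanged on a laptop or on a large shared-memory node, which is the portability argument behind keeping that portion of the code base on OpenMP.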
3:51p
IMN’s Financing, Investing & Real Estate Development for Data Centers To Convene in NYC
The IMN’s Financing, Investing & Real Estate Development for Data Centers conference will kick off on Thursday this week, with an opening panel discussing how issues in the macroeconomy are impacting data centers.
The panel will be moderated by Tom Watts, Managing Member, Watts Capital, who will be joined by Philip Johnston, Territory Sales Manager, Extreme Networks; Eric Wells, Vice President, Data Center Services, Fidelity Investments; Rob Stevenson, Managing Director – Head of U.S. REIT Research, Macquarie; and James Breen, Senior Analyst, Internet Infrastructure Equity Research-Communications Services, William Blair.
The other sessions will cover popular topics including:
- Mergers, Private Equity & IPOs
- The President/CEO Panel: Colo Players
- Scale vs. Scalability vs. Rightsizing
- Connectivity – Creating & Tapping into New Revenue Streams
- Evaluating & Standardizing Data Center Standards & Performance Metrics
- Should you Own the Underlying Data Center Real Estate?
The event for data center owners, data center tenants, data center investors, and capital and service providers will be held at the Conrad Hotel in New York City on May 30-31, 2013. For more information, visit the IMN website.
5:14p
Savvis Opening More Data Centers Around Globe
An example of the hot aisle in a Savvis data center in the London market. (Photo: Savvis Slough Campus, Luben Solev)
To respond to growing global customer demand, data center operator and cloud hosting provider Savvis, a CenturyLink company (CTL), said Tuesday it is opening two new data centers and expanding eight existing data centers. The additional data centers will be added in European and Asian markets where Savvis already has a presence.
The Web Hosting Industry Review reported in “Savvis Expands Data Center Footprint in Nine Global Markets” that the expansion plan “brings Savvis’ total available data center space to more than 2.4 million square feet across more than 50 data centers located throughout North America, Europe and Asia.”
The company has opened new data centers in Hong Kong and London, which Savvis announced last month, as well as expanded eight existing Savvis data centers.
For more information, bookmark our company page on Savvis.
7:00p
What About Dell’s $1 Billion Cloud Buildout?
Dell’s use of modular data centers – like this unit deployed for eBay – was a key part of its plans to expand its data center network to host its own public cloud offering, an initiative which was discontinued last week. (Photo: eBay)
Dell has abandoned its plans to offer its own public cloud, shifting to a partner-focused cloud model instead. Most of the chatter has been around what this means for OpenStack. But what does this mean for Dell’s plan to invest $1 billion in data centers?
Back in 2011, Dell announced its plans to spend $1 billion on data centers to deliver its public cloud products. The company planned to build 10 data centers in 24 months, an aggressive plan by all accounts. However, with Dell dropping its direct public cloud offering, much of the infrastructure that would have populated these data centers has now disappeared.
Has Dell held strong to its data center buildout plans? If not, has that plan significantly changed since dropping public cloud? If the company has or will invest this money in infrastructure, what will it be used for? Data Center Knowledge reached out to Dell to ask how its shift in public cloud plans will impact its planned data center expansion, but the company hasn’t responded.
What We Know: Deployments in Quincy and Slough
Few public announcements have been made about Dell’s data center expansion. Here’s a look at what we know.
In late 2011, a UK data center in Slough entered production. It wasn’t a massive amount of space, consisting of 5,000 square feet divided into two sections – a raised-floor area featuring rows of cabinets using hot aisle containment and in-row cooling units, and a second section with IT capacity deployed in Dell Modular Data Centers. The Slough facility, which was developed by retrofitting an existing structure, was designed to meet Tier III reliability standards and achieve a power usage effectiveness (PUE) of about 1.5.
One of the U.S. data centers announced and built was in Quincy, Washington, where Dell purchased 80 acres of land and filed plans to build a 350,000 square foot data center. This was to be a key component of a global data center expansion to support the company’s push into cloud computing services. The first phase of the project was unveiled in February 2012, featuring 40,000 square feet of data center space.
Last year a Dell executive outlined plans to build 20 data centers in the Asia Pacific region, commencing with one in India. In 2011, CEO Michael Dell stated that the company would build a data center in Australia.
Dell’s Shift in Strategy
Dell’s plan was to target its cloud offerings at all three tiers of the cloud market, including Infrastructure as a Service (IaaS) offerings for both compute and storage, Platform as a Service (PaaS) for application development and deployment, and a SaaS-level Virtual Desktop as a Service offering atop Microsoft’s Hyper-V virtualization solution.
A few weeks ago, the company acquired Enstratius, which greatly deepened its capabilities in cloud management. Dropping public cloud makes sense considering the company’s growing play in the enterprise and on the platform level. There is a lot of competition in public cloud – AWS, Google, Microsoft, Rackspace and OpenStack all come to mind – as well as new players joining the fray every day (VMware is a recent example). By offering its own public cloud, Dell threatened to cannibalize its channel somewhat, though the company was positioning its offering as complementary to partners. Dropping these plans means there are no longer any potential conflicts of interest, and Dell can supply these partners in a completely complementary way. But it raises the question: what about the infrastructure?
The company’s shift to using modular data centers means that it might not have had to make the same capital commitments to cloud as it once pledged. The company does have some initiatives, such as Workstations, that could help fill data center space. The company still believes in the growth of the public cloud; it just isn’t supplying it directly out of its data centers anymore.
“Many Dell customers plan to expand their use of public cloud, but in order to truly reap the benefits, they want a choice of providers, flexibility and interoperability across platforms and models, the ability to compare cloud economics and workload performance, and a cohesive way to manage all of it,” said Nnamdi Orakwue, vice president, Dell Cloud, after dropping the public offering. “The partner approach offers increased value to Dell’s customers, channel partners and shareholders, as part of our comprehensive cloud strategy to deliver market-leading, end-to-end cloud solutions.”
It’s a sound strategy. However, the earmarked billion dollars still leaves us asking: what happens to the infrastructure?
7:30p
World Bank Unit Invests in Modular Infrastructure
An example of a Flexenclosure eCentre modular data center that was recently deployed in Sudan. The private equity arm of The World Bank has invested in Flexenclosure in hopes of accelerating wireless communications access in emerging markets. (Photo: Flexenclosure)
Factory-built data centers are gaining traction as a tool for bringing IT operations to emerging markets. A unit of the World Bank said today that it will invest in a provider of modular data centers in hopes of accelerating wireless phone access in parts of Asia and Africa.
International Finance Corporation (IFC), the private equity arm of the World Bank, is leading a consortium that is investing $24 million (U.S. dollars) in Flexenclosure, a Swedish company that has developed pre-fabricated data centers and power infrastructure, primarily for the telecom industry. The round also includes Flexenclosure’s existing investors, the Swedish investment funds Industrifonden and Andra AP-fonden.
The investment will support the deployment of Flexenclosure’s modular data centers, as well as its eSite on-site power systems, which can support wireless towers using a combination of solar and wind power and batteries. eSite can provide backup power in areas where the grid is unreliable, and stand-alone power in areas where grid power is unavailable and diesel generators are impractical.
Boosting Emerging Economies
IFC, which focuses its investments on boosting emerging economies and reducing poverty, sees the potential for Flexenclosure’s technology to extend the reach and reliability of communications in rural areas.
“An estimated 800,000 cellular base stations in emerging markets rely on diesel generators for their power supply,” said Andrew Bartley, IFC’s Chief Investment Officer for Telecoms, Media, and Technology. “This is a great potential market for Flexenclosure’s innovative product offering. Its growth strategy is directly aligned with IFC’s goal to improve access to mobile-phone systems for people in rural areas in emerging markets while also reducing global greenhouse-gas emissions.”
Flexenclosure, which was founded in 2001, has helped clients like Ericsson and Airtel expand networks into remote areas of Africa and Asia.
Expanding in Asia, Africa
“During the last year we have opened offices in Nigeria, Kenya, Pakistan, India, Malaysia and Dubai,” said Flexenclosure CEO David King. “Having IFC as a strategic investor will give us access to their global expertise and network, further enhancing our expansion strategy. We have an aggressive research and development program and are growing our sales operations in emerging markets.”
The new capital will be used to further develop Flexenclosure’s eSite and eCentre technologies. Here’s an overview:
- eSite is a family of energy-efficient hybrid power systems for base station sites in areas where grid power is unreliable or unavailable. The product can be configured to use solar panels and wind turbines, and can be used to reduce reliance on diesel generators, or as a “community power” system that supports not only a wireless station but also provides power for mobile phones, water pumps and schools. eSite includes software that can manage multiple power sources to achieve the most economical approach (a minimal sketch of that kind of dispatch logic follows this list).
- eCentre is a prefabricated modular data centre solution to house and power data and telecom equipment. Optimized for energy efficiency and low total cost of ownership, eCentre includes power, cooling and security.
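Flexenclosure has not published its dispatch algorithm, so the following is only a hypothetical sketch of what “choosing the most economical source” can look like: serve the load from the cheapest sources first and fall back to diesel last. The source names, capacities and costs are illustrative assumptions.

#include <stdio.h>

/* Hypothetical dispatch logic: pick the cheapest sources that can currently
   supply the site load. Names, capacities and costs are illustrative only. */
typedef struct { const char *name; double available_kw; double cost_per_kwh; } source_t;

int main(void) {
    source_t sources[] = {      /* listed in ascending cost order */
        {"solar",   4.0, 0.00},
        {"wind",    2.0, 0.00},
        {"battery", 3.0, 0.05},
        {"diesel",  6.0, 0.40},
    };
    double load_kw = 5.5, remaining = load_kw;

    for (int i = 0; i < 4 && remaining > 0.0; i++) {
        double draw = remaining < sources[i].available_kw ? remaining : sources[i].available_kw;
        printf("%-7s (%.2f $/kWh) supplies %.1f kW\n",
               sources[i].name, sources[i].cost_per_kwh, draw);
        remaining -= draw;
    }
    if (remaining > 0.0) printf("shortfall: %.1f kW\n", remaining);
    return 0;
}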
Flexenclosure customers say the system can accelerate infrastructure deployment in areas where construction is difficult. One example is provided by MTN Ghana, a mobile network with 8 million subscribers.
“We have found that building a brick and mortar data centre in this and the adjacent countries is problematic because of faltering building standards, and a containerized solution is a lot quicker,” said MTN operations manager Max Maxted. “The situation is similar with for example MTN Nigeria and we see this solution as a very positive one.”
Here’s a video that provides an overview of Flexenclosure’s eCentre enclosures: