Data Center Knowledge | News and analysis for the data center industry
Thursday, January 29th, 2015
| Time | Event |
| 11:00a |
Vantage Adds Data Center Facilities Management Services Vantage Data Centers announced the addition of Critical Facilities Management (CFM) services to its list of offerings. CFM services give customers a way to outsource day-to-day data center management expertise.
The Santa Clara, California-based provider’s data center management suite includes consulting and custom on-site support from Vantage employees. That support goes as far as shipping and receiving of equipment, landscape maintenance, and trash removal.
Vantage is the latest multi-tenant data center provider to add facilities management services. Two other recent examples are T5 Data Centers, which started a facilities management company last July, and QTS Realty, which launched a facilities management service in October. All three companies have traditional roots in wholesale colocation. Companies across this space have been looking for ways to diversify their services beyond providing space, power, and cooling to their tenants.
Vantage chief operating officer Chris Yetman said adding facilities management was a natural evolution for the company.
“This is the result of having been there ourselves, and us having been out and about in the market, having conversations,” he said. “After bringing what we outsourced back in [think maintenance that normally is done by vendor], we realized that there isn’t only a need for this, but we’re very good at this.”
QTS made similar comments about its decision. “It was a natural progression for QTS to enter the critical facility management outsourcing space as we have designed, built, owned, and operated our own mega-scale data centers over the last ten years,” said Danny Crocker, vice president of operations for QTS CFM. Crocker said reception from the marketplace had been exceptional since the October launch.
Besides data center providers themselves, there are companies that focus on critical facilities management as their core business. Giants like HP also offer CFM services.
Vantage either works with an existing team or sends its own team to the customer’s data center to manage everything on site. The team will also make suggestions for ways to optimize customers’ facilities.
Focus on Computerized Maintenance Management Systems
Vantage will work with a customer’s existing systems and tools or bring in its own, including its own Computerized Maintenance Management System. “We find many systems are often capable but not managed correctly,” Yetman said.
“We’ve restructured our CMMS (which we will make available to the customer). We link the records in the CMMS to the dependencies. Let’s say you have a CRAH unit — it knows and has the dependency on the power panel that feeds it. We take it a step further and tell customers what specific people or teams are potentially affected by maintenance. If something goes wrong, we can tell the exact impact.”
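To make the idea concrete, here is a minimal sketch (in Python) of the kind of dependency mapping Yetman describes: each asset records what feeds it and who relies on it, so the impact of planned maintenance can be traced automatically. The asset names, data structures, and example data are purely illustrative and are not Vantage’s actual CMMS.

```python
# Minimal sketch of the dependency mapping described above: each asset in the
# CMMS records what depends on it and which customers/teams it directly serves,
# so planned maintenance (or a failure) can be traced to its exact impact.
# Asset names and the graph itself are illustrative, not Vantage's actual data.

from collections import defaultdict

downstream = defaultdict(list)   # asset -> assets that depend on it
served_by = defaultdict(list)    # asset -> customers/teams served directly

def add_dependency(upstream: str, dependent: str) -> None:
    """Record that `dependent` relies on `upstream` (e.g. a CRAH on its power panel)."""
    downstream[upstream].append(dependent)

def impact_of(asset: str) -> set:
    """Walk the dependency graph and return every customer/team potentially
    affected if `asset` is taken down for maintenance or fails."""
    affected, stack = set(), [asset]
    while stack:
        node = stack.pop()
        affected.update(served_by[node])
        stack.extend(downstream[node])
    return affected

# Hypothetical example: a power panel feeds a CRAH unit, which cools a row
# of cabinets leased to two tenants.
add_dependency("panel-PP-12", "crah-07")
add_dependency("crah-07", "row-C cabinets")
served_by["row-C cabinets"] = ["Tenant A", "Tenant B ops team"]

print(impact_of("panel-PP-12"))  # prints both affected tenants
```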
Even the best systems can be subject to entropy over time, according to Yetman. To combat this, Vantage built in reporting that indicates what might be falling apart. Instead of yearly maintenance reports, it’s a weekly occurrence, so nothing ever falls out of control without notice.
The company has spent a lot more time on training and understanding the data center equipment. It currently aims to have a technician certified on each UPS model on staff on every shift, for example. “We save money by not outsourcing maintenance, and there’s job satisfaction from the crew because we’re investing in their education,” he said.
A lot of the traditional parts of running an enterprise data center are being outsourced to leverage economies of scale. It’s colocation for physical space and power, cloud for IT, and facilities management for day-to-day operations.
There are several reasons to consider outsourcing the day-to-day. Potential Vantage customers may be stuck with underperforming teams because facilities management isn’t their core competency. “A company may be more about software development than running the servers, so they don’t put a focus on it,” said Yetman. “They’re too busy putting energy in another part of the business.”
Some customers are also disappointed with existing maintenance contracts with vendors, because a vendor can’t provide this level of individual care.
More Potential for Sale-Leaseback Deals
Besides providing another revenue stream, the addition of facilities management services gives Vantage and others a possible avenue for expansion through sale-leaseback transactions.
“If we look at a facility and fall in love with the environment and underpinnings, there’s a chance we can ask to take it further off your hands,” said Yetman. “We can see if there’s opportunities to further our footprint, and maybe even give us opportunity to expand in that market; then open up more space. It would free up capital for the company [customer], and we’d run it for them. Our economies of scale would mean it would run more efficiently and we’d still have an opportunity to get a margin.” | | 4:30p |
How Data Center Operators Can Avoid Energy Price Hikes This Winter Tim Comerford is SVP of Biggins Lacy Shapiro’s energy services group and principal of Sugarloaf Associates. Joe Santo is a principal and director of business development at Premier Energy Group, LLC.
Energy consumption is one of the largest operating expenses for a data center, accounting for nearly 50 percent of total operating expenses. Due to last winter’s “polar vortex,” which caused a deep freeze in much of the U.S., many large energy consumers in unregulated markets saw their energy prices quadruple. In fact, we have seen a tremendous amount of volatility in energy prices over the last decade.
Data center operators and owners can minimize the impact of unpredictable energy markets by better understanding the markets and establishing smart energy procurement strategies. Below is background on energy pricing trends, factors likely to impact future pricing, and proactive strategies for procuring energy in an unpredictable market.
Factors Impacting Pricing
There are a number of factors impacting natural gas and electric rates, including:
1. Natural Gas Storage: In the beginning of 2014, natural gas stockpiles hit the lowest level since 2004 as a result of cold weather and winter storms. Due to the mild weather this past summer and so far this winter, natural gas storage is slightly above last year and about 260 billion cubic feet behind the five-year average.

What do these numbers mean for energy pricing? A cold winter will likely move this market higher. If we exit the 2014-2015 heating season with low natural gas storage levels as we did this past year, there will be upward pressure on the market through the 2015 season.
2. Retirement of Coal-Fired Power Plants: Natural gas generation of electricity continues to grow as coal-fired power plants are retired. This has created a permanent increase in demand for natural gas. A few key statistics are:
- Natural gas has become the fuel of choice for electric generation, especially as new EPA standards impact 1,400 coal and oil units.
- Scheduled coal plant retirements between 2013 and 2020 will result in increased natural gas generation.
- Approximately one-third of electricity in the U.S. is generated using natural gas. Another one-third is coal and the last one-third is comprised of all other (nuclear, renewable, etc.).
As coal-fired power plants are retired, the increased base load natural gas demand for electric generation will increase price sensitivity.

3. Natural Gas Exporting (Liquefied Natural Gas): In 2015-2016, large energy companies will begin exporting natural gas to Asia and Europe, where they can achieve prices roughly triple the price in the U.S. This will cause a longer-term change to the supply-demand balance. It will also begin what could be a transition from a North American natural gas market to a global natural gas market (similar to oil).
Worldwide Natural Gas Prices – Snapshot as of June 2014:
- United States: $3.80/dth
- Europe: $7.80/dth
- Asia: $14.00/dth
- South America: $15.00/dth
Where Do Prices Go From Here?
The severe winter last year caused a spike in spot prices for electric and natural gas. In January 2014, market prices for natural gas were in excess of $20/dth in many of the Northeast markets. These increased gas prices pushed electric rates higher, with customers in New Jersey and surrounding markets seeing average electric prices for January 2014 over $0.20/kWh.
Prices stabilized this summer, as the weather was mild across most of the country, and gas storage levels improved. In general, higher load factor customers in Texas are now able to fix a price for a multi-year term in the range of $0.04/kWh – $0.05/kWh. For higher load factor clients in New Jersey, this range is $0.082/kWh – $0.092/kWh.
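To put those rate ranges in perspective, here is a rough back-of-the-envelope sketch of annual energy spend for a hypothetical facility. Only the $/kWh ranges come from the figures above; the 1 MW IT load and PUE of 1.5 are assumptions chosen for illustration.

```python
# Rough annual energy-cost estimate for a hypothetical facility, using the
# fixed-price ranges quoted above. The 1 MW IT load and PUE of 1.5 are
# assumptions for illustration; only the $/kWh figures come from the article.

IT_LOAD_KW = 1000       # assumed critical IT load
PUE = 1.5               # assumed power usage effectiveness
HOURS_PER_YEAR = 8760

annual_kwh = IT_LOAD_KW * PUE * HOURS_PER_YEAR

rate_ranges = {
    "Texas (high load factor)":      (0.04, 0.05),    # $/kWh
    "New Jersey (high load factor)": (0.082, 0.092),  # $/kWh
}

for market, (low, high) in rate_ranges.items():
    print(f"{market}: ${annual_kwh * low:,.0f} - ${annual_kwh * high:,.0f} per year")
```

At this assumed scale, the difference between the Texas and New Jersey ranges alone is several hundred thousand dollars a year, which is why procurement strategy matters.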
In the coming months, weather will be a main driver of energy prices. If a cold winter results in upward pressure on natural gas prices, then electric rates will also likely increase.
Longer term, we see demand for natural gas increasing due to coal plant retirements and increased natural gas exports. This increased demand will put upward pressure on both natural gas and electric prices.
Proactive Management in an Unpredictable Market
Energy procurement should not be an annual task, or something reviewed just prior to the expiration of a supply contract. This is an ongoing process which, if managed correctly, can lead to positive bottom line results despite the extremely volatile market.
There are two important strategies that can be employed when structuring an energy supply agreement to limit one’s exposure to price run-ups or spikes:
1. A Fixed Price Agreement: This is a common strategy that provides a customer with price and budget certainty. In this case, usage becomes the only variable that needs to be monitored and managed.
2. A “Block and Index” Structure: Here, a customer can fix all or a portion of their price. The pricing can be locked in blocks or percentage levels at different times. While this requires more management and oversight, it allows a client to dollar-cost average their price, similar to what an investor would do with a stock purchase; a simple illustration follows below.
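As a simple illustration of how a block-and-index structure averages out price, consider the following sketch. All block sizes, fixed prices, and the index price are hypothetical.

```python
# Hypothetical illustration of a "block and index" structure: a customer locks
# in portions (blocks) of expected load at fixed prices at different times, and
# the remainder floats at the market index. All prices and volumes are made up.

monthly_load_kwh = 1_000_000

# (fraction of load, fixed $/kWh) for each block purchased over time
blocks = [
    (0.40, 0.045),   # 40% locked early
    (0.30, 0.050),   # 30% locked later at a higher price
]
index_price = 0.062  # spot/index $/kWh for the unhedged remainder

hedged_fraction = sum(frac for frac, _ in blocks)
cost = sum(frac * price * monthly_load_kwh for frac, price in blocks)
cost += (1 - hedged_fraction) * index_price * monthly_load_kwh

blended_rate = cost / monthly_load_kwh
print(f"Blended rate: ${blended_rate:.4f}/kWh, monthly cost: ${cost:,.0f}")
# The blended rate lands between the fixed blocks and the index -- the
# dollar-cost-averaging effect described above.
```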
Given the energy factors discussed above, plus the projected cold temperatures for February, we could be faced with rising rates. As a result, businesses need to proactively manage their energy procurement now to reduce the potential negative exposure that could be coming down the road.
 After reaching 10-year lows in 2012, natural gas prices have continued on an upward trend. We saw a pull-back in prices over the summer and in December, but a projected cold February could push prices back up. This graph shows the 12-month average future price trend for natural gas since April 2012.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 5:00p |
Seeking Efficiency: The Data Center Holy Grail Efficiency is a key factor that all data center managers seek. Whether they are squeezing out a bit of power by making server fans variable speed or making bigger differences at the power and cabinet level, efficiency is often the “holy grail” for the data center industry.
All types of data center professionals — from academic, government, enterprise or service provider sectors — can relate to the relentless pursuit of more efficiency. As demands for reduced costs continue, one way to shave expensive power bills is to “do more with less” or become more efficient at doing the same amount of work and storing the same amount of data.
In advance of the spring Data Center World Global Conference, Data Center Knowledge had the opportunity to discuss efficiency issues with two conference speakers: Scott Milliken, computer facility manager at Oak Ridge National Lab (ORNL), and Chris Crosby, founder and CEO of Compass Datacenters.
ORNL is a multi-program science and technology national laboratory managed for the U.S. Department of Energy (DOE). Compass Datacenters is known for building natural disaster-resistant, Tier III-certified, LEED Gold, dedicated data centers where customers need them.
Moving Toward Efficiency
“I plan to talk about process, policy and design in the data center,” said Milliken, adding that he will discuss what was changed to address the pain points at the lab. Although some of the world’s fastest supercomputers live at Oak Ridge National Lab, they don’t put pressure on Milliken and his team. Rather, the legacy commodity hardware supported in the same facility, which takes up 50 percent of the floor space, is what has challenged the data center’s progress toward becoming more energy efficient.
“When I came to ORNL five years ago, it had the number one supercomputer in the world. It was impressive. The supercomputer comes with engineering drawings where the piping goes, etc.,” he said. “It was a top-notch managed facility, with focus on supercomputing, but supercomputing is only half of our floor space. The rest of the facility has typical commodity equipment.”
Therein lies the rub. ORNL had disparate commodity systems that were not managed as a whole. “We have commodity equipment owned by different people,” he added. “So it has organically grown, like the Wild, Wild West, with no standards or documentation.” He inherited a mix of commodity equipment owned by different people, some by administration, some by different departments, including research departments with their own workgroups and clusters. To achieve more efficiency, the data center team instituted standards, processes, and documentation.
Milliken explained that the “Wild West” situation is a common phenomenon, but the issues may be unique for each location. “Previously, we didn’t have an overall management strategy. We didn’t even have the same size cabinets. We had different cabinets, different power connections to everything,” he said.
 Scott Milliken, Oak Ridge National Lab.
Moving people away from a “Bring Your Own Equipment” approach to meeting specs called for a bit of strategy. “We said, if it meets our specs, then it’s free to host the equipment here. If it doesn’t, there will be a cost. People ask, ‘What will it cost?’ Then they quickly ask, ‘What do I need to order?’”
Milliken added that because his users are doing research or supporting researchers, they don’t necessarily have particular equipment requirements; they just need compute cycles and space for storage.
“Our biggest pain point was electrical. That was the number one issue. There was a single-source dependency, so you had to schedule downtime,” he said. After electrical, the data center also had networking and cooling challenges to face. “We changed the point of view from an apartment manager to a business hotel manager. In a business hotel, you don’t bring your own furniture. You just bring your luggage and check in.”
Since making the changes, the facility’s power usage effectiveness (PUE) has improved dramatically, according to Milliken. “We have seen a 30 percent increase in efficiency. In the new space, the PUE is 1.12 or 1.13, compared with 1.4 or 1.5 in the old space. As we move more equipment to the new space, it gets more efficient.”
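The scale of that improvement can be sanity-checked with a quick calculation. The PUE figures are Milliken’s; the 1 MW IT load and $0.07/kWh rate are assumptions added here for illustration.

```python
# Quick sanity check on the PUE improvement cited above. The PUE values are
# from the article; the 1 MW IT load and $0.07/kWh rate are assumed for scale.

IT_LOAD_KW = 1000
HOURS = 8760
RATE = 0.07            # assumed $/kWh

old_pue, new_pue = 1.45, 1.12   # midpoints of the ranges quoted above

old_total_kwh = IT_LOAD_KW * old_pue * HOURS
new_total_kwh = IT_LOAD_KW * new_pue * HOURS

saved_kwh = old_total_kwh - new_total_kwh
print(f"Energy avoided: {saved_kwh:,.0f} kWh/year "
      f"(~${saved_kwh * RATE:,.0f} at ${RATE}/kWh)")
# Facility overhead drops from 0.45 W to 0.12 W per watt of IT load --
# roughly a 73% cut in overhead energy under these assumptions.
```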
Cross Purposes: The Need to Come Out of Silos
Crosby, who is also giving two sessions at Data Center World, will present one session on efficiency, where he plans to bring out the issues impeding increased efficiency, such as “siloed” thinking by different areas like IT and facilities.
“What the server industry has done around power efficiency has been the opposite of what’s been done on the data center side,” he said. “As we get more and more efficient on cooling, by raising temps and having a larger delta (difference between intake and outlet temps), this facilitates the mechanical cooling equipment being more efficient. At the same time, servers are now being designed with variable speed fans, and there is less delta between the inlet and outlet temperature.”
These efforts, each of which can defeat the other’s move toward efficiency, are the “unintended consequences of operating in a vacuum,” said Crosby. “Neither understands what the other is doing.”
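The conflict can be made concrete with the standard relationship between heat load, airflow, and delta-T: for a fixed heat load, the airflow that fans and the cooling plant must move is inversely proportional to the temperature rise across the equipment. The sketch below uses generic air properties and an assumed per-cabinet load; it is not tied to any vendor’s equipment.

```python
# Generic physics behind the IT/facilities tension: for a fixed heat load,
# required airflow is inversely proportional to the air temperature rise
# (delta-T) across the equipment. A larger delta-T lets the cooling plant move
# less air and run more efficiently; when server fans hold delta-T low, airflow
# (and fan/cooling energy) has to go up. No vendor-specific data assumed.

RHO_AIR = 1.2        # kg/m^3, approximate air density
CP_AIR = 1005        # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88

def airflow_cfm(heat_load_kw: float, delta_t_c: float) -> float:
    """Airflow needed to remove heat_load_kw at a given delta-T (degrees C)."""
    m3_per_s = (heat_load_kw * 1000) / (RHO_AIR * CP_AIR * delta_t_c)
    return m3_per_s * M3S_TO_CFM

HEAT_LOAD_KW = 10    # assumed per-cabinet heat load, for illustration only
for dt in (8, 12, 16, 20):
    print(f"delta-T {dt:2d} C -> {airflow_cfm(HEAT_LOAD_KW, dt):,.0f} CFM")
# Halving delta-T doubles the airflow required for the same heat load.
```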
Crosby added, “I am taking the long view. Most data centers are using a mix of legacy equipment and new equipment. They are not following a technology fad.” Unlike the examples frequently cited of Facebook or eBay, not every data center has the benefit of a completely homogeneous environment.
 Chris Crosby, founder and CEO, Compass Datacenters.
Data center managers have to be careful when applying the lessons of others, Crosby said. “They have to be sure the others are following the assumptions they are following.”
For example, with open hardware, where there is no case on the server equipment, the assumptions at the beginning were specific to the facility and team of workers. “The origin of this was Facebook had such scale that it didn’t make sense for techs to remove all the cases to work on the servers,” Crosby noted. “It was not about server efficiency at all.”
In another session, Crosby is going to discuss the worker safety implications of high-voltage electrical equipment. The issue of arc flash, or electrical explosions that can result in damage, injury, or death, is a serious one for the data center industry. For more details, see this DCK post – Crosby: Weak Commissioning Poses Risks to Reliability, Safety
To learn more about designing for, and increasing, efficiencies in the data center, attend the sessions by Crosby and Milliken at the spring Data Center World Global Conference in Las Vegas. Learn more and register at the Data Center World website. | | 6:44p |
Cisco Streamlines Cloud Management Software Purchasing Cisco launched a new enterprise cloud suite, branded ONE, that helps customers automate, provision, and run applications in private and hybrid cloud environments. The vendor’s ONE strategy is a new approach to software licensing and bundling.
The changes are all meant to streamline buying and receiving Cisco’s cloud management software for customers.
“Unlike Cisco’s previous focus on selling cloud management software as part of complex and expensive services engagements, the Enterprise Cloud Suite is designed to appeal to a broad range of customers that want pre-packaged and easy to deploy hybrid and private cloud management software solutions,” IDC Analyst Mary Johnston Turner said via email.
Management is simplified through modular automation, meaning different automation features can be added one by one, in bite-size chunks.
The suite provides consistent, policy-based infrastructure instances optimized for application performance across compute, network, and storage. It has out-of-the-box templates for Windows or Linux-based application stacks and provides tools for developers to design and deploy custom stacks.
With the Cisco ONE software strategy, Cisco bundles hundreds of formerly separate software offerings into three suites: Data Center, WAN, and Access. The company has also improved licensing portability. | | 7:14p |
Quantum Adds AWS as Archive Storage Tier Niche storage solutions vendor Quantum Storage Systems has added cloud archiving and backup on Amazon Web Services as an option, making the public cloud a possible storage tier on its Q-Cloud Archive and Vault products.
Quantum provides primary file storage, data protection, and archiving tailored to specific industries, such as media and entertainment or geospatial imaging. It offers scale-out storage and application-based workflows for large datasets and high-performance requirements.
Storage tiering means using the right kind of storage for the right kind of need. Adding AWS as a disaster recovery and archival tier shifts the capital expense of buying storage capacity to the operational expense of renting it from the cloud provider.
A media company, for example, needs specific workflows to work with huge amounts of streaming-type data, such as for editing a movie or preparing a film for multiple formats. It needs high data throughput, high performance, and a very structured workflow. When the work is done, however, there is no need to store the finished product on expensive high-performance storage systems. A surveillance company records a ton of video but doesn’t necessarily watch all of that video.
“For each of these workflows, there’s an appropriate storage tier,” said Dave Fredrick, Quantum’s senior director of product marketing. “For acquisition and editing, they need the fastest storage available to allow editors to look simultaneously. Customers pay a premium price to get that kind of performance. But when they get to preservation stage, they want a low cost.”
Setting up AWS as a cloud archiving tier on their own would mean building their own data protection and workflows from scratch to shape AWS for their needs. Quantum is offering to do the heavy lifting for them.
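The capex-to-opex trade-off behind that pitch can be sketched with a toy cost model. Every number below is hypothetical, not Quantum or AWS pricing; the point is only the shape of the comparison: owned archive capacity is a large up-front cost, while a cloud archive tier is a recurring one.

```python
# Simplified capex-vs-opex comparison of owning archive capacity versus renting
# a cloud archive tier. All figures are hypothetical placeholders, not Quantum
# or AWS pricing; only the shape of the trade-off is the point.

ARCHIVE_TB = 500

# Hypothetical owned archive: up-front hardware plus yearly support/power/space
owned_upfront = ARCHIVE_TB * 120        # $/TB purchase price (assumed)
owned_yearly = ARCHIVE_TB * 25          # $/TB/year operating cost (assumed)

# Hypothetical cloud archive tier: purely recurring per-TB-per-month cost
cloud_per_tb_month = 2.50               # $/TB/month (assumed)

for year in range(1, 6):
    owned_total = owned_upfront + owned_yearly * year
    cloud_total = ARCHIVE_TB * cloud_per_tb_month * 12 * year
    print(f"Year {year}: owned ${owned_total:,.0f} vs cloud ${cloud_total:,.0f}")
```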
“Getting to the cloud is not as easy as customers think,” said Fredrick. “A user-friendly ‘Dropbox experience’ is not at all what happens.”
The company has three main products. Q-Cloud Archive and Vault are focused on scale-out storage. StorNext is for high streaming file performance. It is a file system that creates a global namespace that traverses disparate storage resources (primary, extended, tape, and now also cloud).
StorNext keeps track of the original file in the original location, has a policy engine to create business rules and take advantage of storage tiering, and is multi-protocol (no need to set up transfers or gateways). | | 8:21p |
Cannon, Stulz Collab on Modular Data Center for Government Client Cannon Technologies, a U.K. data center vendor, has modified one of its modular data center products for a U.K. government customer, integrating it with a cooling system by Stulz, a German data center cooling supplier.
Modular data centers are used to deploy data center capacity quickly. Some users, such as military organizations, also want to be able to move a data center from place to place, which is the use case that made the modification necessary.
Modular data center solutions are prefabricated at a factory and shipped to the customer’s site for quick installation, but many of them, including Cannon’s, need to be plugged into a chiller plant to get chilled water for their cooling systems. The government customer wanted a chiller-less data center, however.
Stulz designed its Wall-Air Evolution direct expansion cooling unit for telecommunications containers, which are self-sufficient boxes of servers placed in remote areas. This made it a good fit for Cannon’s solution.
The combined solution is a data center that consists of multiple modules, and each module has a Wall-Air Evolution unit that can be taken off and placed inside for shipping and storage.
A single unit can provide up to 15kW of cooling capacity, and 16 can provide more than 200kW in N+1 configuration.
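For context, the N+1 arithmetic behind those figures works out as follows; the only inputs are the unit capacity and unit count quoted above.

```python
# N+1 arithmetic behind the capacity figures above: with 16 units of 15 kW
# each, one unit is held in reserve, so usable capacity is (units - 1) * 15 kW.

UNIT_KW = 15
UNITS_INSTALLED = 16

usable_kw = (UNITS_INSTALLED - 1) * UNIT_KW   # one redundant unit
print(f"Installed: {UNITS_INSTALLED * UNIT_KW} kW, usable (N+1): {usable_kw} kW")
# -> Installed: 240 kW, usable: 225 kW, consistent with "more than 200 kW".
```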
The Stulz unit pushes cold air at a low flow to the floor of the module, creating a “pool” of cold air, which server fans pull through the server. When outside air is cold enough, the system reduces reliance on DX cooling.
The modified version of the modular data center is available to other customers as well.
“Having exceeded all expectations, the Wall-Air Evolution units are now being deployed worldwide in the TMDC [Transportable Modular Data Center] and T4 Data Campus modular data center systems,” Mark Awdas, head of engineering at Cannon, said in a statement. | | 10:57p |
VMware and Google Join Forces in Fight for Enterprise Cloud Market Share VMware is making four Google cloud services available on its vCloud Air hybrid cloud. The two are competitors to a certain extent, but the deal, announced Thursday, puts them both in better position to compete for enterprise cloud market share with Amazon Web Services and Microsoft.
Unlike Google (and AWS), VMware has a huge presence in enterprise data centers — something it has been aiming to leverage by providing cloud services that integrate with customers’ existing in-house environments. Microsoft is also leveraging its ubiquity in corporate server farms to push its Azure cloud services to enterprise users.
With a joint offering, Google’s cloud has a foot in the door with VMware’s install base, while VMware has the benefit of the massive scale of Google’s data center infrastructure, making for a strong enterprise cloud play.
The deal also fleshes out the range of services in VMware’s cloud portfolio. Its hybrid cloud has only been around since 2013, so the company hasn’t had the time to build out its feature set to catch up with the market. Since price wars among the giants have made raw cloud infrastructure services a commodity, having a rich feature set is important to stay relevant in the space.
VMware will sell Google’s cloud storage, analytics, database service, and cloud domain name system for routing users to Internet applications. All will be available in the first half of this year. Pricing has not been disclosed, and more services are expected to be added in the future.
- Google Cloud Storage: low-cost object storage service
- BigQuery: real-time big data analytics service for business intelligence
- Google Cloud Datastore: schema-less, document-based NoSQL database that scales automatically
- Google Cloud DNS: low-latency DNS service
“Our collaboration will provide customers with a unique hybrid solution that combines the power and efficiencies of VMware virtualization and the hyperscale of Google Cloud Platform,” Murali Sitaram, managing director of global partner strategy and alliances at Google, said in a statement. “As a result of this agreement, enterprise customers will be able to combine their VMware cloud environments with the security, scalability, and price performance of Google’s public cloud, built on the same infrastructure that allows Google to return billions of search results in milliseconds.” |
|