Data Center Knowledge | News and analysis for the data center industry
Thursday, June 4th, 2015
Facebook to Build Third Data Center in Iowa
In one of a string of mega data center announcements by web giants this week, Facebook said it has made plans to build a third data center on its Altoona, Iowa, campus. The social network’s first data center in Altoona is live, and the second one is currently under construction.
Such rapid data center capacity expansion illustrates how quickly Facebook’s user base and the amount of content its users upload are growing. The number of people and businesses consuming online services in general is growing rapidly, and companies that provide those services continue building data centers to make sure they can keep up with demand.
Earlier this week Google announced plans to build new data centers in the Atlanta metro and in Singapore, and Microsoft revealed plans to build its first two Azure cloud data centers in Canada. Also this week, Amazon and local officials in Ohio made public the company’s plans to build its next data center there.
One of the things all four of these companies have in common is they are all referred to as “web-scale” or “hyperscale” data center owners and operators. They build massive data centers optimized as much as possible to run their specific applications, from servers and network switches down to electrical infrastructure and building design. As a result, they benefit greatly from efficiency savings gained from optimization at such enormous scale.
Construction of the third Facebook data center in Altoona is expected to start this summer, a company spokeswoman said via email.
The company chose to expand in Iowa and not at two of its other data center campuses in the U.S. – it has one in Prineville, Oregon, and another one in Forest City, North Carolina – for a number of reasons. They include a “shovel-ready” site, access to fiber and power, a strong talent pool for construction and operations staff, access to wind power, and support from local officials, she explained.
This will be the first Facebook data center campus to have three full-size data centers. The company has also built smaller, stripped-down data centers for “cold storage” in addition to its full-size facilities at the Oregon and North Carolina campuses.
Facebook anticipates that building three in Altoona will come online in late 2016, according to the spokeswoman. “Like we’ve done at our other data center sites, we will expand as business needs dictate,” she said.
The company announced the launch of its first Altoona data center just last November. Facebook has contracted for about 140 megawatts of power in a long-term power purchase agreement with a nearby Wellsburg wind farm. The company uses wind power to offset carbon emissions associated with power consumption of its data center campus.
The size and design of the third building have not been finalized yet, according to the spokeswoman, but the power purchase agreement will provide enough capacity to cover all three Facebook data centers at the site. “This wind farm adds 140 MW of new renewable energy to the grid and is enough to match our projected load needs,” she said.
Arkin IT Monitoring Tool Understands Plain English
Fresh off raising another $15 million in funding, Arkin this week announced general availability of a tool that enables IT administrators to launch queries in English to discover what components are actually running inside their data center.
Deployable on-premises or invoked as a Software-as-a-Service application, the Arkin Visibility Platform makes use of a Google-style search engine to make it possible to discover all the virtual and physical infrastructure elements of a data center, explains Mahesh Kumar, the company’s head of marketing.
As an IT monitoring tool, it enables IT administrators to more easily collaborate across increasingly complex software-defined data center environments.
Rather than requiring a specialist to monitor IT environments, the tool is designed to make it possible for the average IT administrator to comprehend IT environments that get more complex with each passing day.
“IT operations have not been able to keep pace with modern IT environments,” says Kumar. “Our approach lets IT administrators create dashboards on the fly.”
That capability, says Kumar, is provided by an analytics engine embedded within the Arkin Visibility Platform. Admins can launch multiple types of queries once the overall environment is indexed.
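Arkin has not published its query syntax in this article, so the following is only a rough sketch of the general pattern it describes: index the environment once, then answer search-style questions against that index. The inventory fields and query terms below are invented for illustration and are not Arkin's actual data model or API.

```python
# Toy index-and-search over an infrastructure inventory, mimicking the
# "Google-style" query idea. All field names and values are hypothetical.
from typing import Dict, List

INVENTORY: List[Dict] = [
    {"type": "vm", "name": "web-01", "host": "esx-01", "cpu_pct": 92},
    {"type": "vm", "name": "db-01", "host": "esx-02", "cpu_pct": 35},
    {"type": "switch", "name": "tor-7", "host": "-", "cpu_pct": 12},
]

def search(query: str, inventory: List[Dict]) -> List[Dict]:
    """Return items whose field values contain every term in the query."""
    terms = query.lower().split()
    matches = []
    for item in inventory:
        haystack = " ".join(str(value).lower() for value in item.values())
        if all(term in haystack for term in terms):
            matches.append(item)
    return matches

# A plain-language-style lookup: which VMs live on host esx-01?
print(search("vm esx-01", INVENTORY))
```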
Unlike the makers of legacy IT monitoring tools, the company’s founders had the luxury of designing a tool from the ground up to be more accessible to the average IT operations manager.
The visibility provided by the Arkin IT monitoring tool plays into a debate over the level of sophistication required to manage a modern data center environment. As data centers become more instrumented than ever, IT administrators have more access to machine data than ever before.
If the process of correlating all that data can be simplified to the point where the average admin can discover the root cause of an issue, then, in theory, IT organizations should need fewer server, storage, and networking specialists.
The one thing that is clear is that most IT organizations can’t afford to keep throwing labor at IT environments that become more complex to manage as they increasingly scale. As such, there’s no doubt that many of the processes will need to become increasingly automated.
The only challenge, of course, is that IT organizations can’t automate what they don’t actually see or know about in the first place.
Best Practices for Data Protection and Recovery in the Cloud
David Zimmerman is the CEO of LC Technology International.
Protecting vital company information with a formal plan often comes at a high cost, but not having one could result in crippling data loss, lost trust from customers or partners, or reduced revenue due to a stoppage in business.
In recent years, much of the talk around data management has shifted toward the cloud as a low expense and highly scalable solution that can be accessed via any internet connection. However, there are some serious limitations to protecting data in the cloud:
- Managing files is your responsibility. Cloud providers typically won’t be much help if there is a glitch and you tell them, “We put our accounting files in this folder, and now they’re gone.” Even if you certainly did place them in the right folder, you likely don’t have any recourse against the cloud provider. It’s your responsibility to manage the files properly and ensure you have backups.
- Public clouds are multi-tenant, which means if your “neighbor” suffers a security breach, then your data can possibly be exposed. Public clouds are cheap and offer immediate scalability, but their shared component means very sensitive data should be kept in-house.
- The cloud is, of course, tied to internet connectivity. If you have an ISP outage, then you can’t access your cloud-based data. If some of your information is absolutely mission critical, then ensure you have physical media backups to help you run the business in the case of poor internet connections.
- Some cloud storage companies are simply not reputable. The market has evolved and pricing continues to drop, so there is not much of a benefit in going with a lesser known provider. Choose one of the big names in cloud storage for the best safeguards and customer service.
Knowing the cloud’s limitations, how should companies go about protecting their most valuable assets? Here are several best practices that can help brands manage and store their data (no matter where it’s held):
- Create a detailed protection plan. Similar to any marketing or sales endeavor, you need a plan in order to be successful. Data protection is no different and requires a set plan that offers step-by-step instructions. A plan also instills accountability, as different staff members can be given their own tasks which contribute to the plan’s overall success. A sound plan takes time to develop, but it’s well worth it. Be sure it includes the schedule for data backups, who is responsible for managing physical or cloud-based assets, and who will communicate to customers if data is lost.
- Handle devices carefully. Hard drives and especially Secure Digital (SD) cards should be handled with care. These portable storage media might be used by employees in the field, or by those who capture a considerable amount of photo or video content. SD cards can be very sensitive and should not be used as long-term storage options.
- Backup the backups. Redundancy is a simple yet effective tactic to protect your data. Storage is cheap. Whether it’s a 2 TB hard drive or storage through Google or Amazon, you can keep a massive amount of information at minimal cost. Given the low cost barrier, it’s wise to have multiple layers of backups for maximum protection, such as a private cloud along with some on-premises backups (see the sketch after this list).
- Check the laws and adjust access accordingly. Many businesses need to hold to higher standards of data protection due to privacy and security laws. Make sure all of the company’s regulatory and legal requirements are exceeded in order to prevent disastrous lawsuits or fines. In addition, you want to restrict access to information among staff members. Not every employee needs to access sensitive customer or partner data. Smart access controls (with redundancies to cover absenteeism or resignations) are vital for reducing the incidence of breaches or the risk of data exposure by disgruntled employees.
- Centralize data management for better security and simplicity. Today’s businesses are pulling in data from many different sources such as social sites, customer service, email, mobile marketing, and several others. It’s important to assess all of these sources and then put in place centralized management, which helps eliminate duplicated efforts. Staff will appreciate being able to grab info from one source, especially marketing departments that might uncover unexpected results from these combined data points.
- Analyze your metrics and test the plan. Sales managers review performance after a big sales campaign, and IT managers should do the same with data protection. The formal protection plan should include various benchmarks and goals as well as data that can be reviewed on a schedule. The plan shouldn’t be set in stone but should continually evolve based on the latest information. Institute procedures to track the trail that data follows to identify any breaches in protocol or areas for improvement. Testing the plan is another cost/benefit analysis where companies should look at the time spent testing versus the detriment of not having access to data.
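To make the "backup the backups" and plan-testing advice above concrete, here is a minimal sketch of a layered backup with a verification pass. The file name, directory paths, and the use of Python are illustrative assumptions, not part of the author's recommendations.

```python
# Minimal illustration with hypothetical paths: copy a file to two independent
# backup locations and verify each copy by checksum as a small "test the plan."
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(source: Path, targets: list) -> bool:
    """Copy source into every target directory and confirm the copies match."""
    original = sha256(source)
    for target_dir in targets:
        target_dir.mkdir(parents=True, exist_ok=True)
        copy = target_dir / source.name
        shutil.copy2(source, copy)
        if sha256(copy) != original:
            print(f"Backup at {copy} failed verification")
            return False
    return True

# Stand-in file plus hypothetical layers: a local disk and a folder synced
# to a private cloud.
Path("accounting.db").write_text("example data")
ok = backup_and_verify(Path("accounting.db"),
                       [Path("backups/local"), Path("backups/cloud_sync")])
print("All backup copies verified" if ok else "Backup verification failed")
```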
Recovering Lost Data
A business that puts in place a great plan while using a mix of private clouds and on-premises storage can still find that it needs to recover lost data. Perhaps an employee dropped a crucial hard drive, or hackers accessed a forgotten Dropbox account; either way, data can still be lost or compromised. However, you can take steps to ensure that information can be retrieved and operations returned to normal quickly. Just know that not all recovery tools are created equal.
Steer clear of any free data recovery utilities. There are many software programs that promise to extract lost data from USB or hard drives. These might work, but they are very risky due to the threat of malware and they very often have no customer support. Spending a small amount of money with a reputable recovery firm is the best option to retrieve data without any hiccups. You want a firm that is well reviewed, will gladly offer references, and offers technical reasons why their tactics and programs are industry leading.
Companies without formal plans for disaster recovery and protection are placing themselves at risk for legal problems, business interruptions, and perhaps the loss of patented information. By following data protection best practices that involve centralized management and smart storage choices, companies can greatly lower the odds of data loss and focus on more revenue-generating opportunities.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.
REIT Buys Phoenix Data Center Leased to American Express
Griffin Capital, a publicly registered non-traded real estate investment trust, has acquired a Phoenix data center and office building for $91.5 million. The property is fully occupied by a single tenant: American Express Travel Related Services Company, a subsidiary of American Express.
Griffin and its Essential Asset REIT II (the first REIT offering is now closed) like fully leased properties, including data centers, offices, and healthcare assets. Fully leased buildings provide ongoing revenue and require relatively little additional investment. About 70 percent of Griffin Capital’s acquired assets are fully leased.
Last year, Griffin acquired 80 percent of a Digital Realty data center in Ashburn, Virginia. In 2012 it acquired a three-building, 155,000-square-foot office and data center campus fully leased to AT&T Wireless. It bought a Verizon data center in New Jersey in 2013. All three properties were fully leased at the time of purchase.
The Phoenix data center is 300,000 square feet across three stories and the office building is over 200,000 square feet. The property sits on a 28-acre site and is part of the American Express Beardsley Campus, developed as a build-to-suit for the tenant in the 1980s.
“We are pleased to acquire this critical operating asset, whereby the tenant has continued to demonstrate a meaningful financial commitment over several decades,” said Louis Sohn, Griffin Capital’s director of Acquisitions, in a release.
Institutional real estate investment manager American Realty Advisors sold the property on behalf of a client.
In related news, Carter Validus also recently made the first data center acquisition in its REIT II offering, acquiring Online Tech’s Indianapolis data center for $7.5 million. The facility is 43,000 square feet supported by 3 megawatts.
Online Tech, which acquired the data center just last year, sits at a perfect crossroads with Carter Validus and other similar REITs: it’s a data center provider with a strong focus on the healthcare vertical, like a mission-critical Russian nesting doll.
HP to Open IT Infrastructure Management APIs
Taking a big step toward fostering the development of an open software-defined data center, HP this week announced it will publish open application programming interfaces (APIs) that anyone can use to program its infrastructure. The company made the announcement at the annual HP Discover conference in Las Vegas.
Paul Miller, vice president of converged application systems at HP, says the HP Composable Infrastructure API is actually only phase one of the larger Project Synergy effort the company kicked off last year. Later this year HP will publish open APIs for its entire IT management software stack.
“This phase is really about infrastructure as code,” says Miller. “Via a single line of code IT organizations will be able to provision HP infrastructure.”
To support this effort HP this week also announced an HP Composable Infrastructure Partner Program, which counts Chef Software, Docker, Puppet Labs, Ansible, VMware, and Schneider Electric among its first members.
The goal, says Miller, is to make use of open APIs that IT organizations can use to “compose” infrastructure from any number of HP or third-party management tools. To facilitate that process HP will make available software development kits (SDKs) for its APIs in July.
Upgrades Beef Up Data Center Management Software
At the core of that effort is HP OneView, the management software that HP relies on to unify the management of its infrastructure. Via the addition of REST APIs in the version 2.0 release of OneView rolled out this week, Miller says IT organizations can now essentially program HP infrastructure.
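The article does not spell out the API surface, so the snippet below is only a hedged sketch of what "programming infrastructure" through a REST interface typically looks like. The appliance address, authentication header, endpoint path, and payload fields are hypothetical placeholders, not documented OneView calls.

```python
# Hypothetical sketch of driving an infrastructure-management REST API.
# The URL, endpoint path, headers, and payload are illustrative placeholders.
import json
import urllib.request

APPLIANCE = "https://oneview.example.com"  # placeholder appliance address
TOKEN = "example-session-token"            # placeholder session token

def create_server_profile(name: str, template_uri: str) -> dict:
    """POST a minimal server-profile request and return the parsed reply."""
    payload = json.dumps({"name": name, "templateUri": template_uri}).encode()
    request = urllib.request.Request(
        f"{APPLIANCE}/rest/server-profiles",  # illustrative endpoint path
        data=payload,
        headers={"Content-Type": "application/json", "Auth": TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Example call (works only against a real, reachable management appliance):
# profile = create_server_profile("web-node-01", "/rest/templates/example-id")
```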
The latest release of OneView automatically updates itself in data center environments consisting of equipment sold by HP. The issue that most IT management platforms have is that it’s difficult for them to keep track of ongoing changes to the data center environments.
“The issue is what do you do on day two after the IT management platform is deployed,” says Miller. “With automatic updates it’s now possible to keep up to speed with those changes.”
OneView is designed to unify processes, user interfaces, and APIs across HP server, storage, and HP Virtual Connect networking devices.
With this release HP is adding more server profile templates to make it easy to define firmware and driver baselines as well as server, LAN, and SAN settings in one place, all of which can be updated multiple times as the data center environment evolves.
In addition, those templates can be migrated between data centers and used to recover workloads across server platform types, configurations, and generations.
OneView 2.0 also delivers additional automation, proactive monitoring, and guidance for storage area network (SAN) administrators, along with support for storage devices using Fibre Channel over Ethernet (FCoE) connections. It now proactively identifies and alerts administrators to zoning errors, broken paths, and orphaned volumes, in addition to making configuration reports available.
HP also announced that OneView has been integrated with HP Virtualization Performance Viewer (vPV) to make it easier to plan capacity requirements, understand the impact of maintenance operations, and mitigate configuration risks by detecting how virtualization clusters are striped across HP BladeSystem enclosures.
IBM Acquires Managed Private OpenStack Cloud Startup Blue Box
IBM this week acquired Blue Box, a managed private cloud provider built on OpenStack. Financial terms were not disclosed.
The Seattle-based cloud provider simplifies private cloud for enterprises by offering it as a managed service. Its turnkey private clouds are hosted in customers’ data centers but managed by Blue Box, similar to Cisco’s Metacloud. Blue Box gives IBM capabilities to deliver public cloud-like experience within a data center of the client’s choice.
Blue Box supports managed infrastructure services across hybrid cloud environments and IBM’s Platform-as-a-Service called Bluemix. Customers gain simplified and consistent access to infrastructure in whatever setup they want (local, dedicated, public cloud, etc.). It also provides a single management tool for OpenStack-based private clouds regardless of where they reside.
IBM will continue to support Blue Box clients and will further develop the company’s technology. Blue Box clients may also leverage other IBM services, while IBM clients now gain a new private cloud option.
“Together, we will deliver the technology and products businesses need to give their application developers an agile, responsive infrastructure across public and private clouds,” said Blue Box Founder and CTO Jesse Proudman in a press release. “This acquisition signals the beginning of new OpenStack options delivered by IBM. Now is the time to arm customers with more efficient development, delivery and lower cost solutions than they’ve seen thus far in the market.”
If a company does something well when it comes to private OpenStack clouds, chances are a big technology company will scoop it up. The acquisition is the latest example of consolidation and shake-up in the private OpenStack cloud space, which is becoming less of its own space and more a part of wider portfolios as the tech giants build out their cloud offerings.
Cisco acquired Piston Cloud Computing this week and Metacloud late last year. After private OpenStack cloud provider Nebula went defunct, its engineers quickly found a home at Oracle.
Enterprises often want different types of clouds – secure, managed private cloud for some workloads, unmanaged and/or public for others. Providers like Blue Box, while innovative, don’t offer the full spectrum. It often means an enterprise either has to seek out several vendors and relationships or turn to a service provider or technology vendor with diverse offerings or partnerships in place to meet needs.
Gartner forecasted that over 70 percent of enterprises would be pursuing a hybrid strategy this year in a Gartner Data Conference Poll presented in October.
IBM continues to stick to its game plan regarding cloud, and it appears to be working. Its cloud revenue across public, private, and hybrid environments was $7.7 billion in the twelve months ending in March.
This acquisition, like other recent cloud moves, is a bid to help customers easily put data in whatever setup they want and migrate between setups just as easily.
HP Partners With Arista on Data Center Switches
HP has struck a partnership with data center network switch vendor Arista Networks, the companies announced at this week’s HP Discover conference in Las Vegas.
Collectively, HP and Arista likely expect to be able to squeeze rivals such as Cisco and Juniper Networks that have traditionally dominated the high end of the data center switch market. The degree to which HP and Arista can actually leverage this alliance in the field remains to be seen.
Paul Miller, vice president of converged application systems at HP, says the alliance with Arista is specifically focused on high-end data center environments where network latency needs to be as low as possible. In other data center environments, Miller says, HP will continue to emphasize its own set of complementary top-of-rack data center switches.
“The line between web-scale and the high end of the enterprise is a little blurry,” says Miller. “But in traditional enterprise we’ll continue to focus [on] selling HP switches.”
Longer-term, Miller says, HP will work to converge the programmable environment that Arista has created for its Ethernet switches with the OpenStack management framework HP is using to create a software-defined data center (SDDC) environment.
The alliance with Arista is intended to make HP a more appealing alternative in web-scale data center environments and data centers managed by cloud service providers that have embraced high-performance Arista Ethernet switches. Such data center operators have generally eschewed commercial servers from vendors such as HP in favor of white boxes, and increasingly do the same with data center switches.
In fact, HP last year formed a joint venture with Foxconn to build custom servers specifically for this market. The challenge is that many web-scale companies prefer to not only build their own servers, but also rely on white-box switches running open source operating systems.
HP also has significant presence in high performance computing environments and the telecommunications sector, both of which could be beneficial for Arista.
At present, Arista has over 3,200 customers and is expected to soon achieve double-digit market share in a data center switch market that is growing at 15 percent a year.
Involta Buys Data Recovery Services and its Ohio Data Centers
Iowa-based Involta has acquired the majority of the assets of Data Recovery Services, a data center services provider in Ohio and Pennsylvania. Financial terms were not disclosed, and the deal is expected to close by the end of the month.
Involta offers colocation as well as more hands-on, workforce-intensive managed services and consulting. The acquisition is a managed services roll-up. The deal doubles Involta’s headcount to 200 and “positions Involta as a powerhouse in the Ohio market,” said Carl Gordulic, DRS president and COO, in a release.
The acquisition includes three Ohio data centers, two acquired in Youngstown and one under a management agreement in Columbus. Youngstown is strategically located between Cleveland and Pittsburgh, Pennsylvania.
Involta also acquired leased data center space in a Pittsburgh facility, as well as 5,500 miles of fiber across Ohio and western Pennsylvania.
There is increasing consolidation among regional providers serving secondary and tertiary cities. Ohio and the Midwest in general are also seeing a data center renaissance, including massive projects like Amazon’s $1.2 billion Ohio data center project and Facebook announcing a third data center in Iowa, helping to put both on the map as good states to build data centers in.
Last year, Involta raised $50 million from private equity firm M/C Partners. M/C managing partner Gillis Cashman recently discussed the firm’s interest in secondary data center markets, stating these markets are attractive due to an imbalance between enterprise data center need and the available high-quality data center infrastructure.
Since the private equity raise, Involta has more than doubled the number of data centers it operates, going from five to over 10 following the acquisition. However, the core of its business is acting as a strategic IT partner to mid- and large-size enterprises in Tier II and III cities.
DRS customers will keep getting the services they’re used to as well as gain an opportunity to expand across Involta’s footprint and expertise if desired.
In addition to the acquired DRS facilities, Involta operates eight other data centers across secondary markets in Iowa, Idaho, Minnesota, Arizona, and Ohio, where it has an existing strong market position in Akron in verticals like healthcare.
Equinix to Offer Private Links to Alibaba Cloud Data Centers
As it prepares to launch its first cloud data center in the U.S., Aliyun, the cloud services arm of Chinese internet giant Alibaba, has struck a partnership with American data center services giant Equinix that will expand its ability to cater to enterprise cloud users.
Equinix will provide private direct network connections to Aliyun’s cloud from its data centers as a service to its customers. The Redwood City, California-based colocation provider already offers such connections to data centers operated by Aliyun’s major U.S.-based competitors, such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM SoftLayer, and Verizon.
The point is to provide access to cloud services, which are usually reached over the internet, through dedicated private network connections that bypass the public internet. Such connections are reportedly faster and more secure than the open internet, making public cloud services more palatable for security- and performance-conscious enterprises.
Equinix will start offering direct access to Aliyun’s cloud from its Silicon Valley and Hong Kong data centers in the near future. Asian tech companies favor Silicon Valley as a hub for accessing U.S. cloud services markets. Hong Kong is one of two main hubs in Asia international companies use to serve customers in the region and, importantly, in mainland China. The other one is Singapore.
Aliyun recently announced it would establish its first U.S. cloud data center in Silicon Valley. Direct links from Equinix data centers in the region are likely to provide customers access to that facility.
Aliyun has not provided much detail about its strategy for the Silicon Valley data center, but the company is most likely leasing a big chunk of capacity from one of the area’s big wholesale data center providers, such as CoreSite, Vantage, QTS, DuPont Fabros, or Digital Realty.
Chris Sharp, vice president of cloud innovations at Equinix, said for his company the partnership’s value was in being able to better cater to its multinational clients. “Now you can privately consume the Aliyun services in Hong Kong and Silicon Valley, which is a huge advantage for a lot of our multinational customers,” he said. Expanding their reach into Asia is a “critical next step for a lot of our multinational customers.”
A recent Cisco Cloud Survey forecasted that by 2018, most cloud traffic will originate in Asia Pacific. China is the biggest market in the region, and Aliyun owns about 23 percent of the country’s Infrastructure-as-a-Service market, according to the research firm IDC.
Aliyun is the largest cloud service provider in the country. With more than 1.4 million customers, it is China’s answer to Amazon Web Services. This means Equinix, via its new Chinese partner, can now provide customers the “ability to go deeper into Asia than they ever had before,” Sharp said.
But the deal is also advantageous for Aliyun, because it gets access to the rich ecosystem of customers and service providers that interconnect at Equinix data centers around the world. A big part of that ecosystem is in Silicon Valley, but the partnership is not limited to the two initial locations. There are plans to expand it to other markets as well.
Earlier this week Equinix announced an agreement to acquire European data center provider TelecityGroup. If completed, the deal will make Equinix the biggest data center provider in Europe.
DCIM: What to Consider Before You Buy
This is the second part of our five-part series on the countless decisions an organization needs to make as it embarks on the DCIM purchase, implementation, and operation journey. Read Part 1 here.
In part one of this series we examined the vendor promises for DCIM. If your organization is now ready to consider evaluating and purchasing a DCIM platform, be prepared to devote enough internal resources to the process of developing an RFI (Request for Information) or RFP (Request for Proposals). Even if you have already spoken to some of the vendors and seen a few web demos, the RFI or RFP should not just be a copy-and-paste amalgamation of vendors’ marketing brochures.
Assemble the Stakeholders
DCIM systems cover a very broad range of areas: Facilities, Operations, and IT. There is a real need for a unified solution that can collect and aggregate information from facilities and IT systems and then display it in a manner that is meaningful and correlates to actionable items for all factions. Like any other assessment of a complex, multifaceted problem, the evaluation team should be composed of managers and technical personnel representing those domains with a common holistic goal. Beware: the question of which of these domains is driving or funding the DCIM project can become an issue if the politics of IT-versus-facilities comes into play while developing the requirements.
Define Expected Functionality
A DCIM project has been compared to a major ERP (Enterprise Resource Planning) software implementation, potentially touching almost every aspect of an organization’s data center fabric. This is a time-consuming process in itself and should be done well before calling in the legions of sales reps for the proverbial dog-and-pony show. Besides the usual purchasing boilerplate, the basic requirements should first focus on the organization’s most critical overall long-term requirements, but also reflect the specific pain points that need the most attention. Clearly assign weighting factors to each required function, such as: mandatory, highly valued, optional, or future. This is not to say that, during the multitude of vendor presentations, the evaluation team will not see a useful or “must-have” feature or function it had not initially envisioned; it can then consider adding it to the final requirements list.
The major areas that virtually all DCIM packages cover include basic facilities functions. Of course, no DCIM package would be complete without the PUE (Power Usage Effectiveness) calculator. These “dashboards” provide visibility into basic power utilization by simply gathering the fundamental energy information from the output of the UPS (roughly presumed to be the “total IT power”), comparing it with the “total facilities power” (assuming that it was properly instrumented), and, voilà, spitting out a PUE result. This provides baseline information that hopefully will be used in improving the site’s power and cooling infrastructure energy efficiency.
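As a reminder of what that dashboard is computing, PUE is simply total facility power divided by IT power. The readings in the sketch below are made up purely to illustrate the arithmetic; it is not any vendor's implementation.

```python
# Illustrative PUE arithmetic with made-up meter readings (kW).
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Example: 1,500 kW at the utility meter, 1,000 kW measured at the UPS output.
print(round(pue(1500.0, 1000.0), 2))  # -> 1.5
```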
There are also systems that include some form of CFD (Computational Fluid Dynamics)-type functionality to monitor power and environmental conditions across the entire whitespace down to the rack level and provide graphical thermal mapping. This is an area in which most BMS (Building Management System) tools have limited visibility. Another management area involves the ability to perform capacity management and “what if?” modeling. This can help avoid a mismatch in the classic space-power-cooling triangle.
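The "what if?" check can be pictured as a simple headroom test across all three legs of the space-power-cooling triangle. The capacities and the proposed deployment in this sketch are invented for illustration and are not drawn from any DCIM product.

```python
# Illustrative "what if?" capacity check across space, power, and cooling.
# All capacities and the proposed deployment below are invented numbers.
from dataclasses import dataclass

@dataclass
class Capacity:
    rack_spaces: int    # free rack positions
    power_kw: float     # remaining usable UPS power
    cooling_kw: float   # remaining cooling capacity

def fits(proposed: Capacity, available: Capacity) -> bool:
    """Return True only if the proposal fits every leg of the triangle."""
    return (proposed.rack_spaces <= available.rack_spaces
            and proposed.power_kw <= available.power_kw
            and proposed.cooling_kw <= available.cooling_kw)

available = Capacity(rack_spaces=12, power_kw=80.0, cooling_kw=70.0)
proposed = Capacity(rack_spaces=10, power_kw=75.0, cooling_kw=75.0)
print(fits(proposed, available))  # -> False: cooling is the constraint
```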
Other DCIM platforms are aimed more toward IT administrators; some IT-centric vendors have simply applied the new DCIM name to them. These tools were formerly classified as IT asset management or network management, but now also incorporate the ability to gather and correlate energy usage information.
Predominant Vendor Influence
There are presently approximately 75 vendors in the DCIM category, many of which are smaller firms with DCIM as their only product. There has already been consolidation, which will continue to increase over the next few years. In most data centers, there are usually one or more incumbent major vendors that supply and support power and cooling systems, as well as the BMS. They naturally are the prime candidates to offer their own DCIM systems, since they have a foothold via the installed major facility critical equipment. Moreover, the financial stability and size of the vendor are considerations, and while it may be convenient, do not make the major incumbents the only vendors invited to participate in the evaluation.
While it makes sense to consider these factors as part of the evaluation, vendor background and culture also play a significant role in the product features and the market segment they are focused on: facilities or IT. The larger players that primarily offer power and cooling equipment naturally tend to focus their product features toward the facilities manager, while those vendors that have a stronghold and history in enterprise IT software products play to the IT departments. Furthermore, these vendors are cross-competing and continue to add features to expand their functionality and broaden their appeal to all sectors.
Hardware
While the essence of a DCIM system is the software, it needs to gather data points from a wide variety of hardware devices. For existing facilities this can involve indirectly collecting data from existing sensors connected to the BMS by means of a hardware gateway or some form of data export routine via a software interface with the existing BMS. This will involve system integration services, which typically require additional charges by the BMS vendor, as well as the system integrator.
In most cases a considerable number of environmental sensors need to be located throughout the data center white space, and additional electrical metering will need to be added to monitor energy usage of sub-systems (typically cooling). While the costs of the devices are relatively easy to quantify, the cost to install the devices can be very substantial and also may require shutdown of some systems or sub-systems. These factors and costs should not be overlooked.
Software License Considerations
Besides the price of the basic DCIM core software and optional modules, each DCIM vendor has differing license models. From the IT perspective it can be based on the number of monitored devices (servers, storage arrays, etc.), powered IT racks, or electrical monitored points (i.e. per circuit), as well as the environmental sensors. On the facilities side, charges may be based on the number of devices, such as generators, UPS, PDU, CRAC, CRAH, chillers, etc. Therefore, it is very important to clearly understand the various vendors’ licensing terms in order to project the present and future expansion costs.
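Because the models differ so much, it helps to project costs under each vendor's terms before comparing bids. The per-unit prices and device counts below are invented purely to show the arithmetic, not quotes from any vendor.

```python
# Illustrative license-cost projection; every price and count is invented.
def annual_license_cost(counts: dict, unit_prices: dict) -> float:
    """Sum per-unit license fees for whatever items the vendor meters."""
    return sum(counts[item] * unit_prices.get(item, 0.0) for item in counts)

# Hypothetical IT-centric model: priced per monitored rack and sensor.
vendor_a = annual_license_cost(
    counts={"racks": 120, "sensors": 480},
    unit_prices={"racks": 250.0, "sensors": 15.0})

# Hypothetical facilities-centric model: priced per major power/cooling device.
vendor_b = annual_license_cost(
    counts={"ups": 4, "pdu": 40, "crah": 16},
    unit_prices={"ups": 2000.0, "pdu": 300.0, "crah": 500.0})

print(f"Vendor A: ${vendor_a:,.0f}/yr, Vendor B: ${vendor_b:,.0f}/yr")
```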
Product Roadmaps
Like every other system, virtually every DCIM platform has evolved and added features over time. When evaluating vendor offerings, examine the product history (version upgrades) and the promised future roadmap. This can be in the form of separate modules or future feature and functionality version upgrades. What are the module add-on costs, or will they be included in the next full-version release? Many maintenance contracts cover software fixes but not major version upgrades. Of course, as with any other evaluation, obtain customer references and speak to existing users.
The Bottom Line
If done properly, the selection of a DCIM platform is not simply choosing an amalgamation of software and hardware. It is a philosophical commitment to a holistic approach by facilities and IT to work together to improve the overall energy and operational efficiency, and even availability, of the data center. Organizations should not underestimate the scope and depth, as well as the staff time and effort, needed to define the evaluation requirements and to implement a broad-scale DCIM project. When seeking vendor proposals, be sure they address all the additional aspects and projected costs for sensor installation and system integration, training, and annual support.
Coming up in the series: benefits and limitations, implementation challenges, costs (direct and hidden), and ultimately the cost justification for DCIM.
Read Part 1 here. |
|