Data Center Knowledge | News and analysis for the data center industry
Monday, May 11th, 2015
12:00p
How Etsy Optimized Data Center Infrastructure for Efficiency

Etsy, which recently held a successful IPO, is a growing marketplace of diverse, unique, and handmade goods. The company’s operations philosophy is centered around what it calls “designated ops,” which emphasizes deep collaboration and community not just across developers and operations, but across the entire company.
“It’s a mistake to think of these topics as domain expertise of one particular group,” said Mike Rembetsy, Etsy’s vice president of technical operations. “These topics shouldn’t be — though they are historically — the realm of operations engineers.”
Rembetsy stressed that true efficiency is dependent on the human element, the communication between people. For the data center to run like a well-oiled machine, people need to work with the bigger interest in mind.
Etsy’s designated ops approach is not to be confused with dedicated ops. Like DevOps, it stresses collaboration, but that collaboration extends out further than between only development and operations.
In designated ops, someone within the operations team will embed themselves into other teams. “The job of that person is to act like a member of that team,” said Rembetsy. “Any changes happening, their job is to help facilitate an operational mindset within the process of development. They’re not there to configure all the network checks etc., they’re there to help cultivate the knowledge back into the development process to help foster the collaboration.”
This approach means if help is needed in one domain, there’s a bridge in place to allow teams to collaborate, he said. “It’s pretty impressive to sit back and watch. I’m super proud of it.”
Even the people working on Etsy’s public-facing client side need to know a lot more about issues like networking resource constraints than the average worker does, said John Allspaw, senior vice president of technical operations at the company.
Etsy’s Data Centers
As with any successful web property, it’s always interesting to take a peek inside the company’s data centers, which Etsy operates with an emphasis on sustainability.
“It’s not just a data center value, it’s a company-wide value,” Allspaw said about the sustainability focus.
The company works on maximizing use of renewable energy with its data center providers Sabey and Equinix. “We want to provide renewable energy as well as onsite generation,” said Allspaw. “The difficulty with that is multi-tenant data centers are focused on reliability.”
Because reliability is goal number-one for the multi-tenant data center, often these interests aren’t fully aligned with renewable energy use.
The team has been talking with providers about onsite renewables, and it looks at Power Purchase Agreements if local sourcing isn’t possible. “It’s a challenge to find a provider that can provide a data center level of power consumption that meets requirements,” said Allspaw.
Equinix was Etsy’s first data center provider (since 2011), and more recently it has added a Seattle location with Sabey.
The Equinix data center in San Jose, California (SV5), was selected partially because it used a newer cooling technology that consumes less water. “Since then, when choosing another data center, we’ve continued on the idea of efficiencies, renewable energy, and design,” said Rembetsy. “The reasons we choose providers are values based on longer-term renewability.”
Efficiency at the Box Level
Etsy began making its racks as dense as possible in 2011, going from 2kW per cabinet to 5-6kW per cabinet. The first step was to use Supermicro servers. “They seemed to have the most efficient power supplies, and we were running a relatively dense configuration,” said Rembetsy. Since then, the company has diversified its vendors.
To get to its current densities, the company credits playing around with MapReduce and Hadoop several years ago. “Hadoop gave us a crash course in how to make a dense rack,” said Rembetsy.
The company performed big data jobs early on in the cloud, using Amazon’s Elastic MapReduce service.
“That [EMR] got us to a place where we think about what data we collect. At the time it was really inefficient from a cost perspective, but there weren’t any creative constraints,” said Rembetsy. “It was quite trivial to write Hadoop jobs that were inefficient.”
The team eventually decided to bring it in house. “Our Hadoop cluster wields a huge amount of flexibility,” said Rembetsy. “We can get a lot done on bare metal, and we know very much what we’re looking for. Cloud services, that’s where efficiencies fall away. Spin up and spin down doesn’t have the same value proposition.”
Hadoop clusters tend to use a lot of power, so the company again looked for efficiency in servers.
Hadoop servers (Etsy uses Supermicro boxes to run Hadoop) were nice and cool in the front, but the back was pretty hot. Etsy employed cold-aisle containment to help airflow and reduce the number of fans.
Rembetsy and his team took a look at the CPUs themselves and were able to switch from 145W top-of-the-line processors to 90W CPUs. “We looked at the wattages versus performance from a power efficiency perspective,” said Rembetsy.
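To make that tradeoff concrete, here is a minimal sketch of a “wattage versus performance” comparison. All numbers are hypothetical and chosen for illustration; the article does not disclose Etsy’s actual benchmark figures.

```python
# Hypothetical performance-per-watt comparison; the relative_perf
# values are illustrative, not Etsy's benchmark results.
candidates = {
    "145W top-of-the-line CPU": {"watts": 145, "relative_perf": 1.00},
    "90W lower-wattage CPU":    {"watts": 90,  "relative_perf": 0.85},
}

for name, c in candidates.items():
    perf_per_watt = c["relative_perf"] / c["watts"]
    print(f"{name}: {perf_per_watt:.4f} relative perf per watt")

# If the lower-wattage part still meets the workload's absolute
# performance target, its better perf-per-watt wins at the rack level.
```

With numbers like these, the 90W part delivers roughly 37 percent more work per watt, which is why a lower-wattage CPU can be the more efficient choice even though it is slower in absolute terms.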
“Efficiencies are going to come in different forms or shapes,” said Allspaw. “There are a couple of scenarios that are not entirely intuitive choices: the one that comes to mind is diversifying our CDN. Moving from one to three allowed us to drive further efficiency, but using three CDNs running simultaneously meant we had to make our code work for it. We pulled the cleverness out so we could treat CDN the same as any other.”
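A minimal sketch of what “pulling the cleverness out” might look like: treating several CDNs as interchangeable by mapping each asset to a provider with a deterministic hash. The hostnames and scheme here are hypothetical, not Etsy’s actual implementation.

```python
import hashlib

# Hypothetical CDN hostnames -- illustrative, not Etsy's providers.
CDN_HOSTS = ["cdn1.example.com", "cdn2.example.com", "cdn3.example.com"]

def asset_url(path: str) -> str:
    """Map an asset path to one of several interchangeable CDNs.

    Hashing the path keeps the choice deterministic (the same asset
    always resolves to the same host, which is cache-friendly) while
    containing no per-provider logic at all.
    """
    digest = hashlib.md5(path.encode("utf-8")).hexdigest()
    host = CDN_HOSTS[int(digest, 16) % len(CDN_HOSTS)]
    return f"https://{host}{path}"

print(asset_url("/assets/listing/1234.jpg"))
```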
2:10p

Microsoft Invests In Several Submarine Cables In Support Of Cloud Services

Microsoft is investing in several submarine cables to connect its data centers globally and to support growing network demand. The latest investments strengthen connections across both the Atlantic and Pacific oceans, linking several countries.
Microsoft continues to significantly invest in subsea and terrestrial dark fiber capacity by engaging in fiber relationships worldwide. Better connectivity helps Microsoft compete on cloud costs, as well as improves reliability, performance and resiliency worldwide. The investments also spur jobs and local economies.
Microsoft deals with Hibernia Networks and Aqua Comms strengthen connectivity across the Atlantic, while the New Cross Pacific (NCP) Cable Network and a first physical landing station in the US will better connect North America to Asia.
Across the Atlantic, Microsoft is investing in cables with both Hibernia and Aqua Comms to connect Microsoft’s North American data center infrastructure to Ireland and on to the United Kingdom. Hibernia said the partnership will translate to a cost benefit for Microsoft customers.
Hibernia Networks’ new Express cable will connect Halifax, Canada to Ballinspittle, Ireland to Brean, UK. The cable helps support Microsoft’s backbone, connecting several data centers. The Express cable pair will yield in excess of 10 Tbps per pair, which is nearly triple the 3.5 Tbps per pair delivered on the current systems.
Microsoft is the first foundation customer on Aqua Comms’ America Europe Connect (AEConnect) project, a submarine cable system being built by TE SubCom. The infrastructure will also support Microsoft’s cloud services.
Across the Pacific, the New Cross Pacific (NCP) Cable Network has commenced construction. Microsoft joined the NCP consortium, which comprises several major Chinese telecommunications companies. As part of its participation, Microsoft will invest in its first physical landing station in the US, connecting North America to Asia.
NCP will provide better data connections between North America and Asia. The NCP will link Hillsboro, Oregon to several points on China’s mainland, South Korea, Taiwan and Japan. The cable system will deliver up to 80 Tbps of capacity.
Last year, Microsoft agreed to buy capacity on a fiber-optic submarine cable a company called Seaborn Networks is building between the U.S. and Brazil, the first cable to connect the two countries directly. Microsoft’s Brazil South availability region for its Azure cloud services launched in June 2014.
Google, one of Microsoft’s biggest competitors in the cloud market, has also been investing in submarine cable capacity. The latest was Google’s participation in FASTER, a five-company effort to build a trans-Pacific cable system that will link major cities on the U.S. west coast to two coastal locations in Japan.
“As we look at how people and businesses will interact with technology in the future, there are investments we need to make now to support our customers and help their businesses grow,” wrote David Crowley, managing director, Network Enablement, Microsoft.
Microsoft’s latest earnings showed the Commercial Cloud division, which includes Azure, Office 365 and other services, grew more than 100 percent. As the company expands its global cloud infrastructure, a strong subsea strategy is needed to ensure high availability.
5:00p

3 Quick Ways to Optimize Storage – Without Adding Any Hardware

As I walked through a customer’s data center, we had a very interesting conversation. The topic revolved around deploying a series of new applications that were quite IO-intensive. The saddened storage architect told me he was being asked to do the last thing he really wanted to do: add more hardware.
It’s not that he didn’t want to add another amazing controller; he was just tired of adding shelf after shelf without doing much real optimization. Sure, there was some. However, direct integration with applications and even the cloud layer certainly wasn’t happening. So, with that in mind, what are some ways you can ask your storage environment to do more for you without adding more disk? Here are three quick ways to help you optimize storage.
- Use your hypervisor. Your hypervisor is a lot more powerful than you think! Technologies like XenServer and VMware offer a lot of great controls around your storage architecture. You can now create a flash-optimized storage architecture that delivers extremely high performance with consistently fast response times. Here’s another one: thin provisioning from the hypervisor uses virtualization technology to give the appearance of having more physical resources than are actually available. Thin provisioning allows space to be easily allocated to servers on an as-needed, scale-as-you-go basis. And yet another example is creating storage monitoring and alerts. Proactively monitoring your storage resources allows you to find issues before they become real problems. These alerts can be set for thresholds around performance, capacity, read/write access, and much more (a minimal sketch of this threshold idea follows this list).
- Look for hidden features. Have you taken a look at your new policies? What about some of the latest feature releases and updates? EMC, NetApp and other major vendors completely understand that optimization is now the common language among storage architects. They’ve built in powerful new features that allow for greater data agility and control. New ways to compress data and control block- as well as file-level storage can really impact how many storage resources are actually needed. Don’t only look at hidden features, either. Out-of-the-box configurations absolutely need to be examined as well. For example, what is your current deduplication rate? Is it 40 percent? Maybe 80 percent on some volumes? When was the last time you looked at the efficiency of your deduplicated volumes? Rarely reviewed features and existing features that impact storage should all be revisited. As storage requirements change, your policies and configurations have to adapt as well.
- Use the cloud! New ways to extend into the cloud have allowed data and storage engineers to do great things with their environments. You can literally set policies where certain thresholds immediately point new users to a cloud-based environment. Dynamic load-balancing and data replication mechanisms allow for transparent data migration. OpenStack, CloudStack and Eucalyptus all create powerful connection mechanisms for storage extension. This way, you can specify exactly how much storage you want to use internally and outsource the rest. Over the years, pay-as-you-grow models have become a lot more attractive from both a pricing and a technology perspective. APIs are a lot more powerful, it’s easier to extend your environment, and hybrid cloud models are more popular than ever. Cloud providers now allow you to pay for only the space that you need to use. This is great for storage bursts, offloading a piece of your storage architecture, and even creating new, completely cloud-based solutions for your business.
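As promised in the first item above, here is a minimal sketch of threshold-based capacity monitoring that also reflects the cloud-tiering policies described in the third item. The volumes and thresholds are hypothetical; a real deployment would pull utilization from the array’s or hypervisor’s monitoring API.

```python
# Hypothetical volumes and thresholds -- illustrative only.
VOLUMES = {
    "vol-prod-01": {"used_gb": 7600, "capacity_gb": 8192},
    "vol-dev-02":  {"used_gb": 2100, "capacity_gb": 8192},
}
ALERT_THRESHOLD = 0.85        # raise a capacity alert at 85% full
CLOUD_TIER_THRESHOLD = 0.93   # above this, point new allocations to cloud

for name, vol in VOLUMES.items():
    utilization = vol["used_gb"] / vol["capacity_gb"]
    if utilization >= CLOUD_TIER_THRESHOLD:
        print(f"{name}: {utilization:.0%} full -> route new data to cloud tier")
    elif utilization >= ALERT_THRESHOLD:
        print(f"{name}: {utilization:.0%} full -> capacity alert")
    else:
        print(f"{name}: {utilization:.0%} full -> OK")
```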
Consider this: the latest Cisco Global Cloud Index report projects continued growth in global data center and cloud traffic. For example:
- Annual global data center IP traffic will reach 8.6 zettabytes (715 exabytes [EB] per month) by the end of 2018.
- Overall data center workloads will nearly double (1.9-fold) from 2013 to 2018; however, cloud workloads will nearly triple (2.9-fold) over the same period.
- By 2018, 53 percent (2 billion) of the consumer Internet population will use personal cloud storage, up from 38 percent (922 million users) in 2013.
- Globally, consumer cloud storage traffic per user will be 811 megabytes per month by 2018, compared to 186 megabytes per month in 2013.
All of this data will have to live somewhere. And for the most part, that somewhere is your data center. More applications, more mobility, and ever-evolving users have created new demands around storage resource utilization. Fortunately for us, virtual and software-defined technologies are making it much easier for storage and data to do great things.
With all of that in mind, there is still such a diverse array of applications and storage solutions that it’s incredibly difficult to nail down just three ways to optimize storage. New ways to deliver efficiency, better physical storage controls, and software-based optimizations are all designed to make your storage work better for you. Remember, as you build out your own storage environment, there are often great ways to optimize your data control methodology beyond just adding hardware. Using the cloud, or even existing features, is a really great way to quickly optimize your storage environment.
5:15p

DCIM: The Promises, Politics, Challenges, and Cost Justification – Part 1

This is the first of a five-part series on Data Center Infrastructure Management (DCIM).
by Julius Neudorfer
The term DCIM originated not long after The Green Grid introduced its now widely recognized Power Usage Effectiveness (PUE) metric in 2007. A substantial percentage of early DCIM offerings were marketed primarily as PUE dashboards, riding the growing drive to measure and improve energy efficiency in the data center facility.
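The PUE metric itself is a simple ratio: total facility power divided by the power delivered to IT equipment, with 1.0 as the theoretical ideal. A one-function sketch (the sample numbers are hypothetical):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 means every watt reaches IT gear; anything above that is
    overhead (cooling, power distribution losses, lighting, etc.).
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical facility drawing 1,500 kW to power 1,000 kW of IT load:
print(pue(total_facility_kw=1500, it_equipment_kw=1000))  # -> 1.5
```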
DCIM has evolved to become a very broad term, widely used by a myriad of vendors that depict it as the ultimate data center management tool. So is DCIM mostly market-driven hype, or are there tangible benefits to be realized? Considering the costs of products and implementation, where is the return on investment? This five-part series will examine the business benefits, pain points and processes that DCIM product suites are attempting to deliver or improve.
I have written about DCIM previously but have noticed a significant uptick in interest this past year by potential end-user organizations – not just increased marketing efforts by the major players in the data center market. However, the data center landscape itself has undergone significant changes since DCIM was first introduced, and many enterprise organizations have moved toward colocation and cloud services to meet increased capacity demands, rather than build more data centers. In response, some colocation providers are implementing DCIM as a selling point to their clients by offering them a peek behind the curtain, and providing their customers access to selected aspects of their internal DCIM systems.
Originally, some of the early DCIM products were based on adaptations of traditional Building Management Systems (BMS) platforms. They were almost totally oriented toward energy usage, so that the facilities team could monitor, manage and hopefully optimize the power consumed by the electrical and cooling systems. And like the PUE metric, these were facility-centric packages that did not correlate with the energy efficiency of the IT hardware (or any other aspects of the computing systems). In contrast, IT administrators have long had a wide variety of administrative management consoles and tools that look at different aspects of their server, storage and network systems. However, in many cases these were also somewhat segregated and specialized, especially in larger enterprise systems, and virtually none of them examined IT energy usage or efficiency.
The Promise
The long-term promise of DCIM is to help both IT and facilities managers work together to make more informed, and presumably better, decisions about overall energy usage – operational and workflow optimization – as well as capacity modeling and planning. As the name implies, it is rational to infer that DCIM presumably provides organizations with the ability to monitor and manage their infrastructure. However, if you ask what “infrastructure” means to different factions of the data center world, you will still get diverse answers. The same seems to hold true for today’s DCIM offerings by various vendors.
The current DCIM options cover an extensive array of products with a wide range of features and functions. Conceptually, they are based on a centralized data repository that can deliver an integrated and interrelated view of all the assets and status of the physical facility infrastructure (space, power, cooling and network cabling) as well as IT systems (servers, storage, networking and even applications). Nonetheless, in many organizations facilities and IT teams are traditionally siloed: culturally, technically and politically.
The Politics of DCIM
Given the historically divergent cultures of IT and facilities, it should come as no surprise that politics play a significant role in DCIM decisions. This tends to apply more to traditional data centers than to new build-outs. It also seems to relate to the type of organization: conservative financial firms vs. internet-based services, such as cloud, search and social media.
The new economic reality, focused on cost reduction, has made the rising cost of energy a significant percentage of OPEX. Suddenly, PUE became a buzzword that even the CFO heard about, and at least awareness of the need for basic energy efficiency measurements became a Key Performance Indicator (KPI). However, the promised DCIM value proposition and the buyer’s motivations can sit at opposite ends of the spectrum, which complicates justifying the purchase. Clearly, senior management wants better reporting metrics and lower OPEX. However, the technical staff, which should be directly involved with the product evaluation, testing, and purchasing recommendations, may have mixed motivations.
While DCIM ultimately should make IT departments more productive by improving and automating processes, like any labor saving system it could eventually reduce head count. Prior to the 2008 financial crisis, data center managers were primarily motivated to provide security and 24×7 availability for their facilities (the proverbial five “9”s), and to satisfy customers’ requirements.
Employee productivity was important, but retaining qualified personnel who kept everything operational was even more so. In effect, as long as power and cooling met IT requirements, there was very little reason for elaborate facility efficiency metrics. While general operating costs were effectively kept in check (in-house staff, vendor-provided equipment services, etc.), availability became the KPI. In essence, this binary indicator assigned a facility a 1.00 if it had no outages or a 0.00 if there were any (in which case fingers were pointed and perhaps heads would roll).
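For context on what those availability targets mean in practice, the proverbial five “9”s is less forgiving than it sounds; the simple arithmetic below shows the downtime each level permits per year.

```python
# Downtime allowed per year at common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.4%} availability -> {downtime:6.1f} minutes/year")

# 99.9%   -> ~525.6 minutes (about 8.8 hours)
# 99.99%  -> ~52.6 minutes
# 99.999% -> ~5.3 minutes
```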
The politics of social responsibility, carbon and water usage metrics (CUE and WUE), and disclosure also come into play. Prior to the development of the Green Grid metrics, very few organizations really tracked data center energy efficiency (much less water usage and carbon footprint). There was not much to report either internally or publicly, and data centers never attracted the attention of watchdog organizations such as Greenpeace. Now it seems that even organizations that strive to build new facilities with PUEs below 1.2 or even 1.1 are still targets and subject to criticism, despite the vastly improved sustainability performance of new facilities.
The Challenges
One unclear issue for facilities operators is the difference between existing Building Management Systems (BMS) and DCIM, and any additional benefits it offers. IT teams are also very reluctant to interconnect their systems to the facilities systems for a variety of reasons. Most recently, security has become a major concern and impediment.
In speaking to a variety of DCIM vendors, several customer-driven issues have emerged. They have seen that some of the end-users expect to automate or manage workflow processes that may not even presently exist as well-defined manual processes (i.e. when IT servers need to be added, operations and facilities should effectively allocate and provision space, power, cooling and network, etc.). This in itself indicates a basic visibility gap that needs to be addressed, and DCIM may help with organization and management.
So, how can C-level executives interpret and evaluate the usefulness of the promised deliverables, much less decide who should specify, purchase, install and operate DCIM systems? Clearly, before embarking on a trip down the DCIM wormhole, a team composed of facilities and IT needs to agree on and address the major pain points and primary areas of improvement by taking a holistic approach to energy efficiency, workflow process, and asset management, as well as capacity optimization, modeling and planning.
The Bottom Line
DCIM has now moved far beyond a glorified PUE console as it embarks on its second and third generations as a product category. However, despite the early hype and projections of massive and widespread adoption by 2015, DCIM is still a complex solution for a multidimensional problem. Plus, it’s being promoted to a somewhat skeptical and perhaps confused audience of potential buyers.
What are the drivers and key motivators for data center decision makers to implement or defer DCIM deployment? Should DCIM be viewed as a strategic investment or just another additional (perhaps unnecessary) expense? We will address these issues over the course of this series, examining details behind the decision to buy or not to buy DCIM, its benefits, implementation challenges, expenses (direct and hidden), and ultimately the justification of cost.
5:28p
DCIM Weekly News Roundup

by John Rath
Here is a roundup of the latest data center infrastructure management (DCIM) news from elsewhere on the web:
- iTRACS upgrades Converged Physical Infrastructure Management (CPIM). Enterprise DCIM provider iTRACS, a CommScope company, released version 3.2 of its CPIM software, which combines DCIM with CommScope’s automated infrastructure management solution imVision. iTRACS says this integration gives users real-time visibility into the physical cabling infrastructure that connects IT assets in the data center.
- Emerson business unit Therm-O-Disc and No Limits Software form strategic partnership. Emerson Network Power’s Therm-O-Disc business announced strategic brand support and collaboration with data center management solution provider No Limits Software. The collaboration pairs Emerson’s smart wireless sensors for the data center with the No Limits RaMP DCIM solution.
- University of Cambridge selects Emerson for DCIM and data center solutions. The University of Cambridge selected a range of Emerson Network Power solutions and its Trellis DCIM offering to help drive operational efficiencies for the university’s existing data center ecosystem. With Trellis, the university looks to gain real-time insights into power, thermal management and IT equipment to help it manage capacity and increase efficiencies.
- FNT Software discusses 10 steps to successful DCIM adoption. German software company FNT Software talks about what really matters in the quest for deriving value from DCIM implementations.
- Tier44 releases EM/8 DCIM solution on ServiceNow store. Enterprise data center management company Tier44 announced that its EM/8 data center monitoring solution has been released on the ServiceNow enterprise application marketplace. Integrated with the ServiceNow platform, the EM/8 solution offers monitoring and management capabilities for ServiceNow users.
5:30p
Canon’s DCIM Needs Centered Around Integrating With Existing Systems

by Jason Verge
Canon’s single biggest concern in its search for a DCIM vendor was the ability to integrate with existing systems. The company wanted a vendor that not only worked with the other systems in place, but one that could act as the foundation for things to come.
Canon has data centers across Virginia and New York consisting of 3,000 physical servers. When the company decided to open a second New York data center, it wanted to start right. That meant first off, it needed to change the way it was doing things.
Canon was using spreadsheets to track physical infrastructure managed across two teams. Within the two data centers, there were several other teams all using individual spreadsheets to track their concerns.
The fact that the infrastructure behind these teams was interrelated caused problems, and the spreadsheet method meant that not every team had the same, or even accurate, information. As a result, the infrastructure management team wanted to bring in more formalized tools.
DCIM is a broad term, although it’s often incorrectly pigeonholed as a tool for power and cooling management. Because of the confusion around the term DCIM, many assess whether they need DCIM solely from a power and cooling management perspective. This was not Canon’s approach.
In Canon’s case, its primary needs surrounded integrating with ITSM and other systems in order to gain insight into what was going on with the servers, rather than just the wider facility environment. The key to its successful deployment was knowing its immediate needs and growing from there.
Nlyte’s Vice President of Marketing Mark Harris suggests removing “DCIM” from our vocabulary and talking about needs instead. In Canon’s case, the big need was integrating with existing systems. This tipped the scales in favor of Nlyte.
“DCIM has significant value at the power and cooling level, but I think it has larger fiscal value at the IT side of it – what’s in the rack, why it’s there, and who owns it,” said Harris. “It’s about more than keeping things running; it’s about keeping things running at the right cost.”
Canon needed to tie DCIM into ITSM systems, service management, and ticketing, and tie it all into general ledgers. Integration capabilities are not a “yes/no” checkbox when selecting a DCIM vendor; the answer is always yes, said Harris.
“A selling point of Nlyte was the off-the-shelf integrations with some products that we already had, and with some that we didn’t currently have. But we are beginning to realize what we need as a large enterprise organization,” said Sean Hendershot, manager of Canon U.S.A.’s data center operations and IT infrastructure division. “What Nlyte has done is provided us with the push of ‘we need to stop doing it the old way and start doing it a better way.’”
Nlyte has created several out-of-the-box connectors, which means it will not only integrate with existing systems but will continue to do so easily.
The alternative is bringing in a programmer every once in a while to sit down and build a connection, which creates ongoing expenses and headaches. Canon’s other systems update frequently, and Nlyte updates as well. Relying on thousands of lines of code written once is a risky way of doing things.
DCIM should be viewed as a long-term, enterprise-class system that will be in place for many years. An enterprise’s other systems will change during that time, so it’s not about whether it integrates now, but whether it will continue to integrate as the makeup of an enterprise’s systems changes.
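For a sense of what the hand-rolled alternative looks like, here is a hedged sketch of one-way sync glue between a DCIM inventory and an ITSM’s CMDB. Every endpoint and field name here is hypothetical; this is not Nlyte’s API or any ITSM vendor’s, and it exists only to show the kind of code a vendor-maintained connector spares you from owning.

```python
import requests  # third-party HTTP library (pip install requests)

# Hypothetical endpoints -- not Nlyte's or any ITSM vendor's real API.
DCIM_ASSETS_URL = "https://dcim.example.com/api/assets"
ITSM_CI_URL = "https://itsm.example.com/api/configuration-items"

def sync_assets(session: requests.Session) -> None:
    """Push DCIM asset records into the ITSM's CMDB, one-way.

    Glue like this must be re-tested every time either product
    updates -- exactly the recurring expense described above.
    """
    assets = session.get(DCIM_ASSETS_URL, timeout=30).json()
    for asset in assets:
        ci = {
            "name": asset["name"],
            "rack": asset["rack"],
            "owner": asset["owner"],
        }
        session.post(ITSM_CI_URL, json=ci, timeout=30).raise_for_status()
```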
“The main value of DCIM is discipline,” said Harris. “Yes, it’s a bunch of tools, but the big value is it gives you a purpose-built platform to design and build processes, and to perform processes by design rather than reaction.”
An unforeseen effect of Canon using DCIM was that it ended up choosing not to use a Configuration Management Database (CMDB) in conjunction with Nlyte. Nlyte’s strength is in managing the lifecycle of assets, which meant Canon was able to use Nlyte as its single source of information.
The infrastructure management team provisioned and briefly trained application users directly on the Nlyte system. These business users can now self-serve to see where their hardware or virtual machines are running. Beyond the core team of six daily users, more than 250 of these application owners now access Nlyte on an as-needed basis.
Nlyte Suite 7 DCIM Solutions
Formerly called Global DataCenter Management, Nlyte is a leading provider of DCIM solutions. Founded in 2003, the company has quickly transformed itself into a key player in the DCIM segment. Nlyte 7 Suite is a purpose-driven DCIM solution that is scalable, fully extensible and customizable. The industry-patented asset allocation, along with the contextual content repository, is the strength of the Nlyte 7 Suite. Here is a comprehensive evaluation of the Nlyte 7 Suite:
Summary
Nlyte is a major player in the DCIM segment. The current version of Nlyte 7 provides greater efficiency for asset management and change management. It seamlessly integrates with popular enterprise management solutions like the BMC’s Change Management, VMware, HP, RF Code, Server Tech and CMDB. In a short span of time, Nlyte has transformed the DCIM suite into a business life cycle management solution.
Asset Management (10/10)
The Nlyte DataCenter allows you to holistically manage your entire infrastructure. Nlyte uses a central repository of assets with key attributes while maintaining contextual relationships between assets and asset parameters. Working in conjunction with the Nlyte Materials Catalog, it records asset attributes like size, weight, power and connections, and automatically updates the database. It effectively manages physical, logical and virtual assets along with the asset location. Assets can be grouped by cage, pod, logical or user-defined categories. It offers automatic asset discovery and reconciliation.
Power Management (9.0/10)
The Nlyte Connection Manager enables you to graphically view power connections within the infrastructure at any level. You can check the entire power path, including connection points and connection information, right from the supplier’s sub-station feed to the data center through UPS, PDUs, generators and power strips at any level. You can assess the asset properties, connection type, connection details, destination details and source port information. The graphical view can be filtered based on fiber, power or network. The circuit endpoint allows for the viewing of asset connectivity. This connectivity data can be exported into Excel spreadsheets.
Thermal Management (9.0/10)
The Nlyte DataCenter enables you to map the entire power path from the supplier’s sub-station to the data center and on to each connected device. The suite monitors the power consumption of every device, including air conditioners and IT and non-IT equipment. It provides clear visibility into power usage effectiveness (PUE) values by calculating the IT equipment power (ITEP) and the total facility power (TFP). Based on these values, automatic provisioning of power devices can be achieved.
Space Management (9.0/10)
The Nlyte Floor Planner provides a CAD-style graphical user interface to visualize the physical layout of the racks, rooms and floors in the data center. You can easily track the asset chain and make changes accordingly. Power provision, airflow, heat control and use of space can be effectively managed using this application. Color-coded designations are given to different performance thresholds that apply to each device. A layered representation of the entire asset infrastructure can be viewed. At the same time, categorized views of cabinets, racks, cage representations, floor standing servers and power infrastructure can be observed.
Visual Modeling (8.0/10)
Nlyte Dashboard & Reporting provides a number of pre-defined and user-defined dashboards to visualize critical operational metrics. It offers visual representation of assets in the form of charts and graphs. The built-in analytics engine enables you to create real-time analytics of heat and power metrics in addition to a detailed view of asset inventory. With Nlyte visual modeling, spreadsheets or Visio diagrams are not necessary.
Hypothetical Modeling (9.0/10)
Nlyte Predict enables you to envisage the future state of your data center’s power, cooling, space and networking capabilities based on real-time analytics and historical usage metrics. Using these “what-if” models, you can assess the effect of any change made to the infrastructure before actually making the change. While installing new assets, you can determine whether there is enough space, power and networking capacity. You can investigate the benefits of upgrading your servers, and you can create forecasted projects and envisage actual costs.
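The core of such a “what-if” check can be expressed compactly. The sketch below tests whether a batch of new servers fits a rack’s space and power headroom; the numbers are hypothetical, and Nlyte Predict’s real models also weigh cooling, networking and historical trends.

```python
# Hypothetical rack state -- illustrative, not Nlyte Predict's model.
RACK = {"space_u": 42, "used_u": 30, "power_kw": 6.0, "used_kw": 4.2}

def fits(rack: dict, server_u: int, server_kw: float, count: int) -> bool:
    """Would `count` new servers fit within the rack's space and power?"""
    space_ok = rack["used_u"] + server_u * count <= rack["space_u"]
    power_ok = rack["used_kw"] + server_kw * count <= rack["power_kw"]
    return space_ok and power_ok

print(fits(RACK, server_u=2, server_kw=0.35, count=5))  # True: 40U, 5.95 kW
print(fits(RACK, server_u=2, server_kw=0.35, count=7))  # False: 44U > 42U
```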
Access & Control (9.0/10)
The Nlyte Integrator is a NgaugeAPI-based connector that enables you to integrate the DCIM solution with a number of third-party applications or existing systems like VMware, HP, RF Code, Server Tech and CMDBs. It is based on Intel’s Datacenter Management technology. Being a purpose-built web service, the Nlyte Integrator makes it easy to integrate with existing systems or third-party solutions.
Reporting & Alarming (8.0/10)
The Nlyte Dashboard & Reporting feature allows you to quickly generate a wide variety of reports to obtain critical operational metrics. It comes with a built-in reporting and analytics engine. You can use templates or customize your reports for the IT, finance and executive departments. Reports can be automatically generated and delivered. Report delivery is quick and consistent.
6:11p

IBM Guarantees Power Server Utilization Rates

At the IBM Edge 2015 conference today IBM unveiled a raft of POWER8 offerings, including a server on which it guarantees utilization rates of up to 70 percent without any compromise to application performance.
The IBM Power System E850 is a four-socket system that comes configured with 4TB of memory. Within the confines of the terms and conditions set forth in the IBM Performance Guarantee Program, Don Boulia, vice president of cloud services for IBM Systems, says IBM is not only guaranteeing application performance but also charging customers based only on the actual number of cores they use inside any given POWER8 system.
Rather than requiring IT organizations to pay to overprovision IT infrastructure, the IBM Power Series gives them the ability to dynamically scale their IT costs up and down as the nature of application workloads changes. In addition, Boulia notes that IBM gives customers the flexibility to move licenses for IBM software between systems, regardless of whether they are running on premises or in the cloud.
In addition to the IBM Power System E850, IBM today also unveiled an enhanced IBM Power System E880 that can scale to 192 cores and an implementation of a converged IBM PurePower System that comes bundled with a distribution of OpenStack from IBM.
Also unveiled at the conference were an updated IBM XIV Gen3 system that can store 50 to 80 percent more compressed data, and IBM Spectrum Control Storage Insights, a storage management console accessed via the cloud that can be used to manage storage systems running in the cloud and on premises. IBM is also showing a technology preview of a cloud archive service from Iron Mountain, through which IT organizations can access live data stored remotely in Iron Mountain data centers.
Finally, IBM also announced a no-charge trial of a Mainframe Data Access Service on Bluemix from Rocket, through which mainframe applications can access data via the IBM Bluemix cloud integration platform starting next month.
While Boulia says IBM will put Power Series servers up against Intel x86 servers running any application workloads, the Power Series systems are especially tuned for Big Data and cloud applications that typically require higher levels of data throughput.
“For some workloads it comes down to bandwidth and bus size,” said Boulia. “It’s about how much data can actually be moved through the whole system.”
At present, Intel clearly dominates both categories in terms of overall adoption. But IBM contends that thanks to the POWER8 alliance both IBM and its infrastructure partners will be able to compete with Intel-based systems on all fronts for years to come.
6:35p
CommScope’s iTRACS
iTRACS, originally founded as an independent company, has been focused on infrastructure management since the 1980s. With the launch of the iTRACS DCIM platform in October 2009, iTRACS decided to focus specifically on the data center market.
Since its launch, iTRACS has delivered multiple updates and new releases of the original platform. iTRACS DCIM 4.1, with the browser-based interactive SimpleView visual interface, is due for release in June 2015. CommScope acquired Tempe, Arizona-based iTRACS in March 2013; the unit’s business and R&D offices remain in Tempe. Today, iTRACS is a standalone business unit that is 100 percent focused on DCIM. CommScope, Inc. is headquartered in North Carolina.
Currently, iTRACS is a DCIM suite that helps organizations optimize the capacity, availability, and efficiency of their data center physical infrastructure. There are three differentiators around which the technology is built: the iTRACS DCIM platform is designed to be holistic, open, and interconnected. This means that iTRACS aggregates, analyzes, visualizes, and presents all of the information a data center administrator would need to effectively manage the entire data center ecosystem across both IT and facilities.
The functionality of the iTRACS DCIM platform encompasses seven areas of DCIM: Asset/Space Management, Operations Management (monitoring, alarming, etc.), Change Management (workflow), Resource/Power Management, Connectivity Management (network cabling and patching), Availability Management, and Capacity Management/Planning.
“The data center is one of the most complex, interconnected entities on earth,” said William Bloomstein, director of strategic solutions marketing for CommScope. “No asset is an island. All assets are interconnected and interdependent. You cannot modify one without creating impacts – intended and unintended – on others. Smart decisions demand DCIM solutions like iTRACS that enable you to visualize and understand, in rich granular detail, how your space, power, assets, network connectivity, and cooling resources are interconnected and interdependent. One of iTRACS’ unique capabilities is the ability to make this interconnectivity visually understandable and operationally manageable. This is a topic that comes up repeatedly as decision-makers compare and contrast DCIM vendors in an attempt to decipher who comprehends connectivity and who does not.”
Many of the world’s largest organizations and Fortune 500 companies across a wide range of verticals – finance, Internet, media, service providers, transportation, and healthcare – currently utilize the iTRACS DCIM platform.
iTRACS CPIM DCIM Solution Details
Converged Physical Infrastructure Management (CPIM) is an innovative product from iTRACS that was introduced in 2009, primarily targeting large enterprises’ DCIM needs.
Asset Management
iTRACS CPIM stores asset information in a single large repository database. It records asset information like asset model, dimensions, weight, age, purchasing details, physical location, network and power connectivity and other related details. Auto discovery of assets is available. The application offers a single pane view into the entire asset base. With this holistic view, it is easy to locate an asset and analyze its performance. Additions and changes to assets are dynamically monitored and recorded. Assets can be manually entered or can be imported from a third party database.
Power Management
iTRACS CPIM collects power consumption data directly from endpoints like PDUs, UPSs, other metering devices and other assets that can communicate over IP networks using basic protocols. Performance benchmarks are stored when directly monitored data is not available. It monitors point-to-point power consumption as well as the entire power chain of the infrastructure. You can measure and analyze the power path not only within the data center but also across the entire building and facility infrastructure. This information is managed in the CPIM environment and presented to you in a holistic view using 3D visualization.
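One common flavor of those “basic protocols” is SNMP. Below is a hedged sketch of polling a PDU for a power reading with the pysnmp library; the host and OID are placeholders, since a real deployment would use the PDU vendor’s documented MIB rather than anything shown here.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Placeholder address and OID -- substitute the vendor's real MIB entry.
PDU_HOST = "pdu-a1.example.com"
POWER_OID = "1.3.6.1.4.1.99999.1.2.3"

error_indication, error_status, _, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),          # SNMPv2c community
        UdpTransportTarget((PDU_HOST, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(POWER_OID)),
    )
)

if error_indication or error_status:
    print("poll failed:", error_indication or error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")
```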
Thermal Management
iTRACS offers PowerEye, a powerful end-to-end energy efficiency strategy integrated into the CPIM solution. It provides visibility into the total energy consumption of the infrastructure. Deep-dive reporting lets you know how much power is used across the entire chain. Using this data, you can calculate the power utilization efficiency across the infrastructure or for a specific group or zone and take steps necessary to optimize power resources. Real-time notifications are delivered when power thresholds are exceeded.
Space Management
CPIM creates a holistic view of the graphical representation of the entire infrastructure for a better understanding of where each asset is located. This information enables you to plan and make changes to the infrastructure without disrupting services. While adding or relocating a new asset, you can check the best location for that asset. For any planned change, the impact on resources can be predicted. Deployments can be made faster. Forecasting, planning and the allocation of power, space and resources can be done effectively.
Visual Modeling
CPIM provides a variety of capabilities to manage the life cycle of devices in a context-rich 3D model. In fact, 3D visualization is the key aspect of iTRACS CPIM solutions. The application collects asset information, power consumption details and energy efficiency. This data feeds directly into iTRACS’ interactive 3D visualizations. A rich graphical representation of the entire infrastructure enables you to conveniently monitor and manage each asset in the infrastructure. You can either use the native graphics platform or import designs from third party software like AutoCAD.
Hypothetical Modeling
iTRACS offers Future View for efficient hypothetical modeling. This module creates a visual model of the current state of the infrastructure and compares it with past and future states. This comparison enables you to predict the performance of each device or element within that entity. Moreover, it is easy to understand the effect on power and space arising from the addition or relocation of assets within the infrastructure. These “what-if” scenarios enable customers to gain insights into behavior, performance and capacity requirements before new deployments are made.
Access & Control
CPIM uses an open system architecture, which makes it easy to integrate with third-party management solutions like BMS systems, security suites and CMDBs using standard protocols. Built on the Open Exchange Framework, CPIM easily integrates with VMware, RF Code and HP Systems Insight Manager. Any device that has an IP address can be accessed and monitored using CPIM. Taking advantage of the DCIM browser and iPad interface, users can access CPIM anytime, anywhere.
Reporting & Alarming
CPIM offers a wide variety of templates to quickly create different reports. The interface is easy to use. You can generate different reports for different users, groups or zones. You can create reports to understand total energy consumption, thermal conditions, network utilization, the status of assets, power distribution mapping, etc. The platform automatically initiates processes like email alerts or remote script deployment when required.
9:30p

ScaleFT Wants To Help Ops Teams Tackle Complexity Of Running On Public Clouds

A new startup called ScaleFT, or “Scale For Teams,” is tackling the complexity operations teams face when using public cloud infrastructure. ScaleFT’s tools will be available commercially to all operations (ops) teams, and strategic investor Rackspace may use ScaleFT as it builds out a services business atop big public clouds.
ScaleFT’s tools are agnostic and meant to help operations teams, often overwhelmed by complexity, run on AWS, Google Compute Engine, Azure and Rackspace OpenStack safely and securely.
The problem, as ScaleFT co-founder Jason Luce describes it: “Operations sysadmins and DevOps folks at every company besides Google, Etsy and Facebook (as a few examples) do not have modern tools to deal with the massive infrastructure they are delivering.”
Luce is the former vice president of finance at Rackspace and was joined by several others from the company, including two former Cloudkick employees. Rackspace acquired Cloudkick in 2010 to jumpstart its managed cloud offerings.
The ScaleFT team includes Paul Querna, Rackspace director of corporate strategy and former chief architect at Rackspace-acquired Cloudkick; Russell Haering, an engineering manager who also came aboard in the Cloudkick acquisition; and Robert Chiniquy, a former Rackspace engineering manager.
“This is a group of capable people that have experienced the same pain, so we’re building a platform focusing on ops teams,” said Luce, adding that there is a big need for these agnostic tools. “If you’re running on just Amazon, you can’t use only their tools. And you won’t use two sets of tools. We’re going to ‘bring them down the hill’ so everyone can use them.”
ScaleFT will provide three core products, all built with delivery flexibility so they can be consumed as multi-tenant Software-as-a-Service, single tenant cloud deployment, or on-premises.
Scale Access is the company’s first product, which will tackle authentication practices. Scale Access makes secure workflows easy while ensuring compatibility with all current tools, according to ScaleFT. Luce said the available authentication technologies like RSA and SSH are too complex and that many businesses will require superior security practices.
The other two products have not yet been fully revealed. However, Luce said the second will analyze what everybody does on the server, while the third product, Secrets, leaves keys on servers for specific individuals.
Rackspace’s strategic investment in ScaleFT makes sense given the company’s evolving strategy over the last 18 months or so. The company placed a big bet on OpenStack, whereas it had historically been cloud-agnostic. That bet paid off in some ways given OpenStack’s rise, but it shifted the focus away from a potentially very lucrative multi-cloud services business.
“We needed to remain independent,” said Luce.
Luce said that OpenStack adoption is currently heaviest among bleeding-edge technology companies that often prefer the do-it-yourself (DIY) approach to handing everything over to a service provider. Rackspace did land significant big enterprise deals, but the company is choosing not to limit itself and to become agnostic once again.
“We’re building a platform designed to help Rackspace with this initiative, which is why Rackspace is a strategic investor in our seed round,” said Luce.
10:44p
Optimum Path’s Visual Cloud Manager

Based out of Tampa, FL, Optimum Path develops software for advanced visualization and planning of IT physical and logical infrastructure for cloud-based applications and services. Its Visual Data Center software has been in production since 2007, working to simplify management for owners and operators of many of the largest data centers in the world.
Founded by Jim Yuan (CEO/CTO) and Steven Webel (COO), Optimum Path solutions are designed to deliver productivity enhancements as well as cost savings. Its enterprise and SaaS applications visualize the relationships between both physical and logical connections in converged and complex environments including data centers, colocation facilities, buildings and enterprises. Furthermore, its brands – Visual Data Center, Visual BMS and Visual Cloud Manager – provide the kind of intelligence required to isolate impacts, model changes, predict capacity limits, improve uptime, reduce time-to-repair and decrease operational expenses.

As a global organization, some of Optimum Path’s reference customers include:
- Raytheon
- IBM
- BBVA
- Siemens
- Cbeyond
- Shell
- CSX
Its latest product, Visual Cloud Manager, was developed in 2014 and is a highly visual, centralized management portal that enables end-to-end management of converged infrastructure. Visual Cloud Manager unifies virtual infrastructure in a single pane of glass, with spatial 3D representation of the logical and physical networks, devices, systems, facilities, and access control in highly dynamic IT environments. This new cloud-delivered SaaS platform provides rich visualization and analytics that are accessible from almost anywhere. The advanced rights-management framework is ideal for multi-tenant environments where delegated access-level control is required, and for environments where operators wish to offer new and differentiating services to customers.
“Our technology is a robust, full set of DCIM features capable of helping manage and monitor IT, facility and environmental aspects of the data center, including a full workflow management module,” said Steven Webel, COO of Optimum Path. “The core differentiator for us is the fact that we are a 100 percent software development company. For our solutions, we are able to work with other applications via integration or customize our features to meet customer needs. Finally, our platforms serve as an OEM solution in the data center space for multiple global data center software solution providers.”
10:44p
Raritan’s DCIM Monitoring and Operations

Raritan delivers power management solutions, DCIM software, and KVM-over-IP for data centers of all sizes. Headquartered in Somerset, NJ, Raritan’s products are deployed at 50,000 locations in more than 76 countries worldwide. Raritan’s hardware and software solutions help increase energy efficiency, improve reliability, and raise productivity.
Raritan’s DCIM journey begins with its knowledge of this space. The company has been working with data centers for three decades and has a presence in data center and lab locations across all industries and of all sizes: from small to hyperscale data centers; data centers that are geographically dispersed; and data centers pushing the envelope with new ideas and architectures. As a result, Raritan has built the core competencies needed to develop a DCIM solution that is easy to deploy, use, and scale within heterogeneous environments.
“DCIM is a journey, not a sprint,” said Herman Chan, senior vice president and general manager, DCIM. “In a confusing early market, our approach remains pragmatic and purposeful with a single goal: customer success. The winners will be those who can execute.”
Another principle behind Raritan’s DCIM is that it does not try to fill every checkbox on an RFP. Features that are unique to its offering, such as Intelligent Asset Search or Psychrometric Charts with Thermal Envelopes, were added because customers had a real need for them. And, since Raritan’s software is widely used, the company is able to gain even more customer insights and drive successful deployments.
Founded by Ching-I Hsu, Raritan’s current chairman and CEO, the company introduced its first DCIM solution (DCIM Monitoring) in September 2008. Several months later, Raritan introduced DCIM Operations. DCIM Monitoring and Operations are designed to work together or independently.
10:45p

Nlyte Software’s 7 Suite

Based in Silicon Valley (San Mateo) with offices across North America and Europe, Nlyte Software was formed in 2004, originally as Global DataCenter Management (GDCM™).
Today, more than 10 years later, Nlyte Software is delivering its 7th generation of products to hundreds of customers worldwide, based on market and customer feedback. The company directly focuses on clients, ensuring they derive maximum business value from their DCIM deployments. Nlyte works to ensure that their solutions are tightly integrated into existing business processes, IT systems and infrastructure.

Today, two factors help define Nlyte’s DCIM solution:
- A robust set of application functionality that supports the entire DCIM lifecycle. The platform is scalable to support millions of assets across up to 150k racks. Also, it’s built on a modern web-based, service-oriented architecture that can support dozens of data centers from a single centrally located instance. Finally, Nlyte provides many out-of-the-box connectors to leading virtual, ITSM and environmental monitoring systems; but the architecture also allows for easy integration into other systems.
- Nlyte is currently on the 7th generation of its 11-year-old technology (stay tuned for the 8th!). Furthermore, founders Rob Neave and Lee Moreton are still with the company and helping develop the platform.
As a growing organization, Nlyte continues to evolve its DCIM solution with the demands of the market. Some of the company’s customers include BMC, Canon, TransUnion, HP and Suncorp.
The Nlyte data center infrastructure management (DCIM) solution automates the management of processes, policies and dependencies that surround data center infrastructure.
“We aim to simplify the entire data center management process through a next-generation set of tools,” said Doug Sabella, CEO of Nlyte. “By abstracting the logical and physical components of the data center, we incorporate true intelligence around functions like capacity planning, asset controls, real-time monitoring, reporting, and even workflow management.”
Nlyte Suite 7 DCIM Solution Details
Formerly called Global DataCenter Management, Nlyte is a leading provider of DCIM solutions. Founded in 2003, the company has quickly transformed itself into a key player in the DCIM segment. Nlyte 7 Suite is a purpose-driven DCIM solution from Nlyte that is scalable, fully extensible and customizable. The industry patent asset allocation, along with contextual content repository, is the strength of the Nlyte 7 Suite. Here is a comprehensive evaluation of the Nlyte 7 Suite:
Asset Management
The Nlyte DataCenter allows you to holistically manage your entire infrastructure. Nlyte uses a central repository of assets with key attributes while maintaining contextual relationships between assets and asset parameters.Working in conjunction with the Nlyte Materials Catalog, it records asset attributes like size, weight, power and connections, and automatically updates the database. It effectively manages physical, logical and virtual assets along with the asset location. Assets can be grouped by cage, pod, logical or user-defined categories. It offers automatic asset discovery and reconciliation.
Power Management
The Nlyte Connection Manager enables you to graphically view power connections within the infrastructure at any level. You can check the entire power path, including connection points and connect information, right from the supplier’s sub-station feed to the data center through UPS, PDUs, generators and power strips at any level. You can assess the asset properties, connection type, connection details, destination details and source port information. The graphical view can be filtered based on fiber, power or network. The circuit endpoint allows for the viewing of asset connectivity. This connectivity data can be exported into Excel spreadsheets.
Thermal Management
The Nlyte DataCenter enables you to map the entire power path from the supplier’s sub-station to the data center and on to each connected device. The suite monitors the power consumption of every device, including air conditioners and IT and non-IT equipment. It provides clear visibility into power usage effectiveness (PUE) by calculating the IT equipment power (ITEP) and the total facility power (TFP). Based on these values, automatic provisioning of power devices can be achieved.
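For reference, the PUE arithmetic mentioned above is straightforward: PUE is total facility power divided by IT equipment power, and DCIE is its reciprocal. The worked example below uses made-up kilowatt figures and is not tied to Nlyte’s implementation.

```python
# Minimal worked example of the standard PUE calculation:
# PUE = total facility power (TFP) / IT equipment power (ITEP).
# The kW figures below are invented for illustration.
total_facility_power_kw = 1800.0   # everything: IT, cooling, lighting, losses
it_equipment_power_kw = 1200.0     # servers, storage, network gear

pue = total_facility_power_kw / it_equipment_power_kw
dcie = 1.0 / pue  # DCIE is simply the reciprocal, expressed as a fraction

print(f"PUE = {pue:.2f}, DCIE = {dcie:.0%}")  # PUE = 1.50, DCIE = 67%
```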
Space Management
The Nlyte Floor Planner provides a CAD-style graphical user interface to visualize the physical layout of the racks, rooms and floors in the data center. You can easily track the asset chain and make changes accordingly. Power provision, airflow, heat control and use of space can be effectively managed using this application. Color-coded designations are given to different performance thresholds that apply to each device. A layered representation of the entire asset infrastructure can be viewed. At the same time, categorized views of cabinets, racks, cage representations, floor standing servers and power infrastructure can be observed.
Visual Modeling
Nlyte Dashboard & Reporting provides a number of pre-defined and user-defined dashboards to visualize critical operational metrics. It offers visual representation of assets in the form of charts and graphs. The built-in analytics engine enables you to create real-time analytics of heat and power metrics in addition to a detailed view of asset inventory. With Nlyte visual modeling, spreadsheets or Visio diagrams are not necessary.
Hypothetical Modeling
Nlyte Predict enables you to envisage the future state of your data center’s power, cooling, space and networking capabilities based on real-time analytics and historical usage metrics. Using these “what-if” models, you can assess the effect of any change made to the infrastructure before actually making the change. When installing new assets, you can determine whether sufficient space, power and networking options are available. You can investigate the benefits of upgrading your servers, and you can create forecasted projects and envisage actual costs.
Access & Control
The Nlyte Integrator is a NgaugeAPI-based connector that enables you to integrate DCIM solutions with a number of third-party applications or existing systems like VMware, HP, RF Code, Server Tech and CMDBs. It is based on Intel’s Datacenter Management technology. As a purpose-built web service, Nlyte Integrator is easy to integrate with existing systems or third-party solutions.
Reporting & Alarming
The Nlyte Dashboard & Reporting feature allows you to quickly generate a wide variety of reports to obtain critical operational metrics. It comes with a built-in reporting and analytics engine. You can use templates or customize your reports for the IT, finance and executive departments. Reports can be automatically generated and delivered. Report delivery is quick and consistent.
| | 10:45p |
Modius Headquartered in San Francisco, CA, and with offices throughout the U.S., Modius, Inc. is a provider of data center infrastructure management (DCIM) software for optimizing the infrastructure and operations of critical facilities. Founded by Craig Compiano in 2004 (with the product, OpenData, in production since 2006), Modius has customer deployments across the Americas and Asia. Modius’ flagship offering, OpenData, is a software application that actively integrates power and environmental intelligence with other management applications.

“Our secret sauce is data collection and Big Data analytics,” says Mark E. Stumm, vice president of marketing and product strategy. “We are well positioned for the Internet of Things (IoT) in that we collect, analyze and act on data produced by every device in the data center (networked and serial). We use this real-time data collection to make smarter DCIM decisions than our competitors that are using nameplate values and theoretical models to manage their infrastructure.”
Having been in the market for a long period, Modius has had time to craft its DCIM solution to evolve with the modern data center. Customers range in size, span the globe, and include: Charles Schwab, USC, Lawrence Livermore Labs, XIOLINK, Cologix, ISWest, Virginia Tech, Plantronix, and Qualcomm.
“We have seen a significant increase in companies looking to establish DCIM capabilities in their data centers over the past 12 months,” said Craig Compiano, CEO of Modius. “Many factors including a stronger economy, rising energy costs, and a maturing DCIM market have fueled this spike in DCIM adoption. Companies have previously been challenged to cost effectively improve operational efficiencies in their facilities, but now are realizing that DCIM products like Modius OpenData can provide Real-time Operational Intelligence (RtOI) to drive operational improvements by leveraging data from the Internet of Things (IoT).”
At Modius, the idea is to provide the holistic monitoring and real-time decision support that organizations need to better manage availability, capacity, and efficiency across their entire data center operations. Its DCIM solutions aim to provide operational intelligence for the extended power and cooling chain – from the grid to the chassis. OpenData software solutions by Modius are completely vendor neutral and highly scalable with no coding required to add new devices across multiple sites. Finally, OpenData’s ease-of-use and flexibility are taking the data center management process to a new level where users can easily configure their own dashboards and reports with the power of advanced OpenData analytics at their fingertips.
Modius OpenData DCIM Solution Details
Modius Inc. is a leading provider of DCIM solutions. Founded in 2004 in San Francisco, California, the company offers DCIM solutions to simplify complex and diverse IT facilities’ management systems for improved performance efficiency. The company has customer deployments in the U.S. and Asia. OpenData is powerful DCIM software offered by Modius that integrates power and environmental intelligence into management applications. Here is a comprehensive evaluation of OpenData.
Asset Management
OpenData provides accurate accounting of every asset of a data center. It uses a single database to accurately house all physical asset data, including power infrastructure, data center layout, IT & telecom equipment, end-to-end network cables, power cables and HVAC devices across the infrastructure and multiple sites. The asset information can be visualized to understand the relationship between each asset. However, it doesn’t automatically discover assets.
Power Management
OpenData provides real-time monitoring of a data center’s energy consumption to help you understand how it is affected by the physical environment. It provides visualized information related to the PUE and DCIE values. You can check the PUE value for a mixed-use or a stand-alone data center or calculate the partial PUE of certain groups or zones. Using various protocols, OpenData collects energy and performance data of all cooling and power equipment. With critical power information and power chain dependencies on hand, you can quickly resolve power anomalies and improve the system’s efficiency.
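To illustrate the partial-PUE idea with invented numbers (a generic sketch, not OpenData’s internal calculation): a zone’s partial PUE divides the power consumed inside the zone boundary, IT load plus the overhead attributed to it, by the zone’s IT load alone.

```python
# Hypothetical partial PUE (pPUE) per zone: (IT load + attributed cooling
# and distribution overhead) / IT load. All figures are invented.
zones = {
    "zone-a": {"it_kw": 400.0, "overhead_kw": 140.0},
    "zone-b": {"it_kw": 250.0, "overhead_kw": 125.0},
}

for name, z in zones.items():
    ppue = (z["it_kw"] + z["overhead_kw"]) / z["it_kw"]
    print(f"{name}: pPUE = {ppue:.2f}")
# zone-a: pPUE = 1.35
# zone-b: pPUE = 1.50
```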
Thermal Management
OpenData provides visibility into system and device-level performance factors that affect environmental conditions. The application continuously measures all data points from environmental sensors. It lets you understand how humidity and temperature values go up and down with changing climatic conditions. The real-time view in the dashboard helps you to monitor the device performance together with environmental conditions throughout the day. PUE and DCIE values can be effectively monitored and controlled. With OpenData’s operational intelligence, you can gain deeper insights into your facility’s infrastructure and make changes accordingly.
Space Management
OpenData monitors the entire energy profile and IT equipment utilization to help you manage and forecast implementation of new services, applications and throughput. It captures performance data of the cooling and power chain from the chassis to the grid to provide realistic resource consumption details. It provides a visual representation of the entire infrastructure to let you conveniently monitor and control assets.
Visual Modeling
OpenData visualizes real-time monitoring and data-driven analysis using rich graphics for improved resource management. It leverages data from as many sources as possible and provides a real-time dashboard for reporting and analytical purposes. The intuitive, user-centric interface requires no technical knowledge to use. You can quickly learn to adjust thresholds, add or remove devices, or view daily performance trends of devices across the infrastructure. However, there is no native graphic-creation platform; visualizations are created on a third-party platform and imported into the system.
Hypothetical Modeling
OpenData enables you to effectively manage and forecast energy and data center performance requirements. By understanding the energy profile of the data center and the extent of IT equipment utilization over time, you can safely determine where to add new equipment without overloading the facility. It is easy to use the dashboard to analyze energy usage and performance trends across the infrastructure.
Access & Control
OpenData supports basic protocols. It communicates with other network equipment using SNMP, BACnet, Modbus and wireless protocols. It can capture data from any device regardless of the vendor. It can collect data from building and environmental management systems. It easily integrates with other infrastructure management solutions including RFCode, SAP Business Objects and Schneider StruxureWare Operations. A serial-to-network gateway is used to communicate with devices that are not accessible on the network. Only IP-based remote access is natively provided.
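For readers unfamiliar with this kind of collection, the snippet below shows what a generic SNMP poll looks like using the open source pysnmp library. The host, community string and OID are placeholders, and this is not OpenData’s code.

```python
# Generic SNMP v2c poll with pysnmp; host, community and OID are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),               # SNMP v2c
    UdpTransportTarget(("pdu.example.net", 161)),     # hypothetical device
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),  # sysUpTime
))

if error_indication or error_status:
    print("poll failed:", error_indication or error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")
```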
Reporting & Alarming
OpenData provides device-level analytics and reporting. It offers a broad range of templates to quickly generate reports. You can use pre-defined report templates or customize them. Reports can be generated for a data center’s operational status, efficiency and capacity at device-level or network-level. OpenData takes a unified approach to alarm management by tracking the performance of all equipment against a single threshold policy. Alarms are sent using a centralized notification engine.
| | 10:46p |
Geist Geist is a provider of data center power strips, monitoring equipment, cabinet containment and in-rack cooling, and DCIM systems. The roots of this platform trace back several years, with the actual product launched in 2007 by a group at RLE Technologies, which was later acquired by Geist in 2009. Today, the company is based out of Lincoln, NE, and supports users with offices and manufacturing plants all over the world.
Geist DCIM appeals to both the facility and IT sides of the DCIM market. To work with a diverse set of data center demands, its DCIM solutions offer real-time monitoring, alarming, visualization, and device integration. Geist’s DCIM platform has the capability to communicate with all industry standard protocols and has a services team to ensure all connected devices are integrated properly.
With the addition of Environet Asset in March of this year, Geist DCIM now has a full asset management solution that lends itself to better capacity planning, space management, reporting, and workflow management.

“With our extensive line of DCIM products, we are able to deliver a full solution that covers both IT asset management and facility real-time monitoring,” says Matt Lane, president of Geist Global’s DCIM Division. “We believe that these products position Geist as a leading DCIM solution provider in the market.”
Currently, Geist has more than 250 individual clients that use its DCIM products. These enterprise clients spanning the globe include:
- Lightbound
- Hosting.com
- BCD Travel
- GameStop
- Expedient
- Cosentry
Today, with 450-plus instances of its DCIM software installed on five continents monitoring over 500,000 equipment cabinets in 1,500 environments, Geist creates comprehensive data aggregation systems that automate data center reporting and provide critical infrastructure management tools. Its two data center monitoring and management systems, Environet and Racknet, fit all needs from power strip aggregation to enterprise level DCIM systems.
Geist DCIM Solution Details
Geist is a global provider of data center infrastructure management solutions. The company was founded in 1948. With its global headquarters located in Lincoln, NE, USA, and offices in Europe and Asia-Pacific, Geist provides DCIM solutions to businesses across the world. Geist offers an intelligent control platform that integrates business procedures across the data center and provides a single point of management. Here is a comprehensive evaluation of Geist DCIM solutions.
Asset Management
The Geist DCIM platform Environet simplifies the process of visualization and management of both the logical and physical data center infrastructure, providing users with the information needed to effectively manage assets, networks, power, and space. The Environet Asset solution understands where available capacity exists and utilizes work orders to effectively manage the entire asset lifecycle. Dashboards and reporting provide real-time visuals for what is happening within the data center and facility infrastructure.
Power Management
The Geist DCIM platform enables you to monitor and manage power consumption across space, power and cooling by reviewing real-time data on mechanical, electrical and cooling usage. By visually mapping this data, you can quickly determine the total power usage and the power capacity available at any point in time. Geist’s history in the power strip business gives it a great degree of insight and technical competency in power monitoring.
Thermal Management
The Geist DCIM platform comes with a library of predefined functions to control valves and other equipment based on temperature, humidity and airflow values. When the predefined thresholds are met, the temperature can be automatically adjusted. You can analyze key metrics of PUE trending, PUE comparative and power metrics.
Space Management
This versatile new software, Environet Asset, identifies available capacity and resourcefully employs work orders to manage the entire asset lifecycle. The enterprise visualizations provide easy navigation throughout the data center, from the 3D site level down to each individual connection. With convenient drag-and-drop functionality, connection mapping is simple and easy to configure. Dashboards and reporting provide real-time visuals within the data center allowing users to make intelligent, proactive decisions within their data center infrastructure.
Visual Modeling
The new Environet Asset module can be combined with Geist DCIM’s existing Environet Facility to provide a powerful data center and facility infrastructure management solution. Together, the software provides specialized views of all aspects of the data center environment and capacities including real-time power, cooling and environmentals, all coupled with full lifecycle management of IT assets.
Hypothetical Modeling (9.0/10)
Geist’s Environet Facility’s graphically rich interface and intuitive design are qualities that ease monitoring of data center and facility equipment. Real-time monitoring brings the activity of a single facility, or an enterprise of data centers, into a single graphical view to immediately alert users of potential threats. It can be customized to integrate many different types of devices using industry standard protocols.
Access & Control
The DCIM module Environet Facility simplifies monitoring by integrating multiple communication protocols into one complete system. It provides the data granularity for efficient management of both the facility and the data center infrastructure. Environet Facility transforms complexity into simplicity with unprecedented visibility and management over environmentals, power consumption and cooling.
Reporting & Alarming
The DCIM solution is supplied as a stand-alone solution preinstalled on a server. All rack and network components are automatically identified by SNMP, making system setup quick and easy. Because the solution is vendor-neutral, data center hardware from virtually any manufacturer can be integrated. A convenient, web-based user interface provides an overview of the operating states and environmental parameters of the devices integrated into the Racknet DCIM solution. Asset management and reporting on power consumption, power usage effectiveness and temperatures are performed, for example, for a complete rack row, an individual rack or an IT device. All metrics and reports are clearly displayed and, depending on needs, shown in a 2D or 3D view, on a dashboard or in table format. The Racknet system automatically triggers a warning via SNMP if critical, preset limits for power and environmental parameters are exceeded.
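The threshold-alarm pattern described here is easy to picture in miniature. The following generic sketch uses invented limits and readings and is not Racknet’s implementation; it shows only the core comparison that precedes a trap being sent.

```python
# Generic threshold-alarm pattern: compare live readings against preset
# limits and emit a warning when a limit is exceeded. Limits, readings
# and the metric names below are placeholders.
LIMITS = {"rack_inlet_temp_c": 27.0, "rack_power_kw": 5.5}

def check(readings: dict[str, float]) -> list[str]:
    """Return alarm messages for any reading over its preset limit."""
    return [
        f"ALARM: {metric} = {value} exceeds limit {LIMITS[metric]}"
        for metric, value in readings.items()
        if metric in LIMITS and value > LIMITS[metric]
    ]

for alarm in check({"rack_inlet_temp_c": 29.3, "rack_power_kw": 4.8}):
    print(alarm)  # in Racknet's case, this would go out as an SNMP trap
```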
| | 10:46p |
FNT While FNT’s headquarters is located in Ellwangen, Germany, it operates internationally with subsidiaries in the United States (Parsippany, New Jersey), Singapore, Dubai (UAE) and Russia (Moscow). FNT cooperates in close partnerships with well-known IT service providers and system integrators worldwide. Its standard software, FNT Command, has been used worldwide since 1994 as a DCIM and IT management application by more than 25,000 users at communications service providers, enterprises, data centers and governmental organizations.
In looking at FNT’s DCIM platform, its products are user-friendly, web-based, and able to support multiple-tenant and multilingual environments. FNT’s software has been used by large enterprises and data centers for more than 20 years and was originally developed as a solution for IT infrastructure and network management for IT service providers and cable network operators.
More than half of Germany’s DAX 30 companies use FNT software to plan, document and manage their infrastructure. Because of these large customers, FNT developed a sophisticated data model in one central database that integrates all CIs, IT assets and data center assets, such as locations, buildings, rooms, power and cooling assets, physical network, IT assets, racks, servers, virtualized servers, applications and services. When dealing with big companies, it is important to have open connectivity to all other IT systems that are in use (such as auto-discovery tools, IP management tools, EAM tools etc.). Therefore, FNT provides an open connectivity layer and integrated ETL software for maintenance of all interfaces that are used. Some customers have more than 200 interfaces exchanging IT infrastructure data with FNT Command from other software tools within their IT landscape.
FNT’s DCIM-specific extension provides capabilities such as monitoring, alarming and dashboarding; integrated asset and lifecycle management; and IMAC workflow. Capacity management as well as power and cooling management were introduced in June 2010 and have been enhanced continuously since the first release. Today, FNT provides a fully integrated DCIM solution with many DCSO capabilities.
Founded by Nikolaus Albrecht and Horst Haag, who still lead the owner-run company today as managing directors, FNT is a pure software vendor with 240 employees and a great innovation history. All products have been developed in close cooperation with customers and based on market needs.
Most of FNT’s customers have to manage complex infrastructures and IT landscapes across a range of industries, including the automotive field (BMW, Audi, Volkswagen, Daimler, Porsche etc.); 8 of Germany’s 10 largest banks (Commerzbank, Deutsche Bank, German Stock Exchange, Finanzinformatik etc.) and many financial institutions; and 7 of Germany’s 10 largest airports (Frankfurt Airport, Munich Airport etc.).
Worldwide, more than 220,000 data center racks are managed in data centers and IT Infrastructures using FNT software.
“Data center managers and IT managers who want to be prepared for the future must be able to trust in the accurate and universal documentation of their IT landscape and data center infrastructure,” said Nikolaus Albrecht, CEO of FNT. “Based on our comprehensive data model and the unified management approach, including all areas of modern IT landscapes, enterprises can be confident that software from FNT is providing the right tools and be prepared for the upcoming digital transformation.”
 | | 10:47p |
FieldView 2015 FieldView’s DCIM solutions deliver innovative data center monitoring and management tools. Its newly introduced (March 2015) FieldView 2015 DCIM is designed to improve data center resilience by capturing, correlating and analyzing massive amounts of live data, enabling “what if?” simulation of potential failures, contemplated changes and planned maintenance downtime.
FieldView DCIM’s customizable dashboards enable vivid presentation of operational patterns and trends, while APIs and a data warehouse enable simple and efficient interconnection with other DCIM tools, ITSM tools, orchestrators, control systems, dashboards, “big data” clusters and other applications. The latest platform, FieldView 2015, adds:
- A “single-page,” intuitive user interface
- New dashboards
- Enhancements for alarms
- Enhanced analytics capabilities
- Internationalization
- Support for mobile browsers
The fully-customizable dashboard allows for a single pane, dynamic view summary of all the monitored metrics.

Based out of Edison, New Jersey, FieldView is engineered to handle the largest data center environments in the world. Large corporate data centers and colocation providers depend on FieldView DCIM solutions to handle the sheer volume of data generated by their facilities. FieldView DCIM helps data centers operate at peak efficiency by enabling operators to identify power and cooling improvements that lower PUE, energy bills and operational costs, increase reliability, and maximize the use of capital expenditures to optimize space, power, cooling and cabling capacity.
Founded by Fred Dirla, the company has had its FieldView DCIM software on the market since 2006. FieldView Solutions was spun off as an independent entity in 2009 and is now the largest independent software developer in the DCIM monitoring space. FieldView DCIM is used on six continents, monitoring 2.5 gigawatts of data center infrastructure power worldwide. Its solution was built specifically to meet the unique needs of multi-tenant data centers, or colocation centers.
“FieldView 2015 is the first in a next generation of DCIM monitoring tools for all types of data center operations,” said Sev Onyshkevych, CMO of FieldView Solutions. “It’s a welcome departure from the inefficient or inadequate, custom-built software or Excel and Visio tools typically used to monitor mission-critical facilities. Through an automated process of gathering data from facilities infrastructure and IT devices, FieldView 2015 is able to maximize efficiency while simultaneously reducing operational costs.”
What helps set FieldView 2015 apart is its ability to bridge the operational data needs of IT and facilities managers, as well as the executive suite, by assembling the data required by all functions, and to simplify usage by reducing the number of clicks required to access critical or commonly used information.
| | 10:47p |
Cormant-CS Cormant has delivered infrastructure management solutions for more than 11 years from its global headquarters in San Luis Obispo, CA. The company also has offices or subsidiaries in the UK, Philippines and Australia, as well as partners all over the world.
Cormant-CS is an easy-to-use DCIM platform which works to fulfill any organization’s needs, including matching business processes. Cormant-CS couples mobility with the flexibility of the product to help data centers, campuses, enterprise buildings and more. Its DCIM solution can scale all functionality from a single site to hundreds of sites spread across the globe.
Both Cormant’s licensing model and software allow organizations, small or large, to streamline IT management and processes. Cormant-CS does provide entry-level pricing, with an asset-based licensing model that allows organizations to purchase Cormant-CS and then grow the scope to fit business needs. All of these core aspects of Cormant have contributed to its high customer retention rate since its first product was launched in 2003.
Paul Goodison, the current CEO and one of the founders of Cormant, has more than two decades of experience managing major IT projects across the globe. Realizing that portability is the critical key to solving the challenge of IT infrastructure management, Goodison and two of his colleagues created Cormant, which subsequently developed Cormant-CS (formerly CableSolve). Cormant is focused on working with customers to improve their management, control and reporting processes via a single-pane-of-glass view to where the physical and logical layers of IT infrastructure meet.
Cormant focuses on data centers and organizations of all sizes spanning the globe. Some of their major customers include Barclays Bank, NATO, McKesson, the U.S. Senate, and AIG.
“Having founded a company that has been selling infrastructure management solutions since 2003 and having seen the growing acceptance of Data Center Infrastructure Management and Cormant’s part in that market, it is great to see some of our earliest, 2003 – 2004 visionary customers still benefiting from our solution today,” said Cormant CEO Paul Goodison. “We look forward to the next 3 – 4 years with great excitement as some of the FUD in the DCIM space recedes and Cormant continues to introduce new ways for our ever-growing customer-base to achieve true business value from the software.”

Cormant-CS DCIM Solutions
Cormant Inc. is a leading provider of communication, information, and technology infrastructure management solutions to businesses of all sizes. Headquartered in San Luis Obispo, CA, with a sales office in the UK and distribution partners throughout Southeast Asia, UK, New Zealand, Canada, Australia, Ireland, Hong Kong and China, Cormant provides quality DCIM solutions to companies across the globe.
Cormant-CS, formerly known as CableSolve, is the innovative DCIM solution offered by Cormant. Here are some of the features offered by Cormant-CS.
Asset Management (10/10)
Cormant-CS uses a proven methodology of combining server, desktop, handheld, web and API functionality to control and manage all aspects of the physical layer in an infrastructure. It records information related to every asset, including the purchase, location, ownership, support, and configuration details. Multiple sources of information are standardized for better management of data. The application stores the asset information in a single structured database, which can be viewed from across the infrastructure and makes searching for any asset quick and easy. Adding information to the database can be done using a desktop client, a handheld device, SNMP discovery, XML API, or imported spreadsheets. Using a single dashboard, it is possible to effectively manage all assets across the infrastructure.
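As a rough illustration of the kind of asset record such a repository might hold, here is a hypothetical sketch; the field names are invented and are not Cormant-CS’s schema.

```python
# Illustrative only: a minimal asset record carrying the attribute groups
# the text mentions (purchase, location, ownership, support, configuration).
# Field names are hypothetical, not Cormant-CS's schema.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    location: str                 # e.g. "DC1 / Row 4 / Rack 12 / U20"
    owner: str
    purchase_date: str
    support_contract: str
    configuration: dict = field(default_factory=dict)

inventory = {
    "SRV-0042": Asset("SRV-0042", "DC1/Row4/Rack12/U20", "Payments team",
                      "2014-09-01", "vendor-gold", {"ru_height": 2}),
}
print(inventory["SRV-0042"].location)
```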
Power Management (8.0/10)
Cormant-CS records the full power path and connectivity of each device within the infrastructure. It provides the ability to fully view all connection paths and monitor power utilization at any level. Redundancy can be effectively monitored. It is possible to check how much power is in use and what capacity is available to ensure that power caps are not exceeded.
Thermal Management (8.0/10)
Live readings from energy-monitoring sensors are taken to collect data on thermal conditions across the infrastructure. Cormant-CS offers the ability to collect this data from SNMP-enabled sensors and intelligent PDUs for a holistic understanding of energy consumption, which helps drive PUE down over the long run. The built-in scripting engine can be used to query and record DCIE or PUE statistics.
Space Management (8.0/10)
With rack and floor plan views visualized in the application, space management becomes easy. The centralized database and a practical work-order system reveal how much capacity is in use and how much is available, along with potential hotspots. This has implications for equipment, power, network and energy.
Visual Modeling (8.0/10)
Both floor and rack plan views are supported by Cormant-CS. It provides a rich graphical representation of the entire infrastructure. When an asset is reconfigured or relocated, this graphical representation is automatically updated in real time, so it is always possible to check exactly where each asset is located and exactly how much capacity is available. Static data or network queried data can be used to get this visualization. However, 3D visualization is not available.
Hypothetical Modeling (8.0/10)
Cormant-CS provides a practical work-order management system wherein steps required for each change in the infrastructure are predefined. The visibility of information in an all-in-one database enables a user to make informed decisions about the demand for equipment, assets, storage, power, and network access. The powerful built-in scripting engine allows for the creation of scripts that send alerts when changes are made.
Access & Control (10/10)
With the rise of smartphone use, the way in which information is accessed has completely changed. The strength of Cormant-CS is mobile access and control from both smartphones and tablets. You can use Android, Apple or BlackBerry devices to monitor the infrastructure from anywhere. The web-based API with open standards is easy to integrate with an organization’s existing systems and provides read-write capabilities. It also has a powerful built-in scripting engine that enables a user to send remote commands from the Cormant-CS application to any device within the infrastructure.
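To show the read-write pattern an open, web-based API enables, here is a hedged sketch using Python’s requests library. The base URL, endpoints and JSON fields are invented for illustration and are not Cormant’s actual interface.

```python
# Hypothetical read/write round trip against a web API with open standards.
# The host, endpoints and fields below are placeholders, not Cormant-CS's.
import requests

BASE = "https://dcim.example.net/api"  # placeholder host

# Read: fetch an asset record.
asset = requests.get(f"{BASE}/assets/SRV-0042", timeout=10).json()

# Write: update its location after a move.
requests.put(
    f"{BASE}/assets/SRV-0042",
    json={**asset, "location": "DC1/Row5/Rack3/U10"},
    timeout=10,
).raise_for_status()
```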
Reporting & Notifications (9.0/10)
Cormant-CS provides a detailed visualization of data center infrastructure information. The information can be displayed in multiple formats like dashboard views, historical views, and scheduled reports, which can be easily customized and e-mailed directly to users. The built-in script engine provides an easy way to be notified about any changes to the assets, DCIE statistics, PUE statistics, or any other related information.
| | 10:48p |
IO’s BASELAYER OS Based out of Chandler, AZ, BASELAYER was spun out of IO in December of 2014. Currently, IO is the second largest private data center operator in the world. Prior to the new entity, BASELAYER OS was known as IO.OS – a data center management platform – and served more than 600 enterprises through IO’s colocation business.
Several of the named clients and enterprises using BASELAYER today include IO, Fortrust, Goldman Sachs, CenturyLink, and SRP. Furthermore, IO uses the Service Provider edition of BASELAYER OS to extend it to all clients located in IO’s facilities across the globe (over 2 million sq. ft. of data center space).
George and William Slessman founded BASELAYER, whose OS has been a DCIM solution since 2011.
“BASELAYER’s product structure is built to allow users to layer in components to match their corporate goals,” says Samir A. Shah, vice president of product management. “The graphic below shows the current and future state structure of our DCIM product. Our goal is to continue adding modules to our core services to unlock value at all levels of the IT stack.”

In creating a holistic data center management platform, BASELAYER enables several DCIM functions that include:
- “Single pane of glass” visibility – a comprehensive view of both global data center activity and third-party data feeds (including web services and big data analytics)
- Dynamic provisioning – programmatically matching application needs with data center capabilities
- Virtual sensors – visualize, control, track, and manage complex system performance
- Real-time PUE – view and graph PUE for an individual virtual machine, rack, module, or multiple data centers
- Business intelligence – real-time visibility of operational performance facilitates informed decision making and planning
As an example, using the visualizer module within BASELAYER, users can create custom dashboards to enable better business visibility. In the case below, a generator status dashboard shows real-time generator fuel levels in combination with forward-looking diesel pricing based on third-party web services feeds.

Operators can use this dashboard not only to estimate when fuel replenishment is needed, but also to optimize their decision around when diesel pricing is lowest.
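A toy version of that decision logic might look like the following. The fuel levels, burn rate and price forecast are invented, and BASELAYER’s actual analytics are certainly more sophisticated.

```python
# Toy refueling decision: refuel immediately if the tank would hit a safety
# floor, otherwise wait for the cheapest day in the price forecast.
# All figures below are invented for illustration.
fuel_level_pct = 62.0
daily_burn_pct = 4.0          # consumption per day under current load
safety_floor_pct = 25.0
price_forecast = [3.10, 3.05, 2.90, 2.95, 3.20]  # $/gal over next 5 days

days_until_floor = int((fuel_level_pct - safety_floor_pct) / daily_burn_pct)
window = price_forecast[: max(1, min(days_until_floor, len(price_forecast)))]
best_day = window.index(min(window))

print(f"Refuel in {best_day} day(s) at ${min(window):.2f}/gal "
      f"(floor reached in ~{days_until_floor} days)")
```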
“At BASELAYER, we simplify the data center and make it smart,” added Shah. “BASELAYER OS is the first complete data center operating system that integrates both modular and legacy data center infrastructure, enabling real-time visibility, control, simulation, optimization, and automation with users’ IT equipment and applications.”
BASELAYER DCIM Solution Details
IO is a global provider of software-defined data center infrastructure management solutions. The company was founded in 2007. With its global headquarters located in Arizona, U.S., and offices in Europe and Asia-Pacific, IO provides DCIM solutions to businesses across the world. IO offers an intelligent control platform that integrates business procedures across the data center and provides a single point of management. Here is a comprehensive evaluation of IO DCIM solutions.
Asset Management
The IO DCIM platform collects physical, virtual and logical data on all assets and abstracts the data into a powerful management layer. There is proper coordination between the asset information and changes happening in the application stack. With a unified management platform, you can monitor and manage all asset information across the data center and extended enterprise. You can group assets by location, users, zones or buildings.
Power Management
The IO.OS platform enables you to monitor and manage the power consumption details of space, power, cooling and IT from a single view. It presents real-time data of mechanical, power, cooling and electrical usage so that you can proactively resolve problems before they turn into critical issues. You can track power consumption details for a single device, user-defined group, zones or for the entire infrastructure. By visually mapping this data, you can quickly determine the total power usage and power capacity available at any point in time.
Thermal Management
The IO.OS platform seamlessly collects information from the entire environment, including sensors to provide continuous feedback to your team. It comes with a library of predefined functions to control valves and other equipment based on temperature, humidity and airflow values. When the predefined thresholds are met, the temperature can be automatically adjusted. IO.Analytics offers various dashboards to track and manage the efficiency of power, capacity and cooling modules. You can analyze key metrics of PUE trending, PUE comparative and power metrics.
Space Management
IO.Analytics is a web-based application that provides a visual mapping of the entire asset information within a data center. Using this visual aid, you can quickly identify where each device is located and how much capacity is in use. Simple drop-down menus allow you to compare single or multiple modules over a configurable time frame. By knowing the available capacity in real-time, you can add or relocate equipment on-the-go.
Visual Modeling
Visualizer, which comes with the IO.OS, displays custom graphics that represent the equipment in the entire data center, including PDUs, chillers, floor layout and generators. These visual drawings can be accessed through a browser, smartphone or desktop. Once created, these visual drawings are linked to real-time data to provide infrastructure alerts and alarms, historical trend graphs, capacity warnings and trends, and current status. IO.OS Immersant is a visual module that lets you virtually walk inside the data center and view its status in a first-person view. You no longer need to physically check the data center.
Hypothetical Modeling
The Visualizer and Immersant modules in the IO.OS provide you with a real-time representation of the data center. You can virtually move inside the data center and conveniently monitor and manage assets from a unified view. By looking at historical and current trends, you can determine the data center’s future power, space and cooling requirements. Before installing new equipment, you can hypothetically model it to see the effect of the change on power, cooling and space capacity.
Access & Control
The IO DCIM platform can collect data from physical and virtual sensors and all other equipment, including SNMP-networked devices. It supports 150+ communication protocols, including Modbus and BACnet. The IO.OS Translator translates this information into OPC, the native language of IO.OS. You can configure the system to start or stop a process based on the data collected from the sensors. Real-time monitoring and alarm notifications are available. You can customize security settings to provide role-based security to manage user access. Using the power of HTML5, you can access the IO DCIM dashboard with your mobile device from anywhere, anytime.
Reporting & Alarming
With IO.Analytics, data trending is at your fingertips. You can use the reporting feature to create a wide array of reports. Key metrics can be visualized in graphs, pie charts or dials to help you quickly identify data center resource usage trends. You can create multiple representation layers based on a customer or a module. Reports can be created with a date range of up to 5 months back. It provides real-time alarm notifications for all critical data.
| | 10:48p |
CA DCIM Headquartered in New York, CA Technologies has been in the DCIM market for some time. Prior to coining the term DCIM, CA’s management model was referred to as Operational Energy Management and Data Center Infrastructure and Energy Management. CA’s early offerings in the Operational Energy Management market included the energy monitoring solution CA ecoMeter, which was introduced in 2009.
The team that developed the first version of CA’s DCIM solution based on CA ecoMeter included Dhesi Ananchaperumal, SVP, business unit executive and distinguished engineer at CA Technologies; Peter Gilbert, VP of business strategy; and Terrence Clark, SVP and general manager at CA Technologies. All of this technology was built in-house, not through an acquisition.
In developing its technology, CA now has a very direct offering to the data center ecosystem.
“The unique value proposition of CA DCIM is founded on our software development and IT expertise, vision of true facilities and IT convergence for Infrastructure and Operations (I&O), worldwide presence, and customer satisfaction,” said Francois Cattoen, product manager for CA DCIM, CA Technologies. “The CA DCIM solution is hardware agnostic and has a data collection and integration approach for the physical infrastructure that makes for easy deployment and integration with third party solutions. Our highly scalable architecture, breadth of use cases, superior user experience, and integration approach differentiates CA DCIM.”

CA DCIM is deployed within very demanding data center environments throughout the world. These data center environments require high availability, high security, and hyperscale capabilities. Some of these demanding environments include RagingWire, Datotel, and Facebook. BBVA, Logicalis Group, Entel and Sicredi are other examples of customers using CA DCIM.
“CA DCIM enables organizations to prevent downtime with continuous monitoring, intelligent alerting and visibility into the data center and IT infrastructure,” added Cattoen. “It can be deployed rapidly, easily and efficiently so IT/facilities can quickly respond to business needs. And for service providers, CA DCIM helps optimize infrastructure to more efficiently support tenants and can help service providers differentiate and increase revenue with new services.”
CA DCIM Solution Details
CA Technologies is one of the leading providers of software solutions to businesses of all sizes. Founded in 1976, the company has quickly transformed itself into a valued leader in the IT industry. CA DCIM is a combination of two products: CA ecoMeter and CA Visual Infrastructure. Here is a comprehensive evaluation of CA DCIM solutions.
Asset Management
CA DCIM solutions provide an auto-discovery feature that automatically finds assets present in the data center. It uses a wide range of protocols, including SNMP, Modbus and BACnet, to collect data from a wide range of devices. The asset information is stored in a centralized database; data can also be added manually. This data is visualized in 3D to help you better understand the location and details of every asset.
Power Management
CA Visual Infrastructure is a 3D application designed to provide real-time monitoring of space, cooling and power within the data center and IT facilities. It tracks the entire power path from the sub-station feed to the equipment in the data center. It measures and analyzes power load and consumption across data centers, buildings, systems and multiple devices. You can easily determine areas of low and high power consumption, underperforming and efficient assets, consumption patterns and waste. It uses intelligent alerting to identify power issues and reduces false alarms.
Thermal Management
For efficient thermal management, CA DCIM offers CA ecoMeter, a product that captures real-time data on energy use across the data center. It reads data from environmental sensors and records it in the database, presenting it in a 3D format for better analysis of resource usage. It reads data from multiple sources and consolidates it. CA ecoMeter creates standard and custom metrics for Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCIE) to gauge the performance of data center facilities. Based on these metrics, you can take appropriate actions to optimize energy efficiency in the data center.
Space Management
The data collected by real-time monitoring of CA Visual Infrastructure allows you to efficiently manage the space within the data center. It provides a detailed analysis in a 3D visualization format to better understand the entire infrastructure in a top-down view. You can determine the capacity in use and the capacity available. You can decide to install new racks where excess cooling and power are available. The addition or relocation of assets can be done quickly.
Visual Modeling
3D visual modeling is one of the striking features of CA DCIM software. To better meet business objectives, CA DCIM enables you to visualize your data center’s environment in 3D and efficiently manage power, cooling and space capacity in the data center. Using this 3D visualization tool, you can gain insights into the building layout, inventory locations and resource capacity, analyze resource usage trends, and control asset management. The interface is easy to use.
Hypothetical Modeling
Taking advantage of CA ecoMeter and CA Visual Infrastructure, CA DCIM software provides a good platform to effectively manage capacity and power in a data center. This data is offered in a 3D visualization model to better understand how each asset is placed within the data center. In addition, CA DCIM software offers future capacity planning with “what-if” scenarios. Before actually making a change to the infrastructure, you can determine the effect a device will have on the power and space of the data center without disrupting any service or load.
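The essence of such a what-if check can be sketched in a few lines. The rack capacities and candidate device below are hypothetical, and this is not CA’s model; it only shows the headroom comparison that underlies the idea.

```python
# Minimal "what-if" headroom check: verify a candidate device fits the
# remaining power and space before any physical change is made.
# All capacities and the device below are invented.
rack = {"power_capacity_kw": 8.0, "power_used_kw": 6.2,
        "u_capacity": 42, "u_used": 36}
candidate = {"power_kw": 1.5, "u_height": 4}

fits_power = rack["power_used_kw"] + candidate["power_kw"] <= rack["power_capacity_kw"]
fits_space = rack["u_used"] + candidate["u_height"] <= rack["u_capacity"]

print("install OK" if fits_power and fits_space else "would exceed capacity")
# 6.2 + 1.5 = 7.7 kW <= 8.0 and 36 + 4 = 40 U <= 42, so: install OK
```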
Access & Control
CA DCIM software uses standard protocols like SNMP, Modbus and BACnet to track assets within the data center. Integrating this application with other third party solutions like VMware, HP and RF Code is quick and easy. CA Visual Infrastructure augmented with Foundation Services provides mobile device dashboards to enable users to quickly access the software from any device, anytime and anywhere.
Reporting & Alarming
CA DCIM software provides a highly intuitive 3D interface to create a wide array of reports. You can create live reports, trend reports and charge-back reports. Report templates can be customized to suit customer-specific needs. The CA DCIM application uses a building management system (BMS) or an energy monitoring system (EMS) to generate alerts or control functionalities for IT infrastructure and facilities devices.