Data Center Knowledge | News and analysis for the data center industry

Wednesday, April 22nd, 2015

    12:00p
    Why Combined Heat and Power Makes Sense for Data Centers

    Data center operators are late adopters of combined heat and power, a technology that’s been around since the late 1800s and is used extensively in hospitals, refineries, and petrochemical and biotech facilities. But using such plants as the source of data center power can yield substantial energy-efficiency and cost benefits.

    That’s according to Terence Waldron, president at Waldron Engineering and Construction, who spoke about the use of combined heat and power plants (CHPs) in data centers Tuesday at the Data Center World conference in Las Vegas.

    CHPs convert natural gas to electricity efficiently, and when used in tandem with absorption chillers they can in some cases also provide all the chilled water a data center needs, he said. A CHP can also act as backup in case of utility-feed failure, so there are reliability benefits as well.

    Finally, a facility that has a CHP has a substantially lower overall greenhouse-gas footprint than a facility that gets all of its power from the utility and uses a boiler for comfort heating.

    There is another technology on the market that converts natural gas into electricity and has been getting more and more attention as a source of data center power: fuel cells. In Waldron’s opinion, while CHPs are less efficient generators than fuel cells, they require a much smaller footprint to generate the same amount of power, and they win on efficiency when their ability to create chilled water is factored in. CHPs are also a lot cheaper, he said.

    In Delaware, a group of residents recently successfully lobbied officials to block a major project to build a data center powered by a combined heat and power plant. Officials blamed the project’s failure to win approval on a lack of detailed information in the developer’s plans.

    Converting Energy Loss to Chilled Water

    Generally, CHP engines are 40 percent efficient at converting fuel into electricity, according to Waldron. An average utility is 33 percent efficient, because energy is lost in generation, in transmission, and in transformers on the user’s property.

    With a 40-percent efficient engine, the remaining 60 percent of energy takes the form of heat, which an absorption chiller converts into chilled water. Coincidentally, the amount of chilled water a 40-percent efficient engine can produce this way is about equivalent to the amount of chilled water needed to cool the servers it powers, Waldron said.

    “It’s an interesting balance,” he said. “No one designed the engine in that way. It just happened that way.”

    Of course, your mileage may vary based on the design of the data center and on the kinds of IT equipment it supports.
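
    To put rough numbers on that balance, here is a minimal back-of-the-envelope sketch in Python. The 40-percent engine efficiency comes from Waldron’s talk; the fuel input, heat-recovery fraction, and absorption-chiller coefficient of performance are illustrative assumptions, not his figures:

        # Back-of-the-envelope CHP energy balance, using the 40-percent engine
        # efficiency cited in the talk. Fuel input, heat-recovery fraction, and
        # chiller COP are assumed values for illustration only.
        FUEL_INPUT_KW = 2500           # natural gas energy into the engine (assumed)
        ENGINE_EFFICIENCY = 0.40       # ~40% of fuel becomes electricity (per talk)
        HEAT_RECOVERY_FRACTION = 0.80  # share of waste heat captured (assumed)
        ABSORPTION_COP = 0.70          # typical single-effect chiller (assumed)

        electric_kw = FUEL_INPUT_KW * ENGINE_EFFICIENCY
        waste_heat_kw = FUEL_INPUT_KW * (1 - ENGINE_EFFICIENCY)
        chilled_water_kw = waste_heat_kw * HEAT_RECOVERY_FRACTION * ABSORPTION_COP

        print(f"Electricity to IT load: {electric_kw:.0f} kW")       # 1000 kW
        print(f"Waste heat available:   {waste_heat_kw:.0f} kW")     # 1500 kW
        print(f"Chilled-water capacity: {chilled_water_kw:.0f} kW")  # 840 kW

    Under these assumptions, roughly 840 kW of cooling accompanies every 1,000 kW of electricity delivered to the IT load, which is in the neighborhood of the heat those servers reject.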

    Not a Replacement for Electrical and Mechanical Gear

    Deployed as a layer on top of the existing power and cooling infrastructure, the CHP becomes the primary source of both electricity and chilled water. To be clear, Waldron does not advocate for relying on the CHP alone for both of those resources.

    “You always apply it as a layer sitting on top of your existing infrastructure,” he said. That way it works in tandem with utility power and the facility’s dedicated chillers.

    The utility feed picks up whatever the CHP doesn’t serve, and the chillers pick up whatever chilled water is still required.

    Some Costs Will Go Up

    Deployment of a CHP impacts the bottom line negatively in two ways. One is the added line item of maintenance, since the data center operator usually has to sign a long-term service agreement with a vendor.

    The other potential drawback is higher utility rates. Having a CHP makes you a “poor load-factor client” in the utility’s eyes, since your utility load is now greatly reduced and fluctuates a lot more. “You used to be a phenomenal load-factor client,” Waldron said.

    Still, even with these two added costs, payback on a CHP can be four to five years for a data center operator. These projects don’t provide the highly desirable two-year payback periods, but the value of such utility-infrastructure projects lasts for many years.

    In many states, governments have also forced utilities to provide incentives to customers that install CHPs, since regulators like the efficiency and low CO2 emissions they provide.

    Arizona Data Center Saves Lots of Cash Annually

    Waldron presented a case study of a CHP deployment by a data center customer in Arizona he could not name due to confidentiality agreements. The plant has been saving the customer about $776,000 per year in operational costs.

    The customer’s business-as-usual annual cost was close to $2 million. Considering the annual savings and that the capital cost of deploying the CHP was $3.7 million, the payback in that particular case is about 4.8 years, according to Waldron. That’s “simple payback,” which doesn’t include things like utility incentives, capacity payments, and tax treatment.
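
    The arithmetic behind that simple-payback figure is just capital cost divided by annual savings, as this short sketch shows (figures as presented in the case study):

        # Simple payback for the Arizona case study, using the talk's figures.
        capex = 3_700_000         # CHP capital cost ($)
        annual_savings = 776_000  # annual operational savings ($)

        payback_years = capex / annual_savings
        print(f"Simple payback: {payback_years:.1f} years")  # -> 4.8 years
        # Simple payback ignores utility incentives, capacity payments, and tax
        # treatment, any of which would shorten the effective payback period.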

    3:00p
    Canonical Expands Ubuntu’s Container and OpenStack SDN Capabilities

    Looking to make its distribution of Linux the preferred platform for next-generation data centers, Canonical has released an update to Ubuntu that strengthens its ability to support Docker containers and software-defined networking (SDN) environments.

    Version 15.04 of Ubuntu adds formal support for LXD, a hypervisor Canonical created to make it easier to run containers in isolation. While there is a lot of debate about where containers should run, Mark Baker, product manager for OpenStack and Ubuntu at Canonical, says the LXD hypervisor provides a lightweight way to deploy Docker containers on top of a virtual machine platform without creating a lot of unnecessary processing overhead.

    At the same time, Canonical is using this interim release of Ubuntu to strengthen its ties to OpenStack. Version 15.04 of Ubuntu adds support for Kilo, the latest release of OpenStack. To complement that effort, Canonical has also included in this release of Ubuntu support for ZeroMQ (0MQ), a brokerless messaging system.

    Finally, with this release Canonical has also formally added support for Snappy Ubuntu Core, a transactional implementation of its operating system that makes it easier to apply and roll back application updates.

    Baker says that one of the things that distinguishes Ubuntu is that LXD is designed to run as a daemon in user space. As such, each LXD instance can be addressed through its own RESTful application programming interface (API). That capability should enable IT organizations to begin live-migrating containers within a data center environment in much the same way IT administrators move traditional virtual machines around the enterprise, says Baker.
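
    As a rough illustration of what a per-daemon REST API makes possible, the Python sketch below lists containers over LXD’s local Unix socket. The socket path and the /1.0/containers endpoint reflect early LXD releases but should be treated as assumptions; real deployments would typically use an LXD client library instead:

        import http.client
        import json
        import socket

        class UnixHTTPConnection(http.client.HTTPConnection):
            """HTTP over a Unix domain socket, which is how LXD exposes its API locally."""

            def __init__(self, socket_path):
                super().__init__("localhost")
                self.socket_path = socket_path

            def connect(self):
                sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
                sock.connect(self.socket_path)
                self.sock = sock

        conn = UnixHTTPConnection("/var/lib/lxd/unix.socket")  # assumed default path
        conn.request("GET", "/1.0/containers")                 # list container URLs
        response = json.loads(conn.getresponse().read())
        print(response["metadata"])  # e.g. ["/1.0/containers/web01", ...]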

    For all those reasons, Baker says, adoption of Ubuntu in data centers running Docker containers has been particularly strong.

    “We’ve been doing a lot of early work in containers,” says Baker. “There are six times as many Docker implementations on top of Ubuntu as on any other platform.”

    With the rise of containers, operating system providers of all types see a major opportunity to upend the status quo inside the average data center. As developers continue to employ containers, the need for large operating systems to support legacy virtual machines is giving way to a new generation of lighter-weight operating systems that drive server utilization rates significantly higher.

    It remains to be seen how rapidly this new generation of operating systems will be employed inside and outside of the cloud. But one thing that is clear is that the amount of operating system diversity inside the data center is going to increase.

    3:30p
    How NOT to Ship Server Equipment: 5 Mistakes to Avoid

    Dave Levine is President of the Washington DC unit of Craters & Freighters, a packing/crating and shipping firm with 65 locations nationwide, focused on the safe handling and transport of items including server equipment.

    Shippers of server equipment must consider a number of factors, including safety, speed, and – of course – cost. Over the years, we have seen a number of shippers make a common set of mistakes in the name of speed and cost, at the expense of safety. Damage claims not only carry direct costs (especially if iron-clad insurance isn’t purchased), but can also carry indirect costs by jeopardizing mission-critical shipments.

    In order to ensure that unracked equipment reaches its destination all in one piece, companies should avoid these top 5 mistakes. [Note: Some of these mistakes relate to hiring a packing and shipping company, while others relate to the do-it-yourself approach.]

    1. The company you hire should be the same one that will actually handle the equipment. The freight industry makes liberal use of partnerships and outsourcing, not all of which are in your best interest as the equipment owner. Be wary of companies that outsource any aspect of handling your equipment while it is still unpacked. Why would you entrust the riskiest part of the process to a company you’ve never heard of, nor had the opportunity to evaluate directly?
    2. Don’t settle for “blanket” mover coverage of a few cents per pound. Your equipment is presumably worth much more than, say, $0.60 per pound. Make sure you get 100 percent full-value insurance (if your company doesn’t carry similar coverage).
    3. Pallets do not protect server equipment! Ensure that you or your packing vendor surround your equipment with double-wall cardboard boxes at the very least, if not wooden crates. Simply strapping equipment to pallets is only recommended in scrap/recycling situations.
    4. Know the limitations of OEM (original equipment manufacturer) packaging. OEM packaging is meant to provide very basic protection to equipment that incurs very few handoffs. After all, it is generally cheaper for a manufacturer to replace, say, 0.1 percent of items due to damage than it would be to “overpack” (from their perspective) 100 percent of its shipments in an effort to achieve a near-zero damage rate. In the comparatively rough freight world, better protection is required. Before shipping your valuable equipment, get a complete description of the packing methods to be employed by your vendor or staff. This is critical when considering quotations from multiple vendors. Be sure you are comparing “apples to apples” and not just prices! One final note: Be especially careful if you’re shipping small, fragile OEM-packed equipment via a ground service that has the roughest handling environment of them all. Items are routinely stacked, tossed onto sort belts, and allowed to fall up to several feet into sort bins, without regard to proper orientation.
    5. NEVER pack with Styrofoam™-like foams. Technically better referred to as expanded polystyrene, these foams not only provide minimal shock and vibration protection, they also flake easily. These flakes can pass through openings in your equipment, doing permanent and serious harm. It is always advisable to cushion sensitive equipment with cut-to-fit polyethylene foam, which acts like a shock absorber in your vehicle, gently protecting your equipment from the rigors of freight handling.

    Damage risk is inherent in the shipping world, but by following good practices and avoiding the most common mistakes, you can drastically tilt the odds of a safe arrival in your favor. Good luck and safe shipping!

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Cloud, Disk or Tape? What you absolutely need to know to optimize data protection

    Most organizations are challenged by the trade-off between tighter IT budgets and limited resources on one hand and shorter SLAs (RTO/RPO) and increasing data growth on the other. While options like disk-based appliances or cloud targets are emerging, the reality is that most enterprises still have a significant volume of tape infrastructure and a legacy backup footprint that is operationally burdensome.

    Join Iron Mountain and EMC in a new webinar next week, on Wednesday, 4/29 at 1:00pm EDT, to discuss the realities of cloud, disk, and tape infrastructures and to see the latest trends, benchmarks, and best practices for optimizing data protection.

    Register Now

    About the Presenters

    Sam Gopal, Iron Mountain Data Centers: Sam Gopal is the Director of Product Management for Iron Mountain’s Data Center service line. He has over 15 years of experience in management consulting, IT systems integration, business process improvement, and corporate strategy.

    J.P. Corriveau, EMC Core Technologies: J.P. Corriveau is the Director of Product & Technical Marketing at EMC. He joined EMC in 2005 and brings over 25 years of enterprise experience delivering innovative enterprise-wide solutions for the data protection market.

    About the Moderator

    Bill Kleyman, MTM Technologies: Bill Kleyman is the Director of Strategy and Innovation at MTM Technologies. He is an enthusiastic technologist with experience in data center design, management, and deployment.


    4:00p
    IBM Cloud Identity Takes Shape, But Overall Revenue Flat

    IBM believes the cloud market is starting to bifurcate between low-end commoditized cloud and higher-profit enterprise cloud. The company has reinvented itself several times in its history, but its focus has long remained on the enterprise.

    While IBM cloud revenue was impressive, its overall first-quarter revenue, reported this week, was flat year over year. A glance over financials for its multiple business lines reveals the word “down” far more than “up.”

    The company announced that its cloud service revenue grew more than 75 percent in the first quarter, propped up by larger enterprise deals centered on analytics. Its rolling 12-month cloud revenue is now more than $7.7 billion. Its “as-a-Service” revenue is $3.8 billion, up from $1.5 billion a year earlier.

    Cloud is a bright spot, but evolution is painful. That is arguably why fellow tech giant Dell went private: the books are never pretty during times of sweeping change.

    It’s hard to gauge whether IBM’s evolution is a success, given the lack of overall revenue growth, but the need to evolve is unquestionable. Enterprise IT is moving toward an “as-a-Service” model, but that business has to come from somewhere.

    A historical parallel is the big telecoms dealing with wireline losses in the aughts. Two types of telecoms emerged as people ditched wired phone lines en masse: the transitioning telecom shifting its focus to other revenue buckets, and the consolidator propping up wireline revenue by buying assets on the cheap.

    Would it have been wiser to invest in a wireline-heavy company with rising overall revenue or a company with flat to modestly shrinking overall revenue with the mix shifting away from wireline? It’s hard to argue that in the long term the former wins over the latter. The same applies to enterprise tech companies and legacy enterprise software businesses.

    IBM is undergoing a transition, making several billion-dollar investments in cloud, including $1.2 billion on new data centers to house its cloud and a billion on its Platform-as-a-Service, Bluemix. Six new data centers are expected in 2015. It is investing in its evolution.

    IBM’s recent deals show that it knows its identity in the cloud world: there is a market split occurring between commodity cloud and high-value cloud, and those deals exemplify it.

    Two of the larger deals IBM signed in the quarter were with ShopDirect, one of the largest retailers in the U.K., and the Weather Company, which provides weather data to companies across many industries. Both companies use a public Infrastructure-as-a-Service provider in addition to IBM; IBM comes in because of a need for advanced analytics.

    ShopDirect is using IBM cloud to understand mobile buying patterns and customize offerings, while the Weather Company is analyzing data from millions of sensors and tailoring it to specific industry needs, such as insurance and retail.

    Other recent deals with the U.S. Army, Marriott Hotels, and Coca-Cola are hybrid-style deals, combining IBM analytics cloud with commodity IaaS or existing IT systems not necessarily “owned” by Big Blue.

    The biggest public cloud players are playing at both ends of the market and still pose a threat, despite a clearer cloud market split. Both Amazon Web Services and Microsoft Azure have released features targeted at startup developers and enterprises alike, while Google and VMware have partnered to the same end. AWS has recently gone as far as to say that it wants to win the entire enterprise data center.

    IBM’s investments reveal faith in the transition and suggest the company is finding its niche in the cloud market with enterprise services and analytics. However, its growth in cloud will need to offset shrinking buckets, and the transition needs to happen in the organization as well as in the revenue mix.

    “In the first quarter we had a strong start to the year,” said Ginni Rometty, IBM chairman, president, and CEO, in a press release. “Our strategic imperatives growth rate accelerated, demonstrating the power of our offerings in these new opportunities and contributing to improved revenue performance. Our focus on higher value through portfolio transformation and investment in key areas of the business drove continued margin expansion.”

    4:30p
    In Data Center Perimeter Security, TCO is a Continuous Process

    Building perimeter security, the outer edge of data center early warning systems, is kind of like playing a Tower Defense game. In both instances, there are multiple options available for aligning what you deploy with your needs. However, when considering all the variables involved in Total Cost of Ownership (TCO), measuring it on an ongoing basis can be much trickier than playing a video game.

    Southwest Microwave’s Tim Claus, a former electronics tech in the Navy, said during his Data Center World presentation in Las Vegas this week that deploying data center security around the perimeter is not as easy as throwing up a fence and slapping cameras around. A perimeter security strategy needs to be carefully calculated.

    Claus addressed TCO, identified pros and cons of various data center security solutions, and pointed out the importance of understanding the strengths and weaknesses of each type of sensor and backing them up with complementary capabilities. He also discussed system design: Where should you put what types of sensors, and why?

    It’s best to approach Total Cost of Ownership as a continuous process, aligned with the threat level, and partnered with someone with multiple types of technology who is willing to assist on implementation, said Claus.

    Types of sensors include fence-mounted intrusion detection systems, buried cable sensors, and microwave sensors. All have pros and cons and can complement one another in different ways.

    Video analytics is also becoming more popular, according to Claus. Vendors are developing technology to actively discern threats. However, these sensors are best paired with physical sensors to tune out false alarms. “In our sensor testing, we determined we always want to use an area of limited view because it can be distracted in a large area,” said Claus.

    Setting up a plan for ongoing operations and training is another major cost consideration.

    Claus said to consider the following factors:

    • How expensive will it be to turn on power to a complex system?
    • How measurable is ROI?
    • What are acquisition costs, installation costs, change costs and operational costs?

    Most people don’t consider costs involved in making changes, according to Claus. “Most camera systems don’t stay the same throughout the life. What you don’t want to do is spend $100,000 and have to replace it because it couldn’t evolve or change. Can the system expand, and what’s the cost of that expansion?”

    Maintenance costs vary, said Claus, and power costs also need to be taken into consideration. A microwave system may be more or less expensive than buried cable, depending on configuration.

    The benefits of data center perimeter security are early warning, threat detection, and assessment. “We want to give your security team as much time as possible,” said Claus. However, false alarms can be dangerous, because they make people complacent by the time something serious happens.

    Each sensor is prone to certain types of false alarms: dew on grass can trigger microwave warnings, and strong winds can trigger fence sensors. Ultimately, the cheapest fence system, generating 20 nuisance alarms a day, may not be the cheapest over time. Employing different types of sensors helps reduce false alarms but raises cost.
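
    A quick, hypothetical model shows why the nuisance-alarm math matters; none of the figures below come from Claus’s presentation:

        # Hypothetical 5-year TCO comparison: a cheap, noisy fence system versus
        # a pricier dual-layer system, counting operator response to nuisance alarms.
        def tco(acquisition, install, annual_maintenance,
                nuisance_alarms_per_day, cost_per_response, years=5):
            """Total cost of ownership including nuisance-alarm response labor."""
            responses = nuisance_alarms_per_day * cost_per_response * 365 * years
            return acquisition + install + annual_maintenance * years + responses

        cheap_fence = tco(40_000, 10_000, 2_000,
                          nuisance_alarms_per_day=20, cost_per_response=15)
        dual_layer = tco(90_000, 25_000, 5_000,
                         nuisance_alarms_per_day=2, cost_per_response=15)

        print(f"Cheap fence, 5-year TCO: ${cheap_fence:,.0f}")  # ~$607,500
        print(f"Dual layer,  5-year TCO: ${dual_layer:,.0f}")   # ~$194,750

    With these made-up numbers, the “cheap” system costs roughly three times as much over five years once alarm-response labor is counted.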

    Perimeter security is about performing intrusion detection as early, accurately and cost effectively as possible. An experienced intruder can climb over razor wire in three seconds, said Claus. It makes you wonder why there’s never been a data center heist movie.

    A basic deterrent consists of a fence, lighting, and even thorny bushes. “Typically a fortified site is an optimal deterrent,” said Claus. “Inside, we can add physical detection devices, buried cable systems or pressure type sensors.”

    The big terms in perimeter security are Probability of Detection (PD), False Alarm Rate (FAR), and Nuisance Alarm Rate (NAR). In designing perimeter security you want the highest PD possible, with FAR and NAR as low as possible. Each type of perimeter security has pros and cons, and the right fit largely depends on the situation and environment.

    To apply security products, you need to define the type of threat first: Are they terrorists or local kids? “The type of threat will guide you to the right budget and product,” said Claus. “We’ve seen it all.”

    After adding a deterrent around the perimeter, the next step is to determine how many layers of protection you need. Single-layer protection is, for example, a fence with sensors on the inside.

    A dual-layer approach combines multiple sensor types; in addition to providing better coverage, it makes it easier to tune out false alarms. One sensor technology is usually placed at the outer perimeter and the second at the asset.

    Multi-layer protection includes several different sensor types. These sensors can extend beyond the perimeter to help detect someone doing reconnaissance. The downside is that they also detect animals and other triggers of false alarms.
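
    The statistics behind layering can be sketched in a few lines of Python. The detection probabilities and alarm rates below are invented for illustration; the point is the shape of the trade-off, not the numbers:

        # If an alarm is raised only when BOTH layers trip within a short window
        # (an AND rule), coincident nuisance alarms become rare, at some cost to
        # probability of detection (PD). All figures are hypothetical.
        pd_fence, pd_microwave = 0.95, 0.90    # per-layer PD (assumed)
        nar_fence, nar_microwave = 5.0, 3.0    # nuisance alarms/day (assumed)
        window_days = 60 / 86400               # both must trip within 60 s

        # OR rule: either layer alarms. High PD, but nuisance rates add up.
        pd_or = 1 - (1 - pd_fence) * (1 - pd_microwave)
        nar_or = nar_fence + nar_microwave

        # AND rule: for independent nuisance sources, coincidences per day are
        # roughly the product of the two rates times the coincidence window.
        pd_and = pd_fence * pd_microwave
        nar_and = nar_fence * nar_microwave * window_days

        print(f"OR rule:  PD={pd_or:.3f}, nuisance alarms/day={nar_or:.1f}")    # 0.995, 8.0
        print(f"AND rule: PD={pd_and:.3f}, nuisance alarms/day={nar_and:.4f}")  # 0.855, ~0.01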

    There are several design considerations, such as terrain and line of sight. There are access and integration considerations as well, since there are multiple ways to connect software and sensors, and you don’t want staff to trigger false alarms. “What we found is that sensor providers are providing software development kits that allow sensors to tie in,” said Claus.

    You even need to consider whether sensors should be visible. They can be ugly; on the other hand, seeing the sensors can itself be a deterrent, as in the case of microwave units.

    5:49p
    EMC and Iron Mountain Take Data Backup Underground

    EMC has partnered on a data backup solution for businesses with service provider Iron Mountain, whose flagship underground data center is located 220 feet deep, in a cavern near Pittsburgh, Pennsylvania.

    The two companies’ offering, called Data Vaulting, will combine Iron Mountain’s off-site data vaulting service with EMC’s backup appliances and replication and deduplication software. Customers usually buy the two together, according to a joint press release.

    Iron Mountain’s National Data Center in Pennsylvania, located in a town called Boyers, is in a former limestone mine. Besides operating the data center, Iron Mountain, whose roots are in physical document storage, also stores tape reels for movie studios, photo archives, and documents for the U.S. government’s Office of Personnel Management in the cavern.

    The emphasis on data center services at the facility is recent. The company has expanded data center space in the cavern over the past several years. It has also retained Compass Datacenters to build a second data center in the suburbs of Boston.

    Late last year Seagate, another storage vendor, took space in the underground data center for its cloud backup and disaster recovery services. Iron Mountain acts as a reseller for those services too.

    “Our collaboration with EMC means companies can protect their data offsite, improve their disaster recovery processes and benefit from the scalability and efficiency that the cloud provides,” Eileen Sweeney, senior vice president and general manager of data management at Iron Mountain, said in a statement.

    While the Iron Mountain facility in Pennsylvania isn’t the only underground data center, such facilities are rare. There are several more in North America, Europe, and Asia.

    See the Data Center Knowledge list of underground data centers here.

    9:26p
    Roundup: Products Launched at Data Center World Spring

    From cooling to cabling, numerous vendors rolled out a variety of data center infrastructure products at this week’s Data Center World in Las Vegas. Here’s a roundup:

    Monitoring: TrendPoint Updates Data Center Power Metering Platform

    Energy monitoring company TrendPoint launched a new version of its branch circuit power meter (BCPM) for data centers. One of the company’s marquee customers is Facebook, which uses the BCPM product in its data centers.

    In BCPM 2.0, TrendPoint has added waveform capture and harmonics visibility for branch circuits. The vendor aims to simplify power metering by putting monitoring of all kinds of data center power equipment (switchgear, switchboards, distribution panels, etc.) on a single platform.

    “We can implement one power meter, one platform, one system in one piece of equipment, and provide all necessary data points with less complexity for communication, integration, and device management,” TrendPoint CTO Jon Trout said in a statement.
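
    For a sense of what waveform capture buys you, the sketch below synthesizes a distorted 60 Hz branch-circuit current and recovers its harmonic content with an FFT. It is purely illustrative and reflects nothing about TrendPoint’s actual firmware or interfaces:

        import numpy as np

        FS = 7680               # samples/sec: 128 samples per 60 Hz cycle (assumed)
        t = np.arange(FS) / FS  # one second of samples
        f0 = 60                 # fundamental frequency, Hz

        # Synthetic current: 100 A fundamental plus 3rd and 5th harmonics.
        i = (100 * np.sin(2 * np.pi * f0 * t)
             + 20 * np.sin(2 * np.pi * 3 * f0 * t)
             + 10 * np.sin(2 * np.pi * 5 * f0 * t))

        spectrum = np.abs(np.fft.rfft(i)) / (len(i) / 2)  # peak amplitude per 1 Hz bin
        h1, h3, h5 = spectrum[60], spectrum[180], spectrum[300]
        thd = (h3**2 + h5**2) ** 0.5 / h1

        print(f"Fundamental: {h1:.1f} A, 3rd: {h3:.1f} A, 5th: {h5:.1f} A")
        print(f"Total harmonic distortion: {thd:.1%}")  # ~22.4%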

    Modeling: ManageEngine Launches 3D Data Center Modeling Software

    ManageEngine, an IT-management vendor, announced the beta release of its RackBuilder Plus 3D modeling software for data centers. A tool for data center management, the software gives admins a virtual, three-dimensional view of their data center layout.

    Users can build their 3D models by adding racks, hot and cold aisles, walk paths, and walls to the data center view. RackBuilder Plus is built on ManageEngine’s OpManager software for data center infrastructure management and has the ability to monitor devices on the data center floor.

    Customers can convert RackBuilder Plus into the full-fledged OpManager DCIM solution to add network management, physical and virtual server monitoring, fault management, workflow automation, and asset management.

    Cooling: Mestex Intros Evaporative Cooling System

    Cooling systems vendor Mestex announced its new Aztec AMC HVAC system for data centers. The company said the design combines evaporative cooling with an advanced “air-turnover system,” a rare combination in industrial applications.

    The system includes direct-drive plenum fans with variable frequency drives that can dial cooling down when IT gear is operating at partial load. Web-based digital controls help monitor and adjust the system for comfort, costs, and energy efficiency.

    Evaporative cooling does not require the use of refrigerants, which lowers power consumption compared with traditional data center cooling. “Unlike other technologies just starting to be developed to reduce energy, costs, and environmental impact, this technology is available today and proven effective,” Mestex President Mike Kaler said in a statement.

    Racks: Enlogic Launches Plug-and-Play Rack

    Enlogic introduced a data center rack that combines power distribution, energy management, environmental monitoring, and door access controls, all working as an integrated system.

    The idea is to reduce installation time, simplify connectivity, and have comprehensive cabinet-level management. Enlogic said it wanted to bring plug-and-play convenience to installation of servers in the data center rack.

    Power Distribution: Enlogic Intros Data Center PDUs

    Enlogic also launched two PDU products at the conference.

    One is a wireless-metered PDU with a built-in antenna module that transmits energy and environmental data, reducing the amount of necessary cabling. It addresses security by using a proprietary wireless protocol designed to carry power data only, segregating it from other wireless networks.

    The other one is a horizontal PDU with hot-swappable management capability.

    Cabling: Sumitomo Brings Eight-Fiber Connector to Market

    Sumitomo Electric Lightwave, which provides optical fiber and connectivity solutions, launched a connector that enables real-time on-site cable builds and permanent terminations of jumpers, trunk cables, harnesses, and arrays.

    The eight-fiber QSFP-40G-SR4 splice-on connector is meant to simplify 40 GbE and 100 GbE connectivity in data centers. QSFP stands for Quad Small Form Factor Pluggable.

    Because it has eight fibers instead of the usual 12, it helps increase efficiency by eliminating unused fibers. It has to be used with eight-count fiber ribbon cable.

    Cabling: Total Cable Solutions Pushes 40/100G Signals to 300 Meters

    Total Cable Solutions released a new end-to-end OM4+ solution for data centers designed to push 40/100G fiber signals up to 300 meters. The system includes trunk cables, harnesses, patch cords, MTP cassettes and adapter plates in violet to easily identify the new OM4+ infrastructure.

    TCS chose to adopt the violet color in an effort to create standardization and avoid confusion in the data center. Other vendors have used the color as well.

    10:20p
    Ten Need-to-Know Tips for Selecting a DCIM Vendor

    The promise of Data Center Infrastructure Management (DCIM) has been discussed ad nauseam. Many potential end users understand the concept and like what they’ve heard about the theoretical benefits but just aren’t quite sure how to begin their journey.

    In a Data Center World presentation, Stuart Hallin, a senior technical consultant for Cormant, offered some vendor-neutral tips on how to pick a DCIM solution so users can move toward implementation. He discussed the general criteria for picking an initial solution and how to select the right vendor from what has become a virtual “sea” of DCIM vendors – all making big promises. Here are 10 things you need to know:

    1. Focus on needs, not wants.

    A big mistake in making a DCIM selection is picking based on features, not needs. A lot of DCIM vendors promise the world, but many are, in truth, specialists in particular areas. One size does not fit all, and your situation is unique. Establish a few of your biggest immediate needs, and select a vendor strong in those areas. Look at DCIM from a cost vs. pain perspective instead of focusing on shiny new features.

    2. Have a deeper goal in mind.

    You likely will not implement your end game out of the gate. Keep your ultimate goal (e.g., automation or integration) in mind and understand that DCIM will evolve in that direction.

    3. Don’t make setup and configuration the center of your evaluation.

    Again, DCIM needs to be viewed as a long-term strategy. Easy setup does not necessarily mean that DCIM will evolve with you.

    4. Focus on enforcement of process, updates and keeping DCIM accurate.

    This one falls under the “DCIM is not a magic bullet” category. The best tools in the world are useless in the wrong hands, and DCIM is an ongoing process. Invest in better measurement tools, such as RFID tags and barcodes, to improve the quality of what you measure. These also make system updates easier on operators and help keep DCIM data up to date.

    5. Establish a Framework.

    Start at a high level and dive deeper from there. Hallin suggests starting with equipment lifecycles. Vendors are more than happy to assist with implementation, so definitely take advantage of that.

    6. Share goals and working processes with vendors.

    Vendors know what works and what doesn’t, and they want your DCIM implementation to succeed, so be open with them. Other suggestions from Hallin include:

    • Share process framework info with your Request For Information/Request For Proposal (RFI/RFP) to increase understanding.
    • Narrow potential solutions by issuing RFI/RFP, then request demos from the best responses.
    • Ask vendors your toughest questions in the RFI/RFP.

    7. Make them prove it.

    Vendors make a lot of promises, so make them prove they can meet your top needs with a proof of concept or trial. Again, this is more worthwhile if you’ve already established your criteria. Hallin suggests paying for the trial, some consulting, or a vendor’s on-site presence to get the best evaluation. DCIM vendors compete hard for these deals, so make them work for it.

    8. Start with a single site or area for initial deployment and grow out.

    Identify the main stakeholders. Are they in operations, facilities, networking, system administration? Then evaluate accordingly, looking for thorough operational analytics capabilities for each stakeholder. Keep in mind that each stakeholder looks for different things: a manager wants to easily see the past, present, and future, and how much DCIM is reducing costs or increasing efficiency. Taking easy snapshots and enforcing processes are two more ways to help make the initial deployment successful.

    9. Understand cost of investment.

    The investment goes a lot further than just the DCIM software. In addition to acquiring a license, there’s maintenance and support, professional services, additional hardware like RFID scanners and mobile devices, and time. How many hours a week do you want to invest in training internal staff?

    10. Determine the sources for budget and people.

    Hallin says to consider tapping budget from multiple departments and to determine how much work will be outsourced. The top outsourced item is record conversion. Make one or two people “own” DCIM. While you might want multiple staffers to be able to pull useful knowledge from DCIM, you need to make it clear who is ultimately responsible for its success.

