Data Center Knowledge | News and analysis for the data center industry
Thursday, July 6th, 2017
The Peculiarities of High-Availability Data Center Design on a Cruise Ship

While watching the sun disappear below the horizon or stargazing at night from the deck are the staples of a cruise experience, vacationers also want to watch movies on-demand or browse the internet while in their cabins.
Much like a big hotel, a cruise ship usually has a data center onboard to provide digital services. While a data center on a ship is similar to one in a hotel – both have servers, storage, and networking gear to run software – there are some differences.
Cruise ships are mobile, speeding toward their next port of call in the Baltic Sea, the Mediterranean, or the Canary Islands, and ensuring service availability means the primary and backup data centers are usually on the same vessel, not miles apart.
That’s the design the German IT services provider BSH IT Solutions implemented aboard six vessels operated by TUI Cruises, a Hamburg-based joint venture between TUI AG and Royal Caribbean Cruises Ltd.
Niels Heider, project manager at BSH, and his team installed primary and backup data centers in separate fire zones and on different decks. One data center is located in the bow of the ship, while the other is on the stern, he told Data Center Knowledge in an interview.
“It’s the same. Ships are the same as a hotel,” he said. “It’s nothing special.”
The data centers allow TUI to run important ship operations, such as allowing passengers to order food, drinks, and other services. The IT infrastructure also provides passengers with Wi-Fi access, video-on-demand, and Wi-Fi calling, while giving employees access to email and other administrative applications, Heider said. The typical applications are Microsoft Exchange, Active Directory, and SQL Server.
TUI announced in June an implementation of DataCore Software’s SANsymphony storage virtualization software throughout the six-vessel fleet, including the newly built 2,534-passenger Mein Schiff 6, which made its maiden voyage last month.
TUI standardized on VMware, running VMs mostly on Hewlett Packard Enterprise servers and storage hardware, but one ship uses Dell servers and storage, Heider said. On the networking front, some ships use Cisco networking equipment, while others use HPE networking gear.
Heider’s team deployed DataCore’s SAN virtualization software to simplify storage management and ensure high availability. The software synchronously mirrors data across two data centers on each ship, so if one of them fails, the other takes over automatically. “It’s really good software and easy to use,” he said.
BSH spent the last four to five years upgrading and installing the data center infrastructure in TUI cruise ships. It takes about nine months to plan, test, and install the technology on each vessel, Heider said.
On the recently finished Mein Schiff 6, for example, each data center has two physical servers running virtual machines and another six physical servers running and housing the video-on-demand system. Each data center houses 26TB of storage.
To provide passengers and the 1,000 crew members on the ship the bandwidth they need, the BSH team built a network using HPE’s 10 Gigabit Ethernet networking equipment. The connection between the data centers is 40 gigabits per second.
Google Hopes Nutanix Can Unlock the Enterprise Data Center for Its Cloud Business

Google Cloud Platform’s new partnership with Nutanix, announced last week at the Nutanix .NEXT conference, at first glance seems to be a win for both companies. The agreement promises seamless integration between customers’ environments in Google’s cloud and Nutanix infrastructure in their own data centers. It also promises a new Internet of Things platform that combines machine learning-enabled Nutanix boxes at the edge with core data infrastructure in the cloud using Google’s open source machine learning software library TensorFlow.
Google, now a unit of Alphabet, had built the largest cloud on the planet long before the word “cloud” was being bandied about. The trouble was, it was a custom private cloud built to meet the needs of Google and Google only. By the time the search company decided to get into the public cloud game, Amazon had a six-to-eight-year head start and had already built Amazon Web Services into a company with a billion dollars of annual revenue in its sights.
And although Microsoft Azure started at about the same time as GCP, Redmond had an advantage in that it already had deep business relationships with the enterprise and enough VAR partners to populate a small country.
Playing catch-up, Google has spent billions of dollars annually to build its network of data centers and telecommunications infrastructure. As we reported a few months back here on Data Center Knowledge, last year it shelled out $10.9 billion in capital expenditures, much if not most of it on this cloud infrastructure.
But although the company has built enough to now compete effectively in most global markets, it’s still paying the price for being late to the game. As Gartner VP Michael Warrilow told Data Center Knowledge back in March, Google Cloud Platform is “the third horse in a two-horse race, but it could well become a three-horse race. They’re doing all the right things. They’re enterprise-scale, but are they enterprise-friendly? And the answer is, that’s still a work in progress.”
Partnerships like the one with Nutanix are key to continuing that progress.
Like Google — and everyone else for that matter — San Jose-based Nutanix has its eyes on the enterprise and specifically the hybrid cloud. It’s been an innovator in the data center arena, a pioneer of both hyper-converged infrastructure and software-defined storage. Among other things the company offers solutions like Calm, which allows for easy shifting of workloads from on-premises to public cloud as needed.
“Hybrid Cloud needs to be a two-way street,” Nutanix president Sudheesh Nair said in a statement. “The strategic alliance with Google demonstrates our commitment to simplify operations for our customers with a single enterprise cloud OS across both private and public clouds – with ubiquity, extensibility, and intuitive design.”
Until the Google deal, Nutanix’s most important partnership had been with Dell, which distributes the Nutanix stack on its XC Series servers and which also now owns VMware, AWS’s chief hybrid cloud partner. Lenovo, which introduced its ThinkAgile SX for Nutanix last week at the same .NEXT event, is also a partner.
The deal with GCP fills a sweet spot, allowing Nutanix’s cloud OS to span private and public clouds with the click of a mouse, employing Kubernetes on-prem and Google Container Engine in Google’s cloud. By utilizing Calm for GCP, users will have a single control plane for managing applications across GCP and local cloud environments, and Nutanix Xi Cloud Services for GCP will allow for easy “lift-and-shift” operations to quickly move workloads from on-prem servers to the public cloud when needed.
“With this strategic alliance with Nutanix, Google is addressing one of the most pressing technology challenges faced by enterprises – the ability to manage hybrid cloud applications without sacrificing security or scalability,” Nan Boden, Google Cloud’s head of global technology partners, said in a statement. “Partners like Nutanix are essential for us to build a thriving ecosystem and help enterprises innovate faster.”
This partnership should bolster GCP by giving it something other than a me-too hybrid cloud solution to go against the AWS-VMware partnership and Microsoft’s Azure Stack, and eventually giving it a more solid presence on the edge.
How well it pans out might depend more on Nutanix’s commitment than on anything Google does. Nutanix should have plenty of motivation, however. The company reached a valuation of around $2 billion in 2014, but that was driven primarily by venture capital funding; otherwise, it has been losing money. For the third quarter of fiscal 2017, it reported a net loss of $112 million, compared to a $46.8 million loss in the third quarter of fiscal 2016. That’s not as bleak a picture as it might seem: revenue for the quarter was up over 90 percent, and deferred revenue rose more than 160 percent year-over-year.
Baidu Partners with NVIDIA to Apply AI Across Cloud, Autonomous Vehicles

Brought to you by IT Pro
Shares of Chinese internet search and cloud company Baidu rallied in premarket trading on Wednesday after the company detailed a partnership with Santa Clara-based chipmaker NVIDIA to expand investment in artificial intelligence across its cloud and self-driving vehicle initiatives.
According to Baidu president and COO Qi Lu, who spoke at the company’s AI developer conference in Beijing on Wednesday, one of the ways Baidu will work with NVIDIA is by bringing the chipmaker’s next-generation Volta GPUs to Baidu Cloud.
Baidu will deploy NVIDIA HGX architecture with Tesla Volta V100 and Tesla P4 GPU accelerators for AI training and inference in its data centers, according to the announcement.
NVIDIA is betting on a future where the majority of data center workloads will be deep learning, and it is working with cloud providers like Baidu, Alibaba, and Tencent in China, and Google and Microsoft in the US.
See also: NVIDIA CEO: AI Workloads Will Flood Data Centers
Though NVIDIA still makes most of its money selling graphics chips for video games, it has emerged as the leading provider of processing power for AI software, according to MIT Technology Review, which recently ranked it the smartest company on its list of 50, ahead of Amazon and Alphabet, at No. 3 and No. 5, respectively.
Researchers and companies will be able to leverage Baidu’s PaddlePaddle deep learning framework with NVIDIA’s TensorRT deep learning inference software to develop services with real-time understanding of images, speech, text and video, Baidu said in a statement. PaddlePaddle is used to develop Baidu search rankings, image classification services, real-time speech understanding, visual character recognition, and other AI-powered services.
“NVIDIA and Baidu have pioneered significant advances in deep learning and AI,” Ian Buck, NVIDIA vice president and general manager of accelerated computing, said in a statement. “We believe AI is the most powerful technology force of our time, with the potential to revolutionize every industry. Our collaboration aligns our exceptional technical resources to create AI computing platforms for all developers — from academic research, startups creating breakthrough AI applications, and autonomous vehicles.”
On the autonomous vehicle front, Baidu will adopt NVIDIA’s DRIVE PX platform for Apollo, its open self-driving car initiative, and plans to develop self-driving cars through partnerships with major Chinese carmakers including Changan, Chery Automobile, FAW, and Great Wall Motor.
See also: Edge Data Centers in the Self-Driving Car Future
On the consumer end of things, Baidu plans to bring AI capabilities to NVIDIA SHIELD TV with its DuerOS conversational AI system, adding voice command capabilities.
“Today, we are very excited to announce a comprehensive and deep partnership with NVIDIA,” Lu said at the event. “Baidu and NVIDIA will work together on our Apollo self-driving car platform, using NVIDIA’s automotive technology. We’ll also work closely to make PaddlePaddle the best deep learning framework; advance our conversational AI system, DuerOS; and accelerate research at the Institute of Deep Learning.”
Earlier this year, NVIDIA partnered with four Taiwanese electronics manufacturing giants who will design and manufacture its latest AI servers for data centers operated by cloud providers.
This article originally appeared on IT Pro.
One of China’s Poorest Provinces Emerges as a Big Data Hub

China is a country of extremes, where well-developed industrial cities flourish while rugged, less-developed regions struggle.
One of the remotest and historically poorest provinces in Southwest China—Guizhou—has come a particularly long way in a short time and is well on its way to becoming a hub for China’s push into big data. What resembled suburbia a decade ago has been converted into a new urban district complete with skyscrapers, a convention center, and data centers.
High-speed railways, bridges, tunnels, and added international flights linking it to domestic and foreign cities have lifted the province from isolation and connected it with the world.
Ranked 25th out of 31 Chinese provinces economically, Guizhou has hosted the country’s four-day International Big Data Expo three years in a row, and by the time the 2017 event concluded at the end of May, exhibiting companies had inked contracts worth $2.4 billion, according to a report by NPR.
Many technology behemoths made the trek to the Far East for the event: Apple, Facebook, Microsoft, Google, Amazon, Intel, IBM, and Dell were all there. Silicon Valley elites such as Stanford’s AI and ethics professor Jerry Kaplan, start-up entrepreneur and creator of Founders Space Adelyn Zhou, and Steve Hoffman, regional lead of developer relations at Google, were there as well, according to the Expo website.
With an average year-round temperature of 59 degrees Fahrenheit, Guizhou is well-suited for data centers, and the central government has done an admirable job of attracting firms with pilot programs and discounts on hydroelectric power.
Taiwanese electronics company Foxconn, which manufactures servers in addition to iPhones, Kindles, PlayStations, and other gadgets, has a factory and a 6,000-server Green Tunnel data center located an hour’s drive from the provincial capital.
Like many companies in China, Foxconn is trying to make its manufacturing operations more efficient through cloud computing, networked machines, and, eventually, artificial intelligence. All of this requires storing and analyzing huge amounts of data.
Surviving the Fallout of the Deep Root Leak: Best Practices for AWS

Sekhar Sarukkai is the Co-founder and Chief Scientist at Skyhigh Networks.
The Deep Root Analytics leak that affected 198 million Americans sent shockwaves throughout the world, as the majority of the adult US population had its voter information exposed to the public. Against an increasingly turbulent political and cybersecurity landscape, data protection is becoming essential for the public and private sectors alike, yet these leaks seem to be happening more frequently.
The Deep Root security incident, alongside many others before it, only further proves the necessity of proper security practices for frequently used but often-neglected IaaS platforms such as AWS. There were essentially no such protections for Deep Root’s data, which was stored in an AWS S3 bucket: anyone who knew the simple six-character Amazon subdomain could access it.
Data vulnerability is nothing new to the security industry, but adopting the right best practices can make any bit of data, no matter how sensitive, secure in AWS. Although Amazon has made significant investments in securing its AWS platform, gaps still exist that hackers could exploit to gain access to sensitive information, take an application offline, or erase data entirely.
Amazon has developed sophisticated tools, such as AWS Shield, to mitigate DDoS attacks, yet a larger, more coordinated effort could still overwhelm the system. And even with such protections in place, many data breaches are caused by insiders, whether through negligence or malicious intent. In fact, enterprises face nearly 11 insider threats per month on average, making internal as well as external security essential to safeguarding sensitive data.
Another important AWS vulnerability is improper configuration. Within the shared responsibility model, Amazon monitors AWS infrastructure and platform security and responds to incidents of fraud and abuse. Because customers often require custom applications, they are responsible for configuring and managing the services themselves, notably EC2, VPC, and Amazon S3. This includes installing updates and security patches; vulnerabilities may arise if these are left unattended.
Best Practices for a More Secure AWS
- Activate multi-factor authentication when signing up for AWS in order to enable an additional level of security from the start. This should be applied to both the root user account and all subsequent IAM users. Authentication for the root account should be tied to a dedicated device independent of the user’s mobile phone, in case the personal device is lost or the user leaves the company.
- Use a strict password policy, as users tend to create easy-to-remember but easy-to-guess passwords. Strict policies make passwords harder to manage, but they establish strong protection against brute-force attacks. At the very least, passwords should be 14 characters long and include at least one upper-case letter, one lower-case letter, one number, and one symbol. (The hedged sketch after this list shows how this and a few of the other items can be scripted.)
- Make sure CloudTrail is active across all of AWS, because global logging lets you retain an audit trail of activity within AWS services, including those that are not region-specific, notably IAM and CloudFront.
- Turn on CloudTrail log file validation in order to detect whether a log file was changed after it was delivered to the S3 bucket. Not only does this create another layer of security for the bucket, but validation also makes it easier to discover potential tampering.
- Activate access logging for CloudTrail S3 buckets to track access requests and identify unauthorized or unwarranted access. Customers can also review past access requests.
- Don’t use the root user account for day-to-day work, because it has access to all services and resources in AWS. Since the root user is so highly privileged, it should only be used to create the first IAM user, after which the root credentials should be locked away.
- Terminate unused access keys, which decreases the chance of a compromised account or insider threat. It is recommended that access keys be deleted after 30 days of inactivity. These precautions should also be applied to IAM access keys in order to prevent unauthorized access to AWS resources.
- Avoid using expired SSL/TLS certificates because they may no longer be compatible with AWS services, leading to errors for ELB or custom applications, impacting productivity and overall security.
- Use standard naming and tagging conventions for EC2, as this reduces the risk of misconfiguration. Consistent conventions make it less likely that someone misuses a tag or name, decreasing the number of potential vulnerabilities.
- Restrict access to Amazon Machine Images (AMIs), because if left unrestricted, anyone with an AWS account can find them among community AMIs and use them to launch EC2 instances. Restricting access prevents enterprise-specific application data from being exposed to the public.
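As a rough illustration, the sketch below scripts a few of the items above with Python and boto3. Neither the language, the library, nor the trail and bucket names come from the article; they are assumptions chosen for the example, and the 30-day rule is interpreted here as "no recorded use in the last 30 days."

```python
"""Hedged sketch only: automates a few of the checklist items above with boto3.

Assumptions not taken from the article: Python/boto3 as tooling, the
hypothetical trail and bucket names, and treating "30 days of inactivity"
as no recorded key use in the last 30 days. Pagination is ignored for brevity.
"""
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
cloudtrail = boto3.client("cloudtrail")

# Confirm the root account has MFA enabled (first item in the list above).
if iam.get_account_summary()["SummaryMap"].get("AccountMFAEnabled") != 1:
    print("WARNING: root account MFA is not enabled")

# Enforce a strict password policy: 14+ characters, mixed character classes.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
)

# Create a multi-region trail with global service events and log file
# validation enabled, then start logging to it.
cloudtrail.create_trail(
    Name="org-wide-trail",                   # hypothetical name
    S3BucketName="example-cloudtrail-logs",  # hypothetical, pre-existing bucket
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,         # captures IAM, CloudFront, etc.
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-wide-trail")

# Deactivate IAM access keys that show no use in the last 30 days.
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
for user in iam.list_users()["Users"]:
    keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
    for key in keys:
        last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        used_at = last["AccessKeyLastUsed"].get("LastUsedDate", key["CreateDate"])
        if used_at < cutoff:
            iam.update_access_key(
                UserName=user["UserName"],
                AccessKeyId=key["AccessKeyId"],
                Status="Inactive",
            )
```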
Applying best practices for AWS services and infrastructure is only a small part of the puzzle, as custom applications deployed in AWS also require similar safety precautions. Without proper security configurations, the Deep Root leak may become one of many data breaches that impact hundreds of millions of people. However, by employing security best practices, organizations can withstand even the most sophisticated threats and shelter their most valuable data.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Sensors Supercharge Predictive Data Center Modeling

Sensors are decreasing in cost and increasing in functionality. They are becoming more accurate, more reliable, and easier to install. Can they form the basis for a more accurate predictive data center modeling system? Ideally, data from sensors could be used to arrive at an optimal operational point for minimizing energy consumption. How might this work, and are data center managers ready for it?
Typically, where predictive modeling is required, computational fluid dynamics systems come into play. Soeren Brogaard Jensen, former VP for enterprise software and services at Schneider Electric and currently CTO of Trackunit, envisaged usage models for CFD without sensors.
“That’s where you base things on a fairly sophisticated model, where you have your equipment mapped out, and you understand the relationships and how it all connects together,” he said. This theoretical model can be used to map out what-if scenarios.
The next level involves overlaying a sensor model on top of the theoretical model to gain a real-world view of the data center’s environment and equipment.
See also: Machine Learning Tools are Coming to the Data Center
CFD tools are best used as an indicator of trends rather than as a precise forensic tool, but the closer the model’s data is to the real data in the room, the more informed the resulting decisions can be. Many models make assumptions based on the type and location of the equipment installed, but these can then be calibrated against measured data from the sensors to validate the model.
UK-based Future Facilities, which sells a CFD system for data center managers, manually measures environmental parameters to calibrate models based on theoretical specifications.
“When we do a model of a DC we do calibration. We survey the DC, inspect the cabinets, take temperature, pressure, airflow, and we will make sure that the model matches reality,” said Matt Warner, former manager of the firm’s UK development team.
The Benefits of Sensors
Implementing sensor technologies in a data center can help to automate data gathering for managers intent on modeling scenarios. The readings can then be fed back into a CFD modeling system used to plan what happens under various scenarios, such as equipment failures or the installation of more hardware in the racks.
Managers can examine various parameters in these scenarios. These include the coefficient of performance (the ratio of the heat removed from the room to the energy expended removing it) and the rack cooling index, a quantifiable metric that measures compliance with ASHRAE’s thermal guidelines; the short sketch below illustrates both.
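As a minimal sketch of those two metrics, the snippet below assumes Herrlin’s formulation of the rack cooling index (RCI-HI) and ASHRAE’s A1 thresholds (27 C recommended maximum, 32 C allowable maximum); neither the formulation nor the thresholds are spelled out in the article.

```python
# Minimal sketch of the two metrics named above. Assumptions not in the
# article: Herrlin's RCI-HI formulation and the ASHRAE A1 intake thresholds.

def coefficient_of_performance(heat_removed_kw: float, cooling_power_kw: float) -> float:
    """Ratio of heat removed from the room to the energy spent removing it."""
    return heat_removed_kw / cooling_power_kw

def rack_cooling_index_hi(intake_temps_c, t_rec_max=27.0, t_all_max=32.0) -> float:
    """RCI-HI: 100 percent means no rack intake exceeds the recommended maximum."""
    over = sum(max(0.0, t - t_rec_max) for t in intake_temps_c)
    return (1 - over / (len(intake_temps_c) * (t_all_max - t_rec_max))) * 100

print(coefficient_of_performance(heat_removed_kw=500, cooling_power_kw=125))  # 4.0
print(rack_cooling_index_hi([24.5, 26.0, 28.0, 25.5]))  # 95.0: one intake runs warm
```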
ASHRAE publishes allowable temperature and humidity levels within data centers, setting clear operating parameters for computing equipment. One of the biggest benefits of installing environmental sensors in the data center is that, judiciously placed, they can help managers to raise temperatures safely within these bounds.
If the computing equipment is running too cold, the data center manager risks over-using the cooling equipment within the data center and wasting power.
James Cerwinski, former director of DCIM at Raritan, cited Gartner research to the effect that for every degree by which data center managers can avoid over-cooling, they can save 3-4 percent on cooling expenses; the rough calculation below shows how quickly that adds up.
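The following back-of-the-envelope calculation applies the midpoint of that 3-4 percent range to a hypothetical cooling bill; both the bill and the three-degree setpoint increase are invented numbers, not figures from the article.

```python
# Rough illustration of the "3-4 percent per degree" rule of thumb cited above.
# The baseline cooling bill and the setpoint increase are hypothetical.
annual_cooling_cost = 400_000   # USD per year, hypothetical facility
savings_per_degree = 0.035      # midpoint of the 3-4 percent range
degrees_raised = 3              # hypothetical setpoint increase

# Compound each degree against the already-reduced bill to stay conservative.
cost = annual_cooling_cost
for _ in range(degrees_raised):
    cost *= (1 - savings_per_degree)

print(f"Estimated annual savings: ${annual_cooling_cost - cost:,.0f}")
# -> Estimated annual savings: $40,547 on this hypothetical bill
```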
“There is another consideration. If you go too hot, depending on the configuration of your service you may drive up energy consumption by excessive fans running in the servers. So every customer has to understand their own environment,” he warned.
Types of Measurement
Temperature and humidity sensors often go together when instrumenting a data center. The ambient temperature, combined with the level of moisture in the air, determines when condensation happens as water forms droplets on nearby surfaces, endangering operations; the sketch below shows one way to estimate that point.
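As an illustration, the hedged snippet below estimates the dew point from a temperature and relative-humidity reading using the Magnus approximation and flags conditions where a cold surface sits too close to it. The Magnus coefficients are standard, but the two-degree safety margin and the example readings are made up for the sketch.

```python
# Hedged sketch: dew point from a temperature/humidity sensor pair via the
# Magnus approximation, used to flag potential condensation. The safety margin
# and example readings are invented for illustration.
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    a, b = 17.62, 243.12  # common Magnus coefficients for temperature in Celsius
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

def condensation_risk(temp_c: float, rel_humidity_pct: float,
                      coldest_surface_c: float, margin_c: float = 2.0) -> bool:
    """True if the coldest nearby surface sits within the margin of the dew point."""
    return coldest_surface_c <= dew_point_c(temp_c, rel_humidity_pct) + margin_c

print(round(dew_point_c(24.0, 60.0), 1))                      # ~15.8 C
print(condensation_risk(24.0, 60.0, coldest_surface_c=14.0))  # True: too close
```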
Another reason to install sensors is to prevent hotspots from building up in specific areas of the data center. Airflow sensors can detect the amount of air arriving via a floor-located conduit for cooling purposes, for example, ensuring that equipment such as networking cables is not blocking the conduit and choking off the supply of chilled air. Airflow sensors can also be deployed to ensure that hot air returns are similarly free from obstruction.
These sensors can help to feed models that factor in the return temperature index, a measure of how much air recirculates or bypasses the cooling units, which should ideally sit at 100 percent. This is another metric that can be put to good use in a CFD package; one common way to compute it appears below.
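One common formulation of the return temperature index (Herrlin’s, assumed here rather than taken from the article) compares the temperature rise across the cooling units with the rise across the IT equipment:

```python
# Assumed formulation (Herrlin): return temperature index as the ratio of the
# temperature rise across the cooling units to the rise across the IT equipment.
def return_temperature_index(t_return_c, t_supply_c, t_equip_out_c, t_equip_in_c):
    """~100 percent suggests little recirculation or bypass of conditioned air."""
    return (t_return_c - t_supply_c) / (t_equip_out_c - t_equip_in_c) * 100

print(return_temperature_index(32, 18, 36, 22))  # 100.0: airflow well balanced
```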
Differential air pressure sensors can also be used to detect differences in pressure between parts of the data center, such as hot and cold aisles. If the pressure differential goes above a certain threshold, it can result in leaks and mingling of air with different temperatures.
Philip Squire, design director at UK-based colocation provider Ark Data Centres, worked with a third-party partner to design its free-air cooling system from the bottom up, and designed its monitoring infrastructure to fit.
“We have four sensors in each aisle, two on each side, for 26 cabinets,” he said. “We’re using air pressure sensors, because they measure the pressure differential between the hot aisle at the front and the cold aisle at the back. Those control the volume of air that we need to go into the aisle, and they are monitoring to ensure that we have a four pascal difference in pressure.”
The benefits of installing sensors in the data center extend beyond the technical. “Many customers work in secure environments, and we can install sensors that trigger an alarm if a door opens and closes,” Cerwinski said. These sensors can alert managers if cabinets containing sensitive hardware or applications are opened by unauthorized personnel. And smoke particle sensors can alert staff to fires in the data center environment.
Where to Place Sensors
Depending on the kind of sensors being used and what is being monitored, data center managers can place them at different points in the environment. Sensors designed to support ASHRAE temperature and humidity controls can be installed within the rack, at the top, middle, and bottom. Another can be installed at the back of the rack to measure containment, and this configuration can be repeated every fourth rack, Cerwinski said.
“Sensing at the cabinet level is important, because many data centers have difficulty providing real-time power and cooling data at the cabinet,” explained Aaron Carman, worldwide critical facilities strategy leader at HP.
Device-level Sensing
Experts caution data center managers not to ignore device-level sensing, which can be particularly useful in analyzing the effective operations of IT equipment.
These on-device sensors only entered widespread use during the last five years, argued Jensen, but they can bring significant benefits, by helping to merge two domains: facilities-level monitoring and IT infrastructure.
“In recent years, we are now overlaying the IT sensors. You have those in the cabinet itself, mounted inside the rack. That offers another element of information in the modeling of the data center,” he said.
Many of these sensors sit directly on the motherboard, while others sit behind the front plate where the air enters the chassis.
Deployment Costs
All of this means that capital expenditure may be the smallest part of the overall cost of a data center sensor initiative. Jensen argues that capital outlay on the sensor equipment itself may be less than 20 percent of the overall cost. Data integration can be more costly, along with other aspects of deployment that data center managers often underestimate, he suggests. These include location mapping, information sharing, and alerts, in addition to the training and tools needed for post-deployment usage.
Nevertheless, sensor technology has developed significantly in recent years, offering increasing levels of functionality that can help to ease some of those deployment costs.
Typically in the past, data center managers would deploy some sensors as a part of the existing data center network. They could be inserted into rack PDUs as plug-and-play options in some cases. In this kind of installation, the sensor communications can piggyback via the network already used to monitor the PDUs.
The alternative is to deploy them as a separate overlay network. An overlay network requires a controller with its own network connections, which can make this approach relatively expensive, because network drops also have to be deployed.
Wireless Sensors
More recently, wireless sensor technologies have emerged that can bring several benefits to deployment teams. Such sensors often use the Zigbee standard, connecting via a wireless mesh network that introduces a level of redundancy into the system. The wireless network is consolidated by a controller, which then relays the data to a DCIM system, potentially reducing the cost.
“These mesh networks allow them to be more fault tolerant in the way that they connect across the data center,” Jensen said.
There are other advances, too. Sensors have become smaller, and accuracy has increased, he says. But perhaps one of the biggest drivers has been the move toward using sensor data to control cooling units directly.
“Wireless sensors are also turning into wireless control systems. A couple of years ago we didn’t see a lot of controls in this space, but now we are,” he said.
Using Sensors to Control Airflow
Some companies are using sensor technology as the basis for control loops to regulate data center temperature. Ark Data Centres’ Squire explains that he used the differential pressure sensors installed along his cabinets to control airflow.
“We measure pressure in the cold aisle, and we use pressure to control airflow,” he said. “At the other end of the airflow system, where the hot air comes out of the servers and then returns back to the input side of our air optimizers, it measures temperature and humidity as well as pressure. There, we control how much it mixes with outside air to deliver the right conditions for the cooling system.”
At the other side of the air optimizer it measures pressure, temperature, and humidity, to ensure that the right mixture of air is being delivered on the cool side. These controllers and sensors interact to ensure that it is making the best use of the waste heat that comes out of its servers, and the best use of the ambient external air.
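As a rough sketch of what such a control loop might look like in software: the 4-pascal setpoint echoes Ark’s description above, but the proportional gain, the fan-speed limits, and the simulated readings are invented for the example and do not describe any vendor’s control system.

```python
# Hedged sketch of a proportional control loop holding an aisle pressure
# differential. Only the 4 Pa setpoint comes from the article; the gain,
# fan-speed limits, and readings are hypothetical.
SETPOINT_PA = 4.0
GAIN = 2.0                          # percent fan speed per pascal of error
MIN_SPEED, MAX_SPEED = 20.0, 100.0

def next_fan_speed(current_speed_pct: float, measured_diff_pa: float) -> float:
    """Nudge fan speed up when the differential sags, down when it overshoots."""
    error = SETPOINT_PA - measured_diff_pa
    return max(MIN_SPEED, min(MAX_SPEED, current_speed_pct + GAIN * error))

speed = 50.0
for reading in [3.2, 3.6, 4.1, 4.4]:  # simulated differential readings, in pascals
    speed = next_fan_speed(speed, reading)
    print(f"diff={reading:.1f} Pa -> fan speed {speed:.1f}%")
```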
Controlling data center operations in real time can provide significant efficiency savings, but it is also possible to use this sensor data for strategic outcomes. It enables data center managers to build an influence map, said Jensen.
“It basically shows you in real time, given any constraint that you have in the room, how every CRAC unit is impacting every area of the room,” he said. “Often, you may find that a CRAC is impacting an area in the room that you’d never thought about.”
Maps like these can provide actionable insight into issues such as data center overcooling, he explained. Understanding the temperature dynamics of the data center in real time can enable managers to turn down cooling equipment in certain parts of the facility, he suggests, based on predictive analyses of what will happen. In some cases, he believes, data center teams may be able to see ROI on sensor deployments in under a year.
“CFD is critically important in the planning cycle, for understanding impact and what happens when you run out of certain resources,” Jensen said, arguing that data center operations benefit from both live data and CFD for effective planning.
The idea here is to create a positive feedback loop, with sensor data used to validate and tune theoretical CFD models, which then enable managers to find the optimal operating point in a data center. The sensors can then be used to maintain it, especially if they’re used as part of a control system to affect key metrics such as airflow.
With sensor technology evolving, and with pressure on data center managers increasing, every piece of actionable intelligence is useful. Together, sensors and predictive data center modeling tools can be a formidable weapon in the planners’ arsenal.
This article was originally published in the 2015 March/April issue of AFCOM’s Data Center Management magazine.