Data Center Knowledge | News and analysis for the data center industry
Wednesday, February 22nd, 2017
4:00p | Artificial Intelligence and the Evolution of Data Centers
Charles-Antoine Beyney is Co-Founder and CEO of Etix Everywhere.
Data centers are proliferating to meet the relentless demand for IT capacity, and operators seek greater efficiency every day; each new innovation is a major step. To meet these requirements, Artificial Intelligence (AI) has arrived, holding tremendous promise for the industry.
Facility administrators and IT managers have several critical objectives for their data center operations, but none are as important as uptime and energy efficiency. According to 2016 research by the Ponemon Institute, the average cost of a single data center outage today is approximately $730,000. Unplanned outages are costly and time-consuming affairs that can be detrimental not just to a data center, but to an organization’s business operations and bottom line. Meanwhile, figures concerning data center energy consumption are equally compelling. Worldwide, data centers consume approximately three percent of the global electricity supply.
See also: This Data Center is Designed for Deep Learning
Fortunately, AI is mitigating data centers’ energy consumption, while improving uptime and reducing costs without compromising on performance.
What is AI?
AI is technology that enables machines to execute processes that would otherwise require human intelligence. A machine endowed with AI is capable of interpreting data to form its own conclusions and make reasonable operating decisions automatically.
Many progressive businesses today are using AI to optimize resource management and to gain a leg up on the competition. A smart, AI-enabled data center is now a necessity for any business that wants to achieve an operationally efficient, high-performance computing environment.
Common use cases of AI include:
Long-Term Planning: Research and development teams may use AI to predict the short- and long-term implications of strategic business decisions. In a manufacturing setting, for example, AI could be used to make accurate, long-term environmental predictions. This data could be very useful for planning eco-friendly business initiatives.
Game Theory: Some executives are using AI to predict how markets will react to certain business decisions. An AI engine can compile data from many different sources and help executives to better understand how customers and investors will respond to corporate announcements.
Collective Robot Behavior: Imagine a scenario where an unmanned drone has to land on an aircraft carrier. A successful landing would require many different connected systems to act as one, exchanging data in real-time from a variety of sensors monitoring ocean conditions, temperature, the speed of the craft and other vehicles that are attempting to land. In this case, AI is used to control the “collective behavior” of the different systems.
These diverse business cases make AI one of the hottest branches of computer science and a top focal point for technology providers today. According to MarketsandMarkets, the global AI market will grow at an astounding rate of 62.9 percent from 2016 to 2022, when it will reach $16.06 billion. Much of that increase will be driven by technology companies, including IBM, Intel, and Microsoft, that serve high-performance data center computing environments.
AI and the Data Center Industry
The same AI applications and strategies that are being used to guide larger business decisions are now making their way into the data center. AI is being used in conjunction with data center infrastructure management (DCIM) technologies to analyze power, cooling and capacity planning, as well as the overall health and status of critical backend systems.
Google, for instance, acquired AI startup DeepMind in 2014 and began using its technology to slash costs and improve efficiency in its data centers. The AI engine automatically manages power usage in certain parts of Google’s data centers by discovering and reporting inefficiencies across 120 data center variables, including fans, cooling systems, and windows.
Using AI, Google was able to reduce its total data center power consumption by 15 percent, which will save the company hundreds of millions of dollars over the next several years. On power consumed for cooling alone, the company has already saved 40 percent.
DCIM tools are software and technology products that converge IT and building facilities functions to provide engineers and administrators with a holistic view of a data center’s performance to ensure that energy, equipment and floor space are used as efficiently as possible. In large data centers, where electrical energy billing comprises a large portion of the cost of operation, the insight these software platforms provide into power and thermal management accrue directly to an organization’s bottom line while reducing its carbon footprint.
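As a concrete illustration of the kind of metric these platforms track, power usage effectiveness (PUE) is total facility power divided by IT load. The following is a minimal sketch, not any DCIM product's implementation; the readings and the 1.7 alert threshold are hypothetical assumptions.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT load.

    1.0 would mean every watt goes to IT equipment; the gap above 1.0
    is cooling, power conversion, lighting, and other overhead.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical hourly readings: (total facility kW, IT load kW).
readings = [(1200, 750), (1150, 760), (1400, 740)]

for total_kw, it_kw in readings:
    ratio = pue(total_kw, it_kw)
    # Flag hours where overhead pushes PUE past the assumed target.
    status = "OK" if ratio <= 1.7 else "investigate cooling/overhead"
    print(f"PUE {ratio:.2f} -> {status}")
```

A real DCIM platform would go further and correlate a flagged hour with the specific fan, chiller, or other subsystem responsible.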
By leveraging AI, automated software platforms such as DCIM, and smart devices, businesses can make their data centers more secure and eco-friendly while improving uptime and reducing costs — without compromising performance. With that in mind, why shouldn’t companies follow suit, or at least make AI a discussion point in 2017?
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:43p | BMW, Mobileye Partner to Collect and Share Self-Driving Car Data
Gabrielle Coppola (Bloomberg) — BMW AG will use chips and cameras made by Mobileye NV to collect mapping data for autonomous driving in its vehicles starting in 2018, the two companies said in a statement.
The German carmaker will also allow data to be merged with information collected from competitors’ fleets to speed the development of high-definition maps that are critical to enable autonomous driving. The announcement comes a week after Mobileye said it signed a similar accord with Volkswagen AG to collect and share mapping data for self-driving cars.
Mobileye has been lobbying its customers — traditional car manufacturers — to both install the mapping technology and allow it to merge that data into one collaborative mapping effort. Chairman Amnon Shashua has called technology that gathers crowd-sourced real-time mapping data from automakers’ fleets the “missing piece” in the race to achieve fully autonomous driving.
The data Mobileye collects using its Road Experience Management, or REM, technology will be provided to HERE, the mapping company owned by a consortium of BMW, Daimler AG, and VW-owned Audi, to develop a real-time mapping service that enables autonomous driving.
BMW already has a relationship with Jerusalem-based Mobileye, announcing in July a partnership with the company and Intel Corp. to put a fleet of fully autonomous vehicles on the road by 2021.
Mobileye reports earnings Wednesday before the market opens in New York. The company’s shares rose as much as 3.4 percent after Tuesday’s announcement with BMW.
8:00p | NTT Names Adams RagingWire CEO, Takes Full Ownership of Company
There wasn’t much of a search preceding NTT Communications execs’ decision to pick a successor to data center provider RagingWire’s former CEO, George Macricostas.
For the last year and a half, Macricostas, a RagingWire co-founder, has been grooming Douglas Adams, the company’s then president, who was also there from the beginning, to take over. “NTT had bought into that decision and was kind of watching things from the sidelines for the last year and a half,” Adams, RagingWire’s new CEO, says. “It was kind of a short conversation.”
NTT recently acquired the remaining 20 percent stake it hadn’t already owned in the Reno, Nevada-based data center provider – known to many for being Twitter’s data center landlord in Sacramento, California – and put Adams, 51, at its helm.
The big Japanese telco is one of the world’s largest data center providers (fifth-largest in retail colocation by market share). While it has strong positions in Asian and European data center markets, it isn’t one of the top players in North America. In 2013 it spent $350 million on a controlling 80 percent stake in RagingWire to change that. The US company has been tasked with building out NTT’s North and South American platform, and Adams is now in charge of that effort, although it doesn’t include non-RagingWire NTT facilities already in the US.
In addition to its massive data center campus in Sacramento, RagingWire has built one in Northern Virginia and expects to launch its first Texas facility, in Garland, this April. Next on the agenda are Chicago, New Jersey, and Silicon Valley, followed by locations in Canada and South America, although there’s no timeline for expansion north and south of the border at the moment, Adams says.

RagingWire’s upcoming data center in Garland, Texas (Image: RagingWire)
The Cloud Data Center Gold Rush
He is taking the helm amid a period of rapid growth in the data center provider industry, especially in the wholesale space, where companies like RagingWire and Digital Realty Trust build and lease out large multi-megawatt facilities. Today, the majority of wholesale customers are hyper-scale cloud platforms – the likes of Google, Amazon, and Microsoft – and their rapid expansion is driving most of the growth for these landlords.
“Hyper-scale cloud providers are snatching up every piece of available data center space in every market that they possibly can,” Adams says. “It’s caused incredible growth in our industry.”
To illustrate the current momentum, he notes that it took RagingWire 10 years to fill up its first building in Sacramento (12.6MW), three years to sell out its first building in Virginia (14MW), and just six months to sell out its latest, second Virginia building (also 14MW). All recent deals have been with hyper-scale cloud companies, according to Adams.

RagingWire’s CA3 data center in Sacramento, California (Photo: RagingWire)
It’s unclear if and when this growth will decelerate, but for now, RagingWire and its competitors are racing to build as much inventory as they can finance in top data center markets to take advantage of the environment.
The bet isn’t on the cloud providers alone. They’re also betting on traditional enterprises – the manufacturers, the retailers, the banks, the insurance firms – moving their servers into leased data centers, where they can connect to cloud providers directly, bypassing the public internet. Most enterprises still run the bulk, if not all, of their applications in-house.
Adams is bullish on the data center industry – “Everything is becoming digital; paper is dying” – and the only thing that worries him is the possibility of another economic crisis and the effect it may have on companies like his.
RagingWire came out of the 2008 meltdown relatively unscathed. According to Adams, the business went flat to slight growth during that period. Today, it’s profitable and growing at 20 percent annually, and he doesn’t want to go back to flat sales.
Other than the economy tanking again, he doesn’t see any potential obstacles to continued growth in the industry. While many have been saying that cloud computing will eventually put data center providers out of business, cloud adoption has obviously been a big plus for them – at least for companies with some scale. The cloud lives in data centers. “I’m not anticipating us having a difficult time,” he says.
Guiding Principles
Raised by a single mother who worked multiple jobs to support him and his two siblings in Glendale, California, Adams is no stranger to hard times. “We had absolutely no money,” he says.
He praises his mother for instilling the value of education in him, his brother, and his sister. The option of not going to college didn’t exist for him, and he ended up attending the University of Southern California, a prestigious private research university, with half of his tuition funded by a scholarship and grants.
After graduating from USC’s Marshall School of Business in 1989, Adams joined Mars Inc., where he held various sales and marketing positions for several of the company’s many brands, which in addition to M&Ms, Mars, and Snickers, include Dove Ice Cream, Uncle Ben’s Rice, and Pedigree, among others. Adams credits his time at Mars with giving him some core business values – such as egalitarianism and fiscal responsibility – that have been propelling his career to this day.
In the late nineties, after nine years at Mars, he joined the Japanese semiconductor giant NEC Corp. as vice president and general manager of the optical and magnetic drive division in the US. RagingWire was formed two years later.
Adams’s other core principles are ensuring good team chemistry, avoiding knee-jerk reactions, and not being overly focused on tactical maneuvering; he says these are the most important pieces of advice he’s ever been given. He usually waits for a situation to unfold so he can understand it better before reacting, and he spends at least a day every week thinking about strategy. “I don’t get caught up in the tactical.”
9:53p | Sponsored: Designing Data Center Remote Power Management and Monitoring
The idea is very simple: how do you manage and optimize your power requirements if you can’t see the metrics? As we know, data center environments and infrastructure services will only continue to evolve and expand. Business needs drive technological innovation, and cloud computing is certainly helping push organizations forward. As more IT organizations see the benefits of the hybrid data center model, administrators will need to learn how to properly size, manage, and deploy across IT platforms.
Already, new data center services are pushing the capacity of technologies like cloud computing to the next level. In fact, a 2015 NRDC report indicates that data center electricity consumption is projected to increase to roughly 140 billion kilowatt-hours annually by 2020 – the equivalent annual output of 50 power plants, costing U.S. businesses $13 billion annually in electricity bills. This is why data center operations are more critical than ever before. And what makes a data center run efficiently and resiliently? Rack and power intelligence.
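Those headline figures hang together arithmetically. In the sketch below, the electricity rate (about 9.3 cents per kWh) is an assumed average commercial rate, not a number taken from the NRDC report:

```python
kwh_per_year = 140e9        # projected annual US data center consumption (kWh)
plants = 50                 # power-plant equivalent cited above
rate_usd_per_kwh = 0.093    # assumed average commercial rate (illustrative)

annual_cost = kwh_per_year * rate_usd_per_kwh
per_plant_kwh = kwh_per_year / plants

print(f"annual electricity cost: ${annual_cost / 1e9:.1f} billion")
print(f"implied output per plant: {per_plant_kwh / 1e9:.1f} billion kWh")
```

At that assumed rate the cost works out to roughly the $13 billion cited, with each of the 50 plants supplying about 2.8 billion kWh a year.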
Data center operations rely more and more on data. Data center managers need to be able to oversee everything that happens in the white space, and they use intelligent rack solutions to provide the maximum amount of insight to assist their decision-making process, especially regarding power consumption inside the rack.
This means administrators must examine cost and availability as critical design factors. Businesses are asking managers to ensure uptime, availability, and intelligent power management at all times, even as they reduce the cost of operating the equipment. These are the keys to a successful IT operation, and relying on intelligent PDU hardware has become critical to achieving them.
Very recently, Server Technology added POPS (Per Outlet Power Sensing) to its industry-leading, award-winning HDOT PRO2 Alternating Phase Rack PDUs. The product expands on the most innovative power product on the market, with solutions for density, capacity planning, and uptime in the modern data center. But the technology doesn’t stop there: it also integrates directly with monitoring and management solutions:
- Creating Intelligent Power Management: Per Outlet Power Sensing (POPS) Switched technology provides the flexibility needed for all data centers and remote sites, including support for high-amperage and high-voltage power requirements, branch circuit protection, and SNMP traps and email alerts – including current monitoring. When paired with Sentry Power Manager (SPM), Server Technology’s award-winning power management solution, Switched POPS technology provides the most detailed power data within the cabinet.
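To make the idea concrete, here is a plain-Python sketch of the kind of threshold logic per-outlet current readings enable. The outlet values, branch rating, and 80 percent warning level are all hypothetical, and this is not Server Technology’s actual API or SNMP implementation:

```python
BRANCH_LIMIT_AMPS = 16.0   # assumed branch-circuit rating for the sketch
WARN_FRACTION = 0.8        # warn when the branch reaches 80% of its rating

# Hypothetical per-outlet current readings (amps) from a switched POPS PDU.
outlet_amps = {"A1": 3.2, "A2": 7.9, "A3": 0.0, "A4": 2.4}

def check_branch(outlets, limit=BRANCH_LIMIT_AMPS, warn=WARN_FRACTION):
    """Sum outlet currents and return (total_amps, list of alert strings)."""
    total = sum(outlets.values())
    alerts = []
    if total >= limit * warn:
        alerts.append(f"branch at {total:.1f}A, nearing {limit}A limit")
    for name, amps in outlets.items():
        if amps == 0.0:
            # A zero reading on a provisioned outlet may mean a failed device.
            alerts.append(f"outlet {name} drawing no current (failed or idle?)")
    return total, alerts

total, alerts = check_branch(outlet_amps)
for alert in alerts:
    print(alert)
```

In a real deployment the alert strings would become SNMP traps or the email notifications described above, rather than console output.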
ServerTech has many success stories to share, but let’s focus on one case study that illustrates PDU intelligence with next-generation power management:
- Client: University of Florida – Health
- Key Challenges: The UF Health Shands facilities in Gainesville have grown along with changes in population and medical technology. Joseph Keena, Manager of Datacenter Operations for Shands, has spent the last 10-plus years diligently working to keep ahead of the increasing demands placed on the Shands IT infrastructure. The UF Health IT facilities have grown from one datacenter hall to four during that time, and Keena has been there every moment to ensure the equipment can run today’s compute load and meet the expanding requirements placed on the systems. Overall challenges included:
- Third party access to the datacenter
- Knowing what it costs to run a system
- Custom power solution for new rack
- Collecting environmental information from the datacenter
- The Solution: Keena first began working with Server Technology (STI) about 10 years ago. He selected STI after evaluating a number of power solutions from other vendors, intrigued at the time by the potential of remotely monitored (Smart) and remotely managed (Switched) power for his datacenter. Using Smart and Switched products enabled Keena and the Shands team to access their rack power distribution units (PDUs) remotely to gather both power and environmental data. Today, Keena’s team has gone a step further by taking advantage of the individual-outlet power measuring capability of STI’s Switched POPS family of PDUs. Keena first sought the ability to report on what was going on with power in his datacenter. UF Health Shands IT was using a competing software package as a data collection tool that featured a power management plug-in, and STI was one of the supported manufacturers. When support for the power management plug-in went away, adopting STI’s SPM power management software became the logical choice for Keena.
Adding the environmental probes offered by Server Technology to his cabinets enables Joseph to see what the temperature and humidity of his datacenter look like at a granular, rack by rack level, just as he can with his power. And getting data from the probes doesn’t cost Joseph any additional Ethernet ports – the probes plug directly into his power strips.
- Business Outcomes and Benefits: Joseph sees future utility in using Alternating Phase Switched POPS combined with SPM to determine what it costs the hospital to run each individual server in the datacenter. “There is value in the data that you collect,” says Joseph. “And alternating phase power lets you cable and balance without re-cabling between the sections. It’s a beautiful thing.” Other big benefits include:
- Control
- Reporting
- Integration, ease of use
- Uptime
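Rolled-up probe data is what makes the rack-by-rack visibility described above actionable. The following sketch uses assumed values: the samples are hypothetical, and the 27°C ceiling is loosely borrowed from ASHRAE’s recommended inlet envelope, not a Shands figure:

```python
# Hypothetical (rack, temp_c, humidity_pct) samples from PDU-attached probes.
samples = [
    ("R1", 24.1, 45), ("R1", 25.0, 44),
    ("R2", 28.3, 38), ("R2", 27.9, 40),
]

MAX_INLET_C = 27.0  # assumed ceiling, loosely based on ASHRAE guidance

def hot_racks(samples, ceiling=MAX_INLET_C):
    """Average the temperature per rack; return racks exceeding the ceiling."""
    by_rack = {}
    for rack, temp, _humidity in samples:
        by_rack.setdefault(rack, []).append(temp)
    return {rack: sum(temps) / len(temps)
            for rack, temps in by_rack.items()
            if sum(temps) / len(temps) > ceiling}

print(hot_racks(samples))  # only racks averaging above the ceiling
```

The same roll-up applies to humidity; a monitoring platform would trend these averages over time rather than evaluate a single batch of samples.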
Change is inevitable, and data centers should be designed with this reality in mind. Companies that cannot shift with the times or trends because of antiquated technology and infrastructure lose business to more agile competitors.
Power management solutions play a fundamental role in implementing more versatile data centers that can quickly evolve to address the demands and challenges of the future. As in the case study example, the organization was able to make intelligent decisions around their power requirements both in the present and for the near future. This level of power and data center integration not only impacts operations, but also directly improves the business process. A healthy data center enables a healthy and competitive business. Powerful PDU designs, when coupled with power management solutions, help organizations control their assets and identify where there are growing power requirements.
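The case study’s goal of knowing what each server costs to run reduces to simple arithmetic once per-outlet wattage is measured. The draws and the 10-cents-per-kWh rate below are illustrative assumptions, not UF Health numbers:

```python
def annual_cost_usd(avg_watts: float, rate_per_kwh: float = 0.10) -> float:
    """Yearly electricity cost of a device at a steady average draw.

    rate_per_kwh is an assumed average utility rate, not a measured one.
    """
    kwh_per_year = avg_watts / 1000 * 24 * 365
    return kwh_per_year * rate_per_kwh

# Hypothetical per-outlet average draws (watts) from POPS measurements.
servers = {"web-01": 310, "db-01": 480, "backup-01": 150}

for name, watts in servers.items():
    print(f"{name}: ${annual_cost_usd(watts):,.2f}/year")
```

In practice the average draw would come from the PDU’s outlet-level history rather than a fixed number, and cooling overhead (the PUE multiplier) would be layered on top of the raw IT cost.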
This article was brought to you by Server Technology. |