Data Center Knowledge | News and analysis for the data center industry
Wednesday, May 17th, 2017
12:00p | Microsoft: Government Should Regulate IoT Security
As it wags its finger at the NSA for amassing a toolbox for breaking cybersecurity defenses in products built by technology companies, Microsoft at the same time is calling on the government to regulate privacy and security in the Internet of Things market, a huge growth area for the company’s cloud business.
Government will have to get involved in IoT security, Sam George, the company’s director of engineering for Azure IoT, said Tuesday while sitting on a panel at IoT World, the IoT industry’s big annual conference taking place this week in Silicon Valley.
Security is one of the biggest challenges in the budding IoT space, as companies rush products to the hot new market. Currently the “bar is low” for IoT security, George said.
More connected devices means a larger attack surface that’s more difficult to secure. The biggest and clearest example of the threat was last year’s DDoS attack on the DNS service provider Dyn (now owned by Oracle), when an army of compromised IoT devices, including CCTV cameras and DVR recorders, was hijacked and used to flood the provider’s data centers with requests, effectively disabling web properties that relied on its services to let computers know how to find them on the internet.
The attack on Dyn by the so-called Mirai botnet was only the most prominent example. Not long before it happened, cybersecurity company Symantec released a report saying the number of DDoS attacks exploiting IoT devices had been rising for some time, with 2015 setting a record for the number of attacks. (The report came out one month before the Dyn incident, and the company hasn’t yet released a similar report for 2016.)
Malware that targets IoT devices exploits vulnerabilities like unchanged default passwords and outdated firmware and often goes unnoticed for long periods of time, according to Symantec:
“Embedded devices are often designed to be plugged in and forgotten after a very basic setup process. Many don’t get any firmware updates, or owners fail to apply them, and the devices tend to only be replaced when they’ve reached the end of their lifecycle. As a result, any compromise or infection of such devices may go unnoticed by the owner, and this presents a unique lure for the remote attackers.”
IoT Regulation on Government’s Radar
Select federal agencies that oversee specific sectors already regulate some areas of the IoT market. The Federal Aviation Administration, for example, regulates drones, while the National Highway Traffic Safety Administration regulates autonomous vehicles and vehicle-to-vehicle communication technology, according to the first comprehensive report on IoT by the Government Accountability Office, released this month.
Both the legislative and executive branches of the US government have been considering regulation of IoT devices or data, according to the report. Ongoing efforts include a review of the government’s role in IoT by the National Telecommunications and Information Administration and the Developing Innovation and Growing the Internet of Things Act (DIGIT Act), introduced in both chambers of Congress, which would “require the Department of Commerce to convene a working group of federal stakeholders to provide recommendations to Congress on the proliferation of the IoT.”
Big IoT User Expects Secure Products from Vendors
The IoT World panelists weren’t all on the same page. Alan Boehme, chief innovation officer at Coca-Cola, said government regulation would take too long to materialize, and that he would prefer an industry-driven effort to create some security standards in the space.
Coca-Cola, he said, is a “huge” user of Microsoft’s IoT technologies, using them to collect and manage data from 15 million vending machines and a truck fleet larger than those of UPS and FedEx combined. “We have a lot of ‘Things,’” he said.
As a big technology buyer, Boehme said he expects the vendors to make sure their products have all the appropriate cyber safeguards in place.
As Attacks Rise, Regulation May Be Inevitable
Another panelist, Stuart McGuigan, CIO at Johnson & Johnson, agreed with Microsoft’s George. In his employer’s business, which nowadays is heavily focused on healthcare technology, security can be a matter of life and death, but it’s also closely linked to an already heavily regulated issue: patient privacy. Put simply, a poorly secured connected device in a healthcare facility can become an entry point for gaining access to personal patient data, compromising patients’ privacy and exposing the organization to huge fines.
“This is where – and I say this with a lot of sincerity – I love regulation,” McGuigan said, adding that regulation in the IoT space may be inevitable, as large-scale cyberattacks become more frequent, causing public outcry for laws that will govern the way IoT networks are protected. The pace of attacks “is accelerating, and unfortunately we’re going to see regulation,” he said.

3:00p | Carter Validus II Buys Conn. Data Center, Leased to CyrusOne
Carter Validus Mission-Critical REIT II went on a buying spree in March, acquiring five properties totaling 362,000 square feet of fully occupied, leasable space worth $141.5 million, including a massive data center in Connecticut, according to a press release.
The Norwalk Data Center, bought for about $60 million from Fortis Property Group, is the largest of the REIT’s newly acquired properties, with 75,000 square feet of data center space, 30,000 square feet of office space, and about 60,000 square feet of supporting infrastructure. Cervalis, a subsidiary of CyrusOne—a publicly traded data center REIT—leases 100 percent of the facility.
The other properties acquired are healthcare facilities, one in Aurora, Illinois, and three in Texas.
“We believe these acquisitions represent our commitment to invest in high-quality real estate in the growing data center and healthcare industries. We further believe the critical nature of these buildings to the tenants that occupy them, along with their favorable locations and property conditions make them attractive acquisitions for CV Mission Critical REIT II,” said John E. Carter, CEO of CV Mission Critical REIT II, in a statement.
See also: DCK Investor Edge: Why Money is Pouring Into Data Centers
Unlike data center REITs that trade on the stock market (CyrusOne, Equinix, Digital Realty, etc.), neither Carter Validus Mission-Critical REIT nor Carter Validus Mission-Critical REIT II is publicly traded. Investors can, however, purchase shares directly through the company.
Michael A. Seton, President of CV Mission Critical REIT II, commented, “We believe acquisitions like these align well with our high-growth, net lease, mission critical investment strategy, and anticipate that they will translate into added value for our stockholders.”
Moving forward, management’s goal for Carter Validus Mission Critical REIT II, Inc., formed in 2014, is to continue to build its portfolio and assets. However, the company is reportedly looking to split its data center and hospital assets in its other investment trust and sell each separately.
The 20 or so data center assets could fetch more than $1 billion, sources said. They include data centers occupied by IO in Arizona, Internap and Atos data centers in Texas, an Infocrossing data center in New Jersey, and AT&T data centers in California, Tennessee, and Wisconsin. Together, the data center and hospital assets could value the entire REIT at $3.5 billion or more, according to Reuters.

3:30p | How to Prevent DRUPS-Related Data Center Outages
Diesel rotary uninterruptible power supply (DRUPS) systems were implicated in power disruptions that in the past three years took down Amazon Web Services in Sydney, Global Switch and Sovereign House in London, and the Singapore Stock Exchange.
A 222 millisecond disruption triggered the Global Switch outage, while Amazon’s was caused by what it called “an unusually long voltage sag.” Those issues shouldn’t have caused problems in data center DRUPS systems that were well-maintained and properly engineered.
How a DRUPS Works
“DRUPS systems use kinetic energy generated by a flywheel. Its momentum generates enough energy to deliver 15 to 20 seconds of ride-through before the diesel generator comes on,” Peter Panfil, VP of global power for Vertiv, says. Because nearly 99 percent of power disruptions last less than 10 seconds, kinetic energy generally is sufficient to ride through the fluctuation without requiring the diesel generator. As it approaches the threshold, however, the genset activates. Frequent on/off cycling causes wear.
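For a rough sense of where those 15 to 20 seconds come from, the short Python sketch below estimates ride-through time from a flywheel’s usable kinetic energy. Every number in it (moment of inertia, speed range, protected load) is hypothetical and chosen purely for illustration; actual figures vary by vendor and by how heavily the system is loaded.

    # Back-of-the-envelope flywheel ride-through estimate (illustrative numbers only).
    import math

    inertia_kg_m2 = 900.0   # hypothetical flywheel moment of inertia
    rpm_full = 3000.0       # hypothetical nominal flywheel speed
    rpm_min = 2400.0        # hypothetical minimum speed for usable output
    load_kw = 750.0         # hypothetical protected load

    def rad_per_s(rpm):
        return rpm * 2.0 * math.pi / 60.0

    # Usable energy is the drop from full speed to minimum speed: E = 1/2 * I * w^2
    usable_j = 0.5 * inertia_kg_m2 * (rad_per_s(rpm_full) ** 2 - rad_per_s(rpm_min) ** 2)
    ride_through_s = usable_j / (load_kw * 1000.0)

    print(f"Usable energy: {usable_j / 1e6:.1f} MJ")                    # about 16 MJ
    print(f"Ride-through at {load_kw:.0f} kW: {ride_through_s:.0f} s")  # about 21 s

With these made-up inputs the result lands near 20 seconds, consistent with the ride-through window Panfil describes.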
Unlike battery UPS systems, data center DRUPS are line-interactive – they draw power directly from the utility to keep their flywheels spinning. Consequently, they don’t have power conditioners, so any fluctuations in utility power are passed on to the DRUPS.
“A single DRUPS failure isn’t a problem with 2N redundancy,” Jacob Ackerman, CTO of SkyLink Data Centers in Florida, says. SkyLink prefers battery-based UPS systems to DRUPS. The decision, he says, was based on the desire to maximize cross-over time in case generators have issues.
That said, Ackerman recommends running multiple DRUPS units in parallel rather than in isolated redundant mode. With that configuration you can avoid an outage even if one generator fails. “DRUPS systems are a UPS for the entire facility, including chillers and air handlers. So, if your DRUPS goes down, everything goes down.”
Issues to Watch For
Synchronization-related failures can occur when the voltage and frequency of the utility and the bypass path don’t match. A DRUPS is line-interactive, so it’s synchronized initially. The challenge is in syncing back to utility power. “If the utility wobbles, my UPS has to wobble with it,” Panfil explains. Following the fluctuations mechanically can be challenging. A battery system eliminates that need with a double power conversion (AC to DC to AC), which conditions the power. Synchronization issues reportedly caused the outage at the Sovereign House colocation data center.
Voltage sag can cause DRUPS units to back-feed and trip. When using a mechanical system to compensate for momentary reductions in voltage, the system should have a slight lag to ensure the power generated by the DRUPS goes to the data center rather than back-feeding to the utility.
In Amazon’s Sydney outage, the breakers that isolated the data center DRUPS from utility power didn’t open quickly enough, which caused the DRUPS power to feed back to the power grid. Amazon fixed the problem by adding more breakers and conducting regular system tests on unoccupied hosts within AWS.
See also: How Amazon Prevents Data Center Outages Like Delta’s $150M Meltdown
Bad fuel could be a factor in any diesel genset failure. Diesel fuel doesn’t last indefinitely. After six to 12 months it may be contaminated by bacteria, water and solid particulate. Fuel also may gel.
Cisco keeps 96,000 gallons of diesel on hand at its Allen, Texas, data center – enough to run at full load for four days. The facility’s staff refresh the fuel every three to four months and store it in environmentally-controlled areas to protect it from temperature variations, Sidney Morgan, Cisco Distinguished Engineer, says.
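The arithmetic behind that reserve is simple, and the Python sketch below just restates the figures Morgan cites (96,000 gallons, roughly four days at full load) to show the implied burn rate, then checks that the stated refresh cycle stays inside the window in which diesel can degrade.

    # Implied full-load burn rate from the figures quoted for the Allen, Texas, site.
    fuel_on_hand_gal = 96_000          # quoted fuel reserve
    full_load_runtime_hours = 4 * 24   # "enough to run at full load for four days"

    burn_rate = fuel_on_hand_gal / full_load_runtime_hours
    print(f"Implied full-load burn rate: {burn_rate:.0f} gallons per hour")  # 1,000 gal/hour

    # Fuel is refreshed every three to four months, well inside the
    # six-to-twelve-month window after which diesel may start to degrade.
    refresh_interval_months = 4
    degradation_window_start_months = 6
    assert refresh_interval_months < degradation_window_start_months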
Maintenance and mechanical issues – like a genset starter failure – can derail any generator. DRUPS systems, however, sometimes can be started using the flywheel momentum. This is like popping the clutch in a truck for a rolling start.
The maintenance schedule should include inspecting data center DRUPS units:
- Weekly, to check coolant fluid levels and winding and bearing temperatures
- Monthly, to assess wear on carbon brushes and to test cross-over capabilities
- Yearly, to change oil, check the control circuit frequency and clean the unit
- At five years, to replace bearings and inspect internal components
“It’s just like maintaining your car,” Morgan says.
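For operators who want to track those intervals, here is a minimal, purely illustrative Python sketch; the task names paraphrase the schedule above, while the structure and example dates are assumptions, not a vendor recommendation.

    # Minimal DRUPS maintenance-interval tracker (illustrative only).
    from datetime import date, timedelta

    SCHEDULE = {
        "check coolant level, winding and bearing temperatures": timedelta(weeks=1),
        "assess carbon brush wear, test cross-over": timedelta(days=30),
        "change oil, check control circuit frequency, clean unit": timedelta(days=365),
        "replace bearings, inspect internal components": timedelta(days=5 * 365),
    }

    def overdue_tasks(last_done, today):
        """Return tasks whose interval has elapsed since they were last performed."""
        return [task for task, interval in SCHEDULE.items()
                if today - last_done.get(task, date.min) >= interval]

    # Example: every task was last completed on January 1st.
    last_done = {task: date(2017, 1, 1) for task in SCHEDULE}
    for task in overdue_tasks(last_done, today=date(2017, 5, 17)):
        print("OVERDUE:", task)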
Human error also can trigger critical load interruptions. “People are generally the weakest link,” Ackerman says. Although load switching occurs automatically, company policy and documentation govern activities during outages. “You can build a fully redundant system, but if someone flips a couple of switches during testing and maintenance or during a failure, it can cause an outage.” One data center experienced an outage because the switch that would move power from the generator back to the utility failed, and the data center lacked the protective arc suits its personnel needed before they could enter the area and throw the switch manually.
Practice your procedures for outages. “It’s not easy, but run a ‘pull the plug’ test on an isolated system to ensure you can cross over and cross back without issues,” Panfil says. “Systems often don’t operate the way you expect.”
DRUPS design affects reliability. Several designs are on the market, including models with dynamic speed control, in-line mechanically-coupled storage, and solid state energy transfer for power storage. When evaluating data center DRUPS units, also consider ease of maintenance.
Cisco’s Allen data center has used an integrated DRUPS system since going online in 2011. “Because the DRUPS was designed as a self-contained unit in which electromagnetic clutches connect the flywheel to the diesel generator, the unit engages within five seconds. This system (which uses eight DRUPS units to generate 15MW of power) has been active more than six years with no failures,” Morgan says.
Limited lifespan is a concern with any system. “If inheriting a DRUPS, know how old the bearings are, the start cycle count, and run hours,” Panfil says. “Also look at the mechanical locking. Understand what the system already has gone through.”
DRUPS systems have been implicated in notable outages, but the root cause was usually a related issue, such as maintenance lapses, synchronization problems, or bad fuel, rather than the technology itself. So, if you use data center DRUPS, keep them well-maintained, check their fuel, and test your cross-over procedures to ensure everything works as expected.

4:00p | Trends in Systems Migration to the Cloud
Leon Adato is Head Geek™ at SolarWinds.
Cloud computing’s mounting importance and the shift to hybrid IT are propelling organizations of all shapes and sizes to hurriedly migrate systems infrastructure up and out of the physical data center—whether they’re truly ready to or not. As the results of the recent SolarWinds IT Trends Report 2017 show, organizations have migrated applications, storage, and databases to the cloud more than any other area of IT in the past 12 months.
This probably comes as no surprise: cloud computing is a compelling and exciting alternative to traditional IT, and you can expect organizations to push even further into the cloud in the years ahead. Still, patterns of implementation and challenges associated with the cloud and hybrid IT are beginning to emerge that can help you and your business better manage your infrastructure in the cloud. On that note, here are three emerging trends in systems migration and how they can impact your cloud strategy.
To the Cloud… and Back?
According to the report, 95 percent of IT professionals reported migrating some part of their infrastructure to the cloud in the past 12 months. Despite this amazing race to the cloud, 35 percent of those same respondents said they had also ultimately moved workloads out of the cloud and back on-premises in the past year, either due to security/compliance or performance concerns.
This raises the question: why were those applications migrated in the first place? Most organizations have implemented a virtualization strategy in some shape or form by now, which makes a lift-and-shift into the cloud much more achievable. But what’s much more likely is that these applications and other pieces of systems infrastructure are simply being caught up in the sheer enthusiasm of cloud migration. The old on-premises frustration of needing to scale but experiencing cost, resource, and management restrictions no longer applies when you move to an elastic-scale platform like the cloud, and for many, it’s too tempting to ignore.
Still, while it’s easy to be dazzled by the cloud’s benefits, not every workload is a great candidate for the cloud. Ultimately, this trend illustrates that pre-testing and workload performance and security considerations prior to migration—which should be a cornerstone of any migration strategy—are coming second to speed of deployment. To better work with business leadership and avoid realizing too late that a workload does not perform better in the cloud, you should look to participate in cloud conversations early (and often).
To start, your organization should be prepared to properly test any workload, application, or piece of infrastructure prior to migration to accurately gauge how it will perform in the cloud and what support you will require from the cloud service provider (CSP). A comprehensive monitoring tool that provides visibility into not only your on-premises systems, but also those in the cloud, should be implemented to help establish baseline performance metrics that make it easier to identify whether a workload belongs “on the ground” or in the cloud.
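As a deliberately simplified illustration of that baseline idea, the Python sketch below compares response-time samples for hypothetical on-premises and cloud endpoints. The URLs, sample count, and 20 percent regression threshold are assumptions made up for this example; in practice a full monitoring platform, not an ad-hoc script, would do this job.

    # Simplified before/after latency baseline for a migrated workload (illustrative only).
    import statistics
    import time
    import urllib.request

    def sample_latency_ms(url, samples=20):
        """Collect simple end-to-end response times for a URL, in milliseconds."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            urllib.request.urlopen(url, timeout=10).read()
            timings.append((time.perf_counter() - start) * 1000.0)
        return timings

    def summarize(timings):
        ordered = sorted(timings)
        return {
            "mean_ms": statistics.mean(ordered),
            "p95_ms": ordered[int(0.95 * len(ordered)) - 1],
        }

    # Hypothetical health endpoints for the same service on-premises and in the cloud.
    baseline = summarize(sample_latency_ms("https://app.on-prem.example/health"))
    migrated = summarize(sample_latency_ms("https://app.cloud.example/health"))

    # Flag a regression if the migrated p95 is more than 20 percent worse than the baseline.
    if migrated["p95_ms"] > 1.2 * baseline["p95_ms"]:
        print("Cloud p95 latency regressed vs. on-prem baseline:", migrated, baseline)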
No Visibility or Control, No Service
In order to be successful as an IT professional, you need three things: responsibility, accountability, and authority. The first two are a natural part of your job, but in the era of hybrid IT, authority is what you’re constantly fighting for. In fact, over half of IT professionals reported that a lack of control over the performance of cloud-based workloads was a top challenge and is still a considerable barrier to migration.
When you think of control and authority over your workloads, the scenario you’re most likely familiar with is when a workload in the cloud begins to experience performance degradation, and although your own systems and alerts indicate everything is fine, the cloud provider insists the problem does not stem from its services, either. Who’s to blame, and how do you validate your suspicions that it’s actually the CSP?
What’s almost as good as authority is visibility. If you’re able to see the problem in question and communicate it to the provider down to the most minute detail, it’s a much faster route to resolution, which in turn alleviates some of the stress associated with migrating a workload to the cloud. To that end, “trust but verify” should be the IT professional’s mantra in the year ahead, as organizations work to identify how best to maintain an element of control and visibility into workloads and applications that are hosted in the cloud. It will be critical to leverage comprehensive hybrid IT monitoring, beyond what is typically offered by cloud service providers, to ensure you have enough data and visibility to truly understand how workloads are performing in the cloud and the reasons for that performance.
You should also work with business leadership to make the case for multi-region or multiple cloud strategies to avoid catastrophic downtime in the cloud due to a single point of failure. Remember: this is the same lesson we learned from internet service providers and our off-site data center vendors before the advent of cloud. It’s better to be safe than sorry, so make sure you have a management and monitoring strategy that protects your data and delivers a strong end-user experience.
You’re Never Done Learning
One of the primary challenges we face today is staying abreast of new technology—how it works, how it interacts with other systems, and most importantly, how to manage it all. Complexity remains both a barrier to cloud adoption and a key challenge for those who have already migrated, and—again, according to the 2017 report—many administrators admit that they feel IT professionals entering the workforce don’t have the skills to successfully do their current job.
But that doesn’t mean these skills can’t be taught, learned, and implemented to better equip you to manage hybrid IT environments and oversee systems migration in the cloud. Think back to how you skilled up when you first became an IT professional: trying to build a single instance of something and seeing how it went, then tearing it down and building it again. Back when we were all at the start of our bright IT careers, practice made—and still makes—perfect, after all. You should also look for opportunities to use new features included in vendor tools you already own, such as automated functions, or try your hand at something like migrating an on-premises test server to the cloud to better understand how lift-and-shift techniques work.
The rate of technology abstraction isn’t slowing down. It’s important for businesses and you, as the IT professional, to “fail fast” (which does not mean actually fail, but rather, discover points of failure as quickly as possible). This involves constantly testing and implementing new solutions, because in today’s on-demand environment, availability, durability, and an acceptable response time from the end-user perspective are expected no matter where an application service is hosted or delivered from. This requires a comprehensive understanding of the technology you’re tasked with managing and migrating. You should work step by step and explore as many technology avenues as you can—containers, microservices, serverless computing—to gain meaningful, usable skills that will drive a more proactive, efficient, and effective cloud strategy.
Conclusion
We are in a new era of work as organizations of all sizes are implementing cloud computing to better meet the demands of a modernized workforce. In looking to the year of IT transformation ahead, the rate of technology abstraction promises to increase systems migration and hybrid IT complexity, requiring you and your business to be prepared for the shift in management and monitoring requirements. The above trends in systems migration to the cloud not only help paint a portrait of the modern hybrid IT organization and of us, the IT professionals who manage them, but also provide key considerations for your company to evaluate and leverage when crafting your cloud strategy.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:04p | Google to Sell New AI ‘Supercomputer’ Chip Via Cloud Business
Mark Bergen (Bloomberg) — At the I/O developer conference last year, Google debuted its first chip. The company kept the component mostly for internal artificial intelligence needs. Today, version two arrived — and Google is selling this one.
Chief Executive Officer Sundar Pichai announced the new chip on Wednesday during a keynote address at the Alphabet Inc. unit’s annual I/O event. Normally, the gathering focuses on mobile software. This year’s spotlight on hardware underscores Pichai’s effort to transform the search giant into an “AI-first” company and a real cloud-computing contender.
Companies will be able to purchase the hardware, called Cloud Tensor Processing Units (TPUs), through a Google Cloud service. Google hopes it will quicken the pace of AI advancements. And despite official statements to the contrary, it may also threaten Intel Corp. and Nvidia Corp., the main suppliers of powerful semiconductors that run large processing tasks.
See also: NVIDIA CEO: AI Workloads Will “Flood” Data Centers
“This is basically a supercomputer for machine learning,” Urs Hölzle, Google’s veteran technical chief, said. Machine learning, a method for deciphering patterns in reams of data, is behind Google’s recent progress on voice recognition, text translation and search rankings. But the approach cost a lot, and sucked up computing time in Google’s data centers. The latest chip was designed to address these issues, and executives said they saw dramatic improvements after putting the component to work on these internal tasks.
Google wouldn’t divulge the chip’s price, what company manufactures it, or when the related cloud service goes on sale. Google still purchases processors from Intel and Nvidia. But by relying more on in-house designs, Google could trim its multi-billion-dollar annual computing bill.
Google plans more chips like this, and sees the components as essential for success in the cloud — a key part of Alphabet’s push to make money beyond digital advertising.
“The field is rapidly evolving,” Hölzle said. “For us, it’s very important to advance machine learning for our own purposes and to be the best cloud.”
See also: This Data Center is Designed for Deep Learning
Google’s cloud business grew by more than 80 percent last year, according to estimates from Synergy Research Group. But Amazon Web Services still has over 40 percent of the public cloud market, and continues to expand at a steady clip. Google is third, according to industry estimates.
To gain share, Google is leaning on its AI prowess. The Cloud TPU chip won’t be sold to Dell Inc. and other makers of servers that power traditional corporate data centers. To get the benefits, customers will have to sign up for a Google cloud service and run their software tasks and store their data on Google’s equipment. If companies get on board, Google insists, they can plumb their own data for unseen efficiency gains and profit.
AWS and No. 2 player Microsoft Corp. make similar cases. So Google’s pitch stresses performance. A single Cloud TPU device, composed of four chips, is nearly 12,000 times faster than IBM’s Deep Blue supercomputer, the famous chess victor from 1997, Hölzle said. Google is stringing 64 of the devices into “pods” that sit in its data centers.
See also: Deep Learning Driving Up Data Center Power Density
Google unveiled its chip at last year’s I/O conference, so why does it need another? First, the company is going up against rivals that develop and deliver faster processors on an annual cadence. To lock in customers, it must match that pace.
In addition, the original chip only worked for “inference,” processing data that’s already packaged in mathematical models. It’s akin to compressing large photos into tiny digital formats. For instance, a company could turn an algorithm for voice recognition into an app using inference chips.
To create an algorithm from just raw voices, you need lots of data to train AI software. That takes massive computing power, forcing coders to wait days or weeks to see results. Google’s second chip speeds up the training process. In internal tests, it cut the time in half compared to commercially available graphic processing units, known as GPUs.
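The training-versus-inference split the new chip addresses is easy to see in code. The sketch below uses plain Python and NumPy (not TensorFlow or Google’s TPU software) to fit a toy logistic-regression model: training repeats a forward pass, a backward pass, and a weight update many times, while inference is a single forward pass with frozen weights, which is why training is the step that gains the most from faster accelerators.

    # Toy logistic regression contrasting training and inference (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))                    # toy feature data
    y = (X @ rng.normal(size=20) > 0).astype(float)    # toy labels
    w = np.zeros(20)                                   # model weights

    def forward(features, weights):
        """Inference: one pass through the model with fixed weights."""
        return 1.0 / (1.0 + np.exp(-(features @ weights)))

    # Training: many repeated forward and backward passes that update the weights.
    # This loop is where the bulk of the compute (and the accelerator benefit) lives.
    for _ in range(500):
        p = forward(X, w)                  # forward pass
        grad = X.T @ (p - y) / len(y)      # backward pass: gradient of the log loss
        w -= 0.5 * grad                    # weight update

    # Inference on new data: a single cheap forward pass, no gradients, no updates.
    x_new = rng.normal(size=(1, 20))
    print("prediction:", forward(x_new, w).item())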
Nvidia, the dominant GPU manufacturer, recently announced a new chip, called Volta, that handles training data like Google’s Cloud TPU. An eight-chip Volta module will sell for $149,000 starting in the third quarter.
Google is less experienced at selling chips, so it’s being cautious about commercial deployment. “When you have something that’s really new, some of the tools occasionally break. You want to reach a certain level of maturity,” Hölzle said. “We’re probably going to have a lot more demand than we can satisfy.”
Excessive demand inspired the creation of TPUs in the first place, according to company lore. Six years ago, Google saw an uptick in voice searches on phones. Just three minutes of conversation a day, per Android phone user, would have doubled the number of data centers Google needed, based on its technology at the time. TPUs were designed to handle the extra volume more efficiently.
The second-generation chip accelerated Google’s own research. For its translation efforts, Google previously ignored more than eighty percent of its data at the training stage, according to Jeff Dean, who leads a Google AI research unit called Brain. With its new chip, they can use all the information. That means better trained and potentially more accurate AI software.
The new chip may let researchers use image data that currently sits unused because of high computing costs, according to Fei-Fei Li, an AI expert who runs a machine learning group inside Google’s Cloud business division. Image classification is one of the machine learning tools Li’s team is offering cloud clients, and the new chip will make this more accessible and usable.
EBay Inc. used Google’s cloud to develop ShopBot software that identifies items snapped on smartphone cameras. Today’s image-recognition systems have around ten percent accuracy, said R.J. Pittman, EBay’s Chief Product Officer. The new Cloud TPU, which EBay has tested, could eventually increase accuracy to more than 90 percent, he added.
Companies like EBay want AI to tag every physical good in existence. Li imagines businesses that may want to map every square inch on earth or each minute part of a human cell.
Amazon and Microsoft have their own AI-powered cloud services, though, and both have committed to buying Nvidia’s Volta chips. Nvidia’s data center sales surged 186 percent during the first quarter. “Nvidia is not standing still,” said Pittman from EBay, which also buys Nvidia GPUs.
Hölzle dismissed a direct rivalry. Nvidia’s chips are built for more general-purpose tasks, he said, while Google’s focus solely on machine learning.
That won’t calm Intel and Nvidia investors, who worry about in-house chipmaking efforts by their largest customers — data center operators like Google. Analysts are concerned that revenue and profitability at the two companies, both at historically high levels, may be dented. Even if Google doesn’t succeed in commercializing its own chips, it’s in a better position to negotiate on price.
Google isn’t restricting cloud customers to its own chips. It has Intel and Nvidia processors running inside its data centers. Google’s pushing a Lego-like model — corporate customers can choose their combination of software and hardware, and rent storage and computing power by the minute. It has to be flexible if it’s going to catch AWS and Microsoft.
“Down the road, we may actually pick the hardware for you that minimizes your cost or maximizes your turnaround time or whatever you tell us is important to you,” Hölzle said. “It becomes invisible to you.”

7:35p | Planes, Trains, and Cloud: Bombardier Signs $700M Deal with IBM Cloud
Brought to You by Talkin’ Cloud
Canadian transportation company Bombardier will spend around $700 million over six years in an extended partnership with IBM to use its cloud services.
The deal includes IBM services and IBM cloud management of Bombardier’s worldwide infrastructure and operations. The companies said this will be one of IBM’s largest cloud partnerships in Canada.
Cloud vendors like Microsoft and IBM are benefiting from customers north of the border making commitments to public cloud. Last week at Microsoft Build 2017, electronic signature technology provider DocuSign announced that it would use Microsoft Azure data centers in Quebec City and Toronto to expand its footprint in the country and comply with local data requirements.
“Bombardier’s global decision to extend its existing partnership with IBM and move to IBM Cloud is recognition of our broad expertise and experience helping our clients transform the business of IT to be more competitive, agile and secure through cloud computing and industry services best-practices,” Martin Jetter, senior vice president, IBM Global Technology Services, said in a statement. “We look forward to further developing our relationship with Bombardier and working with the talented team there.”
In 2015, Bombardier launched a five-year turnaround plan to improve productivity and reduce costs. The company said that its partnership with IBM will help it offload cloud management and focus on its core competencies as part of this strategy.
“As part of our turnaround plan, Bombardier is working to improve productivity, reduce costs and grow earnings. This IT transformation initiative will help us better integrate globally to create a best-in-class IT organization,” Sean Terriah, Chief Information Officer, Aerospace and Corporate Office, Bombardier, said. “With IBM, we will transform our service delivery model to focus on our core competencies, and leverage the best practices of our strategic partner across our infrastructure and operations.”
This article originally appeared on Talkin’ Cloud.