Data Center Knowledge | News and analysis for the data center industry
Thursday, July 20th, 2017
10 Things Every CIO Must Know about Their Data Centers
While data centers aren’t necessarily something CIOs think about on a daily basis, there are some essential things every executive in this role must know about their organization’s data center operations. All of them have to do with data center outages, past and future. These incidents carry a significant risk of negative impact on the entire organization’s performance and profitability, which falls comfortably within a typical CIO’s scope of responsibilities.
CIOs need to know answers to these questions, and those answers need to be updated on a regular basis. Here they are:
- If you knew that your primary production data center was going to take an outage tomorrow, what would you do differently today? This is the million-dollar question, although not knowing the answer usually costs the CIO a lot more. Simply put, if you don’t know your data center’s vulnerabilities, you are more likely to take an outage. Working with experienced consultants usually helps, both in terms of tapping into their expertise and in terms of having a fresh set of eyes on the matter. At least two things should be reviewed: 1) how your data center is designed; and 2) how it operates. This review will help identify downtime risks and point to ways to mitigate them.
- Has your company ever experienced a significant data center outage? How do you know it was significant? Key here is defining “significant outage.” The definition can vary from one organization to another, and even between roles within a single company. It can also vary by application. Setting common definitions around this topic is essential to identifying and eliminating unplanned outages. Once defined, begin to track, measure, and communicate these definitions within your organization.
- Which applications are the most critical ones to your organization, and how are you protecting them from outages? The lazy, uniform answer would be, “Every application is important.” But every organization has applications and services that are more critical than others. A website going down at a hospital doesn’t stop patients from being treated, but a website outage for an e-commerce company means missed sales. Once you identify your most critical apps and services, determine who will protect them and how, based on your specific business case and risk tolerance.
- How do you measure the cost of a data center outage? Having this story clear helps the business make better decisions. Develop a model for estimating outage costs and weigh them against the cost of mitigating the risk. Total outage cost can be nebulous, but spending the time to get as close to it as possible, and getting executive buy-in on that story, will help the cause. We have witnessed generator projects and UPS upgrades turned down simply because the manager couldn’t tell this story to the business. A word of warning: the evidence and the costs for the outage have to be realistic. Soft costs are hard to calculate and can make the choice seem simpler than it really is; sometimes an outage may just mean a backlog of information that needs to be processed, without significant top-line or bottom-line impact. Even the most naïve business execs will sniff out unrealistic hypotheticals. Outage cost estimates have to be real. (A back-of-the-envelope cost model is sketched after this list.)
- What indirect business costs will a data center outage result in? This varies greatly from organization to organization, but these are the more difficult to quantify costs, such as loss of productivity, loss of competitive advantage, reduced customer loyalty, regulatory fines, and many other types of losses.
- Do you have documented processes and procedures in place to mitigate human error in the data center? If so, how do you know they are being precisely followed? According to recent Uptime Institute statistics, around 73% of data center outages are caused by human error. Until we can replace all humans with machines, the only way to address this is to have clearly defined processes and procedures. The fact that this statistic hasn’t improved over time indicates that most organizations still have a lot of work to do in this area. Enforcement of these policies is just as critical: many organizations do have sound policies but don’t enforce them adequately.
- Do your data center security policies gel with your business security policies? We could write an entire article on this topic (and one is in the works), but in short, now that IT and facilities are figuring out how to collaborate better inside the data center, it’s time for IT and security departments to do the same. One common problem we’ve observed is a corporate physical security system that needs to operate within the data center but under different usage requirements than in the rest of the company. Getting corporate security and data center operations to integrate, or at least share data, is usually problematic.
- Do you have a structured, ongoing process for determining which applications run in on-premises data centers, in a colo, or in a public cloud? As your business requirements change, so do your applications and the resources needed to operate them. All applications running in the data center should be assessed and reviewed at least annually, if not more often, and the best type of infrastructure should be decided for each application based on the reliability, performance, and security requirements of the business.
- What is your IoT security strategy? Do you have an incident response plan in place? Now that most organizations have solved or mitigated BYOD threats, IoT devices are likely the next major category of connected devices to track and monitor. As we have seen over the years, many organizations monitor activity across the application stack, while IoT devices are left unmonitored and often unprotected. These devices play a major role in the physical infrastructure (such as power and cooling systems) that keeps the organization’s IT stack running. Leaving them unprotected increases the risk of data center outages.
- What is your Business Continuity/Disaster Recovery process? And the follow-up questions: Does your entire staff know where they need to be and what they need to do if you have a critical, unplanned data center event? Has that plan been tested? Again, processes are key here. Most organizations we consult with do have these processes architected, implemented, and documented. The key issue is once again the human factor: most often personnel don’t know about these processes, and if they do, they haven’t practiced them enough to know what to do when a major event actually happens.
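To make the outage-cost point concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure and category in it is a hypothetical placeholder, not data from the article; a real model would use your organization’s own revenue, labor, and risk numbers.

```python
# Minimal outage-cost sketch. All figures are hypothetical placeholders,
# not benchmarks from the article.

def outage_cost(duration_hours, revenue_per_hour, recovery_cost, soft_costs):
    """Rough direct-plus-indirect cost of a single outage."""
    lost_revenue = duration_hours * revenue_per_hour
    return lost_revenue + recovery_cost + soft_costs

def mitigation_worthwhile(annual_outage_probability, cost_per_outage,
                          annual_mitigation_cost):
    """Compare the expected annual outage loss against the cost to mitigate it."""
    expected_annual_loss = annual_outage_probability * cost_per_outage
    return expected_annual_loss > annual_mitigation_cost

cost = outage_cost(duration_hours=4, revenue_per_hour=50_000,
                   recovery_cost=20_000, soft_costs=30_000)
print(f"Estimated cost per outage: ${cost:,.0f}")
print("Fund the UPS upgrade?",
      mitigation_worthwhile(annual_outage_probability=0.25,
                            cost_per_outage=cost,
                            annual_mitigation_cost=40_000))
```

Even a crude model like this forces the soft-cost assumptions into the open, which is exactly where executive scrutiny should land.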
Many other questions could (and should) be asked, but we believe that these represent the greatest risk and impact to an organization’s IT operations in a data center. Can you thoroughly answer all of these questions for your company? If not, it’s time to look for answers.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
About the Author: Tim Kittila is Director of Data Center Strategy at Parallel Technologies. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data center, whether it is a privately-owned data center, colocation facility or a combination of the two. Earlier in his career at Parallel Technologies Kittila served as Director of Data Center Infrastructure Strategy and was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.
Cyberattack on Medical Software Shows Industry Vulnerability
John Lauerman and Jeran Wittenstein (Bloomberg) — Many doctors still can’t use a transcription service made by Nuance Communications Inc. three weeks after the company was hit by a powerful, debilitating computer attack.
Hospital systems including Beth Israel Deaconess in Boston and the University of Pittsburgh Medical Center said eScription, a Nuance staple product that allows physicians to dictate notes from a telephone, still isn’t functioning. The outage obliterated doctors’ instructions to patients, forcing some to revert to pen and paper.
The computer virus, called Petya, has sent ripples through health care, among the last industries to make the switch to digital record keeping and one of the most frequently targeted by hackers, said Michael Ebert, a partner with KPMG who advises health and life-science companies on cybersecurity.
“Health care has been late to respond to the need for protected information, and the information is worth more,” Ebert said. “It’s amazing how far behind we are, and we know we have to do something.”
See also: Quantum Computing Could Make Today’s Encryption Obsolete
Hackers increasingly use viruses to encrypt companies’ information systems, unlocking the data only when a ransom is paid. After the Petya attack began in late June, companies from Oreo-maker Mondelez International Inc. to Reckitt Benckiser Group Plc warned of a blow to their sales. Information systems used by FedEx Corp.’s TNT unit may never fully recover, the shipping company said Monday.
Nuance shares were down 2.3 percent to $17.14 at 10:57 a.m. in New York. They’ve dropped about 6 percent since June 27, when the attack began.
The University of Pittsburgh Medical Center, a system of 25 hospitals and 3,600 doctors, said that its dictation and transcription services are still affected “with no estimated time of resolution.” The nonprofit is using features of medical records systems made by Cerner Corp. and closely held Epic Systems in the interim, said Ed McCallister, the Pittsburgh system’s chief information officer.
When the hack hit in June, the virus spread quickly. Ebert said one of his clients stood in a parking lot with a bullhorn, pleading with employees not to turn on their computers, lest the virus spread further. Another saw 100 workstations infected in an hour. Others shut down their entire systems, painstakingly starting computers one by one, offline, to see whether they had been tainted.
Read more: Cyberattack Fallout Engulfs FedEx, Shuts Terminals and Email
After acknowledging June 28 that portions of its network were affected, Nuance, based in Burlington, Massachusetts, is still picking up the pieces. In addition to transcription, Nuance named about 10 other affected products, including those used for radiology, billing and software that tracks quality of care.
About half of the company’s $1.95 billion in revenue came from its health-care and dictation business last year. The malware attack represents a big risk for Nuance, as many of its customers use products that appear to have been affected, according to Bloomberg Intelligence analyst Mandeep Singh.
“Any time there is a cyberattack and a company is exposed to that threat, that presents both reputational risk as well as the risk from disruption,” he said. “Since a lot of the deals get signed toward the end of the quarter, the timing of it could have impacted certain deal closures.”
Enhancing Security
Nuance said it has been fixing affected systems, enhancing security and bringing customers back online. The company declined to say how many clients were affected by the attack.
“We are doing everything within our power to support our health-care customers and provide them with the information and resources they need to provide quality patient care, including offering an alternative system and solutions,” company spokesman Richard Mack said Wednesday in an email. “We have no indication that any customer information has been lost or removed from the network.”
Other Products
The loss of service is an invitation for customers to seek other products and vendors, such as MModal, a Nuance rival. Even though Intermountain Health Care, a Salt Lake City-based company that operates 22 hospitals, wasn’t affected, it turned off all its Nuance products and is using other transcription tools, said Daron Cowley, a spokesman.
At Beth Israel Deaconess, a Harvard-affiliated hospital, doctors who have been accustomed to using Nuance’s telephone-based product are switching to its Dragon system, where physicians dictate into a computer, making edits as they go.
That still means lost revenue for Nuance. While the computer-based product is a single software purchase, Nuance bills for eScription by line of text. So far, it’s been three weeks of revenue they can’t get back, and more users may drop away, said John Halamka, Beth Israel’s chief information officer.
“The hardest thing for a clinician is a change in workflow,” he said. “If you’ve changed for a couple of weeks, you might not go back.”
Nuance has done well to try to maintain customers in the aftermath of the attack, KPMG’s Ebert said, but the damage has already been done.
“They’re probably going to have a bad quarter,” he said.
Google Brings Tech That Made YouTube Faster to Its Cloud Services
Alphabet subsidiary Google is so confident in its new approach to handling internet-scale network congestion problems that it is now bringing the technology to Google Cloud Platform, its infrastructure services for the enterprise.
Google’s new BBR networking algorithm is already used to accelerate its consumer services, such as YouTube and Google.com, and it could be the next step in improving performance of the public internet. The company says it’s seen significant improvements in those services as a result, and it is now making the technology available to GCP users.
BBR is a congestion control protocol designed to deal with a common problem: traffic congestion in the complex networks that make up the modern internet, with its crowded high-speed international links, mobile devices each getting only a share of base station backhaul, home users on shared connections from DSL or cable hubs, and businesses sharing thousands of devices through a handful of routers. All this adds up to a network that doesn’t quite work to its full potential.
“Today’s internet is a far more unwieldy beast than that which confronted its progenitors,” Eric Hanselman, chief analyst at 451 Research, told Data Center Knowledge. “Google’s efforts with BBR are the latest effort to tackle one of the thorniest of the legacy protocol performance problems that plague the internet.”
See also: Google Reveals Espresso, Its Edge Data Center SDN
While much of the data organizations deliver from their data centers isn’t affected by congestion, its effects are noticeable when they stream data, transfer large files, or need a near-real-time response. With its initial deployments of BBR, Google has seen significant improvements in its YouTube and Google.com services, good enough that it’s now deploying the algorithm in Google Cloud Platform, where customers can take advantage of it in their own applications and services.
So How Does BBR Work?
Packet loss has long been treated as a reliable sign of network congestion and a signal that senders need to reduce data rates. Recent changes to the internet’s architecture have made that loss-based approach less effective; the last mile of broadband connectivity has been configured with large buffers, while long-haul links use commodity switches with shallow buffers. The combination means we have an internet clogged up by queuing delays in the large buffers and suffering instabilities due to traffic bursts in the backbone.
With all those buffers, how do you determine the best speed at which to send data? The answer is surprisingly simple, once you determine the slowest link in any TCP connection path. That link defines the maximum data-delivery rate of the connection, and it is where queues form. Knowing the roundtrip time and the bandwidth of the slowest link that acts as the bottleneck for the connection, the algorithm can determine the best data rate to use — a problem that’s long been considered nearly unsolvable.
That’s where the name BBR comes from: Bottleneck Bandwidth and Round-trip propagation time. Using these calculations and recent developments in control systems, Google network engineers have come up with a way to dynamically manage the amount of data sent over a connection, so it doesn’t swamp the capacity of its bottleneck link, keeping queues to a minimum.
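As a rough illustration of the arithmetic involved, the sketch below computes a bandwidth-delay product and a pacing rate from an assumed bottleneck bandwidth and round-trip time. It is a toy model of the idea described above, not Google’s BBR implementation, and the numbers are made up.

```python
# Toy bandwidth-delay-product (BDP) arithmetic behind BBR-style pacing.
# Illustrative only; not Google's implementation.

def bdp_bytes(bottleneck_bw_bps, min_rtt_s):
    """Data that can be in flight without building a queue at the bottleneck."""
    return bottleneck_bw_bps / 8 * min_rtt_s

def pacing_rate_bps(bottleneck_bw_bps, gain=1.0):
    """Send at roughly the bottleneck rate; a gain above 1 probes for more bandwidth."""
    return bottleneck_bw_bps * gain

# Example: a 100 Mbit/s bottleneck with a 40 ms round-trip time.
bw, rtt = 100e6, 0.040
print(f"In-flight target: {bdp_bytes(bw, rtt) / 1024:.0f} KiB keeps the pipe full")
print(f"Pacing rate while probing: {pacing_rate_bps(bw, gain=1.25) / 1e6:.0f} Mbit/s")
```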

While TCP doesn’t track bottleneck bandwidth in a connection, it is possible to estimate it from the timestamps on packet responses. By understanding which connections are limited by the speed of the application generating the data and which are limited by the capacity of the network, and by knowing exactly which response packets should be sampled to get those estimates, BBR can send data at the maximum possible rate. Network connections over the internet aren’t static, so if a connection is operating at a steady state, BBR will also occasionally increase the data rate to see if any of the bottlenecks have changed, which means it can respond rapidly to changes in the underlying network.
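A minimal sketch of that estimation idea follows: track the maximum delivery rate and the minimum round-trip time seen over a window of recent ACK samples. It is a simplification for illustration, with hypothetical sample values, and is not the Linux kernel’s BBR code.

```python
# Windowed path estimation in the spirit of BBR: remember the best recent
# delivery rate and the lowest recent RTT. Simplified illustration only.
from collections import deque

class PathModel:
    def __init__(self, window=10):
        self.rate_samples = deque(maxlen=window)  # bytes/second per ACK sample
        self.rtt_samples = deque(maxlen=window)   # seconds

    def on_ack(self, delivered_bytes, interval_s, rtt_s):
        if interval_s > 0:
            self.rate_samples.append(delivered_bytes / interval_s)
        self.rtt_samples.append(rtt_s)

    def bottleneck_bw(self):
        return max(self.rate_samples, default=0.0)

    def min_rtt(self):
        return min(self.rtt_samples, default=float("inf"))

model = PathModel()
model.on_ack(delivered_bytes=64_000, interval_s=0.010, rtt_s=0.042)
model.on_ack(delivered_bytes=64_000, interval_s=0.020, rtt_s=0.040)
print(model.bottleneck_bw(), model.min_rtt())  # 6,400,000 bytes/s and 0.04 s
```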
Thousands of Times Faster Across the Atlantic
The improvement can be significant; Google claims that a typical transatlantic connection can run 2,700 times faster. BBR may also be a better match for newer protocols, like HTTP/2, which use a single TCP connection for multiple requests to the server rather than multiple connections, one after another.
Implementing BBR as a sender-side algorithm means Google is able to improve end-user experience without having to upgrade all the networking devices and services between GCP and the user’s device. While it’s been a big win for YouTube, bringing the algorithm to GCP is a significant step, as it will be handling traffic for a much more diverse set of applications.
How BBR Accelerates Google’s Cloud Services
GCP customers can take advantage of BBR support in three ways: connecting to Google services that use it, using it as a front end to their applications through Google cloud networking services, or using it directly in their own IaaS applications.
As Google’s own services will be using BBR, the latency to your cloud storage should be reduced, making applications that use services like Spanner or BigTable more responsive. End users will see a bigger effect from BBR support in Google’s Cloud CDN (in the form of better media delivery) and in Cloud Load Balancing, where it will route packets from different instances of an application.
If you want to use BBR in your IaaS applications running on Google Compute Engine, you’ll need to use a custom Linux kernel. While BBR has been contributed to the Linux kernel, it’s not yet in mainstream releases, and you’ll need to add it from the networking development branch, configure it for GCE, and then compile the kernel.
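Once a kernel with BBR support is in place, an application can also opt in per socket rather than changing the system-wide default. The sketch below uses Python’s socket.TCP_CONGESTION option, which is Linux-only and assumes the tcp_bbr module is available; treat it as an illustration rather than a GCE-specific recipe.

```python
# Per-socket congestion-control selection on Linux. Assumes a kernel that
# already provides the tcp_bbr module; otherwise setsockopt fails.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("congestion control in use:", algo.strip(b"\x00").decode())
except OSError as err:
    # Typically raised when BBR is not compiled into or loaded by the kernel.
    print("could not enable BBR:", err)
finally:
    sock.close()
```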
With BBR available to compile into Linux kernels, you can also start using it in your own network, especially if you’re using Linux-powered networking equipment, such as Open Compute switches. GCP switching to BBR may attract interest from outside Google, in the Linux community, and from other network operators and vendors.
451’s Hanselman sees this as a promising step forward for the internet. “There have been many efforts to adapt the inner logic of TCP to improve performance, and Google has taken a fair shot.” He also views Google’s cautious approach to rolling BBR out as sensible: “There are lingering questions on how well this version plays with others, and Google is clear that it doesn’t want to release a bully upon the unsuspecting.”
It’s Transition Time: Legacy Converged Infrastructure vs Hyperconverged Infrastructure
Lee Caswell is Vice President of Products, Storage and Availability for VMware.
Hyperconverged infrastructure, or HCI, represents an important shift in how IT infrastructure is deployed, managed, and maintained. Gartner forecasts that hyperconverged integrated systems (HCIS) will be the fastest-growing segment of the integrated systems market, increasing at a compound annual growth rate (CAGR) of 48 percent over the forecast period, three times faster than the overall market, and reaching $8.6 billion by 2020.
This is in stark contrast to the sharp declines underway in the legacy storage area network (SAN) and legacy converged infrastructure (LCI) markets.
As enterprises shift from LCI to HCI, there are material differences between the two architectures that should be considered when designing modern infrastructure, regardless of whether new data centers are located on-premises, at managed service providers, or in the public cloud.
LCI Packages Up Separate Servers, Storage and Networking Elements
Traditional three-tier infrastructure relies on three physically and logically separate products: centralized storage, storage networking and server compute. Each “silo” has a separate upgrade cycle and management domain that tends to make the products complex to manage and inflexible for changing workload requirements.
It typically takes months of planning just for organizations to verify interoperability and purchase the individual components, and then even more time to integrate them all effectively.
The LCI market was originally created by storage, server and networking companies that banded together to ensure interoperability, accelerate deployment and simplify some management tasks of these complex and disparate systems. Because the LCI approach presumes traditional separate server, storage and network architectures, there is simply no way for LCI systems to realize material capital cost reductions, remove software layers or eliminate management panes.
HCI Integrates Server, Storage and Network Elements
HCI introduces a fundamentally new software-based, scale-out architecture of reliable, high-performance shared storage that is built on the latest, low-latency flash technology. By caching storage writes across scale-out server nodes, HCI can integrate virtualized compute resources with software-based shared storage using a common Ethernet network.
The efficient HCI design cuts 40 percent to 60 percent from legacy infrastructure capital costs by eliminating separate proprietary storage and storage networking hardware. This compelling economic benefit drives HCI adoption wherever cost pressure exists.
HCI further reduces operational costs by 50 percent by consolidating storage and virtual compute management into a single management console. With HCI, there are no independent storage administrators because storage is simply another attribute of a virtual machine. This is markedly different from the LCI approach where storage is configured independently and later assigned to applications and users.
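As a rough illustration of how those percentages combine, the toy calculation below compares a legacy three-tier bill of materials with an HCI equivalent. Every dollar figure is a placeholder assumption for illustration, not vendor pricing or data from the article.

```python
# Toy capex/opex comparison; every figure is a placeholder assumption.
legacy_capex = {"servers": 400_000, "san_array": 350_000, "fc_switching": 150_000}
legacy_opex_per_year = 250_000      # separate storage and virtualization admin

hci_capex = {"flash_hci_nodes": 450_000}   # standard x86 nodes, Ethernet only
hci_opex_per_year = 125_000                # one console, roughly a 50% cut

legacy_total = sum(legacy_capex.values())
hci_total = sum(hci_capex.values())
capex_saving = 1 - hci_total / legacy_total

print(f"Capex: ${legacy_total:,} legacy vs ${hci_total:,} HCI ({capex_saving:.0%} lower)")
print(f"Opex per year: ${legacy_opex_per_year:,} vs ${hci_opex_per_year:,}")
```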
HCI success started in small, niche markets like virtual desktop infrastructure (VDI), where there were no separate storage administrators and a VM-centric management view was preferred. But as enterprise storage features have been introduced, HCI has moved quickly into serving the most business-critical workload segments in core data centers.
HCI Extends Naturally to the Public Cloud
HCI carries an important common DNA element with the public cloud – both leverage flash-enabled servers with a software abstraction layer that is hardware-agnostic. This common hardware building block makes it possible for a common software stack to run across the hybrid cloud with common data services. The architectural affinity of HCI makes it possible to extend common storage control planes from on-premises environments to the public cloud in a way that will never happen with proprietary hardware SAN products.
A 2017 Server Refresh is an Opportunity to Deploy HCI
For customers looking to try HCI, a server refresh offers a compelling opportunity. More than 10 million Intel x86 servers will be sold this year and server vendors are refreshing their product lines with the latest Intel Xeon Scalable processors. For hardware enthusiasts, the processing power of these new servers combined with NVMe flash and low-latency networks is incredibly exciting.
HCI is a powerful tool for the modern data center and 2017 server refreshes are an excellent opportunity to realize the capital savings and operational efficiencies from this new architecture.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
DataBank Building Atlanta Data Center for Georgia Tech Supercomputers
DataBank, the Dallas-based data center provider that’s been expanding rapidly since being acquired by Digital Bridge Holdings last year, is building a data center in Atlanta, which will house supercomputers for Georgia Institute of Technology. The university will be the facility’s anchor tenant, meaning there will be space for others in what will be DataBank’s first site in the Atlanta data center market.
DataBank was the first data center provider Digital Bridge acquired. This year, it added to its portfolio Salt Lake City-based C7 Data Centers; two data centers in Cleveland and Pittsburgh, considered “key interconnection assets,” purchased from 365 Data Centers; and finally Vantage Data Centers, which has the largest (and growing) wholesale data center footprint in Silicon Valley in addition to a campus in Quincy, Washington.
Digital Bridge also owns several cell-tower companies and a mobile connectivity solutions provider, and its play includes leveraging a combination of those assets and data center space around the country to provide end-to-end infrastructure services to the likes of Verizon Communications or Alphabet’s Google.
Read more: Meet Digital Bridge, a New Consolidator in the US Data Center Market
But the deal in Atlanta is different. It’s a 94,000-square-foot data center development that so far appears to be a pure colocation play, although the nature of its anchor tenant calls for a high-density design, up to 45kW per rack.
“Georgia Tech’s premier academic and research programs will be the main tenant under a long-term lease for both the data center as well as the adjoining office tower,” DataBank said in a statement.
Atlanta has been heating up recently as a data center market. Digital Realty Trust is expanding capacity in the market, and Switch announced a large new build there. Some new players have entered via acquisition, including Ascent, which bought a BlackBerry data center in Atlanta, and Lincoln Rackhouse, which acquired a data center in the market from a corporation whose name was not revealed.
Once its Atlanta data center comes online, DataBank will have 13 data centers in seven markets, including Dallas, Kansas City, Minneapolis, Salt Lake City, Pittsburgh, and Cleveland.
Microsoft Profit Tops Estimates as Cloud Growth Marches On
Dina Bass (Bloomberg) — Microsoft Corp.’s turnaround plan got back on track in the latest quarter, buoyed by rising sales of internet-based software and services.
Profit in the fiscal fourth quarter exceeded analysts’ estimates and adjusted sales rose 9 percent as demand almost doubled for Azure cloud services, which let companies store and run their applications in Microsoft data centers. A tax-rate benefit added 23 cents a share to earnings, Microsoft said.
Shareholders are watching closely to gauge Satya Nadella’s progress toward reshaping 42-year-old Microsoft as a cloud-computing powerhouse with new services related to Azure and the Office 365 online productivity apps — a shift that led to a massive sales-force restructuring earlier this month. The stock has surged 33 percent in the past year to a record amid signs that the changes are taking root, and the company rewarded that optimism by posting a significant gain in revenue from commercial cloud products along with wider margins for the business.
“They are a company that seems to be ahead of some of these old-line technology companies that are making transitions to the cloud,” said Dan Morgan, a senior portfolio manager at Synovus Trust, which owns Microsoft shares. “The story is still intact but they still have a ways to go.”
Microsoft shares rose as much as 3.3 percent in extended trading following the report. Earlier, they had gained half a percent to a record $74.22 at the close in New York. The stock has risen 19 percent in 2017, compared with a 10 percent gain in the Standard & Poor’s 500 Index.
Profit excluding certain items in the quarter ended June 30 was 98 cents a share, including the 23-cent tax benefit, Microsoft said Thursday in a statement. Excluding that boost, profit would have been 75 cents, still higher than the 71-cent average projection of analysts surveyed by Bloomberg. Revenue climbed to $24.7 billion, compared with estimates for $24.3 billion.
The company, which cut thousands of sales and marketing jobs earlier this month to concentrate on selling cloud and newer products like artificial-intelligence and data-analysis tools, said it recorded costs of $306 million for the restructuring in the fourth quarter.
Cloud Revenue
Commercial cloud revenue was $18.9 billion on an annualized basis, moving closer to the $20 billion target the company set for the fiscal year that started July 1. Even as cloud sales rise, the company has been able to meet a pledge to trim costs, with commercial cloud gross margin widening to 52 percent.
“In commercial cloud gross margin, we committed a year ago to material improvement and this is 10 points higher than where we were last year,” Chief Financial Officer Amy Hood said in an interview.
Azure sales rose 97 percent in the period, while commercial Office 365 — cloud-based versions of Word, Excel and other productivity software — increased 43 percent. Microsoft’s Azure cloud-computing service still lags behind market leader Amazon.com Inc., but more customers are starting to go with Microsoft, according to research from Credit Suisse Group AG. Both corporate and consumer users are switching from older Office programs to the cloud subscriptions, providing more stable and recurring revenue.
“The underlying trends — the shift to the cloud and also what it means for the legacy, on-premise stuff — are likely to be in motion for a very long period of time,” said Sid Parakh, a fund manager at Becker Capital Management, which owns Microsoft stock.
Surface hardware sales slipped 2 percent, though Microsoft’s Hood said that was better than she had forecast. In the previous quarter, Microsoft’s Surface revenue fell short because customers weren’t buying aging models of the Surface Pro. Since then, Microsoft unveiled an update to that product — its best-selling Surface — and released a totally new category, a Surface laptop computer with a clamshell design. Both devices went on sale June 15 and Hood forecast increased momentum for those products as customers start buying machines during back-to-school season.
Overall revenue in the company’s More Personal Computing division, which also includes Windows, was $8.82 billion, above the $8.55 billion average estimate of three analysts polled by Bloomberg.
In the Intelligent Cloud unit, made up of Azure and server software deployed in customers’ own data centers, sales increased 11 percent to $7.43 billion, compared with the $7.31 billion average analyst projection. Productivity revenue, mainly Office software, climbed 21 percent to $8.45 billion. Analysts had estimated $8.32 billion.
Chasing Amazon
As more companies shift data and computing tasks to the cloud, Microsoft is trying to narrow Amazon Web Services’ lead and fend off Google’s challenge. During the quarter, the Redmond, Washington-based software maker unveiled new tools to switch its database customers to the cloud and steal some from rivals like Oracle Corp. Microsoft is also trying to shift more of its Office customers to online subscription versions, while adding cloud business from professional network LinkedIn, acquired last year for $26.2 billion.
In the Credit Suisse survey of customers, Microsoft’s Azure saw the greatest boost among cloud vendors, with 40 percent of respondents saying Azure was their preferred product, up from 21 percent six months earlier. Investors and analysts are paying close attention to the cloud race as they try to project which of the older, established technology companies will profit most and survive in the new world of cloud computing. Oracle and International Business Machines Corp. have been navigating their own transitions to internet-based computing from legacy hardware and software.