Data Center Knowledge | News and analysis for the data center industry
Thursday, October 27th, 2016
12:00p |
Storage Innovations Spur a Second Look at Cloud and On-Premise Options
Rajesh Nair is CTO of Tegile.
Until recently, the conventional wisdom about data storage was that on-premise solutions don’t offer the flexibility or cost savings of the cloud. Enterprises may worry about handing control of their data and IT infrastructure to a cloud provider for security reasons, but they are willing to set those concerns aside if they believe they can get the scale and storage they need at a good price.
Depending on your business, this may have been true in the past: if you weren’t dealing with big data sets, didn’t need especially low latency, and wanted to save money, the cloud may have been the right choice. On-premise spinning disks didn’t offer the performance needed, and flash drives were too expensive to use in bulk.
Recent changes in the storage market have weakened the argument that storage in the public cloud is the only cost-effective option. Your data center doesn’t necessarily have to be built in the cloud if you’re trying to get that magic combination of cost effectiveness and performance. Here’s what’s happening in the data storage market that should factor into your decision making:
- Multi-tiered flash. No doubt your business’s analysts are clamoring for predictive modeling powered by real-time analytics. Disk-based storage was never designed for this type of task: real-time analytics needs an infrastructure that can process queries and requests immediately, as opposed to the batch processing typical of disk-based systems. Multi-tiered flash, which combines fast flash for performance with dense flash for capacity, lets you process real-time data on an all-flash platform, delivering better performance and lower latency at a reasonable price. Even if your organization isn’t making extensive use of real-time analytics today, there’s no telling how much you’ll rely on it in the future, so it’s better to prepare your infrastructure to support those needs. As flash storage has become more affordable, cost is no longer the deciding factor. If your data center is modernized, running storage on-premise can actually be less expensive than the cloud. Cloud storage can make economic sense, but the cost of reading and moving large data sets can push the total beyond an on-premise option (see the cost sketch after this list). Flash drives also carry lower maintenance costs than spinning drives, which is another data point to consider when comparing cloud and on-premise storage costs.
- High bandwidth and low latency. Non-Volatile Memory Express, or NVMe, is a relatively new interface standard for storage, designed to exploit the higher performance of modern flash drives. Before NVMe, flash performance was bottlenecked by older interfaces, such as SATA, that were built around the speeds of spinning disks. With NVMe, when you spend money on flash storage, you get what you pay for: local flash can be accessed at sub-millisecond latencies, far quicker than reaching storage over a network to a cloud provider (a rough way to check this is sketched after this list). Simply put, keeping data on-premise gives you more choices in how you run your data center. Your IT department can respond more quickly to online attacks or system failures and can step in to protect data. You also gain more control over performance: instead of being hemmed in by your cloud provider’s performance limits, you can increase performance through various flash options.
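To make the cost argument above concrete, here is a minimal sketch in Python of how read-heavy access patterns and data-transfer charges can swing the cloud vs. on-premise comparison. All rates, names, and data set sizes below are hypothetical placeholders, not vendor quotes; substitute your own contracted prices and amortized hardware costs.

    # Hypothetical rates; substitute your own quotes and amortized costs.
    CLOUD_STORAGE_PER_GB_MONTH = 0.023   # object storage, $/GB-month (illustrative)
    CLOUD_EGRESS_PER_GB = 0.09           # data transferred out, $/GB (illustrative)
    ONPREM_FLASH_PER_GB_MONTH = 0.05     # amortized all-flash array, $/GB-month (illustrative)

    def monthly_cost(capacity_gb, reads_gb_per_month):
        """Rough monthly cost of holding a data set and reading it back out."""
        cloud = (capacity_gb * CLOUD_STORAGE_PER_GB_MONTH
                 + reads_gb_per_month * CLOUD_EGRESS_PER_GB)
        onprem = capacity_gb * ONPREM_FLASH_PER_GB_MONTH  # no per-read charge on premises
        return {"cloud": round(cloud), "on_premise": round(onprem)}

    # Example: a 500 TB data set read out in full four times a month for analytics
    print(monthly_cost(capacity_gb=500_000, reads_gb_per_month=2_000_000))
    # -> {'cloud': 191500, 'on_premise': 25000}

The crossover point depends heavily on how often the data is read back out; for archival data that is rarely touched, the read term shrinks and the comparison can flip back toward the cloud.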
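Similarly, the latency claim in the NVMe item can be spot-checked with a rough measurement like the sketch below. The file path is a hypothetical placeholder, and because the reads go through the operating system's page cache rather than direct I/O, the numbers will flatter the drive; a purpose-built benchmark tool is the right way to measure this properly.

    import os
    import random
    import time

    PATH = "/data/testfile"   # hypothetical multi-GB test file on the flash volume
    BLOCK = 4096              # 4 KiB reads, a common benchmark block size
    SAMPLES = 10_000

    size = os.path.getsize(PATH)
    fd = os.open(PATH, os.O_RDONLY)
    latencies = []
    for _ in range(SAMPLES):
        # pick a random block-aligned offset and time a single small read
        offset = random.randrange(0, (size - BLOCK) // BLOCK) * BLOCK
        start = time.perf_counter()
        os.pread(fd, BLOCK, offset)
        latencies.append(time.perf_counter() - start)
    os.close(fd)

    latencies.sort()
    print(f"median read latency: {latencies[len(latencies) // 2] * 1e6:.0f} us")
    print(f"p99 read latency:    {latencies[int(len(latencies) * 0.99)] * 1e6:.0f} us")

Against a local flash device these figures should land comfortably under a millisecond, while any request that has to cross a WAN link to a cloud region starts with network round-trip time on top of the storage latency itself.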
The cloud vs. on-premise debate will never have one answer that’s right for every organization. But as innovations in data storage change some long-held beliefs about which option offers the best performance at the right cost, it’s worth taking a new look at your data storage choices.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:45p |
IBM CEO Says Watson Stands Out by Protecting Clients’ Data
(Bloomberg) — IBM Chief Executive Officer Ginni Rometty sees proprietary data combined with artificial intelligence technology as the competitive advantage for companies going forward.
When International Business Machines Corp. works with clients, it trains a unique version of its artificial intelligence technology, Watson, using proprietary data, and that information creates individual business insights that stay with the customer, Rometty said. That’s how Watson is different from its competitors that offer similar services around data analytics and machine learning, she said Wednesday in a speech at the company’s World of Watson event in Las Vegas.
“We made an important architectural decision — for all our clients, all their data, it’s their accumulated knowledge,” she said. “It’s your data, not someone else’s. It’s your intellectual property, not someone else’s. It’s your competitive advantage from all this data, not someone else’s.”
Rometty said clients have praised IBM’s approach. She envisions a future in which all enterprises will add artificial intelligence to help automate business processes, improve productivity and increase sales — and will turn to IBM to provide that AI software. Watson is seen as the key driver to long-term growth at the Armonk, New York-based company, which has been struggling with declining older businesses in traditional license-based software and information technology services. Watson is contributing to these existing operations, Rometty said in an interview.
IBM hasn’t yet broken out Watson revenue, which is housed within analytics sales under its cognitive solutions segment. That division reported $4.2 billion in sales in the third quarter.
IBM is competing with large technology companies including Microsoft Corp. and Alphabet Inc. to sell more machine learning and data analytics tools, which is why it’s working to differentiate its AI services. The company doesn’t have any consumer-facing products in artificial intelligence and is only focused on serving an enterprise’s needs, Rometty said, meaning it doesn’t require clients to share data to use Watson services.
“We have no search legacy,” Rometty said in the speech. Google, for example, uses huge amounts of data it has collected over the years to improve its core advertising product.
While the core Watson technology isn’t trained using customers’ proprietary data, it has received large chunks of information in various industries, from health care to weather to financial services, she said. That data, much of which was gained through several billion-dollar-plus acquisitions over the past few years, is used to train the technology in specific domains.
Currently, hundreds of millions of people are interacting with Watson in some way, with that number expected to reach 1 billion by the end of next year, Rometty said. That includes 200 million consumers who encounter the technology deployed by IBM’s clients in areas such as shopping, travel planning, buying insurance, and accessing banking and government services. Watson has been trained in 20 industries so far.
“This is the moment — it’s clear to me and to the world that Watson is the AI platform of business because we understand all these dimensions,” Rometty said.
5:47p |
DuPont Fabros Buys Former Printing Plant for Its First Toronto Data Center
DuPont Fabros Technology has acquired a building and land outside of Toronto that used to house a printing facility for the Toronto Star newspaper, the company announced Thursday. The site in Vaughan, Ontario, will become the Washington, DC-based data center REIT’s first location in the Toronto data center market.
This is also DFT’s first foray into a market outside of the US. The company first announced plans to expand there last year as part of a series of big strategic changes, which followed the appointment of its new CEO, former NTT exec Christopher Eldredge. Another big change was a move away from retail colocation services to focus strictly on leasing wholesale data center space in massive-scale facilities.
DFT expects the future TOR1 data center to have about 23 computer rooms and 46MW of power at full build-out. The first phase, which the company expects to come online in the third quarter of next year, will consist of 12 computer rooms, totaling about 125,000 square feet of data center space and 24MW of power.
To a large extent, DFT caters to customers who lease data center capacity in large chunks, companies that provide cloud and other internet services at global scale. Its two largest customers are Microsoft and Facebook. Other major tenants are Rackspace, Yahoo, and Dropbox.
DFT paid $41.6 million for the Vaughan site. Converting former newspaper printing plants into data centers has been a common theme in recent years: a former Chicago Sun-Times plant in Chicago, for example, houses a new QTS data center, and a facility that used to print The New York Times in Edison, New Jersey, has been converted for data center use by Phoenix-based IO.
The data center REIT has facilities in the Northern Virginia, Chicago, and Silicon Valley markets, with the bulk of its capacity (about 180MW) located in Northern Virginia.
Toronto is the second geographic expansion DFT has in the works. Earlier this year, the company also acquired a big property in Hillsboro, Oregon, outside of Portland.
On Thursday, DFT reported third-quarter earnings of $0.37 per share – up from $0.29 reported for the same period last year. Its revenue for the quarter was $134.3 million – up 16 percent year over year.
10:36p |
Report: CenturyLink and Level 3 in Merger Talks
CenturyLink and Level 3 Communications are in negotiations about a potential merger, The Wall Street Journal reported, citing anonymous sources.
Broomfield, Colorado-based Level 3 operates one of the most extensive global internet backbones, and CenturyLink’s connectivity services, while global, are primarily concentrated in the US. Known primarily as carriers, both companies also have substantial carrier-neutral data center services businesses.
CenturyLink, based in Monroe, Louisiana, has been exploring a potential sale of some or all of its global data center assets. The company’s management has repeatedly stated, however, that while it is looking for an alternative to owning data center facilities, it is not planning to get out of the business of providing data center space and services up the stack, which include various flavors of cloud as well as managed hosting and security, among other offerings.
Read more: Why CenturyLink Doesn’t Want to Own Data Centers
Level 3 has 350 colocation data centers around the world, while CenturyLink has about 60, according to the companies’ respective websites. The kernel of CenturyLink’s current data center fleet is made up of assets it gained through the $2.5 billion acquisition of Savvis in 2011.
The talks are in progress and could still fall apart, the Journal noted. If finalized, however, a deal could be announced in the coming weeks.
Level 3’s market value is $16.8 billion, while CenturyLink’s is $15.2 billion.
In September, CenturyLink announced plans to reduce its workforce by about 8 percent, or 3,400 people, to cut costs in response to a declining landline communications business.
See also: CenturyLink Data Center Team Keeps Eye on the Ball Despite Uncertain Future