Data Center Knowledge | News and analysis for the data center industry
 

Monday, June 20th, 2016

    3:33p
    Dell Sells Software Unit to Francisco Partners, Elliott

    (Bloomberg) — Dell agreed to sell its software unit to buyout firm Francisco Partners Management and the private equity arm of activist investor Elliott Management.

    The sale of the unit, which is focused on advanced analytics, database management and data protection, will help Dell reduce its debt burden after a series of deals since its own buyout in 2013. Terms weren’t provided in the statement released Monday.

    CEO and founder Michael Dell agreed in October to buy EMC for $67 billion to broaden the company’s product lineup from servers to storage devices amid intensifying competition. Dell, which has said it will add about $50 billion in debt to get the EMC deal done, expects the purchase to close between June and October, people familiar with the matter have said.

    See also: What About Dell’s Own Huge Data Center Software Portfolio?

    The sale of the software group is Dell’s latest ahead of its combination with EMC. NTT Data Corp., a unit of Japan’s former telephone monopoly, agreed to buy Dell’s IT services businesses for $3.06 billion in March. A month later, Dell took its cybersecurity company SecureWorks public, raising $112 million.

    Debt financing for the latest transaction was provided by Credit Suisse Group and RBC Capital Markets, according to the statement.

    Round Rock, Texas-based Dell is also proceeding with selling software assets SonicWall and Quest, people with knowledge of the matter have said. EMC is seeking to sell its Documentum business as part of its plan with Dell to divest more than $6 billion in assets.

    Elliott Management, led by billionaire Paul Elliott Singer, has pushed EMC to sell itself and spin off VMware, of which the storage company is the majority owner.

    4:00p
    Five Things I Love About My DRUPS

    David Johnson is Director of Business Development for Hitec Power Protection, Inc.

    “How do I love thee? Let me count the ways.” It was not Shakespeare but the poet Elizabeth Barrett Browning who penned that now-famous line almost 150 years ago, and the words still ring true. But can data center equipment inspire a love like no other?

    It may sound strange, but a diesel rotary UPS (DRUPS) is no ordinary machine. DRUPS systems offer some charming qualities that people like me find irresistible. So, to the depth and breadth of my soul, let me count five ways to admire a DRUPS.

    Long Lifecycle

    We all want relationships to last, so at the top of the list of things to admire about a diesel rotary is the fact that it lasts – a very long time. While the usable life cycle is 30 years (compared with the roughly 15-year life of the batteries in a static system), there are diesel rotary units still in service today that were installed in the 1960s.

    And even if the love only lasts thirty years, most would agree that is a pretty good run for the investment. Since the machine is constructed from tried-and-true mechanical components like motors, rotors, and generators, it will last as long as it is properly maintained.

    Less Space

    Unlike that person you dated who came with a lot of boxes and furniture – and maybe some baggage to boot – a diesel rotary system is designed to fit in tight spaces, and doesn’t mind sleeping outside. Unlike a high-maintenance, battery-based proposition, DRUPS is something you can fit into your life.

    Static, battery-based systems require dedicated rooms with sophisticated mechanical, controls, and monitoring systems, all of which consume space and maintenance dollars. Rotary? It fits within the space you’ve got and doesn’t demand special treatment.

    Simple One-Line

    Like the Shakespeare/Browning mix-up, there are many things attributed to the more famous party that simply aren’t true. One of those misconceptions is the idea that static systems are electrically less complicated than rotary. But a careful look at the one-line diagrams of both systems shows that rotary is simpler.

    In a study of comparable 3 MW UPS systems, the static system requires a minimum of 12 input and output breakers to support the critical bus. Rotary? A mere three. Fewer breakers mean fewer single points of failure, which translates into less heartache down the road.
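
    To put the breaker-count argument in rough numbers, here is a minimal back-of-the-envelope sketch in Python. Only the breaker counts (12 for static, three for rotary) come from the comparison above; the per-breaker failure probability is a hypothetical placeholder, not a figure from any study.

        # Illustrative only: if each breaker in the critical path carries some small
        # chance of a fault that drops the load, fewer series devices means less
        # aggregate exposure. P_FAIL is a made-up placeholder value.

        def path_failure_probability(breakers: int, p_fail: float) -> float:
            """Probability that at least one of `breakers` series devices faults."""
            return 1.0 - (1.0 - p_fail) ** breakers

        P_FAIL = 0.001  # hypothetical per-breaker annual fault probability

        for label, count in (("static, 12 breakers", 12), ("rotary, 3 breakers", 3)):
            print(f"{label}: {path_failure_probability(count, P_FAIL):.3%} chance of a path fault per year")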

    All Loads are Created Equal

    When it comes to the UPS-supported loads, we all assume that this will be an IT-only affair. But if you could support other systems besides computer loads, wouldn’t it change your thinking about data center design? Of course it would.

    Part of the joy that comes from a relationship with a diesel rotary is that I can put motor loads from my mechanical system on it and never, ever be concerned about thermal storage systems, complicated control schemes, or thermal runaway. Now that’s a critical advantage.

    Low Total Cost of Ownership

    Of all of the things that make diesel-rotary systems special, I will shamelessly admit that the qualities mentioned so far make them a pretty cheap date. Call me shallow, but there is an economic side to any relationship – this is no different. And as it turns out, my diesel rotary system is the best deal both in terms of up-front cost, and also for the long haul.

    No air-conditioning bills, bearing overhauls needed only every 10 years, and better efficiency all contribute to roughly 30 percent lower operating costs. I have to say I am looking forward to looking back at a low TCO on our 30th anniversary.
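
    As a rough illustration of how those savings compound over a 30-year life, here is a simple arithmetic sketch in Python. Only the 30-year service life, the roughly 15-year battery life, and the "about 30 percent lower operating cost" claim come from this article; every dollar figure is a hypothetical placeholder.

        # Back-of-the-envelope TCO sketch with hypothetical dollar figures.
        YEARS = 30
        STATIC_ANNUAL_OPEX = 100_000                      # placeholder yearly opex, static UPS
        ROTARY_ANNUAL_OPEX = STATIC_ANNUAL_OPEX * 0.70    # about 30 percent lower, per the article
        BATTERY_REPLACEMENT = 250_000                     # placeholder mid-life battery string swap

        static_tco = STATIC_ANNUAL_OPEX * YEARS + BATTERY_REPLACEMENT  # one replacement at ~15 years
        rotary_tco = ROTARY_ANNUAL_OPEX * YEARS

        print(f"static UPS, 30-year operating cost: ${static_tco:,.0f}")
        print(f"rotary UPS, 30-year operating cost: ${rotary_tco:,.0f}")
        print(f"difference:                         ${static_tco - rotary_tco:,.0f}")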

    Who knows, maybe we will decide to do it all over again.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:37p
    Intel Wants to Make Machine Learning Scalable

    Intel’s strategy for tackling the AI CPU market, where it is facing competition from leading GPU makers and potentially also big customers that make their own specialized processors for this purpose, such as Google, rests to a great extent on designing systems that scale out rather than up. The latter, according to the chipmaker, is the conventional but inefficient approach to architecting these systems.

    Software code in today’s machine learning systems (machine learning is one of the most active subfields of artificial intelligence research) is tough to scale and usually lives in a single box, said Charles Wuischpard, VP of the Intel Data Center Group and general manager of the company’s HPC Platform Group.

    Companies generally buy high-power scale-up systems filled with GPUs. “In a way, there’s an efficiency loss here,” he said on a call with reporters last week.

    See also: Google Has Built Its Own Custom Chip for AI Servers

    This is the first time Intel has publicly discussed its strategy for the AI CPU market. “This is an early indicator of our plans in this area,” Wuischpard said.

    The company has been working on a scale-out solution for machine learning, taking the cluster approach that’s typical in high-performance computing systems and hyperscale web or cloud applications.
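
    The article does not detail Intel’s software, but the scale-out, cluster-style approach it refers to is commonly implemented as data-parallel training: each node computes gradients on its own shard of the data, and the gradients are averaged across nodes every step. The NumPy sketch below illustrates that general pattern with a toy linear model; it is not Intel’s tooling.

        # Generic data-parallel (scale-out) training sketch, not Intel-specific.
        import numpy as np

        rng = np.random.default_rng(0)
        NODES = 4
        X = rng.normal(size=(4000, 10))                       # toy dataset
        y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=4000)
        shards = np.array_split(np.arange(len(X)), NODES)     # each "node" owns one shard

        w = np.zeros(10)
        for step in range(200):
            # each node computes a gradient on its local shard (would run in parallel)
            grads = [X[idx].T @ (X[idx] @ w - y[idx]) / len(idx) for idx in shards]
            w -= 0.05 * np.mean(grads, axis=0)                # "all-reduce": average, then apply

        print("final training loss:", float(np.mean((X @ w - y) ** 2)))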

    In addition to taking a different architectural approach to machine learning, Intel’s AI CPU strategy includes a solution that does more than one thing. While some companies will have dedicated machine learning environments, Intel believes that the vast majority will want a single system that can run machine learning as well as other workloads.

    “Enabling the most effective and efficient use of compute resource for multiple uses remains one of our underlying themes,” Wuischpard said.

    Besides hardware, Intel has been investing in development of software tools and libraries for machine learning, training for partners, and an early access program for top research academics. About 100,000 developers are being trained on machine learning through its partner program, the company said.

    Wuischpard talked about Intel’s plans in the AI CPU market in the context of a general-availability roll-out of Xeon Phi, its latest processor for high-performance computing. It is the first Xeon Phi that boots as a standalone host CPU rather than a coprocessor, and the chipmaker’s first part to feature an integrated fabric (Intel Omni-Path) and integrated on-package memory.

    The company previewed the part in 2014 and has been shipping it in volume to select customers for several months, but it will become generally available in September, according to Wuischpard.

    More than 100,000 units have been sold or are on order, he said, with the bulk going to major supercomputer labs such as Cineca in Italy, the Texas Advanced Computing Center, and numerous US Department of Energy national labs.

    See also: World’s Fastest Supercomputer Now Has Chinese Chip Technology

    In the AI CPU market, Xeon Phi is Intel’s answer to GPUs by the likes of Nvidia. According to Wuischpard, Phi is faster and more scalable than GPUs.

    Phi and GPUs are best suited for a subset of machine learning workloads called Training. Another type of machine learning workload, called Inference, is already dominated by Intel’s Xeon processors, he said, calling Xeon the most widely deployed processor for machine learning.

    Earlier this year, Google announced that it had developed its own custom chip for machine learning, called the Tensor Processing Unit, or TPU, potentially indicating that Intel and other processor makers were unable to produce a part that matched Google’s performance and price requirements. Google and other hyperscale data center operators usually invest in engineering infrastructure components in-house when they cannot source them from suppliers in the market.

    Wuischpard said Google’s TPU appears to be a highly specialized part designed for a specific workload, which makes it a small threat to Intel’s general-purpose strategy in machine learning.

    “This is a case where they found a way to develop a specialized use for something that they do at massive scale,” he said. “I don’t think it [will turn] out to be as much of a general-purpose solution.”

    Updated with comments on Google’s Tensor Processing Unit.

    5:19p
    World’s Fastest Supercomputer Now Has Chinese Chip Technology

    (Bloomberg) — In a threat to US technology dominance, the world’s fastest supercomputer is powered by Chinese-designed semiconductors for the first time. It’s a breakthrough for China’s attempts to reduce dependence on imported technology.

    The Sunway TaihuLight supercomputer, located at the state-funded Chinese Supercomputing Center in Wuxi, Jiangsu province, is more than twice as powerful as the previous winner, according to TOP500, a research organization that compiles the rankings twice a year. The machine is powered by a SW26010 processor designed by Shanghai High Performance IC Design Center, TOP500 said Monday.

    “It’s not based on an existing architecture. They built it themselves,” said Jack Dongarra, a professor at the University of Tennessee and creator of the measurement method used by TOP500. “This is a system that has Chinese processors.”

    The new machine shows China’s determination to build up its domestic chip industry and reduce its dependence on imported semiconductors, which cost the country as much as oil. The world’s most populous country may also try to lessen its reliance on US companies for defense technology and security infrastructure. Supercomputers aren’t major consumers of chips, but being at the heart of the world’s most powerful machines helps processor makers persuade the broader market to consider their technology.

    “This is the first time that the Chinese have more systems than the US, so that, I think, is a striking accomplishment,” said Dongarra. The Chinese had no machines in the 2001 list, he noted. In the latest, China has 167 entries compared with 165 for the US.

    Previous supercomputer winners have used processors built on US technology from Intel, the world’s largest chipmaker, from IBM, or derived from Sun Microsystems designs.

    The top position was previously occupied by Tianhe-2, built on Intel chips by China’s National Supercomputer Center in Guangzhou. That system is now second, according to TOP500.

    Read more: China’s Milkyway 2 Ranked Fastest Supercomputer for Fifth Time

    Sunway TaihuLight’s victory is a particular challenge to Intel’s dominance in computer servers, where it currently controls about 96 percent of the market. Earlier this year, Intel announced a joint venture with a Chinese organization to localize some of its technology there.

    Supercomputers are multiple server computers linked together in a way that allows them to process huge data sets and run the most complex calculations. While they’re hugely expensive and relatively rare, they showcase new technologies that often make their way into corporate data centers.

    An Intel spokesman declined to comment on the new rankings.

    Other chipmakers such as Qualcomm Inc. are working with Chinese organizations to build processors in the country. Technology provider ARM Holdings, whose products are at the heart of most smartphones, is also trying to grab a slice of the Chinese market.

