Data Center Knowledge | News and analysis for the data center industry
Wednesday, November 16th, 2016
11:00a

Three Trends Driving Digital Business Innovation

Tom Fountain is Chief Technology Officer for Pneuron.
The conventional paradigm for value creation is being abandoned, and IT organizations are struggling in the face of three major challenges. We need to look at how to extract value from an ever-growing mass of data spread across disparate sources. We must find strategies to cope with the impact and opportunity that the Internet of Things (IoT) brings. And, we need to adapt to evolving work habits and a mobile workforce.
The Promise of Big Data Analytics
This is the third generation of transformative change in IT in recent years. There’s been a shift from bespoke applications serving specific business purposes to enterprise resource planning (ERP), which ushered in an era of more integrated software that helped us better manage the execution of our businesses.
Now, with big data analytics, we’re looking for insights in all the wonderful transactional data we’ve been gathering for years. Failures in data governance and data model definition are making analytics difficult. In many cases, the data is simply too diverse and disparate. The full benefits will only be realized when we connect that data together.
We need to go beyond the identification of insights and connect our analysis to systems that can execute. But it’s vital to seamlessly blend in our planning and take the time to understand our true capabilities.
As predictive capabilities improve, we have greater lead time to prepare ourselves to take full advantage of any new insight and craft an optimal response. Innovation springs from our ability to effectively mix and match our analytics, transactions, and planning.
Harnessing IoT
IoT represents a wealth of non-traditional data sources that can enrich our knowledge and help us orchestrate our businesses in a more optimal fashion. However, we need to work out how to collect it, secure it, transmit it, leverage it, and discount it if it proves unreliable. An organization’s ability to manage big data analytics is critically important to its success or failure with IoT, according to Dresner research. The survey also reveals that IoT advocates are 50 percent more likely to be users of advanced and predictive analytics.
Enlarging the space within which we capture data, both for predictive modeling purposes and for optimal response planning, significantly boosts our chances of success. Experimentation is vital. We can’t include every piece of data from every source. We need to identify which ones will have the most positive impact on our planning and execution and that means testing cheaply with fast cycle times. Flexibility is key, so we can always take advantage of the next new data source, processing platform, or customer relationship. Only through rapid trial and error can we uncover the best combinations.
The Changing Digital Workplace
Mobility and movements like DevOps that break down traditional siloes have led to an unprecedented level of collaboration. Employees today can work with anybody, at any time, in any part of the company on whatever device is at hand.
“In a successful digital workplace, engaged employees are more willing to change roles and responsibilities and embrace new technology,” according to Matt Cain, vice president and distinguished analyst at Gartner.
The challenge is to foster this collaborative environment without losing control and oversight, or sacrificing governance and security. Innovation plays a major part in establishing a working balance between the needs of the emerging workforce and the organization’s desire for appropriate controls.
A digital business is built upon a combination of internal and external technology. There’s an inward focus to improve business operations, support the workforce, govern, and manage assets. Then there’s the external face, striving to improve customer experience, and manage a complex network of partners supplying and receiving services. The path to innovation, agility and value comes from fluid integration, enabling customers to participate directly and influence how service is delivered.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

7:41p

Report: Network Complexity Creates Security Headaches

Brought to You by The WHIR
Complexity will hold two out of every five organizations back from making any upgrades to their networks in 2017, and security products are contributing to the problem, according to new research from Cato Networks.
The Top Networking and Security Challenges in the Enterprise report, released on Tuesday, shows that the majority of organizations use between two and five security solutions, with a quarter of respondents using even more.
Cato Networks asked over 700 IT professionals from around the world about their planned investments in 2017. It found that organizations with over 1,000 employees tend to experience even more security complexity, with 39 percent managing five or more solutions.
More than half of IT professionals and almost three-quarters of CIOs surveyed said defending against emerging threats like ransomware is their top priority over the next year, and 41 percent say Firewall-as-a-Service is the most promising emerging technology for protecting infrastructure.
READ MORE: New Ransomware Targets Linux Servers, Deletes Files
“The results of this global survey highlight systemic weaknesses in network and security architectures currently built upon point products and dependent on hard-to-find skills. Today’s network rigidity hampers IT’s ability to adapt to growing threats and dynamic business requirements,” Shlomo Kramer, founder and CEO of Cato Networks said in a statement. “The only way forward is a radical simplification of the current infrastructure complexity that underpins on-premises and cloud environments.”
Many respondents using wide area networks (WANs) reported problems with user experience, infrastructure maintenance, and a lack of effective control over mobile access.
Almost half report that their biggest network security challenge is the cost of buying and managing security appliances and software, and half also report difficulty enforcing security policies for mobile users. Nearly half provide mobile access to the cloud by VPN, while 15 percent use a cloud access security broker (CASB).
Cato suggests streamlining is a key to enterprise IT security, and while an overwhelming majority of enterprises see hybrid as the best cloud model for them, even hybrid proponents admit its complexity.
This post originally appeared here on The WHIR.

7:47p

DivvyCloud Launches Hosted Version of Cloud Maintenance Bots

Brought to You by The WHIR
Cloud automation company DivvyCloud announced Wednesday that it has launched a hosted version of its cloud maintenance “Bots” to help AWS users optimize their cloud infrastructure. Botfactory.io will provide continuous scans and real-time actions to help organizations using AWS close security gaps, save money, and ensure compliance and best practices.
Botfactory.io is designed to support clouds for a range of business sizes, providing both visibility into cloud infrastructure and DivvyCloud’s Bots, which use “if-then” automation based on best practices and user-defined policies.
The DivvyCloud Bot “army” can detect and remove non-compliant security rules, eliminate orphan resources, limit instances to approved cloud regions, and enforce proper database encryption. It can also turn off dev/test instances at night, yielding potentially huge monthly savings, DivvyCloud says.
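To make the “if-then” pattern concrete, here is a minimal sketch of the kind of policy such a maintenance bot might enforce: stopping running EC2 instances tagged as dev/test to avoid paying for idle capacity overnight. This is not DivvyCloud’s BotFactory API; it is an illustrative stand-in written against AWS’s boto3 SDK, and the tag key and values are assumptions.

# Minimal sketch of an "if-then" cloud maintenance policy: stop running
# EC2 instances tagged as dev/test. Illustrative only; not DivvyCloud's
# BotFactory API. The "environment" tag key and its "dev"/"test" values
# are assumptions for this example.
import boto3

def stop_devtest_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    # IF an instance is running and tagged as dev or test...
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    # ...THEN stop it.
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped instances:", stop_devtest_instances())

In practice a check like this would run on a nightly schedule, with a matching morning job to restart the instances; a hosted service such as Botfactory.io would presumably handle that scheduling itself.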
“BotFactory has been delivering great value to our enterprise customers like General Electric and Discovery Communications,” DivvyCloud CEO Brian Johnson said in a statement. “We are excited to enable broader adoption with our hosted BotFactory.io solution. Any customer of public cloud platforms can get value from BotFactory.io within a matter of minutes, no matter their skill level or cloud size.”
Along with its active user community, DivvyCloud expects new filters, actions, and integrations will continue to be developed to extend the capabilities of its Bots.
The service is available with different tiers for multiple cloud accounts, additional users, and broader cloud footprints.
This post originally appeared here at The WHIR.

7:58p

This Year’s Crop of Tech IPOs Stunts the Next One: Gadfly

BLOOMBERG – There has been a welcome trickle of technology companies going public in the last few months. It seemed to signal an end to the IPO desert of 2015 and most of 2016. The relatively strong — maybe too strong — recent public market debuts of tech companies such as Twilio Inc. and Nutanix Inc. helped support predictions of a surge of technology IPOs in 2017.
But in the last six weeks or so, the class of 2016 tech IPOs suddenly doesn’t look so hot. It’s far from a disaster, but fading share performances for some of the recent debutantes aren’t a great setup for the incoming IPO class, headlined by the likely public market listing of Snapchat parent company Snap Inc.
The good news is there have been more listings since the summer after a dry spell for most of 2016 for IPOs in general and tech listings in particular. And the stock market newcomers didn’t fall on their faces. Of the 18 tech IPOs in the U.S. this year, two-thirds are trading higher than their initial sale price, according to data compiled by Bloomberg. That is the kind of good early start that eluded the much larger crop of tech IPOs in the last few years, whose struggles sapped investors’ appetite for new tech listings.
But all is not rosy. Only about 40 percent of this year’s tech IPOs are trading above the closing share price on their first day of trading. That metric matters because the first day of trading is the point at which all but the biggest institutional investors can buy shares of a newly public company.
The IPO darling of the year, Twilio, had dipped nearly back to where it closed on its first market day before shares recovered in the last few sessions. (Twilio’s stock price remains more than double the level at which it first sold shares.) Other harbingers of an IPO recovery — The Trade Desk Inc., Apptio Inc., Nutanix and Coupa Software Inc. — all had no trouble selling their IPO shares and had strong early stretches of trading. Now they are all below their closing prices on their first days as public companies.
There are good reasons for some stock sagging among recently public tech companies. Some of them have offered more stock to the public, and the increased supply naturally affects share demand. Investors might be cashing in some gains as the overall market for tech stocks has been bumpy both before and especially after the U.S. election. And in no rational market should Twilio’s enterprise value be 19 times its expected sales for the next 12 months, as it was in September — the second-most-highly valued U.S. technology company by that measure. Twilio’s multiple is now a still aggressive but more sane 8.7 times its estimated revenue.
The market-setting institutional shareholders who buy IPO shares have done well with this year’s new tech listings, and by all accounts they remain eager to invest in strong technology listings.
Companies are planning to give them what they want. Investment bank Union Square Advisors estimates there could be as many as 90 tech IPOs next year, counting companies that have filed IPO paperwork plus others that have stated a desire to go public in 2017. Morgan Stanley’s top tech banker figured there might be 30 to 40 tech IPOs next year — more in line with the number of tech initial public offerings in 2014.
For sure the tech IPO market is in better shape than it has been for a couple of years. The companies going public are of higher quality than those with IPOs in 2014 and 2015, and valuations are less aggressive. That is a healthy backdrop for next year’s potential stock market debuts. But the cracks emerging in this year’s new tech listings don’t make a perfect setup for the IPO class of 2017.
This column does not necessarily reflect the opinion of Bloomberg LP and its owners.
9:44p

Nvidia Makes Its Mark in Top 500 Supercomputers List for Power Efficiency

The resurgence of the most storied name in high performance computing is now complete. Cray supercomputers have captured 12 of the top 25 spots in the University of Mannheim’s venerable Top 500 Supercomputers list, the latest edition of which was released Wednesday.
A top-of-the-line Cray XC50 chassis with 206,720 total cores, dubbed Piz Daint and built for the Swiss National Supercomputing Centre, holds onto the #8 slot with an Rmax score of 9,779,000 gigaflops — just under 10 petaflops (quadrillions of floating point operations per second). By comparison, the #7 slot — held by Japan’s RIKEN research lab’s venerable K computer, which debuted six years ago at #1 and was built on a Fujitsu SPARC64 chassis — performed at about 10.5 petaflops on the Linpack benchmark.
Piz Daint is accelerated by some 170,240 GPU pipelines, provided by Nvidia Tesla P100 accelerators. And while those GPU accelerators may be responsible for Piz Daint’s rise to success, they may also be a key contributor to the second-highest power efficiency ratio on the November 2016 list: 7453.51 megaflops/watt. Of the 25 most power-efficient supercomputers undergoing the Linpack battery of tests, 16 are supplemented by Nvidia Tesla accelerators, with the top efficiency scorer — the #28 DGX SATURNV, built by Nvidia itself on its own DGX-1 deep learning system chassis — scoring a colossal 9462.09 megaflops/watt. SATURNV posted an Rmax score of about 3.3 petaflops.
What do I mean by Rmax? It’s an assessment of maximal sustained performance in a battery of tests based on the industry-standard Linpack benchmark.
The Top 500 scores are assessed twice annually by the University of Mannheim, working in cooperation with Lawrence Berkeley National Laboratory and the University of Tennessee, Knoxville. Testers look for how well supercomputer systems perform in this battery over a long stretch of time. The Rmax score refers to “maximal achieved performance.” Testers operate under the assumption of a theoretical peak performance, called the Rpeak; the ratio of achieved performance to theoretical peak produces an interesting derivative called the yield.
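To make those ratios concrete, here is a minimal sketch of the arithmetic. It is not part of the Top 500 methodology; the Rmax figure is the one quoted in this article, while the power draw and Rpeak values are assumed figures chosen for illustration (the power figure is simply the one implied by the 7453.51 megaflops/watt rating quoted above).

# Minimal sketch of the yield and power-efficiency arithmetic discussed above.
# Values marked "assumed" are illustrative only.

def yield_ratio(rmax_gflops, rpeak_gflops):
    """Yield: achieved performance (Rmax) divided by theoretical peak (Rpeak)."""
    return rmax_gflops / rpeak_gflops

def mflops_per_watt(rmax_gflops, power_kw):
    """Power efficiency: Rmax converted to megaflops, divided by watts."""
    return (rmax_gflops * 1000.0) / (power_kw * 1000.0)

rmax = 9_779_000     # gigaflops (Piz Daint's Rmax, as quoted in this article)
power_kw = 1_312     # kilowatts (assumed; consistent with ~7453 megaflops/watt)
rpeak = 11_625_000   # gigaflops (assumed theoretical peak, for illustration)

print(f"Efficiency: {mflops_per_watt(rmax, power_kw):.2f} megaflops/watt")
print(f"Yield: {yield_ratio(rmax, rpeak):.2%}")

A yield near 100 percent, like the 98.78 percent posted by NEMO later in this list, means a machine delivers almost all of its theoretical peak on Linpack.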
The top performer on the list overall is no surprise: Sunway TaihuLight, built for Wuxi, China’s National Supercomputing Center. It burst onto the scene earlier this spring with performance that could leave the dust in the dust, so to speak: just over 93 petaflops, with a decent megaflops/watt rating of 6051.3. “Sunway” is an Americanization of Shen Wei. Its CPUs are astoundingly simple in design, without memory caches, but with its 10.6 million-plus processor cores divided into clusters of 65.
In other words, the Shen Wei design is built for supercomputing, not an extrapolation of x86 architecture — not just a commercial, off-the-shelf (COTS) design. Needless to say, it doesn’t use (or need) acceleration from GPUs.
Last year’s supercomputing leader, China’s Tianhe-2, held onto second place with an Rmax score of about 33.9 petaflops. When a country devotes a good chunk of resources to the exclusive design and production of an unmatched supercomputer — and no other country can — it has a high chance of success.
So from the standpoint of an actual race (and let’s be honest, all performance tests are really races), the real objective for commercial processor-based design is to demonstrate just how much power can be achieved by designs that are not exclusive to supercomputing, and learn what we can from their achievements. Viewed in that light, Oak Ridge National Laboratory’s Titan — an old 2012-model Cray XK7 which held the #1 spot four years ago — truly proves its mettle by scoring just under 17.6 petaflops.
Titan’s power plant comprises 18,688 16-core, 2011-model AMD Opteron 6274 processors — several generations removed from today’s mainstream. (Titan is one of only 7 AMD-based systems on the November 2016 list, by the way.) But it is assisted by a battalion of Nvidia Tesla K20X accelerators, which contribute 261,632 GPU pipeline cores between them.
The next best-performing Opteron-based model on this list is the #87-ranked Garnet, run by the U.S. Dept. of Defense’s Supercomputing Resource Center, scoring 1.17 petaflops in the Rmax. Garnet does not use GPU acceleration. But Titan scored 2142.77 megaflops/watt in power efficiency, while Garnet scored just 209.55.
The message here is that GPU accelerators clearly improve power efficiency in high-performance settings.
However, accelerators don’t necessarily make HPC designs perform more closely to their theoretical peak. When systems on the list are sorted according to yield — that interesting measure of observed maximal performance against theoretical peak performance — the highest-yielding system to use GPU acceleration ranks only 94th: the #69 JURECA system, built for Forschungszentrum Jülich (FZJ), the research institute in far western Germany near the Netherlands border. JURECA’s yield is about 84.13 percent.
The system with the highest yield this time is #295 NEMO bwForCluster, built for Freiburg University in Germany by Switzerland’s Dalco AG. Its yield is a stellar 98.78 percent.
Intel’s 14 nm “Broadwell” series Xeon E5 processors account for 69 of the Top 500, including the #11 system, a 241,920-core Cray XC40 based on Xeon E5-2695v4 processors. It scored 6.77 petaflops, though its megaflops/watt score has not been posted. But the older 22 nm “Haswell” series led the way, with 224 of the Top 500, the best performer of which is the #8 Piz Daint.