Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, April 19th, 2016

    3:00p
    Wireless Interconnects Promise Big Data Center Efficiency Wins

    It’s no secret that the US government invests a lot of money in research and development efforts around more and more powerful computing systems. Some of that money goes to researchers who spend time pushing the boundaries of energy efficiency of computers and data centers.

    The latest example of this investment is a grant to an assistant professor at the Rochester Institute of Technology who believes it’s possible to achieve significant energy efficiency improvements in data centers by eliminating physical interconnects both within and between servers.

    Amlan Ganguly, a faculty member at RIT’s Kate Gleason College of Engineering, has been publishing research papers on wireless and photonic communication mechanisms within circuits for several years. His next project is to scale that approach beyond the chip, enabling wireless interconnection between the components of a server and between servers in a data center. The nearly $600,000 grant from the National Science Foundation will fund those efforts over the next five years.

    “We want to revolutionize that mechanism of communication within servers with wireless interconnects,” Ganguly said in a statement. “The crux of the approach is to replace the legacy internet type of connections with the novel wireless technology which we project to be significantly more power efficient than the current state of the art.”

    He described the project as “high-risk,” citing significant challenges in interconnecting what could be tens to hundreds of servers over the same wireless frequency. Many radios sharing one channel produce crosstalk, or interference, which makes it difficult to manage that communication effectively.
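
    The source doesn’t say which medium-access scheme Ganguly’s team will pursue, but the textbook answer to many radios contending for one frequency is time-division multiple access (TDMA): give each node a dedicated transmit slot so transmissions never collide. The toy Python sketch below, an illustration only and not the project’s method, hints at why that simple answer scales poorly to hundreds of servers.

        # Toy TDMA illustration (not Ganguly's method): every server gets a
        # recurring, exclusive transmit slot on the shared frequency.
        def tdma_schedule(num_servers: int, slot_ms: float = 0.1):
            """Return each server's (start, end) transmit window within one frame."""
            frame_ms = num_servers * slot_ms
            schedule = {s: (s * slot_ms, (s + 1) * slot_ms) for s in range(num_servers)}
            return schedule, frame_ms

        schedule, frame_ms = tdma_schedule(100)
        start, end = schedule[42]
        # With 100 servers on one channel, each transmits only 1% of the time
        # and waits up to ~10 ms per frame -- a hint of why scaling a single
        # shared frequency to hundreds of nodes needs smarter coordination.
        print(f"frame: {frame_ms:.1f} ms; server 42 transmits {start:.1f}-{end:.1f} ms")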

    This is not the first time the NSF has funded a research project Ganguly has been involved in. At least one project where he was the lead investigator, and three others where he participated in a non-principal role, have received grants from the foundation over the last seven years. Most of them researched wireless on-chip communication.

    3:30p
    Backup is Broken – Enterprises Need Availability Instead

    Peter Ruchatz is CMO of Veeam.

    Backup is broken. Anyone who has had to work with enterprise backup knows this to be the case. Gartner, in fact, published a report six years ago titled, “Best Practices for Addressing the Broken State of Backup.” One would think that, given how awful the state of backup was in 2010, the situation would have improved by now. But, unfortunately, the broken state of backup is actually getting worse, not better.

    For example, a global survey of CIOs and IT pros in 2015 showed that, on average, an organization experienced 15 unplanned downtime events that year. This compares to the average of 13 reported in 2014. In addition, unplanned mission-critical application downtime length grew 36 percent from 1.4 hours to 1.9 hours year over year, and non-mission-critical application downtime length grew 45 percent from 4 hours to 5.8 hours. These outages cost the average organization $16 million a year, up 60 percent over 2014.

    The central problem is that backup cannot provide what organizations really need: availability. After all, when a mission-critical application is down or the file server has crashed beyond repair, it’s cold comfort to have a backup of the data somewhere across town on a tape in an underground vault. The enterprise is undergoing a digital transformation in which executives, employees, customers and partners expect to have 24/7/365 access to data.

    Downtime is unacceptable, and the pressure to enable a truly always-on business is growing daily. However, CIOs are far from meeting expectations around data availability, a fact of which they are painfully aware: 84 percent acknowledged they currently have an availability gap, defined as the gap between the constant access users demand and what IT departments actually deliver. Also, most organizations (96 percent) have increased their service-level requirements to minimize application downtime over the past two years. Alas, the availability gap remains.

    How far away are organizations from delivering availability? According to the survey, service-level agreements (SLAs) around recovery time objectives (RTOs) are set on average at 1.6 hours, which, to be frank, is far too long for a critical application to be down. But organizations aren’t even meeting this standard: their average recovery time is three hours, nearly double their average SLA. Similarly, the average SLA for recovery point objectives (RPOs) is 2.9 hours, while the average actually delivered is 4.2 hours.
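
    To make the gap concrete, here is the arithmetic behind those survey figures as a quick Python check:

        # Survey figures quoted above: SLA targets vs. averages delivered, in hours.
        sla_rto, actual_rto = 1.6, 3.0   # recovery time objective
        sla_rpo, actual_rpo = 2.9, 4.2   # recovery point objective

        print(f"RTO miss: {actual_rto / sla_rto:.2f}x the SLA "
              f"({actual_rto - sla_rto:.1f} extra hours of downtime per incident)")
        print(f"RPO miss: {actual_rpo / sla_rpo:.2f}x the SLA "
              f"({(actual_rpo - sla_rpo) * 60:.0f} extra minutes of data at risk)")
        # RTO miss: 1.88x the SLA (1.4 extra hours of downtime per incident)
        # RPO miss: 1.45x the SLA (78 extra minutes of data at risk)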

    Users want support for real-time operations (63 percent) and 24/7 global access to IT services to support international business (59 percent). But availability requires far more than creating a backup of the data every night. It requires backing up throughout the day, every 15 minutes, without affecting the performance of the production environment. Each of these backups needs to be tested to ensure that it will restore properly, and IT needs to be able to recover within 15 minutes if an application goes down or data is lost.
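
    In practice that cadence amounts to a continuous snapshot-and-verify loop. The Python sketch below is a minimal illustration; take_snapshot and verify_restore are hypothetical placeholders for whatever snapshot and sandbox-restore mechanism an organization actually uses.

        import time

        SNAPSHOT_INTERVAL_S = 15 * 60  # the 15-minute cadence described above

        def take_snapshot():
            """Hypothetical hook: capture an incremental, low-impact snapshot
            (e.g., storage- or hypervisor-level, not a full nightly copy)."""
            ...

        def verify_restore(snapshot):
            """Hypothetical hook: mount or boot the snapshot in an isolated
            sandbox to prove it restores -- an untested backup is not availability."""
            ...

        def availability_loop():
            # Snapshot every 15 minutes, then immediately prove each snapshot
            # is restorable, keeping worst-case data loss (RPO) and recovery
            # time (RTO) both near the 15-minute mark.
            while True:
                verify_restore(take_snapshot())
                time.sleep(SNAPSHOT_INTERVAL_S)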

    Again, CIOs are aware of the challenge ahead of them. When modernizing their data centers, CIOs cite high-speed recovery (59 percent) and data loss avoidance (57 percent) as the two most sought-after capabilities, but they say that high costs and insufficiently skilled personnel are inhibiting deployment. The problem is that CIOs are trying to leverage traditional backup to provide availability, and that’s an impossible task. Instead, IT organizations need to look beyond backup as they modernize their data centers and exploit the combined power of modern technologies like cloud, virtualization and advanced storage to achieve true availability.

    CIOs don’t need to wait for a future solution to realize the always-on business. The solution is already here, so long as they leverage the right mix of advanced technologies to achieve it.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Analysts: Public Cloud Adoption to Create “Major Ripple Effect”
    By Talkin’ Cloud

    Analysts from JP Morgan see a future where AWS and Microsoft Azure have a significant place within the infrastructure of the biggest enterprises – perhaps eventually replacing legacy vendors like HPE, IBM and Oracle.

    According to a report by Barron’s that outlines the main points of a 50-page note JP Morgan sent to clients last week, “IBM, HP, and Oracle are the top 3 most at-risk vendors for losing share of IT budget as the world shifts workloads to IaaS vendors.”

    Analysts from JP Morgan interviewed 207 chief information officers from companies with annual budgets in excess of $600 million, and found that CIOs used words like “transformative power” to describe AWS and its impact on their infrastructure. One CIO even went so far as to say that their organization is planning to go “all in with AWS.”

    While Microsoft and AWS were cited as the most critical and indispensable mega IT vendors by 48.9 percent and 13 percent of respondents, respectively, HP was cited as the least critical vendor on the list.

    Aside from mega-vendors, JP Morgan also gauged the popularity of smaller vendors within the large enterprise space. In this area, business intelligence providers Tableau and Qlik were among the most popular, as was security provider Palo Alto Networks; cloud company ServiceNow and VMware (AirWatch) rounded out the top five list of smaller software vendors impressing CIOs most with “their technology, vision, and value-add.”

    This first ran at http://talkincloud.com/cloud-computing/analysts-public-cloud-adoption-create-major-ripple-effect

    5:13p
    NY4: Inside Equinix’s Crown Jewel in New Jersey

    NY4, Equinix’s nearly 340,000-square-foot data center just across the Hudson River from Manhattan, is “where Wall Street actually transacts,” write Matthew Leising and Annie Massa of Bloomberg Markets in their recently published profile of the facility.

    Hosting infrastructure for 49 exchanges, the Secaucus, New Jersey, site is one of the global financial industry’s most important locations few people have ever heard of. It’s where more than 6,300 companies interconnect, creating a hive of a trading ecosystem whose density any competitor would be hard-pressed to match.

    The piece is both a profile of Equinix and its facilities and a beginner’s introduction to the backend of the electronic trading market as we currently know it.

    For more details on the facility, here’s a profile of NY4 that Data Center Knowledge ran in 2014. Also, check out our photo tour of the NY4 facility.

    One of the customers at NY4 is Lucera, a modern version of Radianz, the company that in the 2000s pioneered the idea that participants in the trading ecosystem would gladly outsource the complicated and expensive task of setting up and managing interconnection.

    Radianz, which British Telecom acquired in 2005 for $130 million, gave banks like Goldman Sachs a way out of the telecom infrastructure business. It offered them access to all major financial institutions through a single connection, Leising and Massa write, replacing the need to lay their own cable and set up their networks, which they had been doing since the 1980s.

    Read more: The Data Centers Powering High Frequency Trading

    Fast-forward to today, and providing infrastructure services to the trading market is its own industry, in which even the exchange operators themselves have found big new revenue opportunities. NY4 and the other data centers operated by Equinix and some of its competitors are where this industry’s physical manifestation can be seen.

    Read Bloomberg’s profile on NY4 here.

    5:23p
    Microsoft Launches Azure Container Service
    By Talkin’ Cloud

    Microsoft announced general availability of its Azure Container Service on Tuesday to help organizations deploy and operate containerized application workloads.

    According to a blog post by Ross Gardler, Azure senior program manager, Azure Container Service provides simplified configurations of open source container orchestration technology that is optimized to run in the cloud.

    The Azure Container Service is built on 100 percent open source software and offers a choice between the popular orchestration engines DC/OS and Docker Swarm.
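
    One practical consequence, at least with the Swarm option, is that the cluster speaks the standard Docker API, so existing Docker tooling can target it unchanged. Below is a hedged Python sketch, not official ACS documentation: the endpoint is a made-up placeholder, and access in practice typically goes through an SSH tunnel to the Swarm master.

        # Point the standard Docker SDK for Python at the (hypothetical)
        # Swarm endpoint exposed by the cluster.
        import docker

        client = docker.DockerClient(base_url="tcp://localhost:2375")

        # Deploy a containerized workload exactly as against a local daemon.
        container = client.containers.run("nginx:alpine", detach=True,
                                          ports={"80/tcp": 8080})
        print(container.short_id)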

    “We built Azure Container Service to be the fastest way to get the benefits of running containerized applications, using your choice of open source technology, tools and skills and with the support of a thriving community and ecosystem,” Gardler said.

    The service was first announced in September and hit public preview in February.

    Microsoft also joined Mesosphere’s Data Center Operating System (DC/OS) project on Tuesday, alongside other corporate members of the project including Accenture, Cisco and HPE, according to ZDNet.

    “With the general availability of the Azure Container Service, containers are ready for prime-time in the cloud, enabling organizations to transform the excitement and hype into concrete business value quickly and with confidence,” Gardler added. “Thousands of customers are already running containerized applications in Azure, converting the promise of agility and efficiency at cloud scale into business results.”

    This first ran at http://talkincloud.com/cloud-computing-and-open-source/microsoft-launches-azure-container-service

