Posted by Data Center Knowledge | News and analysis for the data center industry - Industr ([info]syn_dcknowledge)
@ 2017-08-25 17:17:00


Artificial Intelligence and the Future of HPC Simulations

Robert Wisniewski is Chief Software Architect, Exascale Computing at Intel Corporation

When discussing artificial intelligence and how it relates to the future of high-performance computing (HPC), it’s important to begin by noting that while machine learning and deep learning are sometimes viewed as synonymous, there are distinguishing features of deep learning that have allowed it to come to the forefront.

Machine learning can be viewed as a sub-field of artificial intelligence, with a core underlying principle that computers can access data, recognize patterns and essentially ‘learn’ for themselves. Deep learning takes this one step further by increasing the depth of the neural network and providing massive amounts of data. The combination of the increased computational power of modern computers and the recent deluge of data from a myriad of sources has allowed the deep learning sub-discipline to produce impressive results in terms of capability and accuracy.
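To make the role of depth concrete, here is a rough sketch (an illustration for this article, not tied to any particular framework) that trains the same kind of network at two depths on the classic XOR pattern using only NumPy. With no hidden layer the pattern cannot be fit; a small stack of hidden layers learns it easily, which is the essence of what "increasing the depth" buys once enough data and compute are available.

import numpy as np

rng = np.random.default_rng(0)

def init(sizes):
    # One (weight, bias) pair per layer; small random weights.
    return [(rng.normal(0.0, 0.5, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # Return the activations of every layer (tanh units throughout).
    acts = [x]
    for W, b in params:
        x = np.tanh(x @ W + b)
        acts.append(x)
    return acts

def train(sizes, X, y, lr=0.3, steps=5000):
    params = init(sizes)
    for _ in range(steps):
        acts = forward(params, X)
        # Mean-squared-error gradient at the output, then plain backpropagation.
        delta = (acts[-1] - y) * (1.0 - acts[-1] ** 2)
        for i in range(len(params) - 1, -1, -1):
            W, b = params[i]
            gW = acts[i].T @ delta / len(X)
            gb = delta.mean(axis=0)
            delta = (delta @ W.T) * (1.0 - acts[i] ** 2)
            params[i] = (W - lr * gW, b - lr * gb)
    return params

# XOR is not linearly separable; targets are encoded as -1/+1 for the tanh output.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[-1.], [1.], [1.], [-1.]])

for name, sizes in [("no hidden layer  ", [2, 1]),
                    ("two hidden layers", [2, 8, 8, 1])]:
    preds = forward(train(sizes, X, y), X)[-1]
    print(name, np.round(preds.ravel(), 2))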

Within the HPC community we have witnessed the uptake of both machine learning and, in particular, deep learning. Given the successes mentioned above, there is a high likelihood that in the near future we will see an increasing number of HPC computation cycles being used for machine learning, deep learning, and other artificial intelligence capabilities. For people designing the computers and the system software to support them, it is important to realize that while AI is revolutionizing computation and is exciting, a large percentage of HPC cycles will continue to be required by traditional simulation – and even the end state is likely to be a combination of AI and simulation, as they are symbiotic.

Thus, the move toward AI needs to be approached in a balanced way because while there is no doubt that it will form an important and increasing portion of HPC, it is not going to replace classical simulation cycles. Simulation cycles will remain, and on large HPC machines they will continue to be a critical step. The difference is that there will now be an additional class of cycles that includes machine learning and deep learning.

Ten years ago, the amount of data being generated was only just beginning to reach the point where machine learning and deep learning algorithms could be used successfully. Now that there is a critical mass of data and the ability to store and access it efficiently, significant strides can be made in many scientific fields. For example, in climatology, scientists have been able to couple the massive amounts of data with deep learning algorithms to discover new weather phenomena. This is just one discipline in which huge amounts of data are generated by simulations, and coupling that data with deep learning algorithms is starting to form an important tool base for scientific discovery.

Trends like this indicate that machine learning and deep learning algorithms will increasingly be run in conjunction with existing HPC algorithms, and will help them analyze the data. By combining the capabilities of a cohesive and comprehensive software stack that runs and manages HPC systems from small turn-key machines to large supercomputers with those of machine-learned algorithms, enterprises can benefit from significant computational savings in terms of both time and effort. Redundant system administrator tasks can be eliminated, software upgrades facilitated, and time can instead be devoted to customization and the programming of machine learning algorithms relevant to an organization’s needs.

Intel HPC Orchestrator democratizes HPC and aids entry into the machine and deep learning space by easing the burden of building and maintaining HPC systems. It allows organizations the opportunity to leverage HPC capability, and provides a more efficient ecosystem by eliminating duplicative work across organizations and by allowing hardware innovations to be realized more quickly.

The efficient enabling occurs because Intel develops the software stack in parallel with the hardware. In addition to providing the composability of frameworks across HPC and AI, we are investigating technologies to more tightly couple the applications in an HPC-AI workflow. Depending on the granularity of the coupling, different technologies are needed. For example, if users need to loosely couple their applications across a machine, the resource management infrastructure may need to be enhanced.
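As a rough sketch of what such loose coupling can look like today, the snippet below chains a simulation job and a deep-learning analysis job through the scheduler, so the analysis only starts once the simulation completes successfully. It assumes a SLURM-managed cluster; simulate.sh and train_model.sh are hypothetical job scripts standing in for real applications, while the sbatch options --parsable and --dependency=afterok are standard SLURM features.

import subprocess

def submit(*sbatch_args):
    """Submit a job with sbatch and return its job ID (via --parsable)."""
    out = subprocess.run(["sbatch", "--parsable", *sbatch_args],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip().split(";")[0]

sim_id = submit("simulate.sh")                       # classical HPC simulation
ml_id = submit(f"--dependency=afterok:{sim_id}",     # starts only if the sim succeeds
               "train_model.sh")                     # deep-learning analysis
print(f"simulation job {sim_id}, analysis job {ml_id}")

Coupling at this level exchanges data through the filesystem between jobs; anything finer-grained than that is where the tighter-coupling technologies discussed next come in.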

For more complex scenarios, such as running a simulation algorithm concurrently with an AI algorithm that must access the same data without spending time moving that data in and out of the system, users will want tighter coupling, perhaps even at node granularity. Some researchers are already working on this with positive results. To make this possible, we need capabilities that allow us to take a given node, or a local set of nodes, and divide it up in a manner that allows both the new artificial intelligence algorithm and the classical HPC simulation to run.
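A minimal sketch of that idea, assuming a Linux node and using ordinary process affinity rather than mOS or any Intel-specific technology: the node's cores are split between a simulation process and an AI consumer, and the two exchange data in memory instead of through storage. run_sim_step and update_model are hypothetical placeholders for real simulation and training code.

import os
import multiprocessing as mp

def run_sim_step(step):
    # Placeholder for one iteration of a classical simulation.
    return {"step": step, "field": [step * 0.1] * 4}

def update_model(sample):
    # Placeholder for an incremental deep-learning update on simulation output.
    pass

def simulation(queue, cores):
    os.sched_setaffinity(0, cores)          # pin this process to the "simulation" cores
    for step in range(10):
        queue.put(run_sim_step(step))       # hand each snapshot to the AI side in memory
    queue.put(None)                         # sentinel: simulation finished

def analysis(queue, cores):
    os.sched_setaffinity(0, cores)          # pin to the remaining "AI" cores
    while (sample := queue.get()) is not None:
        update_model(sample)

if __name__ == "__main__":
    total = os.cpu_count() or 2
    half = max(1, total // 2)
    sim_cores = set(range(half))                       # first half of the node
    ai_cores = set(range(half, total)) or sim_cores    # remainder of the node
    q = mp.Queue()
    procs = [mp.Process(target=simulation, args=(q, sim_cores)),
             mp.Process(target=analysis, args=(q, ai_cores))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

Plain affinity only partitions the cores; it does not remove the system-software noise discussed below, which is where approaches like mOS aim to go further.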

Because the simulation portion is sensitive to noise (execution-time variance caused by system software interruptions and the like, which induces differences in application progress), we need technologies that can isolate the simulation execution from the AI execution. While containers provide some amount of isolation, through our mOS architecture (an extreme-scale operating systems effort), we are looking to provide greater degrees of isolation. We are looking to take technologies such as mOS and make them available through Intel HPC Orchestrator, as well as contribute them to OpenHPC (www.openhpc.community).

In addition to coupling HPC and AI applications, another area where this tighter coupling can be beneficial is uncertainty quantification, which allows scientists to provide tighter error bounds on simulation results. To continue the earlier weather example, a prediction of a hurricane's path is not a single line, but rather a probability distribution of likely paths. This is because the conditions affecting the hurricane's path each have a certain error associated with them. When these errors are mathematically combined, the result is the hurricane plot commonly shown on weather maps. The ability to accurately represent the error on a simulation plays an important role in scientific studies.

Uncertainty quantification involves running a set of lower-fidelity simulations with different error-condition models and with varying initial starting points. Today, the coupling between these runs is relatively coarse-grained. In some fields, if the coupling can be made finer-grained, potentially even at an iteration granularity, much tighter, higher-fidelity error bars are achievable. This is just one example of where coupling could benefit classical simulation. For machine and deep learning, there are many additional possibilities where tighter coupling can be beneficial.
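As a toy illustration of the ensemble idea (NumPy only; the dynamics are a stand-in, not a real forecast model), the sketch below runs many low-fidelity members with perturbed initial conditions and parameters, then summarizes their spread, which is essentially how the forecast-cone error bars mentioned above are produced. In the coarse-grained setting shown here, members only exchange information at the end; finer-grained coupling would let them share information every iteration.

import numpy as np

rng = np.random.default_rng(42)
n_members, n_steps = 200, 48          # ensemble size, forecast hours

def simulate_track(x0, drift, noise):
    """One low-fidelity member: position drifts with random perturbations."""
    steps = drift + noise * rng.normal(size=n_steps)
    return x0 + np.cumsum(steps)

tracks = np.array([
    simulate_track(x0=rng.normal(0.0, 0.5),        # uncertain initial position
                   drift=rng.normal(1.0, 0.1),     # uncertain steering conditions
                   noise=0.3)
    for _ in range(n_members)
])

median = np.median(tracks, axis=0)
lo, hi = np.percentile(tracks, [5, 95], axis=0)    # 90% spread, like a forecast cone
print(f"hour 48: median {median[-1]:.1f}, 90% band [{lo[-1]:.1f}, {hi[-1]:.1f}]")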

It seems clear that the paths of machine and deep learning and of classical HPC simulation are converging. However, for the near future at least, machine-learned algorithms will remain best suited to filling in gaps of information, especially in areas where simulated interactions rather than predictions are needed. In the current computing landscape, algorithms that can learn and make predictions from incredibly large datasets are entering existing workflows and enabling enterprises to save costs, streamline processes, and drive forward innovation.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.



