Posted by bioRxiv Subject Collection: Neuroscience ([info]syn_bx_neuro)
@ 2025-06-25 20:02:00


Theory of Temporal Pattern Learning in Echo State Networks
Echo state networks are well-known for their ability to learn temporal patterns through simple feedback to a large recurrent network with random connections. However, the learning process itself remains poorly understood. We develop a quantitative theory that explains learning in a regime where the network dynamics is stable and the feedback is weak. We show that the dynamics is governed by a finite number of master modes whose nonlinear interactions can be described by a normal form. This formulation provides a simple picture of learning as a Fourier decomposition of the target pattern with amplitudes determined by nonlinear interactions that, remarkably, become independent of the network randomness in the limit of large network size. We further show that the description extends to moderate feedback and recurrent networks with multiple unstable modes.
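The setup the abstract describes — a large random recurrent network, stable dynamics (spectral radius below one), and a weak feedback loop carrying the output back into the reservoir — can be sketched in code. The following is a minimal illustration of the standard echo-state approach (teacher forcing during training, ridge-regression readout, then autonomous generation), not the paper's analysis; all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the paper)
N = 400              # reservoir size: the "large recurrent network"
rho = 0.9            # spectral radius < 1: stable reservoir dynamics
fb_scale = 0.5       # weak output feedback into the reservoir
T_train, T_test = 2000, 200
washout = 200        # initial transient discarded before fitting

# Random recurrent weights, rescaled to spectral radius rho
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
W_fb = fb_scale * rng.uniform(-1.0, 1.0, size=N)  # feedback weights

# Target temporal pattern: a simple periodic signal
t = np.arange(T_train + T_test)
y = np.sin(2 * np.pi * t / 25.0)

# Teacher forcing: drive the reservoir with the target via the feedback loop
x = np.zeros(N)
X = np.zeros((T_train, N))
for k in range(T_train):
    x = np.tanh(W @ x + W_fb * y[k])
    X[k] = x

# Linear readout by ridge regression: state at step k -> target at step k+1
lam = 1e-6
A = X[washout:-1]
b = y[washout + 1:T_train]
W_out = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ b)

# Autonomous generation: replace the teacher signal with the network's own output
preds = np.zeros(T_test)
for k in range(T_test):
    out = W_out @ x
    preds[k] = out
    x = np.tanh(W @ x + W_fb * out)

err = np.sqrt(np.mean((preds - y[T_train:T_train + T_test]) ** 2))
print(f"autonomous RMSE over {T_test} steps: {err:.3f}")
```

This is the regime the theory addresses: with `rho < 1` the reservoir satisfies the echo state property, and with small `fb_scale` the feedback perturbs the stable dynamics only weakly, which is where the master-mode / normal-form description is claimed to apply.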

