Posted by bioRxiv Subject Collection: Neuroscience (syn_bx_neuro)
@ 2025-09-27 10:46:00


Predictive learning enables compositional representations
The brain builds predictive models to plan future actions. These models generalize remarkably well to new environments, but it is unclear how neural circuits acquire this flexibility. Here, we show that compositional representations emerge in Recurrent Neural Networks (RNNs) trained solely to predict future sensory inputs. Such representations have been observed in several brain areas, for example in the motor cortex of monkeys, where motor primitives are reused across sequences. They enable compositional generalization, in which independent modules representing different parts of the environment can be selected according to context, a mechanism that could explain the brain's adaptability. We trained an RNN to predict future frames in a visual environment defined by independent latent factors and their corresponding dynamics. We found that the network solved this task by developing a compositional internal model: it formed disentangled representations of the static latent factors, together with distinct, modular clusters, each selectively implementing a single dynamic. This modular and disentangled architecture enabled the network to generalize compositionally, accurately predicting outcomes in novel contexts composed of unseen combinations of dynamics. Our findings present a powerful, unsupervised mechanism for learning the causal structure of an environment, suggesting that predicting the future can be sufficient to develop generalizable world models.
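
The training setup described in the abstract, predict the next sensory frame and nothing else, is simple to reproduce in miniature. What follows is a minimal, hypothetical PyTorch sketch, not the authors' code: a toy generator produces frame sequences from two independent latent factors, each evolving under its own dynamic, and a vanilla RNN is trained with a next-frame mean-squared-error loss. All names and sizes (make_batch, FRAME_DIM, PROJ, and so on) are illustrative assumptions.

import torch
import torch.nn as nn

FRAME_DIM = 64    # flattened frame size (illustrative assumption)
HIDDEN_DIM = 128  # RNN state size (illustrative assumption)

# Fixed random "renderer": each latent factor drives one half of the frame.
torch.manual_seed(0)
PROJ = torch.randn(2, FRAME_DIM // 2)

def make_batch(batch_size=32, seq_len=20):
    """Toy sequences generated from two independent latent factors,
    each evolving under its own simple dynamic (constant phase speed)."""
    t = torch.arange(seq_len).float()
    phase = torch.rand(batch_size, 2, 1) * 6.28   # independent initial conditions
    speed = torch.rand(batch_size, 2, 1) * 0.5    # independent per-factor dynamics
    latents = phase + speed * t                   # (batch, 2, seq_len)
    halves = [torch.sin(latents[:, i]).unsqueeze(-1) * PROJ[i]
              for i in range(2)]                  # each (batch, seq_len, FRAME_DIM // 2)
    return torch.cat(halves, dim=-1)              # (batch, seq_len, FRAME_DIM)

class PredictiveRNN(nn.Module):
    """Vanilla RNN trained only to predict the next frame from the past."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(FRAME_DIM, HIDDEN_DIM, batch_first=True)
        self.readout = nn.Linear(HIDDEN_DIM, FRAME_DIM)

    def forward(self, frames):
        h, _ = self.rnn(frames)       # hidden state at every time step
        return self.readout(h)        # prediction of the following frame

model = PredictiveRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    frames = make_batch()
    pred = model(frames[:, :-1])                        # predict frame t+1 from frames 0..t
    loss = nn.functional.mse_loss(pred, frames[:, 1:])  # next-frame prediction objective
    opt.zero_grad()
    loss.backward()
    opt.step()

Under these assumptions, the analyses the abstract reports would correspond to probing the hidden states h after training: checking whether the static latent factors occupy disentangled subspaces, and whether distinct clusters of units are selectively engaged by each dynamic.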

