Posted by bioRxiv Subject Collection: Neuroscience ([info]syn_bx_neuro)
@ 2025-09-26 04:35:00


A Computational Perspective on the No-Strong-Loops Principle in Brain Networks
Cerebral cortical networks in the mammalian brain exhibit a non-random organization that systematically avoids strong reciprocal projections, particularly in sensory hierarchies. This "no-strong-loops" principle is thought to prevent runaway excitation and maintain stability, yet its computational impact remains unclear. Here, we use computational analysis and modeling to show that connectivity asymmetry supports high working-memory capacity, whereas increasing reciprocity reduces memory capacity and representational diversity in reservoir-computing models of recurrent neural networks. We systematically examine synthetic architectures inspired by mammalian cortical connectivity and find that sparse, modular, and hierarchical networks achieve superior performance relative to random, small-world, or core-periphery graphs, but only when reciprocity is constrained. Validated on directed macaque and marmoset connectomes, these results indicate that restricting reciprocal motifs yields functional benefits in sparse networks, consistent with an evolutionary strategy for stable, efficient information processing in the brain. These findings suggest a biologically inspired design principle for artificial neural systems.
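The reciprocity-versus-memory-capacity relationship described above can be illustrated with a minimal echo-state-network experiment. This is a hedged sketch, not the authors' actual pipeline: the `reciprocity` knob, the network sizes, and the delayed-recall memory-capacity measure are all illustrative assumptions, using the standard definition of memory capacity as the summed squared correlation between delayed inputs and their linear reconstructions from reservoir states.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100          # reservoir size (illustrative choice)
T = 2000         # simulation length in time steps
washout = 200    # initial transient discarded before fitting readouts
max_delay = 20   # longest input delay tested

def make_reservoir(reciprocity, density=0.1, spectral_radius=0.9):
    """Sparse random reservoir. `reciprocity` in [0, 1] blends a fully
    asymmetric matrix (0.0) toward its symmetric part (1.0), i.e. toward
    strong reciprocal loops. This blending knob is an assumption for
    illustration, not the paper's construction."""
    W = rng.normal(size=(N, N)) * (rng.random((N, N)) < density)
    W = (1 - reciprocity) * W + reciprocity * (W + W.T) / 2
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def memory_capacity(W):
    """Sum over delays k of the squared correlation between u(t-k) and
    its best linear readout from the reservoir state x(t)."""
    w_in = rng.normal(size=N)
    u = rng.uniform(-1, 1, size=T)
    x = np.zeros(N)
    states = np.zeros((T, N))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        states[t] = x
    X = states[washout:]
    mc = 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k:T - k]          # input delayed by k steps
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        mc += np.corrcoef(X @ coef, target)[0, 1] ** 2
    return mc

# Compare a loop-avoiding (asymmetric) reservoir with a fully
# reciprocal (symmetric) one of the same density and spectral radius.
mc_asym = memory_capacity(make_reservoir(reciprocity=0.0))
mc_sym = memory_capacity(make_reservoir(reciprocity=1.0))
```

Under this setup, each squared correlation lies in [0, 1], so the capacity is bounded by `max_delay`; the abstract's claim corresponds to the asymmetric reservoir typically scoring higher than the symmetric one at matched density and spectral radius.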

