Posted by bioRxiv Subject Collection: Neuroscience ([info]syn_bx_neuro)
@ 2024-11-15 07:04:00


Modeling Complex Animal Behavior with Latent State Inverse Reinforcement Learning
Understanding complex animal behavior is crucial for linking brain computation to observed actions. While recent research has shifted towards modeling behavior as a dynamic process, few approaches exist for modeling long-term, naturalistic behaviors such as navigation. We introduce discrete Dynamical Inverse Reinforcement Learning (dDIRL), a latent state-dependent paradigm for modeling complex animal behavior over extended periods. dDIRL models animal behavior as driven by internal state-specific rewards, with Markovian transitions between the distinct internal states. Using expectation-maximization, we infer the reward function corresponding to each internal state and the transition probabilities between states from observed behavior. We applied dDIRL to water-starved mice navigating a labyrinth, analyzing each animal individually. Our results reveal three distinct internal states sufficient to describe behavior, including a consistent water-seeking state occupied for less than half the time. We also identified two clusters of animals with different exploration patterns in the labyrinth. dDIRL offers a nuanced understanding of how internal states and their associated rewards shape observed behavior in complex environments, paving the way for deeper insights into the neural basis of naturalistic behavior.
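
The abstract outlines the core machinery: state-specific reward functions, Markovian transitions between latent internal states, and expectation-maximization to recover both from observed trajectories. The sketch below is a hypothetical, minimal illustration of that recipe, not the authors' dDIRL code. It assumes (my choices, not the paper's): a toy 4x4 grid in place of the labyrinth, deterministic moves, K = 3 latent states, linear per-node rewards, max-ent (soft value iteration) policies, and a single gradient-ascent step on the rewards in place of a full inner IRL solve. All names and parameters are invented for illustration.

# Hypothetical latent-state max-ent IRL fit with EM -- a sketch, not the authors' dDIRL code.
import numpy as np
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

SIDE, N_ACT, K, GAMMA, VI_ITERS = 4, 4, 3, 0.9, 50   # toy sizes, assumed for illustration
N_NODE = SIDE * SIDE

def grid_next_node():
    # Deterministic successor table nxt[node, action]; bumping a wall keeps you in place.
    nxt = np.zeros((N_NODE, N_ACT), dtype=np.int32)
    for n in range(N_NODE):
        r, c = divmod(n, SIDE)
        for a, (rr, cc) in enumerate([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]):
            nxt[n, a] = rr * SIDE + cc if 0 <= rr < SIDE and 0 <= cc < SIDE else n
    return jnp.asarray(nxt)

NXT = grid_next_node()

def soft_policy_logprobs(reward):
    # Max-ent soft value iteration; returns log pi(action | node) for one latent state's reward.
    V = jnp.zeros(N_NODE)
    for _ in range(VI_ITERS):
        Q = reward[:, None] + GAMMA * V[NXT]
        V = logsumexp(Q, axis=1)
    Q = reward[:, None] + GAMMA * V[NXT]
    return Q - logsumexp(Q, axis=1, keepdims=True)

def step_loglik(rewards, nodes, acts):
    # log p(a_t | node_t, z = k) for every time step and latent state; shape (T, K).
    logpi = jax.vmap(soft_policy_logprobs)(rewards)      # (K, node, action)
    return logpi[:, nodes, acts].T

def forward_backward(log_emit, log_Pi, log_p0):
    # Standard HMM smoothing over the latent internal states, in the log domain.
    T = log_emit.shape[0]
    alpha = [log_p0 + log_emit[0]]
    for t in range(1, T):
        alpha.append(logsumexp(alpha[-1][:, None] + log_Pi, axis=0) + log_emit[t])
    beta = [jnp.zeros(K)]
    for t in range(T - 2, -1, -1):
        beta.insert(0, logsumexp(log_Pi + log_emit[t + 1] + beta[0], axis=1))
    alpha, beta = jnp.stack(alpha), jnp.stack(beta)
    post = alpha + beta
    post = post - logsumexp(post, axis=1, keepdims=True)
    xi = alpha[:-1, :, None] + log_Pi + (log_emit[1:] + beta[1:])[:, None, :]
    xi = xi - logsumexp(xi, axis=(1, 2), keepdims=True)
    return jnp.exp(post), jnp.exp(xi)

def em_fit(nodes, acts, n_iter=20, lr=0.5, seed=0):
    # EM loop: E-step smooths latent states; M-step re-estimates transitions and rewards.
    rewards = 0.01 * jax.random.normal(jax.random.PRNGKey(seed), (K, N_NODE))
    Pi, p0 = jnp.full((K, K), 1.0 / K), jnp.full(K, 1.0 / K)
    weighted_ll = lambda r, post: jnp.sum(post * step_loglik(r, nodes, acts))
    grad_fn = jax.grad(weighted_ll)
    for _ in range(n_iter):
        post, xi = forward_backward(step_loglik(rewards, nodes, acts),
                                    jnp.log(Pi), jnp.log(p0))          # E-step
        Pi = xi.sum(0) / xi.sum(0).sum(1, keepdims=True)               # M-step: transitions
        p0 = post[0]
        rewards = rewards + lr * grad_fn(rewards, post)                # partial M-step: rewards
    return rewards, Pi, post

# Toy usage: a random walk on the grid stands in for one mouse's trajectory.
rng = np.random.default_rng(0)
acts = jnp.asarray(rng.integers(0, N_ACT, size=200))
nodes = [0]
for a in np.asarray(acts)[:-1]:
    nodes.append(int(NXT[nodes[-1], a]))
nodes = jnp.asarray(nodes)
rewards, Pi, posterior = em_fit(nodes, acts)
print("estimated internal-state transition matrix:", np.asarray(Pi).round(2))

In this sketch the E-step is ordinary HMM forward-backward smoothing over the internal states, with per-step emission likelihoods given by each state's max-ent policy; the M-step re-normalizes expected transition counts and nudges each reward vector up the posterior-weighted log-likelihood, a generalized-EM shortcut standing in for whatever inner IRL solve the paper actually uses.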

