A proposal for unifying statistical learning at different scales: Long-Horizon Associative Learning
Sensory inputs often display complex temporal interdependencies that typically conform to a latent underlying structure, whose learning facilitates efficient interaction with the environment. However, temporal dependencies may span a large range of scales, from consecutive items (Saffran et al., 1996) to relations in longer sequences (Dehaene et al., 2015; Schapiro et al., 2013). Whereas numerous models have been proposed to explain statistical learning at different scales (Fiser and Lengyel, 2022), we argue here that a single unifying model can account for the different scales of statistical learning, including adjacent and non-adjacent transitions as well as network structures, even in the absence of first-order transition probability information. Based on behavioral and MEG results, we previously showed that a long-horizon associative learning process (a biologically plausible implementation of the successor representation, or equivalently of the Free-Energy Minimization Model (FEMM) originally proposed by Lynn and colleagues (2020a)) was able to explain learning based on local statistical properties but also on high-order network properties (Benjamin et al., 2024, 2023a). Here, we extend our proposal and describe how this long-horizon associative learning model accounts for various results from the literature involving different scales of dependency between elements. Long-horizon associative learning thus provides a unified framework that spans diverse statistical scales, offering a single explanation for results across the literature. We thus aim to replace the numerous metrics described in the statistical learning literature, each tuned to a specific scale and occasionally in conflict with one another, with this unique parsimonious approach.
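
To make the model class concrete, the sketch below (our own illustration, not code from the cited papers) computes the expected long-horizon association matrix in closed form, A_hat = (1 - eta) * sum_{k >= 0} eta^k T^(k+1) = (1 - eta) * T * (I - eta*T)^(-1), i.e. a geometrically discounted sum of powers of the first-order transition matrix T, as in the successor representation and in the FEMM of Lynn and colleagues (2020a). The function name, the parameter eta, and the toy ring graph are illustrative assumptions.

import numpy as np

def long_horizon_associations(T, eta=0.5):
    # Closed-form long-horizon association matrix:
    #   A_hat = (1 - eta) * sum_{k >= 0} eta^k T^(k+1)
    #         = (1 - eta) * T @ inv(I - eta * T)
    # T   : row-stochastic first-order transition matrix (n x n)
    # eta : horizon parameter in [0, 1); eta -> 0 recovers plain
    #       transition probabilities, while larger eta weights
    #       longer horizons more heavily.
    n = T.shape[0]
    return (1.0 - eta) * T @ np.linalg.inv(np.eye(n) - eta * T)

# Toy example: a 4-node ring. States two steps apart (e.g. 0 and 2)
# have zero first-order transition probability, yet acquire a nonzero
# long-horizon association, illustrating how non-adjacent and
# network-level structure emerge from the same computation that
# captures adjacent transitions.
T = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
print(long_horizon_associations(T, eta=0.5))

Note that the resulting matrix remains row-stochastic, so it can be read as a set of association strengths over all horizons rather than a single-step prediction; this is the property that lets one measure subsume the separate adjacent, non-adjacent, and network-level metrics discussed above.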