Optimizing Language Model Embeddings to Voxel Activity Improves Brain Activity Predictions
Recent studies have shown that contextual semantic embeddings from language models can accurately predict human brain activity during language processing. However, most studies use contextual embeddings with the same context length and model layer for all voxels, potentially overlooking meaningful variation across the brain. In this study, we investigate whether optimizing contextual embeddings for individual voxels improves their ability to predict brain activity during reading. We optimize embeddings for each voxel by selecting the best-predicting context length, model layer, or both. We perform this optimization with two types of stimuli (isolated sentences and narratives) and quantify the performance gains of optimized embeddings over standard fixed embeddings. Our results show that voxel-specific optimization substantially improves the prediction accuracy of contextual semantic embeddings. These findings demonstrate that voxel-specific contextual tuning provides a more accurate and nuanced account of how contextual semantic information is represented across the cortex.
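
As a rough illustration of the voxel-wise selection described above (a minimal sketch, not the authors' implementation), the code below assumes embeddings have already been extracted for each candidate (context length, layer) configuration, fits a cross-validated ridge-regression encoding model per configuration, and keeps, for each voxel, the configuration with the highest prediction correlation. The function name, variable names, the use of ridge regression, and the fold structure are all assumptions for illustration; temporal details such as HRF delays and feature downsampling are omitted.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold

    def voxelwise_best_config(embeddings, voxels, alpha=1.0, n_splits=5):
        """For each voxel, pick the embedding configuration (e.g. a
        (context_length, layer) tuple) whose cross-validated ridge
        predictions correlate best with the measured activity.

        embeddings: dict mapping config -> (n_timepoints, n_features) array
        voxels:     (n_timepoints, n_voxels) array of voxel responses
        """
        n_voxels = voxels.shape[1]
        best_score = np.full(n_voxels, -np.inf)
        best_config = np.empty(n_voxels, dtype=object)
        for config, X in embeddings.items():
            preds = np.zeros_like(voxels, dtype=float)
            # Contiguous (unshuffled) folds, since fMRI samples form a time series.
            for train, test in KFold(n_splits=n_splits).split(X):
                model = Ridge(alpha=alpha).fit(X[train], voxels[train])
                preds[test] = model.predict(X[test])
            # Per-voxel Pearson correlation between predicted and measured activity.
            scores = np.array([np.corrcoef(preds[:, v], voxels[:, v])[0, 1]
                               for v in range(n_voxels)])
            improved = scores > best_score
            best_score[improved] = scores[improved]
            for v in np.nonzero(improved)[0]:
                best_config[v] = config
        return best_config, best_score

Under these assumptions, the embeddings dictionary would be built by re-extracting language-model activations at each candidate layer and context window before calling the function; in practice the regularization strength and fold structure would be tuned per dataset.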