Animate-inanimate object categorization from minimal visual information in the human brain, human behavior, and deep neural networks
The distinction between animate and inanimate things is a major organizing principle of information in perception and cognition. Yet animacy, as a visual property, has so far eluded operationalization. Which visual features are necessary and sufficient to see animacy? At which level of the visual hierarchy does the animate-inanimate distinction emerge? Here, we show that the animate-inanimate distinction is preserved even among images of objects that are made unrecognizable and retain only low- and mid-level visual features of their natural version. In three experiments, healthy human adults viewed rapid sequences of images (6 Hz) in which, after every five exemplars of one category (e.g., animate), an exemplar of the other category (e.g., inanimate) was shown (1.2 Hz). Using frequency-tagging electroencephalography (ftEEG), we found significant neural responses at 1.2 Hz, indicating rapid and automatic detection of the periodic categorical change. Moreover, this effect persisted, although increasingly weaker, for 'impoverished' stimulus sets that retained only certain (high-, mid-, or low-level) features of the original colorful images (i.e., grayscale, texform, and phase-scrambled images), even when the images were unrecognizable. Similar effects were found with two deep neural networks (DNNs) presented with the same stimulus sets. In sum, reliable categorization effects for dramatically impoverished and unrecognizable images, in both human EEG and DNN data, demonstrate that animacy representation emerges early in the visual hierarchy and is remarkably resilient to the loss of visual information.
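The frequency-tagging logic of the paradigm can be illustrated with a minimal simulation (this is an illustrative sketch, not the authors' analysis code; the sampling rate, signal amplitudes, and noise level are assumptions): a response to every image appears at the 6 Hz base rate, and a response to the periodic category change appears at 6/5 = 1.2 Hz, so both are recoverable as peaks in the amplitude spectrum of the recorded signal.

```python
import numpy as np

# Assumed recording parameters (hypothetical, for illustration only).
fs = 512                      # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)  # 60 s of signal -> 1/60 Hz frequency resolution

rng = np.random.default_rng(0)
base = np.sin(2 * np.pi * 6.0 * t)            # response to every image (6 Hz)
oddball = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # category-change response (1.2 Hz)
eeg = base + oddball + rng.normal(scale=1.0, size=t.size)

# Amplitude spectrum; with 60 s of data, 1.2 Hz and 6 Hz fall exactly on FFT bins.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) / t.size

def snr_at(f, half_width=0.5):
    """Amplitude at frequency f relative to the mean of neighbouring bins."""
    i = np.argmin(np.abs(freqs - f))
    neighbours = (np.abs(freqs - f) > 2 / 60) & (np.abs(freqs - f) < half_width)
    return amp[i] / amp[neighbours].mean()

print(f"SNR at 1.2 Hz (oddball): {snr_at(1.2):.1f}")
print(f"SNR at 6.0 Hz (base):    {snr_at(6.0):.1f}")
```

Both tagged frequencies stand out clearly above the surrounding noise floor; in the actual experiments, the size of the 1.2 Hz response is what indexes how well the categorical distinction survives stimulus impoverishment.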