Visual contributions to the perception of speech in noise
Investigations of the role of audiovisual integration in speech-in-noise perception have largely focused on the benefits provided by lipreading cues. Nonetheless, audiovisual temporal coherence can offer a complementary advantage in auditory selective attention tasks. We developed an audiovisual speech-in-noise test to assess both the benefit of visually conveyed phonetic information and visual contributions to auditory streaming. The test was a video version of the Children's Coordinate Response Measure with a noun as the second keyword (vCCRMn). The vCCRMn allowed us to measure speech reception thresholds in the presence of two competing talkers under three visual conditions: a full naturalistic video (AV); a video interrupted during the target word presentation (Inter), thus providing no lipreading cues; and a static image of a talker with audio only (A). In each case, the video/image could display either the target talker or one of the two competing maskers. We assessed speech reception thresholds in each visual condition in 37 young (≤35 years old) normal-hearing participants. Lipreading ability was independently assessed with the Test of Adult Speechreading (TAS). Results showed that both the target-coherent AV and Inter visual conditions offered participants a listening benefit over the static-image audio-only condition, with the full AV target-coherent condition providing the most benefit. Lipreading ability correlated with the audiovisual benefit in the full AV target-coherent condition, but not with the benefit in the Inter target-coherent condition. Together, our results are consistent with visual information providing two independent benefits to listening: lipreading and enhanced auditory streaming.