The Healthy and Impaired Multisensory Talking Brain

Humans are experts at perceiving speech, even though the quality of the auditory speech signal we produce or hear is often sub-optimal because of background noise and speaker variability. One reason why we nevertheless experience hardly any perceptual problems in a face-to-face conversation is that our brain uses two additional streams of sensory information that generate non-auditory predictions about the upcoming sound. That is, we first need to plan and execute a set of fine-grained motor commands to correctly shape our vocal apparatus before we can produce the correct speech sound, and, as a consequence, we actually see an external speaker's articulatory gestures before we hear the sound. It is well established that both the preceding motor information and the preceding visual (i.e., lip-read) information modulate the way in which the self- or externally generated speech sound is processed. However, the effects of motor and lip-read information on auditory speech processing have always been studied in isolation, and the current proposal is set up to determine the multisensory interplay between auditory speech, lip-read speech, and self-generated motor commands.
Plan Nacional, Spanish Government (2015-2018)
PIs: Baart, M., Pourquié, M.