Topic 2.1 - Temporal course of auditory, labial, and manual signals in Cued Speech (CS) perception (ESR 5)


The topic "Temporal course of auditory, labial, and manual signals in Cued Speech (CS) perception" will be hosted at the Université libre de Bruxelles (BE) and supervised by Cécile Colin and Jacqueline Leybaert.

CS is a system of manual gestures produced near the speaker’s face whose shape and position disambiguate lipreading. Because it enables the development of accurate phonological representations, it has a very positive impact on speech perception and production, as well as on speech-related abilities (e.g. reading, up to the mean level of matched hearing individuals), in persons with hearing impairment (HI). This is the case even in children fitted with a cochlear implant (CI), since the CI does not transmit speech-related spectro-temporal cues properly. Previous work from our team demonstrated that patients with a CI do integrate auditory, lipread, and manual information in speech perception, and that the weight allocated to these three signals is modulated by expertise in CS and by the degree of auditory recovery (Bayard, Colin & Leybaert, 2014). However, the level of processing at which information from the CS manual cues, from the lips, and from the sound is integrated is not yet fully understood. This is why we will use ERPs targeting automatic as well as attentive processing. In separate experiments, using paradigms specifically designed to elicit sensory exogenous (N1/P2) vs cognitive (MMN/P300) potentials, we will compare the brain electrical activity elicited by consonant-vowel syllables presented in all unimodal and (congruent or incongruent) bi- or tri-modal stimulation conditions, in hearing children and in children fitted with a CI who are expert or not in CS and (for children with HI) have different degrees of auditory abilities.
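As an illustration of how such a stimulation design could be organised (the condition labels, deviant proportion, and sequencing constraint below are assumptions made for the sketch, not the actual protocol), a short Python snippet can enumerate the unimodal, bi-modal, and tri-modal conditions and build a pseudo-random oddball sequence of the kind used to elicit the MMN:

```python
import itertools
import random

# Hypothetical modality labels; the actual stimuli are consonant-vowel
# syllables presented auditorily, via lipreading, and via CS manual cues.
MODALITIES = ["auditory", "lipread", "manual_cue"]

# Unimodal conditions: each signal presented alone.
unimodal = [(m,) for m in MODALITIES]

# Bi- and tri-modal conditions: every combination of two or three signals,
# each presented either congruently or incongruently across signals.
multimodal = []
for r in (2, 3):
    for combo in itertools.combinations(MODALITIES, r):
        for congruence in ("congruent", "incongruent"):
            multimodal.append(combo + (congruence,))

print(f"{len(unimodal)} unimodal and {len(multimodal)} multimodal conditions")

def make_oddball_sequence(n_trials=400, p_deviant=0.15, seed=0):
    """Pseudo-random standard/deviant sequence with no two consecutive
    deviants (an assumed, conventional constraint in oddball designs)."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")          # enforce spacing after a deviant
        else:
            seq.append("deviant" if rng.random() < p_deviant else "standard")
    return seq

print(make_oddball_sequence()[:20])
```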

If integration occurs automatically at a low level, the N1/P2 should be smaller in amplitude in bi- and tri-modal conditions relative to the unimodal conditions; depending on the condition, this amplitude reduction is expected to be modulated by auditory status, expertise in CS, and auditory abilities. MMN and P300 amplitudes, in turn, tap higher-level, later processing stages; since both will be recorded within the same attentive oddball paradigm, we will be able to distinguish high-level automatic (MMN) from attentive (P300) processing stages, which are also expected to be modulated by auditory status, CS expertise, and auditory abilities. We hope that the results of this task will help clinicians better take into account the individual characteristics of children with HI (i.e. the weight allocated to the different speech signals and the level of processing at which they are integrated) when designing intervention plans targeting audio-visual integration in speech perception.
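As a schematic of how the predicted N1/P2 amplitude reduction could be quantified (simulated single-channel data, assumed time windows, and an assumed auditory-only vs audiovisual contrast, all chosen purely for illustration), the comparison might look like this:

```python
import numpy as np

SFREQ = 500                               # assumed sampling rate (Hz)
TIMES = np.arange(-0.1, 0.5, 1 / SFREQ)   # epoch from -100 to +500 ms

def mean_amplitude(evoked, tmin, tmax):
    """Mean amplitude (microvolts) of an averaged ERP within a time window."""
    mask = (TIMES >= tmin) & (TIMES <= tmax)
    return evoked[mask].mean()

def simulate_erp(n1_amp, p2_amp, n_trials=100, seed=0):
    """Toy single-channel ERP: N1 near 100 ms, P2 near 200 ms, plus noise."""
    rng = np.random.default_rng(seed)
    n1 = n1_amp * np.exp(-((TIMES - 0.10) ** 2) / (2 * 0.02 ** 2))
    p2 = p2_amp * np.exp(-((TIMES - 0.20) ** 2) / (2 * 0.03 ** 2))
    trials = n1 + p2 + rng.normal(0, 2.0, size=(n_trials, TIMES.size))
    return trials.mean(axis=0)             # average over trials

# Hypothetical effect: audiovisual (AV) stimulation attenuates the N1/P2
# relative to auditory-only (A), as predicted under low-level integration.
erp_a = simulate_erp(n1_amp=-6.0, p2_amp=5.0)
erp_av = simulate_erp(n1_amp=-4.0, p2_amp=3.5, seed=1)

for label, erp in [("A alone", erp_a), ("AV", erp_av)]:
    n1 = mean_amplitude(erp, 0.08, 0.12)    # assumed N1 window: 80-120 ms
    p2 = mean_amplitude(erp, 0.16, 0.24)    # assumed P2 window: 160-240 ms
    print(f"{label:8s}  N1 = {n1:5.2f} uV   P2 = {p2:5.2f} uV")
```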

Part of this work will take place at OTIC. It will enable the ESR to focus on the relationships between audiovisual (including CS) input and listening effort (measured by pupillometry and/or EEG) in children fitted with a CI. The ESR will have access to the FUEL models and theoretical frameworks developed at the Eriksholm Research Centre (OTIC).
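For the pupillometry part, a commonly used listening-effort index is the baseline-corrected task-evoked pupil dilation. The minimal sketch below (simulated traces, assumed sampling rate and baseline window; not the FUEL-based pipeline used at Eriksholm) shows how such an index could be computed and contrasted between hypothetical input conditions:

```python
import numpy as np

SFREQ = 60                                  # assumed eye-tracker rate (Hz)
TIMES = np.arange(-1.0, 4.0, 1 / SFREQ)     # 1 s pre-stimulus baseline + 4 s trial

def evoked_dilation(trials, baseline=(-1.0, 0.0)):
    """Baseline-corrected task-evoked pupil dilation, averaged over trials.

    `trials` is an (n_trials, n_samples) array of pupil size in millimetres.
    Each trial is expressed as change from its own pre-stimulus baseline,
    so the mean peak dilation can serve as a listening-effort index.
    """
    base_mask = (TIMES >= baseline[0]) & (TIMES < baseline[1])
    corrected = trials - trials[:, base_mask].mean(axis=1, keepdims=True)
    return corrected.mean(axis=0)

def simulate_pupil(effort, n_trials=40, seed=0):
    """Toy pupil traces: a slow dilation whose size scales with 'effort'."""
    rng = np.random.default_rng(seed)
    response = effort * np.clip(TIMES, 0, None) * np.exp(-TIMES / 1.5)
    return 4.0 + response + rng.normal(0, 0.05, size=(n_trials, TIMES.size))

# Hypothetical contrast: audiovisual (with CS) input vs audio alone.
av_cs = evoked_dilation(simulate_pupil(effort=0.10, seed=1))
audio = evoked_dilation(simulate_pupil(effort=0.25, seed=2))
print(f"peak dilation, AV + CS input : {av_cs.max():.3f} mm")
print(f"peak dilation, audio alone   : {audio.max():.3f} mm")
```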
