Multimodality and optimisation of cognitive resources


Our mission

The ultimate goal of interventions for children with hearing impairment is to provide them with full access to language. While assistive hearing devices improve sound perception, the quality of the auditory cues they deliver remains limited, particularly in adverse conditions such as noise, distorted speech or complex multi-speaker environments. Children with hearing impairment may therefore require specific intervention and support in addition to hearing rehabilitation. This work package explores various means to make communication easier and to help children with hearing impairment optimise their cognitive resources for interaction and learning.

Strategies first include focused training in the native auditory communication modality: auditory training aims to improve awareness of auditory cues and lead to better use of the input provided by the hearing device. Second, children may take advantage of additional sensory inputs in multimodal settings: multimodality provides enhanced input by associating the degraded auditory signal with vision (for lipreading and spatial hearing, possibly supplemented by manual linguistic cues) and with somatosensory information about the speaker's gestures. Finally, settings may exploit assistive technological resources, capitalising on the rapid development of tools based on artificial intelligence: these could enable automatic translation between different language modalities (sign language, cued speech, spoken language) and thus help bridge the communication gap and support the integration of children with hearing loss in society.


Research Development

The first year of this project has been devoted mainly to defining the experimental paradigms through pilot studies. Most participants included at this stage are normal-hearing, although some studies have already begun with participants equipped with hearing devices.

Concerning multisensory interactions:

ESR5: In the first part of the project, we aimed to show how the brain integrates audiovisual information in Cued Speech (CS) perception. We first conducted a pilot electroencephalographic (EEG) study with typically-hearing adults who were naive to the CS system. Preliminary results were promising and confirmed that seeing CS gestures does not modify brain responses in people with no experience of CS. We now aim to explore the effect of seeing CS gestures on brain responses during speech perception in adults who are fluent in CS, including both a typically-hearing and a hearing-impaired group. We expect that only experienced CS users will show a benefit of seeing CS gestures in their EEG responses during a speech perception task.

ESR6: We examined the relationship between speech production abilities and orofacial somatosensory interaction in speech perception. We used the modulation of speech perception by orofacial somatosensory inputs associated with facial skin deformation as an experimental model of somatosensory-auditory interaction in speech perception. In addition, we evaluated acoustic distances between uttered vowels, recorded in a separate test, as a measure of speech production performance. We found that auditory-somatosensory interaction in speech perception was correlated with acoustic distances in speech production. This result supports the hypothesis that auditory-somatosensory interaction in speech perception develops on the basis of speech production abilities.
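As an illustration of this kind of analysis, the sketch below correlates a per-participant measure of somatosensory modulation of perception with the spread of that participant's produced vowels in formant space. The variable names, the F1/F2 distance metric and the data are illustrative assumptions, not the study's actual pipeline or results.

```python
# Hypothetical sketch: correlate a per-participant measure of somatosensory
# modulation of speech perception with acoustic distances between produced
# vowels. All names and numbers below are illustrative assumptions.
import numpy as np
from scipy import stats
from itertools import combinations

def vowel_space_distance(formants):
    """Mean pairwise Euclidean distance between vowels in (F1, F2) space (Hz)."""
    points = list(formants.values())
    pairs = list(combinations(points, 2))
    return np.mean([np.linalg.norm(np.subtract(a, b)) for a, b in pairs])

# One entry per participant: produced vowel formants and the size of the
# perceptual shift induced by facial skin deformation (arbitrary units).
participants = [
    {"formants": {"i": (300, 2300), "a": (750, 1200), "u": (320, 800)}, "somato_effect": 0.42},
    {"formants": {"i": (350, 2100), "a": (700, 1300), "u": (360, 900)}, "somato_effect": 0.31},
    {"formants": {"i": (310, 2400), "a": (780, 1150), "u": (300, 750)}, "somato_effect": 0.55},
]

distances = [vowel_space_distance(p["formants"]) for p in participants]
effects = [p["somato_effect"] for p in participants]

r, p_value = stats.pearsonr(distances, effects)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```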

ESR10: The purpose of our first study is to automate the transcription of Cued Speech. We have proposed a simple and effective approach for automatic CS recognition, based on a pre-trained hand and lip tracker for visual feature extraction and a phonetic decoder built on a multi-stream recurrent neural network trained with connectionist temporal classification (CTC) loss and combined with a pronunciation lexicon. This lightweight architecture outperforms our previous CNN-HMM decoder and competes with more complex baselines. We hope this research will ultimately serve as a tool for interpretation and translation between people who use Cued Speech and those who do not.
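For readers interested in how such a decoder can be wired up, the sketch below shows a minimal multi-stream recurrent network trained with CTC loss, in the spirit of the architecture described above. The feature dimensions, layer sizes and phoneme inventory are illustrative assumptions and do not reproduce the actual model or its hyperparameters.

```python
# Minimal sketch (PyTorch) of a multi-stream recurrent phonetic decoder
# trained with CTC loss. Sizes and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

class MultiStreamCTCDecoder(nn.Module):
    def __init__(self, lips_dim=64, hand_dim=64, hidden=128, n_phonemes=40):
        super().__init__()
        # One recurrent stream per modality (lip features, hand features).
        self.lips_rnn = nn.GRU(lips_dim, hidden, batch_first=True, bidirectional=True)
        self.hand_rnn = nn.GRU(hand_dim, hidden, batch_first=True, bidirectional=True)
        # Fuse the two streams, then project to phoneme logits (+1 for the CTC blank).
        self.fusion = nn.Linear(4 * hidden, hidden)
        self.classifier = nn.Linear(hidden, n_phonemes + 1)

    def forward(self, lips, hand):
        lips_out, _ = self.lips_rnn(lips)      # (batch, time, 2*hidden)
        hand_out, _ = self.hand_rnn(hand)      # (batch, time, 2*hidden)
        fused = torch.relu(self.fusion(torch.cat([lips_out, hand_out], dim=-1)))
        return self.classifier(fused)          # (batch, time, n_phonemes + 1)

# Toy training step on random data, just to show how the CTC loss is wired up.
model = MultiStreamCTCDecoder()
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

batch, time_steps = 2, 100
lips = torch.randn(batch, time_steps, 64)
hand = torch.randn(batch, time_steps, 64)
targets = torch.randint(1, 41, (batch, 12))             # phoneme indices (0 = blank)
input_lengths = torch.full((batch,), time_steps, dtype=torch.long)
target_lengths = torch.full((batch,), 12, dtype=torch.long)

log_probs = model(lips, hand).log_softmax(-1).transpose(0, 1)  # (time, batch, classes)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(f"CTC loss: {loss.item():.3f}")
```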

Concerning the cognitive environment favouring speech perception and development:

ESR7: In this study, our first aim is to investigate the extent to which speech understanding in noise is affected by the presence of distractors such as simultaneous talkers. We adapted an existing realistic 3D environment, presented on a large screen, so that it can present stimuli from two virtual speakers simultaneously, reflecting realistic listening scenarios and the difficulties these may pose, especially for listeners with hearing loss. We also plan to pair this paradigm with eye-tracking to evaluate where listeners direct their attention in this situation, and whether this affects their ability to understand speech under high cognitive demand.

ESR8: To investigate how the learning of new words can be affected by impairments in phonological discrimination, we adapted an experimental protocol and built tasks in which the ability to discriminate phonemes is key to detecting and learning new words. This protocol was tested on normal-hearing adults and is intended as a research tool for children with hearing loss. Pilot data from adults show that when the sound is degraded (spectrally distorted), learning proceeds at roughly half the speed observed when the sound is spectrally intact.
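The exact spectral degradation used in this pilot is not detailed here; the sketch below shows one common way to spectrally distort speech, a noise-excited channel vocoder of the kind often used to simulate cochlear-implant input. The channel count, band edges and envelope cutoff are illustrative assumptions.

```python
# Minimal sketch of a noise-excited channel vocoder (one common way to
# spectrally degrade speech). Parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, sample_rate, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Replace the fine structure in each frequency band with envelope-modulated noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    env_sos = butter(2, 50.0, btype="lowpass", fs=sample_rate, output="sos")
    rng = np.random.default_rng(0)
    output = np.zeros_like(signal)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=sample_rate, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = sosfiltfilt(env_sos, np.abs(hilbert(band)))  # slow amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        carrier /= np.max(np.abs(carrier)) + 1e-12
        output += envelope * carrier                            # noise band shaped by envelope
    return output

# Example: degrade one second of a synthetic vowel-like tone complex.
fs = 16000
t = np.arange(fs) / fs
speech_like = 0.5 * np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 600 * t)
degraded = noise_vocode(speech_like, fs)
print(degraded.shape)
```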

ESR9: The purpose of this project is to provide a more detailed description of theory of mind (ToM), cognitive and language skills in deaf and deafblind (Usher syndrome) children equipped with cochlear implants (CIs), in order to enable more effective fitting of CIs and improve the implementation of intervention plans. We will examine the relation between ToM, cognitive and language skills, and explore how this relation may be shaped by early access to language enabled by early implantation and/or inclusion in auditory-verbal therapy. The first year has been devoted mainly to setting up the research framework and paradigms.


Impact

The perspectives for users and professionals are to:

Better appreciate the difficulties faced by users of assistive hearing technology in different conditions

Enhance knowledge of the efficiency and limitations of additional communication modalities

Support professionals in the optimisation of cognitive resources during intervention to overcome communication challenges

Direct professional focus towards enhancing the overall listening experience and early exposure to language

Provide information on potential technology-based solutions for enhanced communication (and their limitations)


Our team of early-stage researchers

Click here to read more about their individual research projects