Topic 2.6 - Development of communication systems and tools related to Sign Language (SL), Cued Speech (CS), and audio-visual speech synthesis (ESR 10)
The project Development of communication systems and tools related to Sign Language (SL), Cued Speech (CS), and audio-visual speech synthesis will be hosted at the CNRS-GIPSA, Université Grenoble-Alpes, and supervised by Denis Beautemps and Thomas Hueber.
In the technology domain, the accessibility of communication tools for people with sensory disabilities is a priority. Relay services dedicated to people with hearing impairment (HI) have been created: they allow people with hearing or speech impairment to use telecommunication devices to contact hearing interpreters in SL, CS, and spoken language at a remote centre. To introduce automation into this telecommunication chain, applications based on automatic recognition of gestural iconic signs will be developed to complement vocal and tactile commands on mobile phones and tablet computers. To this end, the ESR project will develop models for the automatic recognition of iconic signs derived from SL and/or CS gestures, converting them into text and/or speech.
This work will inform the development of new algorithms (based on recent deep learning techniques) for multimodal communication (including text, speech, lipreading, and manual gestures) between hearing participants and participants with HI. Convolutional Neural Networks (CNNs) could automatically extract relevant features from the videos, while Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) networks could handle the large desynchronisation that can occur between hands and lips. A first application of these methods at CNRS-GIPSA reached a score of 72.67% for CS phoneme recognition in continuous speech (PhD thesis of Li Liu, 2018). Applications based on automatic recognition of iconic signs will be developed for telecommunication devices in connection with the IVèS telecommunication platform. This will increase telecommunication accessibility for people with HI, including children whose text-based skills are still developing, while taking their preferred communication means into account.
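The CNN-then-RNN pipeline described above can be illustrated with a minimal numpy sketch. This is not GIPSA's actual model: the frame size, convolution kernel, hidden dimension, and the 2-frame hand/lip lag are all illustrative assumptions. Per-frame "CNN" features are extracted from a lip stream and a deliberately delayed hand stream, concatenated, and fed step by step to a standard LSTM cell, whose memory is what lets such models bridge hand-lip desynchronisation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(frame, kernel):
    """Naive 'valid' 2D convolution: a stand-in for one CNN feature map."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell (input, forget, output, candidate gates)."""
    n = h.size
    z = W @ x + U @ h + b
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2 * n]), sig(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c = f * c + i * g          # update the memory cell
    h = o * np.tanh(c)         # new hidden state
    return h, c

# Hypothetical toy setup: 8x8 crops of lips and hand over 10 frames,
# with the hand stream lagging the lip stream by 2 frames.
T, lag = 10, 2
lips = rng.standard_normal((T, 8, 8))
hand = np.roll(lips, lag, axis=0)          # crude desynchronised copy

kernel = rng.standard_normal((3, 3))
n_hidden = 16
n_in = 2 * 6 * 6                           # two streams of 6x6 conv maps
W = rng.standard_normal((4 * n_hidden, n_in)) * 0.1
U = rng.standard_normal((4 * n_hidden, n_hidden)) * 0.1
b = np.zeros(4 * n_hidden)

h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
for t in range(T):
    # per-frame CNN-style features for each modality, concatenated
    feat = np.concatenate([conv2d_valid(lips[t], kernel).ravel(),
                           conv2d_valid(hand[t], kernel).ravel()])
    h, c = lstm_step(feat, h, c, W, U, b)

print(h.shape)  # final hidden state summarising both streams
```

In a trained system, the convolution kernels and LSTM weights would be learned end-to-end, and the final hidden states would feed a classifier over phonemes or signs; here the weights are random and only the data flow is shown.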
Part of this project will take place at the IVèS company and at the Université libre de Bruxelles (ULB). IVèS has developed strong expertise in phone platforms integrating end-user aspects and is well established at French national and international levels, in particular in Grenoble, Toulouse (with the ELIOZ company), and Montreal. The expertise of ULB in language development will enable the ESR to evaluate the different technical solutions in relation to the language abilities of children with HI.
- Project 2 - Multimodality and optimisation of cognitive resources
- Topic 2.1 - Temporal course of auditory, labial, and manual signals in Cued Speech (CS) perception (ESR 5)
- Topic 2.2 - The somatosensory function and perceptuo-motor loop in speech communication (ESR 6)
- Topic 2.3 - Improving integration of audio-visual speech cues in children with HI (ESR 7)
- Topic 2.4 - Effortful listening, cognitive energy, and learning in children fitted with CI (ESR 8)