
Gesture recognition & French Sign Language

Updated: Aug 9, 2023

Artificial Intelligence has been a constant topic of conversation in recent years, owing to all the remarkable feats it is capable of achieving. Health, automotive, meteorology... no field is exempt from its influence.


In this article, we introduce one of the missions assigned to MCOVISION by the association SIGNES DE SENS, a specialist in raising awareness and providing training in French Sign Language (LSF). This mission helps bring together the worlds of deaf people and the hearing population.



1. Sign Language: an expressive and powerful means of communication


Sign language is a visual and gestural means of communication used by deaf or hard of hearing individuals to interact with each other. Often, this language is misunderstood and underestimated by those who can hear. Nevertheless, it is just as rich and complex as spoken languages.


Sign language is unique in that it combines visual, gestural, and expressive elements to convey ideas and emotions. There isn't a single sign language; rather, there are almost as many sign languages as there are countries.

In France, this is French Sign Language (LSF). LSF is used by over 100,000 people worldwide, most of whom live in France. Of the country's estimated 300,000 deaf people, around one-third are proficient in sign language, and 34% are economically inactive, owing to difficulties accessing employment and leisure activities, and to social isolation.

Artificial intelligence, and Computer Vision in particular, is a powerful technology that mimics human visual capabilities. By automatically analyzing signs, AI can prove immensely valuable in popularizing the practice of sign language, helping deaf individuals integrate better into society.



2. Artificial Intelligence and LSF


Currently, there are several AI-driven automatic translation systems for LSF that utilize video recognition and motion analysis to translate gestures into words. These translations can be generated in real-time to facilitate communication between deaf individuals and those who do not use LSF, especially during conferences and corporate meetings.


Educational tools aimed at training more people in LSF also benefit from artificial intelligence, which can enhance LSF instruction. This is where Signes de Sens enlisted our services: to integrate AI into their e-learning module, making the learning process more effective, pedagogical, and enjoyable.


After watching tutorials for one or more LSF words, the learner is prompted by the e-learning platform to perform the corresponding gestures in front of their webcam. The AI then assesses the quality of the gesture performed. Learning becomes interactive: the learner becomes an active participant in their own journey, self-assessing as many times as needed, identifying their strengths and weaknesses, and obtaining a genuinely objective evaluation of their gestures.


3. Neural Networks


The AI model used is a type of neural network called a "convolutional neural network" (CNN). Convolutional neural networks are designed to recognize patterns in images, making them particularly well suited to sign language recognition.
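To illustrate the pattern-recognition idea at the heart of a CNN, here is a minimal sketch (not the actual MCOVISION model): a hand-written 2D convolution applies a vertical-edge filter to a tiny synthetic image, responding exactly where the intensity changes.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image: dark left half, bright right half (a vertical edge).
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge filter: responds where intensity rises left to right.
kernel = np.array([[-1.0, 1.0]])

response = conv2d(image, kernel)
# The response is 1.0 only in the column where the edge sits, 0 elsewhere.
```

In a real CNN, many such filters are learned from data rather than written by hand, and their responses are stacked and combined layer after layer.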


Since a video is a sequence of images (or frames), we utilized a 3D network architecture to leverage the temporal information in the signal. This is in contrast to a 2D approach, which would limit the analysis to spatial aspects of the signal.
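A toy example makes the 2D-versus-3D distinction concrete. The sketch below (an illustration, not the production architecture) runs a hand-written 3D convolution over a 3-frame "video" of a moving pixel, with a filter oriented along the time axis: it fires where intensity changes between frames, something a purely spatial 2D filter applied frame by frame cannot see.

```python
import numpy as np

def conv3d(video, kernel):
    """Valid 3D cross-correlation over (time, height, width)."""
    kt, kh, kw = kernel.shape
    t, h, w = video.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for a in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[a, i, j] = np.sum(video[a:a + kt, i:i + kh, j:j + kw] * kernel)
    return out

# A 3-frame "video" of a bright pixel moving one step right per frame.
video = np.zeros((3, 1, 4))
for frame in range(3):
    video[frame, 0, frame] = 1.0

# A temporal filter of shape (2, 1, 1): compares each pixel across
# two consecutive frames, so it responds to motion, not appearance.
kernel = np.array([[[-1.0]], [[1.0]]])

motion = conv3d(video, kernel)
# Non-zero responses mark where the pixel appeared or disappeared
# between frames, i.e. where motion occurred.
```

A 3D CNN learns many such spatio-temporal filters jointly, which is what lets it capture the dynamics of a sign rather than a single frozen pose.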


It's important to emphasize that the objective of training the AI model is to enable the computer to interpret the features and nuances of the signs, ultimately translating them into words. The learning process involves collecting a large number of videos featuring individuals signing LSF words in front of a webcam. These videos need to be annotated and are then used to train the AI model to recognize the signs and understand their meanings. The model minimizes its errors through backpropagation within the network, progressively achieving high performance. Finally, the model is deployed to predict with high accuracy on new real-world videos.
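The loop described above (predict, measure the error, backpropagate, repeat) can be sketched on a toy problem. Here a simple logistic-regression classifier stands in for the full CNN, and random feature vectors with synthetic labels stand in for annotated sign videos; only the training mechanics are the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy annotated dataset: feature vectors with binary labels
# (stand-ins for video features and their sign annotations).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(4)   # model parameters, initially untrained
b = 0.0
lr = 0.5          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    p = sigmoid(X @ w + b)            # forward pass: predictions
    grad_w = X.T @ (p - y) / len(y)   # backward pass: error gradients
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # gradient step reduces the error
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
# Accuracy climbs well above chance as the errors are minimized.
```

A real CNN repeats exactly this cycle, only with millions of parameters and gradients propagated back through many convolutional layers.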


The network outputs not only the corresponding word for the performed sign but also a score assessing the quality of the sign. This score, ranging from 0 to 1, helps identify poorly executed signs by learners. If the score is low, it indicates that the learner did not perform the sign correctly, prompting them to work on specific aspects of their sign to improve its execution.
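As an illustration of how such a word-plus-score output can be consumed, here is a minimal sketch. The vocabulary, logits, and the use of the top softmax probability as the quality score are all assumptions for the example, not a description of the deployed system.

```python
import math

# Hypothetical vocabulary of candidate LSF words (assumption).
VOCABULARY = ["bonjour", "merci", "oui"]

def softmax(logits):
    """Turn raw network outputs into probabilities summing to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def assess_sign(logits, threshold=0.7):
    """Return the predicted word, a 0-1 score, and a pass/fail flag.

    Here the score is the top softmax probability (an assumed proxy
    for sign quality); below `threshold`, the learner is asked to
    rework the sign.
    """
    probs = softmax(logits)
    score = max(probs)
    word = VOCABULARY[probs.index(score)]
    return word, score, score >= threshold

# A confident prediction yields a high score; a low score would flag
# a poorly executed sign.
word, score, ok = assess_sign([3.0, 0.5, 0.1])
```

The threshold is where pedagogy meets the model: set too high, learners are asked to repeat correct signs; too low, sloppy signs pass unchallenged.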


4. Conclusion

In conclusion, the use of artificial intelligence marks a significant stride forward in enhancing communication between the deaf and hard of hearing community and the rest of the world. It provides a practical and effective solution for real-time training in LSF, thereby promoting inclusion and diversity.




