In Germany and across Europe, there is an ongoing debate about sign language avatars. These digital figures are designed to automatically translate text or spoken language into sign language and display it on websites, apps, or information terminals to improve accessibility. The idea sounds modern and practical, but many Deaf people respond critically. Hearing people have decades of experience with automated phone systems, where callers must first interact with a computer voice. Many find these systems impersonal, error-prone, and frustrating. If hearing people reject such machines, it raises the question of why Deaf people are expected to rely on “talking puppets.”
Automated phones for hearing people
Automated phone systems have been around for decades, especially at banks, insurance companies, and government offices. Typical prompts are: “Press 1 for accounts, press 2 for customer service…” These systems are meant to save time, but they often frustrate callers: they cannot understand dialects or unusual expressions and offer no way to clarify misunderstandings. Most hearing people therefore prefer direct contact with a real person. Despite decades of technical advances, hearing people have never embraced these automated systems, which shows that technology alone cannot replace personal communication.
Sign language avatars: opportunities and risks
Sign language avatars are intended to translate texts automatically and render them as signs. An avatar is a digital figure that uses hands, gestures, and sometimes facial expressions to produce signs. While the idea seems modern, practical experience reveals many problems. Sign language is a living, visual language with its own grammar, facial expressions, and spatial structure, and computers cannot fully reproduce these nuances. This is why many Deaf people say, “We do not want talking puppets!”
Early avatars appeared stiff and unnatural. Today, technology has improved significantly: movements are smoother, hands appear more natural, and some systems show basic facial expressions. While these are visible improvements, the real problem lies in content translation. Avatars can recognize words, but they do not understand the meaning of sentences. Simple phrases like “The town hall is open” work reasonably well, but technical, medical, or legal texts often lead to misunderstandings. A small mistake in handshape, facial expression, or eye gaze can completely change the meaning of a sentence. Many Deaf people therefore say: “The avatars look better, but they don’t know what they’re saying.”
Machine translations are not legally recognized for hearing people either
Programs like DeepL, Promt, or Google Translate also use artificial intelligence to translate texts automatically. However, these translations still contain many errors and often fail to capture meaning or context accurately. In court and for official documents, only certified translators are accepted, because their work is verified and legally recognized. If machine translations are not accepted even for hearing people, it is questionable why Deaf people should be expected to rely on avatars.
Why Deaf people are often offered avatars
Another reason avatars are introduced is that many hearing people consider communication with Deaf people tedious and time-consuming. Hearing staff must speak slowly, maintain eye contact, possibly organize interpreters, and adjust their own language. This requires time and patience. For some authorities and companies, an avatar therefore appears convenient: no interpreter, no waiting time, no extra effort. Instead of promoting true inclusion, Deaf people are often simply “redirected” to digital figures. Many Deaf people experience this as technical exclusion, because human communication is replaced rather than supplemented.
Criticism from the Deaf community
The criticism is clear: avatars cannot reliably translate complex content, especially in sensitive areas. Authorities and companies may reduce the use of interpreters if they believe an avatar is sufficient. Sign language relies on expression, speed, and personality, while an avatar remains artificial and distant. Moreover, many systems are developed without Deaf experts, resulting in unnatural representation. Only humans can convey emotion, context, and meaning accurately.
Legal framework
Germany’s Disability Equality Act (BGG) requires barrier-free communication but does not mandate that digital avatars replace interpreters. Authorities may offer avatars as an additional tool, but Deaf people still have the right to interpreters or other communication support.
Research and findings
Studies by the University of Hamburg, RWTH Aachen, and the EU project SignON show that avatars are significantly harder to understand than human signers. Emotions, facial expressions, and timing often appear unnatural, and acceptance among Deaf users remains low. Especially for important matters such as legal, medical, or administrative issues, the demand for real interpreters remains high.
What Deaf people really want
Most Deaf people do not want robots. They want real people who know sign language, reliable interpreters, and hearing staff who can sign, enabling direct, barrier-free communication. Avatars may be useful for simple information, but they cannot replace human communication.
Tips for Deaf people
- Always insist on interpreters, even when avatars are offered.
- Report errors in avatars so developers can improve them.
- Critically evaluate information, especially for important topics.
- Support projects involving Deaf experts to ensure authentic solutions.
Conclusion
Sign language avatars are interesting, but not yet mature. They can deliver basic information, but they cannot replace conversation, emotion, or responsibility. If machine translations are not recognized for hearing people, it is hard to justify expecting Deaf people to rely on digital puppets. Sign language is living, human, and culturally valuable – it belongs to people, not machines.