Exploring the Impact of Emotional Voice Integration in
Sign-to-Speech Translators for Deaf-to-Hearing Communication

Hyunchul Lim, et al.

Emotional voice communication plays a crucial role in effective daily interactions. Deaf and hard-of-hearing (DHH) individuals often rely on facial expressions to supplement sign language when conveying emotions, as the use of voice is limited. However, in American Sign Language (ASL), facial expressions serve not only emotional purposes but also act as linguistic markers that alter sign meanings, which often confuses non-signers attempting to interpret a signer’s emotional state. Most existing ASL translation technologies focus solely on signs, neglecting the role of emotional facial expressions in the translated output (e.g., text, voice). This paper presents studies that 1) confirm the challenges non-signers face when interpreting emotions from facial expressions in ASL communication, and 2) examine how integrating emotional voices into translation systems can enhance hearing individuals’ comprehension of a signer’s emotions. An online survey conducted with 45 hearing participants (non-ASL signers) revealed that they frequently misinterpret signers’ emotions when emotional and linguistic facial expressions are used simultaneously. The findings indicate that incorporating emotional voice into translation systems significantly improves recognition of signers’ emotions, by 32%. Additionally, further research involving 6 DHH participants discusses design considerations for the emotional voice feature from both perspectives, emphasizing the importance of integrating emotional voices into translation systems to bridge communication gaps between DHH and hearing communities.