Gesture2Speech: How Far Can Hand Movements Shape Expressive Speech?

Lokesh Kumar, Nirmesh J. Shah, Ashishkumar P. Gudmalwar, Pankaj Wasnik

Media Analysis Group, Sony Research India, Bangalore

Human communication seamlessly integrates speech and bodily motion, where hand gestures naturally complement vocal prosody to express intent, emotion, and emphasis. While recent text-to-speech (TTS) systems have begun incorporating multimodal cues such as facial expressions or lip movements, the role of hand gestures in shaping prosody remains largely underexplored. We propose a novel multimodal TTS framework, Gesture2Speech, that leverages visual gesture cues to modulate prosody in synthesized speech. Motivated by the observation that confident and expressive speakers coordinate gestures with vocal prosody, we introduce a multimodal Mixture-of-Experts (MoE) architecture that dynamically fuses linguistic content and gesture features within a dedicated style extraction module. The fused representation conditions an LLM-based speech decoder, enabling prosodic modulation that is temporally aligned with hand movements. We further design a gesture-speech alignment loss that explicitly models their temporal correspondence to ensure fine-grained synchrony between gestures and prosodic contours. Evaluations on the PATS dataset show that Gesture2Speech outperforms state-of-the-art baselines in both speech naturalness and gesture-speech synchrony. To the best of our knowledge, this is the first work to utilize hand gesture cues for prosody control in neural speech synthesis.
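The two core ideas above — a gating network that mixes expert projections of text and gesture features into a style vector, and a loss enforcing temporal correspondence between gestures and prosody — can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, dimensions, and the cosine-similarity form of the alignment loss are all assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_fuse(text_feat, gesture_feat, expert_ws, gate_w):
    """Illustrative multimodal MoE fusion.
    text_feat, gesture_feat: (d,) feature vectors from each modality.
    expert_ws: list of (2d, d) expert projection matrices.
    gate_w: (2d, n_experts) gating weights.
    Returns a (d,) fused style vector: a gate-weighted sum of expert outputs.
    """
    x = np.concatenate([text_feat, gesture_feat])   # (2d,) joint input
    gates = softmax(x @ gate_w)                     # (n_experts,) mixing weights
    experts = np.stack([x @ w for w in expert_ws])  # (n_experts, d) expert outputs
    return gates @ experts                          # (d,) fused representation

def alignment_loss(gesture_seq, prosody_seq):
    """Toy gesture-speech alignment loss: 1 - mean frame-wise cosine similarity.
    gesture_seq, prosody_seq: (T, d) time-aligned feature sequences.
    Zero when the sequences point in the same direction at every frame.
    """
    g = gesture_seq / np.linalg.norm(gesture_seq, axis=1, keepdims=True)
    p = prosody_seq / np.linalg.norm(prosody_seq, axis=1, keepdims=True)
    return 1.0 - float((g * p).sum(axis=1).mean())
```

In this sketch the gate sees both modalities jointly, so expert selection can depend on how gesture and text interact; the actual model conditions an LLM-based decoder on the fused vector rather than using it directly.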

Accepted at AAAI BEEU Workshop 2026

Demo Samples

Input Text 1: “In a spark of reaction, atoms collide, transforming matter with invisible power”

Description Gesture 1 Gesture 2 Gesture 3
Reference Video
Extracted Gesture
Gesture2Speech: XTTS-V2
Gesture2Speech: GPT-SoVITS
Gesture2Speech: Unimodal MoE
Gesture2Speech: H-MoE
Gesture2Speech: Multimodal-MoE: Ours

Input Text 2: “Everything changes, nothing stays the same forever.”

Description Gesture 1 Gesture 2 Gesture 3
Reference Video
Extracted Gesture
Gesture2Speech: XTTS-V2
Gesture2Speech: GPT-SoVITS
Gesture2Speech: Unimodal MoE
Gesture2Speech: H-MoE
Gesture2Speech: Multimodal-MoE: Ours

Input Text 3: “Technology is evolving so quickly that it's changing the way we live, work, and communicate every day.”

Description Gesture 1 Gesture 2 Gesture 3
Reference Video
Extracted Gesture
Gesture2Speech: XTTS-V2
Gesture2Speech: GPT-SoVITS
Gesture2Speech: Unimodal MoE
Gesture2Speech: H-MoE
Gesture2Speech: Multimodal-MoE: Ours

Citation

@inproceedings{gesture2speech,
    title={Gesture2Speech: How Far Can Hand Movements Shape Expressive Speech?},
    author={Kumar, Lokesh and Shah, Nirmesh and Gudmalwar, Ashishkumar and Wasnik, Pankaj},
    booktitle={The 2nd International Workshop on Bodily Expressed Emotion Understanding (BEEU) at AAAI},
    year={2026},
    address={Singapore}
}