Model See Model Do: Speech-Driven
Facial Animation with Style Control (SIGGRAPH 2025)

Ubisoft La Forge*, University of Toronto+

MSMD generates high-fidelity facial animations that are temporally coherent and stylistically consistent with an input reference video.

Abstract

Speech-driven 3D facial animation plays a key role in applications such as virtual avatars, gaming, and digital content creation. While existing methods have made significant progress in achieving accurate lip synchronization and generating basic emotional expressions, they often struggle to capture and effectively transfer nuanced performance styles. We propose a novel example-based generation framework that conditions a latent diffusion model on a reference style clip to produce highly expressive and temporally coherent facial animations. To address the challenge of accurately adhering to the style reference, we introduce a conditioning mechanism called style basis, which extracts key poses from the reference and additively guides the diffusion generation process to fit the style without compromising lip synchronization quality. This approach enables the model to capture subtle stylistic cues while ensuring that the generated animations align closely with the input speech. Extensive qualitative, quantitative, and perceptual evaluations demonstrate the effectiveness of our method in faithfully reproducing the desired style while achieving superior lip synchronization across various speech scenarios.
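To make the additive style-basis guidance concrete, below is a minimal sketch, assuming a PyTorch-style setup, of how a pull toward reference key poses could be blended into each denoising step of a latent diffusion sampler. The function names, tensor shapes, guide_weight, and the simplified update rule are illustrative assumptions, not the paper's implementation.

# Minimal sketch (hypothetical, not the paper's code) of additively guiding a
# diffusion sampler toward a set of reference key poses (the "style basis").
import torch

def sample_with_style_basis(denoiser, audio_feats, style_feats, style_basis,
                            num_steps=50, guide_weight=0.1):
    """DDPM-style sampling loop with an additive pull toward the style basis.

    denoiser:    network predicting the clean motion x0 from (x_t, t, conditions).
    audio_feats: (num_frames, audio_dim) conditioning features (shape assumed).
    style_feats: latent style features from the reference clip.
    style_basis: (num_poses, motion_dim) key poses extracted from the reference.
    """
    x = torch.randn(audio_feats.shape[0], style_basis.shape[-1])  # start from noise
    for t in reversed(range(num_steps)):
        t_batch = torch.full((x.shape[0],), t, dtype=torch.long)
        x0_pred = denoiser(x, t_batch, audio_feats, style_feats)

        # Pull each predicted frame toward its nearest key pose in the style basis,
        # so stylistic extremes from the reference are preserved without
        # overriding the audio-driven motion.
        dists = torch.cdist(x0_pred, style_basis)          # (num_frames, num_poses)
        nearest = style_basis[dists.argmin(dim=-1)]        # (num_frames, motion_dim)
        x0_guided = x0_pred + guide_weight * (nearest - x0_pred)

        # Simplified (non-stochastic) update toward the guided prediction; a real
        # sampler would use the proper DDPM/DDIM posterior here.
        alpha = t / num_steps
        x = alpha * x + (1.0 - alpha) * x0_guided
    return x

In this reading, the additive term nudges the sampler toward the reference's stylistic extremes at every step, while the audio conditioning continues to drive lip synchronization.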

Teaser Image


We present an example-based system for generating stylistic 3D facial animations: (a) Given an arbitrary style reference, a style encoder (b) obtains latent style features and a style basis that reflects key poses from the reference. (c) A diffusion module, conditioned on audio and style features, produces the primary motion. (d) The style basis guides the primary motion throughout the diffusion process, gradually refining it at each diffusion step to produce the final animation.
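As an illustration of step (b), the following is a hedged sketch of one way key poses could be selected from a reference clip to form a style basis, using greedy farthest-point selection over the motion frames; the heuristic and all names (extract_style_basis, num_poses) are assumptions for illustration, not the authors' method.

# Hypothetical sketch of extracting a "style basis" of key poses from a
# reference motion clip by greedily picking the most mutually distant frames.
import torch

def extract_style_basis(reference_motion, num_poses=8):
    """reference_motion: (num_frames, motion_dim) tensor of facial parameters.

    Returns (num_poses, motion_dim) key poses. Greedy farthest-point selection
    is only one plausible heuristic for choosing representative extreme poses.
    """
    mean_pose = reference_motion.mean(dim=0, keepdim=True)
    # Start from the frame farthest from the mean pose.
    chosen = [torch.cdist(reference_motion, mean_pose).squeeze(1).argmax().item()]
    for _ in range(num_poses - 1):
        selected = reference_motion[chosen]                        # (k, motion_dim)
        dists = torch.cdist(reference_motion, selected).min(dim=1).values
        chosen.append(dists.argmax().item())                       # farthest from current set
    return reference_motion[chosen]

Note that the encoder in the figure also produces latent style features alongside the basis; only the key-pose selection is sketched here.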

Supplementary Video

Results (reference followed by output)

BibTeX

BibTex Code Here