IRIS Lab's (Prof. Hak Gu Kim) paper accepted to Interspeech 2025
Administrator │ 2025-05-19
We are delighted to announce that one paper from the Immersive Reality and Integrated Lab (IRIS Lab, Prof. Hak Gu Kim) has been accepted to the 26th edition of the Interspeech Conference (Interspeech 2025) [LINK].

Title: Learning Phonetic Context-Dependent Viseme for Enhancing Speech-Driven 3D Facial Animation

Authors: Hyung Kyu Kim and Hak Gu Kim

Abstract: Speech-driven 3D facial animation aims to generate realistic facial movements synchronized with audio. Traditional methods primarily minimize reconstruction loss by aligning each frame with the ground truth. However, this frame-wise approach often fails to capture the continuity of facial motion, leading to jittery and unnatural outputs due to coarticulation. To address this, we propose a novel phonetic context-aware loss, which explicitly models the influence of phonetic context on viseme transitions. By incorporating a viseme coarticulation weight, we assign adaptive importance to facial movements based on their dynamic changes over time, ensuring smoother and perceptually consistent animations. Extensive experiments demonstrate that replacing the conventional reconstruction loss with ours improves both quantitative metrics and visual quality, highlighting the importance of explicitly modeling phonetic context-dependent visemes in synthesizing natural speech-driven 3D facial animation.
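As a rough illustration only, and not the authors' actual implementation, the idea of weighting a per-frame reconstruction loss by how dynamic each facial transition is could be sketched as below. The function name coarticulation_weighted_loss, the tensor shapes, and the motion-based weighting are all assumptions made for this sketch.

import torch

def coarticulation_weighted_loss(pred, target, eps=1e-8):
    # Hypothetical sketch: per-frame reconstruction error weighted by the
    # magnitude of ground-truth motion between consecutive frames.
    # pred, target: (T, V, 3) predicted / ground-truth vertex positions
    # over T frames; shapes and weighting are assumptions, not the
    # paper's formulation.

    # Per-frame squared reconstruction error, averaged over vertices and coords.
    frame_err = ((pred - target) ** 2).mean(dim=(1, 2))            # (T,)

    # Ground-truth frame-to-frame motion magnitude as a proxy for how
    # dynamic (coarticulated) each transition is.
    motion = (target[1:] - target[:-1]).norm(dim=-1).mean(dim=-1)  # (T-1,)
    motion = torch.cat([motion[:1], motion])                       # pad to (T,)

    # Normalize motion into adaptive per-frame weights that emphasize
    # frames with larger facial movement.
    weights = motion / (motion.mean() + eps)

    return (weights * frame_err).mean()

In such a setup, this term would simply stand in for the plain frame-wise MSE during training, so frames with larger facial motion contribute more to the loss.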