
Publication Achievements

Three papers from VE Lab (Prof. Young Ho Chai) accepted to ISMAR 2025 (top-tier XR conference)

Administrator │ 2025-10-11



We are delighted to announce that three papers from the Virtual Environments Lab (VE Lab, Prof. Young Ho Chai) have been accepted to the 24th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2025) [LINK].


Title: 

I-MOR: Intention-Driven Motion Overriding for Realistic Authoring


Authors:

Joung Jun Kim, Won Jin Hong, Jin Kim, Young Ho Chai


Abstract:

Realistic avatar motion in VR environments is crucial for user immersion, emphasizing the need for techniques that transform basic movements into professional-level articulation. We propose a lightweight framework (I-MOR) that employs quaternion-based representations to bridge user input and high-quality motion generation efficiently. Rather than relying on templates or data-driven models, I-MOR captures instantaneous angular changes from expressive gestures and maps them to frame-wise transformations. SpongeBob Shuffle dance recordings are analyzed to extract quantitative velocity, acceleration, and directional profiles as physical benchmarks. During authoring, user input is continuously compared to these benchmarks, enabling intuitive modulation of rhythm, sharpness, and range without altering core structure. Experiments show that users can synthesize professional-quality motion timing and articulation through simple gestures, advancing low-barrier control, grounded motion generation, and intention-aware authoring in VR.
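For readers curious about the general mechanics, the following is a minimal Python sketch of the kind of quaternion-based, frame-wise angular-change extraction the abstract describes. It is not code from the paper; the benchmark value, frame rate, and all names are illustrative placeholders.

```python
# Minimal sketch (not the paper's code): extracting frame-wise angular speed
# from quaternion orientations and comparing it to a benchmark profile.
# All values below (dt, benchmark_speed, toy rotations) are placeholders.
import numpy as np

def quat_conjugate(q):
    """Conjugate of a unit quaternion given as (w, x, y, z)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(q1, q2):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def angular_speed(q_prev, q_curr, dt):
    """Instantaneous angular speed (rad/s) between two consecutive frames."""
    q_rel = quat_multiply(quat_conjugate(q_prev), q_curr)
    angle = 2.0 * np.arccos(np.clip(abs(q_rel[0]), -1.0, 1.0))
    return angle / dt

# Toy example: a joint rotating about the z-axis, captured at 30 fps.
dt = 1.0 / 30.0
theta0, theta1 = 0.10, 0.18  # joint angles (radians) at frames t-1 and t
q0 = np.array([np.cos(theta0/2), 0.0, 0.0, np.sin(theta0/2)])
q1 = np.array([np.cos(theta1/2), 0.0, 0.0, np.sin(theta1/2)])

user_speed = angular_speed(q0, q1, dt)
benchmark_speed = 2.4  # rad/s, placeholder for a value measured from dance recordings

# Ratio used to modulate timing/sharpness of the authored motion.
modulation = user_speed / benchmark_speed
print(f"user speed: {user_speed:.2f} rad/s, modulation factor: {modulation:.2f}")
```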


__________________________________________________________________________________________________________________________________________________________________________________________


Title: 

LongAct: A Dataset for Motion Generation from Long-Term Multi-Action Text Descriptions


Authors:

Jeong Yeon Lee, Soung Sill Park, Young Ho Chai


Abstract:

Text-to-motion is a research field that generates human body motion sequences from detailed textual descriptions of actions. It is useful for quickly generating avatar motions that can be efficiently applied to a wide range of content. However, most existing methods struggle to generate long sequences involving multiple types of actions. To address this limitation, we propose LongAct, a new dataset constructed by connecting multiple human motion segments. We evaluate LongAct on the task of generating long human motion sequences. Both quantitative and qualitative results demonstrate that our dataset enables the generation of natural and coherent motion sequences in long-sequence, multi-action scenarios.
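As a rough illustration of how multi-action sequences can be assembled from shorter segments, here is a small Python sketch. It is not the LongAct construction pipeline; the segments, linear blending at seams, and text labels are invented placeholders.

```python
# Minimal sketch (not the dataset's code): stitching short motion segments
# into one long multi-action sequence, cross-fading at the seams.
# Segment contents and descriptions below are toy placeholders.
import numpy as np

def blend_concat(segments, blend_frames=10):
    """Concatenate motion segments (frames x features), cross-fading at seams."""
    out = segments[0]
    for seg in segments[1:]:
        n = min(blend_frames, len(out), len(seg))
        w = np.linspace(0.0, 1.0, n)[:, None]          # blend weights 0 -> 1
        seam = (1.0 - w) * out[-n:] + w * seg[:n]      # linear cross-fade
        out = np.concatenate([out[:-n], seam, seg[n:]], axis=0)
    return out

# Toy segments: (frames, joint-features) arrays plus per-segment text labels.
walk = (np.random.rand(60, 22 * 3), "a person walks forward")
turn = (np.random.rand(30, 22 * 3), "then turns to the left")
sit  = (np.random.rand(45, 22 * 3), "and sits down on a chair")

motions, texts = zip(*[walk, turn, sit])
long_motion = blend_concat(list(motions))
long_text = ", ".join(texts)

print(long_motion.shape)   # summed length minus the blended seam frames
print(long_text)
```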


__________________________________________________________________________________________________________________________________________________________________________________________


Title: 

Joint-wise Comparative Analysis of Relative Local Velocity in the ‘Running Man’ Shuffle Dance


Authors:

Jin Kim, Joung Jun Kim, Young Ho Chai


Abstract:

In immersive VR/AR/MR environments, avatar motion must go beyond generic animations to convey expressive, human-like behaviors that reflect user intent. Subtle distinctions between expert and non-expert movements—especially in rhythmic actions like dance—are critical for realism and user engagement. This study quantitatively compares expert and beginner motor control in the “Running Man” shuffle dance by analyzing joint-wise relative local velocity across posture transitions (0→4). By normalizing angular velocity within each segment, we highlight key differences in timing, rebound suppression, and balance adjustment. Experts consistently exhibited smooth acceleration–peak–deceleration profiles, while beginners showed abrupt velocity shifts and unintended rebounds. These insights point to distinctive motion cues that can inform the authoring of expressive avatar animations. Our findings suggest a velocity-based framework for synthesizing and correcting avatar motion in real-time applications.
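To illustrate the idea of segment-wise normalized joint-velocity profiles, the sketch below uses synthetic data. It is not the paper's analysis code; the two curves merely mimic the smooth versus abrupt profiles the abstract contrasts.

```python
# Minimal sketch (not the paper's code): per-joint angular speed normalized
# within one posture-transition segment, so expert and beginner profiles can
# be compared on a common 0-1 scale. All data below is synthetic.
import numpy as np

def normalized_joint_speed(angles, fps=60):
    """angles: (frames, joints) joint angles in radians for one segment.
    Returns (frames-1, joints) speeds scaled to [0, 1] per joint."""
    speed = np.abs(np.diff(angles, axis=0)) * fps      # rad/s per joint
    peak = speed.max(axis=0, keepdims=True)
    return speed / np.where(peak > 0, peak, 1.0)       # segment-wise normalization

t = np.linspace(0.0, 1.0, 61)[:, None]                 # one posture-transition segment
# Expert: smooth acceleration-peak-deceleration; beginner: abrupt shift plus rebound.
expert   = 0.5 * (1.0 - np.cos(np.pi * t))
beginner = np.clip(2.0 * t - 0.5, 0.0, 1.0) + 0.05 * np.sin(8 * np.pi * t)

prof_expert   = normalized_joint_speed(expert)
prof_beginner = normalized_joint_speed(beginner)

# A simple timing cue: where in the segment the peak velocity occurs.
print("expert peak at frame", prof_expert.argmax(axis=0)[0])
print("beginner peak at frame", prof_beginner.argmax(axis=0)[0])
```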





