VI Lab's (Prof. Jongwon Choi) two papers accepted to CVPR 2024 (AI Top-tier Conference)

Administrator │ 2024-02-28

Two papers from the Visual Intelligence (VI) Lab have been accepted to the Conference on Computer Vision and Pattern Recognition (CVPR) 2024, a top-tier conference in AI and computer vision.


Title: 

Text-Guided Variational Image Generation for Industrial Anomaly Detection and Segmentation


Authors:

MinGyu Lee, Jongwon Choi


Abstract: 

We propose a text-guided variational image generation method to address the challenge of obtaining clean data for anomaly detection in industrial manufacturing. Our method uses text information about the target object, learned from extensive text library documents, to generate non-defective images resembling the input image. The proposed framework ensures that the generated non-defective images align with the anticipated distributions derived from textual and image-based knowledge, providing both stability and generality. Experimental results demonstrate the effectiveness of our approach, which surpasses previous methods even with limited non-defective data. Our approach is validated through generalization tests across four baseline models and three distinct datasets. We also present an additional analysis on using the generated images to enhance the effectiveness of anomaly detection models.
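
For readers curious how text guidance can steer a variational generator toward non-defective samples, the sketch below is a minimal, hypothetical PyTorch illustration: a VAE whose latent prior is conditioned on a text embedding of the target object, so that sampled reconstructions stay close to the text-anticipated distribution. The class name TextGuidedVAE, all layer sizes, and the loss weighting are our own illustrative assumptions, not the paper's implementation.

# Hypothetical sketch of text-guided variational image generation for
# anomaly detection. All names, shapes, and weights are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextGuidedVAE(nn.Module):
    def __init__(self, img_dim=784, text_dim=512, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        # Text prior: maps a text embedding of the target object to the mean
        # of the latent prior, anchoring generations to textual knowledge.
        self.text_prior = nn.Linear(text_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim))

    def forward(self, img, text_emb):
        h = self.encoder(img)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(z)
        prior_mu = self.text_prior(text_emb)
        # KL between posterior N(mu, sigma^2) and text-conditioned prior N(prior_mu, I)
        kl = 0.5 * (logvar.exp() + (mu - prior_mu) ** 2 - 1 - logvar).sum(dim=1)
        return recon, kl

model = TextGuidedVAE()
img = torch.rand(8, 784)        # flattened input images (toy size)
text_emb = torch.randn(8, 512)  # e.g., CLIP-style embeddings of the object description
recon, kl = model(img, text_emb)
loss = F.mse_loss(recon, img) + 1e-3 * kl.mean()
loss.backward()

Under these assumptions, minimizing the KL term keeps generated samples within the distribution anticipated from the text, which is the role the abstract attributes to textual knowledge.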

________________________________________


Title: 

Exploiting Style Latent Flows for Generalizing Video Deepfake Detection


Authors:

Jongwook Choi, Taehoon Kim, Yonghyun Jeong, Seungryul Baek, Jongwon Choi


Abstract: 

This paper presents a new approach for detecting fake videos based on the analysis of style latent vectors and their abnormal behavior over temporal changes in generated videos. We discovered that generated facial videos exhibit distinctive patterns in the temporal changes of their style latent vectors, which are inevitable when generating temporally stable videos with various facial expressions and geometric transformations. Our framework utilizes a StyleGRU module, trained by contrastive learning, to represent the dynamic properties of style latent vectors. Additionally, we introduce a style attention module that integrates StyleGRU-generated features with content-based features, enabling the detection of visual and temporal artifacts. We demonstrate our approach across various benchmark scenarios in deepfake detection, showing its superiority in cross-dataset and cross-manipulation settings. Through further analysis, we also validate the importance of using the temporal changes of style latent vectors to improve the generality of deepfake video detection.
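
As a rough sketch of the style-flow idea (again hypothetical; the module names, dimensions, and fusion scheme are assumed rather than taken from the paper, and the contrastive pretraining of the GRU is omitted for brevity), one could encode frame-to-frame differences of per-frame style latents with a GRU and fuse the result with content features through attention before classification:

# Hypothetical sketch: GRU over style latent flows plus attention-based
# fusion with content features. Illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class StyleGRUDetector(nn.Module):
    def __init__(self, style_dim=512, content_dim=512, hidden=256):
        super().__init__()
        # GRU encodes the temporal flow (frame-to-frame changes) of style latents.
        self.style_gru = nn.GRU(style_dim, hidden, batch_first=True)
        self.q = nn.Linear(content_dim, hidden)  # content features query...
        self.kv = nn.Linear(hidden, hidden)      # ...the style-flow features
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden + content_dim, 2)  # real vs. fake

    def forward(self, style_seq, content_feat):
        # style_seq: (B, T, style_dim) per-frame style latents (e.g., from a
        # pretrained StyleGAN inversion); content_feat: (B, content_dim).
        flow = style_seq[:, 1:] - style_seq[:, :-1]  # temporal changes
        style_feats, _ = self.style_gru(flow)        # (B, T-1, hidden)
        q = self.q(content_feat).unsqueeze(1)        # (B, 1, hidden)
        kv = self.kv(style_feats)
        fused, _ = self.attn(q, kv, kv)              # style attention over the flow
        return self.classifier(torch.cat([fused.squeeze(1), content_feat], dim=1))

detector = StyleGRUDetector()
logits = detector(torch.randn(4, 16, 512), torch.randn(4, 512))  # 16-frame clips

The design choice mirrored here is that classification sees both the content features and the attention-pooled style dynamics, so temporal artifacts in the style flow can be detected alongside visual ones.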


