Two Papers from IIPL (Prof. YoungBin Kim) Accepted to AAAI 2026 (Top-tier AI Conference)

Admin │ 2025-11-27
We are delighted to announce that two papers from the Intelligent Information Processing Lab (IIPL, Prof. YoungBin Kim) have been accepted to the 2026 AAAI Conference on Artificial Intelligence (AAAI 2026): one in the Main Technical Track and one in the Demonstration Program [LINK].

Title: Easy to Learn, yet Hard to Forget: Towards Robust Unlearning under Bias

Authors: Junehyoung Kwon, MiHyeon Kim, Eunju Lee, Yoonji Lee, Seunghoon Lee, YoungBin Kim

Abstract: Machine unlearning, which enables a model to forget specific data, is crucial for ensuring data privacy and model reliability. However, its effectiveness can be severely undermined in real-world scenarios where models learn unintended biases from spurious correlations within the data. This paper investigates the unique challenges of unlearning from such biased models. We identify a novel phenomenon we term "shortcut unlearning," where models exhibit an "easy to learn, yet hard to forget" tendency. Specifically, models struggle to forget easily learned, bias-aligned samples; instead of forgetting the class attribute, they unlearn the bias attribute, which can paradoxically improve accuracy on the class intended to be forgotten. To address this, we propose CUPID, a new unlearning framework inspired by the observation that samples with different biases exhibit distinct loss landscape sharpness. Our method first partitions the forget set into causal- and bias-approximated subsets based on sample sharpness, then disentangles model parameters into causal and bias pathways, and finally performs a targeted update by routing refined causal and bias gradients to their respective pathways. Extensive experiments on biased datasets including Waterbirds, BAR, and Biased NICO++ demonstrate that our method achieves state-of-the-art forgetting performance and effectively mitigates the shortcut unlearning problem.
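The sharpness-based partitioning step described in the abstract can be illustrated with a toy sketch. Everything below — the finite-difference sharpness proxy, the threshold value, and the one-dimensional loss functions — is a hypothetical illustration of the idea, not the paper's actual method:

```python
def sharpness(loss_fn, w, eps=1e-2):
    """Finite-difference proxy for loss-landscape sharpness at parameter w:
    the average loss increase under a small +/- perturbation.
    (Illustrative only; the paper's exact sharpness measure may differ.)"""
    base = loss_fn(w)
    return ((loss_fn(w + eps) - base) + (loss_fn(w - eps) - base)) / 2.0

def partition_forget_set(samples, scores, threshold):
    """Split the forget set into causal- and bias-approximated subsets by
    per-sample sharpness; `threshold` is a hypothetical hyperparameter."""
    causal = [s for s, h in zip(samples, scores) if h > threshold]
    bias = [s for s, h in zip(samples, scores) if h <= threshold]
    return causal, bias

# Toy example: two per-sample losses with different curvature at w = 0.
sharp_loss = lambda w: 50.0 * w * w   # sharp minimum
flat_loss = lambda w: 0.5 * w * w     # flat minimum
scores = [sharpness(sharp_loss, 0.0), sharpness(flat_loss, 0.0)]
causal, bias = partition_forget_set(["sample_A", "sample_B"], scores,
                                    threshold=1e-3)
```

In the full method, the two subsets would then receive refined causal and bias gradients routed through disentangled parameter pathways; the sketch stops at the partitioning step.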
____________________________________________

Title: RefLens: End-to-End Evidence-Grounded Citation Verification with LLM Agents

Authors: SeungHoo Lee, Junehyoung Kwon, Jooweon Choi, JungMin Yun, Seunguk Yu, Yoonji Lee, Jinhee Jang, YoungBin Kim

Abstract: Accurate citation is critical, yet error rates remain high across scientific literature. We present RefLens, an end-to-end system that automates citation verification from PDF parsing to interactive report generation. Unlike summary- or embedding-based approaches, RefLens performs evidence-grounded verification by extracting verbatim spans from original sources and displaying citation-level cards and a paper-level dashboard. In a 35-participant study, users rated value (M=4.34), trust (M=4.15), and usability (M=4.19) highly, with strong adoption intention (M=4.28).
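The evidence-grounded check at the core of such a system can be sketched in miniature: a citation counts as supported only when a verbatim span from the source backs it. The function name and return shape below are illustrative assumptions; the actual system uses LLM agents over parsed PDFs rather than exact string search:

```python
def verify_citation(quoted_span, source_text):
    """Toy evidence-grounded check: the citation is supported only if
    `quoted_span` occurs verbatim in `source_text`. Returns the span's
    character offsets as the displayed evidence, else no evidence.
    (Illustrative stand-in for an LLM-agent verification pipeline.)"""
    idx = source_text.find(quoted_span)
    if idx == -1:
        return {"supported": False, "evidence": None}
    return {"supported": True, "evidence": (idx, idx + len(quoted_span))}

# Usage: check a claimed quote against a (toy) source document.
source = "We find that error rates remain high across scientific literature."
ok = verify_citation("error rates remain high", source)
bad = verify_citation("error rates are negligible", source)
```

The returned offsets are what a citation-level card could highlight: the reader sees exactly which source passage supports (or fails to support) the claim, rather than a similarity score.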