Hi, I'm Yejin Choi. I'm a researcher at MIRLAB (Multimodal Intelligence Research Lab) at Yonsei University, advised by Youngjae Yu. I received my Bachelor of Engineering degree in Computer Engineering, and I am currently pursuing an integrated MS/PhD program in Artificial Intelligence.
My research focuses on multimodal AI for real-world tasks, emphasizing task-aware modeling and system-level efficiency to enable AI systems that perceive, reason, and act in complex environments.
Under Review
TLDR; PREMIR makes retrieval more practical and robust by using cross-modal pre-questions instead of token-level matching, outperforming baselines on real-world closed-domain and multilingual multimodal tasks.
COLM 2025
TLDR; We introduce GlyphDecode, a multimodal framework for restoring visually perturbed text and enhancing content moderation, featuring a lightweight GlyphRestorer and the GlyphSynth benchmark for real-world evaluation.
Under Review
TLDR; We introduce WiserUI-Bench, a benchmark of 300 real-world UI image pairs paired with A/B test results for assessing design persuasiveness. Our reasoning strategy, G-FOCUS, enhances VLMs' reliability in UI evaluation by reducing bias and improving accuracy.