Yejin Choi

Hi, I'm Yejin Choi. I'm a researcher at MIRLAB (Multimodal Intelligence Research Lab) at Yonsei University, advised by Youngjae Yu. I received my Bachelor of Engineering degree in Computer Engineering, and I am currently pursuing an integrated MS/PhD in Artificial Intelligence.

My research focuses on multimodal AI for real-world tasks, emphasizing task-aware modeling and system-level efficiency to enable AI systems that perceive, reason, and act in complex environments.

Publications

Zero-shot Multimodal Document Retrieval via Cross-modal Question Generation

Yejin Choi*, Jaewoo Park*, Janghan Yoon, Saejin Kim, Jaehyun Jeon, Youngjae Yu

Under Review

TL;DR: PREMIR makes retrieval more practical and robust by using cross-modal pre-questions instead of token-level matching, outperforming baselines on real-world closed-domain and multilingual multimodal tasks.

G1yphD3c0de: Towards Safer Language Models on Visually Perturbed Texts

Yejin Choi, Yejin Yeo, Yejin Son, Seungju Han, Youngjae Yu

COLM 2025

TL;DR: We introduce GlyphDecode, a multimodal framework for restoring visually perturbed text and enhancing content moderation, featuring a lightweight GlyphRestorer and the GlyphSynth benchmark for real-world evaluation.

Towards Visual Text Design Transfer Across Languages

Yejin Choi*, Jiwan Chung*, Sumin Shim, Giyeong Oh, Youngjae Yu

NeurIPS 2024

TL;DR: We introduce MuST-Bench for evaluating generative models’ visual text style translation across languages and propose SIGIL as a framework for achieving it.

G-FOCUS: Towards a Robust Method for Assessing UI Design Persuasiveness

Jaehyun Jeon, Jang Han Yoon, Min Soo Kim, Sumin Shim, Yejin Choi, Hanbin Kim, Youngjae Yu

Under Review

TL;DR: We introduce WiserUI-Bench, a benchmark with 300 real-world UI image pairs and A/B test results for assessing design persuasiveness. Our reasoning strategy, G-FOCUS, enhances VLMs’ reliability in UI evaluation by reducing bias and improving accuracy.