Minseo Kim

Undergraduate Student, CSE, Seoul National University


Hi, I’m an undergraduate student at Seoul National University majoring in Computer Science and Engineering. My research focuses on improving the efficiency of large-scale AI models, grounded in a deep understanding of computer systems. I aim to achieve this by developing system-aware algorithmic methods and cross-stack designs that enable efficient training and serving in real-world deployments. Currently, I am working on inference efficiency for large language models (LLMs) and diffusion language models (DLMs).

During my undergraduate years, I have been fortunate to be part of two great research groups. I was a visiting researcher in the Pallas Lab at Berkeley AI Research (BAIR), advised by Prof. Kurt Keutzer and Dr. Amir Gholami. Previously, I worked in the Architecture and Code Optimization Lab (ARC Lab) at Seoul National University, advised by Prof. Jae W. Lee.

I am seeking a PhD position starting in Fall 2026.

news

Jan 27, 2026 Our paper on accelerating DLM inference was accepted to MLSys 2026. Huge thanks to my collaborators Chenfeng, Coleman, and Harman!
Jan 19, 2026 I’m joining FuriosaAI as an AI Algorithm Research Intern, looking forward to collaborating with a great team!
Jan 08, 2026 I gave a 1-hour online seminar at Cerebras with Coleman, presenting our work on Diffusion Language Models.
Nov 14, 2025 Team Architects won the Grand Prize (NIPA President’s Award) at the 2025 AI Chip Contest!
Oct 07, 2025 Our paper on DLM analysis is now on arXiv. This is my first paper at Berkeley!
Aug 21, 2025 Our VLM team at AttentionX had a paper accepted to EMNLP 2025! I’ll be presenting it in Suzhou, China (Nov 5-7).

selected publications

  1. MLSys
    CDLM: Consistency Diffusion Language Models for Faster Sampling
    Minseo Kim, Chenfeng Xu, Coleman Hooper, and 5 more authors
    Accepted to MLSys 2026, 2025
  2. arXiv
    Beyond Next-Token Prediction: A Performance Characterization of Diffusion versus Autoregressive Language Models
    Minseo Kim, Coleman Hooper, Aditya Tomar, and 5 more authors
    arXiv preprint, 2025
  3. EMNLP
    KRETA: A Benchmark for Korean Reading and Reasoning in Text-Rich VQA Attuned to Diverse Visual Contexts
    Taebaek Hwang*, Minseo Kim*, Gisang Lee, and 2 more authors
    In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2025
  4. CIKM
    InstANNS: Scalable Approximate Nearest Neighbor Search via Cost-Efficient In-Storage Processing
    Bonggeun Sim, Yushin Kim, Minseo Kim, and 2 more authors
    In Proceedings of the 34th ACM International Conference on Information and Knowledge Management (CIKM ’25), 2025