Hye Won Chung, Associate Professor in the School of Electrical Engineering at KAIST (joint affiliation with the School of Computing and the Graduate School of AI)
Distinguished Lecturer of the IEEE Information Theory Society (ITSOC) for 2025-2026
Curriculum Vitae: [CV]
Email: hwchung@kaist.ac.kr
Office: N1 Building Room 206, KAIST
Phone: +82-42-350-7441
Short bio
I am an Associate Professor in the School of Electrical Engineering at KAIST, jointly affiliated with the School of Computing. I completed my Ph.D. in Electrical Engineering and Computer Science (EECS) at MIT in 2014. From 2014 to 2017, I worked at the University of Michigan as a research fellow. I received my M.S. from MIT and my B.S. (summa cum laude) from KAIST in Korea, all in EECS.
My research interests include data science, information theory, statistical inference, machine learning, and quantum information. My goal is to provide a theoretical framework for data science using tools from information theory, statistical inference, and machine learning. I also aim to develop efficient algorithmic tools for extracting and exploiting information in statistical inference problems.
Selected recent papers
Algorithms and Theory for Data Science and Machine Learning
Rethinking Self-Distillation: Label Averaging and Enhanced Soft Label Refinement with Partial Labels, ICLR 2025
Exact Matching in Correlated Networks with Node Attributes for Improved Community Recovery, IEEE Trans. Information Theory 2025
Rank-1 Matrix Completion with Gradient Descent and Small Random Initialization, NeurIPS 2023
Recovering Top-Two Answers and Confusion Probability in Multi-Choice Crowdsourcing, ICML 2023
A Worker-Task Specialization Model for Crowdsourcing: Efficient Inference and Fundamental Limits, IEEE Trans. Information Theory 2024
Binary Classification with XOR Queries: Fundamental Limits and An Efficient Algorithm, IEEE Trans. Information Theory 2021
Detection of Signal in the Spiked Rectangular Models, ICML 2021
Efficient Deep Learning, Robust and Trustworthy AI
Toward Understanding Adversarial Distillation: Why Robust Teachers Fail, ICML 2026
CovMatch: Cross-Covariance Guided Multimodal Dataset Distillation with Trainable Text Encoder, NeurIPS 2025
VIPAMIN: Visual Prompt Initialization via Embedding Selection and Subspace Expansion, NeurIPS 2025
BWS: Best Window Selection Based on Sample Scores for Data Pruning across Broad Ranges, ICML 2024
Data Valuation without Training of a Model, ICLR 2023
Test-Time Adaptation via Self-Training with Nearest Neighbor Information, ICLR 2023
Self-Diagnosing GAN: Diagnosing Underrepresented Samples in Generative Adversarial Networks, NeurIPS 2021
News
(May 2026) One paper has been accepted to ICML 2026.
Toward Understanding Adversarial Distillation: Why Robust Teachers Fail
(May 2026) I will attend the IEEE European School of Information Theory (ESIT) and give a 3-hour tutorial on data- and supervision-efficient learning.
(Apr. 2026) One paper has been accepted to TMLR.
Sample-Wise Adaptive Weighting for Transfer Consistency in Adversarial Distillation.
(Apr. 2026) I will serve as an area chair of NeurIPS 2026.
(Jan. 2026) I will serve as an area chair of ICML 2026.