Welcome!
I am a fifth-year Ph.D. candidate in Computer Science at the University of Illinois at Urbana-Champaign, advised by Prof. Julia Hockenmaier. My research primarily centers on:
- Mechanistic interpretability.
- Creative repurposing of interpretability methods (e.g., SAEs) to deliver practical, real-world gains.
- Designing and analyzing novel deep learning architectures, with a current focus on mixture-of-experts models.
Education
- University of Illinois at Urbana-Champaign Aug 2021 ~ Present
- Ph.D. student in Computer Science
- Seoul National University Mar 2013 ~ Feb 2021
- B.S. in Electrical and Computer Engineering
Fellowships and Awards
- CS Ph.D. Fellowship (UIUC)
- Sep 2023 - May 2024
- Sep 2024 - May 2025
- Sep 2025 - May 2026
- Conference Presentation Awards for Graduate Students (UIUC)
- Excellent TA Award (UIUC)
Publications
- Toward Efficient Sparse Autoencoder-Guided Steering for Improved In-Context Learning in Large Language Models
🎉 EMNLP 2025 Main
Ikhyun Cho and Julia Hockenmaier
- On the Versatility of Sparse Autoencoders for In-Context Learning
🎉 EMNLP 2025 Findings
Ikhyun Cho, Gaeul Kwon, and Julia Hockenmaier
- Analyzing Multilingualism in Large Language Models with Sparse Autoencoders
🎉 COLM 2025
Ikhyun Cho and Julia Hockenmaier
- The Power of Bullet Lists: Reducing Mistakes in Large Language Models with a Simple Primer
🎉 NAACL 2025 Findings
Ikhyun Cho, Changyeon Park, and Julia Hockenmaier
- Tutor-ICL: Guiding Large Language Models for Improved In-Context Learning Performance
🎉 EMNLP 2024 Findings
Ikhyun Cho, Gaeul Kwon, and Julia Hockenmaier
- SIR-ABSC: Incorporating Syntax into RoBERTa-based Sentiment Analysis Models with a Special Aggregator Token
🎉 EMNLP 2023 Findings
Ikhyun Cho, Yoonhwa Jung, and Julia Hockenmaier
- VisualSiteDiary: A Detector-Free Vision-Language Transformer Model for Captioning Photologs for Daily Construction Reporting and Image Retrievals
🎉 Automation in Construction (Elsevier), 2024
Yoonhwa Jung, Ikhyun Cho, and Julia Hockenmaier
- Pea-KD: Parameter-efficient and accurate Knowledge Distillation on BERT
🎉 PLOS ONE 2022
Ikhyun Cho and U Kang
- SensiMix: Sensitivity-Aware 8-bit index & 1-bit value mixed precision quantization for BERT compression
🎉 PLOS ONE 2022
Tairen Piao, Ikhyun Cho, and U Kang
Unfortunate Publications 😭
- Duplicate-and-Share: A Novel Approach to Efficient Vision Transformer Unlearning
Ikhyun Cho, Changyeon Park, and Julia Hockenmaier
- Prompting for Mixture-of-Experts: A Prompt-based Mixture-of-Experts framework for Stylized Image Captioning
Ikhyun Cho, Yoonhwa Jung, and Julia Hockenmaier
- Attack and reset for unlearning: Exploiting adversarial noise toward machine unlearning through parameter re-initialization
Yoonhwa Jung, Ikhyun Cho, Shun-Hsiang Hsu, and Julia Hockenmaier