Welcome!
I am a fourth-year Ph.D. student in Computer Science at the University of Illinois at Urbana-Champaign, advised by Prof. Julia Hockenmaier. My research centers on:
- Creating and analyzing novel deep learning model architectures
- Improving the in-context learning framework using large language models (LLMs)
- Enhancing spatial reasoning capabilities of LLMs
Education
- University of Illinois at Urbana-Champaign Aug 2021 ~ Present
- Ph.D. student in Computer Science
- Seoul National University Mar 2013 ~ Feb 2021
- B.S. in Electrical and Computer Engineering
Fellowships
- University of Illinois at Urbana-Champaign
- CS Ph.D. Fellowship Sep 2023 ~ May 2024
Publications
- Anonymous
Under review at ARR, August 2024
Ikhyun Cho, Changyeon Park, and Julia Hockenmaier
- Anonymous (received a meta-review score of 4)
Under review at EMNLP 2024
Ikhyun Cho, Gaeul Kwon, and Julia Hockenmaier
- Duplicate-and-Share: A Novel Approach to Efficient Vision Transformer Unlearning
Under review at AAAI 2025
Ikhyun Cho, Changyeon Park, and Julia Hockenmaier
- ViT-MUL: Baseline Study on Recent Machine Unlearning Methods Applied to Vision Transformers
[ArXiv]
Ikhyun Cho, Changyeon Park, and Julia Hockenmaier
- Attack and Reset for Unlearning: Exploiting Adversarial Noise toward Machine Unlearning through Parameter Re-initialization
[ArXiv]
Yoonhwa Jung, Ikhyun Cho, Shun-Hsiang Hsu, and Julia Hockenmaier
- VisualSiteDiary: A Detector-Free Vision-Language Transformer Model for Captioning Photologs for Daily Construction Reporting and Image Retrievals
Elsevier: Automation in Construction
Yoonhwa Jung, Ikhyun Cho, and Julia Hockenmaier
- SIR-ABSC: Incorporating Syntax into RoBERTa-based Sentiment Analysis Models with a Special Aggregator Token
EMNLP 2023 Findings
Ikhyun Cho, Yoonhwa Jung, and Julia Hockenmaier
- Pea-KD: Parameter-efficient and accurate Knowledge Distillation on BERT
PLOS ONE 2022
Ikhyun Cho and U Kang
- SensiMix: Sensitivity-Aware 8-bit index & 1-bit value mixed precision quantization for BERT compression
PLOS ONE 2022
Tairen Piao, Ikhyun Cho, and U Kang