Dissecting EEG-Language Models: Token Granularity, Model Size, and Cross-Site Generalization

We investigate how token granularity and model size affect EEG-language model performance in both in-distribution and cross-site scenarios, and find that token granularity is a critical, task-dependent scaling dimension for clinical EEG models, sometimes more important than model size.

January 2026 · Xujin Chris Liu, Yao Wang, Eric Karl Oermann
Lightweight Transformer for EEG Classification via Balanced Signed Graph Algorithm Unrolling

We build lightweight, interpretable transformer-like neural networks by unrolling a spectral denoising algorithm for signals on a balanced signed graph, i.e., a graph in which no cycle contains an odd number of negative edges. Experiments show that our method achieves classification performance comparable to representative deep learning schemes while using dramatically fewer parameters.
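As an aside, the balance property mentioned above can be checked with a simple two-coloring test: by Harary's theorem, a signed graph is balanced iff its vertices can be split into two groups such that positive edges stay within a group and negative edges cross between groups. The sketch below is illustrative only (it is not code from the paper); the function name and edge-list format are assumptions.

```python
from collections import deque

def is_balanced(n, edges):
    """Check whether a signed graph is balanced.

    By Harary's theorem, balance (every cycle has an even number of
    negative edges) is equivalent to a 2-coloring where positive edges
    join same-colored vertices and negative edges join different ones.

    n: number of vertices, labeled 0..n-1
    edges: list of (u, v, sign) tuples with sign in {+1, -1}
    """
    adj = [[] for _ in range(n)]
    for u, v, s in edges:
        adj[u].append((v, s))
        adj[v].append((u, s))
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                # positive edge: same color; negative edge: flipped color
                want = color[u] if s > 0 else 1 - color[u]
                if color[v] is None:
                    color[v] = want
                    queue.append(v)
                elif color[v] != want:
                    return False  # found a cycle with odd negative-edge count
    return True

# Triangle with two negative edges (even count) -> balanced
print(is_balanced(3, [(0, 1, -1), (1, 2, -1), (2, 0, 1)]))  # True
# Triangle with one negative edge (odd count) -> unbalanced
print(is_balanced(3, [(0, 1, -1), (1, 2, 1), (2, 0, 1)]))   # False
```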

January 2026 · Junyi Yao, Parham Eftekhar, Gene Cheung, Xujin Chris Liu, Yao Wang, Wei Hu
Automated, Scalable and Generalizable Deep Learning for Tracking Cortical Spreading Depression Using EEG

We present a graph neural network that tracks cortical spreading depressions in scalp EEG signals. We show that our model scales to different EEG electrode densities and generalizes across head models.

May 2021 · Xujin Liu, Alireza Chamanzar, Lavender Y. Jiang, Kimon A. Vogt, Jose M. F. Moura, Pulkit Grover