Dissecting EEG-Language Models: Token Granularity, Model Size, and Cross-Site Generalization

We investigate how token granularity and model size affect EEG-language model performance in both in-distribution and cross-site scenarios, and find that token granularity is a critical, task-dependent scaling dimension for clinical EEG models, sometimes more important than model size.

January 2026 · Xujin Chris Liu, Yao Wang, Eric Karl Oermann
Lightweight Transformer for EEG Classification via Balanced Signed Graph Algorithm Unrolling

We build lightweight, interpretable transformer-like neural networks by unrolling a spectral denoising algorithm for signals on a balanced signed graph, i.e., a graph with no cycle containing an odd number of negative edges. Experiments show that our method achieves classification performance comparable to representative deep learning schemes while using dramatically fewer parameters.

January 2026 · Junyi Yao, Parham Eftekhar, Gene Cheung, Xujin Chris Liu, Yao Wang, Wei Hu
VoxelFormer: Parameter-Efficient Multi-Subject Visual Decoding from fMRI

VoxelFormer is a lightweight transformer architecture that enables multi-subject training for visual decoding from fMRI.

September 2025 · Chenqian Le, Yilin Zhao, Nikasadat Emami, Kushagra Yadav, Xujin Chris Liu, Xupeng Chen, Yao Wang
Neural and Computational Mechanisms Underlying One-shot Perceptual Learning in Humans

We investigate the neural and computational mechanisms underlying one-shot perceptual learning in humans. By introducing a novel top-down feedback mechanism into a vision transformer and comparing its representations with fMRI data, we identify high-level visual cortex as the most likely neural substrate in which plasticity supports one-shot perceptual learning.

May 2025 · Xujin Chris Liu, Ayaka Hachisuka, Jonathan D. Shor, Daniel Friedman, Patricia Dugan, Ignacio Saez, Fedor E. Panov, Yao Wang, Werner Doyle, Orrin Devinsky, Eric K. Oermann, Biyu J. He
Longitudinal deep neural networks for assessing metastatic brain cancer on a large open benchmark

We present NYUMets-Brain, the world’s largest longitudinal real-world cancer dataset, comprising the imaging, clinical follow-up, and medical management records of 1,429 patients. Using this dataset, we developed Segmentation-Through-Time, a deep neural network that explicitly exploits the longitudinal structure of the data and achieves state-of-the-art detection and segmentation of small (<10 mm³) metastases.

September 2024 · Katherine E. Link, Zane Schnurman, Xujin Chris Liu, Young Joon Fred Kwon, Lavender Yao Jiang, Mustafa Nasir-Moin, Sean Neifert, Juan Diego Alzate, Kenneth Bernstein, Tanxia Qu, Viola Chen, Eunice Yang, John G. Golfinos, Daniel Orringer, Douglas Kondziolka, Eric Karl Oermann
Health system-scale language models are all-purpose prediction engines

We trained a large language model for medical language (NYUTron) and subsequently fine-tuned it across a wide range of clinical and operational predictive tasks, and found that it outperforms traditional models while being much easier to deploy.

October 2023 · Lavender Yao Jiang, Xujin Chris Liu, ..., Eric Karl Oermann
Automated, Scalable and Generalizable Deep Learning for Tracking Cortical Spreading Depression Using EEG

We present a graph neural network that tracks cortical spreading depressions in scalp EEG signals. We show that our model scales across EEG electrode densities and generalizes to different head models.

May 2021 · Xujin Liu, Alireza Chamanzar, Lavender Y. Jiang, Kimon A. Vogt, Jose M. F. Moura, Pulkit Grover