Advancing EEG-based brain-computer interfaces through performance modeling, decoding validation, and methodological optimization: a multi-study approach
Abstract
Brain-computer interfaces (BCIs) represent a transformative technology that enables direct communication between neural activity and external devices. By decoding brain signals associated with movement intentions or cognitive states, BCIs offer potential solutions for individuals with severe paralysis or communication impairments. Interest from both researchers and clinicians in BCI applications has accelerated in recent years. Yet, despite this promise and decades of intensive research, BCIs have not transitioned successfully from controlled laboratory environments to everyday practice. Significant implementation barriers continue to limit widespread adoption, including insufficient speed and accuracy, limited and unstandardized training data, and inconsistent evaluation practices. This dissertation examines critical obstacles hindering practical BCI deployment across two distinct application domains: P300-based communication spellers and EEG-based emotional state recognition. The work addresses these challenges through three strategic approaches: efficient performance estimation in P300 spellers, comprehensive emotion dataset development, and rigorous methodological evaluation.
P300-based BCI spellers enable communication for individuals with severe motor impairments, yet performance validation creates a practical dilemma. Accurate assessment demands extensive testing sessions in which users type 20 or more characters, consuming 4-20 minutes. For individuals who are already fatigued, these lengthy procedures are impractical. Moreover, even after substantial time investment, confidence intervals for accuracy estimates can span ±23%, making it difficult to determine system reliability for a particular user. The first study addresses one of the key challenges in communication BCIs: the slow validation and calibration of P300-based speller systems. A predictive framework based on Classifier-Based Latency Estimation (CBLE) was used to estimate user accuracy from a small amount of typing data (∼3-8 characters). This approach provides faster and more reliable performance prediction, allowing quicker adaptation of the system to individual users.
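To make the idea concrete, the sketch below shows one way such a predictive framework could be organized in Python: per-character CBLE latency estimates are summarized by their spread and mapped to accuracy with a simple regression. The latency-spread feature, the linear model, and the synthetic calibration data are illustrative assumptions, not the dissertation's exact pipeline.

```python
# Minimal sketch (not the dissertation's exact method): predicting P300 speller
# accuracy from a handful of typed characters using a CBLE-style latency statistic.
# Assumption: per-character latency estimates come from applying a trained P300
# classifier to time-shifted EEG windows (not implemented here), and the spread
# of those estimates is used as the predictor of session accuracy.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def cble_latency_spread(latencies_ms):
    """Summary statistic over per-character latency estimates (assumed feature)."""
    return np.std(latencies_ms)

# Synthetic calibration data: users with known accuracies and latency spreads,
# assuming noisier (more variable) latencies accompany lower accuracy.
n_users = 40
true_accuracy = rng.uniform(0.5, 1.0, n_users)               # fraction of correct characters
latency_spread = 60 * (1.0 - true_accuracy) + rng.normal(0, 3, n_users)

model = LinearRegression().fit(latency_spread.reshape(-1, 1), true_accuracy)

# New user: only ~5 typed characters are available.
new_user_latencies = rng.normal(350, 25, size=5)              # ms, per-character CBLE estimates
spread = cble_latency_spread(new_user_latencies)
predicted_accuracy = model.predict([[spread]])[0]
print(f"Predicted accuracy from 5 characters: {predicted_accuracy:.2f}")
```

In practice, the regression would be calibrated on users whose full-session accuracy is known, so that a new user's reliability can be estimated after only a few characters instead of a full typing test.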
Following this development, the second study expands the scope of BCI research toward affective computing. Current emotion recognition research relies on small, proprietary datasets that typically employ single-modality stimuli, limiting model generalizability across different emotion elicitation scenarios. To address this data scarcity, a comprehensive multimodal database was created with recordings from 30 participants experiencing 240 stimuli across three categories: emotional pictures, facial expressions, and music. Each participant provided both Self-Assessment Manikin (SAM) ratings of valence, arousal, and dominance and discrete emotion labels for happiness, sadness, fear, disgust, surprise, and anger. This database provides nearly three times more stimuli than comparable datasets and is the first to integrate all three stimulus modalities within a unified experimental framework for each participant.
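A minimal sketch of how a single trial in such a database might be represented is shown below; the field names, rating scale, and EEG dimensions are placeholders for illustration, not the database's actual schema.

```python
# Illustrative record layout for one trial of a multimodal emotion database.
# All field names, value ranges, and array shapes are assumptions.
from dataclasses import dataclass
from typing import Literal
import numpy as np

@dataclass
class EmotionTrial:
    participant_id: int
    stimulus_id: int
    modality: Literal["picture", "facial_expression", "music"]
    eeg: np.ndarray                     # channels x samples for this trial
    sam_valence: int                    # Self-Assessment Manikin rating, e.g. 1-9
    sam_arousal: int
    sam_dominance: int
    discrete_emotion: Literal["happiness", "sadness", "fear",
                              "disgust", "surprise", "anger"]

# Example record with placeholder EEG data (32 channels, 4 s at 256 Hz).
trial = EmotionTrial(
    participant_id=7,
    stimulus_id=142,
    modality="music",
    eeg=np.zeros((32, 4 * 256)),
    sam_valence=6, sam_arousal=4, sam_dominance=5,
    discrete_emotion="happiness",
)
```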
However, large-scale dataset availability alone cannot ensure progress if methodological practices remain flawed. The insights gained from dataset design motivated the third study, which systematically examined methodological flaws that affect the reliability of emotion recognition results. A systematic review of 101 studies using the widely adopted DEAP dataset revealed that 87% contained critical errors, including data leakage from improper segmentation, biased feature selection, and flawed hyperparameter optimization. These methodological oversights artificially inflate reported accuracies, creating unrealistic expectations about system capabilities. Experimental validation demonstrated that such errors can boost performance estimates by up to 46 percentage points, helping to explain why laboratory results often fail to translate to real-world deployment. The review establishes rigorous evaluation protocols and provides concrete guidelines to prevent common methodological pitfalls.
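The segmentation leak in particular is easy to reproduce on synthetic data: the sketch below compares a random split of EEG windows against a split that keeps all windows from a trial on the same side. The trial counts, window sizes, and classifier are illustrative assumptions, not the settings of the reviewed studies.

```python
# Minimal sketch of the segmentation leak: cutting each trial into many windows
# and then splitting windows at random puts segments of the same trial in both
# train and test sets, inflating accuracy. All numbers are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GroupShuffleSplit

rng = np.random.default_rng(0)
n_trials, windows_per_trial, n_features = 40, 20, 16

# Each trial has its own random "fingerprint"; labels are assigned at random,
# so there is no genuine signal and honest accuracy should be near 50%.
trial_means = rng.normal(0, 1, (n_trials, n_features))
labels_per_trial = rng.integers(0, 2, n_trials)

X = (np.repeat(trial_means, windows_per_trial, axis=0)
     + rng.normal(0, 0.3, (n_trials * windows_per_trial, n_features)))
y = np.repeat(labels_per_trial, windows_per_trial)
groups = np.repeat(np.arange(n_trials), windows_per_trial)

def evaluate(train_idx, test_idx):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    return clf.score(X[test_idx], y[test_idx])

# Leaky protocol: split individual windows at random.
leaky_train, leaky_test = train_test_split(np.arange(len(y)), test_size=0.3, random_state=0)
# Correct protocol: keep every window of a trial on the same side of the split.
correct_train, correct_test = next(
    GroupShuffleSplit(test_size=0.3, random_state=0).split(X, y, groups))

print(f"Leaky split accuracy:   {evaluate(leaky_train, leaky_test):.2f}")
print(f"Grouped split accuracy: {evaluate(correct_train, correct_test):.2f}")
```

Because the labels here are assigned at random, any accuracy well above 50% under the leaky split is purely an artifact of segments from the same trial appearing in both training and test sets.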
Together, these three studies form a continuous progression: from improving BCI speller performance modeling, to enabling standardized affective datasets, to establishing rigorous evaluation protocols. Collectively, they contribute to building faster, more reliable, and methodologically sound EEG-based BCI systems for assistive and adaptive real-world applications.