Human–Machine Interaction (HMI) systems are integral to modern technological infrastructures, ranging from aviation control and automotive systems to medical monitoring and industrial automation. As automation levels increase, human operators are often required to maintain high levels of vigilance over prolonged periods. This sustained cognitive engagement inevitably leads to cognitive fatigue—a neurophysiological state characterized by reduced alertness, slower reaction times, and diminished decision-making capacity. Cognitive fatigue poses serious safety and performance risks in HMI contexts, where human error can have catastrophic consequences. Consequently, the accurate and timely detection of cognitive fatigue has become a critical research focus in biomedical engineering, particularly through the analysis of brain activity using electroencephalography (EEG) (Koch et al., 2017).
Cognitive fatigue is not merely a subjective experience but a measurable neurocognitive state resulting from prolonged cognitive effort and mental workload. In safety-critical domains such as air traffic control, driving, and robotic teleoperation, operators must continuously process large volumes of sensory and decision-making information. Prolonged exposure to such tasks results in altered neural patterns, manifesting as changes in EEG rhythms, particularly within the theta (4–7 Hz), alpha (8–13 Hz), and beta (14–30 Hz) frequency bands. These EEG features provide valuable insight into the temporal dynamics of fatigue onset and progression. However, the challenge lies in translating these physiological signals into real-time, reliable indicators suitable for practical deployment within HMI systems (Kardell, 2017).
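The band powers referred to above are obtained from the spectrum of a windowed EEG segment. As a purely illustrative sketch, the snippet below estimates theta- and alpha-band power with a naive DFT on a synthetic 10 Hz signal; the sampling rate, window length, and pure-sine input are assumptions for demonstration, not recorded EEG:

```python
import cmath
import math

def band_power(signal, fs, lo, hi):
    """Power in the [lo, hi] Hz band via a naive DFT (illustrative only)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            coef = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                       for t, x in enumerate(signal))
            total += abs(coef) ** 2 / n
    return total

fs = 128                                   # assumed sampling rate (Hz)
sig = [math.sin(2 * math.pi * 10 * i / fs) for i in range(256)]  # 10 Hz "alpha" tone
theta = band_power(sig, fs, 4, 7)
alpha = band_power(sig, fs, 8, 13)
print(alpha > theta)  # → True: the alpha band dominates this synthetic signal
```

In practice a windowed estimator such as Welch's method would replace the naive DFT, but the band-integration step is the same.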
Traditional methods for fatigue detection—such as behavioral measures (e.g., reaction time, eye blinking rate) and self-reported scales (e.g., Karolinska Sleepiness Scale)—suffer from several limitations. Behavioral indicators often lag behind physiological changes and can be confounded by environmental or emotional factors. Self-reports, on the other hand, are subjective and intrusive, making them unsuitable for continuous monitoring. Physiological methods, particularly EEG, offer an objective, non-invasive, and temporally precise approach for fatigue detection. EEG signals capture direct neural activity, providing a window into brain dynamics that precede behavioral deterioration. Nevertheless, existing EEG-based systems often rely on offline data analysis, laboratory-controlled conditions, or high-density electrode arrays that are impractical for real-time operational use.
In biomedical signal processing, significant strides have been made toward automating the analysis of EEG signals for fatigue detection. Conventional approaches employ statistical and frequency-domain features, such as Power Spectral Density (PSD) or band-power ratios, followed by machine learning models like Support Vector Machines (SVM) or Random Forest classifiers. While these methods have demonstrated moderate success in distinguishing between fatigued and alert states, they are often limited by feature dependency, subject-specific variability, and an inability to adapt to dynamic conditions encountered in real-world HMI applications. The inherent non-stationarity of EEG signals further complicates model generalization, leading to degraded accuracy in cross-session or cross-user scenarios. Additionally, most studies are confined to offline analysis pipelines, where post-hoc processing is performed on pre-recorded data, thereby limiting their applicability for real-time monitoring and intervention (Naczynski et al., 2013).
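The conventional pipeline just described can be sketched end to end. The example below uses hypothetical band-power ratio features and substitutes a nearest-centroid rule for the SVM/Random Forest step so it stays dependency-free; the feature values are illustrative, not measured data:

```python
import math

# Hypothetical [theta/alpha, theta/beta] band-power ratios per EEG window.
# Fatigue is typically associated with elevated theta relative to alpha and beta.
alert_windows    = [[0.40, 0.30], [0.50, 0.35], [0.45, 0.32]]
fatigued_windows = [[1.20, 0.90], [1.40, 1.00], [1.30, 0.95]]

def centroid(rows):
    """Mean feature vector of a set of windows."""
    return [sum(col) / len(rows) for col in zip(*rows)]

centroids = {
    "alert": centroid(alert_windows),
    "fatigued": centroid(fatigued_windows),
}

def classify(features):
    """Nearest-centroid decision rule (a stand-in for SVM/Random Forest)."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

print(classify([1.25, 0.92]))  # → fatigued
print(classify([0.42, 0.31]))  # → alert
```

The subject-specific variability noted above shows up here directly: the centroids learned for one operator need not separate another operator's windows.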
From a biomedical engineering perspective, the design of a real-time EEG-based fatigue detection system demands an integrated approach that combines efficient signal acquisition, noise suppression, feature extraction, and robust classification algorithms within a low-latency framework. Advances in deep learning and embedded computing provide new opportunities to overcome the limitations of traditional models. Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) architectures have emerged as powerful tools for modeling spatial–temporal dependencies in EEG data. CNNs excel in extracting discriminative spatial features, while LSTMs capture the temporal evolution of brain states. The hybrid CNN–LSTM architecture, therefore, offers a promising pathway for accurate fatigue classification, capable of adapting to non-linear, time-varying EEG patterns in real time. However, despite its potential, few studies have successfully integrated such models into embedded biomedical systems that can operate continuously in real-world HMI environments.
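To make the CNN–LSTM division of labor concrete, the sketch below traces how the dimensions of a raw EEG window flow through a hypothetical convolutional front end into the recurrent stage; the window length, kernel size, and stride are illustrative assumptions, not the architecture evaluated in this study:

```python
def conv1d_out(n, kernel, stride=1, padding=0):
    """Output length of a 1-D convolution or pooling layer."""
    return (n + 2 * padding - kernel) // stride + 1

window = 512                               # e.g. 4 s of EEG at 128 Hz (assumed)
after_conv = conv1d_out(window, kernel=7)  # CNN extracts local feature patterns
after_pool = conv1d_out(after_conv, kernel=4, stride=4)  # pooling shortens the sequence
# The pooled feature sequence is the LSTM's input: one recurrent step per
# position, so the LSTM models the temporal evolution of the brain state
# over `after_pool` timesteps.
print(after_conv, after_pool)  # → 506 126
```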
The problem statement addressed in this study is thus articulated as follows:
Existing EEG-based fatigue detection frameworks are largely offline, computationally intensive, and unable to deliver accurate, real-time classification of cognitive fatigue under dynamic Human–Machine Interaction conditions.
Moreover, there remains a lack of integrated biomedical systems that combine signal acquisition, preprocessing, and machine learning classification in a unified, low-latency pipeline suitable for deployment in operational environments such as driver monitoring, drone piloting, or medical robotics. Current models also fail to adequately address individual variability, often requiring user-specific calibration, which limits their scalability and generalizability across populations.
The research gap motivating this work lies at the intersection of neuroscience, biomedical engineering, and real-time signal processing. While EEG-based fatigue detection has been extensively studied in controlled laboratory conditions, the transition to operational, embedded, and adaptive systems remains underdeveloped. Key challenges include:
Real-time processing constraints – existing algorithms require substantial computational resources that are incompatible with embedded biomedical platforms.
Artifact robustness – EEG signals are susceptible to motion, muscle, and environmental artifacts, which degrade classification accuracy if not properly mitigated.
Adaptive modeling – fatigue manifests differently across individuals; models therefore need adaptive calibration mechanisms to generalize across users.
System integration – most prior research isolates signal processing, modeling, or hardware aspects rather than presenting a fully integrated end-to-end biomedical system.
Practical usability – limited studies focus on wearable, comfortable, and biocompatible EEG systems that are viable for extended real-world use.
To address these gaps, this research proposes a real-time EEG-based cognitive fatigue detection system leveraging a hybrid CNN–LSTM deep learning model implemented within a biomedical embedded architecture. The system captures EEG signals from low-density, wearable electrodes and processes them using optimized filtering and Independent Component Analysis (ICA) to remove artifacts. Subsequently, time–frequency features such as PSD, Hjorth parameters, and Theta/Alpha–Beta ratios are fed into the CNN–LSTM model for classification. The architecture is optimized for low-latency performance (< 500 ms per window), ensuring operational feasibility for real-time HMI monitoring. This integrated biomedical framework thus bridges the gap between laboratory EEG research and practical, deployable fatigue detection systems.
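Of the features listed, the Hjorth parameters are the least standard, so a reference computation is worth spelling out: all three follow from variance estimates on a window and its successive differences. The sketch below is a minimal pure-Python version on a synthetic sine window; the sampling rate and window length are assumptions:

```python
import math
import statistics

def hjorth(x):
    """Hjorth Activity, Mobility, and Complexity of a 1-D signal window."""
    dx = [b - a for a, b in zip(x, x[1:])]     # first difference
    ddx = [b - a for a, b in zip(dx, dx[1:])]  # second difference
    var = statistics.pvariance
    activity = var(x)                           # signal power
    mobility = math.sqrt(var(dx) / activity)    # proxy for dominant frequency
    complexity = math.sqrt(var(ddx) / var(dx)) / mobility  # waveform irregularity
    return activity, mobility, complexity

fs = 128                                        # assumed sampling rate (Hz)
x = [math.sin(2 * math.pi * 10 * i / fs) for i in range(256)]  # pure 10 Hz tone
act, mob, comp = hjorth(x)
# A pure sinusoid has Complexity near 1; more irregular activity pushes it
# higher, which is what makes the parameter informative for fatigue tracking.
```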
The contributions of this study can be summarized as follows:
Design and implementation of a real-time EEG-based cognitive fatigue detection system suitable for embedded biomedical applications.
Development of a hybrid CNN–LSTM deep learning model that simultaneously captures spatial and temporal EEG dynamics, improving classification accuracy.
Integration of signal preprocessing and feature extraction modules within a compact, low-latency pipeline, ensuring robustness to noise and artifacts.
Experimental validation using real EEG data from sustained-attention tasks, demonstrating the system’s effectiveness in detecting fatigue onset with over 94% accuracy.
Deployment on an embedded platform (e.g., NVIDIA Jetson Nano), confirming the feasibility of real-time biomedical monitoring within human–machine systems.
In essence, this research presents a novel contribution to biomedical signal processing and neuroergonomics by developing a real-time, EEG-based fatigue monitoring system that integrates neuroscientific principles with modern engineering methodologies. The proposed framework addresses critical limitations of prior models and advances the field toward intelligent, adaptive, and user-centric HMI systems that promote safety, efficiency, and cognitive well-being. By bridging the gap between laboratory neuroscience and applied biomedical engineering, this study demonstrates how EEG-based neurophysiological monitoring can enhance human performance in increasingly automated and cognitively demanding environments.