Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Automated sleep staging is crucial for assessing sleep quality and diagnosing sleep-related diseases. Single-channel EEG has attracted significant attention due to its portability and accessibility. Most existing automated sleep staging methods emphasize temporal information while neglecting spectral information, the relationships between the contextual features of sleep stages, and the transition rules between stages. To overcome these obstacles, this paper proposes an attention-based two-stage temporal-spectral fusion model (BiTS-SleepNet). The BiTS-SleepNet stage 1 network consists of a dual-stream temporal-spectral feature extractor branch and a temporal-spectral feature fusion module based on the cross-attention mechanism. These blocks are designed to autonomously extract and integrate the temporal and spectral features of EEG signals, leveraging temporal-spectral fusion information to discriminate between different sleep stages. The BiTS-SleepNet stage 2 network includes a feature context learning module (FCLM) based on Bi-GRU and a transition rules learning module (TRLM) based on a Conditional Random Field (CRF). The FCLM refines the preliminary sleep stage results of the stage 1 network by learning dependencies between the features of multiple adjacent stages. The TRLM additionally employs transition rules to optimize the overall outcomes. We evaluated BiTS-SleepNet on three public datasets: Sleep-EDF-20, Sleep-EDF-78, and SHHS, achieving accuracies of 88.50%, 85.09%, and 87.01%, respectively. The experimental results demonstrate that BiTS-SleepNet achieves competitive performance in comparison to recently published methods, highlighting its promise for practical applications.
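The abstract does not include an implementation, but the cross-attention fusion idea it describes can be sketched compactly. The following PyTorch snippet is a minimal, hypothetical illustration of fusing a temporal feature sequence with a spectral feature sequence via bidirectional cross-attention; the module name, dimensions, and head count are assumptions for demonstration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Minimal sketch of cross-attention fusion between temporal and
    spectral EEG feature sequences (hypothetical dims, not the paper's)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Each stream queries the other: temporal attends to spectral,
        # and spectral attends to temporal.
        self.t2s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.s2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, temporal, spectral):
        # temporal, spectral: (batch, seq_len, dim)
        t_fused, _ = self.t2s(temporal, spectral, spectral)  # query = temporal
        s_fused, _ = self.s2t(spectral, temporal, temporal)  # query = spectral
        fused = torch.cat([t_fused, s_fused], dim=-1)
        return self.proj(fused)  # (batch, seq_len, dim)

# Toy usage: one EEG epoch encoded into 10 temporal and 10 spectral tokens.
fusion = CrossAttentionFusion()
t = torch.randn(2, 10, 128)
s = torch.randn(2, 10, 128)
print(fusion(t, s).shape)  # torch.Size([2, 10, 128])
```

In the paper's terms, such a fused representation would presumably feed the stage 1 classifier before the Bi-GRU/CRF refinement in stage 2.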

Source: http://dx.doi.org/10.1109/JBHI.2024.3523908

Publication Analysis

Top Keywords

temporal-spectral fusion (12)
sleep staging (12)
transition rules (12)
stage network (12)
attention-based stage (8)
stage temporal-spectral (8)
fusion model (8)
sleep (8)
single-channel eeg (8)
automated sleep (8)

Similar Publications

A review of hybrid EEG-based multimodal human-computer interfaces using deep learning: applications, advances, and challenges.

Biomed Eng Lett

July 2025

Department of Electronics and Information Engineering, Korea University, 2511, Sejong-ro, Jochiwon-eup, Sejong-si, 30019 Republic of Korea.

Human-computer interaction (HCI) focuses on designing efficient and intuitive interactions between humans and computer systems. Recent advancements have utilized multimodal approaches, such as electroencephalography (EEG)-based systems combined with other biosignals, along with deep learning to enhance performance and reliability. However, no systematic review has consolidated findings on EEG-based multimodal HCI systems.


Decoding electroencephalography (EEG) signals provides convenient access to user intentions, which plays an important role in human-machine interaction. To effectively extract sufficient characteristics from multichannel EEG, a novel decoding architecture with a dual-branch temporal-spectral-spatial Transformer (Dual-TSST) is proposed in this study. Specifically, using convolutional neural networks (CNNs) on separate branches, the proposed network first extracts the temporal-spatial features of the original EEG and the temporal-spectral-spatial features of the time-frequency data obtained by wavelet transformation, respectively.
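As a rough illustration of such a dual-branch input pipeline (raw EEG for the temporal-spatial branch, a wavelet time-frequency map for the temporal-spectral-spatial branch), the sketch below uses the continuous wavelet transform from PyWavelets. The "morl" wavelet, the scale range, and all array shapes are assumptions for demonstration, not details from the paper.

```python
import numpy as np
import pywt  # PyWavelets, used here for the time-frequency branch

def dual_branch_inputs(eeg, fs=250, scales=np.arange(1, 33)):
    """Prepare the two branch inputs (illustrative only): the raw
    multichannel EEG and its wavelet time-frequency representation."""
    # eeg: (channels, samples) raw signal for the temporal-spatial branch
    coeffs = []
    for ch in eeg:
        # CWT yields a (scales, samples) time-frequency map per channel
        c, _ = pywt.cwt(ch, scales, "morl", sampling_period=1.0 / fs)
        coeffs.append(np.abs(c))
    tf_map = np.stack(coeffs)  # (channels, scales, samples)
    return eeg, tf_map

raw = np.random.randn(8, 1000)               # 8 channels, 4 s at 250 Hz
temporal_in, spectral_in = dual_branch_inputs(raw)
print(temporal_in.shape, spectral_in.shape)  # (8, 1000) (8, 32, 1000)
```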


Temporal action detection (TAD) is a vital challenge in computer vision and the Internet of Things, aiming to detect and identify actions within temporal sequences. While TAD has primarily been associated with video data, its applications can also be extended to sensor data, opening up opportunities for various real-world applications. However, applying existing TAD models to sensory signals presents distinct challenges such as varying sampling rates, intricate pattern structures, and subtle, noise-prone patterns.


Improving the decoding performance of steady-state visual evoked potential (SSVEP) signals is crucial for the practical application of SSVEP-based brain-computer interface (BCI) systems. Although numerous methods have achieved impressive results in decoding SSVEP signals, most focus only on temporal- or spectral-domain information, or concatenate the two directly, which may ignore the complementary relationship between different features. To address this issue, we propose a dual-branch convolution-based Transformer network with multi-scale temporal-spectral feature fusion, termed MTSNet, to improve the decoding performance of SSVEP signals.
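One common way to realize the multi-scale temporal feature extraction mentioned above is a bank of parallel 1-D convolutions with different kernel sizes. The sketch below is a generic illustration of that pattern; the channel counts and kernel sizes are hypothetical, not MTSNet's actual configuration.

```python
import torch
import torch.nn as nn

class MultiScaleTemporalBlock(nn.Module):
    """Illustrative multi-scale temporal feature extractor: parallel 1-D
    convolutions with different kernel sizes, concatenated channel-wise."""
    def __init__(self, in_ch=8, out_ch=16, kernels=(7, 15, 31)):
        super().__init__()
        # Odd kernels with padding k // 2 preserve the sequence length.
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernels
        )

    def forward(self, x):
        # x: (batch, channels, samples)
        return torch.cat([b(x) for b in self.branches], dim=1)

block = MultiScaleTemporalBlock()
y = block(torch.randn(4, 8, 256))
print(y.shape)  # torch.Size([4, 48, 256])
```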


In automated sleep monitoring systems, bed occupancy detection is the foundation, or first step, before other downstream tasks such as inferring sleep activities and vital signs. Existing methods rely on threshold-based approaches and, because they are developed in single-environment settings, do not generalize well to real-world environments. Manually selecting thresholds requires observing large amounts of data and may not yield optimal results.
