Gait-based emotion recognition has emerged as a promising field with applications in public safety, healthcare, and human-computer interaction. However, existing methods often suffer from an excessive reliance on global features, feature redundancy, and weak modeling of dynamic temporal dependencies. To address these issues, we propose a novel temporal graph convolutional network (MDT-GCN) that integrates a multi-anchor (MAAF) module and a bi-focus attention (BFA) mechanism. MDT-GCN extracts pose and action features from bone nodes using GCN and TCN branches, respectively. The MAAF module captures multi-scale temporal features to model emotional expressions across different time ranges, while the BFA module attends to both local and global features, enhancing the model's ability to capture complex emotional information. Experimental results on the Emotion Gait and Emotion Walk datasets demonstrate the effectiveness of MDT-GCN, with recognition accuracies of 90.11% and 84.23%, respectively. By making the code and datasets openly accessible, we aim to facilitate further research and applications in this field. The source code is available at https://github.com/928319204ljc/MDT.
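The abstract outlines the pipeline at a high level: a spatial GCN over the skeleton graph, temporal convolutions (TCN), a multi-scale temporal module (MAAF), and a local/global attention module (BFA). The PyTorch sketch below is only a minimal illustration of that pipeline under stated assumptions; the module internals, layer sizes, joint count, emotion classes, and adjacency matrix are all hypothetical stand-ins and do not reproduce the authors' implementation (see the linked repository for the released code).

```python
# Hypothetical sketch of the described pipeline, not the authors' code.
import torch
import torch.nn as nn

class SpatialGCN(nn.Module):
    """Graph convolution over skeleton nodes: X' = relu(A @ X @ W), per frame."""
    def __init__(self, in_ch, out_ch, adj):
        super().__init__()
        self.register_buffer("adj", adj)           # (V, V) normalized adjacency (assumed given)
        self.proj = nn.Linear(in_ch, out_ch)

    def forward(self, x):                          # x: (N, T, V, C)
        x = torch.einsum("uv,ntvc->ntuc", self.adj, x)
        return torch.relu(self.proj(x))

class TemporalConv(nn.Module):
    """1-D temporal convolution over the frame axis, applied per node."""
    def __init__(self, ch, k=9):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, kernel_size=(k, 1), padding=(k // 2, 0))

    def forward(self, x):                          # x: (N, T, V, C)
        x = x.permute(0, 3, 1, 2)                  # (N, C, T, V)
        x = torch.relu(self.conv(x))
        return x.permute(0, 2, 3, 1)

class MultiScaleTemporal(nn.Module):
    """Stand-in for MAAF: parallel temporal convs with different time spans."""
    def __init__(self, ch, kernels=(3, 9, 15)):
        super().__init__()
        self.branches = nn.ModuleList([TemporalConv(ch, k) for k in kernels])

    def forward(self, x):
        return torch.stack([b(x) for b in self.branches], 0).mean(0)

class BiFocusAttention(nn.Module):
    """Stand-in for BFA: local (per node/frame) and global (per clip) gating."""
    def __init__(self, ch):
        super().__init__()
        self.local_gate = nn.Linear(ch, 1)
        self.global_gate = nn.Linear(ch, ch)

    def forward(self, x):                          # x: (N, T, V, C)
        local = torch.sigmoid(self.local_gate(x))                 # (N, T, V, 1)
        glob = torch.sigmoid(self.global_gate(x.mean((1, 2))))    # (N, C)
        return x * local * glob[:, None, None, :]

class MDTGCNSketch(nn.Module):
    def __init__(self, adj, in_ch=3, hid=64, num_classes=4):
        super().__init__()
        self.gcn = SpatialGCN(in_ch, hid, adj)
        self.maaf = MultiScaleTemporal(hid)
        self.bfa = BiFocusAttention(hid)
        self.head = nn.Linear(hid, num_classes)    # class count is an assumption

    def forward(self, x):                          # x: (N, T, V, 3) joint coordinates
        x = self.bfa(self.maaf(self.gcn(x)))
        return self.head(x.mean((1, 2)))           # pool over time and nodes

# Toy usage: 16 nodes, 48 frames, batch of 2, identity adjacency as a placeholder.
adj = torch.eye(16)
model = MDTGCNSketch(adj)
logits = model(torch.randn(2, 48, 16, 3))          # -> (2, 4)
```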
| Full text | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12041366 | PMC |
| http://dx.doi.org/10.1038/s41598-025-97922-3 | DOI |
Sci Rep, April 2025
College of Public Security Information Technology and Intelligence, Criminal Investigation Police University of China, Tawan Street 83, Shenyang, 110854, Liaoning, China.