Progressive gait impairment is common among aging adults. Remote phenotyping of gait during daily living has the potential to quantify gait alterations and evaluate the effects of interventions that may prevent disability in the aging population. Here, we developed ElderNet, a self-supervised learning model for gait detection from wrist-worn accelerometer data. Validation involved two diverse cohorts, including over 1000 participants without gait labels, as well as 83 participants with labeled data: older adults with Parkinson's disease, proximal femoral fracture, chronic obstructive pulmonary disease, congestive heart failure, and healthy adults. ElderNet achieved high accuracy (96.43 ± 2.27), specificity (98.87 ± 2.15), recall (82.32 ± 11.37), precision (86.69 ± 17.61), and F1 score (82.92 ± 13.39). The suggested method yielded superior performance compared to two state-of-the-art gait detection algorithms, with improved accuracy and F1 score (p < 0.05). In an initial evaluation of construct validity, ElderNet identified differences in estimated daily walking durations across cohorts with different clinical characteristics, such as mobility disability (p < 0.001) and parkinsonism (p < 0.001). The proposed self-supervised method has the potential to serve as a valuable tool for remote phenotyping of gait function during daily living in aging adults, even among those with gait impairments.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11379690 | PMC |
| http://dx.doi.org/10.1038/s41598-024-71491-3 | DOI Listing |
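The ElderNet abstract above reports window-level gait-detection performance as accuracy, specificity, recall, precision, and F1. As an illustration only, and not the authors' code, the sketch below computes those metrics from binary gait/non-gait window labels; the function name and the toy window labels are assumptions.

```python
# Hypothetical sketch (not the authors' code): window-level gait-detection
# metrics of the kind reported in the abstract above.
import numpy as np

def gait_detection_metrics(y_true, y_pred):
    """Accuracy, specificity, recall, precision, and F1 for binary gait windows."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    eps = 1e-12  # guards against empty classes
    accuracy = (tp + tn) / max(tp + tn + fp + fn, 1)
    specificity = tn / (tn + fp + eps)
    recall = tp / (tp + fn + eps)        # sensitivity
    precision = tp / (tp + fp + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return dict(accuracy=accuracy, specificity=specificity,
                recall=recall, precision=precision, f1=f1)

# Toy example: windows labelled gait (1) / non-gait (0)
labels      = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
predictions = [0, 0, 1, 1, 0, 0, 0, 1, 0, 1]
print(gait_detection_metrics(labels, predictions))
```

In the paper these metrics are reported per participant (mean ± standard deviation across subjects); the sketch above shows only the per-subject computation.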
IEEE Trans Cybern
September 2025
Sleep is essential for maintaining human health and quality of life. Analyzing physiological signals during sleep is critical in assessing sleep quality and diagnosing sleep disorders. However, manual diagnoses by clinicians are time-intensive and subjective.
Neural Netw
September 2025
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China.
Automatic segmentation of retinal vessels from retinography images is crucial for timely clinical diagnosis. However, the high cost and specialized expertise required for annotating medical images often result in limited labeled datasets, which constrains the full potential of deep learning methods. Recent advances in self-supervised pretraining using unlabeled data have shown significant benefits for downstream tasks.
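A minimal sketch of the two-stage recipe this abstract describes: self-supervised pretraining on unlabeled retinography, followed by supervised fine-tuning of a segmentation head on the small labeled set. The toy encoder, the masked-reconstruction pretext task, and the stand-in data loaders are assumptions, not the paper's architecture.

```python
# Illustrative two-stage sketch: self-supervised pretraining, then fine-tuning.
import torch
import torch.nn as nn

# Stand-in data: random tensors in place of real fundus images and vessel masks.
unlabeled_loader = [torch.rand(4, 3, 64, 64) for _ in range(2)]
labeled_loader = [(torch.rand(4, 3, 64, 64),
                   torch.randint(0, 2, (4, 1, 64, 64)).float()) for _ in range(2)]

encoder = nn.Sequential(               # toy convolutional encoder
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
recon_head = nn.Conv2d(64, 3, 1)       # pretext head: reconstruct the input image
seg_head = nn.Conv2d(64, 1, 1)         # downstream head: vessel probability map

# Stage 1 -- self-supervised pretraining on unlabeled images (masked reconstruction).
opt = torch.optim.Adam(list(encoder.parameters()) + list(recon_head.parameters()), lr=1e-3)
for images in unlabeled_loader:
    mask = (torch.rand_like(images[:, :1]) > 0.5).float()   # random pixel masking
    loss = nn.functional.mse_loss(recon_head(encoder(images * mask)), images)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2 -- supervised fine-tuning on the small labeled set.
opt = torch.optim.Adam(list(encoder.parameters()) + list(seg_head.parameters()), lr=1e-4)
for images, vessel_masks in labeled_loader:
    logits = seg_head(encoder(images))
    loss = nn.functional.binary_cross_entropy_with_logits(logits, vessel_masks)
    opt.zero_grad(); loss.backward(); opt.step()
```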
Comput Biol Med
September 2025
Department of Electrical and Computer Engineering and the Institute of Biomedical Engineering, University of New Brunswick, Fredericton, E3B 5A3, NB, Canada.
Pattern recognition-based myoelectric control is traditionally trained with static or ramp contractions, but this fails to capture the dynamic nature of real-world movements. This study investigated the benefits of training classifiers with continuous dynamic data, encompassing transitions between various movement classes. We employed both conventional (LDA) and deep learning (LSTM) classifiers, comparing their performance when trained with ramp data, continuous dynamic data, and an LSTM pre-trained with a self-supervised learning technique (VICReg).
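The VICReg objective mentioned above combines an invariance term, a variance (anti-collapse) term, and a covariance (decorrelation) term over embeddings of two views of the same signal. The following is a hedged sketch of that loss; the weights, embedding sizes, and toy EMG "views" are assumptions rather than the study's settings.

```python
# Hypothetical sketch of a VICReg-style self-supervised loss; hyperparameters
# and shapes are assumptions, not taken from the study above.
import torch

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """z_a, z_b: (batch, dim) embeddings of two views of the same EMG segment."""
    n, d = z_a.shape
    # Invariance: the two views should map to similar embeddings.
    sim = torch.nn.functional.mse_loss(z_a, z_b)
    # Variance: keep each dimension's std above 1 to avoid representation collapse.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var = torch.relu(1 - std_a).mean() + torch.relu(1 - std_b).mean()
    # Covariance: push off-diagonal covariance terms toward zero (decorrelation).
    za = z_a - z_a.mean(dim=0)
    zb = z_b - z_b.mean(dim=0)
    cov_a = (za.T @ za) / (n - 1)
    cov_b = (zb.T @ zb) / (n - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d
    return sim_w * sim + var_w * var + cov_w * cov

# Toy usage with random stand-ins for embeddings of two augmented EMG windows
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(vicreg_loss(z1, z2).item())
```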
Bioinform Adv
August 2025
IBM Research, Yorktown Heights, NY, 10598, United States.
Motivation: Due to the intricate etiology of neurological disorders, finding interpretable associations between multiomics features can be challenging using standard approaches.
Results: We propose COMICAL, a contrastive learning approach using multiomics data to generate associations between genetic markers and brain imaging-derived phenotypes. COMICAL jointly learns omics representations utilizing transformer-based encoders with custom tokenizers.
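As a rough illustration of the contrastive pairing this abstract describes (not the COMICAL implementation), the sketch below scores subject-matched omics and imaging-phenotype embeddings with a symmetric InfoNCE loss; the embedding dimensions, temperature, and stand-in encoders are assumptions.

```python
# Illustrative sketch only: a symmetric contrastive (InfoNCE) objective pairing
# omics embeddings with imaging-derived phenotype embeddings from the same subject.
import torch
import torch.nn.functional as F

def paired_contrastive_loss(omics_emb, imaging_emb, temperature=0.07):
    """Each row i of the two (batch, dim) tensors comes from the same subject."""
    omics_emb = F.normalize(omics_emb, dim=1)
    imaging_emb = F.normalize(imaging_emb, dim=1)
    logits = omics_emb @ imaging_emb.T / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.size(0))             # matching pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

# Toy usage: random stand-ins for outputs of (assumed) transformer encoders
omics = torch.randn(16, 256)      # e.g. tokenized genetic markers -> encoder
imaging = torch.randn(16, 256)    # e.g. brain imaging-derived phenotypes -> encoder
print(paired_contrastive_loss(omics, imaging).item())
```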
Biomed Eng Lett
September 2025
Department of Precision Medicine, Yonsei University Wonju College of Medicine, Wonju, Korea.
Foundation models, including large language models and vision-language models (VLMs), have revolutionized artificial intelligence by enabling efficient, scalable, and multimodal learning across diverse applications. By leveraging advancements in self-supervised and semi-supervised learning, these models integrate computer vision and natural language processing to address complex tasks, such as disease classification, segmentation, cross-modal retrieval, and automated report generation. Their ability to pretrain on vast, uncurated datasets minimizes reliance on annotated data while improving generalization and adaptability for a wide range of downstream tasks.