Background: The two most commonly used methods to identify frailty are the frailty phenotype and the frailty index. However, both methods have limitations in clinical application. In addition, methods for measuring frailty have not yet been standardized.
Objective: We aimed to develop and validate a classification model for predicting frailty status using vocal biomarkers in community-dwelling older adults, based on voice recordings obtained from the picture description task (PDT).
Methods: We recruited 127 participants aged 50 years and older and collected clinical information using a short form of the Comprehensive Geriatric Assessment scale. Voice recordings were made with a tablet device during the Korean version of the PDT, and the audio data were preprocessed to remove background noise before feature extraction. Three artificial intelligence (AI) models were developed to identify frailty status: SpeechAI (speech data only), DemoAI (demographic data only), and DemoSpeechAI (both data types).
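As a rough illustration of the speech pipeline described in the Methods, the sketch below loads one PDT recording and derives an utterance-level embedding from a pretrained speech encoder. The abstract does not name the specific deep learning model, so the wav2vec 2.0 backbone (via torchaudio), the mean-pooling step, and the file name are assumptions, and background-noise removal is assumed to have been applied upstream.

```python
# Sketch of the feature-extraction step, assuming a pretrained wav2vec 2.0
# encoder as the source of "deep learning-based acoustic features" (the exact
# model is an assumption; noise removal is assumed to be done upstream).
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE   # assumed encoder choice
encoder = bundle.get_model().eval()

def extract_embedding(wav_path: str) -> torch.Tensor:
    """Return one fixed-size embedding for a single PDT recording."""
    waveform, sr = torchaudio.load(wav_path)
    if waveform.shape[0] > 1:                 # collapse stereo to mono
        waveform = waveform.mean(dim=0, keepdim=True)
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)
    with torch.inference_mode():
        layer_outputs, _ = encoder.extract_features(waveform)
    # Mean-pool the final layer over time: one vector per recording,
    # which could then feed a SpeechAI-style classifier head.
    return layer_outputs[-1].mean(dim=1).squeeze(0)

# embedding = extract_embedding("pdt_recording.wav")  # hypothetical file name
```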
Results: The three models were trained and evaluated with 5-fold cross-validation on the 127 participants and then compared. The SpeechAI model, using deep learning-based acoustic features, achieved an accuracy of 80.4% (95% CI 76.89%-83.91%) and an area under the receiver operating characteristic curve (AUC) of 0.89 (95% CI 0.86-0.92), whereas the DemoAI model, using demographics only, showed an accuracy of 67.96% (95% CI 67.63%-68.29%) and an AUC of 0.74 (95% CI 0.73-0.75); the difference in AUC was significant (t4=8.705, 2-sided; P<.001). The DemoSpeechAI model, which combined demographics with deep learning-based acoustic features, performed best (accuracy 85.6%, 95% CI 80.03%-91.17%; AUC 0.93, 95% CI 0.89-0.97), but its AUC did not differ significantly from that of the SpeechAI model (t4=1.057, 2-sided; P=.35). The SpeechAI model (AUC 0.89) also outperformed models built on traditional acoustic features from the openSMILE toolkit (logistic regression: AUC 0.62; decision tree: AUC 0.57; random forest: AUC 0.66).
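For readers interested in how the fold-level comparison could be reproduced, the sketch below pairs per-fold AUCs from 5-fold cross-validation and applies a two-sided paired t test with 4 degrees of freedom, matching the form of the reported t4 statistics. The fold values are placeholders, not the study's data, and the confidence-interval calculation is an assumed t-based approximation; the paper does not state how its CIs were computed.

```python
# Sketch of the fold-level model comparison: a two-sided paired t test on
# per-fold AUCs from 5-fold cross-validation (df = 4, as in the reported t4
# values). The AUCs below are placeholders, not the study's results.
import numpy as np
from scipy import stats

speech_auc = np.array([0.88, 0.90, 0.87, 0.91, 0.89])  # hypothetical SpeechAI folds
demo_auc = np.array([0.73, 0.75, 0.74, 0.72, 0.76])     # hypothetical DemoAI folds

t_stat, p_value = stats.ttest_rel(speech_auc, demo_auc)  # paired, two-sided by default
print(f"t(4) = {t_stat:.3f}, P = {p_value:.4f}")

# Assumed t-based 95% CI for a model's mean fold AUC.
n = len(speech_auc)
crit = stats.t.ppf(0.975, df=n - 1)
half_width = crit * speech_auc.std(ddof=1) / np.sqrt(n)
mean_auc = speech_auc.mean()
print(f"SpeechAI AUC {mean_auc:.2f} "
      f"(95% CI {mean_auc - half_width:.2f}-{mean_auc + half_width:.2f})")
```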
Conclusions: Our findings demonstrate that vocal biomarkers derived from deep learning-based acoustic features can be effectively used to predict frailty status in community-dwelling older adults. The SpeechAI model showed promising accuracy and AUC, outperforming models based solely on demographic data or traditional acoustic features. Furthermore, while the combined DemoSpeechAI model showed slightly improved performance over the SpeechAI model, the difference was not statistically significant. These results suggest that speech-based AI models offer a noninvasive, scalable method for frailty detection, potentially streamlining assessments in clinical and community settings.
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756832 (PMC)
DOI: http://dx.doi.org/10.2196/57298
Menopause
September 2025
Department of Speech Language Pathology and Audiology, Northeastern University, Boston, MA.
Importance And Objective: Voice changes during menopause affect patients' communication and quality of life. This narrative review aims to provide a comprehensive exploration of voice changes during menopause. It presents objective and subjective/symptomatic changes as well as treatment options for this population.
Acta Neuropsychiatr
September 2025
Goethe-University Frankfurt am Main; Department of Psychiatry, Psychosomatic Medicine and Psychotherapy, University Hospital, Frankfurt, Germany.
Objective: Cortisol is a well-established biomarker of stress, assessed through salivary or blood samples, which are intrusive and time-consuming. Speech, influenced by physiological stress responses, offers a promising non-invasive, real-time alternative for stress detection. This study examined relationships between speech features, state anger, and salivary cortisol using a validated stress-induction paradigm.
BMJ Open
September 2025
Luxembourg Institute of Health, Strassen, Luxembourg.
Introduction: Stress is nearly ubiquitous in everyday life; however, it imposes a tremendous burden worldwide by acting as a risk factor for most physical and mental diseases. The effects of geographic environments on stress are supported by multiple theories acknowledging that natural environments act as a stress buffer and provide deeper and quicker restorative effects than most urban settings. However, little is known about how the temporalities of exposure to complex urban environments (duration, frequency and sequences of exposures) experienced in various locations - as shaped by people's daily activities - affect daily and chronic stress levels.
Front Digit Health
August 2025
Division of Informatics, Clinical Epidemiology, Oregon Health and Science University, Portland, OR, United States.
Benign and malignant vocal fold lesions can alter voice quality and lead to significant morbidity or, in the case of malignancy, mortality. Early, noninvasive identification of these lesions using voice as a biomarker may improve diagnostic access and outcomes. In this study, we analyzed data from the initial release of the Bridge2AI-Voice dataset to evaluate which acoustic features best distinguish laryngeal cancer and benign vocal fold lesions from other vocal pathologies and healthy voice function.
Poult Sci
August 2025
College of Veterinary Medicine, Nanjing Agricultural University, Nanjing 210095, China. Electronic address:
With the advancement of precision livestock farming (PLF), acoustic technology has emerged as a key tool for tracking the health and well-being of laying hens, owing to its non-invasive, real-time and cost-effective nature. In this study, continuous audio data were collected from commercial chicken houses over a period of 15 days, in addition to temperature and humidity index (THI) analysis, to develop a convolutional neural network (CNN)-based model for classifying chicken squawks. This approach enabled the investigation of the relationship between environmental adaptability and acoustic traits in a mixed-sex rearing system.