Auditory and cognitive contributions to recognition of degraded speech in noise: Individual differences among older adults.

PLoS One

Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, United States of America.

Published: September 2025


Category Ranking: 98%

Total Visits: 921

Avg Visit Duration: 2 minutes

Citations: 20

Article Abstract

This study examined individual differences in how older adults with normal hearing (ONH) or hearing impairment (OHI) allocate auditory and cognitive resources during speech recognition in noise at equal recognition levels. Associations between predictor variables and speech recognition were assessed across three datasets that each included 15-16 conditions involving temporally filtered speech. These datasets involved (1) degraded spectral cues, (2) competing speech-modulated noise, and (3) combined degraded spectral cues in speech-modulated noise. To minimize effects of audibility differences, speech was spectrally shaped according to each listener's hearing thresholds. The extended Short-Time Objective Intelligibility metric was used to derive psychometric functions that relate the acoustic degradation to speech recognition. From these functions, speech recognition thresholds (SRTs) were determined at 20%, 50%, and 80% recognition. A multiple regression dominance analysis, conducted separately for ONH and OHI groups, determined the relative importance of auditory and cognitive predictor variables to speech recognition. ONH participants had a stronger association of vocabulary knowledge with speech recognition, whereas OHI participants had a stronger association of speech glimpsing abilities with speech recognition. Combined with measures of working memory and hearing thresholds, these predictors accounted for 73% to 89% of the total variance for ONH and OHI, respectively, and generalized to other diverse measures of speech recognition.
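
To make the two analysis steps described above concrete, the sketch below illustrates (a) fitting a logistic psychometric function that maps an intelligibility index such as ESTOI to proportion-correct recognition and inverting it to read off SRTs at 20%, 50%, and 80%, and (b) a simplified dominance analysis that averages each predictor's incremental R² across subsets of the other predictors. This is not the authors' code: the logistic form, the unweighted subset averaging, the predictor names (vocabulary, glimpsing, working_memory, thresholds), and the simulated data are all illustrative assumptions.

```python
"""Illustrative sketch only; models, names, and data are hypothetical."""
from itertools import combinations

import numpy as np
from scipy.optimize import curve_fit
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# ---- (a) Psychometric function: proportion correct vs. intelligibility index ----
def logistic(x, x50, slope):
    """Two-parameter logistic: x50 is the 50% point, slope controls steepness."""
    return 1.0 / (1.0 + np.exp(-slope * (x - x50)))

estoi = np.linspace(0.2, 0.9, 16)                     # e.g., 16 listening conditions
p_correct = logistic(estoi, 0.55, 12) + rng.normal(0, 0.03, estoi.size)
p_correct = np.clip(p_correct, 0, 1)

(x50, slope), _ = curve_fit(logistic, estoi, p_correct, p0=[0.5, 10])

def srt(target):
    """Invert the fitted logistic: index value giving `target` recognition."""
    return x50 - np.log(1.0 / target - 1.0) / slope

print({f"SRT{int(t * 100)}": round(srt(t), 3) for t in (0.2, 0.5, 0.8)})

# ---- (b) Simplified dominance analysis: mean incremental R^2 per predictor ----
def r2(X, y, cols):
    """R^2 of an OLS fit using the predictor columns in `cols` (0.0 if empty)."""
    if not cols:
        return 0.0
    return LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)

n = 60
X = rng.normal(size=(n, 4))        # hypothetical predictors (see names below)
y = X @ np.array([0.6, 0.4, 0.3, 0.2]) + rng.normal(0, 1, n)

names = ["vocabulary", "glimpsing", "working_memory", "thresholds"]
dominance = {}
for j, name in enumerate(names):
    others = [k for k in range(len(names)) if k != j]
    gains = []
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            gains.append(r2(X, y, list(subset) + [j]) - r2(X, y, list(subset)))
    dominance[name] = float(np.mean(gains))

print({k: round(v, 3) for k, v in dominance.items()})
```

Classical general dominance weights the incremental R² by subset size before averaging; the unweighted mean shown here is a simplification that preserves the intuition of ranking predictors by their average added explanatory power.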


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12410885
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0331487

Publication Analysis

Top Keywords

speech recognition: 32
auditory cognitive: 12
speech: 12
recognition: 11
individual differences: 8
differences older: 8
older adults: 8
predictor variables: 8
variables speech: 8
degraded spectral: 8

Similar Publications

Deep Learning-Assisted Organogel Pressure Sensor for Alphabet Recognition and Bio-Mechanical Motion Monitoring.

Nanomicro Lett

September 2025

Nanomaterials & System Lab, Major of Mechatronics Engineering, Faculty of Applied Energy System, Jeju National University, Jeju, 63243, Republic of Korea.

Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring, clinical diagnosis, and robotic applications. Nevertheless, it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance together with biocompatibility, adhesion, self-healing, environmental robustness, and excellent sensing metrics. Herein, we report a multifunctional, anti-freezing, self-adhesive, and self-healable organogel pressure sensor composed of cobalt nanoparticle-encapsulated nitrogen-doped carbon nanotubes (CoN CNT) embedded in a polyvinyl alcohol-gelatin (PVA/GLE) matrix.


Objectives: Alexithymia is characterized by difficulties in identifying and describing one's own emotions. Alexithymia has previously been associated with deficits in the processing of emotional information at both behavioral and neurobiological levels, and some studies have shown elevated levels of alexithymic traits in adults with hearing loss. This explorative study investigated alexithymia in young and adolescent school-age children with hearing aids in relation to (1) a sample of age-matched children with normal hearing, (2) age, (3) hearing thresholds, and (4) vocal emotion recognition.


Objectives: This study aimed to investigate the potential contribution of subtle peripheral auditory dysfunction to listening difficulties (LiD) using a threshold-equalizing noise (TEN) test and distortion-product otoacoustic emissions (DPOAE). We hypothesized that a subset of patients with LiD have undetectable peripheral auditory dysfunction.

Design: This case-control study included 61 patients (12 to 53 years old; male/female, 18/43) in the LiD group and 22 volunteers (12 to 59 years old; male/female, 10/12) in the control group.


Dysphagia lusoria is an uncommon cause of dysphagia whose incidence increases with age. It is unknown why individuals with dysphagia lusoria typically remain asymptomatic until older adulthood, but some theorize that it could be related to physiologic and anatomical changes that occur with the aging process, such as increased esophageal rigidity and stiffening of vascular walls with atherosclerosis, that make the compression from these congenital aberrations more impactful. While uncommon, it is also likely underrecognized because it is diagnostically challenging to identify.


While blink analysis was traditionally conducted within vision research, recent studies suggest that blinks might reflect a more general cognitive strategy for resource allocation, including in auditory tasks, but its use within the fields of Audiology or Psychoacoustics remains scarce and its interpretation largely speculative. It is hypothesized that as listening conditions become more difficult, the number of blinks would decrease, especially during stimulus presentation, because this period reflects a window of alertness. In experiment 1, 21 participants were presented with 80 sentences at different signal-to-noise ratios (SNRs): 0, +7, and +14 dB, and in quiet, in a sound-proof room with gaze and luminance (75 lux) controlled.
