Speech-in-noise perception, the ability to hear a relevant voice within a noisy background, is important for successful communication. Musicians have been reported to perform better than non-musicians on speech-in-noise tasks. This meta-analysis uses a multi-level design to assess the claim that musicians have superior speech-in-noise abilities compared to non-musicians. Across 31 studies and 62 effect sizes, the overall effect of musician status on speech-in-noise ability is significant, with a moderate effect size (g = 0.58, 95% CI [0.42, 0.74]). The overall effect of musician status was not moderated by within-study IQ equivalence, target stimulus, target contextual information, type of background noise, or age. We conclude that musicians show superior speech-in-noise abilities compared to non-musicians, not moderated by age, IQ, or speech-task parameters. These effects may reflect changes due to music training or predisposed auditory advantages that encourage musicianship.
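As a rough illustration of the statistics reported above, the sketch below computes Hedges' g from group summary statistics and pools hypothetical effects with a DerSimonian-Laird random-effects model. The original analysis used a multi-level model, so this is a simplified stand-in, and all input numbers are invented.

```python
# Illustrative sketch (not the authors' multi-level model): Hedges' g for
# one study, then a DerSimonian-Laird random-effects pooled estimate.
# All numbers below are hypothetical.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference (Hedges' g) and its variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return j * d, var_g

def pool_random_effects(gs, vs):
    """DerSimonian-Laird pooled effect and 95% CI."""
    gs, vs = np.asarray(gs), np.asarray(vs)
    w = 1 / vs                               # fixed-effect weights
    g_fixed = np.sum(w * gs) / np.sum(w)
    q = np.sum(w * (gs - g_fixed)**2)        # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c) # between-study variance
    w_re = 1 / (vs + tau2)                   # random-effects weights
    g_re = np.sum(w_re * gs) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return g_re, (g_re - 1.96 * se, g_re + 1.96 * se)

# Hypothetical musician-vs-non-musician effects from three studies
effects = [hedges_g(2.1, 1.4, 1.0, 1.1, 25, 24),
           hedges_g(0.8, 0.3, 0.9, 0.8, 40, 38),
           hedges_g(5.0, 4.1, 2.0, 2.2, 18, 20)]
g, (lo, hi) = pool_random_effects([e[0] for e in effects],
                                  [e[1] for e in effects])
print(f"pooled g = {g:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```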
DOI: http://dx.doi.org/10.1016/j.heares.2022.108442
Trends Hear
September 2025
Department of Psychology, University of Toronto, Toronto, Ontario, Canada.
Understanding speech in noise is a common challenge for older adults, often requiring increased listening effort that can deplete cognitive resources and impair higher-order functions. Hearing aids are the gold standard intervention for hearing loss, but cost and accessibility barriers have driven interest in alternatives such as Personal Sound Amplification Products (PSAPs). While PSAPs are not medical devices, they may help reduce listening effort in certain contexts, though supporting evidence remains limited.
PLoS One
September 2025
Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, United States of America.
This study examined individual differences in how older adults with normal hearing (ONH) or hearing impairment (OHI) allocate auditory and cognitive resources during speech recognition in noise when recognition performance was equated across listeners. Associations between predictor variables and speech recognition were assessed across three datasets, each comprising 15-16 conditions involving temporally filtered speech. These datasets involved (1) degraded spectral cues, (2) competing speech-modulated noise, and (3) degraded spectral cues combined with speech-modulated noise.
Int J Audiol
September 2025
Manchester Centre for Audiology and Deafness, School of Health Sciences, The University of Manchester, Manchester, UK.
Objectives: To evaluate children's ability to recognise speech and its relationship to language ability using two newly developed tests: the Listening in Spatialised Noise and Reverberation test (LiSN-R) and the Test of Listening Difficulties - Universal (ToLD-U).
Design: LiSN-R and ToLD-U used nonword and sentence recognition in spatially separated noise and reverberation. Language ability was assessed using the Clinical Evaluation of Language Fundamentals (CELF) sentence recall.
Imaging Neurosci (Camb)
August 2025
Department of Electrical Engineering, Columbia University, New York, NY, United States.
Understanding speech in noise depends on several interacting factors, including the signal-to-noise ratio (SNR), speech intelligibility (SI), and attentional engagement. However, how these factors relate to selective neural speech tracking remains unclear. In this study, we recorded EEG and eye-tracking data while participants performed a selective listening task involving a target talker in the presence of a competing masker talker and background noise across a wide range of SNRs.
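Selective neural speech tracking of the kind described here is commonly quantified with a linear temporal response function (TRF) mapping the speech envelope to EEG. The sketch below fits a forward TRF by ridge regression on synthetic data; the sampling rate, lag range, and regularisation parameter are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal envelope-tracking sketch: ridge-regression TRF on synthetic data.
# fs, lag range, and the ridge parameter are assumed values for illustration.
import numpy as np

fs = 64                           # EEG sampling rate (Hz) after downsampling
lags = np.arange(0, 16)           # 0-250 ms envelope-to-EEG lags
rng = np.random.default_rng(0)

def lagged(x, lags):
    """Stack time-lagged copies of the envelope as regressor columns."""
    X = np.zeros((len(x), len(lags)))
    for j, L in enumerate(lags):
        X[L:, j] = x[:len(x) - L]
    return X

def trf_fit(env, eeg, lam=1.0):
    """Forward TRF via ridge regression (single EEG channel)."""
    X = lagged(env, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)

def tracking(env, eeg, w):
    """Correlation between TRF-predicted and recorded EEG."""
    return np.corrcoef(lagged(env, lags) @ w, eeg)[0, 1]

def toy_envelope(n, smooth=32):
    """Smoothed rectified noise as a stand-in for a speech envelope."""
    return np.convolve(np.abs(rng.standard_normal(n)),
                       np.ones(smooth) / smooth, mode="same")

n = fs * 60                                  # one minute of toy data
target, masker = toy_envelope(n), toy_envelope(n)
true_w = rng.standard_normal(len(lags))      # toy "neural" response
eeg = lagged(target, lags) @ true_w + 0.5 * rng.standard_normal(n)

w = trf_fit(target, eeg)
print("target tracking:", tracking(target, eeg, w))   # high: EEG follows target
print("masker tracking:", tracking(masker, eeg, trf_fit(masker, eeg)))  # low
```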
J Acoust Soc Am
September 2025
Audiology Department, College of Health and Life Sciences, Aston University, Birmingham, B4 7ET, United Kingdom.
The current study simulated bilateral and unilateral cochlear implant (CI) processing using a channel vocoder with dense tonal carriers ("SPIRAL") in 13 normal-hearing listeners. Spatial speech-in-noise recognition was measured under three masker locations (0°, +90°, and -90°; target at 0°) and three masker types (steady-state noise, speech-modulated noise, and a single-talker interferer) that contained different levels of energetic and informational masking. The stimuli were spatialized using head-related impulse responses recorded from the behind-the-ear microphones of hearing aids.
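A channel vocoder of the general kind used here splits the signal into frequency bands, extracts each band's temporal envelope, and re-imposes it on a carrier. The sketch below implements a generic tone-carrier vocoder; it is not the SPIRAL vocoder itself, and the channel count, band edges, and envelope cutoff are illustrative assumptions.

```python
# Generic tone-carrier channel vocoder sketch (not the SPIRAL vocoder):
# band-pass the input, extract each band's envelope, and use it to modulate
# a sine carrier at the band centre frequency. Channel count, band edges,
# and the 50 Hz envelope cutoff are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tone_vocoder(x, fs, n_channels=8, lo=100.0, hi=8000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    env_sos = butter(2, 50.0, btype="low", fs=fs, output="sos")
    t = np.arange(len(x)) / fs
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))   # band envelope
        carrier = np.sin(2 * np.pi * np.sqrt(f1 * f2) * t)  # tonal carrier
        out += np.clip(env, 0, None) * carrier
    return out / np.max(np.abs(out))                # normalise peak level

# Usage: vocode one second of noise at 16 kHz
fs = 16000
x = np.random.default_rng(0).standard_normal(fs)
y = tone_vocoder(x, fs)
```

In the study above, spatialization would be applied by convolving the target and masker signals with the appropriate head-related impulse responses before vocoding; that step is omitted here for brevity.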