Speech discrimination impairments as a marker of disease severity in multiple sclerosis.

Mult Scler Relat Disord

Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia.

Published: January 2021



Article Abstract

Background: Multiple Sclerosis (MS) pathology is likely to disrupt central auditory pathways, thereby affecting an individual's ability to discriminate speech from noise. Despite the importance of speech discrimination in daily communication, its characterization in the context of MS remains limited. This cross-sectional study evaluated speech discrimination in MS under "real world" conditions, in which sentences were presented in ecologically valid multi-talker speech or broadband noise at several signal-to-noise ratios (SNRs).

Methods: Pre-recorded Bamford-Kowal-Bench sentences were presented at five SNRs in one of two background noises: speech-weighted noise and eight-talker babble. All auditory stimuli were presented via headphones to control listeners (n = 38) and MS listeners with mild (n = 20), moderate (n = 16) and advanced (n = 10) disability. Disability was quantified by the Kurtzke Expanded Disability Status Scale (EDSS), scored by a neurologist. All participants passed a routine audiometric examination.

Results: Despite normal hearing, MS psychometric discrimination curves, which model the relationship between SNR and sentence discrimination accuracy in speech-weighted noise and babble, did not change in slope (sentences/dB) but shifted to higher SNRs (dB) compared to controls. The magnitude of the shift in the curve systematically increased with greater disability. Furthermore, mixed-effects models identified EDSS score as the most significant predictor of speech discrimination in noise (odds ratio = 0.81; p < 0.001). Neither age, sex, disease phenotype, nor disease duration was significantly associated with speech discrimination performance in noise. Only MS listeners with advanced disability self-reported audio-attentional difficulty in a questionnaire designed to reflect auditory processing behaviours in daily life.
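The lateral-shift finding above (curves moved to higher SNRs at constant slope) can be sketched with a logistic psychometric fit. The data and parameter values below are purely illustrative, not the study's; the function form and the comparison of 50%-correct thresholds are a common analysis for this kind of result.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, threshold, slope):
    """Logistic psychometric function: proportion of sentences
    correctly discriminated as a function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

# Illustrative (hypothetical) accuracy data at five SNRs for two groups.
snrs = np.array([-9.0, -6.0, -3.0, 0.0, 3.0])
control_acc = psychometric(snrs, threshold=-5.0, slope=0.8)
ms_acc = psychometric(snrs, threshold=-2.0, slope=0.8)  # same slope, shifted

(ctrl_thr, ctrl_slope), _ = curve_fit(psychometric, snrs, control_acc, p0=[0.0, 1.0])
(ms_thr, ms_slope), _ = curve_fit(psychometric, snrs, ms_acc, p0=[0.0, 1.0])

# A rightward shift in the 50%-correct threshold with matched slopes
# corresponds to the pattern reported for the MS groups.
shift_db = ms_thr - ctrl_thr
print(f"threshold shift: {shift_db:.1f} dB, slopes: {ctrl_slope:.2f} vs {ms_slope:.2f}")
```

In this framing, greater disability corresponds to a larger `shift_db` (a higher SNR required for the same accuracy), while an unchanged `slope` indicates the curve's shape is preserved.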

Conclusion: Speech discrimination performance worsened systematically with greater disability, independent of age, sex, education, disease duration, and disease phenotype. These results identify novel auditory processing deficits in MS and highlight that speech discrimination tasks may provide a viable, non-invasive, and sensitive means of disease monitoring in MS.


Source: http://dx.doi.org/10.1016/j.msard.2020.102608


Similar Publications

The human auditory system must distinguish relevant sounds from noise. Severe hearing loss can be treated with cochlear implants (CIs), but how the brain adapts to electrical hearing remains unclear. This study examined adaptation to unilateral CI use in the first and seventh months after CI activation using speech comprehension measures and electroencephalography recordings, both during passive listening and an active spatial listening task.


Reverberation cues underlying virtual auditory distance perception for a frontal source.

J Acoust Soc Am

September 2025

ENTPE, Ecole Centrale de Lyon, CNRS, LTDS, UMR5513, 69518 Vaulx-en-Velin, France.

This study investigated the potential role of temporal, spectral, and binaural room-induced cues for the perception of virtual auditory distance. Listeners judged the perceived distance of a frontal source simulated between 0.5 and 10 m in a room via headphones, with eyes closed in a soundproof booth.


Hearing aid (HA) processing can affect acoustic features linked with emotions, potentially making them less distinguishable. This study investigated whether HA processing, with both standard and short processing delays, affects emotion prediction from a set of acoustic features associated with speech emotions and how well these predictions align with perceived emotions. The findings indicated that anger and sadness are the easiest emotions to predict from acoustic features, while happiness and fear are the most accurately perceived emotions by listeners with normal hearing.


Prior research on global-local processing has focused on hierarchical objects in the visual modality, whereas the real world involves multisensory interactions. The present study investigated whether the simultaneous presentation of auditory stimuli influences the recognition of visually hierarchical objects. We added four types of auditory stimuli to the traditional visual hierarchical letters paradigm: no sound (visual-only), a pure tone, a spoken letter that was congruent with the required response (response-congruent), or a spoken letter that was incongruent with it (response-incongruent).


Psychoacoustic assessment of misophonia.

JASA Express Lett

September 2025

Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas 76201,

Misophonia is a condition characterized by intense negative emotional reactions to trigger sounds and related stimuli. In this study, adult listeners (N = 15) with a self-reported history of misophonia symptoms and a control group without misophonia (N = 15) completed listening judgements of recorded misophonia trigger stimuli using a standard scale. Participants also completed an established questionnaire of misophonia symptoms, the Misophonia Questionnaire (MQ).
