Noise-vocoded speech is commonly used to simulate hearing after cochlear implantation because it spectrally degrades the speech signal. Individual variability in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI) is high. This variability is partly ascribed to differences in cognitive abilities such as working memory, verbal skills, and attention. Although clinically highly relevant, no consensus has yet been reached on exactly which cognitive factors predict the intelligibility of noise-vocoded speech in healthy subjects or of speech in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping into verbal memory, working memory, lexical and retrieval skills, as well as cognitive flexibility and attention. Partial least squares analysis revealed six variables that significantly predicted vocoded-speech performance: the ability to perceive visually degraded speech, tested by the Text Reception Threshold; vocabulary size, assessed with the Multiple Choice Word Test; working memory, gauged with the Operation Span Test; verbal learning and recall, measured with the Verbal Learning and Retention Test; and task-switching ability, tested by the Comprehensive Trail-Making Test. These cognitive abilities thus explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5461255 | PMC
http://dx.doi.org/10.3389/fnhum.2017.00294 | DOI Listing
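The noise vocoding referenced in the abstract above follows a standard recipe: split speech into frequency bands, extract each band's slow amplitude envelope, and use it to modulate band-limited noise. As a rough illustration only, here is a minimal sketch in Python (numpy/scipy); the function name, channel count, band edges, and 30 Hz envelope cutoff are assumptions for the example, not the study's settings.

```python
# A minimal noise-vocoder sketch (not the study's exact implementation).
# Assumes a mono float array `speech` at sample rate `fs`; the channel
# count, band edges, and 30 Hz envelope cutoff are illustrative choices.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=6, f_lo=100.0, f_hi=8000.0):
    """Replace each band's fine structure with envelope-modulated noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    env_lp = butter(4, 30.0, btype="low", fs=fs, output="sos")
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        # Smoothed Hilbert envelope of the speech in this band.
        env = sosfiltfilt(env_lp, np.abs(hilbert(sosfiltfilt(band, speech))))
        # Band-limited noise carrier, modulated by that envelope.
        out += np.clip(env, 0.0, None) * sosfiltfilt(band, noise)
    return out / (np.max(np.abs(out)) + 1e-12)         # normalize
```

Reducing `n_channels` removes more spectral detail; varying the channel count is how such studies typically titrate intelligibility.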
Hum Brain Mapp
August 2025
Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK.
In real-life interaction, we often need to communicate under challenging conditions, for example when speech is acoustically degraded. This is compounded by the fact that our attentional resources are often divided because we must simultaneously engage in other tasks. How the perception of degraded speech interacts with concurrently performing additional cognitive tasks remains poorly understood.
J Speech Lang Hear Res
September 2025
Purpose: Previous studies using noise-vocoded speech (NVS) have demonstrated the significance of the temporal amplitude envelope (TAE) of speech signals for vocal emotion perception. Given the importance of modulation processing of the TAE for speech perception, researchers have begun to focus on the role of the TAE's modulation components; a previous study suggested that specific modulation frequency components contribute to vocal emotion perception.
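The TAE and its modulation components discussed here are conventionally obtained by smoothing the Hilbert envelope and inspecting its spectrum. A minimal sketch under those conventions (the `signal`/`fs` inputs, 64 Hz cutoff, and function name are assumptions, not details from this study):

```python
# Sketch: extract the temporal amplitude envelope (TAE) of a mono `signal`
# at sample rate `fs`, then compute its modulation spectrum.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tae_modulation_spectrum(signal, fs, env_cutoff=64.0):
    """Return (modulation frequencies, magnitude spectrum) of the TAE."""
    sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    env = sosfiltfilt(sos, np.abs(hilbert(signal)))  # smoothed TAE
    env = env - env.mean()                           # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec
```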
Ear Hear
June 2025
Department of Hearing and Speech Sciences, University of Maryland-College Park, College Park, Maryland, USA.
Objectives: For many single-sided-deafness (SSD) cochlear-implant (CI) users, especially older ones, who have one normal-hearing ear and one CI ear, masking speech in the acoustic ear can interfere with CI-ear speech recognition. This study examined two possible explanations for this "bilateral speech interference." First, it might reflect a general (i.e., …)
Neuropsychologia
September 2025
Department of Physiology, Graduate School of Medicine, International University of Health and Welfare Faculty of Medicine, Narita, Japan.
Numerous studies have investigated the hemispheric laterality of multi-band neural oscillatory activity in response to the temporal information in speech, but its speech- and language-specificity remains elusive. In the present study, using magnetoencephalography, we examined the laterality patterns of theta-band (4-8 Hz) and high gamma-band (>80 Hz) activities phase-locked to the temporal envelope and temporal fine structure, respectively, of speech and non-speech stimuli. Monotone speech (MS) with a fundamental frequency of 80 Hz and its time-reversed (backward) version (bMS) were used as speech stimuli.
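The envelope and fine-structure cues referred to here are standardly separated via the Hilbert transform: the analytic signal's magnitude gives the temporal envelope, and the cosine of its phase gives a unit-amplitude fine-structure carrier. A minimal sketch of that decomposition (names assumed; in practice it is applied per band after band-pass filtering):

```python
# Sketch: Hilbert decomposition of a (typically band-limited) signal into
# temporal envelope and temporal fine structure (TFS).
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(signal):
    """Split a signal into its envelope and fine-structure carrier."""
    analytic = hilbert(signal)
    envelope = np.abs(analytic)        # slow amplitude modulations
    tfs = np.cos(np.angle(analytic))   # unit-amplitude carrier (fine structure)
    return envelope, tfs
```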
Ear Hear
August 2025
Department of Psychology & Neuroscience, Duke University, Durham, North Carolina, USA.
Objectives: Lexical bias is a phenomenon whereby impoverished speech signals tend to be perceived in line with the word context in which they are heard. Previous research has demonstrated that lexical bias may guide processing when the acoustic signal is degraded, as in the case of cochlear implant (CI) users. The goal of the present study was twofold: (1) to replicate previous lab-based work demonstrating a lexical bias for acoustically degraded speech using online research methods, and (2) to characterize the malleability of the lexical bias effect following a period of auditory training.