Objective: The present study aimed to evaluate binaural auditory skills in bimodal and bilateral pediatric cochlear implant (CI) users with incomplete partition type-II (IP-II) and to assess the effect of IP-II on performance by comparing their results with those of pediatric CI users with normal cochlear morphology.
Study Design: Cross-sectional study.
Setting: Tertiary referral center.
Methods: Forty-one CI users (mean age 8.8 ± 1.9 years) were grouped as bimodal (BIM-IP) and bilateral (BIL-IP) users with IP-II, and bimodal (BIM-N) and bilateral (BIL-N) users with normal cochlear anatomy. Speech perception in noise and sound localization skills were compared under 2 conditions: binaural (bilateral or bimodal) and monaural (first CI alone).
Results: BIM-IP and BIL-IP showed no performance difference in binaural tasks. The BIM-N group performed markedly worse than the BIL-IP (p = .007), BIM-IP (p < .001), and BIL-N (p = .004) groups in speech-in-noise skills. Similar significant differences in sound localization were found between the BIM-N group and the BIL-IP (p = .001), BIM-IP (p < .001), and BIL-N (p = .004) groups. All groups showed statistically significant improvements in the binaural condition on both tasks (p < .05).
Conclusion: Bilateral and bimodal pediatric CI users with IP-II benefitted from implantation as much as bilateral users with normal anatomy. Differences in residual hearing between groups may explain the poorer performance of bimodal users with normal cochlear morphology. To the best of our knowledge, this is the first study to characterize binaural performance in children diagnosed with a specific inner ear malformation subgroup.
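The abstract reports pairwise group comparisons without naming the statistical test. As a rough illustration only, the sketch below runs analogous pairwise comparisons on synthetic speech-in-noise scores using a Mann-Whitney U test; the group labels come from the abstract, but the data, sample sizes, and choice of test are assumptions, not the study's procedure.

```python
# Illustrative only: synthetic scores and a Mann-Whitney U test stand in for
# the study's (unstated) statistical procedure and real data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical speech-in-noise scores (higher = better) for the four groups.
groups = {
    "BIM-IP": rng.normal(70, 8, 10),
    "BIL-IP": rng.normal(72, 8, 10),
    "BIM-N":  rng.normal(55, 8, 10),
    "BIL-N":  rng.normal(71, 8, 11),
}

# Pairwise comparisons of BIM-N against the other three groups.
for other in ("BIL-IP", "BIM-IP", "BIL-N"):
    stat, p = mannwhitneyu(groups["BIM-N"], groups[other], alternative="two-sided")
    print(f"BIM-N vs {other}: U = {stat:.1f}, p = {p:.3f}")
```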
DOI: http://dx.doi.org/10.1002/ohn.244
J Neurosci
September 2025
Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA.
Human speech perception is multisensory, integrating auditory information from the talker's voice with visual information from the talker's face. BOLD fMRI studies have implicated the superior temporal gyrus (STG) in processing auditory speech and the superior temporal sulcus (STS) in integrating auditory and visual speech, but as an indirect hemodynamic measure, fMRI is limited in its ability to track the rapid neural computations underlying speech perception. Using stereoelectroencephalography (sEEG) electrodes, we directly recorded from the STG and STS in 42 epilepsy patients (25 F, 17 M).
Prog Neurobiol
September 2025
The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, United States; Elmezzi Graduate School of Molecular Medicine at Northwell Health, Manhasset, NY, United States; Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, United States.
Humans live in an environment that contains rich auditory stimuli, which must be processed efficiently. The entrainment of neural oscillations to acoustic inputs may support the processing of simple and complex sounds. However, the characteristics of this entrainment process have been shown to be inconsistent across species and experimental paradigms.
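The snippet above does not describe how entrainment was quantified. One common measure is coherence between the acoustic amplitude envelope and the neural signal; the minimal sketch below illustrates that idea on synthetic signals and is not a reconstruction of this study's pipeline.

```python
# Illustrative only: synthetic "audio" and "neural" signals stand in for real data;
# coherence between the acoustic envelope and the neural trace is one common
# entrainment measure, not necessarily the analysis used in the study above.
import numpy as np
from scipy.signal import hilbert, coherence

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)   # 60 s of data

# Synthetic stimulus: a 200 Hz carrier amplitude-modulated at 4 Hz.
envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
audio = envelope * np.sin(2 * np.pi * 200 * t)

# Synthetic neural signal: weakly follows the envelope, plus noise.
rng = np.random.default_rng(1)
neural = 0.3 * envelope + rng.normal(0, 1, t.size)

# Extract the acoustic envelope via the Hilbert transform, then compute
# magnitude-squared coherence between the envelope and the neural signal.
acoustic_env = np.abs(hilbert(audio))
f, Cxy = coherence(acoustic_env, neural, fs=fs, nperseg=4 * fs)

band = (f >= 1) & (f <= 10)
peak_freq = f[band][np.argmax(Cxy[band])]
print(f"Coherence in 1-10 Hz peaks near {peak_freq:.1f} Hz "
      f"(the 4 Hz modulation rate of the synthetic stimulus).")
```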
Ann N Y Acad Sci
September 2025
BCBL, Basque Center on Cognition, Brain and Language, Donostia, Spain.
Neural tracking, the alignment of brain activity with the temporal dynamics of sensory input, is a crucial mechanism underlying perception, attention, and cognition. While this concept has gained prominence in research on speech, music, and visual processing, its definition and methodological approaches remain heterogeneous. This paper critically examines neural tracking from both theoretical and methodological perspectives, highlighting how its interpretation varies across studies.
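Among the heterogeneous methodological approaches alluded to above, one widely used operationalization of neural tracking is the temporal response function (TRF): a lagged linear mapping from a stimulus feature to the neural response. The sketch below fits a toy TRF with ridge regression on synthetic data; the lag range, regularization, and data are assumptions for illustration, not this paper's method.

```python
# Illustrative only: a toy temporal response function (TRF) fit with ridge
# regression on synthetic data; this is one common operationalization of
# "neural tracking", not the specific method of the paper above.
import numpy as np

rng = np.random.default_rng(2)
fs = 100                          # sampling rate (Hz)
n = 60 * fs                       # 60 s of data

stim = rng.normal(0, 1, n)        # stand-in stimulus envelope
lags = np.arange(0, 30)           # lags 0-290 ms at 100 Hz
true_trf = (lags / 10.0) * np.exp(-lags / 10.0)   # assumed response, peaking near 100 ms
neural = np.convolve(stim, true_trf)[:n] + rng.normal(0, 1, n)

# Lagged design matrix: each column is the stimulus delayed by one extra sample.
X = np.column_stack([np.roll(stim, lag) for lag in lags])
X[:lags.max()] = 0                # discard samples contaminated by the circular shift

# Ridge regression: w = (X'X + lambda * I)^-1 X'y
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ neural)

# How well does the estimated TRF predict the neural signal?
pred = X @ w
r = np.corrcoef(pred, neural)[0, 1]
print(f"Estimated TRF peaks at lag {lags[np.argmax(w)] * 1000 / fs:.0f} ms; "
      f"prediction accuracy r = {r:.2f}")
```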
Cereb Cortex
August 2025
Department of Psychology, University of Lübeck, Ratzeburger Allee 160, Lübeck 23562, Germany.
The human auditory system must distinguish relevant sounds from noise. Severe hearing loss can be treated with cochlear implants (CIs), but how the brain adapts to electrical hearing remains unclear. This study examined adaptation to unilateral CI use in the first and seventh months after CI activation using speech comprehension measures and electroencephalography recordings, both during passive listening and an active spatial listening task.
J Acoust Soc Am
September 2025
ENTPE, Ecole Centrale de Lyon, CNRS, LTDS, UMR5513, 69518 Vaulx-en-Velin, France.
This study investigated the potential role of temporal, spectral, and binaural room-induced cues for the perception of virtual auditory distance. Listeners judged the perceived distance of a frontal source simulated between 0.5 and 10 m in a room via headphones, with eyes closed in a soundproof booth.
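As a purely illustrative aside, one textbook distance cue over the 0.5-10 m range used here is the inverse-distance level attenuation of the direct sound (about 6 dB per doubling of distance). The short sketch below simply tabulates that cue; it is not a reconstruction of the study's room simulation, which also involves temporal, spectral, and binaural room-induced cues.

```python
# Illustrative only: the inverse-distance level cue (~6 dB per doubling of
# distance) over the 0.5-10 m range used in the study above.
import numpy as np

distances = np.array([0.5, 1, 2, 5, 10])          # metres
level_re_1m = 20 * np.log10(1.0 / distances)      # dB relative to the level at 1 m
for d, lvl in zip(distances, level_re_1m):
    print(f"{d:4.1f} m: {lvl:+5.1f} dB re 1 m")
```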