Sentence recognition in quiet and amidst single-talker babble in Chinese kindergarten-aged children with cochlear implants.

J Acoust Soc Am

Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, USA.

Published: August 2025


Category Ranking: 98%

Total Visits: 921

Avg Visit Duration: 2 minutes

Citations: 20

Article Abstract

This study investigated open-set sentence recognition in quiet and amidst single-talker babble among Mandarin-speaking children with cochlear implants (CIs), with the aim of identifying the cognitive and linguistic factors that contribute to performance. Open-set sentence recognition was assessed in both conditions, alongside measures of cognitive skills (operational efficiency and auditory short-term memory) and linguistic skills (oral vocabulary and syntactic competence), in kindergarten-aged children with CIs (n = 22; age = 59.8 ± 10.6 months; age at implantation = 31.9 ± 15.1 months; primary communication mode: auditory-oral) and in peers with typical hearing (TH) (n = 21; age = 67.9 ± 7.9 months). Children with CIs performed more poorly than TH peers on all measures (p < 0.001) except operational efficiency. Notably, in children with CIs, oral vocabulary contributed significantly to sentence recognition in quiet (β = 0.39, p = 0.029), whereas auditory short-term memory contributed significantly to sentence recognition in both quiet (β = 0.51, p = 0.006) and noise (β = 0.44, p = 0.04). These findings suggest that kindergarten-aged children with CIs face substantial challenges in sentence recognition, particularly in the interference condition, despite relatively early implantation. Auditory short-term memory emerges as a crucial factor in sentence recognition for children with CIs, underscoring its importance for clinical and educational consideration.
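The β values above are standardized regression coefficients relating each predictor to sentence recognition scores. As a rough illustration only, the Python sketch below shows how standardized coefficients of this kind could be estimated; the data file and variable names are hypothetical placeholders and not the study's actual materials or analysis code.

    # Minimal sketch (not the authors' analysis) of estimating standardized
    # regression coefficients like the betas reported in the abstract.
    # The CSV file and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("ci_children_scores.csv")  # hypothetical per-child scores

    # z-score the outcome and predictors so the fitted coefficients are standardized betas
    cols = ["sentence_rec_quiet", "oral_vocabulary", "auditory_stm"]
    z = (df[cols] - df[cols].mean()) / df[cols].std()

    X = sm.add_constant(z[["oral_vocabulary", "auditory_stm"]])
    model = sm.OLS(z["sentence_rec_quiet"], X).fit()
    print(model.summary())  # coefficients, p-values, and model fit

In practice, the same model would be refit with the noise-condition score as the outcome to obtain the second set of coefficients.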


Source
http://dx.doi.org/10.1121/10.0039050

Publication Analysis

Top Keywords

sentence recognition: 12
recognition quiet: 8
quiet amidst: 8
amidst single-talker: 8
single-talker babble: 8
kindergarten-aged children: 8
children cochlear: 8
cochlear implants: 8
open-set sentence: 8
children CIs: 8

Similar Publications

Objectives: Alexithymia is characterized by difficulties in identifying and describing one's own emotions. Alexithymia has previously been associated with deficits in the processing of emotional information at both behavioral and neurobiological levels, and some studies have shown elevated levels of alexithymic traits in adults with hearing loss. This explorative study investigated alexithymia in young and adolescent school-age children with hearing aids in relation to (1) a sample of age-matched children with normal hearing, (2) age, (3) hearing thresholds, and (4) vocal emotion recognition.


Background: Clinical notes contain rich but unstructured patient data; medical jargon, abbreviations, and synonyms introduce ambiguity that makes analysis challenging and complicates real-time extraction for decision support tools.

Objective: This study aimed to examine the data curation, technology, and workflow of the named entity recognition (NER) pipeline, a component of a broader clinical decision support tool that identifies key entities using NER models and classifies these entities as present or absent in the patient through an NER assertion model.
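The entry above describes an NER pipeline that first extracts entities and then classifies each entity as present or absent in the patient via an assertion model. The article's actual models and code are not given here; the sketch below is only a generic Python illustration of that extract-then-assert pattern, using spaCy's general-purpose English model and a simple negation-cue heuristic as stand-ins for the clinical NER and assertion models.

    # Generic extract-then-assert sketch (not the article's pipeline).
    # The spaCy model and negation cues are illustrative assumptions;
    # a clinical NER model would be needed for real medical entities.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    NEGATION_CUES = {"no", "denies", "without", "negative"}

    def extract_with_assertion(text):
        doc = nlp(text)
        results = []
        for ent in doc.ents:
            # crude assertion: look for a negation cue earlier in the entity's sentence
            preceding = {tok.lower_ for tok in ent.sent if tok.i < ent.start}
            status = "absent" if preceding & NEGATION_CUES else "present"
            results.append((ent.text, ent.label_, status))
        return results

    # Example: "Paris" is flagged "absent" because "denies" precedes it in the sentence.
    print(extract_with_assertion("Patient denies recent travel to Paris."))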


While blink analysis was traditionally conducted within vision research, recent studies suggest that blinks might reflect a more general cognitive strategy for resource allocation, including in auditory tasks; however, blink analysis remains scarce in Audiology and Psychoacoustics, and its interpretation is largely speculative. It is hypothesized that as listening conditions become more difficult, the number of blinks decreases, especially during stimulus presentation, because this interval reflects a window of alertness. In Experiment 1, 21 participants were presented with 80 sentences at signal-to-noise ratios (SNRs) of 0, +7, and +14 dB and in quiet, in a sound-proof room with gaze and luminance (75 lux) controlled.


Objectives: To evaluate children's ability to recognise speech and its relationship to language ability using two newly developed tests: the Listening in Spatialised Noise and Reverberation test (LiSN-R) and the Test of Listening Difficulties - Universal (ToLD-U).

Design: LiSN-R and ToLD-U used nonword and sentence recognition in spatially separated noise and reverberation. Language ability was assessed using the Clinical Evaluation of Language Fundamentals (CELF) sentence recall.


Introduction: Manual ICD-10 coding of German clinical texts is time-consuming and error-prone. This project aims to develop a semi-automated pipeline for efficient coding of unstructured medical documentation.

State Of The Art: Existing approaches often rely on fine-tuned language models that require large datasets and perform poorly on rare codes, particularly in low-resource languages such as German.
