This study aimed to investigate open-set sentence recognition in quiet and amidst single-talker babble among Mandarin-speaking children with cochlear implants (CIs) to elucidate key contributing cognitive and linguistic factors influencing performance. Open-set sentence recognition was assessed in both conditions, alongside measurement of cognitive skills (operational efficiency and auditory short-term memory) and linguistic skills (oral vocabulary and syntactic competence) in kindergarten-aged children with CIs (n = 22; age = 59.8 ± 10.6 months; age at implantation = 31.9 ± 15.1 months; primary communication mode: auditory-oral) compared to peers with typical hearing (TH) (n = 21; age = 67.9 ± 7.9 months). Results showed that children with CIs exhibited poorer performance than TH peers across measures (p < 0.001) except for operational efficiency. Notably, in children with CIs, oral vocabulary significantly contributed to sentence recognition in quiet (β = 0.39, p = 0.029), while auditory short-term memory significantly influenced sentence recognition in both quiet (β = 0.51, p = 0.006) and noise conditions (β = 0.44, p = 0.04). These findings suggest that kindergarten-aged children with CIs face significant challenges in sentence recognition, particularly in the interference condition despite relatively early implantation. Auditory short-term memory emerges as a crucial factor affecting sentence recognition in children with CIs, underscoring its importance for clinical and educational consideration.
DOI: http://dx.doi.org/10.1121/10.0039050
Ear Hear
September 2025
Department of Otorhinolaryngology, University Medical Center Groningen (UMCG), University of Groningen, Groningen, the Netherlands.
Objectives: Alexithymia is characterized by difficulties in identifying and describing one's own emotions. Alexithymia has previously been associated with deficits in the processing of emotional information at both behavioral and neurobiological levels, and some studies have shown elevated levels of alexithymic traits in adults with hearing loss. This explorative study investigated alexithymia in young and adolescent school-age children with hearing aids in relation to (1) a sample of age-matched children with normal hearing, (2) age, (3) hearing thresholds, and (4) vocal emotion recognition.
JMIR AI
September 2025
Department of Anesthesiology, Perioperative and Pain Medicine, Mount Sinai, New York, NY, United States.
Background: Clinical notes house rich, yet unstructured, patient data, making analysis challenging due to medical jargon, abbreviations, and synonyms causing ambiguity. This complicates real-time extraction for decision support tools.
Objective: This study aimed to examine the data curation, technology, and workflow of the named entity recognition (NER) pipeline, a component of a broader clinical decision support tool that identifies key entities using NER models and classifies these entities as present or absent in the patient through an NER assertion model.
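The two-stage design described above (extract an entity, then classify its assertion status) can be illustrated with a minimal rule-based sketch. This is purely illustrative: the study's pipeline uses trained NER and assertion models, and the lexicon, negation cues, and entity labels below are invented for demonstration.

```python
import re

# Hypothetical entity lexicon and negation cues; the study's actual
# components are learned models, not the hand-written rules shown here.
ENTITIES = {"pneumonia": "CONDITION", "aspirin": "MEDICATION"}
NEGATION_CUES = ("no ", "denies ", "without ")

def extract_entities(note: str):
    """Return (entity, label, assertion) triples from a clinical note.

    Each sentence is scanned for known terms; a term co-occurring with a
    negation cue in the same sentence is asserted as 'absent'.
    """
    results = []
    for sentence in re.split(r"[.;]\s*", note.lower()):
        for term, label in ENTITIES.items():
            if term in sentence:
                negated = any(cue in sentence for cue in NEGATION_CUES)
                results.append((term, label, "absent" if negated else "present"))
    return results

note = "Patient denies pneumonia. Taking aspirin daily."
print(extract_entities(note))
# [('pneumonia', 'CONDITION', 'absent'), ('aspirin', 'MEDICATION', 'present')]
```

The sentence-level negation check is the simplest possible assertion model; it stands in for the dedicated NER assertion classifier the abstract describes.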
Trends Hear
September 2025
Department of Psychology, Concordia University, Montreal, Canada.
Blink analysis has traditionally been conducted within vision research, but recent studies suggest that blinks may reflect a more general cognitive strategy for resource allocation, including in auditory tasks; its use in audiology and psychoacoustics nevertheless remains scarce, and its interpretation largely speculative. It was hypothesized that as listening conditions become more difficult, the number of blinks would decrease, especially during stimulus presentation, because that interval reflects a window of alertness. In Experiment 1, 21 participants were presented with 80 sentences at different signal-to-noise ratios (SNRs: 0, +7, and +14 dB) and in quiet, in a sound-proof room with gaze and luminance (75 lux) controlled.
Int J Audiol
September 2025
Manchester Centre for Audiology and Deafness, School of Health Sciences, The University of Manchester, Manchester, UK.
Objectives: To evaluate children's ability to recognise speech and its relationship to language ability using two newly developed tests: the Listening in Spatialised Noise and Reverberation test (LiSN-R) and the Test of Listening Difficulties - Universal (ToLD-U).
Design: LiSN-R and ToLD-U used nonword and sentence recognition in spatially separated noise and reverberation. Language ability was assessed using the Clinical Evaluation of Language Fundamentals (CELF) sentence recall.
Stud Health Technol Inform
September 2025
Department of Computer Science, Kempten University of Applied Sciences, Kempten, Germany.
Introduction: Manual ICD-10 coding of German clinical texts is time-consuming and error-prone. This project aims to develop a semi-automated pipeline for efficient coding of unstructured medical documentation.
State Of The Art: Existing approaches often rely on fine-tuned language models that require large datasets and perform poorly on rare codes, particularly in low-resource languages such as German.
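A semi-automated pipeline of the kind described might, at its simplest, propose ICD-10 candidate codes from a terminology lookup for a human coder to confirm. The sketch below is illustrative only: the German terms and the (real) ICD-10 codes shown are examples, not the project's actual mapping or method.

```python
# Toy lexicon mapping German clinical terms to ICD-10 codes
# (illustrative; a production pipeline would use a full terminology
# and a model-based ranker, not a three-entry dictionary).
LEXICON = {
    "pneumonie": "J18.9",   # pneumonia, unspecified
    "hypertonie": "I10",    # essential (primary) hypertension
    "diabetes mellitus typ 2": "E11.9",
}

def propose_codes(text: str):
    """Return sorted candidate ICD-10 codes for human review."""
    text = text.lower()
    return sorted({code for term, code in LEXICON.items() if term in text})

print(propose_codes("Aufnahme wegen Pneumonie bei bekannter Hypertonie."))
# ['I10', 'J18.9']
```

Keeping the final code assignment with a human reviewer is what makes the workflow semi-automated rather than fully automatic.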