Purpose: Cochlear implant (CI) recipients with normal or near normal hearing (NH) in the contralateral ear, referred to as single-sided deafness (SSD), experience significantly better speech recognition in noise with their CI than without it, although reported outcomes vary. One possible explanation for differences in outcomes across studies could be differences in the spatial configurations used to assess performance. This study compared speech recognition for different spatial configurations of the target and masker, with test materials used clinically.
Method: Sixteen CI users with SSD completed tasks of masked speech recognition presented in five spatial configurations. The target speech was presented from the front speaker (0° azimuth). The masker was located either 90° or 45° toward the CI-ear or NH-ear or colocated with the target. Materials were the AzBio sentences in a 10-talker masker and the Bamford-Kowal-Bench Speech-in-Noise test (BKB-SIN; four-talker masker). Spatial release from masking (SRM) was computed as the benefit associated with spatial separation relative to the colocated condition.
Results: Performance was significantly better when the masker was separated toward the CI-ear as compared to colocated. No benefit was observed for spatial separations toward the NH-ear. The magnitude of SRM for spatial separations toward the CI-ear was similar for 45° and 90° when tested with the AzBio sentences, but a larger benefit was observed for 90° as compared to 45° for the BKB-SIN.
Conclusions: Masked speech recognition in CI users with SSD varies as a function of the spatial configuration of the target and masker. Results supported an expansion of the clinical test battery at the study site to assess binaural hearing abilities for CI candidates and recipients with SSD. The revised test battery presents the target from the front speaker and the masker colocated with the target, 90° toward the CI-ear, or 90° toward the NH-ear.
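The SRM metric described in the Method section is simple arithmetic: the benefit of spatial separation is the colocated speech-reception threshold minus the separated threshold, in dB SNR. A minimal sketch follows; the threshold values are illustrative placeholders, not data from the study.

```python
# Sketch of the spatial release from masking (SRM) computation described
# above: SRM = colocated threshold - spatially separated threshold, where
# thresholds are speech-reception thresholds (SNR in dB at 50% correct).
# All numeric values below are hypothetical, for illustration only.

def spatial_release(colocated_srt_db: float, separated_srt_db: float) -> float:
    """Return SRM in dB. Positive SRM means the separated configuration
    is easier (a lower, i.e. better, SNR threshold than colocated)."""
    return colocated_srt_db - separated_srt_db

# Illustrative thresholds for one hypothetical listener (dB SNR):
colocated = 2.0
masker_90_ci_ear = -3.5   # masker separated 90 degrees toward the CI ear
masker_90_nh_ear = 2.1    # masker separated 90 degrees toward the NH ear

print(spatial_release(colocated, masker_90_ci_ear))  # large positive SRM: benefit
print(spatial_release(colocated, masker_90_nh_ear))  # near zero/negative: no benefit
```

This mirrors the study's pattern of results: separation toward the CI ear yields a positive SRM, while separation toward the NH ear yields none.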
DOI: http://dx.doi.org/10.1044/2022_AJA-21-00268
Int J Audiol
September 2025
Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany.
Objective: Determination of monaural and binaural speech-recognition curves for the Freiburg monosyllabic speech test (FMST) in quiet to update and supplement existing normative data.
Design: Monaural and binaural speech-recognition tests were performed in free field at five speech levels in two anechoic test rooms at two sites (Lübeck and Oldenburg, Germany). For the monaural tests, one ear was occluded with a foam earplug.
Front Artif Intell
August 2025
School of Computation and Communication Science and Engineering, The Nelson Mandela African Institution of Science and Technology, Arusha, Tanzania.
Computer vision has been identified as one of the solutions to bridge communication barriers between speech-impaired populations and those without impairment as most people are unaware of the sign language used by speech-impaired individuals. Numerous studies have been conducted to address this challenge. However, recognizing word signs, which are usually dynamic and involve more than one frame per sign, remains a challenge.
Zhonghua Jie He He Hu Xi Za Zhi
September 2025
Department of Respiratory and Critical Care Medicine, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory He
Cough is a common symptom of many respiratory diseases, and parameters such as frequency, intensity, type and duration play important roles in disease screening, diagnosis and prognosis. Among these, cough frequency is the most widely applied metric. In current clinical practice, cough severity is primarily assessed based on patients' subjective symptom descriptions in combination with semi-structured questionnaires.
Cogn Psychol
September 2025
Graduate School of Engineering, Kochi University of Technology, Kami, Kochi, Japan. Electronic address:
Prior research on global-local processing has focused on hierarchical objects in the visual modality, whereas the real world involves multisensory interactions. The present study investigated whether the simultaneous presentation of auditory stimuli influences the recognition of visual hierarchical objects. We added four types of auditory stimuli to the traditional visual hierarchical-letters paradigm: no sound (visual-only), a pure tone, a spoken letter congruent with the required response (response-congruent), or a spoken letter incongruent with it (response-incongruent).
Nanomicro Lett
September 2025
Nanomaterials & System Lab, Major of Mechatronics Engineering, Faculty of Applied Energy System, Jeju National University, Jeju, 63243, Republic of Korea.
Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring, clinical diagnosis, and robotic applications. Nevertheless, it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance along with biocompatibility, adhesion, self-healing, and environmental robustness with excellent sensing metrics. Herein, we report a multifunctional, anti-freezing, self-adhesive, and self-healable organogel pressure sensor composed of cobalt nanoparticle encapsulated nitrogen-doped carbon nanotubes (CoN CNT) embedded in a polyvinyl alcohol-gelatin (PVA/GLE) matrix.