Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Purpose: Cochlear implant (CI) recipients with normal or near-normal hearing (NH) in the contralateral ear, a configuration referred to as single-sided deafness (SSD), experience significantly better speech recognition in noise with their CI than without it, although reported outcomes vary. One possible explanation for the differences in outcomes across studies is differences in the spatial configurations used to assess performance. This study compared speech recognition for different spatial configurations of the target and masker, using test materials that are administered clinically.

Method: Sixteen CI users with SSD completed tasks of masked speech recognition presented in five spatial configurations. The target speech was presented from the front speaker (0° azimuth). The masker was located either 90° or 45° toward the CI-ear or NH-ear or colocated with the target. Materials were the AzBio sentences in a 10-talker masker and the Bamford-Kowal-Bench Speech-in-Noise test (BKB-SIN; four-talker masker). Spatial release from masking (SRM) was computed as the benefit associated with spatial separation relative to the colocated condition.
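
To make the SRM computation concrete, the short Python sketch below applies the subtraction described above, assuming percent-correct scores such as those obtained with the AzBio sentences; for the BKB-SIN, the same difference would be computed from its SNR-based scores instead. The condition labels and score values are hypothetical and are not data from this study.

# Minimal sketch of spatial release from masking (SRM): the benefit of
# separating the masker from the target, relative to the colocated condition.
# Assumes percent-correct scoring; all values below are illustrative only.

def spatial_release_from_masking(separated_score: float, colocated_score: float) -> float:
    """SRM = score with spatial separation minus score with target and masker colocated."""
    return separated_score - colocated_score

# Hypothetical percent-correct scores for one listener by masker location.
scores = {
    "colocated": 52.0,   # masker at 0 degrees, colocated with the target
    "ci_ear_90": 71.0,   # masker separated 90 degrees toward the CI ear
    "nh_ear_90": 50.0,   # masker separated 90 degrees toward the NH ear
}

for condition in ("ci_ear_90", "nh_ear_90"):
    srm = spatial_release_from_masking(scores[condition], scores["colocated"])
    print(f"SRM for {condition}: {srm:+.1f} percentage points")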

Results: Performance was significantly better when the masker was separated toward the CI-ear as compared to colocated. No benefit was observed for spatial separations toward the NH-ear. The magnitude of SRM for spatial separations toward the CI-ear was similar for 45° and 90° when tested with the AzBio sentences, but a larger benefit was observed for 90° as compared to 45° for the BKB-SIN.

Conclusions: Masked speech recognition in CI users with SSD varies as a function of the spatial configuration of the target and masker. Results supported an expansion of the clinical test battery at the study site to assess binaural hearing abilities for CI candidates and recipients with SSD. The revised test battery presents the target from the front speaker and the masker colocated with the target, 90° toward the CI-ear, or 90° toward the NH-ear.

Source: http://dx.doi.org/10.1044/2022_AJA-21-00268

Publication Analysis

Top Keywords

speech recognition (20)
masked speech (12)
spatial configurations (12)
masker (8)
cochlear implant (8)
single-sided deafness (8)
spatial (8)
configurations target (8)
target masker (8)
users ssd (8)

Similar Publications

Objective: Determination of monaural and binaural speech-recognition curves for the Freiburg monosyllabic speech test (FMST) in quiet to update and supplement existing normative data.

Design: Monaural and binaural speech-recognition tests were performed in free field at five speech levels in two anechoic test rooms at two sites (Lübeck and Oldenburg, Germany). For the monaural tests, one ear was occluded with a foam earplug.

Computer vision has been identified as one solution for bridging communication barriers between speech-impaired populations and those without impairment, as most people are unfamiliar with the sign language used by speech-impaired individuals. Numerous studies have been conducted to address this challenge. However, recognizing word signs, which are usually dynamic and involve more than one frame per sign, remains a challenge.

[Cough frequency monitoring: current technologies and clinical research applications].

Zhonghua Jie He He Hu Xi Za Zhi

September 2025

Department of Respiratory and Critical Care Medicine, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health.

Cough is a common symptom of many respiratory diseases, and parameters such as frequency, intensity, type and duration play important roles in disease screening, diagnosis and prognosis. Among these, cough frequency is the most widely applied metric. In current clinical practice, cough severity is primarily assessed based on patients' subjective symptom descriptions in combination with semi-structured questionnaires.

Prior research on global-local processing has focused on hierarchical objects in the visual modality, while the real world involves multisensory interactions. The present study investigated whether the simultaneous presentation of auditory stimuli influences the recognition of visually hierarchical objects. We added four types of auditory stimuli to the traditional visual hierarchical-letters paradigm: no sound (visual-only), a pure tone, a spoken letter that was congruent with the required response (response-congruent), or a spoken letter that was incongruent with it (response-incongruent).

Deep Learning-Assisted Organogel Pressure Sensor for Alphabet Recognition and Bio-Mechanical Motion Monitoring.

Nanomicro Lett

September 2025

Nanomaterials & System Lab, Major of Mechatronics Engineering, Faculty of Applied Energy System, Jeju National University, Jeju, 63243, Republic of Korea.

Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring, clinical diagnosis, and robotic applications. Nevertheless, it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance along with biocompatibility, adhesion, self-healing, and environmental robustness with excellent sensing metrics. Herein, we report a multifunctional, anti-freezing, self-adhesive, and self-healable organogel pressure sensor composed of cobalt nanoparticle encapsulated nitrogen-doped carbon nanotubes (CoN CNT) embedded in a polyvinyl alcohol-gelatin (PVA/GLE) matrix.
