Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling at both frequencies was stronger for the attended speech stream than for the unattended multitalker background, and the coupling strengths decreased as the multitalker background level increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with the bilateral coupling observed without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background as a whole or to its individual talkers. These results highlight the key role of the listener's left superior temporal gyrus in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream, within a multitalker auditory scene.
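The speech-in-noise mixtures above are characterized by their signal-to-noise ratio. As a rough sketch of how such a mixture can be constructed (the abstract does not describe the authors' actual calibration procedure; the `mix_at_snr` helper and the white-noise placeholders below are purely illustrative), one can scale the babble so the speech-to-babble power ratio hits the target level:

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Scale `babble` so the speech-to-babble power ratio equals `snr_db`
    and return the mixture plus the applied gain. A common recipe; the
    study's exact level calibration is not given in the abstract."""
    p_speech = np.mean(speech ** 2)
    p_babble = np.mean(babble ** 2)
    target_p_babble = p_speech / (10.0 ** (snr_db / 10.0))
    gain = np.sqrt(target_p_babble / p_babble)
    return speech + gain * babble, gain

rng = np.random.default_rng(0)
speech = rng.standard_normal(48_000)   # placeholder for an attended story
babble = rng.standard_normal(48_000)   # placeholder for multitalker babble

for snr_db in (5, 0, -5, -10):         # the four SNRs used in the study
    mix, gain = mix_at_snr(speech, babble, snr_db)
    achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean((gain * babble) ** 2))
    print(snr_db, round(achieved, 6))
```

By construction the achieved SNR matches the target exactly, so the same babble recording can be reused across conditions with only its gain changing.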

Significance Statement: When people listen to one person at a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole auditory scene, and how increasing background noise corrupts this process, is still debated. In this magnetoencephalography study, subjects had to attend to a speech stream presented with or without multitalker background noise. The results argue for frequency-dependent cortical tracking mechanisms for the attended speech stream: the left superior temporal gyrus tracked the ∼0.5 Hz modulations of the attended speech stream only when the speech was embedded in a multitalker background, whereas the right supratemporal auditory cortex tracked the 4-8 Hz modulations during both noiseless and cocktail-party conditions.
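The corticovocal coupling described above is quantified by coherence between the speech temporal envelope and neural signals at specific frequencies. A minimal sketch of that kind of analysis on synthetic data, assuming SciPy's magnitude-squared coherence estimator stands in for the paper's MEG pipeline (all signals and parameters below are toy values, not the study's):

```python
import numpy as np
from scipy.signal import coherence, hilbert

fs = 1000                       # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1 / fs)    # 60 s of signal
rng = np.random.default_rng(0)

# Toy "speech": white-noise carrier amplitude-modulated at ~0.5 Hz,
# standing in for the slow envelope modulations the paper tracks.
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
speech = envelope * rng.standard_normal(t.size)

# Toy "cortical" signal that partially follows the envelope plus noise.
cortical = envelope + rng.standard_normal(t.size)

# Temporal envelope of the speech via the analytic-signal magnitude.
speech_env = np.abs(hilbert(speech))

# Magnitude-squared coherence; 8 s segments give 0.125 Hz resolution.
f, Cxy = coherence(speech_env, cortical, fs=fs, nperseg=8 * fs)

# Inspect the low-frequency band relevant to the study.
band = (f >= 0.2) & (f <= 10)
peak_f = f[band][np.argmax(Cxy[band])]
print(round(peak_f, 3))
```

On this toy example the coherence peaks at the 0.5 Hz modulation frequency; in the actual study the same logic is applied between source-reconstructed MEG activity and the envelopes of the attended and unattended streams.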

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6601992
DOI: http://dx.doi.org/10.1523/JNEUROSCI.1730-15.2016

Publication Analysis

Top Keywords

multitalker background: 40
attended speech: 36
speech stream: 32
background noise: 20
superior temporal: 16
auditory scene: 16
left superior: 12
multitalker: 12
auditory cortex: 12
speech: 11

Similar Publications

DNN-Based Noise Reduction Significantly Improves Bimodal Benefit in Background Noise for Cochlear Implant Users.

J Clin Med

July 2025

Department of Otolaryngology-Head and Neck Surgery, Division of Audiology, Mayo Clinic, Rochester, MN 55905, USA.

Traditional hearing aid noise reduction algorithms offer no additional benefit in noisy situations for bimodal cochlear implant (CI) users with a CI in one ear and a hearing aid (HA) in the other. Recent breakthroughs in deep neural network (DNN)-based noise reduction have improved speech understanding for hearing aid users in noisy environments. These advancements could also boost speech perception in noise for bimodal CI users.

Introduction: Performing everyday tasks requires the use of multiple cognitive, sensory, and emotional systems. The interference of different variables in these multitasking systems affects our motor-balance system. This study was conducted to investigate how acoustic stimuli presented during a cognitive-motor dual task affect postural control in healthy young adults.

Processing speech amongst noise requires sensory and cognitive abilities that are often affected by Huntington's Disease. However, their impact on daily communication remains unclear. We examined the effects of Huntington's Disease on speech-in-noise processing using everyday sentences and words in noise contexts and conditions that mimic different daily life scenarios.

Purpose: The Connected Speech Test (CST) assesses an individual's ability to understand everyday contextualized running speech amidst competing background babble. To minimize accent effects on speech perception scores and reduce the noise floor of the original recordings, an updated version was developed by Saleh et al. (2020).

Purpose: Previous studies have debated differences in spoken language processing between nonnative and native English speakers, often yielding varying and sometimes contradictory results. To address these discrepancies, we employed a comprehensive battery of tasks to compare auditory, speech, and memory processing between nonnative and native English language speakers (NELS).

Method: The study included 70 university students aged 18-35 years, comprising 29 nonnative and 41 native monolingual NELS of both genders.
