Auditory-visual multisensory interactions in humans: timing, topography, directionality, and sources.

J Neurosci

Neuropsychology and Neurorehabilitation Service, Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, 1011 Lausanne, Switzerland.

Published: September 2010


Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended, but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
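The additive-model logic the abstract debates can be illustrated in a few lines: under linear summation, the response to a combined audiovisual (AV) stimulus should equal the sum of the auditory-alone (A) and visual-alone (V) responses, and any deviation from that sum is a nonlinear interaction (negative deviations being the subadditive pattern reported here). The following sketch uses synthetic single-trial amplitudes, not the authors' data or analysis pipeline, and is only a minimal illustration of the AV − (A + V) contrast:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-trial ERP amplitudes (arbitrary units) at one
# electrode/latency for auditory (A), visual (V), and audiovisual (AV)
# conditions. The AV mean is deliberately set below mean(A) + mean(V)
# to mimic the subadditive direction of effect described in the abstract.
n_trials = 200
a = rng.normal(2.0, 1.0, n_trials)    # auditory-alone responses
v = rng.normal(3.0, 1.0, n_trials)    # visual-alone responses
av = rng.normal(4.2, 1.0, n_trials)   # audiovisual responses (< 2.0 + 3.0)

# Additive model: pure linear summation predicts mean(AV) == mean(A) + mean(V).
# The interaction term is the deviation from that prediction.
interaction = av.mean() - (a.mean() + v.mean())

print(f"mean A + mean V : {a.mean() + v.mean():.2f}")
print(f"mean AV         : {av.mean():.2f}")
print(f"interaction     : {interaction:.2f}  (negative => subadditive)")
```

In a real ERP analysis this contrast would be computed per electrode and time point and tested statistically; the scalar version above only shows the arithmetic behind the superadditive/subadditive terminology.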


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6633577
DOI: http://dx.doi.org/10.1523/JNEUROSCI.1099-10.2010

Publication Analysis

Top Keywords: multisensory interactions (12); interactions humans (8); analysis methods (8); interactions (6); auditory-visual multisensory (4); humans (4); humans timing (4); timing topography (4); topography directionality (4); directionality sources (4)

Similar Publications

This study used integrated sensory-guided, machine learning, and bioinformatics strategies to identify umami-enhancing peptides from , investigated their mechanism of umami enhancement, and confirmed their umami-enhancing properties through sensory evaluations and electronic tongue analysis. Three umami-enhancing peptides (APDGLPTGQ, SDDGFQ, and GLGDDL) demonstrated synergistic/additive effects by significantly enhancing umami intensity and duration in monosodium glutamate (MSG). Furthermore, molecular docking showed that these peptides enhanced both the binding affinity and the interaction forces between MSG and the T1R1/T1R3 receptor system, thereby enhancing umami perception.


Prior research on global-local processing has focused on hierarchical objects in the visual modality, whereas the real world involves multisensory interactions. The present study investigated whether the simultaneous presentation of auditory stimuli influences the recognition of visually hierarchical objects. We added four types of auditory stimuli to the traditional visual hierarchical-letters paradigm: no sound (visual-only), a pure tone, a spoken letter congruent with the required response (response-congruent), or a spoken letter incongruent with it (response-incongruent).


Multisensory perception and action in painting: science, creativity, and technology.

Front Psychol

August 2025

Istituto Italiano di Tecnologia - Robotics, Brain and Cognitive Sciences (RBCS), Genova, Italy.

Painting comes from the desire to create, through movement and color, a non-existent image and make it real. Creating a new pictorial composition requires close harmony between the creative process and the motor act. Technique represents the ability to generate motor actions and to interpret the "language" of colors and shapes.


The prevalence of dementia is increasing every year, with one person developing dementia every 3 seconds. This study therefore proposes a novel multi-sensory rehabilitation interactive game system (MRIGS), which uses grip assistive devices combined with different colors and tactile stimulation to deliver multi-sensory training across vision, hearing, and touch. The study involved 17 older adults (72.


Sensorimotor contingencies in congenital hearing loss: The critical first nine months.

Hear Res

August 2025

Departments of Human Development & Quantitative Methodology and Hearing & Speech Sciences, University of Maryland, College Park, USA.

In the past two decades, it has become possible to compensate for severe-to-profound hearing loss using cochlear implants (CIs). Data from implanted children demonstrate that hearing and language acquisition is possible within an early critical period of 3 years; however, the earlier access to sound is provided, the better the outcomes that can be expected. While the clinical priority is to provide deaf and hard-of-hearing children with access to spoken language through hearing aids and CIs as early as possible, for most deaf children this access currently comes in the second or third year of life.
