Although not considered a core feature of autism, autistic children often present with difficulties in reading comprehension, a multisensory process involving the translation of print into speech sounds (i.e., decoding) and the interpretation of words in context (i.e., language comprehension). This study tested the hypothesis that audiovisual integration may explain individual differences in reading comprehension, through its relations with decoding and language comprehension, in autistic and non-autistic children. To test this hypothesis, we conducted a concurrent correlational study involving 50 autistic and 50 non-autistic school-aged children (8-17 years of age) matched at the group level on biological sex and chronological age. Participants completed a battery of tests probing their reading comprehension, decoding, and language comprehension, as well as a psychophysical task assessing audiovisual integration as indexed by susceptibility to the McGurk illusion. A series of regression analyses tested the relations of interest. Audiovisual integration was significantly associated with reading comprehension, decoding, and language comprehension, with moderate-to-large effect sizes. Mediation analyses revealed that the relation between audiovisual integration and reading comprehension was completely mediated by decoding and language comprehension, with standardized indirect effects indicating significant mediation through both pathways. These associations did not vary by diagnostic group. This work highlights the potential role of audiovisual integration in language and literacy development and underscores the promise of multisensory-based interventions for improving reading outcomes in autistic and non-autistic children. Future research should employ longitudinal designs and more diverse samples to replicate and extend these findings.
DOI: http://dx.doi.org/10.1007/s10803-025-06960-3
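The mediation analysis summarized in this abstract follows the standard product-of-coefficients logic: the indirect effect through each mediator is the product of the X→M path and the M→Y path from a model containing all mediators. The Python sketch below is a hypothetical illustration, not the authors' code; it simulates stand-in data (all variable names and effect sizes are assumptions) and bootstraps percentile confidence intervals for the two indirect pathways, audiovisual integration → decoding → reading and audiovisual integration → language comprehension → reading.

```python
# Hypothetical parallel multiple-mediator sketch (not the authors' analysis).
# X = audiovisual integration (McGurk susceptibility), M1 = decoding,
# M2 = language comprehension, Y = reading comprehension.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data; the study's actual sample was N = 100.
n = 100
av = rng.normal(size=n)                              # X
decoding = 0.5 * av + rng.normal(size=n)             # M1
language = 0.5 * av + rng.normal(size=n)             # M2
reading = 0.4 * decoding + 0.4 * language + rng.normal(size=n)  # Y
df = pd.DataFrame({"av": av, "decoding": decoding,
                   "language": language, "reading": reading})

def indirect_effects(d):
    """Product-of-coefficients indirect effects (a * b) for each mediator."""
    a1 = smf.ols("decoding ~ av", data=d).fit().params["av"]
    a2 = smf.ols("language ~ av", data=d).fit().params["av"]
    b = smf.ols("reading ~ av + decoding + language", data=d).fit().params
    return a1 * b["decoding"], a2 * b["language"]

# Percentile-bootstrap 95% confidence intervals for both indirect paths.
boot = np.array([indirect_effects(df.iloc[rng.integers(0, n, n)])
                 for _ in range(1000)])
ci = np.percentile(boot, [2.5, 97.5], axis=0)
print("indirect via decoding:  95% CI", ci[:, 0])
print("indirect via language:  95% CI", ci[:, 1])
```

Complete mediation, as reported in the abstract, corresponds to both bootstrap intervals excluding zero while the direct effect of audiovisual integration in the full model is no longer reliably different from zero.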
J Neurosci
September 2025
Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA.
Human speech perception is multisensory, integrating auditory information from the talker's voice with visual information from the talker's face. BOLD fMRI studies have implicated the superior temporal gyrus (STG) in processing auditory speech and the superior temporal sulcus (STS) in integrating auditory and visual speech, but as an indirect hemodynamic measure, fMRI is limited in its ability to track the rapid neural computations underlying speech perception. Using stereoelectroencephalography (sEEG) electrodes, we recorded directly from the STG and STS in 42 epilepsy patients (25 F, 17 M).
Audio-visual event localization (AVEL) aims to recognize events in videos by associating audio and visual information. However, the events involved in existing AVEL tasks are usually coarse-grained. Finer-grained events sometimes need to be distinguished, particularly in expert-level applications and rich-content-generation studies.
Soc Cogn Affect Neurosci
September 2025
Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada.
There is emerging evidence that a performer's body movements may enhance music-induced pleasure. However, the neural mechanism underlying such modulation remains largely unexplored. This study used behavioral, psychophysiological, and electroencephalographic data collected from 32 listeners (analyzed sample = 31) as they watched and listened to videos of pop songs performed vocally (with Mandarin lyrics) or on the violin.
Neuroimage
August 2025
MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy.
Action representation and the sharing of feature coding within the Action Observation Network (AON) remain debated, and our understanding of how the brain consistently encodes action features across sensory modalities under variable, naturalistic conditions is still limited. Here, we introduce a theoretically based taxonomic model of action representation that categorizes action-related features into six conceptual domains: Space, Effector, Agent & Object, Social, Emotion, and Linguistic. We assessed the model's predictive power on human brain activity by acquiring functional MRI (fMRI) data from participants exposed to audiovisual, visual-only, or auditory-only versions of the same naturalistic movie.
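A common way to assess a feature model's predictive power on fMRI data is a voxelwise encoding model; the sketch below illustrates that general approach under stated assumptions and is not the authors' pipeline. Only the six domain names come from the abstract; the data dimensions, the five-features-per-domain design, the ridge penalty, and the random train/test split are all hypothetical.

```python
# Hypothetical voxelwise encoding-model sketch (not the authors' pipeline).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
DOMAINS = ["Space", "Effector", "Agent & Object", "Social",
           "Emotion", "Linguistic"]

n_trs, n_voxels = 600, 500                 # assumed scan length and ROI size
feats = {d: rng.normal(size=(n_trs, 5)) for d in DOMAINS}  # 5 features/domain
X = np.hstack(list(feats.values()))

# Simulated voxel responses that partly depend on the features.
true_w = rng.normal(size=(X.shape[1], n_voxels))
Y = X @ true_w + rng.normal(scale=5.0, size=(n_trs, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25,
                                          random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)   # one weight map per voxel
pred = model.predict(X_te)

# Per-voxel accuracy: correlation between predicted and observed responses.
r = np.array([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1]
              for v in range(n_voxels)])
print(f"median held-out voxel correlation: {np.median(r):.2f}")
```

With real fMRI time series, one would split by scanning runs rather than random time points to respect temporal autocorrelation, and domain-specific contributions could be probed by refitting with one domain's columns removed.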
Eur J Neurosci
September 2025
Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Université de Toulouse, Toulouse, France.
The pulvinar is a posterior thalamic nucleus with a heterogeneous anatomo-functional organization. It is divided into four parts, including the medial pulvinar, which is densely connected with primary unisensory and multisensory cortical regions as well as with subcortical structures such as the superior colliculus. Based on this connectivity, the medial pulvinar may play an important role in sensory processing and multisensory integration.