Perceptual decisions are often based on multiple sensory inputs whose reliabilities rapidly vary over time, yet little is known about how the brain integrates these inputs to optimize behavior. The optimal solution requires that neurons simply add their sensory inputs across time and modalities, as long as these inputs are encoded with an invariant linear probabilistic population code (ilPPC). While this theoretical possibility has been raised before, it has never been tested experimentally. Here, we report that neural activity in the lateral intraparietal area (LIP) of macaques performing a vestibular-visual multisensory decision-making task is indeed consistent with the ilPPC theory. More specifically, we found that LIP accumulates momentary evidence proportional to the visual speed and the absolute value of vestibular acceleration, two variables that are encoded with close approximations to ilPPCs in sensory areas. Together, these results provide a remarkably simple and biologically plausible solution to near-optimal multisensory decision making.
DOI: http://dx.doi.org/10.1016/j.neuron.2019.08.038
Cogn Psychol
September 2025
Graduate School of Engineering, Kochi University of Technology, Kami, Kochi, Japan.
Prior research on global-local processing has focused on hierarchical objects in the visual modality, whereas the real world involves multisensory interactions. The present study investigated whether the simultaneous presentation of auditory stimuli influences the recognition of visually hierarchical objects. We added four types of auditory stimuli to the traditional visual hierarchical letters paradigm: no sound (visual-only), a pure tone, a spoken letter that was congruent with the required response (response-congruent), or a spoken letter that was incongruent with it (response-incongruent).
Eur J Investig Health Psychol Educ
August 2025
Department of Psychology, Counselling and Therapy, La Trobe University, Melbourne, VIC 3086, Australia.
Multisensory processing has long been recognized to enhance perception, cognition, and actions in adults. However, there is currently limited understanding of how multisensory stimuli, in comparison to unisensory stimuli, contribute to the development of both motor and verbally assessed working memory (WM) in children. Thus, the current study aimed to systematically review and meta-analyze the associations between the multisensory processing of auditory and visual stimuli, and performance on simple and more complex WM tasks, in children from birth to 15 years old.
Q J Exp Psychol (Hove)
August 2025
Centre for Multisensory Marketing, Department of Marketing, BI Norwegian Business School, Nydalsveien 37, 0484 Oslo, Norway.
For decades, researchers have explored the relationship between aesthetic features such as symmetry and complexity and preference. Likewise, philosophers and psychologists alike have pondered the differences between preference and behavior. Nevertheless, little is known about the relationship between aesthetic preference and motivation.
Front Psychol
August 2025
Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy.
This paper explores the impact of technology-mediated (TM) communication on interpersonal synchrony through the integrated lens of social neuroscience, embodied cognition, and Conceptual Metaphor Theory (CMT). It focuses particularly on the case of remote working, which exemplifies the challenges and adaptations required when social interactions shift from face-to-face (FTF) to digital environments. While FTF communication enables interpersonal synchrony through rich sensorimotor cues, such as gaze, posture, and gesture, TM communication often reduces or distorts these embodied signals.
Nat Hum Behav
August 2025
School of Electrical and Electronic Engineering and UCD Centre for Biomedical Engineering, University College Dublin, Dublin, Ireland.
Detecting targets in multisensory environments is an elemental brain function, but it is not yet known whether information from different sensory modalities is accumulated by distinct processes, and, if so, whether the processes are subject to separate decision criteria. Here we address this in two experiments (n = 22, n = 21) using a paradigm design that enables neural evidence accumulation to be traced through a centro-parietal positivity and modelled alongside response time distributions. Through analysis of both redundant (respond-to-either-modality) and conjunctive (respond-only-to-both) audio-visual detection data, joint neural-behavioural modelling, and a follow-up onset-asynchrony experiment, we found that auditory and visual evidence is accumulated in distinct processes during multisensory detection, and cumulative evidence in the two modalities sub-additively co-activates a single, thresholded motor process during redundant detection.