Our intuitive sense of number allows rapid estimation of the number of objects (numerosity) in a scene. How does the continuous nature of neural information processing create a discrete representation of number? A neurocomputational model with divisive normalization explains this process and existing data; however, a successful model should not only explain existing data but also generate novel predictions. Here, we experimentally test novel predictions of this model to evaluate its merit for explaining the mechanisms of numerosity perception. We did so by considering the coherence illusion: the underestimation of number in arrays containing heterogeneous compared to homogeneous items. First, we established the existence of the coherence illusion for homogeneity manipulations of both the area and the orientation of items in an array. Second, despite the behavioral similarity, the divisive normalization model predicted that these two illusions should reflect activity at different stages of visual processing. Finally, visual evoked potentials from an electroencephalography experiment confirmed these predictions, showing that area and orientation coherence modulate brain responses at distinct latencies and topographies. These results demonstrate the utility of the divisive normalization model for explaining numerosity perception, according to which numerosity perception is a byproduct of canonical neurocomputations that exist throughout the visual pathway.
DOI: http://dx.doi.org/10.1093/cercor/bhae418
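The abstract above does not spell out the model's equations, but divisive normalization has a standard canonical form (Carandini–Heeger style) in which each unit's drive is divided by a semi-saturation constant plus activity pooled over the population. Below is a minimal Python sketch of that operation; the parameter names (sigma, n, gamma) and toy inputs are illustrative assumptions, not values or simulations from the paper.

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0, gamma=1.0):
    """Canonical divisive normalization: each unit's driving input is
    divided by a semi-saturation constant plus activity pooled over the
    whole population. Parameter names are illustrative only."""
    drive = np.asarray(drive, dtype=float)
    pooled = sigma ** n + np.sum(drive ** n)
    return gamma * drive ** n / pooled

# The first unit receives the same drive (1.0) in both cases, but its
# normalized response shrinks when the rest of the population is active.
weak_context = divisive_normalization([1.0, 0.1, 0.1])[0]    # ~0.50
strong_context = divisive_normalization([1.0, 2.0, 2.0])[0]  # ~0.10
print(weak_context, strong_context)
```

The toy comparison only shows the suppressive pooling that normalization applies at each visual stage; it does not reproduce the paper's numerosity or coherence-illusion simulations.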
J Neurosci
August 2025
Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada, K7L 3N6.
The integration of multiple sensory inputs is essential for human perception and action in uncertain environments. This process includes reference-frame transformations, as different sensory signals are encoded in different coordinate systems. Studies have shown that multisensory integration in humans is consistent with Bayesian optimal inference.
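The abstract gives no equations, but for two independent Gaussian cues "Bayesian optimal inference" has a standard closed form: each cue is weighted by its reliability (inverse variance). A minimal sketch follows; the cue values and noise levels are made up for illustration and are not taken from this study.

```python
import numpy as np

def fuse_gaussian_cues(mu, sigma):
    """Bayes-optimal fusion of independent Gaussian cue estimates:
    reliability-weighted averaging (textbook two-cue case, not the
    specific model tested in the paper)."""
    mu = np.asarray(mu, dtype=float)
    var = np.asarray(sigma, dtype=float) ** 2
    weights = (1.0 / var) / np.sum(1.0 / var)   # reliability weights, sum to 1
    mu_fused = np.sum(weights * mu)             # fused location estimate
    var_fused = 1.0 / np.sum(1.0 / var)         # never larger than either cue's variance
    return mu_fused, var_fused

# Hypothetical example: a visual heading cue at 10 deg (sigma = 2 deg) and a
# vestibular cue at 14 deg (sigma = 4 deg); the fused estimate lies closer
# to the more reliable visual cue.
print(fuse_gaussian_cues([10.0, 14.0], [2.0, 4.0]))  # (10.8, 3.2)
```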
bioRxiv
July 2025
Simons Center for Computational Physical Chemistry, Dept. of Chemistry, New York University, NY, USA.
Biology (Basel)
June 2025
Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China.
Self-motion perception is a complex multisensory process that relies on the integration of various sensory signals, particularly visual and vestibular inputs, to construct stable and unified perceptions. It is essential for spatial navigation and effective interaction with the environment. This review systematically explores the mechanisms and computational principles underlying visual-vestibular integration in self-motion perception.
bioRxiv
May 2025
Center for Neural Science, NYU; Center for Soft Matter Research, Department of Physics, NYU.
Stability is a fundamental requirement for both biological and engineered neural circuits, yet it is surprisingly difficult to guarantee in the presence of recurrent interactions. Standard linear dynamical models of recurrent networks are unreasonably sensitive to the precise values of the synaptic weights, since stability requires all eigenvalues of the recurrent matrix to lie within the unit circle. Here we demonstrate, both theoretically and numerically, that an arbitrary recurrent neural network can remain stable even when its spectral radius exceeds 1, provided it incorporates divisive normalization, a dynamical neural operation that suppresses the responses of individual neurons.
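As a rough numerical illustration of the qualitative claim, one can compare a plain linear recurrence against the same recurrence with an activity-dependent divisive term. The pooled-norm denominator used below is a simple form chosen for the sketch, not the dynamical model analyzed in the paper; the network size and weight scaling are likewise arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
W = rng.normal(size=(n, n))
W *= 1.5 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius to 1.5 (> 1)
u = rng.normal(size=n)                           # constant external drive

x_lin = np.zeros(n)  # plain linear recurrent network
x_dn = np.zeros(n)   # same weights, with a pooled divisive term
for _ in range(200):
    x_lin = W @ x_lin + u                                  # diverges: spectral radius > 1
    x_dn = (W @ x_dn + u) / (1.0 + np.linalg.norm(x_dn))   # denominator grows with activity

print(np.linalg.norm(x_lin))  # astronomically large
print(np.linalg.norm(x_dn))   # stays bounded
```

Dividing by 1 + ||x|| caps the update norm at roughly the spectral norm of W plus the drive, so the normalized state remains bounded even though the underlying linear dynamics are unstable.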
PLoS Comput Biol
May 2025
Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia.
Sparse coding, predictive coding, and divisive normalization have each been identified as principles underlying the function of neural circuits in many parts of the brain, supported by substantial experimental evidence. However, the connections between these related principles remain poorly understood. Sparse coding and predictive coding can be reconciled into a learning framework with predictive structure and sparse responses, termed sparse/predictive coding.
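For readers unfamiliar with the "sparse responses" part of that framework, sparse-coding inference under a linear generative model can be written as a LASSO problem and solved by ISTA. The sketch below is a generic textbook version with an arbitrary random dictionary, not the sparse/predictive coding model the abstract refers to.

```python
import numpy as np

def sparse_code_ista(D, x, lam=0.2, n_iter=500):
    """Infer a sparse code s for input x under x ≈ D @ s by ISTA
    (proximal gradient on 0.5*||x - D s||^2 + lam*||s||_1)."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ s - x)               # gradient of the reconstruction term
        z = s - grad / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
s_true = np.zeros(256)
s_true[rng.choice(256, 5, replace=False)] = 1.0
x = D @ s_true                                 # noiseless input built from 5 atoms
s_hat = sparse_code_ista(D, x)
print(np.count_nonzero(np.abs(s_hat) > 1e-3))  # a small fraction of the 256 coefficients
```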