Learning in a changing, uncertain environment is a difficult problem. A popular solution is to predict future observations and then use surprising outcomes to update those predictions. However, humans also have a sense of confidence that characterizes the precision of their predictions. Bayesian models use a confidence-weighting principle to regulate learning: for a given surprise, the update is smaller when the confidence about the prediction was higher. Prior behavioral evidence indicates that human learning adheres to this confidence-weighting principle. Here, we explored the human brain dynamics underlying the confidence-weighting of learning using magnetoencephalography (MEG). During our volatile probability learning task, subjects' confidence reports conformed to Bayesian inference. MEG revealed several stimulus-evoked brain responses whose amplitude reflected surprise, and some of them were further shaped by confidence: surprise amplified the stimulus-evoked response whereas confidence dampened it. Confidence about predictions also modulated several aspects of the brain state: pupil-linked arousal and beta-range (15-30 Hz) oscillations. The brain state in turn modulated specific stimulus-evoked surprise responses following the confidence-weighting principle. Our results thus indicate that there exist, in the human brain, signals reflecting surprise that are dampened by confidence in a way that is appropriate for learning according to Bayesian inference. They also suggest a mechanism for confidence-weighted learning: confidence about predictions would modulate intrinsic properties of the brain state to amplify or dampen surprise responses evoked by discrepant observations.
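The confidence-weighting principle stated above — for a given surprise, the update is smaller when confidence in the prediction is higher — can be illustrated with a minimal precision-weighted (Kalman-style) update. This is a generic sketch of the principle, not the paper's actual model; the function name and parameter values are illustrative.

```python
def confidence_weighted_update(mu, prior_var, x, obs_var=1.0):
    """One precision-weighted update step.

    The learning rate (gain) shrinks as prior confidence
    (1 / prior_var) grows, so the same surprise produces a
    smaller update when confidence is high.
    """
    gain = prior_var / (prior_var + obs_var)  # learning rate
    surprise = x - mu                         # prediction error
    mu_new = mu + gain * surprise
    var_new = (1.0 - gain) * prior_var        # confidence increases
    return mu_new, var_new

# Same prediction (0.5) and same observation (1.0),
# under low vs. high confidence:
low_conf = confidence_weighted_update(0.5, prior_var=4.0, x=1.0)
high_conf = confidence_weighted_update(0.5, prior_var=0.25, x=1.0)
# The low-confidence observer updates more (0.5 -> 0.9)
# than the high-confidence one (0.5 -> 0.6).
```

Here the gain plays the role the abstract attributes to confidence-weighting: it scales the surprise-driven update down as the prediction's precision goes up.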
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7292419
DOI: http://dx.doi.org/10.1371/journal.pcbi.1007935
IEEE J Biomed Health Inform
July 2025
The integration of artificial intelligence (AI) into medical image analysis has transformed healthcare, offering unprecedented precision in diagnosis, treatment planning, and disease monitoring. However, its adoption within the Internet of Medical Things (IoMT) raises significant challenges related to transparency, trustworthiness, and security. This paper introduces a novel Explainable AI (XAI) framework tailored for Medical Cyber-Physical Systems (MCPS), addressing these challenges by combining deep neural networks with symbolic knowledge reasoning to deliver clinically interpretable insights.
Neuroimage
March 2023
Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France.
PLoS Comput Biol
June 2020
Cognitive Neuroimaging Unit, NeuroSpin center, Institute for Life Sciences Frédéric Joliot, Fundamental Research Division, Commissariat à l'Energie Atomique et aux énergies alternatives, INSERM, Université Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, France.