Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Reward-predictive items capture attention even when task-irrelevant. While value-driven attention typically generalizes to stimuli sharing critical reward-associated features (e.g., red), recent evidence suggests an alternative generalization mechanism based on feature relationships (e.g., redder). Here, we investigated whether relational coding of reward-associated features operates across different learning contexts by manipulating search mode and target-distractor similarity. Results showed that singleton search training induced value-driven relational attention regardless of target-distractor similarity (Experiments 1a-1b). In contrast, feature search training produced value-driven relational attention only when targets and distractors were dissimilar, but not when they were similar (Experiments 2a-2c). These findings indicate that coarse selection training (singleton search or feature search among dissimilar items) promotes relational coding of reward-associated features, while fine selection (feature search among similar items) engages precise feature coding. The precision of target selection during reward learning thus critically determines value-driven attentional mechanisms.
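
To make the contrast between the two generalization mechanisms concrete, here is a purely illustrative toy sketch (not the authors' paradigm or analysis): it compares an absolute feature rule, which prioritizes an item only when its color closely matches the learned reward color ("red"), with a relational rule, which prioritizes whichever item is closer to the reward color than its competitor ("redder"). The hue values, tolerance, and function names are hypothetical.

```python
# Illustrative toy contrast between feature-specific and relational coding of a
# reward-associated color. Not the authors' model; all values are hypothetical.

REWARD_HUE = 0.0  # learned reward color (red) on a 0-360 degree hue wheel


def hue_distance(a: float, b: float) -> float:
    """Shortest angular distance between two hues on the color wheel."""
    d = abs(a - b) % 360
    return min(d, 360 - d)


def absolute_priority(item_hue: float, tolerance: float = 15.0) -> bool:
    """Feature-specific coding: the item is prioritized only if its hue falls
    within a fixed tolerance of the learned reward hue ("is it red?")."""
    return hue_distance(item_hue, REWARD_HUE) <= tolerance


def relational_priority(item_hue: float, competitor_hue: float) -> bool:
    """Relational coding: the item is prioritized if it is closer to the reward
    hue than the competing item ("is it redder?"), whatever its absolute hue."""
    return hue_distance(item_hue, REWARD_HUE) < hue_distance(competitor_hue, REWARD_HUE)


if __name__ == "__main__":
    # An orange item (hue 30) next to a yellow competitor (hue 60) fails the
    # strict feature match but wins the relational comparison, mirroring
    # generalization from "red" to "redder".
    print(absolute_priority(30.0))          # False
    print(relational_priority(30.0, 60.0))  # True
```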

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12311181 (PMC)
http://dx.doi.org/10.1038/s41539-025-00342-1 (DOI Listing)

Publication Analysis

Top Keywords

reward-associated features (12)
feature search (12)
selection reward (8)
reward learning (8)
value-driven attention (8)
relational coding (8)
coding reward-associated (8)
target-distractor similarity (8)
singleton search (8)
search training (8)

Similar Publications

Systematic review of the value-driven attentional capture paradigm in visual attention studies: Evidence from 52 experiments.

Acta Psychol (Amst)

August 2025

Behavioral Epidemiology, Institute of Clinical Psychology and Psychotherapy, Technische Universität Dresden, Germany; General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.

Human visual attention is strongly influenced by rewards, affecting both top-down and bottom-up attentional processes. The value-driven attentional capture (VDAC) paradigm, introduced by Anderson et al. (2011b), has had a significant impact on the field of visual attention.

Sensory perception requires the processing of stimuli from both sides of the body. Yet, how neurons bind stimulus information across the hemispheres to create a unified percept remains unknown. Here we perform large-scale recordings from neurons in the left and right primary somatosensory cortex (S1) in mice performing a task requiring active whisker touch to coordinate stimulus features across hemispheres.

Attention is rapidly directed to stimuli associated with rewards in past experience, independent of current task goals and the physical salience of stimuli. However, despite the robust attentional priority given to reward-associated features, studies often indicate negligible priority toward previously rewarded locations. Here, we propose a relational account of value-driven attention, a mechanism that relies on the spatial relationship between items to achieve value-guided selection.

Humans use selective attention to prioritize visual features, like color or shape, as well as discrete spatial locations, and these effects are sensitive to the experience of reward. Reward-associated features and locations are accordingly prioritized from early in the visual hierarchy. Attention is also sensitive to the establishment of visual objects: selection of one constituent object part often leads to prioritization of other locations on that object.
