Odors are often considered to be powerful memory cues, yet early olfactory paired-associate (PA) studies suggested that they are less effective than other sensory cues and particularly prone to proactive interference (PI). Research with other modalities indicates semantic similarity increases retroactive interference (RI). Two experiments compared olfactory PA memory to verbal and auditory PA memory, focusing on the role of semantic congruency. In Experiment 1, a mixed design tested the efficiency of odors as a PA cue under semantically congruent versus incongruent conditions. One hundred one participants were randomly assigned to 4 groups, each experiencing one of the following cross-modal pairs: olfactory-visual and verbal-visual (as a control group for olfactory-visual), auditory-visual, and verbal-visual (as a control group for auditory-visual). Replicating prior work, odors were less effective than verbal or auditory cues. However, semantic congruency enhanced performance across modalities, with a greater effect for olfactory PAs. Experiment 2 employed a mixed design to assess PI and RI in olfactory versus verbal PA memory. Thirty-eight participants were randomly assigned to one of two cross-modal pair groups (olfactory-visual and verbal-visual). RI was more pronounced than PI for both modalities, with RI levels increasing when the second pair of associations was semantically congruent, but the first was not. Semantic congruency consistently enhanced olfactory retrieval cues, supporting its role in mitigating interference effects. These findings demonstrate that while odors are less effective associative cues than verbal or auditory stimuli, semantic congruency significantly improves their utility, highlighting the nuanced interplay between modality and memory processes.
DOI: http://dx.doi.org/10.1093/chemse/bjaf014
Sensors (Basel)
August 2025
Jingjinji Spatial Intelligent Perception Collaborative Innovation Center, Hebei University of Engineering, Handan 056009, China.
With the increasing complexity of human-computer interaction scenarios, conventional digital human facial expression systems show notable limitations in handling multi-emotion co-occurrence, dynamic expression, and semantic responsiveness. This paper proposes a digital human system framework that integrates multimodal emotion recognition and compound facial expression generation. The system establishes a complete pipeline for real-time interaction and compound emotional expression, following the sequence "speech semantic parsing - multimodal emotion recognition - Action Unit (AU)-level 3D facial expression control."
Exp Psychol
August 2025
School of Psychology, Shanghai University of Sport, Shanghai, PR China.
Previous task-switching research typically assumed that event-related potentials related to task switching, such as the target-locked switch positivity difference wave (SPDW), were indicators of cognitive control during task-set control. This study challenges that assumption. In two conventional numeric task-switching experiments (odd-even and low-high tasks), unknown symbols represented common Arabic numerals.
Neuropsychologia
August 2025
The University of Sydney, Brain and Mind Centre, Sydney, New South Wales, Australia; The University of Sydney, School of Psychology, Sydney, New South Wales, Australia. Electronic address:
Mounting evidence points to the role of semantic knowledge in modulating how we perceive, and subsequently remember, experiences. In healthy aging, prior knowledge becomes increasingly important to guide visual exploration during episodic encoding and retrieval and can hinder performance when incongruous with to-be-learned information. It remains unclear, however, how the dynamic integration of visual information and prior knowledge is altered in neurodegenerative disorders, and whether this impacts oculomotor behaviour.
Front Neurosci
July 2025
Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands.
Numerous studies have explored crossmodal correspondences, yet have so far lacked insight into how crossmodal correspondences influence audiovisual emotional integration and aesthetic beauty. Our study investigated the behavioral and neural underpinnings of audiovisual emotional congruency in art perception. Participants viewed 'happy' or 'sad' paintings in an unimodal (visual) condition or paired with congruent or incongruent music (crossmodal condition).
Neurosci Conscious
August 2025
Graduate School of Letters, Kyoto University, Yoshidahonmachi, Sakyo Ward, Kyoto 606-8501, Japan.
Recent studies on brief scene perception have revealed that adults discriminate between what they see and do not see in a photograph with varying degrees of confidence. In this study, we attempt to extend previous studies by asking whether these perceptual/cognitive abilities are already established in preschool and school-aged children. In Experiments 1 (N = 122) and 2 (N = 205, registered report), using an online experiment, we briefly presented a natural scene (267 ms in Experiment 1 and 133 ms in Experiment 2) to participants and, subsequently, asked them whether a small patch was included in the original scene.