Article Abstract

Over the past 10 years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence-dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution.

Protocol registration: the stage 1 protocol for this Registered Report was accepted in principle on 5 November 2018. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.7611443.v1.
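The methodological contrast at the heart of the abstract, dimensions forced to be orthogonal versus dimensions allowed to correlate, can be illustrated with principal component analysis, whose extracted dimensions are uncorrelated by construction. The following is a minimal sketch on simulated rating data, not the authors' analysis pipeline; the number of faces, traits and components are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated mean trait ratings for 60 faces on 6 hypothetical traits
# (invented stand-ins, not the study's stimuli or trait list).
n_faces, n_traits = 60, 6
ratings = rng.normal(size=(n_faces, n_traits))

# Centre the ratings and extract components via SVD (classic PCA).
centred = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
components = vt[:2]           # first two dimensions, analogous to "valence", "dominance"
scores = centred @ components.T

# PCA forces the dimensions to be orthogonal: the component scores are
# uncorrelated by construction, unlike an oblique (rotated) solution,
# which would let the two dimensions correlate.
r = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]
print(abs(r) < 1e-8)          # prints True
```

An oblique rotation (e.g. promax) relaxes exactly this constraint, which is why allowing correlated dimensions can reveal regional differences that an orthogonal extraction hides.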

Source: http://dx.doi.org/10.1038/s41562-020-01007-2

Publication Analysis

Top Keywords

valence-dominance model (16)
oosterhof todorov's (12)
regions (6)
model (6)
regions valence-dominance (4)
model social (4)
social perception (4)
perception apply? (4)
apply? 10 years (4)
10 years oosterhof (4)

Similar Publications

Emotion recognition via EEG signals and facial analysis has become one of the key aspects of human-computer interaction and affective computing, enabling scientists to gain insight into human behavior. Classic emotion recognition methods usually rely on controlled stimuli, such as music and images, which limits ecological validity and scope. This paper proposes the EmoTrans model, which uses the DEAP dataset to analyze physiological signals and facial video recordings.


Psychological studies have revealed that people can easily draw inferences regarding others' personal traits from their faces, which has a considerable impact on social decisions. Impressions from faces can be summarized into two orthogonal dimensions: valence and dominance. Owing to their prominence in social relationships, faces appear in paintings across all ages and cultures.


Humans perceive a range of basic emotional connotations from music, such as joy, sadness, and fear, which can be decoded from structural characteristics of music, such as rhythm, harmony, and timbre. However, despite theory and evidence that music has multiple social functions, little research has examined whether music conveys emotions specifically associated with social status and social connection. This investigation aimed to determine whether the social emotions of dominance and affiliation are perceived in music and whether structural features of music predict social emotions, just as they predict basic emotions.


First impressions of a person, including social judgements, are often based on appearance. The widely accepted valence-dominance model of face perception (Oosterhof and Todorov 2008, 11087-11092)…


In recent years, emotion recognition based on electroencephalography (EEG) has received growing interest in the brain-computer interface (BCI) field. Neuroscience research indicates that the left and right brain hemispheres show different activity under different emotional states, which could be an important principle for designing deep learning (DL) models for emotion recognition. Moreover, owing to the nonstationarity of EEG signals, convolution kernels of a single size may not sufficiently extract the abundant features needed for EEG classification tasks.
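The single-kernel-size limitation mentioned above is commonly addressed with parallel convolution branches of different widths whose outputs are stacked, so that both short and long temporal patterns in the signal are captured. A minimal NumPy sketch of that multi-scale idea (not the paper's architecture; the kernel sizes, signal and random filters are invented for illustration):

```python
import numpy as np

def conv1d_same(signal, kernel):
    """'Same'-length 1-D convolution of a single-channel signal."""
    return np.convolve(signal, kernel, mode="same")

def multi_scale_features(signal, kernel_sizes=(3, 7, 15)):
    """Filter one channel with several kernel widths and stack the results.
    The kernels here are random stand-ins for learned filters."""
    rng = np.random.default_rng(42)
    branches = [conv1d_same(signal, rng.normal(size=k)) for k in kernel_sizes]
    return np.stack(branches)   # shape: (n_scales, n_samples)

# Toy single-channel "EEG" signal: a sine wave sampled at 256 points.
eeg = np.sin(np.linspace(0, 8 * np.pi, 256))
features = multi_scale_features(eeg)
print(features.shape)           # prints (3, 256)
```

In a trained network the random kernels would be learned filters, but the structural point is the same: each branch sees the signal at a different temporal scale, and the stacked output feeds the next layer.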
