In part due to correspondence in time, seeing how a speaking body moves can impact how speech is apprehended. Despite this, little is known about whether and which specific kinematic features of co-speech movements are relevant for their integration with speech. The current study uses machine learning techniques to investigate how co-speech gestures can be quantified to model vocal acoustics within an individual speaker. Specifically, we address whether kinetic descriptions of human movement are relevant for modeling their relationship with speech in time. To test this, we apply experimental manipulations that either highlight or obscure the relationship between co-speech movement kinematics and downward gravitational acceleration. Across two experiments, we provide evidence that quantifying co-speech movement as a function of its anisotropic relation to downward gravitational forces improves how well those co-speech movements can be used to predict prosodic dimensions of speech, as represented by the low-pass envelope. This study supports theoretical perspectives that invoke biomechanics to help explain speech-gesture synchrony and offers motivation for further behavioral or neuroimaging work investigating audiovisual integration and/or biological motion perception in the context of multimodal discourse.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058326 | PMC |
| http://dx.doi.org/10.1162/opmi_a_00196 | DOI Listing |
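As an illustrative sketch only, and not the published study's pipeline, the snippet below shows one way the quantities named in the abstract might be operationalized: a low-pass amplitude envelope of the speech signal, and wrist acceleration split into a gravity-aligned (vertical) component versus a horizontal magnitude, followed by a cross-validated regression of the envelope onto those kinematic features. The variable names, sampling rates, filter cutoff, and synthetic placeholder data are all assumptions; only the low-pass envelope and the gravity-anisotropic decomposition are taken from the abstract.

```python
# Illustrative sketch only (assumed names and synthetic placeholder data; not the
# published study's pipeline). It demonstrates (1) a low-pass amplitude envelope
# of speech and (2) wrist acceleration decomposed into a gravity-aligned vertical
# component vs. a horizontal magnitude, then asks how well the kinematic features
# predict the envelope under a cross-validated regression.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

FS_AUDIO = 16_000   # audio sampling rate in Hz (assumed)
FS_MOCAP = 100      # motion-capture sampling rate in Hz (assumed)
DURATION = 60       # seconds of synthetic data

def lowpass(x, cutoff_hz, fs, order=4):
    """Zero-phase Butterworth low-pass filter."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, x)

rng = np.random.default_rng(0)

# --- Speech envelope: Hilbert magnitude, low-passed below ~10 Hz, downsampled ---
audio = rng.standard_normal(FS_AUDIO * DURATION)             # placeholder waveform
envelope = lowpass(np.abs(hilbert(audio)), cutoff_hz=10, fs=FS_AUDIO)
envelope = envelope[:: FS_AUDIO // FS_MOCAP]                  # align to mocap rate

# --- Kinematics: gravity-aligned vs. horizontal acceleration, plus overall speed ---
wrist_xyz = rng.standard_normal((FS_MOCAP * DURATION, 3)).cumsum(axis=0)  # placeholder positions
velocity = np.gradient(wrist_xyz, 1 / FS_MOCAP, axis=0)
accel = np.gradient(velocity, 1 / FS_MOCAP, axis=0)
vertical_accel = accel[:, 2]                                  # z-axis assumed to point against gravity
horizontal_accel = np.linalg.norm(accel[:, :2], axis=1)       # magnitude orthogonal to gravity
speed = np.linalg.norm(velocity, axis=1)

features = np.column_stack([vertical_accel, horizontal_accel, speed])
n = min(len(features), len(envelope))

# --- How well do the kinematic features predict the speech envelope? ---
scores = cross_val_score(Ridge(alpha=1.0), features[:n], envelope[:n],
                         cv=5, scoring="r2")
print(f"Mean cross-validated R^2: {scores.mean():.3f}")
```

With the random placeholder data above, the cross-validated R² will hover around zero; with real audio and motion-capture recordings, comparing models that do and do not separate the gravity-aligned component is one way to probe the anisotropy claim described in the abstract.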
Brain Lang
August 2025
University of Chicago, United States.
Children learning structurally different languages display variability in the way they package semantic elements of a physical motion event in gesture, mirroring the patterns found in speech for the same events. In this study, we ask whether these differences extend to metaphorical motion events and, if so, when in development the patterns become evident. We studied the speech and gestures produced by 100 children learning English or Turkish (n = 50/language)-equally divided into 5 age groups: 3-4, 5-6, 7-8, 9-10, 11-12 years-when describing metaphorical motion events (e.
Cognition
November 2025
Departments of Psychology and Comparative Human Development, University of Chicago, United States of America.
Children who are exposed to minimal linguistic input can nevertheless introduce linguistic features into their communication systems at the level of morphology, syntax, and semantics (Goldin-Meadow, 2003a). However, it is not clear whether they can do so at the level of phonetics and phonology. This study asks whether congenitally deaf children, unable to learn spoken language and living in a hearing family without exposure to sign language, introduce phonology and phonetics into the gestural communication systems they create, called homesigns.
While multisensory super-additivity has been demonstrated in the context of visual articulation, it is unclear whether speech and co-speech gestures are similarly subject to super-additive integration. The current study investigates multisensory integration of speech and bodily gestures, testing whether biological motion signatures of co-speech gestures enhance cortical tracking of the speech envelope. We recorded EEG from 20 healthy adults as they watched a series of multimodal discourse clips from four conditions: AV congruent clips with co-speech gestures that were naturally aligned with speech, AV incongruent clips in which gestures were not aligned with the speech, audio-only clips in which speech was delivered in isolation, and video-only clips presenting the gesture content with no accompanying speech.
Psychon Bull Rev
May 2025
Department of Psychology, University of Warwick, Coventry, CV4 7AL, UK.
Iconicity is the resemblance or similarity between the form of a signal and its meaning. In two studies, we investigated whether adults interpret iconicity in speech and gesture via a modality-independent mechanism (Study 1, N = 40; Study 2, N = 348). Participants in both studies completed two verb-action matching tasks.