Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

In part due to correspondence in time, seeing how a speaking body moves can impact how speech is apprehended. Despite this, little is known about whether and which specific kinematic features of co-speech movements are relevant for their integration with speech. The current study uses machine learning techniques to investigate how co-speech gestures can be quantified to model vocal acoustics within an individual speaker. Specifically, we address whether kinetic descriptions of human movement are relevant for modeling their relationship with speech in time. To test this, we apply experimental manipulations that either highlight or obscure the relationship between co-speech movement kinematics and downward gravitational acceleration. Across two experiments, we provide evidence that quantifying co-speech movement as a function of its anisotropic relation to downward gravitational forces improves how well those co-speech movements can be used to predict prosodic dimensions of speech, as represented by the low-pass envelope. This study supports theoretical perspectives that invoke biomechanics to help explain speech-gesture synchrony and offers motivation for further behavioral or neuroimaging work investigating audiovisual integration and/or biological motion perception in the context of multimodal discourse.
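
To make the abstract's two key quantities concrete, the low-pass speech envelope and gravity-referenced movement kinematics, here is a minimal Python sketch. It is not the authors' pipeline: the inputs audio (a mono waveform at sr Hz) and wrist_xyz (motion-capture positions at mocap_fs Hz, with the z-axis assumed to point opposite to gravity) are hypothetical stand-ins, and ridge regression substitutes for whatever machine-learning model the study actually used.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample
from sklearn.linear_model import Ridge

# Synthetic stand-ins so the sketch runs end to end; replace with real data.
sr, mocap_fs, seconds = 16_000, 100, 10
rng = np.random.default_rng(0)
audio = rng.standard_normal(sr * seconds)                                     # mono speech waveform
wrist_xyz = np.cumsum(rng.standard_normal((mocap_fs * seconds, 3)), axis=0)   # x, y, z positions

def speech_envelope(signal, fs, cutoff_hz=10.0):
    """Low-pass amplitude envelope: the prosodic representation named in the abstract."""
    env = np.abs(hilbert(signal))                    # instantaneous amplitude
    b, a = butter(4, cutoff_hz, btype="low", fs=fs)
    return filtfilt(b, a, env)                       # smooth to a slow, prosodic time scale

def gravity_referenced_kinematics(positions, fs):
    """Split wrist velocity into a gravity-aligned (vertical) part and horizontal speed."""
    vel = np.gradient(positions, 1.0 / fs, axis=0)
    vertical = vel[:, 2]                             # assumes z opposes gravity
    horizontal = np.linalg.norm(vel[:, :2], axis=1)
    # Keeping both signed and unsigned vertical velocity preserves the up/down asymmetry.
    return np.column_stack([vertical, np.abs(vertical), horizontal])

env = speech_envelope(audio, sr)
feats = gravity_referenced_kinematics(wrist_xyz, mocap_fs)
env_at_mocap_rate = resample(env, feats.shape[0])    # crude temporal alignment for the sketch

# Ridge regression stands in for the study's machine-learning model.
model = Ridge(alpha=1.0).fit(feats, env_at_mocap_rate)
print("in-sample R^2:", round(model.score(feats, env_at_mocap_rate), 3))
```

One way to read "anisotropic relation to downward gravitational forces" in this framing is that movement with versus against gravity is allowed to contribute differently to the prediction, here via the separation of signed vertical velocity from horizontal speed.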

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058326
DOI: http://dx.doi.org/10.1162/opmi_a_00196

Publication Analysis

Top Keywords

co-speech gestures (8)
co-speech movements (8)
co-speech movement (8)
downward gravitational (8)
co-speech (6)
decoding prosodic (4)
prosodic motion (4)
motion capture (4)
capture data (4)
data gravity (4)

Similar Publications

Children learning structurally different languages display variability in the way they package semantic elements of a physical motion event in gesture, mirroring the patterns found in speech for the same events. In this study, we ask whether these differences extend to metaphorical motion events and, if so, when in development the patterns become evident. We studied the speech and gestures produced by 100 children learning English or Turkish (n = 50/language), equally divided into 5 age groups (3-4, 5-6, 7-8, 9-10, 11-12 years), when describing metaphorical motion events (e.

Children who are exposed to minimal linguistic input can nevertheless introduce linguistic features into their communication systems at the level of morphology, syntax, and semantics (Goldin-Meadow, 2003a). However, it is not clear whether they can do so at the level of phonetics and phonology. This study asks whether congenitally deaf children, unable to learn spoken language and living in a hearing family without exposure to sign language, introduce phonology and phonetics into the gestural communication systems they create, called homesigns.

While multisensory super-additivity has been demonstrated in the context of visual articulation, it is unclear whether speech and co-speech gestures are similarly subject to super-additive integration. The current study investigates multisensory integration of speech and bodily gestures, testing whether biological motion signatures of co-speech gestures enhance cortical tracking of the speech envelope. We recorded EEG from 20 healthy adults as they watched a series of multimodal discourse clips from four conditions: AV congruent clips with co-speech gestures that were naturally aligned with speech, AV incongruent clips in which gestures were not aligned with the speech, audio-only clips in which speech was delivered in isolation, and video-only clips presenting the gesture content with no accompanying speech.

Iconicity is the resemblance or similarity between the form of a signal and its meaning. In two studies, we investigated whether adults interpret iconicity in speech and gesture via a modality-independent mechanism (Study 1, N = 40; Study 2, N = 348). Participants in both studies completed two verb-action matching tasks.
