We asked how caregivers use spatial language and deictic gestures, in addition to object labeling, with their infants during spatial play, and how such spatial multimodal input scaffolds infants' in-the-moment attention. Forty-nine North American middle-class racially and ethnically diverse caregivers (four fathers, 45 mothers; 51% White and not Hispanic) and their 9-month-old infants (15 girls, 34 boys; 43% White and not Hispanic) played with a puzzle while wearing head-mounted eye trackers. Results showed that caregivers' speech with spatial words or object labels extended the duration of infants' looking at the puzzle, compared to looking accompanied by utterances without such words. Notably, the combination of spatial and labeling language was more effective than either type alone. Furthermore, infants' attention was longer when caregivers used deictic gestures (e.g., pointing) compared to when they did not use these gestures, highlighting the support of multimodal communication. Together these results add to our understanding of how the content of caregivers' speech, and not simply the presence of speech, along with deictic gestures may shape infants' attention in real time.
DOI: http://dx.doi.org/10.1037/dev0002068
Dev Psychol
September 2025
Center for Mind and Brain, University of California, Davis.
Cognition
November 2025
Departments of Psychology and Comparative Human Development, University of Chicago, United States of America.
Children who are exposed to minimal linguistic input can nevertheless introduce linguistic features into their communication systems at the level of morphology, syntax, and semantics (Goldin-Meadow, 2003a). However, it is not clear whether they can do so at the level of phonetics and phonology. This study asks whether congenitally deaf children, unable to learn spoken language and living in a hearing family without exposure to sign language, introduce phonology and phonetics into the gestural communication systems they create, called homesigns.
IEEE Trans Vis Comput Graph
October 2025
In mixed reality (MR) avatar-mediated telepresence, avatar movement must be adjusted to convey the user's intent in a dissimilar space. This paper presents a novel neural network-based framework designed for translating upper-body gestures, which adjusts virtual avatar movements in dissimilar environments to accurately reflect the user's intended gestures in real-time. Our framework translates a wide range of upper-body gestures, including eye gaze, deictic gestures, free-form gestures, and the transitions between them.
J Child Lang
April 2025
Autism, Bilingualism, Cognitive and Communicative Development Research Group (ABCCD), Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland.
Co-speech gestures accompany or replace speech in communication. Studies investigating how autistic children understand them are scarce and inconsistent and often focus on decontextualized, iconic gestures. This study compared 73 three- to twelve-year-old autistic children with 73 neurotypical peers matched on age, non-verbal IQ, and morphosyntax.
J Exp Psychol Learn Mem Cogn
March 2025
School of Psychology, University of East Anglia.
When people communicate, they use a combination of modalities-speech, gesture, and eye gaze-to engage and transmit information to an addressee. Spatial deictic communication is a paradigmatic case, with spatial demonstratives frequently co-occurring with eye gaze and pointing gestures to draw the attention of an addressee to an object location.