
Article Abstract

We asked how caregivers use spatial language and deictic gestures, in addition to object labeling, with their infants during spatial play, and how such spatial multimodal input scaffolds infants' in-the-moment attention. Forty-nine North American middle-class racially and ethnically diverse caregivers (four fathers, 45 mothers; 51% White and not Hispanic) and their 9-month-old infants (15 girls, 34 boys; 43% White and not Hispanic) played with a puzzle while wearing head-mounted eye trackers. Results showed that caregivers' speech with spatial words or object labels extended the duration of infants' looking at the puzzle, compared to looking accompanied by utterances without such words. Notably, the combination of spatial and labeling language was more effective than either type alone. Furthermore, infants' attention was longer when caregivers used deictic gestures (e.g., pointing) compared to when they did not, highlighting the supportive role of multimodal communication. Together, these results add to our understanding of how the content of caregivers' speech, and not simply the presence of speech, along with deictic gestures may shape infants' attention in real time. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Source
http://dx.doi.org/10.1037/dev0002068

Publication Analysis

Top Keywords

deictic gestures (12)
spatial language (8)
infants' in-the-moment (8)
in-the-moment attention (8)
spatial play (8)
white hispanic (8)
caregivers' speech (8)
infants' attention (8)
spatial (7)
infants' (5)

Similar Publications


Children who are exposed to minimal linguistic input can nevertheless introduce linguistic features into their communication systems at the level of morphology, syntax, and semantics (Goldin-Meadow, 2003a). However, it is not clear whether they can do so at the level of phonetics and phonology. This study asks whether congenitally deaf children, unable to learn spoken language and living in a hearing family without exposure to sign language, introduce phonology and phonetics into the gestural communication systems they create, called homesigns.


In mixed reality (MR) avatar-mediated telepresence, avatar movement must be adjusted to convey the user's intent in a dissimilar space. This paper presents a novel neural network-based framework designed for translating upper-body gestures, which adjusts virtual avatar movements in dissimilar environments to accurately reflect the user's intended gestures in real-time. Our framework translates a wide range of upper-body gestures, including eye gaze, deictic gestures, free-form gestures, and the transitions between them.


Co-speech gesture comprehension in autistic children.

J Child Lang

April 2025

Autism, Bilingualism, Cognitive and Communicative Development Research Group (ABCCD), Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland.

Co-speech gestures accompany or replace speech in communication. Studies investigating how autistic children understand them are scarce and inconsistent, and often focus on decontextualized, iconic gestures. This study compared 73 three- to twelve-year-old autistic children with 73 neurotypical peers matched on age, non-verbal IQ, and morphosyntax.


When people communicate, they use a combination of modalities (speech, gesture, and eye gaze) to engage and transmit information to an addressee. Spatial deictic communication is a paradigmatic case, with spatial demonstratives frequently co-occurring with eye gaze and pointing gestures to draw the attention of an addressee to an object location.
