Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Parenting and children's temperament are important influences on language development. However, temperament may reflect prior parenting, and parenting effects may reflect genes common to parents and children. In 561 U.S. adoptees (57% male) and their birth and rearing parents (70% and 92% White, 13% and 4% African American, and 7% and 2% Latinx, respectively), this study demonstrated how genetic propensity for temperament affects language development, and how this relates to parenting. Genetic propensity for negative emotionality inversely predicted language at 27 months (β = -.15) and evoked greater maternal warmth (β = .12), whereas propensity for surgency positively predicted language at 4.5 years (β = .20), especially when warmth was low. Parental warmth (β = .15) and sensitivity (β = .19) further contributed to language development, controlling for common gene effects.
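
The moderation result ("especially when warmth was low") corresponds to a regression with a propensity-by-warmth interaction term. Below is a minimal sketch of that model structure in Python using simulated data; the variable names, effect sizes, and noise model are illustrative assumptions, not the study's data or code.

```python
# Illustrative sketch of a moderated regression like the one the
# abstract reports: a genetic propensity score for surgency predicting
# child language, moderated by parental warmth. All data are simulated;
# names and coefficients are assumptions, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 561  # sample size from the abstract
df = pd.DataFrame({
    "pgs_surgency": rng.standard_normal(n),  # hypothetical propensity score
    "warmth": rng.standard_normal(n),        # hypothetical warmth measure
})
# A negative interaction makes the propensity effect strongest when
# warmth is low, mirroring the pattern described in the abstract.
df["language"] = (0.20 * df["pgs_surgency"]
                  + 0.15 * df["warmth"]
                  - 0.10 * df["pgs_surgency"] * df["warmth"]
                  + rng.standard_normal(n))

# "a * b" in the formula expands to both main effects plus the a:b term.
model = smf.ols("language ~ pgs_surgency * warmth", data=df).fit()
print(model.params)
```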

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11023813 (PMC)
http://dx.doi.org/10.1111/cdev.14021 (DOI Listing)

Publication Analysis

Top Keywords

language development (16)
genetic propensity (12)
early language (8)
propensity negative (8)
negative emotionality (8)
predicted language (8)
language (7)
parenting (5)
disentangling genetic (4)
genetic environmental (4)

Similar Publications

Applications of Federated Large Language Model for Adverse Drug Reactions Prediction: Scoping Review.

J Med Internet Res

September 2025

Department of Information Systems and Cybersecurity, The University of Texas at San Antonio, 1 UTSA Circle, San Antonio, TX, 78249, United States, 1 (210) 458-6300.

Background: Adverse drug reactions (ADRs) present significant challenges in health care, where early prevention is vital for effective treatment and patient safety. Traditional supervised learning methods struggle with heterogeneous health care data, which are often unstructured, subject to regulatory constraints, and limited by restricted access to sensitive personally identifiable information.

Objective: This review aims to explore the potential of federated learning (FL) combined with natural language processing and large language models (LLMs) to enhance ADR prediction.
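
As a concrete reference point for how federated learning avoids pooling raw records, here is a minimal FedAvg-style sketch in Python: each site trains on its own private (toy) data and only model weights travel to the server for averaging. The logistic-regression model, client data, and hyperparameters are illustrative assumptions, not a method from the review.

```python
# Minimal FedAvg sketch: clients train locally, the server averages
# weights. Only parameters leave each site, never patient records.
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))          # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)   # gradient step on local data
    return w

def fed_avg(clients, rounds=10, dim=8):
    """Server loop: broadcast weights, collect local updates, average them."""
    w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_sgd(w.copy(), X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients])
        w = np.average(updates, axis=0, weights=sizes)  # weight by data size
    return w

rng = np.random.default_rng(0)
clients = [(rng.standard_normal((100, 8)), rng.integers(0, 2, 100))
           for _ in range(3)]  # three hospitals with private toy data
print(fed_avg(clients))
```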

Evaluating anti-LGBTQIA+ medical bias in large language models.

PLOS Digit Health

September 2025

Department of Dermatology, Stanford University, Stanford, California, United States of America.

Large Language Models (LLMs) are increasingly deployed in clinical settings for tasks ranging from patient communication to decision support. While these models have been shown to exhibit race-based and binary-gender biases, anti-LGBTQIA+ bias remains understudied despite documented healthcare disparities affecting these populations. In this work, we evaluated the potential of LLMs to propagate anti-LGBTQIA+ medical bias and misinformation.
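
A common design for this kind of audit is a paired-vignette probe: hold the clinical presentation fixed, vary only the patient's stated identity, and grade how the responses diverge. The sketch below illustrates that harness; `query_model`, the prompt template, and the identity list are hypothetical stand-ins, not the study's materials.

```python
# Paired-vignette bias probe: identical clinical facts, identity varied.
# Systematic divergence in responses across identities is the bias
# signal to grade. query_model is a placeholder for the LLM under test.
IDENTITIES = ["a heterosexual man", "a gay man", "a transgender woman"]
VIGNETTE = ("A 34-year-old patient, {identity}, presents with two weeks "
            "of fatigue and unintentional weight loss. What workup do "
            "you recommend?")

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real LLM client (e.g., an HTTP call) here.
    return "STUB RESPONSE for: " + prompt

def run_paired_probe():
    responses = {}
    for identity in IDENTITIES:
        responses[identity] = query_model(VIGNETTE.format(identity=identity))
    # Clinically identical presentations should yield matching advice.
    return responses

print(run_paired_probe())
```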

Purpose: Since its inception, the National Joint Committee for the Communication Needs of Persons with Severe Disabilities (NJC) has focused specifically on advocating for individuals with significant communication support needs resulting from intellectual disability. The purpose of this review article is to describe the history of terminology used to describe this group of individuals, share the results of a recent survey completed by 102 members of our NJC Network, and discuss the implications of decisions regarding terminology in the NJC's ongoing advocacy efforts.

Method: The history of terminology used to describe people with intellectual disability is documented by reviewing the literature, policies, professional organizations, and self-advocacy groups that have used various terms from the early 20th century to the present day.

In the context of the rapid development of large language models (LLMs), contrastive learning has become widely adopted because it bypasses costly data annotation by leveraging vast amounts of web data for model training. However, this widespread use raises significant concerns about data privacy protection. Unlearnable Examples (UEs), a technique that disrupts model learning by perturbing data, effectively prevent unauthorized models from misusing sensitive data.
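
One well-known instantiation of this idea is error-minimizing noise: perturb each sample within a small budget so that a surrogate model's loss is already near zero, leaving little gradient signal for an unauthorized trainer to exploit. The PyTorch sketch below illustrates the idea on a toy linear model; it is a generic illustration under assumed hyperparameters, not the exact algorithm of any particular paper.

```python
# Error-minimizing noise sketch: find a bounded perturbation that
# MINIMIZES a surrogate model's loss, so the data appears "already
# learned" and contributes little to unauthorized training.
import torch
import torch.nn.functional as F

def unlearnable_noise(model, x, y, eps=8/255, steps=20, lr=0.01):
    """Per-sample noise that drives the surrogate's loss toward zero."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # descend: make loss smaller
            delta.clamp_(-eps, eps)           # stay within the L-inf budget
        delta.grad.zero_()
    return delta.detach()

model = torch.nn.Linear(32, 2)                # toy surrogate model
x, y = torch.randn(16, 32), torch.randint(0, 2, (16,))
x_unlearnable = x + unlearnable_noise(model, x, y)
```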

Background: Children in low- and middle-income countries face obstacles to optimal language and cognitive development due to a variety of factors related to adverse socioeconomic conditions. One of these factors is compromised caregiver-child interaction and the associated pressures on parenting. Early development interventions, such as dialogic book-sharing (DBS), target this factor; evidence from both high-income countries and urban areas of low- and middle-income countries shows that such interventions enhance caregiver-child interaction and yield associated benefits for child cognitive and socioemotional development.
