Effects of syntactic expectations on speech segmentation.

J Exp Psychol Hum Percept Perform

Department of Experimental Psychology, University of Bristol, Bristol, United Kingdom.

Published: August 2007



Article Abstract

Although the effect of acoustic cues on speech segmentation has been extensively investigated, the role of higher order information (e.g., syntax) has received less attention. Here, the authors examined whether syntactic expectations based on subject-verb agreement have an effect on segmentation and whether they do so despite conflicting acoustic cues. Although participants detected target words faster in phrases containing adequate acoustic cues ("spins" in take spins and "pins" in takes pins), this acoustic effect was suppressed when the phrases were appended to a plural context (those women take spins/*takes pins [with the asterisk indicating a syntactically unacceptable parse]). The syntactically congruent target ("spins") was detected faster regardless of the acoustics. However, a singular context (that woman *take spins/takes pins) had no effect on segmentation, and the results resembled those of the neutral phrases. Subsequent experiments showed that the discrepancy was due to the relative time course of syntactic expectations and acoustic cues. Taken together, the data suggest that syntactic knowledge can facilitate segmentation but that its effect is substantially attenuated if conflicting acoustic cues are encountered before full realization of the syntactic constraint.


Source: http://dx.doi.org/10.1037/0096-1523.33.4.960


Similar Publications

Early sensory experience can exert lasting perceptual consequences. For example, a brief period of auditory deprivation early in life can lead to persistent spatial hearing deficits.


Animal display behaviors, such as advertisement songs, are flashy and attention grabbing by necessity. In order to balance the costs and benefits of such signals, individuals must be able to assess both their own energetic state and their social environment. In this study, we investigated the role of leptin, a hormonal signal of high energy balance, in regulating the vocal advertisement display of Alston's singing mouse.


Auditory and cognitive contributions to recognition of degraded speech in noise: Individual differences among older adults.

PLoS One

September 2025

Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, United States of America.

This study examined individual differences in how older adults with normal hearing (ONH) or hearing impairment (OHI) allocate auditory and cognitive resources during speech recognition in noise at equal recognition. Associations between predictor variables and speech recognition were assessed across three datasets that each included 15-16 conditions involving temporally filtered speech. These datasets involved (1) degraded spectral cues, (2) competing speech-modulated noise, and (3) combined degraded spectral cues in speech-modulated noise.


Objectives: In recent years, there has been a profound increase in the use of remote online communication as a supplement to, and in many cases a replacement for, in-person interactions. While online communication tools hold potential to improve accessibility, previous studies have suggested that increased reliance on remote communication poses additional challenges for people with hearing loss, including those with a cochlear implant (CI). This study aimed to investigate the preferences and speech-reception performance of adults with a CI during online communication.


This study presents a novel privacy-preserving deep learning framework for accurately classifying fine-grained hygiene and water-usage events in restroom environments. Leveraging a comprehensive, curated dataset comprising approximately 460 min of stereo audio recordings from five acoustically diverse bathrooms, our method robustly identifies 11 distinct events, including nuanced variations in faucet counts and flow rates, toilet flushing, and handwashing activities. Stereo audio inputs were transformed into triple-channel Mel spectrograms using an adaptive one-dimensional convolutional neural network (1D-CNN), dynamically synthesizing spatial cues to enhance discriminative power.
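The spectrogram preprocessing described in this abstract (stereo audio projected into a multi-channel Mel representation before a CNN) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the sample rate, FFT size, hop length, Mel-band count, and the choice of left/right/mean as the three channels are all assumptions made here for illustration.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters with centers evenly spaced on the Mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mel_spectrogram(x, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, apply a Hann window, take the power spectrum,
    # then project onto the Mel filterbank.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    return mel_filterbank(sr, n_fft, n_mels) @ power.T  # (n_mels, n_frames)

def triple_channel(stereo, **kw):
    # One plausible reading of "triple-channel" (an assumption here):
    # left, right, and their mean, the latter carrying no inter-channel cue.
    left, right = stereo[:, 0], stereo[:, 1]
    mono = 0.5 * (left + right)
    return np.stack([mel_spectrogram(c, **kw) for c in (left, right, mono)])
```

With one second of 16 kHz stereo input, `triple_channel` returns an array of shape `(3, n_mels, n_frames)`, ready to be fed to a CNN as a three-channel image-like tensor.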
