Sentential, lexical, and acoustic effects on the perception of word boundaries.

J Acoust Soc Am

Department of Experimental Psychology, University of Bristol, Bristol, Avon, United Kingdom.

Published: July 2007


Category Ranking: 98%

Total Visits: 921

Avg Visit Duration: 2 minutes

Citations: 20

Article Abstract

This study investigates the effects of sentential context, lexical knowledge, and acoustic cues on the segmentation of connected speech. Listeners heard near-homophonous phrases (e.g., /plʌmpaɪ/ for "plum pie" versus "plump eye") in isolation, in a sentential context, or in a lexically biasing context. The sentential context and the acoustic cues were piloted to provide strong versus mild support for one segmentation alternative (plum pie) or the other (plump eye). The lexically biasing context favored one segmentation or the other (e.g., /skʌmpaɪ/ for "scum pie" versus *"scump eye," and /lʌmpaɪ/ for "lump eye" versus *"lum pie," with the asterisk denoting a lexically unacceptable parse). A forced-choice task, in which listeners indicated which of two words they thought they heard (e.g., "pie" or "eye"), revealed compensatory mechanisms between the sources of information. The effect of both sentential and lexical contexts on segmentation responses was larger when the acoustic cues were mild than when they were strong. Moreover, lexical effects were accompanied by a reduction in sensitivity to the acoustic cues. Sentential context affected only the listeners' response criterion. The results highlight the graded, interactive, and flexible nature of multicue segmentation, as well as functional differences between sentential and lexical contributions to this process.
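The contrast the abstract draws between "sensitivity" and "response criterion" comes from signal detection theory, where sensitivity (d′) indexes how well listeners discriminate the acoustic cues and criterion (c) indexes response bias. As a minimal sketch of how these two measures are computed (not the authors' analysis code; the function, counts, and correction below are assumptions for illustration):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') and criterion (c).

    Hypothetical mapping: "pie" responses to plum-pie-cued tokens are
    hits; "pie" responses to plump-eye-cued tokens are false alarms.
    A log-linear correction (+0.5 / +1) avoids infinite z-scores when
    a rate reaches 0 or 1.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF

    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    d_prime = z(hit_rate) - z(fa_rate)             # cue sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias

    return d_prime, criterion

# Hypothetical counts from 40 trials per condition
print(sdt_measures(hits=32, misses=8, false_alarms=10, correct_rejections=30))
```

On this reading, a lexical context that lowers d′ reduces sensitivity to the acoustic cues themselves, whereas a sentential context that shifts c only biases the listener toward one response without changing discriminability.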


Source: http://dx.doi.org/10.1121/1.2735105

Publication Analysis

Top Keywords

sentential context: 16
acoustic cues: 16
sentential lexical: 12
pie" versus: 8
lexically biasing: 8
biasing context: 8
sentential: 7
context: 6
acoustic: 5
segmentation: 5
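Counts like these presumably reflect unigram and bigram frequencies over the indexed text. A minimal sketch of how such a tally could be produced, assuming plain whitespace tokenization (the function and placeholder text are assumptions, not the site's actual indexing pipeline):

```python
from collections import Counter

def ngram_counts(text, n_max=2):
    """Count lowercased unigrams and bigrams in whitespace-split text."""
    tokens = text.lower().split()
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            counts[" ".join(tokens[i:i + n])] += 1
    return counts

indexed_text = "sentential context and acoustic cues ..."  # placeholder
for phrase, freq in ngram_counts(indexed_text).most_common(10):
    print(f"{phrase}: {freq}")
```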

Similar Publications

Rapid turn-taking in conversation suggests that speakers plan part of their turn in advance, but evidence for this is scarce. Using context-driven picture naming, we examined whether (a) speakers preplan lexical-semantic and phonological information at the word level in constraining sentential contexts, and (b) phonological preplanning encompasses the whole word. Analysis of naming response times (RTs) showed that constraining contexts enable preplanning of both lexical-semantic and phonological representations (Experiment 1).


Recognition of spoken words with mispronounced lexical prosody in Japanese.

J Acoust Soc Am

June 2025

Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1 Komaba Meguro-city, Tokyo 153-8902, Japan.

Lexical prosody plays a crucial role in Japanese spoken word recognition. However, Japanese listeners can still recognize spoken words easily even when they are produced with mispronounced lexical prosody, i.e.


The present study examined how Arabic-Hebrew-English trilinguals process double and triple cognate words in their third language (L3) across three different experiments. Utilizing the same set of critical cognate items, trilinguals completed a semantic relatedness task, a lexical decision task, or a sentence reading eye-tracking task. The results revealed a significant cognate facilitation effect in the semantic relatedness task, with no consistent differences in the magnitude of facilitation across double and triple cognates, suggesting that both L1 and L2 are activated during L3 processing.


Spoken language understanding requires the integration of incoming speech with representations of the preceding context. How rich the information is that listeners maintain in these contextual representations has been a long-standing question. Under one view, subcategorical information about the preceding input-including any uncertainty about the underlying categories-is quickly discarded due to memory limitations.

Article Synopsis
  • Predictive processing in key brain areas like the TPJ and IFG helps us anticipate the meanings of sentences, which improves language comprehension efficiency.
  • An fMRI study with 22 participants revealed that stronger connectivity in these areas occurs when the upcoming semantic information is highly predictable, influencing how the brain prepares for and integrates new information.
  • The findings suggest a dynamic interaction between different brain regions based on the predictability of content, highlighting both top-down semantic predictions and bottom-up integration in understanding language.