Previous studies have shown that reading experience reshapes speech processing. Orthography may be implemented in the brain either by restructuring phonological representations or by being co-activated during spoken word recognition. This study used event-related functional magnetic resonance imaging and functional connectivity analysis to examine the neural mechanisms underlying two types of orthographic effects in a Chinese auditory semantic categorization task: phonology-to-orthography consistency (POC) and homophone density (HD). We found that the POC effects originated in the speech network, suggesting that sublexical orthographic information can reorganize preexisting phonological representations when learning to read. In contrast, the HD effects were localized to the left fusiform and lingual gyri, suggesting that lexical orthographic knowledge may be activated online during spoken word recognition. These results demonstrate the distinct natures and neural mechanisms of the POC and HD effects on Chinese spoken word recognition.
DOI: http://dx.doi.org/10.1016/j.bandl.2021.104961
Psychophysiology
September 2025
Department of Developmental Psychology and Socialisation, University of Padova, Padova, Italy.
Prediction models usually assume that highly constraining contexts allow the pre-activation of phonological information. However, the evidence for phonological prediction is mixed and controversial. In this study, we implement a paradigm that capitalizes on the phonological errors produced by L2 speakers to investigate whether specific phonological predictions are made based on speaker identity.
Nat Commun
August 2025
Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland.
In blind individuals, language processing activates not only classic language networks, but also the "visual" cortex. What is represented in visual areas when blind individuals process language? Here, we show that area V5/MT in blind individuals, but not other visual areas, responds differently to spoken nouns and verbs. We further show that this effect is present for concrete nouns and verbs, but not abstract or pseudo nouns and verbs.
Psychon Bull Rev
August 2025
Department of Linguistics, Stanford University, Building 460, Margaret Jacks Hall 450 Jane Stanford Way, Stanford, CA, 94305, USA.
Over the past 35 years, it has become established that mental representations of language include fine-grained acoustic details stored in episodic memory. The empirical foundation for this finding comes from a series of word recognition experiments showing that participants were better at remembering words repeated by the same talker than words repeated by a different talker (the talker-specificity effect). This effect has been widely replicated, but exclusively with isolated, generally monosyllabic words as the object of study.
Neurobiol Lang (Camb)
August 2025
The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel.
Written language production is a fundamental aspect of daily communication, yet the neural pathways supporting it are far less studied than those for spoken language production. This study evaluated the contributions of speech-production pathways to written word production, specifically focusing on the central processes of word spelling rather than the motor production processes that support handwriting. Seventy-three English-speaking, neurotypical adults completed a spelling-to-dictation task and underwent diffusion MRI scans.
Comput Biol Med
August 2025
Dept. of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, India.
Background: Speech recognition enables machines to transcribe spoken audio streams and is used by professionals across a range of industries that require accurate transcription. In the context of authentication, speech recognition can serve as a biometric factor to verify a user's identity, and it can be especially helpful for individuals with disabilities, particularly those with speech impairments.