During face-to-face communication, the perception and recognition of facial movements can facilitate individuals' understanding of what is said. Facial movements are a form of complex biological motion. Separate neural pathways are thought to process (1) simple, nonbiological motion, with an obligatory waypoint in the motion-sensitive visual middle temporal area (V5/MT), and (2) complex biological motion. Here, we present findings that challenge this dichotomy. Neuronavigated offline transcranial magnetic stimulation (TMS) over V5/MT in 24 participants (17 females and 7 males) led to increased response times in the recognition of simple, nonbiological motion as well as in visual speech recognition, compared with TMS over the vertex, an active control region. TMS of area V5/MT also reduced the practice effects on response times that are typically observed over time in both visual speech and motion recognition tasks. Our findings provide the first indication that area V5/MT causally influences the recognition of visual speech.

In everyday face-to-face communication, speech comprehension is often facilitated by viewing a speaker's facial movements. Several brain areas contribute to the recognition of visual speech. One area of interest is the motion-sensitive visual middle temporal area (V5/MT), which has been associated with the perception of simple, nonbiological motion, such as moving dots, as well as of more complex, biological motion, such as visual speech. Here, we demonstrate using noninvasive brain stimulation that area V5/MT is causally relevant for recognizing visual speech. This finding provides new insights into the neural mechanisms that support the perception of human communication signals, which will help guide future research in typically developed individuals and populations with communication difficulties.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10634547 | PMC
http://dx.doi.org/10.1523/JNEUROSCI.0975-23.2023 | DOI Listing
Cogn Psychol
September 2025
Graduate School of Engineering, Kochi University of Technology, Kami, Kochi, Japan.
Prior research on global-local processing has focused on hierarchical objects in the visual modality, whereas the real world involves multisensory interactions. The present study investigated whether the simultaneous presentation of auditory stimuli influences the recognition of visually hierarchical objects. We added four types of auditory stimuli to the traditional visual hierarchical-letters paradigm: no sound (visual-only), a pure tone, a spoken letter that was congruent with the required response (response-congruent), or a spoken letter that was incongruent with it (response-incongruent).
J Speech Lang Hear Res
September 2025
Department of Communication Sciences & Disorders, Montclair State University, Bloomfield, NJ.
Purpose: Residual speech sound disorder (RSSD) is a high-prevalence condition that can limit children's academic and social participation, with negative consequences for overall well-being. Previous studies have described visual biofeedback as a promising option for RSSD, but results have been inconclusive due to study design limitations and small sample sizes.
Method: In a preregistered randomized controlled trial, 108 children aged 9-15 years with RSSD affecting American English /ɹ/ were randomly assigned to receive treatment incorporating visual biofeedback (subdivided into ultrasound and visual-acoustic types) or a comparison condition of motor-based treatment consistent with current best practices in speech therapy.
Am J Speech Lang Pathol
September 2025
Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park.
Purpose: The current study investigated the impact of a short mobile training implemented in peer pairs to teach the Communicating Choices-CVI (Peers) strategy to support interactions with students with multiple disabilities.
Method: A pretest-posttest control group design was used to evaluate the effects of the training created on the INSTRUCT app, which used a checklist of steps with video models to teach elementary-age peers a strategy to structure opportunities for students with multiple disabilities to communicate choices. Peers were randomly assigned to the experimental group (n = 10) or control group (n = 10) and then video-recorded while interacting with students with multiple disabilities during one pretest and one posttest interaction in their typical educational settings.
J Voice
September 2025
Department of Clinical Science, Intervention and Technology (CLINTEC), Division of Speech and Language Pathology, Karolinska Institutet, SE-171 76, Stockholm, Sweden.
Objective: Subglottal pressure is a clinically relevant parameter for the assessment of voice disorders and correlates with fundamental frequency (fo) and sound pressure level (SPL). The aim of the current study was to evaluate the use of a visual target providing feedback of fo and SPL during subglottal pressure measurements, in habitual voice and at phonation threshold level, with a syllable string and a phrase, in order to improve the reliability of subglottal pressure measurements.
Methods: Data from 12 vocally healthy women (29-61 years of age) were analyzed.
Disabil Rehabil Assist Technol
September 2025
School of Foreign Languages, Ningbo University of Technology, Ningbo, China.
Speech and language rehabilitation is essential for people with communication disorders arising from neurological conditions, developmental delays, or physical disabilities. Leveraging advances in deep learning, we introduce an improved multimodal rehabilitation pipeline that integrates audio, video, and text information to provide therapy that adapts to the individual patient. The approach uses a multimodal hierarchical transformer with cross-attention fusion, which jointly models speech acoustics, facial dynamics, lip articulation, and linguistic context.
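To make the cross-attention fusion idea concrete, the following is a minimal sketch of how one modality's tokens can attend to another's. This illustrates the general technique named in the abstract, not the authors' actual architecture; the module name, dimensions, and the use of PyTorch are all assumptions.

```python
# Minimal sketch of cross-attention fusion between two modality streams.
# All names and dimensions are illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuses an audio stream with a video stream: audio tokens act as
    queries, video tokens provide keys and values."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # audio: (batch, T_audio, d_model); video: (batch, T_video, d_model)
        fused, _ = self.attn(query=audio, key=video, value=video)
        return self.norm(audio + fused)  # residual connection, then layer norm

# Usage with random features standing in for per-modality encoder outputs
audio = torch.randn(2, 100, 256)  # e.g., frame-level acoustic features
video = torch.randn(2, 50, 256)   # e.g., lip-region visual features
fused = CrossAttentionFusion()(audio, video)
print(fused.shape)  # torch.Size([2, 100, 256])
```

In a full pipeline, a block like this would typically sit between per-modality encoders and a task head; the hierarchical structure and text fusion described in the abstract would add further such layers.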