Cerebral/cortical visual impairment (CVI) is a leading cause of pediatric visual impairment in the United States and other developed countries, and it is increasingly diagnosed in developing nations as improved care raises survival among children born prematurely or with other risk factors for CVI. Despite this, there is currently no objective, standardized method to quantify the diverse visual impairments seen in young and developmentally delayed children with CVI. We propose a method that combines eye tracking with an image-based generative artificial intelligence (AI) model (SegCLIP) to assess higher- and lower-level visual characteristics in children with CVI. We will recruit 40 participants with CVI (aged 12 months to 12 years) and 40 age-matched controls, who will view a series of images on a monitor while eye gaze position is recorded with an eye tracker. SegCLIP will be prompted to generate saliency maps for each image in the experimental protocol. The saliency maps (12 in total) will highlight areas of interest corresponding to specific visual features, allowing a range of individual visual characteristics to be analyzed. Eye-tracking fixation maps will then be compared with the saliency maps to calculate fixation saliency values, each assigned as the intensity of the saliency-map pixel at the fixation location. Fixation saliency values will be compared between CVI and control participants, and will also be correlated with scores on a functional vision assessment, the CVI Range-CR. We expect fixation saliency values for visual characteristics that require higher-level processing to be significantly lower in CVI participants than in controls, whereas values for lower-level visual characteristics will be similar or higher in CVI participants. Furthermore, we anticipate that fixation saliency values will correlate significantly with scores on corresponding CVI Range-CR items. Together, these findings would suggest that AI-enabled saliency analysis using eye tracking can objectively quantify abnormalities of lower- and higher-order visual processing in children with CVI. This novel technique has the potential to guide individualized interventions and to serve as an outcome measure in future clinical trials.
Full text: PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11782282 | DOI: http://dx.doi.org/10.3389/fnhum.2024.1506286
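The fixation saliency computation described above reduces to sampling a saliency map at each recorded fixation. Below is a minimal Python sketch, assuming the saliency maps are grayscale arrays normalized to [0, 1] and fixations are (x, y) pixel coordinates; the function name, toy data, and the Mann-Whitney group comparison are illustrative assumptions, not details taken from the study protocol.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def fixation_saliency_values(saliency_map, fixations):
    """Sample the saliency-map intensity at each fixation location.

    saliency_map: 2D array normalized to [0, 1] (assumed format).
    fixations: iterable of (x, y) pixel coordinates from the eye tracker.
    """
    h, w = saliency_map.shape
    values = []
    for x, y in fixations:
        # Clamp to image bounds in case of small calibration drift.
        xi = int(np.clip(round(x), 0, w - 1))
        yi = int(np.clip(round(y), 0, h - 1))
        values.append(saliency_map[yi, xi])  # row index = y, column index = x
    return np.array(values)

# Illustrative group comparison on fabricated toy data (not study results):
rng = np.random.default_rng(0)
saliency = rng.random((1080, 1920))  # stand-in for a SegCLIP saliency map
cvi_scores = [fixation_saliency_values(saliency, rng.uniform(0, 1000, (50, 2))).mean()
              for _ in range(40)]
control_scores = [fixation_saliency_values(saliency, rng.uniform(0, 1000, (50, 2))).mean()
                  for _ in range(40)]
stat, p = mannwhitneyu(cvi_scores, control_scores)
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```

In the actual protocol the saliency maps would come from prompting SegCLIP rather than random noise; the toy data here only exercises the sampling and comparison steps.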
Maturitas
August 2025
Turku PET Centre, University of Turku and Åbo Akademi University, Finland; Turku University Hospital, Turku, Finland; Department of Psychology, University of Turku, Finland.
Objectives: Faces and bodies serve as important cues of physical attractiveness and reproductive fitness. Previous studies indicate that there are sex-related differences in the visual processing of erotic stimuli. We investigated gaze patterns and sex differences during sexual perception.
Dev Psychol
August 2025
Department of Linguistics, University of Potsdam.
Studies suggest that infants initially show universal discrimination abilities. However, this picture is based largely on evidence from Indo-European languages. It has also been proposed that infants' speech sound discrimination is affected by acoustic salience, such that acoustically subtle contrasts are not discriminated until the end of an infant's first year.
Sensors (Basel)
July 2025
School of Physics, Engineering and Computer Science (SPECS), University of Hertfordshire, Hatfield AL10 9AB, UK.
As large language models (LLMs) and vision-language models (VLMs) are increasingly used in robotics, a crucial question arises: to what extent do these models replicate human-like cognitive processes, particularly within socially interactive contexts? Whilst these models demonstrate impressive multimodal reasoning and perception capabilities, their cognitive plausibility remains underexplored. In this study, we address this gap by using human visual attention as a behavioural proxy for cognition in a naturalistic human-robot interaction (HRI) scenario. Eye-tracking data were previously collected from participants engaging in social human-human interactions, providing frame-level gaze fixations as a human attentional ground truth.
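One standard way to score frame-level agreement between a model's attention and human gaze ground truth is Normalized Scanpath Saliency (NSS). The abstract does not name its metric, so the sketch below is an assumed, illustrative choice rather than the study's method.

```python
import numpy as np

def nss(attention_map, fixations):
    """Normalized Scanpath Saliency: z-score the model's attention map,
    then average its values at the human fixation locations. Higher NSS
    means the model attends where humans actually looked."""
    z = (attention_map - attention_map.mean()) / (attention_map.std() + 1e-8)
    ys = np.clip(np.round([f[1] for f in fixations]).astype(int), 0, z.shape[0] - 1)
    xs = np.clip(np.round([f[0] for f in fixations]).astype(int), 0, z.shape[1] - 1)
    return z[ys, xs].mean()

# Toy usage with a fabricated attention map and fixations:
attn = np.random.rand(224, 224)          # stand-in for a VLM attention map
fix = [(120, 80), (130, 85), (60, 200)]  # (x, y) human fixations for one frame
print(f"NSS = {nss(attn, fix):.3f}")
```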
Behav Res Methods
August 2025
Department of Life Sciences, University of New Hampshire at Manchester, 88 Commercial St, Manchester, NH, 03101, USA.
Ever since its introduction to vision research by Bonneh and colleagues in 2001, motion-induced blindness (MIB) has spawned a great deal of scholarly activity. However, a survey of common MIB methods reveals several issues: newer studies tend simply to replicate the methodologies of earlier work, it remains unclear which methods are optimal for the MIB task, and many MIB studies omit crucial details about the task. These issues stem from two decades of MIB research proceeding without an updated set of guidelines and considerations for using the MIB task.
Ergonomics
July 2025
School of Psychology, University of Nottingham, University Park, Nottingham, NG7 2RD, UK.
High-risk incidents require responders to rapidly detect, sample, and interpret critical visual information. To understand how experience shapes these abilities, we used mobile eye-tracking to examine expertise-related differences in the gaze behaviour of Authorised Firearms Officers during simulated tactical scenarios. Receiver operating characteristic (ROC) analysis revealed that the number, duration, and horizontal spread of fixations moderately discriminated between expert and novice officers, with experts tending to make more, but shorter, fixations that were distributed more broadly.
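The ROC analysis reported here can be reproduced in outline with a single fixation metric. A minimal sketch, assuming per-officer mean fixation durations and binary expert/novice labels; all values below are fabricated for illustration, not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Fabricated per-officer mean fixation durations (ms); 1 = expert, 0 = novice.
durations = np.array([210, 195, 230, 180, 250, 300, 320, 280, 310, 295])
is_expert = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Experts made shorter fixations, so lower duration should predict expertise;
# negate the metric so that higher scores correspond to the positive class.
auc = roc_auc_score(is_expert, -durations)
fpr, tpr, thresholds = roc_curve(is_expert, -durations)
print(f"AUC = {auc:.2f}")  # 0.5 = chance, 1.0 = perfect discrimination
```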