Foveated rendering improves hardware efficiency and perceived visual quality in virtual reality (VR) by leveraging real-time eye tracking: the system determines where the user is looking and renders high-resolution graphics only in the foveal region (the small area of the retina where visual acuity is highest), while the peripheral view is rendered at lower resolution. However, modern deep learning-based gaze-tracking solutions often exhibit a long-tail distribution of tracking errors, which can degrade the user experience and reduce the benefits of foveated rendering by causing misalignment and decreased visual quality. This paper introduces FovealNet, an advanced AI-driven gaze-tracking framework designed to optimize system performance by strategically enhancing gaze-tracking accuracy. To further reduce the implementation cost of the gaze-tracking algorithm, FovealNet employs an event-based cropping method that eliminates over 64.8% of irrelevant pixels from the input image. It also incorporates a simple yet effective token-pruning strategy that dynamically removes tokens on the fly without compromising tracking accuracy. Finally, to support different runtime rendering configurations, we propose a system performance-aware multi-resolution training strategy, allowing the gaze-tracking DNN to adapt and optimize overall system performance more effectively. Evaluation results demonstrate that FovealNet achieves at least a 1.42× speedup over previous methods and a 13% increase in perceptual quality of the foveated output. The code is available at https://github.com/wl3181/FovealNet.
DOI: http://dx.doi.org/10.1109/TVCG.2025.3549577
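The abstract above names two concrete cost-saving steps: cropping the eye image to the informative region and pruning low-importance tokens before the transformer layers. The minimal PyTorch sketch below illustrates what such steps could look like; it is not the FovealNet implementation, and all function names, thresholds, and tensor shapes are illustrative assumptions.

```python
# Hedged sketch of event-based cropping and token pruning (not the authors' code).
import torch

def event_based_crop(frame: torch.Tensor, event_mask: torch.Tensor, pad: int = 8) -> torch.Tensor:
    """Keep only the bounding box of pixels flagged as 'changed' by an event mask."""
    ys, xs = torch.nonzero(event_mask, as_tuple=True)
    if ys.numel() == 0:                       # no events: fall back to the full frame
        return frame
    h, w = frame.shape[-2:]
    y0, y1 = max(0, ys.min().item() - pad), min(h, ys.max().item() + pad + 1)
    x0, x1 = max(0, xs.min().item() - pad), min(w, xs.max().item() + pad + 1)
    return frame[..., y0:y1, x0:x1]

def prune_tokens(tokens: torch.Tensor, scores: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the top-k tokens by an importance score (e.g., attention received)."""
    n = tokens.shape[1]
    k = max(1, int(n * keep_ratio))
    idx = scores.topk(k, dim=1).indices                        # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])   # (B, k, D)
    return torch.gather(tokens, 1, idx)

# Toy usage: one single-channel eye image and random token-importance scores.
frame = torch.rand(1, 1, 240, 320)
events = torch.zeros(240, 320, dtype=torch.bool)
events[100:140, 150:210] = True               # pretend the pupil region moved
cropped = event_based_crop(frame, events)
tokens = torch.rand(1, 196, 64)               # (batch, tokens, embedding dim)
scores = torch.rand(1, 196)
kept = prune_tokens(tokens, scores, keep_ratio=0.35)
print(cropped.shape, kept.shape)
```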
J Vis Exp
August 2025
Marianne Bernadotte Centrum, Department for Clinical Neuroscience, Karolinska Institutet; St Erik Eye Hospital.
The present protocol evaluates the relative impact of visual and vestibular inputs during roll plane rotations using optokinetic, vestibular, and combined visuovestibular stimulations. Subjects underwent isolated visual rotations, whole-body vestibular rotations in darkness, and visuovestibular stimulations combining static visual scenes with head rotations. Dynamic and static eye movement gains, absolute amplitudes, velocities, and accelerations were measured alongside perceptual responses.
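As a rough illustration of the metrics named here, the sketch below (an assumption, not the published protocol) shows how a dynamic gain and the related amplitude, peak velocity, and peak acceleration could be derived from recorded eye and stimulus position traces.

```python
# Hedged sketch: eye-movement gain as the ratio of eye velocity to stimulus velocity.
import numpy as np

def movement_metrics(eye_deg: np.ndarray, stim_deg: np.ndarray, fs: float) -> dict:
    """eye_deg, stim_deg: position traces in degrees; fs: sampling rate in Hz."""
    eye_vel = np.gradient(eye_deg) * fs           # deg/s
    stim_vel = np.gradient(stim_deg) * fs
    eye_acc = np.gradient(eye_vel) * fs           # deg/s^2
    moving = np.abs(stim_vel) > 1.0               # samples where the stimulus actually moves
    gain = np.mean(np.abs(eye_vel[moving])) / np.mean(np.abs(stim_vel[moving]))
    return {
        "gain": gain,
        "amplitude_deg": eye_deg.max() - eye_deg.min(),
        "peak_velocity_deg_s": np.abs(eye_vel).max(),
        "peak_acceleration_deg_s2": np.abs(eye_acc).max(),
    }

# Toy traces: a 30 deg/s stimulus ramp and an eye response at roughly 80% gain.
fs = 500.0
t = np.arange(0, 2, 1 / fs)
stim = 30.0 * t
eye = 0.8 * stim + np.random.normal(0, 0.05, t.size)
print(movement_metrics(eye, stim, fs))
```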
J Integr Neurosci
August 2025
School of Aeronautic Science and Engineering, Beihang University, 100191 Beijing, China.
Background: Pilots often experience mental fatigue during task performance, accompanied by fluctuations in positive (e.g., joy) and negative (e.
Ann Neurosci
September 2025
Rekhi Centre of Excellence for the Science of Happiness, Indian Institute of Technology, Kharagpur, West Bengal, India.
Background: Creativity involves the generation of novel ideas that are original and unique. It is a subjective process, and few studies are available in support of objective measures. Available tests of creativity are limited to questions related to an individual's trait and subjective responses.
Dev Med Child Neurol
September 2025
Murdoch Children's Research Institute, Parkville, VIC, Australia.
Aim: To examine visual engagement to social stimuli and response to joint attention in young children with neurofibromatosis type 1 (NF1) and typically developing peers (controls).
Method: Forty-five preschool children were studied cross-sectionally (mean age [SD] = 4 years 3 months [10 months]), 25 with NF1 and 20 typically developing controls. Participants passively viewed two eye-tracking paradigms.
Neural Regen Res
September 2025
Department of Biomedical Engineering, Tianjin University School of Medicine, Tianjin, China.
Electroencephalography-based brain-computer interfaces have revolutionized the integration of neural signals with technological systems, offering transformative solutions across neuroscience, biomedical engineering, and clinical practice. This review systematically analyzes advancements in electroencephalography-based brain-computer interface architectures, emphasizing four pillars, namely signal acquisition, paradigm design, decoding algorithms, and diverse applications. The aim is to bridge the gap between technology and application and guide future research.