Article Abstract

Ophthalmic practice involves the integration of diverse clinical data and interactive decision-making, posing challenges for traditional artificial intelligence (AI) systems. Visual question answering (VQA) addresses this by combining computer vision and natural language processing to interpret medical images through user-driven queries. Evolving from VQA, multimodal AI agents enable continuous dialogue, tool use and context-aware clinical decision support. This review explores recent developments in ophthalmic conversational AI, spanning theoretical advances and practical implementations. We highlight the transformative role of large language models (LLMs) in improving reasoning, adaptability and task execution. However, key obstacles remain, including limited multimodal datasets, absence of standardised evaluation protocols, and challenges in clinical integration. We outline these limitations and propose future research directions to support the development of robust, LLM-driven AI systems. Realising their full potential will depend on close collaboration between AI researchers and the ophthalmic community.
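As a toy illustration of the VQA pattern the abstract describes (a user-driven query answered against a medical image), the sketch below routes questions about pre-extracted ophthalmic findings. All names here (`FundusImage`, `VQAAgent`, the `findings` keys) are hypothetical; real systems pair a vision encoder with an LLM rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class FundusImage:
    # Toy stand-in for an ophthalmic image plus pre-extracted findings.
    patient_id: str
    findings: dict  # e.g. {"cup_to_disc_ratio": 0.7, "drusen": True}

class VQAAgent:
    """Illustrative question router. A production VQA system would
    encode the image and question jointly; this only sketches the
    query-in, answer-out interface."""

    def answer(self, image: FundusImage, question: str) -> str:
        q = question.lower()
        if "cup" in q or "glaucoma" in q:
            ratio = image.findings.get("cup_to_disc_ratio")
            if ratio is None:
                return "No disc measurement available."
            return f"Cup-to-disc ratio is {ratio}."
        if "drusen" in q or "amd" in q:
            if image.findings.get("drusen"):
                return "Drusen present."
            return "No drusen detected."
        return "Question not recognised by this toy router."

img = FundusImage("p001", {"cup_to_disc_ratio": 0.7, "drusen": True})
agent = VQAAgent()
print(agent.answer(img, "What is the cup-to-disc ratio?"))
```

The interactive, multi-turn agents discussed in the review would extend this single-shot interface with dialogue state and tool calls.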

Source
http://dx.doi.org/10.1136/bjo-2024-326097


Similar Publications

Background: Electronic health records (EHRs) have been linked to information overload, which can lead to cognitive fatigue, a precursor to burnout. This can cause health care providers to miss critical information and make clinical errors, leading to delays in care delivery. This challenge is particularly pronounced in medical intensive care units (ICUs), where patients are critically ill and their EHRs contain extensive and complex data.

EFMouse: A toolbox to model stimulation-induced electric fields in the mouse brain.

PLoS Comput Biol

September 2025

Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America.

Research into the mechanisms underlying neuromodulation by transcranial electrical stimulation (tES) using in-vivo animal models is key to overcoming experimental limitations in humans and essential to building a detailed understanding of the in-vivo consequences of tES. Insights from such animal models are needed to develop targeted and effective therapeutic applications of non-invasive brain stimulation in humans. The sheer difference in scale and geometry between animal models and the human brain contributes to the complexity of designing and interpreting animal studies.

Autonomous agents powered by Large Language Models are transforming AI, creating an imperative for the visualization area. However, our field's focus on a human in the sensemaking loop raises critical questions about autonomy, delegation, and coordination for such agentic visualization that preserve human agency while amplifying analytical capabilities. This paper addresses these questions by reinterpreting existing visualization systems with semi-automated or fully automatic AI components through an agentic lens.

Despite the functional specialization in visual cortex, there is growing evidence that the processing of chromatic and spatial visual features is intertwined. While past studies focused on visual field biases in retina and behavior, large-scale dependencies between coding of color and retinotopic space are largely unexplored in the cortex. Using a sample of male and female volunteers, we asked whether spatial color biases are shared across different human observers, and whether they are idiosyncratic for distinct areas.

Hydroxychloroquine Toxicity with Short Duration of Hydroxychloroquine Use and Unilateral Bull's Eye Maculopathy.

Retin Cases Brief Rep

September 2025

Doheny Eye Institute, David Geffen School of Medicine, University of California, Los Angeles, California, USA.

Purpose: To report the examination and multimodal imaging findings of a patient with unilateral bull's eye maculopathy.

Methods: A retrospective chart review of a 77-year-old patient with unilateral bull's eye maculopathy who presented to a tertiary retinal practice was performed. The patient's history, visual acuity, examination and multimodal imaging findings over five years of follow-up were described.
