Purpose: To compare the performance of a novel convolutional neural network (CNN) classifier and human graders in detecting angle closure in EyeCam (Clarity Medical Systems, Pleasanton, California, USA) goniophotographs.
Design: Retrospective cross-sectional study.
Methods: Subjects from the Chinese American Eye Study underwent EyeCam goniophotography in 4 angle quadrants. A CNN classifier based on the ResNet-50 architecture was trained to detect angle closure, defined as the inability to visualize the pigmented trabecular meshwork, using reference labels provided by a single experienced glaucoma specialist. The performance of the CNN classifier was assessed on an independent test dataset against reference labels from either the same glaucoma specialist or a panel of 3 glaucoma specialists, and was compared to that of 9 human graders with a range of clinical experience. Outcome measures included area under the receiver operating characteristic curve (AUC) metrics and Cohen kappa coefficients for the binary classification of open versus closed angle.
Results: The CNN classifier was developed using 29,706 open and 2,929 closed angle images. The independent test dataset was composed of 600 open and 400 closed angle images. The CNN classifier achieved excellent performance based on single-grader (AUC = 0.969) and consensus (AUC = 0.952) labels. The agreement between the CNN classifier and consensus labels (κ = 0.746) surpassed that of all non-reference human graders (κ = 0.578-0.702). Human grader agreement with consensus labels improved with clinical experience (P = 0.03).
Conclusion: A CNN classifier can effectively detect angle closure in goniophotographs with performance comparable to that of an experienced glaucoma specialist. This provides an automated method to support remote detection of patients at risk for primary angle closure glaucoma.
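As a rough illustration of the approach described above, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 for binary open/closed-angle classification and scores its predictions with AUC and Cohen's kappa. The framework choice (PyTorch and scikit-learn), hyperparameters, data loading, and 0.5 decision threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed setup, not the study's code): fine-tune ResNet-50 for
# binary angle-closure classification and evaluate with AUC and Cohen's kappa.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Pretrained ResNet-50 backbone with a single-logit head (0 = open, 1 = closed).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()                       # binary cross-entropy on the raw logit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(model, loader, device="cpu"):
    """One pass over labeled goniophotographs; `loader` yields (image batch, 0/1 labels)."""
    model.train()
    for images, targets in loader:
        optimizer.zero_grad()
        logits = model(images.to(device)).squeeze(1)
        loss = criterion(logits, targets.float().to(device))
        loss.backward()
        optimizer.step()

def evaluate(model, loader, device="cpu"):
    """Return AUC and Cohen's kappa of the model against reference labels."""
    model.eval()
    probs, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            logits = model(images.to(device)).squeeze(1)
            probs.extend(torch.sigmoid(logits).cpu().tolist())
            labels.extend(targets.tolist())
    auc = roc_auc_score(labels, probs)
    kappa = cohen_kappa_score(labels, [int(p >= 0.5) for p in probs])
    return auc, kappa
```

Using a single sigmoid logit keeps thresholding explicit, so the same predictions yield both the threshold-free AUC and a thresholded kappa against the reference labels.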
Full text: PMC (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8286291) | DOI: http://dx.doi.org/10.1016/j.ajo.2021.02.004
J Craniofac Surg
September 2025
Department of Oral and Maxillofacial Surgery, University of Ulsan Hospital, University of Ulsan College of Medicine.
This study aimed to develop a deep-learning model for the automatic classification of mandibular fractures using panoramic radiographs. A pretrained convolutional neural network (CNN) was used to classify fractures based on a novel, clinically relevant classification system. The dataset comprised 800 panoramic radiographs obtained from patients with facial trauma.
PLoS One
September 2025
School of Computer Science, CHART Laboratory, University of Nottingham, Nottingham, United Kingdom.
Background And Objective: Male fertility assessment through sperm morphology analysis remains a critical component of reproductive health evaluation, as abnormal sperm morphology is strongly correlated with reduced fertility rates and poor assisted reproductive technology outcomes. Traditional manual analysis performed by embryologists is time-intensive, subjective, and prone to significant inter-observer variability, with studies reporting up to 40% disagreement between expert evaluators. This research presents a novel deep learning framework combining Convolutional Block Attention Module (CBAM) with ResNet50 architecture and advanced deep feature engineering (DFE) techniques for automated, objective sperm morphology classification.
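For readers unfamiliar with CBAM, the minimal PyTorch sketch below shows a channel-then-spatial attention block of the kind such a framework attaches to a ResNet50 backbone. The reduction ratio, 7x7 spatial kernel, and placement of the block follow the original CBAM paper's defaults and are assumptions here, not this study's reported configuration.

```python
# Minimal CBAM block (assumed defaults): channel attention followed by spatial attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over globally average- and max-pooled features.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: convolution over the channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        x = x * torch.sigmoid(avg + mx)                     # reweight channels
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))           # reweight spatial locations
```

For example, `CBAM(2048)` could refine ResNet50's final 2048-channel feature map before global pooling, although where this study inserts its attention blocks is not stated in the snippet above.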
Neurodegener Dis Manag
September 2025
Department of Computer Science and Engineering, SRM Institute of Science and Technology (SRMIST), Tiruchirappalli Campus, Trichy, India.
Background: Alzheimer's disease (AD) is a neurodegenerative disease associated with cognitive deficits and dementia, so early detection of AD is a high priority.
Research Design And Methods: Here, images first undergo a pre-processing phase that combines image resizing and median filtering.
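A minimal sketch of such a resize-plus-median-filter pre-processing step is shown below using OpenCV; the 224x224 target size and 3x3 kernel are illustrative assumptions rather than the study's reported settings.

```python
# Assumed pre-processing sketch: grayscale load, resize, then median filtering.
import cv2

def preprocess(path, size=(224, 224), kernel=3):
    """Load an image as grayscale, resize it, and suppress impulse noise with a median filter."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    return cv2.medianBlur(img, kernel)  # kernel size must be an odd integer
```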
IEEE Trans Neural Syst Rehabil Eng
September 2025
Hand gesture recognition (HGR) is a key technology in human-computer interaction and human communication. This paper presents a lightweight, parameter-free attention convolutional neural network (LPA-CNN) approach leveraging Gramian Angular Field (GAF) transformation of A-mode ultrasound signals for HGR. First, this paper maps 1-dimensional (1D) A-mode ultrasound signals, collected from the forearm muscles of 10 healthy participants, into 2-dimensional (2D) images.
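The Gramian Angular Field step can be sketched in a few lines of NumPy. The summation variant (GASF) and min-max rescaling shown below are assumptions, since the snippet does not specify the exact formulation or any subsequent resizing.

```python
# Assumed GAF sketch: map a 1D signal to a 2D Gramian Angular Summation Field image.
import numpy as np

def gasf(signal):
    """Gramian Angular Summation Field of a 1D signal (returns an N x N matrix)."""
    x = np.asarray(signal, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale into [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar-encoding angle
    return np.cos(phi[:, None] + phi[None, :])        # pairwise angular summation
```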
IEEE J Biomed Health Inform
September 2025
Vision Transformer (ViT) applied to structural magnetic resonance images (sMRI) has demonstrated success in the diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, three key challenges have yet to be well addressed: 1) ViT requires a large labeled dataset to mitigate overfitting, while most current AD-related sMRI datasets fall short in sample size. 2) ViT neglects within-patch feature learning, e.
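As context for the within-patch feature learning point, the sketch below shows the standard ViT patch embedding, in which each non-overlapping patch is flattened and linearly projected into a token, leaving any structure inside a patch to later layers. The patch size and embedding dimension are generic defaults, not this paper's settings.

```python
# Standard ViT-style patch embedding (generic defaults, not the cited model's config).
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, in_chans=1, patch=16, dim=768):
        super().__init__()
        # A stride-p convolution is equivalent to cutting the image into p x p patches
        # and applying a shared linear projection to each flattened patch.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                          # x: (N, C, H, W)
        tokens = self.proj(x)                      # (N, dim, H/p, W/p)
        return tokens.flatten(2).transpose(1, 2)   # (N, num_patches, dim)
```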