Differential classification of prostate cancer grade group (GG) 2 and 3 tumors remains challenging, likely because of the subjective quantification of the percentage of Gleason pattern 4 (%GP4). Artificial intelligence assessment of %GP4 may improve its accuracy and reproducibility and provide information for prognosis prediction. To investigate this potential, a convolutional neural network (CNN) model was trained to objectively identify and quantify Gleason pattern (GP) 3 and 4 areas, estimate %GP4, and assess whether CNN-predicted %GP4 is associated with biochemical recurrence (BCR) risk in intermediate-risk GG 2 and 3 tumors. The study was conducted in a radical prostatectomy cohort (1999-2012) of African American men from the Henry Ford Health System (Detroit, Michigan). A CNN model that could discriminate 4 tissue types (stroma, benign glands, GP3 glands, and GP4 glands) was developed using histopathologic images containing GG 1 (n = 45) and 4 (n = 20) tumor foci. The CNN model was applied to GG 2 (n = 153) and 3 (n = 62) tumors for %GP4 estimation, and Cox proportional hazard modeling was used to assess the association of %GP4 and BCR, accounting for other clinicopathologic features including GG. The CNN model achieved an overall accuracy of 86% in distinguishing the 4 tissue types. Furthermore, CNN-predicted %GP4 was significantly higher in GG 3 than in GG 2 tumors (P = 7.2 × 10). %GP4 was associated with an increased risk of BCR (adjusted hazard ratio, 1.09 per 10% increase in %GP4; P = .010) in GG 2 and 3 tumors. Within GG 2 tumors specifically, %GP4 was more strongly associated with BCR (adjusted hazard ratio, 1.12; P = .006). Our findings demonstrate the feasibility of CNN-predicted %GP4 estimation, which is associated with BCR risk. 
This objective approach could be added to the standard pathologic assessment for patients with GG 2 and 3 tumors and act as a surrogate for specialist genitourinary pathologist evaluation when such consultation is not available.
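The %GP4 estimate described above can be illustrated with a minimal sketch: given a pixel-wise tissue map from the CNN, %GP4 is taken as the GP4 area over the total tumor (GP3 + GP4) area, and the reported adjusted hazard ratio per 10% increase can be converted back to a per-percent log-hazard coefficient. The class encoding, map size, and the %GP4 = GP4/(GP3 + GP4) definition are assumptions for illustration, not the authors' implementation.

```python
import math
import numpy as np

# Hypothetical per-pixel class map from the CNN:
# 0 = stroma, 1 = benign glands, 2 = GP3 glands, 3 = GP4 glands
rng = np.random.default_rng(0)
pred = rng.integers(0, 4, size=(512, 512))

# %GP4 as GP4 area over total tumor (GP3 + GP4) area
gp3_area = int(np.sum(pred == 2))
gp4_area = int(np.sum(pred == 3))
pct_gp4 = 100.0 * gp4_area / (gp3_area + gp4_area)

# An adjusted HR of 1.09 per 10% increase in %GP4 corresponds to a
# per-1% log-hazard coefficient of log(1.09) / 10 in a Cox model.
beta_per_pct = math.log(1.09) / 10.0
```

The conversion works in both directions: exponentiating ten times the per-percent coefficient recovers the per-10% hazard ratio.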
DOI: http://dx.doi.org/10.1016/j.modpat.2023.100157
Sci Justice
September 2025
Department of Multidisciplinary Radiological Science, The Graduate School of Dongseo University, 47 Jurye-ro, Sasang-gu, Busan 47011, Republic of Korea.
The identification of deceased individuals is essential in forensic investigations, particularly when primary identification methods such as odontology, fingerprint, or DNA analysis are unavailable. In such cases, implanted medical devices may serve as supplementary identifiers for positive identification. This study proposes deep learning-based methods for the automatic detection of metallic implants in scout images acquired from computed tomography (CT).
J Neurosci Methods
September 2025
Department of Computer Science and Engineering, IIT (ISM) Dhanbad, Dhanbad, 826004, Jharkhand, India.
Background: Interpretation of motor imagery (MI) in brain-computer interface (BCI) applications is largely driven by the use of electroencephalography (EEG) signals. However, precise classification in stroke patients remains challenging due to variability, non-stationarity, and abnormal EEG patterns.
New Methods: To address these challenges, an integrated architecture is proposed, combining multi-domain feature extraction with evolutionary optimization for enhanced EEG-based MI classification.
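One common ingredient of the multi-domain feature extraction named above is frequency-domain band power, which can be sketched as follows. The sampling rate, synthetic single-channel signal, and band edges are illustrative assumptions, not details from the paper.

```python
import numpy as np

fs = 250  # hypothetical sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic single-channel EEG: a 10 Hz mu rhythm plus noise
rng = np.random.default_rng(42)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via the FFT of the real-valued signal
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / x.size

def band_power(psd, freqs, lo, hi):
    """Sum spectral power within [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

mu_power = band_power(psd, freqs, 8, 13)     # mu band, relevant to MI
beta_power = band_power(psd, freqs, 13, 30)  # beta band
```

For the 10 Hz-dominated synthetic signal, the mu-band power exceeds the beta-band power; in an MI pipeline such band powers would be one feature family among several passed to the classifier.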
Neural Netw
September 2025
School of Automation, Southeast University, Nanjing, 210096, China; Advanced Ocean Institute of Southeast University Nantong, Nantong, 226010, China.
Unmanned Aerial Vehicle (UAV) tracking requires accurate target localization from aerial top-down perspectives while operating under the computational constraints of aerial platforms. Constrained by these limited resources, current mainstream UAV trackers predominantly employ a lightweight Convolutional Neural Network (CNN) extractor coupled with an appearance-based fusion mechanism. The absence of comprehensive target perception significantly constrains the balance between tracking accuracy and computational efficiency.
J Craniofac Surg
September 2025
Department of Oral and Maxillofacial Surgery, University of Ulsan Hospital, University of Ulsan College of Medicine.
This study aimed to develop a deep-learning model for the automatic classification of mandibular fractures using panoramic radiographs. A pretrained convolutional neural network (CNN) was used to classify fractures based on a novel, clinically relevant classification system. The dataset comprised 800 panoramic radiographs obtained from patients with facial trauma.
PLoS One
September 2025
School of Computer Science, CHART Laboratory, University of Nottingham, Nottingham, United Kingdom.
Background And Objective: Male fertility assessment through sperm morphology analysis remains a critical component of reproductive health evaluation, as abnormal sperm morphology is strongly correlated with reduced fertility rates and poor assisted reproductive technology outcomes. Traditional manual analysis performed by embryologists is time-intensive, subjective, and prone to significant inter-observer variability, with studies reporting up to 40% disagreement between expert evaluators. This research presents a novel deep learning framework combining Convolutional Block Attention Module (CBAM) with ResNet50 architecture and advanced deep feature engineering (DFE) techniques for automated, objective sperm morphology classification.
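The CBAM channel-attention step named above can be sketched in NumPy as follows: average- and max-pooled channel descriptors pass through a shared two-layer MLP, and their summed outputs are squashed by a sigmoid into per-channel weights. The shapes, reduction ratio, and random weights are illustrative assumptions; in the paper this module operates inside a trained ResNet50.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.

    Average- and max-pooled channel descriptors share one two-layer
    MLP (w1: (C//r, C), w2: (C, C//r)); the sigmoid of their summed
    outputs gives per-channel weights that rescale the input.
    """
    avg = feat.mean(axis=(1, 2))                  # (C,) average pool
    mx = feat.max(axis=(1, 2))                    # (C,) max pool
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # ReLU hidden layer
    weights = sigmoid(mlp(avg) + mlp(mx))         # (C,) in (0, 1)
    return feat * weights[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2          # illustrative sizes, reduction ratio r
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the weights lie strictly in (0, 1), the module can only attenuate channels, never amplify them; CBAM follows this channel step with an analogous spatial-attention step.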