Category Ranking: 98%

Total Visits: 921

Avg Visit Duration: 2 minutes

Citations: 20

Article Abstract

Background: This study aimed to use adversarial training to improve the generalizability and diagnostic accuracy of deep learning models for prostate cancer diagnosis.

Methods: This multicenter study retrospectively included 396 prostate cancer patients who underwent magnetic resonance imaging (development set, 297 patients from Shanghai Jiao Tong University Affiliated Sixth People's Hospital and Eighth People's Hospital; test set, 99 patients from Renmin Hospital of Wuhan University). Two binary classification deep learning models for clinically significant prostate cancer classification [PM1, pretraining Visual Geometry Group network (VGGNet)-16-based model 1; PM2, pretraining residual network (ResNet)-50-based model 2] and two multiclass classification deep learning models for prostate cancer (PCa) grading (PM3, pretraining VGGNet-16-based model 3; PM4, pretraining ResNet-50-based model 4) were built using apparent diffusion coefficient and T2-weighted images. These models were then retrained with adversarial examples, starting from the initial random model parameters (AM1, adversarial training VGGNet-16 model 1; AM2, adversarial training ResNet-50 model 2; AM3, adversarial training VGGNet-16 model 3; AM4, adversarial training ResNet-50 model 4). To verify whether adversarial training can improve diagnostic effectiveness, we compared the diagnostic performance of the deep learning models before and after adversarial training. Receiver operating characteristic curve analysis was performed to evaluate the clinically significant prostate cancer classification models, and differences in areas under the curve (AUCs) were compared using DeLong's test. The quadratic weighted kappa score was used to evaluate the PCa grading models.
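
The abstract does not state how the adversarial examples were generated, so the following is an illustration only: a minimal PyTorch sketch of one common approach, fast gradient sign method (FGSM) perturbations used to retrain an ImageNet-pretrained ResNet-50 with a two-class head. All names here (fgsm_example, adversarial_train, loader, epsilon) are hypothetical, and the inputs are assumed to be ADC/T2-weighted slices arranged into the backbone's expected 3-channel input and scaled to [0, 1]; this is a sketch under those assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torchvision

def fgsm_example(model, x, y, loss_fn, epsilon=0.01):
    # Build an FGSM adversarial example from a clean batch (x, y).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # Step in the direction that increases the loss; keep intensities in [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_train(model, loader, epochs=10, lr=1e-4, epsilon=0.01):
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm_example(model, x, y, loss_fn, epsilon)
            opt.zero_grad()  # discard gradients accumulated while crafting x_adv
            # Train on both the clean and the adversarial batch.
            loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
            loss.backward()
            opt.step()
    return model

# Hypothetical backbone mirroring the "pretraining ResNet-50-based" models:
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # csPCa vs. non-csPCa head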

Results: AM1 and AM2 had significantly higher AUCs than PM1 and PM2 in the internal validation dataset (0.84 vs. 0.89 and 0.83 vs. 0.87) and test dataset (0.73 vs. 0.86 and 0.72 vs. 0.82). AM3 and AM4 showed higher κ values than PM3 and PM4 in the internal validation dataset {0.266 [95% confidence interval (CI): 0.152-0.379] vs. 0.292 (95% CI: 0.178-0.405) and 0.254 (95% CI: 0.159-0.390) vs. 0.279 (95% CI: 0.163-0.396)} and test set [0.196 (95% CI: 0.029-0.362) vs. 0.268 (95% CI: 0.109-0.427) and 0.183 (95% CI: 0.015-0.351) vs. 0.228 (95% CI: 0.068-0.389)].
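
For context on how the reported metrics are computed, here is a minimal, self-contained sketch using scikit-learn; the label and score arrays are made-up placeholders, not study data, and DeLong's test for comparing correlated AUCs is not part of scikit-learn, so it would need a separate implementation or package.

import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Binary task (clinically significant PCa): AUC from predicted probabilities.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.2, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.5])
print("AUC:", roc_auc_score(y_true, y_prob))

# Grading task: quadratic weighted kappa between predicted and reference
# ordinal grade groups, the agreement measure used for the PM3/PM4 vs.
# AM3/AM4 comparisons.
grades_true = np.array([1, 2, 3, 4, 2, 1, 3, 5])
grades_pred = np.array([1, 2, 2, 4, 3, 1, 3, 4])
print("Quadratic weighted kappa:",
      cohen_kappa_score(grades_true, grades_pred, weights="quadratic"))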

Conclusions: Using adversarial examples to train prostate cancer classification deep learning models can improve their generalizability and classification abilities.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9131330
DOI: http://dx.doi.org/10.21037/qims-21-1089

Publication Analysis

Top Keywords

adversarial training (32)
prostate cancer (28)
deep learning (20)
cancer classification (16)
learning models (16)
classification deep (12)
adversarial (10)
model (9)
magnetic resonance (8)
resonance imaging (8)

Similar Publications

Adverse Pregnancy Outcomes in Patients with Congenital Uterine Anomalies: Evaluation of a Population Dataset.

Am J Perinatol

September 2025

Division of Maternal and Fetal Medicine, OB/GYN and Women's Health Institute, Cleveland Clinic, Cleveland, Ohio, United States.

This study aimed to characterize the risk of adverse pregnancy outcomes among patients with congenital uterine anomalies (CUA) using electronic health record data. This was a retrospective cohort study utilizing the TriNetX analytics research network, including female patients aged 10 to 55 with a documented singleton and intrauterine pregnancy. A total of 561,440 patients met inclusion criteria, of whom 3,381 (0.


Deepfakes pose critical threats to digital media integrity and societal trust. This paper presents a hybrid deepfake detection framework combining Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) to address challenges in scalability, generalizability, and adversarial robustness. The framework integrates adversarial training, a temporal decay analysis model, and multimodal detection across audio, video, and text domains.


Adversarial training for dynamics matching in coarse-grained models.

J Chem Phys

September 2025

Department of Chemistry, Chicago Center for Theoretical Chemistry, Institute for Biophysical Dynamics, and James Franck Institute, The University of Chicago, 5735 S. Ellis Ave., SCL 123, Chicago, Illinois 60637, USA.

Molecular dynamics simulations are essential for studying complex molecular systems, but their high computational cost limits scalability. Coarse-grained (CG) models reduce this cost by simplifying the system, yet traditional approaches often fail to maintain dynamic consistency, compromising their reliability in kinetics-driven processes. Here, we introduce an adversarial training framework that aligns CG trajectory ensembles with all-atom (AA) reference dynamics, ensuring both thermodynamic and kinetic fidelity.


Lung nodule synthesis guided by customized multi-confidence masks.

Biomed Eng Lett

September 2025

Department of Radiology, Guizhou International Science and Technology Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China.

Generated lung nodule data play an indispensable role in the development of intelligent assisted diagnosis of lung cancer. Existing generative models, primarily based on Generative Adversarial Networks (GANs) and Denoising Diffusion Probabilistic Models (DDPMs), have demonstrated effectiveness but also come with certain limitations: GANs often produce artifacts and unnatural boundaries and, owing to dataset limitations, struggle with irregular nodules. While DDPMs are capable of generating a diverse range of nodules, their inherent randomness and lack of control limit their applicability in tasks such as segmentation.


Artificial General Intelligence and Its Threat to Public Health.

J Eval Clin Pract

September 2025

Academic Unit of Population and Lifespan Sciences, School of Medicine, Nottingham City Hospital Campus, University of Nottingham, Clinical Sciences Building, Nottingham, UK.

Background: Artificial intelligence (AI) is increasingly applied across healthcare and public health, with evidence of benefits including enhanced diagnostics, predictive modelling, operational efficiency, medical education, and disease surveillance. However, potential harms - such as algorithmic bias, unsafe recommendations, misinformation, privacy risks, and sycophantic reinforcement - pose challenges to safe implementation. Far less attention has been directed to the public health threats posed by artificial general intelligence (AGI), a hypothetical form of AI with human-level or greater cognitive capacities.
