Availability and transparency of artificial intelligence models in radiology: a meta-research study.

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Objectives: This meta-research study explored the availability of artificial intelligence (AI) models from development studies published in leading radiology journals in 2022, with availability defined as the transparent reporting of relevant technical details, such as model architecture and weights, necessary for independent replication.

Materials And Methods: A systematic search of Ovid Medline and Embase was conducted to identify AI model development studies published in five leading radiology journals in 2022. Data were extracted on study characteristics, model details, and code and model-sharing practices. The proportion of AI studies sharing their models was analyzed. Logistic regression analyses were employed to explore associations between study characteristics and model availability.
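
To make the analysis concrete, below is a minimal sketch of the multivariable logistic regression described above, assuming a hypothetical extraction sheet with illustrative 0/1 indicator columns (model_available, regression_based, radiomics_package); the file name and column names are our assumptions, not taken from the study.

    # Minimal sketch, not the authors' code. Assumes a hypothetical CSV of
    # extracted study characteristics with 0/1 indicator columns.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("extracted_studies.csv")  # hypothetical file name

    # Outcome: whether the study made its model available (0/1).
    # Predictors: illustrative study characteristics.
    X = sm.add_constant(df[["regression_based", "radiomics_package"]])
    fit = sm.Logit(df["model_available"], X).fit()

    # Odds ratios and their 95% CIs are the exponentiated coefficients
    # and exponentiated CI bounds.
    print(np.exp(fit.params))
    print(np.exp(fit.conf_int()))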

Results: Of 268 studies reviewed, 39.9% (n = 107) made their models available. Deep learning (DL) models exhibited particularly low availability: only 11.5% (n = 13) of the 113 DL studies made their models fully available. Training code for DL models was provided in 22.1% (n = 25), suggesting limited ability to retrain DL models on one's own data. Multivariable logistic regression analysis showed that the use of traditional regression-based models (odds ratio [OR], 17.11; 95% CI: 5.52, 53.05; p < 0.001) was associated with higher availability, while radiomics package usage (OR, 0.27; 95% CI: 0.11, 0.65; p = 0.003) was associated with lower availability.
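
As a quick arithmetic check (ours, not the paper's), the reported OR of 17.11 with 95% CI (5.52, 53.05) corresponds to a log-odds coefficient of roughly 2.84 with a standard error of roughly 0.58, since a Wald interval is the coefficient plus or minus 1.96 standard errors, exponentiated:

    import math

    beta = math.log(17.11)                                # ~2.84
    se = (math.log(53.05) - math.log(5.52)) / (2 * 1.96)  # ~0.58
    lo = math.exp(beta - 1.96 * se)
    hi = math.exp(beta + 1.96 * se)
    print(round(lo, 2), round(hi, 2))                     # ~5.52 ~53.05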

Conclusion: The availability of AI models in radiology publications remains suboptimal, especially for DL models. Enforcing model-sharing policies, enhancing external validation platforms, addressing commercial restrictions, and providing demos for commercial models in open repositories are necessary to improve transparency and replicability in radiology AI research.

Key Points:
Question: The study addresses the limited availability of AI models in radiology, especially DL models, which impacts external validation and clinical reliability.
Findings: Only 39.9% of radiology AI studies made their models available, with DL models showing particularly low availability at 11.5%.
Clinical relevance: Improving the availability of radiology AI models is essential for enabling external validation, ensuring reliable clinical application, and advancing patient care by fostering robust and transparent AI systems.

Download full-text PDF

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350510 (PMC)
http://dx.doi.org/10.1007/s00330-025-11492-6 (DOI Listing)

Publication Analysis

Top Keywords

artificial intelligence: 8
models: 8
intelligence models: 8
meta-research study: 8
development studies: 8
studies published: 8
published leading: 8
leading radiology: 8
radiology journals: 8
journals 2022: 8
