Objective: To compare statistical outputs from ChatGPT 4.0 and human experts in both comparative and correlation analyses in the evaluation of multiparametric MRI/ultrasound fusion-targeted biopsy plus random biopsy versus standard random biopsy alone, in terms of upstaging.
Methods: The authors performed a retrospective evaluation of 101 patients undergoing robot-assisted radical prostatectomy (RaRP) between 2021 and 2023. Patients were divided into two groups according to the type of prostatic biopsy received: combined fusion (MRI/US) targeted plus random biopsy versus standard random biopsy alone. Clinical and histological data were anonymized and analyzed using logistic regression models, ANOVA, and Chi-square tests. Analyses generated by ChatGPT and by an experienced human statistician were compared. The Q-EVAL and Q-EVA tools were used to assess the quality of user-formulated questions and AI-generated answers, respectively.
Results: Results revealed high concordance between the statistical outputs generated by AI and by the expert human statistician, with perfect agreement as measured by Cohen's kappa coefficient (κ = 1.0). Logistic regression analysis demonstrated that fusion biopsy was associated with a reduced likelihood of upstaging, a finding consistent across statistical evaluations. Additionally, user interaction assessments indicated high quality in question formulation.
Conclusions: ChatGPT (version 4.0) proved reliable for statistical analysis, showing strong concordance with human statisticians (κ = 1.0) in performing logistic regression, chi-square, and ANOVA tests. The Q-EVAL tool could reduce query errors, though ChatGPT's lack of automatic citations remains a limitation. Fusion biopsy significantly lowered upstaging risk after RaRP. In conclusion, ChatGPT is a valuable assistive tool but further research is required to optimize human-AI collaboration in clinical research.
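The concordance metric reported above, Cohen's kappa, corrects raw agreement between two raters for the agreement expected by chance. As an illustration only (the example labels below are hypothetical, not data from the study), a minimal self-contained computation looks like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n)
              for lab in set(freq_a) | set(freq_b))
    if p_e == 1.0:
        return 1.0  # degenerate case: both raters always emit the same label
    return (p_o - p_e) / (1 - p_e)

# Hypothetical verdicts (AI vs. human) on a batch of statistical tests:
ai    = ["significant", "significant", "not_significant", "significant"]
human = ["significant", "significant", "not_significant", "significant"]
print(cohens_kappa(ai, human))  # perfect agreement → 1.0
```

A κ of 1.0, as reported in the study, means the two raters agreed on every item and the marginal label distributions did not force that agreement by chance.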
DOI: http://dx.doi.org/10.4081/aiua.2025.13596