Objectives: To train and validate segmentation models for automated segmentation of gallbladder cancer (GBC) lesions from contrast-enhanced CT images.
Materials And Methods: This retrospective study comprised consecutive patients with pathologically proven, treatment-naïve GBC who underwent a contrast-enhanced CT scan at four tertiary care referral hospitals. The training and validation cohort comprised CT scans of 317 patients (center 1). The internal test cohort was a temporally independent cohort (n = 29) from center 1 (internal test 1). The external test cohort comprised CT scans from three other centers (n = 85). We trained state-of-the-art 2D and 3D image segmentation models (SAM Adapter, MedSAM, 3D TransUNet, SAM-Med3D, and 3D-nnU-Net) for automated segmentation of the GBC. The models' performance for GBC segmentation on the test datasets was assessed via dice score and intersection over union (IoU), using manual segmentation as the reference standard.
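The dice score and IoU reported here compare a model's predicted binary mask against the manual reference mask. A minimal sketch of how these two metrics are typically computed for binary segmentation masks (function names are illustrative, not from the study):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * inter / total if total > 0 else 1.0

def iou_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union (Jaccard index): |A∩B| / |A∪B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap); dice weights the intersection more heavily, so for the same masks dice is always at least as large as IoU, consistent with the paired values reported in the results.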
Results: The 2D models performed better than the 3D models. Overall, MedSAM achieved the highest dice and IoU scores on both the internal [mean dice (SD) 0.776 (0.106) and mean IoU 0.653 (0.133)] and external [mean dice (SD) 0.763 (0.098) and mean IoU 0.637 (0.116)] test sets. Among the 3D models, TransUNet showed the best segmentation performance, with mean dice (SD) and IoU (SD) of 0.479 (0.268) and 0.356 (0.235) in the internal test set and 0.409 (0.339) and 0.317 (0.283) in the external test set. Segmentation performance was not associated with GBC morphology, and there was only a weak correlation between dice/IoU and GBC lesion size for all segmentation models.
Conclusion: We trained 2D and 3D GBC segmentation models on a large dataset and validated them on external datasets. MedSAM, a 2D prompt-based foundation model, achieved the best segmentation performance.
DOI: http://dx.doi.org/10.1007/s00261-025-04887-y