An Open-architecture AI Model for CPT Coding in Breast Surgery: Development, Validation, and Prospective Testing.

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Objective: To develop, validate, and prospectively test an open-architecture, transformer-based artificial intelligence (AI) model to extract procedure codes from free-text breast surgery operative notes.

Background: Operative note coding is time-intensive and error-prone, leading to lost revenue and compliance risks. Although AI offers potential solutions, adoption has been limited due to proprietary, closed-source systems lacking transparency and standardized validation.

Methods: We included all institutional breast surgery operative notes from July 2017 to December 2023. Expert medical coders manually reviewed and validated surgeon-assigned Current Procedural Terminology (CPT) codes, establishing a reference standard. We developed and validated an AI model to predict CPT codes from operative notes using 2 versions of the pretrained GatorTron clinical language model: a compact 345 million-parameter model and a larger 3.9 billion-parameter model, each fine-tuned on our labeled data set. Performance was evaluated using the area under the precision-recall curve (AUPRC). Prospective testing was conducted on operative notes from May to October 2024.
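
The abstract does not disclose the authors' training pipeline. The sketch below illustrates one common way this kind of multi-label fine-tuning could be set up with the Hugging Face transformers library, treating each CPT code as an independent binary label; the checkpoint name, label-space size, and decision threshold are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' pipeline): fine-tuning a pretrained clinical
# encoder for multi-label CPT code prediction from operative-note text.
# Checkpoint name, label count, and threshold are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "UFNLP/gatortron-base"   # assumed checkpoint; ~345M-parameter GatorTron
NUM_CPT_CODES = 50                    # hypothetical size of the CPT label space

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_CPT_CODES,
    problem_type="multi_label_classification",  # sigmoid + per-code BCE loss
)

def predict_codes(note_text: str, threshold: float = 0.5) -> list[int]:
    """Return indices of CPT labels whose predicted probability exceeds the threshold."""
    inputs = tokenizer(note_text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)
    return (probs > threshold).nonzero(as_tuple=True)[0].tolist()
```

A sigmoid over per-code logits, rather than a softmax over a single choice, lets one operative note receive several CPT codes at once, which matches the many-codes-per-note data set described in the Results.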

Results: Our data set included 3259 operative notes with 8036 CPT codes. Surgeon coding discrepancies were present in 12% of cases (overcoding: 8%, undercoding: 10%). The AI model showed strong alignment with the reference standard [compact version AUPRC: 0.976 (0.970, 0.983), large version AUPRC: 0.981 (0.977, 0.986)] on cross-validation, outperforming surgeons (AUPRC: 0.937). Prospective testing on 268 notes confirmed strong real-world performance.
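
The AUPRC values above are reported with intervals, but the abstract does not state how those intervals were derived; a bootstrap over notes is one common approach and is what the hedged sketch below assumes, using scikit-learn's average_precision_score for the micro-averaged AUPRC.

```python
# Illustrative evaluation sketch: micro-averaged area under the precision-recall
# curve (AUPRC) with a bootstrap confidence interval. The interval method is an
# assumption; the paper's exact procedure is not given in the abstract.
import numpy as np
from sklearn.metrics import average_precision_score

def auprc_with_ci(y_true: np.ndarray, y_score: np.ndarray,
                  n_boot: int = 1000, seed: int = 0) -> tuple[float, float, float]:
    """y_true, y_score: (n_notes, n_codes) binary labels and predicted probabilities."""
    point = average_precision_score(y_true, y_score, average="micro")
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))  # resample notes with replacement
        boots.append(average_precision_score(y_true[idx], y_score[idx], average="micro"))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, lo, hi
```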

Conclusions: Our open-architecture AI model demonstrated high performance in automating CPT code extraction, offering a scalable and transparent solution to improve surgical coding efficiency. Future work will assess whether AI can surpass human coders in accuracy and reliability.

Source: http://dx.doi.org/10.1097/SLA.0000000000006793

Publication Analysis

Top Keywords

operative notes: 16
breast surgery: 12
prospective testing: 12
cpt codes: 12
open-architecture model: 8
surgery operative: 8
reference standard: 8
version auprc: 8
model: 7
operative: 6
