Background: Although clinical quality registries worldwide benchmark cardiothoracic surgery outcomes to detect underperforming hospitals (outliers) and improve quality of care, the accuracy of such analyses remains unclear. This study aimed to compare and evaluate methods of outlier classification when applied to real-world and simulated data.
Methods: Data relating to isolated coronary artery bypass graft procedures were obtained from the Australian and New Zealand Society of Cardiac and Thoracic Surgeons Cardiac Surgery Database registry. Unadjusted and risk-adjusted operative mortality and new renal insufficiency were the key outcomes, evaluated over two timeframes: cumulative (2018-2021) and rolling (2022); additional data were generated parametrically to simulate these datasets. Agreement in outlier flagging was compared between variations of control limit and confidence interval methods applied to the real data, and the expected accuracy of the methods was evaluated using the simulated data.
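To make this comparison concrete, the minimal Python sketch below simulates in-control registry-style data and applies one control-limit rule and one confidence-interval rule. It is an illustration only, not the study's code: the hospital count, volume range, 2% baseline mortality rate, and the two specific flagging rules are assumptions standing in for the method families compared.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed parameters for illustration only: 40 hospitals with volumes of
# 50-900 procedures and a common 2% baseline operative-mortality rate.
n_hosp, p0, alpha = 40, 0.02, 0.05
n = rng.integers(50, 900, size=n_hosp)    # procedures per hospital
deaths = rng.binomial(n, p0)              # simulated in-control outcomes
rates = deaths / n

# Control-limit rule: exact binomial limits around the benchmark rate;
# a hospital is flagged when its observed rate exceeds the upper limit.
upper = stats.binom.ppf(1 - alpha / 2, n, p0) / n
flag_cl = rates > upper

# Confidence-interval rule: exact (Clopper-Pearson) lower bound on each
# hospital's own rate; flagged when the bound sits above the benchmark.
lo = np.where(deaths > 0, stats.beta.ppf(alpha / 2, deaths, n - deaths + 1), 0.0)
flag_ci = lo > p0

# With in-control data every flag is a false positive, so the flag counts
# estimate each rule's false-positive rate under the null.
print(f"flagged: CL={flag_cl.sum()}, CI={flag_ci.sum()}, "
      f"agreement={np.mean(flag_cl == flag_ci):.2f}")
```

Because the simulated hospitals share a known true rate, any flag is by construction a false positive, which is what makes expected accuracy assessable on simulated but not real data.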
Results: While outlier flagging was similar between techniques, agreement across different risk-adjustment approaches, timeframes, and significance levels was moderate to poor. The expected accuracy of outlier classification also differed across these settings, with high performance reached only for risk-adjusted outcomes using cumulative data. Of the methods compared, outliers flagged using exact binomial 95% control limits had the highest accuracy.
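As a rough illustration of why data volume matters for exact binomial limits, the sketch below computes the upper 95% control limit around an assumed 2% benchmark rate across a range of hospital volumes; all figures are illustrative assumptions, not registry values.

```python
import numpy as np
from scipy import stats

# Exact binomial 95% control limits versus hospital volume, around an
# assumed 2% benchmark rate (illustrative, not a registry figure).
p0, alpha = 0.02, 0.05
volumes = np.array([50, 100, 250, 500, 1000, 2000])
upper = stats.binom.ppf(1 - alpha / 2, volumes, p0) / volumes

for v, u in zip(volumes, upper):
    print(f"n={v:5d}  upper 95% limit={u:.4f}")
# The limit tightens toward 2% as volume grows: an excess over the
# benchmark that is detectable at high volume can be indistinguishable
# from noise at low volume, motivating cumulative data for small sites.
```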
Conclusions: Clinical registries should consider their data parameters before commencing benchmarking to detect underperforming sites. To optimise the accuracy of outlier flagging, outcomes should be risk-adjusted, cumulative datasets should be used where patient volumes are low, and, where possible, outcomes with higher prevalence should be evaluated.
DOI: http://dx.doi.org/10.1016/j.ijcard.2025.133517