Real-world evaluation of interconsensus agreement of risk of bias tools: A case study using risk of bias in nonrandomized studies-of interventions (ROBINS-I).

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Background: Risk of bias (RoB) tools are critical in systematic reviews and affect subsequent decision-making. RoB tools should have adequate interrater reliability and interconsensus agreement. We present an approach of post hoc evaluation of RoB tools using duplicated studies that overlap systematic reviews.

Methods: Using a back-citation approach, we identified systematic reviews that used the Risk Of Bias In Nonrandomized Studies-of Interventions (ROBINS-I) tool and retrieved all the included primary studies. We selected studies that were appraised by more than one systematic review and calculated observed agreement and unweighted kappa comparing the different systematic reviews' assessments.
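The two statistics named above have simple closed forms: observed agreement is the fraction of studies where two review teams gave the same judgement, and unweighted Cohen's kappa corrects that for chance agreement via each team's marginal rating distribution. A minimal sketch of the calculation, using hypothetical ROBINS-I judgements (the data and function name are illustrative, not from the study):

```python
from collections import Counter

def observed_agreement_and_kappa(rater_a, rater_b):
    """Observed agreement and unweighted Cohen's kappa for two raters.

    rater_a, rater_b: equal-length sequences of categorical judgements.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the two raters' marginal proportions,
    # summed over all categories.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa

# Hypothetical overall ROBINS-I judgements from two review teams
team_1 = ["low", "moderate", "serious", "moderate", "low", "serious"]
team_2 = ["low", "moderate", "moderate", "moderate", "low", "critical"]
p_o, kappa = observed_agreement_and_kappa(team_1, team_2)
```

Kappa is lower than raw agreement whenever some categories dominate, which is why the paper reports both: domains can show ~60% observed agreement yet kappa near 0.1–0.4.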

Results: We identified 903 systematic reviews that used the tool with 51,676 cited references, from which we eventually analyzed 171 duplicated studies assessed using ROBINS-I by different systematic reviewers. The observed agreement on ROBINS-I domains ranged from 54.9% (missing data domain) to 70.3% (deviations from intended interventions domain), and was 63.0% for overall RoB assessment of the study. Kappa coefficient ranged from 0.131 (measurement of outcome domain) to 0.396 (domains of confounding and deviations from intended interventions), and was 0.404 for overall RoB assessment of the study.

Conclusion: A post hoc evaluation of RoB tools is feasible by focusing on duplicated studies that overlap systematic reviews. ROBINS-I assessments demonstrated considerable variation in interconsensus agreement among the various systematic reviewers who assessed the same study and outcome, suggesting the need for more intensive upfront work to calibrate systematic reviewers on how to identify context-specific information and agree on how to judge it.

Download full-text PDF

Source:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11795881
DOI: http://dx.doi.org/10.1002/cesm.12094

Publication Analysis

Top Keywords

risk bias: 16
rob tools: 16
interconsensus agreement: 12
systematic reviews: 12
duplicated studies: 12
systematic: 10
bias nonrandomized: 8
nonrandomized studies-of: 8
studies-of interventions: 8
interventions robins-i: 8