Reliability and validity of the Paprosky classification for acetabular bone loss based on level of orthopedic training.


Article Abstract

Background: Reliability and validity of the Paprosky classification for acetabular bone loss have been debated. Additionally, the relationship between surgeon training level and Paprosky classification accuracy/treatment selection is poorly defined. This study aimed to: (1) evaluate the validity of preoperative Paprosky classification/treatment selection compared to intraoperative classification/treatment selection and (2) evaluate the relationship between training level and intra-rater and inter-rater reliability of preoperative classification and treatment choice.

Methods: Seventy-four patients with intraoperative Paprosky types [I (N = 24), II (N = 27), III (N = 23)] were selected. Six raters [residents (N = 2), fellows (N = 2), attendings (N = 2)] independently provided a Paprosky classification and treatment selection using preoperative radiographs. Raters reviewed the images twice, 14 days apart. Cohen's kappa was calculated for (1) inter-rater agreement of Paprosky classification/treatment by training level, (2) intra-rater reliability, (3) agreement between preoperative and intraoperative classification, and (4) agreement between preoperative treatment selection and the actual treatment performed.
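The agreement statistic used throughout the study, Cohen's kappa, corrects the raw percentage agreement between two raters for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch of that computation is below; the rater labels are hypothetical illustrations, not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Paprosky gradings (types I-III) from two raters:
a = ["I", "II", "II", "III", "I", "II", "III", "I"]
b = ["I", "II", "III", "III", "I", "I", "III", "II"]
print(round(cohens_kappa(a, b), 3))  # prints 0.442
```

By the conventional rule-of-thumb bands used in the abstract, a kappa of 0.442 would fall in the "moderate" range; kappa can also be computed with scikit-learn's `cohen_kappa_score` if that dependency is available.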

Results: Inter-rater agreement between raters of the same training level was moderate for classification (K range = 0.42-0.50) and mostly poor for treatment selection (K range = 0.02-0.44). Intra-rater agreement ranged from fair to good (K range = 0.40-0.73). Agreement between preoperative and intraoperative classifications was fair (K range = 0.25-0.36), as was agreement between preoperative treatment selections and the treatments actually performed (K range = 0.21-0.39).

Conclusion: Inter-rater reliability of Paprosky classification was poor to moderate for all training levels. Preoperative Paprosky classification showed fair agreement with intraoperative Paprosky grading. Treatment selections based on preoperative radiographs had fair agreement with actual treatments. Further research should investigate the role of advanced imaging and alternative classifications in evaluation of acetabular bone loss.

Source: http://dx.doi.org/10.1007/s00402-024-05524-x

Publication Analysis

Top Keywords (frequency)

paprosky classification: 24
training level: 16
acetabular bone: 12
bone loss: 12
agreement preoperative: 12
paprosky: 10
preoperative: 9
reliability validity: 8
validity paprosky: 8
classification: 8
