The estimand framework, introduced in the ICH E9 (R1) Addendum, provides a structured approach for defining precise research questions in randomised clinical trials. It suggests five strategies for addressing intercurrent events (ICE). This case study examines the principal stratum strategy, highlighting its potential for estimating causal treatment effects in specific subpopulations and the challenges involved.
PLoS One
April 2025
Background: Various treatments are recommended as first-line options in practice guidelines for depression, but it is unclear which is most efficacious for a given person. Accurate individualized predictions of relative treatment effects are needed to optimize treatment recommendations for depression and reduce this disorder's vast personal and societal costs.
Aims: We describe the protocol for a systematic review and individual participant data (IPD) network meta-analysis (NMA) to inform personalized treatment selection among five major empirically supported depression treatments.
Various statistical and machine learning algorithms can be used to predict treatment effects at the patient level using data from randomized clinical trials (RCTs). Such predictions can facilitate individualized treatment decisions. Recently, a range of methods and metrics were developed for assessing the accuracy of such predictions.
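As a generic illustration (not any specific method from these articles), one simple approach is a "T-learner": model outcomes separately in each randomized arm, then take the difference of the two predictions as the patient-level treatment effect. A minimal sketch in which the per-arm "models" are just stratum means over a binary covariate:

```python
from collections import defaultdict

def fit_stratum_means(rows):
    """Per-arm 'model': mean outcome within each covariate stratum."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y in rows:
        sums[x] += y
        counts[x] += 1
    return {x: sums[x] / counts[x] for x in sums}

# Toy RCT data: (covariate stratum, outcome) per patient, by randomized arm.
treated = [(0, 0.9), (0, 0.7), (1, 0.4), (1, 0.6)]
control = [(0, 0.5), (0, 0.3), (1, 0.4), (1, 0.2)]

m1 = fit_stratum_means(treated)   # outcome model under treatment
m0 = fit_stratum_means(control)   # outcome model under control

# Predicted individualized treatment effect per stratum.
ite = {x: m1[x] - m0[x] for x in m1}
print(ite)  # stratum 0 shows a larger predicted benefit than stratum 1
```

In practice the per-arm models would be regression or machine learning models fitted on many covariates, but the logic of differencing two arm-specific predictions is the same.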
BMC Med Res Methodol
April 2024
Observational data provide invaluable real-world information in medicine, but certain methodological considerations are required to derive causal estimates. In this systematic review, we evaluated the methodology and reporting quality of individual-level patient data meta-analyses (IPD-MAs) conducted with non-randomized exposures, published in 2009, 2014, and 2019, that sought to estimate a causal relationship in medicine. We screened over 16,000 titles and abstracts, reviewed 45 full-text articles out of the 167 deemed potentially eligible, and included 29 in the analysis.
An external validation study evaluates the performance of a prediction model in new data, but many of these studies are too small to provide reliable answers. In the third article of their series on model evaluation, Riley and colleagues describe how to calculate the sample size required for external validation studies, and propose to avoid rules of thumb by tailoring calculations to the model and setting at hand.
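One of the criteria in that line of work targets precise estimation of calibration-in-the-large via the observed/expected (O/E) ratio, using the approximation var(ln(O/E)) ≈ (1 − φ)/(n·φ), where φ is the anticipated outcome proportion. A hedged sketch of the resulting sample-size calculation (the exact formula and target widths should be checked against the published guidance):

```python
import math

def n_for_oe(phi, ci_width, z=1.96):
    """Sample size so the 95% CI for ln(O/E) has the target total width,
    assuming var(ln(O/E)) ~= (1 - phi) / (n * phi)."""
    se_target = ci_width / (2 * z)          # required standard error
    return math.ceil((1 - phi) / (phi * se_target ** 2))

# e.g. anticipated outcome proportion 0.1, CI width 0.2 on the ln(O/E) scale
print(n_for_oe(0.1, 0.2))
```

Tailoring the calculation this way, rather than applying a "100 events" rule of thumb, makes the required size depend explicitly on the outcome proportion and the precision demanded.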
Aims: Clinical guidelines often recommend treating individuals based on their cardiovascular risk. We revisit this paradigm and quantify the efficacy of three treatment strategies: (i) overall prescription, i.e.
Missing data are a common problem in medical research and are commonly addressed using multiple imputation. Although traditional imputation methods allow for valid statistical inference when data are missing at random (MAR), their implementation is problematic when the presence of missingness depends on unobserved variables, that is, when the data are missing not at random (MNAR). Unfortunately, this MNAR situation is rather common in observational studies, registries, and other sources of real-world data.
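A common practical response to the MNAR case is a delta-adjusted (pattern-mixture) sensitivity analysis: impute under MAR, then shift only the imputed values by a range of offsets δ and see whether conclusions change. A minimal single-imputation sketch (a real analysis would use proper multiple imputation with a model-based MAR step):

```python
import statistics

def delta_adjusted_mean(values, delta):
    """Impute missing entries (None) with the observed mean (a crude MAR
    imputation), then shift only the imputed values by delta (MNAR scenario)."""
    observed = [v for v in values if v is not None]
    mar_fill = statistics.mean(observed)
    completed = [v if v is not None else mar_fill + delta for v in values]
    return statistics.mean(completed)

data = [2.0, 4.0, None, 6.0, None]
for delta in (0.0, -1.0, -2.0):   # increasingly pessimistic MNAR scenarios
    print(delta, delta_adjusted_mean(data, delta))
```

Reporting the estimate across a grid of δ values shows how far the data would have to depart from MAR before the study's conclusion changes.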
Objectives: Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how risk of bias of included studies or datasets in IPD meta-analyses (IPDMAs) is assessed. We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies and provide recommendations for improvement.
Am J Epidemiol
February 2024
Propensity score analysis is a common approach to addressing confounding in nonrandomized studies. Its implementation, however, requires important assumptions (e.g.
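As a generic illustration of one common propensity-score implementation (not tied to this particular article), inverse probability of treatment weighting estimates each subject's probability of treatment given confounders and weights subjects by its inverse, balancing the confounder distribution across arms. A minimal sketch in which propensities are estimated by simple stratum frequencies rather than a regression model:

```python
from collections import defaultdict

def iptw_effect(rows):
    """rows: (confounder stratum, treated 0/1, outcome).
    Propensity = treated fraction within stratum; returns the
    IPTW-weighted difference in mean outcomes between arms."""
    n_treat = defaultdict(int)
    n_total = defaultdict(int)
    for x, t, _ in rows:
        n_total[x] += 1
        n_treat[x] += t
    ps = {x: n_treat[x] / n_total[x] for x in n_total}

    wy1 = w1 = wy0 = w0 = 0.0
    for x, t, y in rows:
        w = 1 / ps[x] if t == 1 else 1 / (1 - ps[x])
        if t == 1:
            wy1 += w * y
            w1 += w
        else:
            wy0 += w * y
            w0 += w
    return wy1 / w1 - wy0 / w0

rows = [
    ("low", 1, 3.0), ("low", 0, 2.0), ("low", 0, 2.0),    # ps = 1/3
    ("high", 1, 5.0), ("high", 1, 5.0), ("high", 0, 4.0), # ps = 2/3
]
print(iptw_effect(rows))
```

The key assumptions flagged in the abstract, such as no unmeasured confounding and positivity (every stratum must contain both treated and untreated subjects, or the weights are undefined), are exactly what this estimator relies on.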
External validation of the discriminative ability of prediction models is of key importance. However, the interpretation of such evaluations is challenging, as the ability to discriminate depends on both the sample characteristics (ie, case-mix) and the generalizability of predictor coefficients, but most discrimination indices do not provide any insight into their respective contributions. To disentangle differences in discriminative ability across external validation samples due to a lack of model generalizability from differences in sample characteristics, we propose propensity-weighted measures of discrimination.
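The discrimination index in question is typically the C-statistic: the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event. A weighted version, with weights chosen (for example, from a propensity model) to standardize case-mix across samples, follows the same pairwise logic. A minimal sketch:

```python
def weighted_c_statistic(preds, outcomes, weights=None):
    """Pairwise concordance between events (outcome 1) and non-events
    (outcome 0); ties count 0.5. Weights (e.g. case-mix standardization
    weights) default to 1, giving the ordinary C-statistic."""
    if weights is None:
        weights = [1.0] * len(preds)
    num = den = 0.0
    for pi, yi, wi in zip(preds, outcomes, weights):
        for pj, yj, wj in zip(preds, outcomes, weights):
            if yi == 1 and yj == 0:
                w = wi * wj
                den += w
                if pi > pj:
                    num += w
                elif pi == pj:
                    num += 0.5 * w
    return num / den

preds = [0.9, 0.8, 0.3, 0.2]
outcomes = [1, 0, 1, 0]
print(weighted_c_statistic(preds, outcomes))
```

Comparing the unweighted C-statistic with a case-mix-standardized version indicates how much of a drop at external validation reflects a narrower case-mix rather than poorly transporting coefficients.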
Most clinical specialties have a plethora of studies that develop or validate one or more prediction models, for example, to inform diagnosis or prognosis. Having many prediction model studies in a particular clinical field motivates the need for systematic reviews and meta-analyses, to evaluate and summarise the overall evidence available from prediction model studies, in particular about the predictive performance of existing models. Such reviews are fast emerging, and should be reported completely, transparently, and accurately.
The increasing availability of large combined datasets (or big data), such as those from electronic health records and from individual participant data meta-analyses, provides new opportunities and challenges for researchers developing and validating (including updating) prediction models. These datasets typically include individuals from multiple clusters (such as multiple centres, geographical locations, or different studies). Accounting for clustering is important to avoid misleading conclusions and enables researchers to explore heterogeneity in prediction model performance across multiple centres, regions, or countries, to better tailor or match them to these different clusters, and thus to develop prediction models that are more generalisable.
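One common way to exploit such clustered datasets, often called internal-external cross-validation, is to develop the model on all clusters but one and validate it on the held-out cluster, rotating through the clusters to expose heterogeneity in performance. A schematic sketch with a deliberately trivial "model" (the pooled training mean used as the prediction):

```python
import statistics

def leave_one_cluster_out(clusters):
    """clusters: dict cluster_id -> list of outcomes.
    'Model' = mean of the pooled training clusters; 'performance' = mean
    absolute error in the held-out cluster. Returns per-cluster performance."""
    perf = {}
    for held_out, test_ys in clusters.items():
        train_ys = [y for cid, ys in clusters.items() if cid != held_out
                    for y in ys]
        pred = statistics.mean(train_ys)
        perf[held_out] = statistics.mean(abs(y - pred) for y in test_ys)
    return perf

clusters = {"centre_A": [1.0, 2.0],
            "centre_B": [2.0, 3.0],
            "centre_C": [10.0, 11.0]}
print(leave_one_cluster_out(clusters))  # centre_C stands out: heterogeneity
```

A real analysis would fit a regression model on covariates and report calibration and discrimination per cluster, but the rotation structure, and the way an atypical cluster reveals limited generalisability, is the same.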
The TRIPOD-Cluster (transparent reporting of multivariable prediction models developed or validated using clustered data) statement comprises a 19 item checklist, which aims to improve the reporting of studies developing or validating a prediction model in clustered data, such as individual participant data meta-analyses (clustering by study) and electronic health records (clustering by practice or hospital). This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD-Cluster statement is explained in detail and accompanied by published examples of good reporting.
When data are available from individual patients receiving either a treatment or a control intervention in a randomized trial, various statistical and machine learning methods can be used to develop models for predicting future outcomes under the two conditions, and thus to predict treatment effect at the patient level. These predictions can subsequently guide personalized treatment choices. Although several methods for validating prediction models are available, little attention has been given to measuring the performance of predictions of personalized treatment effect.
A common problem in the analysis of multiple data sources, including individual participant data meta-analysis (IPD-MA), is the misclassification of binary variables. Misclassification may lead to biased estimators of model parameters, even when the misclassification is entirely random. We aimed to develop statistical methods that facilitate unbiased estimation of adjusted and unadjusted exposure-outcome associations and between-study heterogeneity in IPD-MA, where the extent and nature of exposure misclassification may vary across studies.
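A classical building block for such corrections is the Rogan–Gladen estimator, which recovers the true prevalence of a misclassified binary variable from its observed prevalence, given the sensitivity and specificity of the measurement; full IPD-MA methods extend this idea to regression coefficients and between-study heterogeneity. A minimal sketch:

```python
def rogan_gladen(p_obs, sensitivity, specificity):
    """Corrected prevalence of a binary variable measured with error:
    p_true = (p_obs + specificity - 1) / (sensitivity + specificity - 1),
    from p_obs = se*p_true + (1 - sp)*(1 - p_true)."""
    return (p_obs + specificity - 1) / (sensitivity + specificity - 1)

# Observed exposure prevalence 0.30, sensitivity 0.90, specificity 0.95:
print(rogan_gladen(0.30, 0.90, 0.95))
```

Note the denominator: when sensitivity + specificity approaches 1 the measurement carries no information and the correction blows up, which is why study-specific sensitivity and specificity matter when the extent of misclassification varies across studies.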
Clin Kidney J
October 2022
Background: Previous studies suggest that haemodiafiltration reduces mortality compared with haemodialysis in patients with end-stage kidney disease (ESKD), but the controversy surrounding its benefits remains and it is unclear to what extent individual patients benefit from haemodiafiltration. This study aimed to develop and validate a treatment effect prediction model to determine which patients would benefit most from haemodiafiltration compared with haemodialysis in terms of all-cause mortality.
Methods: Individual participant data from four randomized controlled trials comparing haemodiafiltration with haemodialysis on mortality were used to derive a Royston-Parmar model for the prediction of absolute treatment effect of haemodiafiltration based on pre-specified patient and disease characteristics.
Objective: To externally validate various prognostic models and scoring rules for predicting short term mortality in patients admitted to hospital for covid-19.
Design: Two stage individual participant data meta-analysis.
Setting: Secondary and tertiary care.
Background: Randomised controlled trials (RCTs) investigated analgesics, herbal formulations, delayed prescription of antibiotics, and placebo to prevent overprescription of antibiotics in women with uncomplicated urinary tract infections (uUTI).
Objectives: To estimate the effect of these strategies and to identify symptoms, signs, or other factors that indicate a benefit from these strategies.
Data Sources: MEDLINE, EMBASE, Web of Science, LILACS, Cochrane Database of Systematic Reviews and of Controlled Trials, and ClinicalTrials.gov.
Objectives: Among infectious disease (ID) studies that pool individual-level longitudinal data from multiple cohorts and seek to make causal inferences, we sought to assess which methods are being used, how those methods are being reported, and whether these factors have changed over time.
Study Design And Setting: Systematic review of longitudinal observational infectious disease studies pooling individual-level patient data from 2+ studies published in English in 2009, 2014, or 2019. This systematic review protocol is registered with PROSPERO (CRD42020204104).
While machine learning (ML) and artificial intelligence (AI) hold great promise for healthcare, complex data-driven prediction models require careful assessment of quality and applicability before they are applied and disseminated in daily practice. This scoping review aimed to identify actionable guidance for those closely involved in AI-based prediction model (AIPM) development, evaluation, and implementation, including software engineers, data scientists, and healthcare professionals, and to identify potential gaps in this guidance. We performed a scoping review of the relevant literature providing guidance or quality criteria regarding the development, evaluation, and implementation of AIPMs using a comprehensive multi-stage screening strategy.