Publications by authors named "Tim P Morris"

The Fine-Gray model for the subdistribution hazard is commonly used for estimating associations between covariates and competing risks outcomes. When there are missing values in the covariates included in a given model, researchers may wish to multiply impute them. Assuming interest lies in estimating the risk of only one of the competing events, this paper develops a substantive-model-compatible multiple imputation approach that exploits the parallels between the Fine-Gray model and the standard (single-event) Cox model.
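
As a reminder of the structure being exploited (these are the standard Fine-Gray definitions, not the paper's imputation procedure):

```latex
\lambda_1(t \mid X)
  = -\frac{\mathrm{d}}{\mathrm{d}t}\,\log\{1 - F_1(t \mid X)\}
  = \lambda_{1,0}(t)\,\exp(\beta^\top X),
```

where F_1(t | X) is the cumulative incidence of the event of interest given covariates X. Because the subdistribution hazard has the same proportional-hazards form as a single-event Cox model, imputation machinery developed for the Cox model can be carried over; this is the parallel the paper exploits.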

Background: Time-to-event data is commonly used in non-inferiority clinical trials. While the hazard ratio is a popular summary measure in this context, the difference in restricted mean survival time has been theoretically shown to increase power and interpretability. This study aimed to empirically compare the power of the hazard ratio, difference in survival and difference in restricted mean survival time for non-inferiority clinical trials with a time-to-event outcome recently published in key clinical journals.
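
For readers comparing the summary measures, the restricted mean survival time up to a pre-specified horizon tau is the area under the survival curve (a standard definition, not specific to this study):

```latex
\mathrm{RMST}(\tau) = \int_0^{\tau} S(t)\,\mathrm{d}t,
\qquad
\Delta_{\mathrm{RMST}}(\tau) = \int_0^{\tau}\{S_1(t) - S_0(t)\}\,\mathrm{d}t,
```

so the difference in RMST is the gain (or loss) in expected event-free time over the first tau units of follow-up, whereas the hazard ratio summarises the ratio of instantaneous event rates.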

Simulation studies are widely used for evaluating the performance of statistical methods in psychology. However, the quality of simulation studies can vary considerably in terms of their design, execution, and reporting. In order to assess the quality of typical simulation studies in psychology, we reviewed 321 articles published in 2021 and 2022, among which 100/321 = 31.

Background: Bias from data missing not at random (MNAR) is a persistent concern in health-related research. A bias analysis quantitatively assesses how conclusions change under different assumptions about missingness using bias parameters that govern the magnitude and direction of the bias. Probabilistic bias analysis specifies a prior distribution for these parameters, explicitly incorporating available information and uncertainty about their true values.
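
A minimal sketch of the general idea, using a generic delta-adjustment sensitivity analysis with a prior on the bias parameter; this is an illustration only, not the specific method studied in the paper, and all variable names and prior choices are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical data: outcome y, with values missing not at random (MNAR)
n = 500
y = rng.normal(10, 2, size=n)
missing = rng.random(n) < 1 / (1 + np.exp(-(y - 10)))   # larger y -> more likely missing
y_obs = np.where(missing, np.nan, y)

# Probabilistic bias analysis: place a prior on the bias parameter delta,
# the assumed mean difference between missing and observed values.
n_draws = 2000
estimates = np.empty(n_draws)
for d in range(n_draws):
    delta = rng.normal(1.0, 0.5)                         # prior on the bias parameter
    y_imp = y_obs.copy()
    # impute missing values from the observed distribution, shifted by delta
    y_imp[missing] = rng.choice(y_obs[~missing], size=missing.sum()) + delta
    estimates[d] = y_imp.mean()

# (A full analysis would also propagate sampling/imputation uncertainty.)
print("complete-case mean:", np.nanmean(y_obs).round(2))
print("bias-adjusted mean (median, 95% interval):",
      np.round(np.quantile(estimates, [0.5, 0.025, 0.975]), 2))
```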

Quantitative bias analysis (QBA) permits assessment of the expected impact of various imperfections of the available data on the results and conclusions of a particular real-world study. This article extends QBA methodology to multivariable time-to-event analyses with right-censored endpoints, possibly including time-varying exposures or covariates. The proposed approach employs data-driven simulations, which preserve important features of the data at hand while offering flexibility in controlling the parameters and assumptions that may affect the results.

Article Synopsis
  • Frequentist performance is important for methods used in confirmatory clinical trials, but good frequentist performance alone does not justify certain missing-data imputation methods.
  • Reference-based conditional mean imputation can lead to misleading results because its variance estimate shrinks as the amount of missing data increases.
  • The approach can inadvertently imply that the true treatment effect is zero for patients with missing data, which is not a desirable property.

To obtain valid inference following stratified randomisation, treatment effects should be estimated with adjustment for the stratification variables. Stratification sometimes requires categorisation of a continuous prognostic variable (eg, age), which raises the question: should adjustment be based on the randomisation categories or on the underlying continuous values? In practice, adjustment for randomisation categories is more common. We reviewed trials published in general medical journals and found that none of the 32 trials that stratified randomisation on a continuous variable adjusted for the continuous values in the primary analysis.
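
A minimal sketch of the two analysis choices, using simulated data and hypothetical variable names; this illustrates the modelling options only, not the review's methods:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
age = rng.uniform(40, 80, size=n)
age_cat = pd.cut(age, bins=[40, 60, 80], labels=["40-59", "60-79"], include_lowest=True)
treat = rng.integers(0, 2, size=n)          # (in a real trial, randomised within age strata)
y = 0.05 * age + 0.5 * treat + rng.normal(size=n)
df = pd.DataFrame({"y": y, "treat": treat, "age": age, "age_cat": age_cat})

# Adjustment for the randomisation categories (the more common choice in practice)
fit_cat = smf.ols("y ~ treat + C(age_cat)", data=df).fit()
# Adjustment for the underlying continuous value
fit_cont = smf.ols("y ~ treat + age", data=df).fit()

print("treatment effect, category-adjusted: ", round(fit_cat.params["treat"], 3))
print("treatment effect, continuous-adjusted:", round(fit_cont.params["treat"], 3))
```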

Estimands can be used in studies of healthcare interventions to clarify the interpretation of treatment effects. The addendum to the ICH E9 harmonised guideline on statistical principles for clinical trials (ICH E9(R1)) describes a framework for using estimands as part of a study. This paper provides an overview of the estimands framework, as outlined in the addendum, with the aim of explaining why estimands are beneficial; clarifying the terminology being used; and providing practical guidance on using estimands to decide the appropriate study design, data collection, and estimation methods.

Background: Patient and public involvement (PPI) in trials aims to enhance research by improving its relevance and transparency. Planning for statistical analysis begins at the design stage of a trial within the protocol and is refined and detailed in a Statistical Analysis Plan (SAP). While PPI is common in design and protocol development, it is less common within SAPs.

Simulation studies are powerful tools in epidemiology and biostatistics, but they can be hard to conduct successfully. Sometimes unexpected results are obtained. We offer advice on how to check a simulation study when this occurs, and how to design and conduct the study to give results that are easier to check.

For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but only rarely appropriate to use.
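
A sketch of the two simulation designs being contrasted, with hypothetical data-generating and analysis choices; the argument concerns the repetition scheme, not these particular models:

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_sim = 200, 1000

def draw_complete():
    x = rng.normal(size=n)
    y = 2 + x + rng.normal(size=n)
    return x, y

def draw_missingness(x):
    # missingness in y depends on x (missing at random given x)
    return rng.random(n) < 1 / (1 + np.exp(-x))

def complete_case_mean(y, miss):
    return y[~miss].mean()

# Design A: redraw the complete data AND the missingness in every repetition
est_a = []
for _ in range(n_sim):
    x, y = draw_complete()
    est_a.append(complete_case_mean(y, draw_missingness(x)))

# Design B: fix one complete dataset and redraw only the missingness indicators
x_fix, y_fix = draw_complete()
est_b = [complete_case_mean(y_fix, draw_missingness(x_fix)) for _ in range(n_sim)]

# Design B's results are conditional on the single fixed dataset, which is
# why the paper argues it is only rarely an appropriate evaluation design.
print("Design A:", round(np.mean(est_a), 3), round(np.std(est_a), 3))
print("Design B:", round(np.mean(est_b), 3), round(np.std(est_b), 3))
```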

The marginality principle guides analysts to avoid omitting lower-order terms from models in which higher-order terms are included as covariates. Lower-order terms are viewed as "marginal" to higher-order terms. We consider how this principle applies to three cases: regression models that may include the ratio of two measured variables; polynomial transformations of a measured variable; and factorial arrangements of defined interventions.
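
For the simplest case, the principle can be written out explicitly (a textbook illustration, not the paper's examples): in the model

```latex
Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2 + \varepsilon,
```

the main-effect terms x1 and x2 are marginal to the product term x1 x2, so the principle advises against any model that retains beta_3 while omitting beta_1 or beta_2. The three cases considered in the paper ask how far this logic extends to ratios such as x1/x2, to polynomial terms such as the square of a variable, and to factorial combinations of interventions.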

In network meta-analysis, studies evaluating multiple treatment comparisons are modeled simultaneously, and estimation is informed by a combination of direct and indirect evidence. Network meta-analysis relies on an assumption of consistency, meaning that direct and indirect evidence should agree for each treatment comparison. Here we propose new local and global tests for inconsistency and demonstrate their application to three example networks.
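
The consistency assumption can be made concrete for a loop of treatments A, B and C: the indirect estimate of B versus C implied by the A-B and A-C comparisons should agree with the direct estimate. The familiar Bucher-style contrast below illustrates the assumption; it is not one of the paper's proposed local or global tests:

```latex
\hat d^{\,\text{ind}}_{BC} = \hat d^{\,\text{dir}}_{AC} - \hat d^{\,\text{dir}}_{AB},
\qquad
z = \frac{\hat d^{\,\text{dir}}_{BC} - \hat d^{\,\text{ind}}_{BC}}
         {\sqrt{\operatorname{Var}\!\big(\hat d^{\,\text{dir}}_{BC}\big)
              + \operatorname{Var}\!\big(\hat d^{\,\text{ind}}_{BC}\big)}},
```

with consistency implying that z should be compatible with zero for every such loop in the network.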

Individual participant data meta-analysis (IPDMA) projects obtain, check, harmonise and synthesise raw data from multiple studies. When undertaking the meta-analysis, researchers must decide between a two-stage or a one-stage approach. In a two-stage approach, the IPD are first analysed separately within each study to obtain aggregate data (e.
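
A minimal sketch of the two-stage approach with hypothetical simulated IPD: study-specific regressions in stage one, then inverse-variance (fixed-effect) pooling of the aggregate results in stage two. A one-stage approach would instead fit a single hierarchical model to all IPD at once; none of the numbers below come from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical IPD from 5 studies: outcome y, exposure x
frames = []
for study in range(5):
    n = rng.integers(80, 200)
    x = rng.normal(size=n)
    y = 0.4 * x + rng.normal(size=n) + rng.normal()      # study-specific intercept
    frames.append(pd.DataFrame({"study": study, "x": x, "y": y}))
ipd = pd.concat(frames, ignore_index=True)

# Stage 1: analyse each study separately to obtain aggregate data (estimate and SE)
stage1 = []
for study, d in ipd.groupby("study"):
    fit = smf.ols("y ~ x", data=d).fit()
    stage1.append((fit.params["x"], fit.bse["x"]))

# Stage 2: inverse-variance (fixed-effect) pooling of the study-level estimates
est = np.array([b for b, _ in stage1])
w = 1 / np.array([se for _, se in stage1]) ** 2
pooled = np.sum(w * est) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print("pooled estimate:", round(pooled, 3), "SE:", round(pooled_se, 3))
```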

Many trials use stratified randomisation, where participants are randomised within strata defined by one or more baseline covariates. While it is important to adjust for stratification variables in the analysis, the appropriate method of adjustment is unclear when stratification variables are affected by misclassification and hence some participants are randomised in the incorrect stratum. We conducted a simulation study to compare methods of adjusting for stratification variables affected by misclassification in the analysis of continuous outcomes when all or only some stratification errors are discovered, and when the treatment effect or treatment-by-covariate interaction effect is of interest.

Background: The population-level summary measure is a key component of the estimand for clinical trials with time-to-event outcomes. This is particularly the case for non-inferiority trials, because different summary measures imply different null hypotheses. Most trials are designed using the hazard ratio as summary measure, but recent studies suggested that the difference in restricted mean survival time might be more powerful, at least in certain situations.
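
To make the point about null hypotheses concrete (the margins and horizon below are illustrative placeholders, not values from the paper): a non-inferiority trial designed on the hazard ratio and one designed on the RMST difference test different nulls,

```latex
H_0^{\mathrm{HR}}:\ \mathrm{HR} \ge \delta_{\mathrm{HR}}
\qquad\text{versus}\qquad
H_0^{\mathrm{RMST}}:\ \Delta_{\mathrm{RMST}}(\tau) \le -\delta_{\mathrm{RMST}},
```

and the two are not interchangeable unless the margins are deliberately calibrated to one another.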

We describe a new command, artcat, that calculates sample size or power for a randomized controlled trial or similar experiment with an ordered categorical outcome, where analysis is by the proportional-odds model. artcat implements the method of Whitehead (1993, Statistics in Medicine 12: 2257-2271). We also propose and implement a new method that 1) allows the user to specify a treatment effect that does not obey the proportional-odds assumption, 2) offers greater accuracy for large treatment effects, and 3) allows for noninferiority trials.
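
artcat itself is a Stata command; as a rough illustration only, here is a Python sketch of the sample-size formula commonly attributed to Whitehead (1993) for a 1:1 trial analysed with a proportional-odds model. This is not the artcat implementation, nor the paper's new non-proportional-odds method, and the example probabilities and odds ratio are hypothetical.

```python
import math
from scipy.stats import norm

def whitehead_total_n(p_control, odds_ratio, alpha=0.05, power=0.9):
    """Approximate total sample size (1:1 allocation) for an ordered categorical
    outcome analysed by a proportional-odds model, using Whitehead's formula.
    p_control: anticipated control-arm category probabilities (summing to 1);
    odds_ratio: anticipated common (proportional) odds ratio."""
    # cumulative probabilities in the control arm
    cum_c = [sum(p_control[:i + 1]) for i in range(len(p_control))]
    # experimental-arm cumulative probabilities implied by proportional odds
    cum_e = [odds_ratio * c / (1 + c * (odds_ratio - 1)) for c in cum_c]
    p_exp = [cum_e[0]] + [cum_e[i] - cum_e[i - 1] for i in range(1, len(cum_e))]
    # average category probabilities across the two arms
    p_bar = [(pc + pe) / 2 for pc, pe in zip(p_control, p_exp)]
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n = 12 * z**2 / (math.log(odds_ratio) ** 2 * (1 - sum(p**3 for p in p_bar)))
    return math.ceil(n)

# Example: three-category outcome, control probabilities 0.2/0.5/0.3, odds ratio 1.7
print(whitehead_total_n([0.2, 0.5, 0.3], odds_ratio=1.7))
```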

Although new biostatistical methods are published at a very high rate, many of these developments are not trustworthy enough to be adopted by the scientific community. We propose a framework to think about how a piece of methodological work contributes to the evidence base for a method. Similar to the well-known phases of clinical research in drug development, we propose to define four phases of methodological research.

Background: Non-random selection of analytic subsamples could introduce selection bias in observational studies. We explored the potential presence and impact of selection in studies of SARS-CoV-2 infection and COVID-19 prognosis.

Methods: We tested the association of a broad range of characteristics with selection into COVID-19 analytic subsamples in the Avon Longitudinal Study of Parents and Children (ALSPAC) and UK Biobank (UKB).

Objective: To quantify the effects of a series of text messages (safetxt) delivered in the community on incidence of chlamydia and gonorrhoea reinfection at one year in people aged 16-24 years.

Design: Parallel group randomised controlled trial.

Setting: 92 sexual health clinics in the United Kingdom.

The quantitative analysis of research data is a core element of empirical research. The performance of statistical methods that are used for analyzing empirical data can be evaluated and compared using computer simulations. A single simulation study can influence the analyses of thousands of empirical studies to follow.

Factorial trials offer an efficient way to evaluate multiple interventions in a single trial; however, the use of additional treatments can obscure research objectives, leading to inappropriate analytical methods and interpretation of results. We define a set of estimands for factorial trials and describe a framework for applying these estimands, with the aim of clarifying trial objectives and ensuring appropriate primary and sensitivity analyses are chosen. This framework is intended for use in factorial trials where the intent is to conduct "two-trials-in-one" (ie, to separately evaluate the effects of treatments A and B), and it comprises four steps: (i) specifying how the additional treatment(s) (eg, treatment B) will be handled in the estimand, and how intercurrent events affecting the additional treatment(s) will be handled; (ii) designating the appropriate factorial estimator as the primary analysis strategy; (iii) evaluating the interaction to assess the plausibility of the assumptions underpinning the factorial estimator; and (iv) performing a sensitivity analysis using an appropriate multiarm estimator to evaluate to what extent departures from the underlying assumption of no interaction may affect results.
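
A minimal sketch of steps (ii)-(iv) for a 2x2 factorial with a continuous outcome, using simulated data and hypothetical effect sizes; this is illustrative only, and the estimand decisions in step (i) are not encoded here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 1000
a = rng.integers(0, 2, size=n)                # randomised to treatment A or not
b = rng.integers(0, 2, size=n)                # randomised to treatment B or not
y = 1.0 + 0.5 * a + 0.3 * b + rng.normal(size=n)   # no interaction in this example
df = pd.DataFrame({"y": y, "a": a, "b": b})

# (ii) Factorial estimator: effect of A pooling over B, assuming no interaction
factorial = smf.ols("y ~ a + b", data=df).fit()

# (iii) Assess the plausibility of that assumption by estimating the interaction
interaction = smf.ols("y ~ a * b", data=df).fit()

# (iv) One possible multiarm (sensitivity) estimator: A alone versus neither treatment
multiarm = smf.ols("y ~ a", data=df[df["b"] == 0]).fit()

print("factorial estimate of A:", round(factorial.params["a"], 2))
print("estimated interaction:  ", round(interaction.params["a:b"], 2))
print("multiarm estimate of A: ", round(multiarm.params["a"], 2))
```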
