Publications by authors named "Ekkehard Glimm"

The MCP-Mod approach by Bretz et al. is commonly applied for dose-response testing and estimation in clinical trials. The MCP part of MCP-Mod was originally developed to detect a dose-response signal using a multiple contrast test, but on its own it does not support the specific claim that the drug has a positive effect at an individual dose.

Platform trials are randomized clinical trials that allow simultaneous comparison of multiple interventions, usually against a common control. Arms to test experimental interventions may enter and leave the platform over time. This implies that the number of experimental intervention arms in the trial may change as the trial progresses.

Overall response rate (ORR) is commonly used as a key endpoint to assess treatment efficacy in chronic graft-versus-host disease (cGvHD), either as ORR at week 24 or as best overall response rate (BOR) at any time point up to week 24 or beyond. Both endpoints, as well as duration of response (DOR), were previously reported for the REACH3 study, a phase 3 open-label, randomized study comparing ruxolitinib (RUX) versus best available therapy (BAT). The comparison between RUX and BAT was performed on ORR and BOR using all randomized patients, while DOR was derived for the subgroup of responders only.

Response-adaptive randomization allows the probabilities of allocating patients to treatments in a clinical trial to change based on the previously observed response data, in order to achieve different experimental goals. One concern over the use of such designs in practice, particularly from a regulatory viewpoint, is controlling the type I error rate. To address this, Robertson and Wason (Biometrics, 2019) proposed methodology that guarantees familywise error rate control for a large class of response-adaptive designs by re-weighting the usual t-test statistic.
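As a toy illustration of how allocation probabilities can respond to accumulating response data (a generic sketch, not the Robertson and Wason re-weighting procedure; the function name and the clipping bounds are our own):

```python
def rar_allocation(successes, failures, clip=(0.1, 0.9)):
    """Illustrative response-adaptive rule for two arms: allocate to the
    experimental arm (index 1) with probability proportional to its
    estimated success rate, clipped away from 0 and 1 so that both arms
    keep recruiting."""
    # posterior-mean-style estimates with a uniform Beta(1, 1) prior
    p = [(s + 1) / (s + f + 2) for s, f in zip(successes, failures)]
    prob_arm1 = p[1] / (p[0] + p[1])
    return min(max(prob_arm1, clip[0]), clip[1])
```

With no observed difference the rule allocates 1:1; as evidence accumulates in favour of one arm, the allocation probability drifts toward the clipping bound.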

Non-alcoholic steatohepatitis (NASH) is the progressive form of non-alcoholic fatty liver disease (NAFLD) and a disease with high unmet medical need. Platform trials provide great benefits for sponsors and trial participants in terms of accelerating drug development programs. In this article, we describe some of the activities of the EU-PEARL consortium (EU Patient-cEntric clinicAl tRial pLatforms) regarding the use of platform trials in NASH, in particular the proposed trial design, decision rules, and simulation results.

Phase II/III clinical trials are efficient two-stage designs that test multiple experimental treatments. In stage 1, patients are allocated to the control and all experimental treatments, with the data collected from them used to select experimental treatments to continue to stage 2. Patients recruited in stage 2 are allocated to the selected treatments and the control.

  • Platform trials allow multiple experimental treatments to be tested against a control group, improving efficiency by using shared controls, but may face bias when adding new treatments over time.
  • The study analyzes a platform trial with two treatment arms, focusing on methods to adjust for time trends using either linear models or step functions, and evaluates how these methods affect error rates and treatment effect estimates.
  • Results suggest that a step function model improves statistical power without increasing error rates, provided time trends are equal across treatment arms and follow an additive model.
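The step-function adjustment can be sketched with a hypothetical helper that averages within-period treatment-minus-control mean differences, which removes any additive time trend shared by both arms (illustrative only; the study itself fits regression models with period effects):

```python
from collections import defaultdict

def period_adjusted_effect(data):
    """data: iterable of (period, arm, outcome), arm in {'control', 'treatment'}.
    Estimate the treatment effect as the average of within-period
    treatment-minus-control mean differences (a step-function adjustment
    under an additive time-trend model)."""
    sums = defaultdict(lambda: {'control': [0.0, 0], 'treatment': [0.0, 0]})
    for period, arm, y in data:
        sums[period][arm][0] += y
        sums[period][arm][1] += 1
    diffs = []
    for arms in sums.values():
        if arms['control'][1] and arms['treatment'][1]:  # both arms observed
            diffs.append(arms['treatment'][0] / arms['treatment'][1]
                         - arms['control'][0] / arms['control'][1])
    return sum(diffs) / len(diffs)
```

Even if outcomes drift strongly between periods, the within-period contrasts recover the treatment effect as long as the drift affects both arms equally.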

We discuss how to handle matching-adjusted indirect comparison (MAIC) from a data analyst's perspective. We introduce several multivariate data analysis methods to assess the appropriateness of MAIC for a given set of baseline characteristics. These methods focus on comparing the baseline variables used in the matching of a study that provides the summary statistics or aggregated data (AD) and a study that provides individual patient level data (IPD).
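For a single covariate, the method-of-moments weighting behind MAIC can be sketched as an exponential tilt solved by bisection (illustrative only; real MAIC matches several covariates jointly, typically via Newton's method, and the target mean must lie strictly between the smallest and largest IPD values):

```python
import math

def maic_weights(x, target_mean, lo=-30.0, hi=30.0, tol=1e-12):
    """Find a such that the weights w_i = exp(a * x_i) reweight the IPD
    sample mean of x onto the aggregate-data mean target_mean."""
    def weighted_mean(a):
        w = [math.exp(a * xi) for xi in x]
        return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    while hi - lo > tol:  # weighted_mean is increasing in a, so bisect
        mid = (lo + hi) / 2.0
        if weighted_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    a = (lo + hi) / 2.0
    return [math.exp(a * xi) for xi in x]
```

The resulting weights make the reweighted IPD baseline mean agree with the aggregate-data mean, which is exactly the balance that the multivariate diagnostics discussed above are meant to scrutinize.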

Platform trials have become increasingly popular for drug development programs, attracting interest from statisticians, clinicians and regulatory agencies. Many statistical questions related to the design of platform trials (such as the impact of decision rules, sharing of information across cohorts, and allocation ratios on operating characteristics and error rates) remain unanswered. In many platform trials, the definition of error rates is not straightforward as classical error rate concepts are not applicable.

Tests based on pairwise distance measures for multivariate sample vectors are common in ecological studies but are usually restricted to two-sided tests for differences. In this paper, we investigate extensions to tests for superiority, equivalence and non-inferiority.

Causal inference methods are gaining increasing prominence in pharmaceutical drug development in light of the recently published addendum on estimands and sensitivity analysis in clinical trials to the E9 guideline of the International Council for Harmonisation. The E9 addendum emphasises the need to account for post-randomization or 'intercurrent' events that can potentially influence the interpretation of a treatment effect estimate at a trial's conclusion. Instrumental Variables (IV) methods have been used extensively in economics, epidemiology, and academic clinical studies for 'causal inference,' but less so in the pharmaceutical industry setting until now.

Purpose: Recent years have seen a change in the way clinical trials are conducted. There has been a rise of designs, more flexible than traditional adaptive and group sequential trials, that allow the investigation of multiple substudies with possibly different objectives, interventions, and subgroups within an overall trial structure, summarized by the term master protocol. This review aims to identify existing master protocol studies and summarize their characteristics.

In personalized medicine, it is often desired to determine whether all patients or only a subset of them benefit from a treatment. We consider estimation in two-stage adaptive designs that in stage 1 recruit patients from the full population. In stage 2, patient recruitment is restricted to the part of the population that, based on stage 1 data, benefits from the experimental treatment.

This paper discusses a number of methods for adjusting treatment effect estimates in clinical trials where differential effects in several subpopulations are suspected. In such situations, the estimates from the most extreme subpopulation are often overinterpreted. The paper focusses on the construction of simultaneous confidence intervals intended to provide a more realistic assessment regarding the uncertainty around these extreme results.

In power analysis for multivariable Cox regression models, variance of the estimated log-hazard ratio for the treatment effect is usually approximated by inverting the expected null information matrix. Because, in many typical power analysis settings, assumed true values of the hazard ratios are not necessarily close to unity, the accuracy of this approximation is not theoretically guaranteed. To address this problem, the null variance expression in power calculations can be replaced with one of the alternative expressions derived under the assumed true value of the hazard ratio for the treatment effect.
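Inverting the expected null information leads to the familiar Schoenfeld-type formula for the required number of events; a minimal sketch for a two-arm comparison (two-sided alpha, equal allocation by default):

```python
import math
from statistics import NormalDist

def required_events(hazard_ratio, alpha=0.05, power=0.8, alloc=0.5):
    """Schoenfeld approximation: the null variance of the estimated
    log-hazard ratio is roughly 1 / (d * p * (1 - p)) for d events and
    allocation fraction p, giving
    d = (z_{1-alpha/2} + z_{power})^2 / (p * (1 - p) * log(HR)^2)."""
    z = NormalDist()
    z_a = z.inv_cdf(1.0 - alpha / 2.0)
    z_b = z.inv_cdf(power)
    return (z_a + z_b) ** 2 / (alloc * (1.0 - alloc) * math.log(hazard_ratio) ** 2)
```

For a hazard ratio of 0.7 at two-sided alpha 0.05 and 80% power, this gives roughly 247 events. Because the variance is evaluated under the null, its accuracy degrades as the true hazard ratio moves away from unity, which is precisely the issue the alternative variance expressions address.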

Network meta-analysis uses direct comparisons of interventions within randomized controlled trials and indirect comparisons across them. Network meta-analysis uses more data than a series of direct comparisons with placebo, and theoretically should produce more reliable results. We used a Cochrane overview review of acute postoperative pain trials and other systematic reviews to provide data to test this hypothesis.
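The simplest building block of such indirect comparisons is the anchored (Bucher) contrast through the common comparator; a sketch, with illustrative names:

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Anchored indirect comparison of A versus B through a common
    comparator C: d_AB = d_AC - d_BC, with variances adding because the
    two estimates come from independent trials."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return d_ab, se_ab
```

The added variance is why indirect evidence alone is less precise than a direct head-to-head comparison; network meta-analysis combines both sources.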

Robust semiparametric models for recurrent events have received increasing attention in the analysis of clinical trials in a variety of diseases, including chronic heart failure. In comparison to parametric recurrent event models, robust semiparametric models are more flexible in that neither the baseline event rate nor the process inducing between-patient heterogeneity needs to be specified by a particular parametric statistical model. However, implementing group sequential designs in the robust semiparametric model is complicated by the fact that the sequence of Wald statistics does not asymptotically follow the canonical joint distribution.

Count data and recurrent events in clinical trials, such as the number of lesions in magnetic resonance imaging in multiple sclerosis, the number of relapses in multiple sclerosis, the number of hospitalizations in heart failure, and the number of exacerbations in asthma or in chronic obstructive pulmonary disease (COPD) are often modeled by negative binomial distributions. In this manuscript, we study planning and analyzing clinical trials with group sequential designs for negative binomial outcomes. We propose a group sequential testing procedure for negative binomial outcomes based on Wald statistics using maximum likelihood estimators.
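To fix ideas, a moment-based version of such a Wald statistic for the log rate ratio can be sketched as follows (a simplification of the maximum likelihood version studied in the paper; the shape parameter is assumed known here). With mean mu and shape k, the negative binomial variance is mu + mu^2/k, so by the delta method Var(log(sample mean)) is approximately (1/mu + 1/k)/n:

```python
import math

def nb_wald_log_rate_ratio(counts_trt, counts_ctl, shape):
    """Wald statistic for the log rate ratio of two negative binomial
    samples, using sample means as rate estimates and the delta-method
    variance (1/mu + 1/shape)/n per arm."""
    m1 = sum(counts_trt) / len(counts_trt)
    m0 = sum(counts_ctl) / len(counts_ctl)
    var = ((1.0 / m1 + 1.0 / shape) / len(counts_trt)
           + (1.0 / m0 + 1.0 / shape) / len(counts_ctl))
    return math.log(m1 / m0) / math.sqrt(var)
```

In a group sequential design, a statistic of this type is computed at each interim analysis and compared against stopping boundaries.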

To efficiently and completely correct for selection bias in adaptive two-stage trials, uniformly minimum variance conditionally unbiased estimators (UMVCUEs) have been derived for trial designs with normally distributed data. However, a common assumption is that the variances are known exactly, which is unlikely to be the case in practice. We extend the work of Cohen and Sackrowitz (1989; 8(3):273-278), who proposed a UMVCUE for the best performing candidate in the normal setting with a common variance.

We describe a general framework for weighted parametric multiple test procedures based on the closure principle. We utilize general weighting strategies that can reflect complex study objectives and include many procedures in the literature as special cases. The proposed weighted parametric tests bridge the gap between rejection rules using either adjusted significance levels or adjusted p-values.
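As a simple special case of the closure principle with fixed weights, one can use a weighted Bonferroni test for each intersection hypothesis (the paper's parametric tests exploit the joint distribution of the test statistics and are strictly more powerful; this sketch only shows the closure mechanics):

```python
from itertools import combinations

def weighted_bonferroni_closure(p, w, alpha=0.05):
    """Closed testing with fixed weights w: an intersection hypothesis
    over index set J is rejected if some p_j <= (w_j / sum of w over J) * alpha;
    the elementary hypothesis H_i is rejected iff every intersection
    containing i is rejected."""
    m = len(p)
    def intersection_rejected(J):
        total = sum(w[j] for j in J)
        return any(p[j] <= (w[j] / total) * alpha for j in J)
    rejected = []
    for i in range(m):
        subsets = (J for r in range(1, m + 1)
                   for J in combinations(range(m), r) if i in J)
        rejected.append(all(intersection_rejected(J) for J in subsets))
    return rejected
```

With equal weights this reduces to Holm's procedure; replacing the Bonferroni intersection test with a parametric one that accounts for correlations is exactly the refinement the paper develops.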

In oncology studies with immunotherapies, populations of "super-responders" (patients in whom the treatment works particularly well) are often suspected to be related to biomarkers. In this paper, we explore various ways of confirmatory statistical hypothesis testing for joint inference on the subpopulation of putative "super-responders" and the full study population. A model-based testing framework is proposed, which allows one to define, up front, the strength of evidence required from both the full population and the subpopulation in terms of clinical efficacy.

A permutation test assigns a p-value by conditioning on the data and treating the different possible treatment assignments as random. The fact that the conditional type I error rate given the data is controlled at level α ensures validity of the test even if certain adaptations are made. We show the connection between permutation and t-tests, and use this connection to explain why certain adaptations are valid in a t-test setting as well.
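A minimal sketch of such a conditional test for the difference in two group means, enumerating all relabellings exactly (names are illustrative):

```python
from itertools import combinations

def permutation_pvalue(treated, control):
    """Exact two-sided permutation test on the difference in means:
    condition on the pooled data and treat the assignment of labels to
    observations as random."""
    pooled = treated + control
    n = len(treated)
    observed = abs(sum(treated) / n - sum(control) / len(control))
    count = total = 0
    for idx in combinations(range(len(pooled)), n):
        grp = [pooled[i] for i in idx]
        rest = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(grp) / n - sum(rest) / len(rest))
        if diff >= observed - 1e-12:  # count assignments at least as extreme
            count += 1
        total += 1
    return count / total
```

Because the type I error rate is controlled conditionally on the pooled data, the test remains valid under the design adaptations discussed above; for larger samples, the enumeration is replaced by random sampling of assignments.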

Sample size modifications in the interim analyses of an adaptive design can inflate the type I error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group.

The two-stage drop-the-loser design provides a framework for selecting the most promising of K experimental treatments in stage one, in order to test it against a control in a confirmatory analysis at stage two. The multistage drop-the-losers design is both a natural extension of the original two-stage design, and a special case of the more general framework of Stallard & Friede (Stat. Med.
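The mechanics of the two-stage design can be sketched in a toy simulation (illustrative only; a real confirmatory analysis must account for the stage-1 selection, e.g. with the adjusted tests and estimators this literature develops):

```python
import random
import statistics

def drop_the_loser_trial(true_means, n1, n2, sigma=1.0, seed=1):
    """Toy two-stage drop-the-loser trial with normal outcomes: stage 1
    samples every experimental arm, the arm with the best stage-1 sample
    mean goes forward, and stage 2 compares only that arm with a control
    arm whose true mean is 0."""
    rng = random.Random(seed)
    stage1 = [[rng.gauss(m, sigma) for _ in range(n1)] for m in true_means]
    selected = max(range(len(true_means)),
                   key=lambda k: statistics.fmean(stage1[k]))
    stage2_trt = [rng.gauss(true_means[selected], sigma) for _ in range(n2)]
    stage2_ctl = [rng.gauss(0.0, sigma) for _ in range(n2)]
    return selected, statistics.fmean(stage2_trt) - statistics.fmean(stage2_ctl)
```

Analyzing the stage-2 data as if the selected arm had been fixed in advance overstates the evidence, which is the selection bias the design's dedicated inference methods correct.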
