Schedule

All times listed are in Australian Eastern Daylight Time (AEDT) UTC +11.

DAY 1

- Tuesday 17 October -

9:30 - 10:00am

Registration

Opening Plenary

Chair: Katherine Lee

Moderator: Rebecca Harding

10:00 - 10:15am

Rory Wolfe

Professor of Biostatistics, Monash University

Primary Chief Investigator, Australian Trials Methodology (AusTriM) Research Network

Welcome and Acknowledgement of Country

10:15 - 11:00am

James Wason

Professor of Biostatistics, Newcastle University (UK)

Making more rare disease trials feasible with innovative statistical methods

It is challenging to do well-powered randomised clinical trials for rare diseases due to the low number of potential participants available to enrol. This means many rare diseases lack effective treatments, with some patients frustrated by there being no trials available. Innovative clinical trial approaches can help by reducing the sample size required for the same level of evidence and by making the trial more attractive for participants to enrol in.

In this talk I will showcase several statistical design and analysis approaches that we have applied in clinical trials for a rare autoimmune liver disease called Primary Biliary Cholangitis (PBC). This includes: 1) use of a basket trial approach for sharing information between a drug's efficacy for PBC and Parkinson's Disease; 2) use of an improved statistical analysis method for a PBC trial that used liver function normalisation as its primary endpoint; 3) use of high-quality longitudinal registry data from PBC patients to create a synthetic control group to assess long-term effectiveness for a trial that had switched patients to the drug after one year of placebo-control. I will discuss how these methods could be taken further with more development, leading to more rare disease trials becoming feasible to do and eventually better treatments for patients.
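As a schematic of the information-sharing idea behind approach 1) above (a generic normal-normal hierarchical model, not the specific PBC/Parkinson's model used in the trial), each disease-specific estimate is shrunk towards a common mean:

\[
\hat\theta_j \mid \theta_j \sim N(\theta_j, \sigma_j^2), \qquad \theta_j \sim N(\mu, \tau^2), \qquad j \in \{\text{PBC},\ \text{Parkinson's}\},
\]
\[
E[\theta_j \mid \text{data}] = w_j \hat\theta_j + (1 - w_j)\hat\mu, \qquad w_j = \frac{\tau^2}{\tau^2 + \sigma_j^2},
\]

where \(\hat\mu\) is a precision-weighted average of the study-specific estimates. The smaller the assumed between-disease heterogeneity \(\tau^2\), the more information is borrowed, which is what reduces the sample size needed in the rare indication.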

11:00 - 11:30am

Tea break

Cluster randomised trials

Chair: Andrew Forbes

Moderator: Rhys Bowden

11:30 - 11:50am

Fan Li

Assistant Professor of Biostatistics, Yale School of Public Health (US)

Model-robust and efficient covariate adjustment for cluster-randomized experiments

Cluster-randomized experiments are increasingly used to evaluate interventions in routine practice conditions, and researchers often adopt model-based methods with covariate adjustment in the statistical analyses. However, the validity of model-based covariate adjustment is unclear when the working models are misspecified, leading to ambiguity of estimands and risk of bias.

In this work, we first adapt two conventional model-based methods, generalized estimating equations and linear mixed models, with weighted g-computation to achieve robust inference for cluster-average and individual-average treatment effects. To further overcome the limitations of model-based covariate adjustment methods, we propose an efficient estimator for each estimand that allows for flexible covariate adjustment and additionally addresses cluster size variation dependent on treatment assignment and other cluster characteristics. Such cluster size variations often occur post-randomization and, if ignored, can lead to bias of model-based estimators.

For our proposed efficient covariate-adjusted estimator, we prove that when the nuisance functions are consistently estimated by machine learning algorithms, the estimator is consistent, asymptotically normal, and efficient. When the nuisance functions are estimated via parametric working models, the estimator is triply robust. Simulation studies and analyses of three real-world cluster-randomized experiments demonstrate that the proposed methods are superior to existing alternatives.
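For readers less familiar with the two estimands, one common way to write them (illustrative notation, with \(N_i\) the size of cluster \(i\) and \(Y_{ij}(a)\) the potential outcome of individual \(j\) in cluster \(i\) under arm \(a\); not necessarily the notation used in the talk) is

\[
\Delta_{\text{cluster}} = E\!\left[\frac{1}{N_i}\sum_{j=1}^{N_i}\bigl\{Y_{ij}(1)-Y_{ij}(0)\bigr\}\right],
\qquad
\Delta_{\text{individual}} = \frac{E\!\left[\sum_{j=1}^{N_i}\bigl\{Y_{ij}(1)-Y_{ij}(0)\bigr\}\right]}{E[N_i]}.
\]

The two coincide when cluster size is unrelated to the within-cluster treatment effect; when it is not, the distinction matters, which is the informative cluster size issue the abstract raises.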

11:50am - 12:10pm

Laura Balzer

Associate Professor of Biostatistics, University of California, Berkeley (US)

Minimizing bias and maximizing efficiency in cluster randomized trials

Cluster (group) randomized trials (CRTs) often face the dual challenges of missing data and limited sample sizes. In this talk, we highlight the use of targeted maximum likelihood estimation (TMLE) with Super Learner to reduce bias due to missing outcomes and to improve precision when estimating effects in studies with few independent units and high levels of dependence within those units.


We illustrate with the SEARCH Study, a CRT for HIV prevention and improved community health in rural Kenya and Uganda. We conclude with some open challenges and proposed solutions in the design and analysis of CRTs.
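The following is a minimal single-level sketch of the TMLE recipe, with simple parametric working models standing in for Super Learner and fully observed outcomes; it is not the cluster-level SEARCH analysis, and all variable names, models and effect sizes are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=n)                         # baseline covariate
A = rng.binomial(1, 0.5, n)                    # randomised treatment
Y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.8 * A + 0.5 * W))))  # binary outcome

# Step 1: initial outcome regression Q(A, W) (logistic working model)
X = np.column_stack([np.ones(n), A, W])
Q_fit = sm.GLM(Y, X, family=sm.families.Binomial()).fit()
Q_AW = Q_fit.predict(X)
Q_1 = Q_fit.predict(np.column_stack([np.ones(n), np.ones(n), W]))
Q_0 = Q_fit.predict(np.column_stack([np.ones(n), np.zeros(n), W]))

# Step 2: treatment mechanism g(W) = P(A = 1 | W); known by randomisation
g1 = np.full(n, 0.5)

# Step 3: targeting step -- fluctuate the initial fit along the 'clever covariate'
H = A / g1 - (1 - A) / (1 - g1)
offset = np.log(Q_AW / (1 - Q_AW))
eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
             offset=offset).fit().params[0]

expit = lambda x: 1 / (1 + np.exp(-x))
Q_1_star = expit(np.log(Q_1 / (1 - Q_1)) + eps / g1)
Q_0_star = expit(np.log(Q_0 / (1 - Q_0)) - eps / (1 - g1))

print("TMLE estimate of the risk difference:", np.mean(Q_1_star - Q_0_star).round(3))
```

The approach described in the abstract builds on this recipe, replacing the parametric working models with Super Learner and additionally handling missing outcomes and the clustered design.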

12:10 - 12:30pm

Lisa Yelland

Biostatistician Senior Research Fellow, South Australian Health and Medical Research Institute

Design and Analysis of Perinatal Trials with Partial Clustering

Many perinatal trials include infants from both single and multiple births, resulting in partial clustering in the data. This form of clustering has implications for both the design and analysis of these trials that are often overlooked in practice.


In this presentation, I will reflect on both my research in this field and my experience working on perinatal trials affected by partial clustering. I will answer practical questions such as (i) how are multiple births dealt with in practice, (ii) how should multiple births be accounted for in sample size calculations, (iii) how should multiple births be accounted for in the analysis, and (iv) does it really matter if you ignore the clustering?


I will also consider the problem of informative cluster size, where outcomes and/or treatment effects differ between infants from single and multiple births, and discuss the implications for defining the estimand of interest in perinatal trials.
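As a back-of-envelope illustration of question (ii) above (an approximation derived under simplifying assumptions, not a formula quoted from the talk: infants from a multiple birth are randomised to the same arm, clusters are of size two, and outcomes within a birth share a common intra-cluster correlation \(\rho\)), the variance of a treatment-group mean based on \(N\) infants is inflated to

\[
\operatorname{Var}(\bar Y) \approx \frac{\sigma^{2}}{N}\,(1 + \rho\, p),
\]

where \(p\) is the proportion of infants who come from a multiple birth. With \(\rho = 0.7\) and \(p = 0.1\), the required sample size increases by roughly 7%; conversely, ignoring the clustering in the analysis tends to make standard errors anticonservative.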

12:30 - 1:30pm

Lunch break

1:30 - 3:00pm

Cluster randomised trials

Chair: Elizabeth (Liz) Ryan

Moderator: Andrew Forbes

Applied innovation

Chair: Rebecca Harding

Moderator: Rory Wolfe

🌶️🌶️🌶️

For the concurrent sessions, the Scientific Program Committee have applied a "statistical spiciness" rating out of 3 to describe the statistical complexity of each talk.

This is intended as a guide to assist clinicians and other non-statisticians in determining which of the concurrent presentations may be more suitable for them to attend.

All presentations are recorded so that delegates can catch up later on any concurrent talk they are unable to watch live.

Please refer to the abstract (each talk title links to the PDF) for further information on the content of each talk.

🌶️🌶️🌶️

2:30 - 2:45pm

Ehsan Rezaeidarzi

Identifying cost-efficient incomplete stepped wedge designs

🌶️🌶️🌶️

3:00 - 3:30pm

Tea break

Innovative trial designs

Chair: Stephane Heritier

Moderator: Elizabeth (Liz) Ryan

3:30 - 3:50pm

Haiyan Zheng

Senior Lecturer in Statistics, University of Bath (UK)

Multiplicity adjustment in basket trials permitting pairwise borrowing of information

Basket trials are increasingly used for the simultaneous evaluation of a new treatment in various patient subgroups that share a commonality (e.g., a genetic mutation or clinical symptom). The new treatment, which targets the mutant gene or symptom, is typically hypothesised to be efficacious for some, if not all, of the patient subgroups. Sophisticated analysis models for basket trials featuring borrowing of information between subgroups are preferred over stand-alone analyses, which regard the subgroups in isolation. Such analysis models can generally yield higher power to detect a treatment benefit while accommodating patient heterogeneity.


Conclusions about a treatment benefit, however, should be made with caution given the multiple, substudy-specific questions. Moreover, implementing borrowing of information could mean increased risks of incorrect decisions, as information from one substudy influences the others. This talk will highlight the need for multiplicity correction in basket trials. Perspectives will also be given on the future development of procedures beyond Bonferroni correction.
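For non-statisticians, the baseline correction referred to here is the Bonferroni rule (stated generically, not as used in the talk): if each of the \(K\) substudy-specific hypotheses is tested at level \(\alpha/K\), then under the global null

\[
\Pr(\text{at least one false positive}) \;\le\; \sum_{k=1}^{K} \frac{\alpha}{K} \;=\; \alpha .
\]

Borrowing information across substudies changes the joint behaviour of the substudy-specific tests, which is why the adequacy of such simple corrections needs re-examination.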

3:50 - 4:10pm

Marta Bofill Roig

Postdoctoral Researcher, Medical University of Vienna (AT)

Analysis approaches for platform trials utilising non-concurrent controls

Platform trials aim to evaluate the efficacy of multiple treatments, allowing for late entry of experimental arms and enabling efficiency gains by sharing controls. For arms that join the trial later, the control data are divided into concurrent controls (CC) and non-concurrent controls (NCC). Using NCC for treatment-control comparisons can improve power but might cause biased estimates if there are time trends. Several approaches have been proposed to utilize NCC while aiming to maintain the integrity of the trial. Frequentist model-based approaches adjust for potential bias by adding time as a covariate to the regression model.


The Time Machine considers a Bayesian generalized linear model that uses a smoothed estimate for the control response over time. The Meta-Analytic-Predictive prior approach estimates the control response by combining the CC data with a prior distribution derived from the NCC data. In this talk we review the analysis approaches proposed for incorporating NCC in the treatment-control comparisons of platform trials. We investigate the operating characteristics of the considered approaches by means of a simulation study, focusing on assessing the impact of the overlap between treatment arms and the strength of the time trend on the performance of the evaluated models. We furthermore present the R-package "NCC" for the design and analysis of platform trials. We illustrate the use of the above-mentioned approaches and show how to perform simulations in a variety of settings through the "NCC" package.


Bofill Roig, M., Krotka, P., Burman, C.F., Glimm, E., Gold, S.M., Hees, K., Jacko, P., Koenig, F., Magirr, D., Mesenbrink, P., Viele, K., Posch, M. On model-based time trend adjustments in platform trials with non-concurrent controls. BMC Medical Research Methodology, 22(1), 1–16 (2022).

Krotka, P., Hees, K., Jacko, P., Magirr, D., Posch, M., and Bofill Roig, M. NCC: An R-package for analysis and simulation of platform trials with non-concurrent controls. SoftwareX, 23 (2023): 101437.
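To make the "time as a covariate" adjustment described above concrete, here is a minimal sketch in plain Python (not the "NCC" R-package itself; the trial layout, effect sizes and time trend are all illustrative assumptions) comparing a later-entering arm with controls while adjusting for calendar period:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Illustrative platform trial: arm 1 recruits in periods 1-2, arm 2 joins in period 2,
# controls recruit throughout, and the control response drifts upwards over time.
rows = []
for period, arms in [(1, ["control", "arm1"]), (2, ["control", "arm1", "arm2"])]:
    for arm in arms:
        effect = {"control": 0.0, "arm1": 0.3, "arm2": 0.3}[arm]
        time_trend = 0.5 * (period - 1)          # the source of bias if ignored
        y = rng.normal(effect + time_trend, 1.0, 100)
        rows.append(pd.DataFrame({"y": y, "arm": arm, "period": period}))
df = pd.concat(rows, ignore_index=True)
df["arm1"] = (df.arm == "arm1").astype(int)
df["arm2"] = (df.arm == "arm2").astype(int)

# Naive comparison of arm 2 against *all* controls (concurrent + non-concurrent)
naive = smf.ols("y ~ arm2", data=df[df.arm != "arm1"]).fit()

# Model-based adjustment: calendar period enters as a covariate, so the
# non-concurrent controls contribute without biasing the arm 2 contrast.
adjusted = smf.ols("y ~ arm1 + arm2 + C(period)", data=df).fit()

print("true arm 2 effect:  0.3")
print("naive estimate:    ", round(naive.params["arm2"], 3))
print("adjusted estimate: ", round(adjusted.params["arm2"], 3))
```

The adjusted model recovers an approximately unbiased estimate here because the simulated time trend is additive and common to all arms, which is the key assumption behind this type of adjustment.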

4:10 - 4:30pm

David Robertson

Senior Research Associate, University of Cambridge (UK)

Point estimation after adaptive designs

In adaptive clinical trials, the conventional end-of-trial point estimate of a treatment effect is prone to bias, that is, a systematic tendency to deviate from its true value. As stated in recent FDA guidance on adaptive designs, it is desirable to report estimates of treatment effects that reduce or remove this bias. However, it may be unclear which of the available estimators are preferable, and their use remains rare in practice. In this talk, we discuss how bias can affect standard estimators and assess the negative impact this can have.

We review current practice for reporting point estimates and illustrate the computation of different estimators using a real adaptive trial example. Finally, we propose guidelines for researchers around the choice of estimators and the reporting of estimates following an adaptive design. The issue of bias should be considered throughout the whole lifecycle of an adaptive design, with the estimation strategy pre-specified in the statistical analysis plan. When available, unbiased or bias-reduced estimators are to be preferred.
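As a toy numerical illustration of the bias being described (a simulation under assumed design parameters, not an example from the talk): in a two-stage design that can stop early for efficacy, the conventional estimate is on average too large among trials that stop at the interim analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
delta, sigma = 0.2, 1.0            # true treatment effect and outcome SD (illustrative)
n1 = n2 = 100                      # per-arm sample sizes at stages 1 and 2
z_stop = 2.5                       # interim efficacy boundary (arbitrary choice)
n_sims = 200_000

se1 = sigma * np.sqrt(2 / n1)                       # SE of the stage-1 effect estimate
d1 = rng.normal(delta, se1, n_sims)                 # stage-1 estimates
stop = d1 / se1 > z_stop                            # trials that stop early for efficacy
d2 = rng.normal(delta, sigma * np.sqrt(2 / n2), n_sims)
pooled = (n1 * d1 + n2 * d2) / (n1 + n2)            # conventional end-of-trial estimate
reported = np.where(stop, d1, pooled)               # what each trial would report

print("true effect:                           ", delta)
print("mean reported estimate, early stoppers:", reported[stop].mean().round(3))
print("mean reported estimate, all trials:    ", reported.mean().round(3))
```

The conditional bias among early stoppers is substantial, while the unconditional bias over all trials is smaller but still non-zero, which is why the choice and reporting of estimators discussed in the talk matters.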

4:30 - 4:40pm

Short break

Trial analysis

Chair: Rory Wolfe

Moderator: Anurika De Silva

4:40 - 5:10pm

Jonathan Bartlett

Professor in Medical Statistics, London School of Hygiene & Tropical Medicine (UK)

Hypothetical Estimands in Clinical Trials: A Unification of Causal Inference and Missing Data Methods

The ICH E9 addendum introduces the term intercurrent event to refer to events that happen after treatment initiation and that can either preclude observation of the outcome of interest or affect its interpretation. It proposes five strategies for handling intercurrent events to form an estimand, but does not suggest statistical methods for estimation.


In this talk I will focus on the hypothetical strategy, where the treatment effect is defined under the hypothetical scenario in which the intercurrent event is prevented. Historically such estimands have been estimated using missing data methods such as multiple imputation, after deleting any data recorded after the intercurrent event occurs.


In this talk I will define such estimands using the potential outcome framework from causal inference. Doing so allows us to exploit various results from this literature, including clear explication of the causal assumptions required to estimate such effects and (apparently) different estimation methods, such as the G-formula. I will describe, however, that certain "causal inference estimators" are in fact essentially identical to certain "missing data estimators". I will also discuss how causal inference estimation approaches open up the possibility of using data observed after the intercurrent event for more efficient estimation.
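As a concrete, deliberately simplified sketch of the G-formula estimation mentioned above (an illustration of the general idea, assuming a single baseline covariate, a binary intercurrent event, and that event-free patients stand in for the hypothetical "event prevented" scenario given treatment and the covariate; variable names and models are illustrative, not the talk's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({
    "trt": rng.binomial(1, 0.5, n),      # randomised treatment
    "x": rng.normal(size=n),             # baseline covariate
})
# Intercurrent event (e.g. rescue medication) and outcome -- simulated for illustration
df["event"] = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 0.8 * df.x))))
df["y"] = 1.0 * df.trt + 0.7 * df.x + rng.normal(size=n)
df.loc[df.event == 1, "y"] = np.nan      # hypothetical strategy: discard post-event outcomes

# G-formula: model the outcome among event-free patients, predict for everyone, average
fit = smf.ols("y ~ trt + x", data=df[df.event == 0]).fit()
mu1 = fit.predict(df.assign(trt=1)).mean()
mu0 = fit.predict(df.assign(trt=0)).mean()
print("estimated hypothetical-strategy treatment effect:", round(mu1 - mu0, 3))
```

Multiply imputing the discarded outcomes under the same outcome model and then comparing arms would, in large samples, give essentially the same answer, which is the kind of equivalence between "causal inference estimators" and "missing data estimators" the talk examines.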

5:10 - 5:25pm

Eleanor (Ellie) Van Vogt

NIHR Doctoral Fellow, Imperial College London (UK)

Detection of treatment effect heterogeneity in trials using causal machine learning methods

Traditionally, subgroup analyses in randomised controlled trials involve statistical tests for interactions between treatment and patient characteristics. However, these analyses usually must be prespecified, and testing multiple characteristics inflates the type I error rate.


Causal forests and Bayesian causal forests are examples of non-parametric causal machine learning methods, which aim to estimate treatment effects dependent on baseline covariates. I demonstrate the application of these methods to trials where the treatment did not show a significant benefit overall and seek to define subgroups within the trial population that experienced a significantly positive or negative treatment effect.


This talk will highlight the potential benefits of using such models to identify subgroups with heterogeneous treatment effects which might not otherwise have been considered, and discuss how these methods might be incorporated into trial analyses.
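For readers curious what covariate-dependent treatment-effect estimation looks like in code, here is a deliberately simple stand-in (a "T-learner" built from off-the-shelf random forests, not the causal forest or Bayesian causal forest estimators discussed in the talk); the data, covariates and cut-offs are simulated assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n, p = 2000, 5
X = rng.normal(size=(n, p))                    # baseline covariates
A = rng.binomial(1, 0.5, n)                    # randomised treatment
tau = np.where(X[:, 0] > 0, 1.0, -0.5)         # heterogeneous true effect (illustrative)
Y = 0.5 * X[:, 1] + A * tau + rng.normal(size=n)

# T-learner: fit an outcome model per arm, take the difference of their predictions
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[A == 1], Y[A == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[A == 0], Y[A == 0])
cate = m1.predict(X) - m0.predict(X)           # estimated conditional treatment effects

# Candidate subgroup: patients with a clearly positive predicted benefit
benefit = cate > 0.5
print("share of patients with predicted benefit > 0.5:", round(benefit.mean(), 2))
print("mean true effect within that subgroup:         ", round(tau[benefit].mean(), 2))
```

Causal forests and Bayesian causal forests refine this basic idea with honest sample splitting, regularisation of the effect surface and uncertainty quantification, refinements aimed at exactly the post-hoc subgroup discovery problem described in the abstract.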

5:25 - 5:55pm

Thomas Jaki

Professor of Statistics, University of Regensburg (DE) and University of Cambridge (UK)

Heterogeneity in treatment effects: random or real

An important goal of personalized medicine is to identify heterogeneity in treatment effects and then use that heterogeneity to target the intervention to those most likely to benefit. Heterogeneity can be assessed using the predicted individual treatment effects framework.

This framework, however, will always produce individualized predictions, irrespective of whether these reflect real heterogeneity or just chance variation. To establish whether the observed heterogeneity in effects is real, we develop a permutation test for the presence of significant heterogeneity and discuss how to validate predictions.
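One way such a permutation test could be sketched (a generic illustration under assumed choices of heterogeneity statistic and working model, not the authors' procedure):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n, p = 1500, 4
X = rng.normal(size=(n, p))                    # baseline covariates
A = rng.binomial(1, 0.5, n)                    # randomised treatment
Y = X[:, 0] + A * (0.5 + 0.8 * X[:, 1]) + rng.normal(size=n)   # effect varies with X[:, 1]

def pite_sd(X, a, y):
    """SD of predicted individual treatment effects from a linear model with a
    treatment main effect and treatment-by-covariate interactions."""
    design = np.column_stack([X, a, a[:, None] * X])
    fit = LinearRegression().fit(design, y)
    k = X.shape[1]
    pite = fit.coef_[k] + X @ fit.coef_[k + 1:]   # b_A + b_AX . x for each patient
    return pite.std()

observed = pite_sd(X, A, Y)
# Permuting treatment labels generates the distribution of the statistic under the
# sharp null of no treatment effect at all; separating genuine heterogeneity from an
# overall (homogeneous) effect requires refinements of the kind the talk discusses.
null = np.array([pite_sd(X, rng.permutation(A), Y) for _ in range(500)])
print("permutation p-value:", round(float(np.mean(null >= observed)), 3))
```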

5:55 - 6:00pm

Rory Wolfe

Professor of Biostatistics, Monash University

Primary Chief Investigator, Australian Trials Methodology (AusTriM) Research Network

End of day wrap up

6:00 - 7:00pm

Networking reception

DAY 2

- Wednesday 18 October -

9:30 - 10:00am

Registration

Causal inference

Chair: Julie Marsh

Moderator: Tom Snelling

10:00 - 10:30am

Miguel Hernan

Kolokotrones Professor of Biostatistics and Epidemiology, Harvard University (US)

Causal estimands: Should we ask different causal questions in randomized trials and in the observational studies that emulate them?

The FDA, EMA, and other regulators have embraced the so-called "estimand framework".


This talk discusses the concept of a causal estimand, the shortcomings of the current framework, and ways to improve it for both randomized trials and observational studies that emulate target trials.

10:30 - 11:00am

Fan Li

Professor of Statistical Science and of Biostatistics & Bioinformatics, Duke University (US)

Principal Stratification for addressing intercurrent events in clinical trials: some recent advances in methods and software

The ICH E9 guidelines on Statistical Principles for Clinical Trials list principal stratification (PS) as a statistically valid approach for analyzing clinical trials with intercurrent events. Originating from the instrumental variable approach to noncompliance in randomized trials, PS has been extended to study a wide range of intercurrent events, including truncation due to death and nonignorable missing data.

However, there are several barriers to the wide adoption of PS in clinical trials, including the lack of extensions to time-to-event outcomes and of software packages. In this talk, I will review some recent developments addressing these barriers. In particular, I will highlight the PStrata package.
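As a brief reminder of the idea for non-specialists (generic notation, assuming a binary intercurrent event; not notation taken from the talk): with \(S(a)\) indicating whether the intercurrent event would occur under treatment \(a\), each patient belongs to one of four principal strata defined by the pair \((S(0), S(1))\), and a principal-stratum estimand conditions on such a stratum, for example

\[
\theta \;=\; E\bigl[\,Y(1) - Y(0)\;\big|\;S(0) = S(1) = 0\,\bigr],
\]

the treatment effect in patients who would not experience the intercurrent event under either arm. Because stratum membership is never fully observed, estimation requires additional assumptions and careful computation, which is where the methodological developments and the PStrata software highlighted in the talk come in.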

11:00 - 11:15am

Panel Discussion

11:15 - 11:45am

Tea break

Survival analysis

Chair: Serigne Lo

Moderator: Thao Le

11:45 - 12:10pm

Annabel Webb

PhD Candidate, Macquarie University and Biostatistician, Cerebral Palsy Alliance Research Institute

A maximum penalised likelihood approach for Cox models with time-varying covariates and partly-interval censored survival data

Time-varying covariates can be important predictors when analysing time-to-event data. A Cox model that includes time-varying covariates is usually referred to as an extended Cox model, and when only right censoring is present, the conventional partial likelihood method is applicable to estimate the regression coefficients of this model. However, model estimation becomes more complex in the presence of partly-interval censoring.


This talk will detail the fitting of a Cox model using a maximum penalised likelihood method for partly-interval censored survival data with time-varying covariates. The baseline hazard function is approximated using spline functions, and a penalty function is used to regularise this approximation. Asymptotic variance estimates for the regression coefficients and related survival quantities are provided. The method will be illustrated in an application to breast cancer recurrence, followed by a discussion of the use of this extended Cox model for prediction.
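Schematically (illustrative notation for the general approach described, not necessarily the exact formulation used in the talk), the baseline hazard is written as a spline expansion and a roughness penalty is added to the log-likelihood:

\[
h_0(t) \;=\; \sum_{k=1}^{K} \theta_k\, \psi_k(t), \qquad
\Phi(\beta, \theta) \;=\; \ell(\beta, \theta) \;-\; \lambda\, \theta^{\top} R\, \theta,
\]

where \(\ell\) is the log-likelihood accommodating the right-, left- and interval-censored contributions, the \(\psi_k\) are non-negative spline basis functions, \(R\) is a roughness matrix (e.g. \(R_{kl} = \int \psi_k''(t)\,\psi_l''(t)\,dt\)), and the smoothing parameter \(\lambda\) controls how strongly the baseline hazard estimate is regularised.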

12:10 - 12:35pm

Giorgos Bakoyannis

Assistant Professor of Statistics, Athens University of Economics and Business (GR)

Estimating optimal individualized treatment rules with multistate processes

Multistate process data are common in studies of chronic diseases such as cancer. These data are ideal for precision medicine purposes as they can be leveraged to target more refined health outcomes than standard survival outcomes, and to incorporate patient preferences regarding quantity versus quality of life. However, there are currently no methods for the estimation of optimal individualized treatment rules with such data.


In this work, we propose a nonparametric outcome weighted learning approach for this problem in randomized clinical trial settings. The theoretical properties of the proposed methods, including Fisher consistency and asymptotic normality of the estimated expected outcome under the estimated optimal individualized treatment rule, are rigorously established. A consistent closed-form variance estimator is provided and methodology for the calculation of simultaneous confidence intervals is proposed. Simulation studies show that the proposed methodology and inference procedures work well even with small sample sizes and high rates of right censoring. The methodology is illustrated using data from a randomized clinical trial on the treatment of metastatic squamous-cell carcinoma of the head and neck.
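For orientation, the generic single-stage outcome weighted learning objective (standard notation for a randomised trial with a larger-is-better outcome \(R\) and known randomisation probabilities \(\pi\); the multistate extension is the subject of the talk) chooses the rule \(d\) to maximise

\[
V(d) \;=\; E\!\left[\frac{R\,\mathbf{1}\{A = d(X)\}}{\pi(A \mid X)}\right],
\]

which can be recast as a weighted classification problem: each patient's received treatment is the "label" and the weight \(R/\pi(A\mid X)\) determines how costly it is to misclassify that patient.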

12:35 - 1:00pm

Ronald Geskus

Associate Professor of Biostatistics, Oxford University Clinical Research Unit (VN)

Competing risks, when and how to incorporate them in the analysis

In the end we all die, but not all at the same age and from the same cause. Competing risks are event-type outcomes that are mutually exclusive. The presence and role of competing risks in the analysis of time-to-event data are increasingly acknowledged. However, confusion remains regarding the proper analysis. When and how do we need to take the presence of competing risks into account? What is the right estimand in relation to our research question? What assumptions do we need to make for unbiased estimation?


In a marginal analysis we imagine a world in which the competing risks are absent. Often a marginal analysis is difficult to perform due to dependence between competing risks. Moreover, it may describe a completely hypothetical setting, especially if the competing events are of a biological nature.


In a competing risks analysis the main quantities are the cause-specific cumulative incidence, the cause-specific hazard and the subdistribution hazard. The nonparametric estimator of the cause-specific cumulative incidence can be based on each of the three quantities, and in a specific formulation these estimators are algebraically equivalent. We give an overview of regression models for each of these quantities and explain their difference in interpretation. We briefly discuss the preferred analysis in relation to the type of study question (inferential, predictive or causal) and the character of the competing risks (interventions or biological events).
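For reference, the three quantities mentioned can be written as follows (standard definitions, with \(T\) the event time, \(D\) the event type and \(S\) the overall survival function; not notation specific to the talk):

\[
h_k(t) = \lim_{\Delta t \downarrow 0} \frac{\Pr\bigl(t \le T < t+\Delta t,\, D = k \mid T \ge t\bigr)}{\Delta t},
\qquad
F_k(t) = \Pr(T \le t,\, D = k) = \int_0^t S(u^-)\, h_k(u)\, du,
\]
\[
\lambda_k(t) = \lim_{\Delta t \downarrow 0} \frac{\Pr\bigl(t \le T < t+\Delta t,\, D = k \mid T \ge t \text{ or } (T < t,\, D \ne k)\bigr)}{\Delta t},
\qquad
F_k(t) = 1 - \exp\!\left(-\int_0^t \lambda_k(u)\, du\right),
\]

i.e. the cause-specific hazard, the cause-specific cumulative incidence and the subdistribution hazard. Regression on the cause-specific hazard (e.g. a cause-specific Cox model) and regression on the subdistribution hazard (e.g. a Fine-Gray model) answer different questions, which is the difference in interpretation the talk unpacks.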

1:00 - 2:00pm

Lunch break

2:00 - 3:30pm

Bayesian trial design

Chair: Rhys Bowden

Moderator: Serigne Lo

Innovative analysis methodology

Chair: Elaine Pascoe

Moderator: Kylie Lange

🌶️🌶️🌶️

For the concurrent sessions, the Scientific Program Committee have applied a "statistical spiciness" rating out of 3 to describe the statistical complexity of each talk.

This is intended as a guide to assist clinicians and other non-statisticians in determining which of the concurrent presentations may be more suitable for them to attend.

All presentations are recorded so that delegates can catch up later on any concurrent talk they are unable to watch live.

Please refer to the abstract (each talk title links to the PDF) for further information on the content of each talk.

🌶️🌶️🌶️

3:15 - 3:30pm

Julie Marsh

Platform Trials 2.0

🌶️🌶️

3:30 - 4:00pm

Tea break

Health economics of trials

Chair: Robert Mahar

Moderator: Anurika De Silva

4:00 - 4:30pm

Rachael Morton

Professor of Health Economics and Health Technology Assessment, University of Sydney

Value of Information analysis to guide investment in randomised trials – an example from perinatal medicine

Abstract TBA shortly

4:30 - 4:55pm

Laura Flight

Scientific Adviser – Science Policy and Research Programme, National Institute for Health and Care Excellence (UK)

The use of health economics in adaptive clinical trials

Adaptive clinical trial designs are increasingly used to improve the efficiency of clinical trials, but it is currently unclear what impact they have on health economic analyses that aim to maximise the health gained for money spent. Additionally, opportunities are potentially being missed to incorporate health economic considerations into the design and analysis of adaptive trials.


We review the use of health economics in the design and analysis of adaptive clinical trials, including the potential opportunities and barriers to this approach as highlighted by a range of stakeholders. We then describe how existing theory can be extended to the health economic context to 1) maintain an accurate economic evaluation following an adaptive trial and 2) incorporate value of information analysis at the design stage and during an adaptive design.
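For readers new to value of information analysis, the central quantity (a standard definition, not specific to the talk) compares deciding now with deciding after uncertainty is resolved: for decision options \(d\) with net benefit \(\mathrm{NB}(d, \theta)\) and uncertain parameters \(\theta\),

\[
\mathrm{EVPI} \;=\; E_{\theta}\!\left[\max_{d} \mathrm{NB}(d,\theta)\right] \;-\; \max_{d}\, E_{\theta}\!\left[\mathrm{NB}(d,\theta)\right],
\]

the expected gain from perfect information. The expected value of sample information (EVSI) replaces "perfect information" with the information a particular trial design, or a particular interim adaptation, would actually provide, which is what makes these quantities relevant at the design stage of an adaptive trial.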

4:55 - 5:00pm

Rory Wolfe

Professor of Biostatistics, Monash University

Primary Chief Investigator, Australian Trials Methodology (AusTriM) Research Network

Closing remarks