Statistical Insights: Designing clinical studies for success and not failure.
Aiden Flynn, CEO, Exploristics
One of the most common complaints I have heard from biostatisticians is that they don’t get involved early enough in the clinical trial design process. By the time they do get involved, many of the decisions have already been made: the patient population; the data that need to be collected; the primary and secondary endpoints; the number of visits. Often the budget has already been agreed, which ultimately constrains the sample size, so the role of the biostatistician is reduced to finding a justification for a sample size that has largely been set. The reality is that it is possible to trawl through the literature and results from previous studies to find a set of assumptions that will support virtually any sample size. It is also possible to make that justification sound convincing. However, many clinical trials are set up for failure because the study team does not invest the time in planning and designing the study for success. Given that clinical evidence and study success are determined by the data, it is a tragedy that we do not ensure we collect the right data in the right way from the right patients.
The pharmaceutical industry has long wondered why the attrition rate in clinical development exceeds 90% and how it can be improved. The truth is that many clinical trial failures are entirely avoidable through better planning and the harnessing of in silico models. We live in a world with increasing access to large volumes of data, yet we still conduct studies that cannot recruit patients, studies with high withdrawal rates because we overburden the patient, studies that have selected the wrong endpoint, and studies that are underpowered because we have made optimistic assumptions about the variability and size of a treatment effect. Most clinical trials do not adequately consider all the sources of uncertainty that could result in a failed study, and do not put in place study design and analysis strategies that can mitigate the risks.
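To make the cost of optimistic assumptions concrete, here is a minimal sketch (all figures hypothetical, using a standard normal-approximation power formula) of how the power of a simple two-arm comparison collapses when the true effect is smaller, and the variability larger, than assumed at the planning stage:

```python
import math

def z_test_power(delta, sigma, n_per_arm, z_alpha=1.959963984540054):
    """Approximate power of a two-sided, two-sample z-test at the 5% level.

    delta: assumed treatment effect; sigma: assumed common SD;
    n_per_arm: patients per arm. Normal approximation throughout.
    """
    se = sigma * math.sqrt(2.0 / n_per_arm)
    z = delta / se - z_alpha
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Optimistic planning assumptions: effect 0.5, SD 1.0 -> 64/arm gives ~80% power
n = 64
print(f"planned power:  {z_test_power(0.5, 1.0, n):.2f}")   # 0.81
# Reality: smaller effect, larger variability -> the same study is underpowered
print(f"realised power: {z_test_power(0.35, 1.2, n):.2f}")  # 0.38
```

A study sized for roughly 80% power under the optimistic assumptions delivers under 40% power under the more realistic ones, for exactly the same budget.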
There is no excuse. Why do we work in silos when we need to collaborate to ensure we have understood and captured the issues raised by multiple domain experts? Why is a sample size calculation conducted in isolation, without consideration of the impact on the ability to recruit? Why do we assume that the data we collect in a complex study will be perfect, without unexpected deviations, missingness or variability? Why do we overburden patients by taking large volumes of measurements without first knowing the incremental value of those additional measurements? Why do we blindly assume that we have a wonder drug that will have a magnitude of effect that far exceeds anything that has been observed before? The simple answer to all of those questions is that we shouldn’t, but we often do.
The simple truth is that biostatisticians and data scientists hold the key to success. Engaging with statisticians early will go some way to reducing attrition by properly managing the risks and uncertainty. Whilst earlier engagement represents an important cultural shift, it is not enough on its own. To bring insight and clarity to a study at the planning stage, statisticians need to be able to extract relevant information from swathes of data sources and then use that information to construct plausible study scenarios in which a range of design features can be evaluated. This painstaking research, bespoke programming and modelling need to be completed within tight study timelines. At best, this is an extremely difficult task, not helped by the fact that many of the existing study design tools were developed to support an outdated model in which many of the decisions were made before the statistician got involved.
Many statisticians will say that they make the greatest impact when they get involved early in the design process rather than trying to recover something from the wreckage of a failed study. However, it takes a long time to extract information from multiple data sources and domain experts, and to construct and evaluate study designs and what-if scenarios. There was a clear market need for tools and technology which could augment the work of statisticians and prevent the avoidable failures. There was a need to enable access to data sources, to efficiently extract information and to use the information to better understand variability and uncertainty. There was a need for a simulation platform that could generate realistic multivariate synthetic data for a range of plausible scenarios and then evaluate the ability of design and analysis options to achieve clinical and statistical success. There was a need to assess the robustness of controllable design features in situations where aspects of the study cannot be controlled. There was a need to understand the impact of study designs on the ability to operationalise the study. There was a need to visualise the results of complex simulations across numerous dimensions so that all team members could understand and interpret them. There was a need to collaborate with project team members, to rule options in or out and make rapid iterative improvements to the study design. There was a need to consider the design of individual studies as part of an evidence generation pipeline, ensuring they collect the right information to de-risk future development. There was a need to take a holistic approach to study design and planning, one that moved beyond the siloed approach of the past. That’s why Exploristics has built a new platform and ecosystem called KerusCloud: to place statistics and data science at the heart of informed decision-making and to reduce the risks and costs of clinical development.
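The scenario-evaluation workflow described here can be illustrated in miniature (a hypothetical sketch, not the KerusCloud platform itself) with a Monte Carlo simulation: generate synthetic trial data under a range of plausible assumptions about effect size and dropout, and estimate the probability that each design achieves statistical success before committing to it:

```python
import math
import random
import statistics

def simulate_power(delta, sigma, n_per_arm, dropout=0.0,
                   z_alpha=1.959963984540054, n_sims=2000, seed=1):
    """Monte Carlo power estimate for a two-arm parallel design.

    Each simulated trial draws normal outcomes for treatment (mean delta)
    and control (mean 0), keeps only completers, and applies a two-sided
    z-test at the 5% level. All parameter values are illustrative.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_sims):
        n_eff = round(n_per_arm * (1.0 - dropout))  # completers only
        trt = [rng.gauss(delta, sigma) for _ in range(n_eff)]
        ctl = [rng.gauss(0.0, sigma) for _ in range(n_eff)]
        se = math.sqrt(statistics.variance(trt) / n_eff
                       + statistics.variance(ctl) / n_eff)
        if abs((statistics.mean(trt) - statistics.mean(ctl)) / se) > z_alpha:
            successes += 1
    return successes / n_sims

# Evaluate a small grid of what-if scenarios before committing to a design:
# the optimistic plan, the same plan with 20% dropout, and a weaker effect.
for delta, dropout in [(0.5, 0.0), (0.5, 0.2), (0.35, 0.2)]:
    power = simulate_power(delta, sigma=1.0, n_per_arm=64, dropout=dropout)
    print(f"effect={delta:.2f}, dropout={dropout:.0%}: power ~ {power:.2f}")
```

In a full evaluation the same machinery extends to correlated multivariate endpoints, missingness patterns and alternative analysis methods; the point is that each scenario’s impact on the probability of success is quantified before a single patient is enrolled.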