By Kimberley Hacquoil, Exploristics CDSO
I recently attended the Statisticians in the Pharmaceutical Industry (PSI) flagship annual conference which had many fantastic presentations and sessions. I was reflecting on a couple of themes which resonated for me in my current role.
Learning from the past
There were two amazing keynote speakers at the conference who covered different aspects of learning from the past.
Timandra Harkness gave an overview of the life and work of John Graunt in the 17th century. She discussed how he used the weekly Bills of Mortality to discover patterns and draw insights from them. Viewed as the “Christopher Columbus” of statistics, Graunt invented analytical tools we still use today in his work on understanding plague mortality. Learning about him reminded me that the methods we use are not ultimately the innovative aspect of our work; what matters is the impact and influence the work leaves behind, and what people can learn from it in the future.
Andy Grieve talked about teaching young dogs old tricks and the value statisticians can bring without the use of high-powered computer models or machine learning techniques. He wasn’t advocating that we abandon searching for or using innovative techniques and approaches, but rather that we don’t always need them to solve every problem. As statisticians, it is vital that we first understand the problem before we delve into using a computer to solve it. Clients often come to us when they have a problem with their project or study, and I spend a lot of time discussing the challenges they are facing. Andy’s talk reminded me of the importance of this step in providing the right solution.
We often talk about learning from our prior experience, learning from others and utilising historical information. For example, when designing clinical trials, we need to ensure that appropriate and relevant historical information is included for the challenges we face in a given situation. We don’t need to rush immediately to a complicated design: understanding the situation and exploring different options and scenarios are key steps in optimising designs and balancing the benefits and risks.
Learning from failure
There were a couple of sessions related to embracing failure and learning from your personal mistakes in order to grow and develop within your career. This got me thinking about my personal growth and how much I challenge myself in the work that I do, but it also made me think about the pharmaceutical industry more broadly.
Failure within this industry is not an alien concept: around 90% of compounds fail to make it to market. We are used to high attrition rates, but this doesn’t mean we should blindly accept failure; instead, we need to embrace it with greater transparency so we can learn from it. This will allow us to:
- Fail fast
Utilising adaptive clinical trials allows teams to fail fast when a compound is unlikely to succeed, saving time, money and, most importantly, patients. Statisticians can explore different decision rules and test the operating characteristics of different design options. This allows for informed decision-making at the design stage, so that the risks of any decision-making framework are quantified and understood by the team. Correctly failing fast is a good thing, as it means we can refocus resources on compounds more likely to bring benefits to patients.
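The kind of exploration described above can be sketched with a simple simulation. Everything below is an illustrative assumption rather than a real design: a two-stage trial with normally distributed outcomes (unit variance), an interim futility rule based on the z-statistic, and the specific thresholds and sample sizes are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(true_effect, n_per_stage=50, futility_z=0.0,
                   final_z=1.96, n_sims=10_000):
    """Estimate operating characteristics of a two-stage design.

    At the interim, the trial stops for futility if the z-statistic for
    the treatment-vs-control difference falls below `futility_z`;
    otherwise it continues, and success is declared if the pooled
    z-statistic exceeds `final_z`.
    """
    stopped_early = 0
    successes = 0
    for _ in range(n_sims):
        # Stage 1 data: one arm per group, normal outcomes with unit SD
        trt1 = rng.normal(true_effect, 1.0, n_per_stage)
        ctl1 = rng.normal(0.0, 1.0, n_per_stage)
        z1 = (trt1.mean() - ctl1.mean()) / np.sqrt(2 / n_per_stage)
        if z1 < futility_z:
            stopped_early += 1
            continue
        # Stage 2 data, then a pooled final analysis
        trt = np.concatenate([trt1, rng.normal(true_effect, 1.0, n_per_stage)])
        ctl = np.concatenate([ctl1, rng.normal(0.0, 1.0, n_per_stage)])
        z2 = (trt.mean() - ctl.mean()) / np.sqrt(2 / (2 * n_per_stage))
        if z2 > final_z:
            successes += 1
    return stopped_early / n_sims, successes / n_sims

# Operating characteristics under two assumed "truths"
for effect in (0.0, 0.4):
    p_stop, p_success = simulate_trial(effect)
    print(f"effect={effect}: P(stop early)={p_stop:.2f}, "
          f"P(success)={p_success:.2f}")
```

Running this shows the trade-off the team must understand before the trial starts: under a null effect the futility rule stops roughly half of trials early, while under a real effect it only rarely kills a promising compound.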
- Fail virtually
We have the technology and skills to inform study design through simulation within a virtual environment (in silico) and observe the outcomes. We can use this to explore different scenarios under multiple assumed “truths” and make better decisions for the real study. Failing in a virtual setting can help us set up more studies for success, and hence reduce real-life failures due to poor study design.
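Exploring scenarios under multiple assumed “truths” can be as simple as a grid of simulated designs. The sketch below is purely illustrative: it assumes normal outcomes with unit standard deviation and a one-sided z-test, and the effect sizes and sample sizes in the grid are arbitrary candidates, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulated_power(effect, n_per_arm, alpha_z=1.96, n_sims=5_000):
    """Estimate the power of a two-arm trial by simulation
    (normal outcomes, unit SD, one-sided z-test)."""
    # Simulate n_sims trials at once: each row is one trial's arm means
    trt = rng.normal(effect, 1.0, (n_sims, n_per_arm)).mean(axis=1)
    ctl = rng.normal(0.0, 1.0, (n_sims, n_per_arm)).mean(axis=1)
    z = (trt - ctl) / np.sqrt(2 / n_per_arm)
    return (z > alpha_z).mean()

# Scenario grid: each assumed "truth" (effect size) crossed with
# candidate sample sizes, so weak designs fail virtually, not in patients
for effect in (0.2, 0.3, 0.5):
    for n in (50, 100, 200):
        print(f"effect={effect}, n/arm={n}: "
              f"power≈{simulated_power(effect, n):.2f}")
```

Seeing an underpowered design fail across most of the grid costs a few seconds of compute; seeing it fail in a real study costs years and patients.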
Designing and running clinical trials is hard: it’s time-consuming, expensive and has a low probability of success. We can learn from the past, and from failing, to help us forge a more successful future.