By Aiden Flynn
Ten common questions:
1. What is clinical trial simulation?
In general terms, simulation uses a mathematical model to describe a system or process, such as a clinical study. The model can then be used to test and evaluate that process in a virtual environment under different sets of conditions to see how each performs. In the case of clinical trials, simulation is used to assess the performance of different design options before the actual study is run. A study designed using simulation can be more efficient and have a higher chance of success.
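To make the idea concrete, here is a minimal, purely illustrative sketch of the principle (this is not KerusCloud's method; the effect size, variability, arm size and test are all hypothetical choices for the example): a Monte Carlo simulation that estimates a two-arm trial's probability of success by simulating it many times under assumed conditions.

```python
import math
import random

random.seed(0)

def prob_success(n_per_arm, effect=0.5, sd=1.0, n_sims=4000):
    """Fraction of simulated trials where a two-sided z-test rejects at the 5% level."""
    se = sd * math.sqrt(2.0 / n_per_arm)  # standard error of the difference in means
    wins = 0
    for _ in range(n_sims):
        # Simulate one virtual trial: draw outcomes for both arms and compare means.
        control = sum(random.gauss(0.0, sd) for _ in range(n_per_arm)) / n_per_arm
        treated = sum(random.gauss(effect, sd) for _ in range(n_per_arm)) / n_per_arm
        if abs(treated - control) / se > 1.96:
            wins += 1
    return wins / n_sims

print(prob_success(64))
```

Re-running `prob_success` with different assumed effect sizes or arm sizes is the essence of comparing design options virtually before committing patients to a study.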
2. How can simulation help with recruitment issues?
Pharma has incredibly high clinical study failure rates despite record R&D spending, placing an unnecessarily heavy burden on the patients involved in these studies. With recruitment a widely recognised barrier to study success, there is an urgent need for a more radical approach beyond spending more money or collecting more data. One way to avoid unnecessary failure due to recruitment issues is to design smarter studies upfront. Study design using simulation supports this by delivering key insights into the drivers of study success, including the minimum number of patients required to answer the clinical question at hand, helping teams to quantify risks and develop mitigation strategies. In this way, simulation takes a more holistic approach to recruitment needs than a sample size calculation alone. It delivers a more ethical strategy by reducing patient burden and risk, and it ensures that clinical project teams can derive meaningful information from smaller patient numbers, easing recruitment pressures and shortening development timelines.
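As a simplified illustration of how simulation can quantify the minimum number of patients needed, the sketch below (all parameters are hypothetical, and a real platform would model far more factors) searches a coarse grid of per-arm sizes for the smallest one whose simulated power reaches a target:

```python
import math
import random

random.seed(2)

def power(n_per_arm, effect=0.4, sd=1.0, sims=2000):
    """Monte Carlo power of a two-arm, two-sided z-test at the 5% level."""
    se = sd * math.sqrt(2.0 / n_per_arm)
    wins = 0
    for _ in range(sims):
        mean_c = sum(random.gauss(0.0, sd) for _ in range(n_per_arm)) / n_per_arm
        mean_t = sum(random.gauss(effect, sd) for _ in range(n_per_arm)) / n_per_arm
        if abs(mean_t - mean_c) / se > 1.96:
            wins += 1
    return wins / sims

def minimum_n(target=0.80, step=10, max_n=500):
    """Smallest per-arm size on a coarse grid whose simulated power meets the target."""
    for n in range(step, max_n + 1, step):
        if power(n) >= target:
            return n
    return None  # target not reachable within max_n

print(minimum_n())
```

In a real design exercise this kind of search would be repeated across plausible effect sizes and other assumptions, turning "how many patients do we need?" into a quantified, risk-aware answer rather than a single point estimate.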
3. How has the utility of clinical trial simulation changed?
While simulation is not a new technique, several factors are driving its uptake right now. Extracting new insights from the current explosion in real-world data sources could transform clinical development. Information from sources such as electronic health records, medical insurance claims, disease registries, medication/device registries and wearables can be integrated into some simulation platforms to extract valuable new insights to inform development. Moreover, the pandemic is driving the rise of decentralised trials to circumvent current recruitment challenges and reduce patient burden, which means that more clinical trial subjects are collecting data at home. Consequently, organisations increasingly need to understand how best to harness information from these large, complex and messy datasets, and the data quality issues that can arise with this real-world approach. Study simulations allow companies to examine multiple study variables like these at the same time, where their potential impact is unknown, and to test drive different study designs upfront to identify potential issues which may not have been anticipated and mitigate them in advance. These highly realistic parallel simulations are now possible thanks to the increased processing power available in the cloud. Growing acceptance of cloud computing, together with a new generation of statisticians with the multi-disciplinary skills to build and use simulation capabilities, means that the technique is gaining increasing traction in the sector.
4. Are regulators accepting simulation-based study design methods?
Regulators see the value in these methods as a viable way to improve clinical studies while lessening patient burden. Since the FDA launched its Critical Path Initiative, it has continued to encourage companies to use model-based drug development despite some inertia in the sector. For example, the FDA has highlighted the value of using modelling and simulation to predict clinical outcomes, inform clinical trial designs, support evidence of effectiveness, identify the most relevant patients to study, and predict product safety. More recently, it has also been evaluating how virtual patient cohorts could replace clinical trials in some cases to further reduce patient burden. Simulation can create a more robust evidence package before engaging with regulators, driving more successful interactions with them as it shows you have ‘done your homework’. In this context, it provides a tool with which to demonstrate the reasoning behind your study design decisions and indicates that you have done everything possible from an ethical perspective to avoid unnecessary recruitment, reducing patient burden. Tapping into a readily available simulation software package like KerusCloud also has an added advantage over bespoke in-house simulations in that regulators can easily review and replicate your data-driven decision-making process.
5. How common is it for small/medium sized companies to use study simulation?
Despite the real benefits that could be gained by small to medium sized companies from study simulation, particularly as more are taking their therapeutic candidates further through the development process, only a small number have access to study simulation as an approach to designing their studies. This can be due to a lack of IT infrastructure to support this kind of software, or simply a lack of in-house statistics expertise. However, with cloud-based, SaaS simulation software platforms, IT infrastructure is no longer an issue as it is supported by the provider, and the required statistics expertise can now also be outsourced. This gives small/medium companies a highly cost-effective way to access this powerful technology, which can provide key benefits, especially in generating statistical evidence packages for regulators, for out-licensing products, or when seeking external investment at key transition points in their development programmes.
We have already seen the wins that some of the smaller companies we have worked with have experienced using study simulation. For one small company developing inhaled formulations of a treatment for RSV, it helped them to identify the most efficient way to generate evidence while keeping patient discomfort to a minimum using a less invasive approach to sampling. Not only did the use of simulation help to reduce patient burden, but it also decreased study duration by a year. Another small company we worked with was developing an antibacterial agent and seeking an accelerated path to approval. Simulation enabled them to identify a study design which could generate an initial evidence package acceptable to regulators using 180 patients rather than 1,000, saving them £18M and potentially reducing time to market by 3-5 years. Both companies were small with limited resources, yet a modest upfront investment in study simulation provided a ‘win-win’: patient burden was relieved while both study costs and timelines were reduced.
6. How do companies implement this tool?
A cloud-based SaaS simulation tool like KerusCloud is primarily designed for use by statisticians. It can be licenced for direct use by in-house statisticians with onboarding support from our teams. However, for those with limited statistics capabilities we offer wrap-around packages where we implement the software for you. A lack of IT infrastructure or in-house statistics resources is therefore no longer a barrier to accessing a simulation tool. Going forward, companies can also adopt a more flexible hybrid model as their in-house statistics capabilities change.
7. What are the remaining barriers to widescale adoption?
Despite the increasing ease of access and the multiple obvious benefits, a few barriers remain that prevent even wider adoption of simulation. These are:
A perceived need for speed – Clinical project teams often feel under pressure to get a study under way quickly to ensure sufficient recruitment, and worry that spending time on simulation will slow this down. However, study simulation is a fast and effective way of ensuring that recruitment targets are no larger than necessary to answer the clinical question at hand. As our case studies show, it offers much more insight than a sample size calculation alone into the number and type of patients that need to be recruited. Importantly, it reduces patient burden and risk by ensuring that only the right number of patients are included in a study and by identifying exactly which measurements and/or interventions are critical to answering the clinical question. Informed by the right information, simulations can be generated in minutes and evaluated in hours to provide insights that could shave years from clinical development timelines. Therefore, investing a small amount of time at the outset to fully understand which parameters could underpin a study’s success can save a great deal of time in the longer term.
A matter of money – Paying upfront to pinpoint the critical drivers of study success can also be off-putting for project teams unused to this more prospective statistical approach. However, the cost of study simulation at the outset is completely dwarfed by the cost of running a clinical trial. With a trial costing US$20-30M on average, investing a modest sum in simulation to ensure that you are implementing the best study design can prevent the enormous financial losses incurred by a failed trial and deliver a better return on R&D investment. The likelihood of study success depends heavily on selecting the right design and statistical analysis approach, yet the sum typically spent on statistics for a clinical trial protocol (usually to calculate sample size) still ranges between a few hundred and a few thousand dollars. With so much trial failure still occurring, we need to question whether enough is being done at this early stage to de-risk clinical studies.
The status quo – Unfortunately, there is still some inertia within the sector when it comes to embracing new technologies and approaches. Traditionally, trials have been designed by project teams mostly comprised of specialists in the biological mechanism, therapeutic area or intervention of interest. In these circumstances, many of the key study parameters are often set long before a statistician is brought into the clinical protocol design process, usually to calculate sample size. However, we are currently in the grip of a digital revolution in life sciences which has created a proliferation in the types of data now available to inform healthcare. In this changing environment, traditional project teams may not have the skillset to understand the intricacies and potential pitfalls associated with the increasingly complex data now collected. They may also not be fully aware of how to mitigate upfront, through study design, many of the problems commonly encountered with complex datasets that can lead to unnecessary study failure. Therefore, there is now a pressing need for a data specialist such as a statistician to be brought in right at the start of the clinical protocol design process rather than towards the end. In this new digital era, asking a statistician only for sample size calculations represents a missed opportunity to build success into a development programme upfront through statistical simulation.
8. What differentiates KerusCloud?
Uniquely, the KerusCloud simulation tool can capture statistically the real complexity of a clinical trial, rather than relying on the traditional oversimplified view of potential risk factors. It is frequently assumed that there are only one or two unknowns (e.g., effect size and uncertainty) when designing a study, whereas in reality there may be many. Designing studies using simulation encourages earlier engagement across project teams, and with statisticians, in pinpointing the best design and analysis strategies for a clinical study protocol. A tool like KerusCloud allows you to capture all available knowledge before initiating a trial to build realistic large, virtual, synthetic patient populations with which to create thousands of parallel simulated ‘what if’ scenarios, identify key risk factors and mitigate them upfront. Other tools focus on fewer scenarios where decisions have already been taken in a traditional fashion (e.g., population, primary endpoints), whereas KerusCloud can account for more complex studies where there is a need to understand the complex inter-relationships that can have an impact on outcome.
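The ‘what if’ idea can be sketched in miniature. The toy example below (purely illustrative, not KerusCloud's engine; the effect sizes, dropout rates and arm size are all hypothetical) sweeps a small grid of scenarios combining two unknowns, effect size and dropout, to show how jointly varying assumptions changes a design's simulated chance of success:

```python
import math
import random

random.seed(1)

def trial_success(n_per_arm, effect, sd=1.0, dropout=0.0, sims=2000):
    """Probability of a significant two-sided z-test when some patients drop out."""
    wins = 0
    for _ in range(sims):
        # Number of completers per arm under the assumed dropout rate.
        n_c = sum(random.random() > dropout for _ in range(n_per_arm))
        n_t = sum(random.random() > dropout for _ in range(n_per_arm))
        if n_c < 2 or n_t < 2:
            continue  # too few completers to analyse this simulated trial
        mean_c = sum(random.gauss(0.0, sd) for _ in range(n_c)) / n_c
        mean_t = sum(random.gauss(effect, sd) for _ in range(n_t)) / n_t
        se = sd * math.sqrt(1.0 / n_c + 1.0 / n_t)
        if abs(mean_t - mean_c) / se > 1.96:
            wins += 1
    return wins / sims

# Sweep a small grid of 'what if' scenarios: effect size x dropout rate.
for effect in (0.3, 0.5):
    for dropout in (0.0, 0.2):
        print(f"effect={effect}, dropout={dropout}: "
              f"P(success)={trial_success(80, effect, dropout=dropout):.2f}")
```

A production platform would of course vary many more inter-related factors at once (endpoints, correlations, missingness patterns, analysis strategies), but the principle is the same: enumerate scenarios, simulate each in parallel, and compare.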
9. How does simulation improve enrolment with a difficult to recruit population?
Currently it is common practice to have statisticians estimate the study sample size in isolation before the study is passed to operations to deliver. In general, there isn’t much discussion of the likely study constraints. Today, statisticians have many tools and approaches they can use to ensure that the study design accounts for those constraints while maintaining a high chance of success. They can now consider strategies to implement more efficient studies and reduce enrolment targets, including reducing patient burden, improving adherence and persistence, integrating real-world information, using adaptive designs, and using historical data as control arms. Engaging with statisticians early allows a more integrated and collaborative approach to study design in which all success factors are considered. Tools such as KerusCloud provide a more holistic approach to help control sample size where a population is difficult to recruit. In this context, simulation is important because it helps to better quantify and communicate the potential benefits and risks to patients and the research team.
10. What type of study or indication would not be good for a simulation model?
I believe that simulation needs to become mainstream as it offers a new gold standard for designing more effective clinical studies. Some studies are quite straightforward, and the risk of failure is low. However, the high failure rates of studies in the sector reflect that, more often, we are failing to capture the true complexity of real studies and to get a handle on the many unknowns that may be affecting their success. Simulation offers a rapid and cost-effective way to counter this for most types of study. Nevertheless, any modelling approach requires data to support its assumptions, so a first-in-human study, or any indication with very little data on which to build assumptions, is not ideal. This is rare, however, and we have not yet encountered a situation where we weren’t able to collect enough data to build a simulation model. In my view, with so many current and growing applications for simulation, why would any team design a study without it?