Aiden Flynn, CEO 11 August 2020
The challenges faced by the pharmaceutical industry have been well documented: the high costs of clinical trials; high rates of attrition; heightened regulatory scrutiny; limited scope for payers to absorb the high costs of new medicines; and diminishing returns on investment. For some time, this has been an industry in need of critical and comprehensive disruption. However, it is notoriously conservative and slow to adopt new approaches. Tied up by process and regulation, the sector has traditionally been risk averse and reluctant to reappraise and refresh its methods.
Over the past few months, I have heard many people comment that the COVID-19 pandemic might be the catalyst for change, a kind of enforced disruption that will transform the industry forever. Commentators and experts alike have touched on some of the areas that are ripe for change:
- Greater adoption of technology
- More virtual clinical trials
- Greater use of real-world data
- The application of artificial intelligence and machine learning to almost every conceivable dataset
- Less process, red tape and regulation
- Shorter, more concise protocols
Certainly, the industry is re-evaluating its role and pivoting hard during this crisis. As the world has turned to science for rapid solutions to this healthcare and economic nightmare, the global clinical research community has responded to the challenge. Some of the approaches listed above, and others too, are being tested as a consequence and may ultimately be adopted. There is little doubt that clinical trials could be more efficient, and that industry is learning lessons fast as it seeks to speed up development in response to COVID-19. However, we should be careful to avoid throwing the baby out with the bath water. The pursuit of efficiency and the desire to collect data quickly should not come at the expense of quality, good planning and optimal study design.

The fact remains that many clinical trials continue to fail for avoidable reasons, among them:

- Impractical studies
- Poorly defined hypotheses
- An overly optimistic view of the likely treatment effect
- Insufficient statistical power
- The wrong dose
- Inappropriate endpoints and decision criteria
- Sub-optimal data collection, assessment schedules and analysis
- Poor persistence and adherence to treatment
- Budgetary constraints

These, together with the current intractable difficulty in recruiting patients, result in potentially useful medicines falling by the wayside. This has the knock-on effect of driving up R&D costs and ultimately the cost of medicines to both patients and stakeholders.
Although many clinical trials could avoid failure through better planning and design, I have not heard many talk about the need to invest more time in the design stage. Better design enables a more informed risk-based approach: well-designed studies minimise the risk of concluding that a drug works when it doesn't, and maximise the likelihood of identifying safe and effective therapeutics. In short, it drives up the efficiency of clinical development and ultimately the speed of bringing effective drugs to market. We can do this by collecting the right data, in the right population, in a study that is the right size and has clear success criteria. In other words, the time when we can make the greatest impact on the success of a study is before we collect any data. To me, this is the area most lacking in many of the protocols I have reviewed, and it often goes unmentioned in any wider review of how to transform clinical development. Too often the sample size is not adequately justified, and the endpoints, analysis strategy and decision rules have not been scrutinised or clearly defined, leaving them open to interpretation. This lack of clarity ultimately leads to unnecessary failures, or to ambiguous results and more expensive failures further down the line.
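The point about justifying the sample size can be made concrete. The sketch below is a minimal illustration, not any particular company's method, of the kind of arithmetic a statistician runs before a single patient is enrolled: the standard normal approximation for a two-sided, two-sample comparison of means. The function name `n_per_group` and the default significance and power values are illustrative assumptions.

```python
from math import ceil
from statistics import NormalDist


def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate patients per arm for a two-sided, two-sample comparison
    of means, via the normal approximation.

    effect_size is the standardised difference between arms (Cohen's d).
    alpha and power defaults (0.05, 0.8) are conventional, illustrative choices.
    """
    z = NormalDist()                      # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)    # critical value for the two-sided test
    z_beta = z.inv_cdf(power)             # quantile giving the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)


# A "medium" standardised effect (d = 0.5) at 80% power needs roughly 63
# patients per arm; halving the assumed effect roughly quadruples the
# required sample size.
print(n_per_group(0.5))    # 63
print(n_per_group(0.25))   # 252
```

The second call illustrates why an overly optimistic view of the treatment effect is so costly: a modest error in the assumed effect size changes the required trial size several-fold, which is exactly the kind of risk that is cheap to examine at the design stage and expensive to discover afterwards.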
There is broad consensus about the importance of statistics and data science in modern clinical trials. Unfortunately, I would argue that the skills and tools needed to properly evaluate and mitigate the risks of clinical trials are sadly lacking or underutilised. The adoption of new technologies may well lead to the more efficient generation of large volumes of data that may form part of the evidence used to support the further development or marketing authorisation of new drugs. However, the application of data science has focused too much on improving the efficiency of data generation and on developing algorithms that can extract information from the data. Simply generating data and performing some analysis does not, on its own, bring clarity or additional evidence without first asking whether the study design is likely to answer adequately the specific clinical questions it seeks to address. In many cases, the use of new technological approaches simply adds to the noise, and may therefore prolong development timelines and increase costs. The use and adoption of new data approaches in clinical trials certainly has its place, but not in isolation. Rather, it should form part of an integrated data and evidence generation strategy that starts with early engagement with statisticians and a fully optimised trial design.