By Aiden Flynn, CEO of Exploristics
Clinical development is extremely costly in terms of money, time and patient burden. Financially, the median cost of a clinical trial supporting FDA approval of a new drug is US$19M, while the average total cost of developing a new marketable treatment is estimated at US$2B. In terms of time, each newly approved drug treatment takes on average 10-15 years in clinical development. However, anticipated timelines are frequently stretched: Phase I trials exceed initial expectations by 42% on average, Phase II trials last 31% longer, and Phase III trials extend 30% beyond initial deadlines. Slipping timelines add to the overall financial cost, with the estimated cost of each additional day a drug spends in development ranging from US$600,000 to US$8 million, denting returns on R&D investment.
The cost to patients should also not be underestimated. While overall trial recruitment is now rising thanks to better outreach, greater patient involvement in protocol development, mobile data-collection devices and virtual trials, retention remains difficult. This is understandable given the considerable burden associated with participation. Days lost from work for site visits, travel costs, and the physical and psychological strain of taking part all contribute to the rising dropout rate in late-stage trials globally, which increased from 15.3% in 2012 to 19.1% in 2019.
With so many costs and stakeholders, it is important to question clinical trial success rates and ask how studies are being de-risked. Figures on study success are complex: they vary by therapeutic area and trial phase, and the majority of failed investigational drug studies are never published in peer-reviewed journals. What remains clear, however, is that most drugs in clinical development fail. Studies have found that, across therapeutic areas, only 9.6% of drugs entering Phase I reach the market, and that only 30.7% and 58.1% of drugs succeed in Phases II and III, respectively. These figures can be even lower in some therapeutic areas: for cardiovascular agents, only 6.6% of those entering Phase I, 24% of those entering Phase II and 45% of those entering Phase III advance to market. The widely accepted overall failure rate is around 90%, a staggering figure given the vast and varied costs of running clinical trials. It suggests considerable room for improvement in the process, starting with a pressing need for developers to better understand the key drivers of study success.
When it comes to the key drivers of cost, the majority relate to study operations once the study has started. These include staff costs, clinical and laboratory procedures, site monitoring and source data verification. In contrast, the cost of study design and planning before the study starts is negligible, yet this is the one chance to de-risk the study and set it up for success. Once a protocol is finalised, the opportunity to make substantial changes is limited, and any amendment is likely to be severely disruptive, causing delays and substantial unforeseen costs. There is therefore a real responsibility to all stakeholders to get it right first time.
To get it right, several aspects of protocol development require more careful consideration. Too many protocols are unclear and complex, which makes them difficult to operationalise and undermines the quality of the data they generate. Many lack clearly defined success criteria. Many are written without taking the time to identify the risks and put strategies in place to overcome them. Let me give an example from a statistician's perspective. The statistics section of a protocol is the only section that explicitly quantifies the study's risk of reaching the wrong decision under the proposed design: in basic terms, concluding that the drug works when it doesn't (a false positive), or that it doesn't work when it does (a false negative). These risks are controlled via concepts like statistical power, alpha and sample size.
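To see how these three quantities interact, the standard normal-approximation formula for a two-arm comparison of means can be sketched in a few lines of Python. This is a minimal illustration of the general textbook calculation, not a description of any particular protocol; the function name and defaults are mine.

```python
from math import ceil
from statistics import NormalDist  # standard library, Python 3.8+


def per_arm_sample_size(effect_size: float, alpha: float = 0.05,
                        power: float = 0.80) -> int:
    """Per-arm n for a two-sided, two-arm comparison of means
    (normal approximation, standardised effect size)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # guards against a false positive
    z_beta = NormalDist().inv_cdf(power)           # guards against a false negative
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)


# A "medium" standardised effect of 0.5 at 5% alpha and 80% power:
print(per_arm_sample_size(0.5))  # → 63 per arm
```

Note how sensitive the answer is to the assumed effect: halving it to 0.25 roughly quadruples the requirement to 252 per arm, which is why over-optimistic effect-size assumptions are such a common cause of underpowered studies.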
Most statisticians will have experienced being asked to perform a sample size calculation for a study in which all other design decisions have already been made. They may even push back and offer more granular statistical insight into the risks, trying to rescue the study from likely failure. Yet this is often brushed aside in favour of speed, illustrating a common view of statistics as an afterthought rather than an integral partner in planning and strategy. This is an expensive error for all stakeholders in the development process given the huge overall cost of bringing a treatment to market. With a single clinical study costing around $20M, some sponsors are prepared to spend less than $1,000 to de-risk it. That's less than 0.005% of the budget… and we wonder why so many studies fail.
Statisticians play an important role in mitigating some key study risks by calculating the sample size. However, statisticians and data scientists can offer far more than this traditional support. With the increasing availability of relevant data and design tools, they can now evaluate the impact of a much broader set of potential risks, including:
- Risk relating to unrealistic assumptions that are the basis of sample size calculations
- Risk of selecting the wrong or sub-optimal study population
- Inability to recruit patients and risks related to slower recruitment rates
- Excessive burden on the patient leading to non-adherence and drop-outs
- Imbalanced or unexpected patterns of missing data 
- Risk that complexity impacts data quality
- Decision criteria that lead to ambiguous conclusions
- Risk of selecting the wrong endpoints
- Risk of selecting the wrong design (including opportunities to mitigate risk through planned study adaptations)
- Risk that study does not generate sufficient evidence to support further development, investment or approval
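Many of the risks above can be explored before the protocol is finalised by simulating the trial many thousands of times under different assumptions. As a sketch of the idea (the function and its assumptions are illustrative, not a description of any particular tool), the following Monte Carlo estimate shows how random dropout alone erodes the power of a nominally 80%-powered two-arm study:

```python
import random
from statistics import NormalDist, mean


def simulated_power(n_per_arm: int, effect: float, dropout: float,
                    alpha: float = 0.05, n_sims: int = 2000,
                    seed: int = 1) -> float:
    """Monte Carlo power for a two-arm trial with outcomes ~ N(0, 1) in the
    control arm and N(effect, 1) on treatment, analysing completers only.
    Dropout is assumed completely at random -- itself an optimistic call."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        # Completers remaining in each arm after random dropout
        n_c = sum(rng.random() > dropout for _ in range(n_per_arm))
        n_t = sum(rng.random() > dropout for _ in range(n_per_arm))
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_c)]
        trt = [rng.gauss(effect, 1.0) for _ in range(n_t)]
        # Two-sample z-statistic, known unit variance
        z = (mean(trt) - mean(ctrl)) / (1 / n_c + 1 / n_t) ** 0.5
        rejections += abs(z) > z_crit
    return rejections / n_sims


# 63 per arm gives roughly 80% power with no dropout, but at the 19.1%
# late-stage dropout rate cited above, estimated power falls to around 70%.
print(simulated_power(63, effect=0.5, dropout=0.0))
print(simulated_power(63, effect=0.5, dropout=0.191))
```

The value of this kind of sketch is that the same simulation loop extends naturally to the other risks listed: non-random missing-data patterns, slower recruitment curves, mis-specified variability and planned adaptive rules can all be layered in and their combined impact on the decision criteria quantified before a single patient is enrolled.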
What we now know is that investing more in the planning stage makes an enormous difference to the likelihood of study success. A more holistic approach to design that integrates statisticians much earlier in the process means the cost of that investment is more than recovered through efficiencies gained during the conduct of the study itself. Rethinking how trials are planned and designed therefore really matters, and the current approach must change. It didn't work, it isn't working, and it isn't going to work. The time to change is now. Don't let your investment in a clinical development programme fail because you failed to invest in design. Get statisticians involved at the earliest opportunity. You simply cannot afford not to.