Disparate and Desperate – curse of the diminishing N

By Kimberley Hacquoil, Exploristics CDSO

The pharmaceutical industry is under pressure to utilise novel and innovative methods to incorporate historical data and real-world evidence into the design and analysis of clinical trials. The objectives are to speed up clinical trials efficiently and effectively, reduce the costs associated with drug discovery and development, and improve decision-making. With so much data available, surely incorporating it into clinical trials would be easy, right? Well….

It’s not plain sailing.

Unfortunately, more data does not always mean useful data, or at least not in its current form. I have seen many examples where, although you may start out with hundreds of thousands of records from clinical trials with large patient numbers, ultimately only a small proportion of the data is applicable. This may be because eligibility criteria do not match up, the data collected is not relevant, or timepoints cannot be reconciled. Similarly, when utilising large real-world datasets there may also be issues with missing timepoints, differences in prognosis and discrepancies in treatment paradigms.

In the end this can result in only a small amount of data meeting your requirements. The curse of the diminishing N (sample size) is very real.

All is not lost!

Engaging with data scientists, research scientists and statisticians will ensure you get the most value out of your data whilst maintaining the integrity and reliability of the insights.

Working through data intricacies, data wrangling and data transformations is bread and butter to data scientists. It’s vital that the data is interrogated fully to identify how robustly it can be matched to the situation at hand. Understanding whether you can compare “apples to apples” is key, as there is no point in using data that is neither comparable nor applicable.
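As a toy illustration of the diminishing N in action (the records and criteria below are invented, not from any real study), a simple audit trail showing how each eligibility check shrinks a historical cohort can make the “apples to apples” problem concrete:

```python
# Hypothetical sketch: tracking how each eligibility criterion shrinks
# the usable sample (the "diminishing N"). Patient records and criteria
# are invented for illustration only.

def diminishing_n(patients, criteria):
    """Apply named eligibility filters in turn, recording N after each."""
    remaining = list(patients)
    audit = [("start", len(remaining))]
    for name, keep in criteria:
        remaining = [p for p in remaining if keep(p)]
        audit.append((name, len(remaining)))
    return remaining, audit

# Invented historical cohort: age, baseline score, and whether the
# endpoint was measured at the timepoint our new trial uses (week 12).
patients = [
    {"age": 54, "baseline": 22, "week12_visit": True},
    {"age": 71, "baseline": 18, "week12_visit": True},
    {"age": 48, "baseline": 31, "week12_visit": False},
    {"age": 66, "baseline": 25, "week12_visit": True},
    {"age": 39, "baseline": 12, "week12_visit": True},
]

criteria = [
    ("age 18-65 (trial eligibility)", lambda p: 18 <= p["age"] <= 65),
    ("baseline score >= 20",          lambda p: p["baseline"] >= 20),
    ("week-12 endpoint recorded",     lambda p: p["week12_visit"]),
]

matched, audit = diminishing_n(patients, criteria)
for step, n in audit:
    print(f"{step}: N = {n}")
# Here N falls from 5 to 1 after just three criteria.
```

The same pattern scales to real datasets: keeping an explicit audit of N after each filter shows exactly which criteria are responsible for the loss of comparable data.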

The statistician’s logical mindset can determine the best way to navigate the available data, the specific applications and the problem to be solved. Statisticians can help by building suitable models which translate available data and information into a usable form, for example through meta-analyses, extrapolation, imputation and Bayesian methods. Simulation of synthetic data can also augment the historical and real-world data to support decision-making and scenario testing for study designs. Another approach is to explore different study inclusion/exclusion options to align more closely with the historical or real-world data. In some circumstances it may therefore be possible to optimise comparability and reduce the impact of the diminishing N.
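One of the Bayesian borrowing techniques alluded to above can be sketched very simply. The example below uses a fixed-weight power prior for a binary endpoint (all counts and the weight a0 are invented for illustration; the article does not prescribe this specific method): the historical data are downweighted before being combined with the new trial’s data, so a small or partially comparable historical N contributes proportionately less.

```python
# Hypothetical sketch of one borrowing technique: a fixed-weight power
# prior for a response rate. With a conjugate Beta prior, raising the
# historical binomial likelihood to the power a0 simply discounts the
# historical counts by a0. All numbers here are invented.

def power_prior_posterior(x_hist, n_hist, x_new, n_new, a0, a=1.0, b=1.0):
    """Beta posterior for a response rate: Beta(a, b) initial prior,
    historical data discounted by a0 (0 = ignore, 1 = pool fully)."""
    post_a = a + a0 * x_hist + x_new
    post_b = b + a0 * (n_hist - x_hist) + (n_new - x_new)
    return post_a, post_b

# Historical study: 40/100 responders; new trial so far: 12/30.
# a0 = 0.5 borrows "half" of the historical information.
a_post, b_post = power_prior_posterior(40, 100, 12, 30, a0=0.5)
mean = a_post / (a_post + b_post)
print(f"Posterior Beta({a_post:.1f}, {b_post:.1f}), mean response rate {mean:.3f}")
# Prints: Posterior Beta(33.0, 49.0), mean response rate 0.402
```

Varying a0 between 0 and 1 is a natural way to scenario-test how sensitive a design is to the amount of historical borrowing, which fits the simulation-based approach described above.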

The future looks bright.

Staging the work and determining key go/no-go decision points is a fantastic way to approach situations where there are many unknowns about the relevance of a data source. It can help companies first scope out the current landscape: what data is out there, how applicable it is to their requirements, and what value it can bring. Getting this stage right is key. It makes the navigation and implementation of historical and real-world data far smoother and ensures you arrive at your desired destination.

 

Read more:

One size does not fit all

Birth of the data science team

Are external control arms worth the extra effort?

Developments in digital data

The liberation of real-world data

Going synthetic with real-world data

Data Science Services

Watch more:

I just need a simple sample size

Gain valuable insights from a constrained setting

Learn how KerusCloud helped provide evidence which was used to support raising US$30 million in a subsequent financing round.
