Clearly it is in no one’s interests for a clinical trial to fail. Whenever these high-stakes activities are abandoned and drug development pathways aborted, missed opportunities abound.
Beyond the sponsor’s missed commercial opportunity, there are missed opportunities to advance the science around treating a certain condition and, more obviously, missed opportunities to improve the lives of affected patients.
Trials continue to suffer from exceptionally high failure rates.
But clinical trials are of course a necessarily rigorous pursuit, and failure is therefore not an uncommon outcome. Indeed, research reinforces the generally accepted notion that only around 10% of drug development projects entering Phase I progress all the way through to approval, with the industry registering a composite success rate of 10.8% in 2023.[1]
However, the average composite success rate since 2010 is lower still, at 7.6%: a stark reminder that more than 90% of drugs entering clinical trials end in failure. Of course, there is nuance beneath such headline numbers, but they nevertheless provide evidence of an exceptionally high failure rate for an endeavour with such potential benefit and such significant costs.
Industry analysis of failed trials conducted in 2023 underlines the intensely challenging environment surrounding drug development.[2] While published in the hopeful vein that “2024 brings us much better data”, it is a sobering summary of the failures of a broad range of companies, from major pharmaceuticals to smaller biopharmaceuticals, with a cumulative cost likely running into the billions of dollars.
Medication adherence plays a big role in the success, or otherwise, of a trial.
The reasons behind clinical trial failure are complex and diverse. Predominantly, however, they fall into two main categories: lack of clinical efficacy or unmanageable toxicity. In both of these scenarios, output data from the trial is fundamental in laying the foundations for definitive conclusions. And while it can be argued that the data doesn’t lie, it is also true that where the data captured is flawed, of poor quality or of questionable integrity, it has the potential to skew the resulting analysis and influence judgements.
Medication adherence is of particular concern in relation to the quality of trial output. After all, it is inherent in the model that analyses are predicated on a participant being exposed to a drug at precisely the level indicated, whether that dose has been administered by a third party or the participant is given the responsibility to self-administer. Only with complete confidence in adherence can co-ordinators be satisfied of exposure levels, allowing them to derive solid inferences about post-exposure response.
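The way imperfect adherence clouds those inferences can be sketched with a toy simulation. This is a minimal, illustrative model, not drawn from any of the trials discussed: it simply assumes that a non-adherent participant in the treatment arm receives zero drug exposure, and all names and parameter values are hypothetical.

```python
import random

random.seed(0)

def simulate_trial(n, true_effect, adherence):
    """Toy two-arm trial: each treated participant is exposed to the
    drug with probability `adherence`; non-adherent participants
    receive no exposure and hence no effect (illustrative model)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = []
    for _ in range(n):
        exposed = random.random() < adherence
        treated.append(random.gauss(true_effect if exposed else 0, 1))
    # Observed effect = difference in arm means
    return sum(treated) / n - sum(control) / n

# Same drug, same true effect; only adherence differs
full = simulate_trial(100_000, true_effect=0.5, adherence=1.0)
partial = simulate_trial(100_000, true_effect=0.5, adherence=0.7)
print(f"observed effect at 100% adherence: {full:.2f}")
print(f"observed effect at  70% adherence: {partial:.2f}")
```

Under this simple model, 70% adherence shrinks the apparent treatment effect toward roughly 70% of its true size, so an analysis performed on the whole arm understates the drug’s real efficacy at the specified dose, which is precisely why unmeasured adherence undermines the conclusions drawn from a trial.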
So, why did these high-profile trials fail?
Clearly, when a drug under trial is administered via injection by a qualified healthcare professional in a clinical setting, trial co-ordinators have robust first-hand evidence of full adherence, with the precise dose having been delivered. Indeed, this was the case in three of the highlighted trial failures (for an HIV vaccine, macular degeneration treatment and gene therapy). Whether or not these trials had other challenges to overcome in relation to exposure is not clear, but with this dosing regimen it can be assumed that any issues are unlikely to be linked to adherence.
Of the remaining seven trials highlighted, administration was in an ambulatory context, via a combination of subcutaneous injection, oral administration and topical administration. These differences in administration route are significant, since the conditions were not necessarily present to deliver the same guarantees of confidence when it comes to adherence.
Topical administration, for example, is widely associated with poor adherence.[3] Indeed, research has previously evidenced a decline in adherence to a topical therapy over the course of a clinical trial, with common methods of measurement (medication logs and medication usage by weight) found to overestimate real-world usage levels.[4]
Furthermore, the majority of the failed trials did not involve professional administration. Add to that the complexities of a complicated regimen, with requirements encompassing twice-daily (BID) dosing, fortnightly dosing and unequal on/off periods. While such complexity is rooted in a strategy to achieve the target drug concentration, improve efficacy and limit side effects, it also has the unfortunate effect of increasing the likelihood of administration errors and poor adherence.[5]
Whether any of these risks were at play in these particular trials, we simply do not know. And with no information on reported adherence levels or the methods used to measure adherence, no conclusions can be drawn as to whether adherence was a factor in the failures.
Establishing hard evidence for your trial.
We do know, however, that uncertainty hangs over many ‘failed’ clinical trials where approaches to adherence are flawed. Put simply, any conclusions regarding efficacy and safety can only truly be conclusive if they are informed by robust data on adherence and, therefore, true exposure. It is not enough to specify a dosing regimen; there must also be absolute confidence in adherence to that regimen to understand a drug’s true effect.
Since adherence can rarely be guaranteed, introducing greater confidence in this crucial variable demands a reconsideration of the methods used to monitor and measure patient behaviour during a trial. And while legacy approaches such as pill counts are still widely used, methods such as electronic monitoring consistently deliver a superior measure of adherence by eliminating reliance on flawed self-reporting techniques that muddy understanding of the relationship between exposure and effect.
It is certainly a welcome ambition to push for “better data” in future, and better adherence is key to achieving that objective. In the best-case scenario, this could shift the outcome of a trial, turning a missed opportunity into a promising result and helping redress the overall failure rate. But even when failure does persist, if the analysis is based on more robust, reliable data, there is potential to extract more valuable learnings and change course. Not necessarily a case of preventing failure, but failing faster at the very least.
[1] https://www.iqvia.com/insights/the-iqvia-institute/reports-and-publications/reports/global-trends-in-r-and-d-2024-activity-productivity-and-enablers
[2] https://www.fiercebiotech.com/special-reports/2023s-top-10-clinical-trial-flops
[3] https://pubmed.ncbi.nlm.nih.gov/28188596/
[4] https://www.jaad.org/article/S0190-9622(04)00554-7/abstract