AARDEX Group

Moving clinical trials towards the value-rich model of continuous monitoring

In a clinical trial, every piece of data contributes to the overall story being told. No matter how small or seemingly insignificant, all the information that emerges over the course of the process adds rich detail to the narrative.

Two of the ‘buckets’ that such data will fall into are outcomes, which essentially describe the patient’s response, and drug exposure, i.e. the level of the drug within the patient’s body based on the doses taken. Closely and temporally interlinked, these two provide important indicators of both the efficacy and the safety of the therapy under investigation.

Despite their value to the interpretation of the trial’s findings, these data points are often not mined to their full potential. Indeed, many trials still follow fairly rudimentary protocols when it comes to measuring outcome and exposure: baseline values are recorded at the beginning of the trial, further measurements are taken at scheduled visits part-way through, and a final set is collected at the end.

While this information undeniably holds value, there is no escaping the fact that the data points are few and far between. For example, a year-long study with monthly visits generates a total of 13 such data points (including the baseline measure), meaning measurements are taken on just 13 of 365 trial days, or roughly 3.6% of the trial period. As such, the trial can be said to suffer from ‘data sparsity’, and organisers are forced to build arguments from comparatively weak foundations, joining the dots with logic, reason and judgement.
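To make the sparsity concrete, the arithmetic above can be sketched in a few lines of Python (the function name and figures are illustrative, mirroring the example in the text):

```python
# Illustrative comparison of sampling coverage: scheduled visits vs.
# continuous monitoring, using the year-long trial from the example above.

def sampling_coverage(measurement_days: int, trial_days: int) -> float:
    """Fraction of trial days on which a measurement is actually taken."""
    return measurement_days / trial_days

# Baseline plus 12 monthly visits = 13 measurement days in a 365-day trial.
scheduled = sampling_coverage(13, 365)
print(f"Scheduled visits: {scheduled:.1%} of trial days")        # 3.6%

# Daily (continuous) monitoring captures every day of the trial.
continuous = sampling_coverage(365, 365)
print(f"Continuous monitoring: {continuous:.0%} of trial days")  # 100%
```

The gap between the two figures is the ‘data sparsity’ the article describes: the same trial duration, but a roughly thirty-fold difference in how often the patient is actually observed.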

Nevertheless, examples of this approach to the measurement of outcome and exposure abound, whether it is blood pressure being sampled intermittently during a six-month trial for an antihypertensive drug or viral load being analysed over a year in the case of an HIV treatment. Furthermore, in trials for rheumatoid arthritis or depression treatments, the scores used to highlight any change in symptoms are often generated from self-reported patient questionnaires, with as much as six months between assessments. Even in population pharmacokinetics (popPK), sampling once every two months might mistakenly be regarded as relatively high frequency.

With the advent of electronic patient-reported outcomes (ePRO) and electronic clinical outcome assessment (eCOA), the potential for generating richer patient data in clinical trials has been significantly enhanced. These platforms moved the centre of gravity for outcome measurement to the subject, providing them with a convenient and reliable means to self-report data in a timely fashion and, crucially, far more frequently. Of course, the important caveat here is that these methods rely heavily on the patient’s subjective account of their own condition, with no interpretation provided by clinicians or other third parties.

Thanks to the rapid maturation of wearable hardware and technologies such as Artificial Intelligence (AI), both the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) are pursuing pathways designed to gather a far richer wealth of reliable, real-time outcome data generated at the patient source.

In a regulatory first, the EMA qualified a digital variable sourced through remote, continuous monitoring as a primary endpoint in a study of patients with the degenerative muscle disease Duchenne muscular dystrophy (DMD).[1] Previously, patients were required to participate in clinic-based physical tests led by investigators, but the limitations of this approach compromised the reliability of the findings. In contrast, by using an ankle-worn device, patients could discreetly generate and transmit valuable raw data on their movements within a natural environment. This data was then converted into measurements of stride velocity, with analysis performed to highlight changes over time.
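Conceptually, the conversion from raw strides into an endpoint that can be tracked over time might be sketched as follows. The qualified DMD endpoint is the 95th centile of stride velocity (SV95C); the function names below and the use of Python’s statistics module are illustrative assumptions, not the actual device pipeline:

```python
# Illustrative sketch (assumed names, not the device maker's pipeline):
# summarise per-stride velocities from a wearable into the 95th-centile
# stride velocity (SV95C) per monitoring period, then compare periods.
import statistics

def sv95c(stride_velocities: list[float]) -> float:
    """95th centile of stride velocity (m/s) for one monitoring period."""
    # statistics.quantiles(n=100) returns the 99 percentile cut points;
    # index 94 corresponds to the 95th percentile.
    return statistics.quantiles(stride_velocities, n=100)[94]

def sv95c_change(baseline: list[float], follow_up: list[float]) -> float:
    """Change in SV95C between two periods (negative suggests decline)."""
    return sv95c(follow_up) - sv95c(baseline)
```

Because every stride contributes, the statistic is built from thousands of observations per monitoring period rather than a handful of clinic visits, which is precisely the density advantage such wearables offer over intermittent in-clinic tests.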

A similar approach to continuous monitoring has been used in a separate clinical study. Here, wrist-worn activity monitors were used to assess changes in daily activity for patients at risk of pulmonary hypertension associated with pulmonary fibrosis when treated with pulsed, inhaled nitric oxide (iNO).[2]

These examples highlight the benefits of using such technology to elicit reliable measures of treatment outcomes. A further associated benefit is that it can ease the considerable burden of recruiting trial participants. With traditionally less reliable and more variable outcome measures, higher numbers of patients must be recruited to ensure any assessment of the impact of treatments is meaningful. This is problematic, however, in the context of an increasing focus on tackling rare diseases, which, by definition, affect small populations. In this fragile scenario, the ‘data potential’ represented by each participant is increased and there is greater pressure to maximise the value obtained: outcomes must be measured accurately.

Such developments begin to address the challenge of data sparsity in the measurement of patient outcomes. They do not, however, tackle some of the fundamental challenges to data integrity posed by common methods for measuring adherence, such as pill counts or self-report, which can be flawed and unreliable. A situation can therefore arise in which outcome sampling is robust and frequent and yet data on exposure remains opaque, preventing organisers from drawing causal inferences between these temporally interlinked factors.

This is particularly pertinent in trials exploring treatments for rare diseases, where there is pressure to ensure that high-integrity, increasingly high-frequency outcome data is matched by exposure data of equal integrity. Any ambiguity, inconsistency or opacity surrounding adherence has the potential to tarnish the validity of a richer stream of outcome data. As such, employing electronic adherence monitoring tools becomes an increasingly preferable option for organisers, ensuring the wealth of outcome data is complemented and validated by an equally rich stream of high-integrity adherence data, and allowing the maximum value to be extracted from the trial.

It follows, therefore, that both measures – exposure and outcome – should be subject to robust, continuous monitoring if we are to continue advancing the methods used and value extracted from clinical trials. It is only by marrying these data points in parallel that a true understanding of the variability in drug effect can be ascertained.

This is not based on the simple idea that ‘more is more’ when it comes to data; it is based on the premise that richer data points, when evaluated in tandem, have the potential to uncover more powerful insights. The result is a step-change in the quality of information available to trialists and a chance to accelerate progress on the treatments designed to improve patients’ lives.

[1] https://www.nature.com/articles/s41591-023-02459-5

[2] https://pubmed.ncbi.nlm.nih.gov/32092321/
