
Unmasking Bias in Clinical Research: A Historical Perspective


It’s an uncomfortable truth that decision-making is a murky business. While we would all like to think that our actions and the choices underpinning them are rooted in logic, integrity, and reason, the truth is not entirely clear-cut.

In reality, the purity of our thinking is sullied by a variety of factors, and we may or may not be aware of their influence. A case in point is bias, which describes the deep-held feelings, tendencies, and prejudices that can lead us to unknowingly deviate from a position of objective understanding and cause us to make misguided subjective judgments. Where bias is present, we cannot guarantee or fully trust that what is said describes the precise reality of what is happening.

In medicine, bias has long been accepted as a concept of fundamental importance because of its potential to skew understanding and distort decisions. Indeed, the Hippocratic Corpus reflects the Father of Medicine’s appreciation for the idea in the following line: “Keep a watch also on the faults of the patients, which often make them lie about the taking of things prescribed.” [1]

Much later, in 1747, Scottish physician James Lind brought knowledge of bias to one of the first-ever recognized clinical trials when he proved citrus fruit’s effectiveness as a scurvy treatment. While Lind’s achievements did not extend to establishing the link with vitamin C, they were notable because he conducted a systematic experiment and arrived at his conclusions based on hard evidence, where the idea had previously been supported only by conjecture.

At the end of the 18th century, British physician John Haygarth further advanced the validity of clinical trials when he became the first person to demonstrate the placebo effect. His work involved comparing a popular – and expensive – treatment of the day against a dummy alternative. In revealing no difference in efficacy, Haygarth said his experiment illuminated “what powerful influence upon diseases is produced by mere imagination.” [2]

Haygarth and his successors ultimately laid the foundations for Henry K. Beecher’s 1955 paper, The Powerful Placebo, which quantified the placebo effect and paved the way for placebo-controlled clinical trials. Since the 1960s, this has been recognized as the accepted standard to adjust for conscious and unconscious bias in drug development studies.

Slightly earlier, in the 1940s, another aspect common to modern-day studies – the double-blind comparative trial – was first conducted by the UK’s Medical Research Council to safeguard against results being influenced by patients, physicians, and trial administrators. This milestone was quickly followed by the first trial to explore the randomized allocation of patients, conducted by Professor (later Sir) Austin Bradford Hill [3] and building on pioneering work previously carried out by renowned statistician Ronald Fisher. [4] Bradford Hill’s work further sought to minimize bias and introduce greater rigor into the examination of the relationship between intervention and outcome.

The concept of randomization was central to creating the intention-to-treat (ITT) principle, which has been adopted as standard practice in clinical trials since it first emerged in the 1960s. ITT is based on analyzing study participants according to their initially assigned treatment groups, regardless of whether they received or completed the treatment as intended. Proponents of this method, such as Dr. Donald A. Berry, recognized that preserving randomization ensures the trial paints a picture of real-world practice, allowing the study to more accurately illuminate the effect of the treatment under investigation, assuming a reasonable level of adherence to the investigational products.
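To make the distinction concrete, here is a minimal, purely illustrative Python sketch (toy data and hypothetical column names, not a prescribed analysis plan) contrasting an ITT comparison, which groups participants by their randomized arm, with a per-protocol comparison, which keeps only those who completed treatment as assigned:

```python
# Illustrative sketch only: toy data and hypothetical column names.
import pandas as pd

# One row per participant: randomized arm, whether treatment was completed,
# and a binary outcome (1 = responded).
trial = pd.DataFrame({
    "assigned_arm": ["active", "active", "active", "placebo", "placebo", "placebo"],
    "completed":    [True,     False,    True,     True,      True,      False],
    "outcome":      [1,        0,        1,        0,         1,         0],
})

# Intention-to-treat: analyze everyone by the arm they were randomized to,
# whether or not they completed treatment.
itt = trial.groupby("assigned_arm")["outcome"].mean()

# Per-protocol, shown for contrast: restrict to completers, which discards
# the balance that randomization created between the arms.
per_protocol = trial[trial["completed"]].groupby("assigned_arm")["outcome"].mean()

print("ITT response rates by arm:\n", itt)
print("Per-protocol response rates by arm:\n", per_protocol)
```

Because the ITT comparison preserves the randomized groups, any difference between arms reflects the effect of offering the treatment under real-world adherence, which is precisely the property Berry and other proponents valued.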

More than half a century has passed since clinical trials arrived at this point in their history. While further progress has been made in various aspects, from the ethical to the technological, not everything has changed. In particular, pill counts continue to be relied upon for measuring adherence to medication regimens, even though patients’ awareness of how their medication taking is measured can introduce psychological bias. [5]

Today, however, we are equipped to address this problem through the availability of Digital Medication Adherence strategies and the use of electronic monitoring tools, such as Medication Event Monitoring Systems (MEMS) for smart pill bottles, electronic caps, or other delivery devices (e.g., injectables, inhalers). These discreet technologies represent a step forward for understanding and managing adherence in trials by helping to remove bias and tackling the risk presented by human error. As such, they enhance the quality of the data gathered and the value of the insights derived from the study.

Looking back through history, we can see that progress in clinical trials relies on challenging the status quo. Medicine has achieved this by consistently questioning how to introduce more rigor and which methods can be adopted to address the various potential sources of conscious and unconscious bias. And all of this is done to deliver results with more integrity and inform judgments with more validity. Is now the time to take the next step?

References
