
Comparing Treatment Performance through Evidence Synthesis

A Solution for Sparse Evidence, Heterogeneous Studies, and Disconnected Networks

When a variety of treatment options exists in a specific disease area, decision-makers require evidence of the efficacy and safety of novel interventions in comparison to established treatments. A single randomised controlled trial (RCT) is rarely sufficient to draw a final conclusion, especially since RCTs often present contradictory results.

In certain situations, it is possible to synthesise existing evidence from multiple studies to calculate a pooled treatment effect and thus demonstrate the comparative performance of a novel treatment against established interventions. Standard models for evidence synthesis work well when there is a large evidence base, when no effect modifiers are present, and when the network of studies is connected through a common comparator linking the interventions of interest. However, even when no direct, head-to-head comparisons are available, it is still possible to estimate effects through indirect comparisons using specific methods of evidence synthesis. We'll explore these options in this article.

Evidence Synthesis – An Overview

Evidence synthesis means applying classical frequentist or Bayesian statistical methodology to estimate a pooled treatment effect (for example, an odds ratio or a risk difference) and thus demonstrate the comparative performance of a novel treatment against established interventions. To ensure that all relevant evidence is identified, a systematic literature review (SLR) must be carried out before the synthesis itself. As a next step, all RCTs of high quality are pooled. If head-to-head studies comparing an experimental intervention to the same comparator are available, the simplest approach to evidence synthesis is a so-called pairwise meta-analysis. In the absence of head-to-head studies, more complex approaches are used, such as network meta-analysis (NMA) based on generalised linear models.
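As an illustration of the simplest case, a fixed-effect pairwise meta-analysis pools per-study effect estimates by inverse-variance weighting. The minimal sketch below uses hypothetical log odds ratios and standard errors from three imagined RCTs, not data from any real trials:

```python
import math

def pooled_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of per-study effect
    estimates (e.g. log odds ratios) -- an illustrative sketch."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios and standard errors from three RCTs
log_or = [-0.40, -0.25, -0.55]
se = [0.20, 0.15, 0.30]
est, est_se = pooled_fixed_effect(log_or, se)
print(f"pooled OR: {math.exp(est):.2f} "
      f"(95% CI {math.exp(est - 1.96 * est_se):.2f} to "
      f"{math.exp(est + 1.96 * est_se):.2f})")
```

Studies with smaller standard errors receive larger weights, so precise trials dominate the pooled estimate; a random-effects model would additionally add a between-study variance term to each weight.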

When assessing the feasibility of indirect treatment comparisons, the first step is to draw a so-called network of evidence (see Figure 1). This serves to assess data availability per outcome of interest and to investigate the presence of a common comparator. Each intervention is shown as a node, and comparisons between interventions as lines connecting the nodes (dotted lines for indirect comparisons).
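Checking whether such a network is connected can itself be automated. The sketch below, using hypothetical treatment names and trials, builds the network from a list of direct comparisons and applies a breadth-first search to find which treatments can be linked through common comparators:

```python
from collections import defaultdict, deque

# Hypothetical network: each tuple is one direct (head-to-head) comparison
trials = [("A", "Placebo"), ("B", "Placebo"), ("C", "D")]

adjacency = defaultdict(set)
for t1, t2 in trials:
    adjacency[t1].add(t2)
    adjacency[t2].add(t1)

def connected_component(start):
    """Breadth-first search: all treatments reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node] - seen:
            seen.add(neighbour)
            queue.append(neighbour)
    return seen

treatments = set(adjacency)
component = connected_component(list(adjacency)[0])
if component == treatments:
    print("Network is connected: an NMA can link all treatments.")
else:
    print("Disconnected network; unreachable:", treatments - component)
```

In this hypothetical example, A and B are both compared against placebo and can therefore be compared indirectly, while C and D form a separate component that no common comparator links to the rest of the network.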

Standard methodology for NMA works well if there is a large evidence base, if the network of studies is connected through common comparators linking the interventions of interest, and in the absence of any effect modifiers, i.e. variables that alter the effect of treatment on outcomes. There are, however, a variety of situations in which these analytical methods may not be sufficient, such as when:

• There is a sparse network of evidence, such that fewer than five studies inform one outcome, or only one study informs each direct comparison of treatments in the network. This is especially an issue if Bayesian methods with non-informative priors on between-study standard deviation parameters are used. A non-informative prior implies that nothing is known about the parameter in advance, and a dynamic process of learning from the data is conducted. If data are sparse, there is not enough information to update the prior into the posterior.

• A large amount of heterogeneity exists between the studies

• The network is disconnected – in other words when there is no common comparator linking two interventions of interest
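The sparse-data point can be made concrete with a small sketch. Assuming a simplified random-effects model with a flat (non-informative) prior on the between-study standard deviation tau, hypothetical data from only two studies, and a grid approximation of the posterior, the posterior for tau remains diffuse: the data carry too little information to update the prior meaningfully.

```python
import math

def tau_posterior(y, se, tau_grid):
    """Grid-approximated posterior for the between-study SD (tau) in a
    random-effects model, assuming a flat prior on tau and integrating
    the common mean out analytically (flat prior on the mean as well)."""
    post = []
    for tau in tau_grid:
        v = [s ** 2 + tau ** 2 for s in se]   # total variance per study
        w = [1.0 / vi for vi in v]
        w_sum = sum(w)
        mu_hat = sum(wi * yi for wi, yi in zip(w, y)) / w_sum
        q = sum(wi * (yi - mu_hat) ** 2 for wi, yi in zip(w, y))
        log_lik = -0.5 * (sum(math.log(vi) for vi in v)
                          + math.log(w_sum) + q)
        post.append(math.exp(log_lik))
    total = sum(post)
    return [p / total for p in post]

tau_grid = [0.05 * k for k in range(1, 41)]   # tau from 0.05 to 2.0
# Two hypothetical studies: effect estimates and standard errors
sparse_post = tau_posterior([-0.3, -0.5], [0.25, 0.30], tau_grid)
# The posterior mass stays spread across the grid rather than
# concentrating: with only two studies the prior is barely updated.
```

With many more studies the same calculation would concentrate the posterior around the true between-study spread; with two, estimates of tau remain highly unstable, which is exactly the problem noted above.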

Prior to conducting evidence synthesis, the feasibility of pooling the data has to be assessed. The main aim is to evaluate whether there is an excessive amount of heterogeneity between studies, which would violate the similarity assumption.
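A common quantitative first check in such a feasibility assessment is Cochran's Q together with the I-squared statistic, which estimates the share of observed variability attributable to between-study heterogeneity rather than chance. A minimal sketch with hypothetical inputs:

```python
import math

def heterogeneity(estimates, std_errors):
    """Cochran's Q and the I^2 statistic for study-level effect
    estimates pooled by inverse-variance weighting (illustrative)."""
    w = [1.0 / se ** 2 for se in std_errors]
    mu = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

# Hypothetical, consistent study results: I^2 comes out at 0%
q_low, i2_low = heterogeneity([-0.40, -0.25, -0.55], [0.20, 0.15, 0.30])
# Hypothetical, conflicting study results: I^2 is high
q_high, i2_high = heterogeneity([0.10, -0.90], [0.15, 0.15])
print(f"Q={q_low:.2f}, I2={i2_low:.0f}%  vs  Q={q_high:.2f}, I2={i2_high:.0f}%")
```

High I-squared values flag that the similarity assumption may be violated, prompting investigation of effect modifiers before any pooling is attempted.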