
Chapter 1: How to choose between the different types of research evidence

Published on Apr 25, 2024

To address diverse health policy and practice questions at various health system or service levels, EIDM requires different types of research evidence [29]. The traditional way of thinking about research evidence was based on study design, with the different designs arranged in order of decreasing internal validity in a hierarchy or pyramid. Fig. 1.1 shows an example hierarchy of evidence for questions related to effectiveness [30].

There is recognition, however, that the best study design varies according to the question that needs answering, such as “does it work?”, “how does it work?” and “is it safe?”, among others (see Table 1.1). While randomized controlled trials (RCTs) and systematic reviews of RCTs are the most useful study designs for questions of effectiveness (“does doing this work better than doing that?”), other questions require different study designs [29],[31]. For example, to understand how an intervention works or fails to work, qualitative studies or systematic reviews of qualitative studies are the most useful. Thus, the appropriateness of the evidence for a particular question is an important consideration in EIDM [32],[33],[34]. See Table 3.1 in Chapter 3 for guidance on which type of evidence to use when.

Table 1.1. Source: adapted from Table 1 in Petticrew and Roberts 2003 [31] and Box 4 in Nutley et al. 2013 [29], with additional questions based on those included in the GRADE Evidence to Decision frameworks [35].
a. These are also known as cross-sectional studies.
b. Quasi-experimental studies are those where the investigator lacks full control over the allocation and/or timing of intervention but nonetheless conducts the study as if it were an experiment, allocating subjects to groups [36].
c. Various types of systematic reviews exist, e.g. rapid reviews, scoping reviews, mixed-methods reviews, overviews, qualitative reviews. While a systematic review of RCTs (with or without a meta-analysis) is most appropriate for questions of effectiveness, other types will be more appropriate for other types of questions. For example, where the use of qualitative evidence is appropriate for a particular question, a qualitative review or mixed-methods review will be most appropriate.

While evaluations are not explicitly mentioned in the table as a type of evidence useful to inform policy- and decision-making, the research questions that can be addressed by an evaluation can implicitly be found in the table. For example, process evaluations ask questions such as “how does it work?”, which are best answered through qualitative research or surveys; and impact evaluations ask questions such as “did it work?” (effectiveness), in which case the best evidence will come from an RCT design. The main difference with evaluations is that systematic reviews may not be relevant, as new data usually need to be collected to answer the relevant questions. However, a systematic review can help to inform the design of the evaluation, e.g. the type of evaluation that will be needed and the relevant indicators.

In addition to study design, when considering the best evidence for a particular question, the methodological quality (or risk of bias) of the study also needs to be taken into account, as it can affect the degree of confidence that can be placed in its results [29],[30],[31]. Flaws in the design or conduct of a study can result in misleading results [37]. The advantage of well-conducted systematic reviews is that they include, as part of the process, an assessment of the risk of bias of the included studies [38]. This assessment is used in the interpretation of the included evidence. For example, we can have more confidence in a systematic review of effectiveness that includes RCTs with a low risk of bias than in one in which most of the RCTs had a high risk of bias. In turn, the systematic review itself should also be assessed for its methodological quality. See Table 3.6 in Chapter 3 for tools to assess the quality of different study designs.

The quality of a body of evidence for a particular question and outcome can also be assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach, which is used for the development of WHO guidelines and is included as a standard in some systematic reviews [30],[38],[39],[40]. The GRADE approach rates the quality of the evidence (also known as the certainty of the evidence) based on study design, risk of bias, inconsistency, imprecision, indirectness and publication bias [38],[39],[41]. For qualitative evidence reviews, GRADE-CERQual can be used [42].

