selection bias, information bias, and confounding.
1.6.1 Selection Bias
Selection biases can occur when there are differences – other than the intervention itself – between the intervention and control groups being compared. In observational health care research, it is common for there to be systematic differences in the types of patients in each intervention group. When these differences are in variables that are prognostic (and thus confounding exists), bias can result and must be addressed. Selection bias can also appear in other forms. Bias can result when the sample from which the results are obtained is not representative of the population, not because of chance, but because of an error in the inclusion or exclusion criteria, or in the recruitment process.
A second source of bias is loss to follow-up, when the data that are not obtained differ systematically from the data that are available. A third source of selection bias is non-response, which affects many studies because those who do not respond often differ in some way from those who do. Fourth, selective survival occurs when prevalent cases are selected instead of incident cases. This type of bias is typical of case-control studies, in which the most severe or the mildest cases are under-represented because they have died or been cured. Finally, self-selection bias can occur due to volunteer participation. In general, there is a risk that volunteers have different characteristics than non-volunteers.
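To make the loss-to-follow-up mechanism concrete, the following minimal Python sketch (our illustration, not from the chapter; all numbers are arbitrary assumptions) simulates a study in which poorly responding treated patients drop out more often. The treatment effect estimated among completers is then biased relative to the true effect.

```python
# A minimal simulation sketch illustrating selection bias from loss to
# follow-up: dropout that depends on the outcome distorts the treatment
# effect computed on completers only. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

treated = rng.integers(0, 2, n)                    # 1 = intervention, 0 = control
outcome = 50 + 5 * treated + rng.normal(0, 10, n)  # true effect: +5 points

# Patients with poor outcomes in the treated group drop out more often,
# so the observed treated sample is skewed toward good responders.
drop_prob = np.where((treated == 1) & (outcome < 50), 0.6, 0.1)
observed = rng.random(n) > drop_prob

full = outcome[treated == 1].mean() - outcome[treated == 0].mean()
comp = (outcome[observed & (treated == 1)].mean()
        - outcome[observed & (treated == 0)].mean())

print(f"effect in full sample:   {full:.2f}")  # ~5, the true effect
print(f"effect among completers: {comp:.2f}")  # inflated by selective dropout
```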
1.6.2 Information Bias
Information or classification bias occurs when there is error in the measurement of the study variables in all or some of the study subjects. This can occur due to the use of tests with poor sensitivity or specificity, the use of incorrect or inconsistent diagnostic criteria, and inaccuracy in data collection. When the error is similar in both intervention groups of interest, the bias is termed non-differential; when errors occur preferentially or exclusively in one group, the bias is differential. Non-differential information bias skews the results in favor of the null hypothesis (it tends to decrease the magnitude of the differences between groups), so when significant differences are observed despite it, the result can still have value. The impact of differential bias, however, is difficult to predict and seriously compromises the validity of the study.
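The attenuation toward the null caused by non-differential misclassification can be shown with a short simulation. In the sketch below (our illustration; the risks, sensitivity, and specificity are arbitrary assumptions), the same imperfect outcome measurement is applied to both groups, and the observed risk difference shrinks relative to the true one.

```python
# A hypothetical sketch of non-differential information bias: measuring the
# outcome with the same error rates in both groups attenuates the observed
# group difference toward the null.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

group = rng.integers(0, 2, n)                                  # two intervention groups
true_event = rng.random(n) < np.where(group == 1, 0.30, 0.20)  # true risks

# Imperfect test applied identically to both groups (sensitivity 0.8,
# specificity 0.9) -> non-differential misclassification of the outcome.
sens, spec = 0.80, 0.90
measured = np.where(true_event,
                    rng.random(n) < sens,       # true cases detected w.p. 0.8
                    rng.random(n) < 1 - spec)   # false positives w.p. 0.1

def risk_diff(event):
    return event[group == 1].mean() - event[group == 0].mean()

print(f"true risk difference:     {risk_diff(true_event):.3f}")  # ~0.10
print(f"measured risk difference: {risk_diff(measured):.3f}")    # ~0.07, closer to 0
```

Analytically, the observed risk difference equals the true one multiplied by (sensitivity + specificity - 1), here 0.7, which is why non-differential error can only dilute, not reverse, the contrast.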
Two information biases are common in case-control studies (and in studies with retrospective cohorts):
● memory (recall) bias – for example, those with a health problem recall their history differently than those who do not
● interviewer bias – information is requested or interpreted differently depending on the group to which the subject belongs
However, prospective studies are also subject to information biases because, for example, a patient may tailor answers to please the investigator (social desirability bias), or the investigator might, voluntarily or involuntarily, shift the assessment in the direction of the hypothesis that she or he wants to prove.
1.6.3 Confounding
Confounding occurs when the association between the study factor (intervention or treatment) and the response variable can be explained by a third variable, the confounding variable, or, on the contrary, when a real association is masked by this factor. For a variable to act as a confounder, it must be a prognostic factor of the outcome and be associated with exposure to the intervention, but it must not lie on the causal pathway between exposure and outcome. For example, assume that we studied the association between smoking and coronary heart disease and that the group of patients who smoke the most is also the youngest. If we do not take age into account, the overall measure of association will not be valid because the “beneficial” effect of being younger could dilute the harmful effect of tobacco on the occurrence of heart disease. In this case, the confounding variable would underestimate the effect of the exposure, but in other cases, it can result in overestimation. If a confounding factor exists but is not measured or available for analysis in a particular study, it is referred to as an unmeasured confounder.
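The smoking example can be made concrete with a small simulation. In the following sketch (our illustration; all probabilities are arbitrary assumptions), younger subjects smoke more and have lower baseline coronary risk, so the crude risk difference understates the harm of smoking, while the age-stratified (adjusted) estimate recovers it.

```python
# A minimal sketch of the smoking / coronary heart disease example: age is a
# confounder because it predicts the outcome and is associated with smoking.
# Comparing crude and age-stratified risk differences shows how ignoring age
# dilutes the effect. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

young = rng.random(n) < 0.5
smoker = rng.random(n) < np.where(young, 0.40, 0.15)  # the young smoke more

# CHD risk: higher with older age, higher with smoking (additive for simplicity)
p_chd = 0.02 + 0.10 * (~young) + 0.05 * smoker
chd = rng.random(n) < p_chd

crude = chd[smoker].mean() - chd[~smoker].mean()

# Age-stratified (adjusted) risk difference, averaged over the two strata
adj = np.mean([chd[smoker & s].mean() - chd[~smoker & s].mean()
               for s in (young, ~young)])

print(f"crude risk difference:        {crude:.3f}")  # ~0.02, diluted by age
print(f"age-adjusted risk difference: {adj:.3f}")    # ~0.05, the true effect
```

Stratification is only the simplest form of adjustment; later chapters discuss more general methods for controlling measured confounders.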
It is confounding that raises the greatest challenge for causal inference analyses based on RWD. Even if one appropriately adjusts for measured confounders (the topic of much of this book), there is no guarantee that unmeasured confounders do not exist. The absence of unmeasured confounding is an unprovable assumption that is necessary for most causal inference methods. Thus, comparative observational research sits lower on the hierarchy of evidence than randomized controlled trials. Chapter 2 provides a full discussion of causal inference and the assumptions necessary for causal inference analyses from non-randomized data.
1.7 Guidance for Real World Research
The growing use of real world evidence and the increasing recognition of the challenges to the validity of such evidence have prompted multiple groups to propose guidance documents for the design, conduct, and reporting of observational research. The specific aims of each effort vary, but the general goal is to improve the quality and reliability of real world data research. Table 1.2 provides a summary of, and references to, key guidance documents.
Table 1.2: Summary of Guidance Documents for Real World Evidence Research
Year | Guidance or Sponsor | Reference | Summary |
2004 | TREND - CDC | Des Jarlais DC, Lyles C, Crepaz N, and the TREND Group (2004). Improving the Reporting Quality of Nonrandomized Evaluations of Behavioral and Public Health Interventions: The TREND Statement. Am J Public Health 94:361-366. https://www.cdc.gov/trendstatement | 22-item checklist designed to be a non-randomized research complement to the CONSORT guidelines for reporting randomized trials. |
2007 | STROBE | von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, and the STROBE Initiative (2007). The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Epidemiology 18(6):800-4. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, Poole C, Schlesselman JJ, Egger M, and the STROBE Initiative (2007). Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Epidemiology 18(6):805-35. https://strobe-statement.org | Checklist focused on improving the reporting of observational studies. |
2009 | ISPOR Good Practices | Berger ML, Mamdani M, Atkins D, Johnson ML (2009). Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: The ISPOR good research practices for retrospective database analysis task force report—Part I. Value in Health 12(8):1044-52. Cox E, Martin BC, Van Staa T, Garbe E, Siebert U, Johnson ML (2009). Good Research Practices for Comparative Effectiveness Research: Approaches to Mitigate Bias and Confounding in the Design of Non-randomized Studies of Treatment Effects Using Secondary Databases: Part II. Value in Health 12(8):1053-61. Johnson ML, Crown W, Martin BC, Dormuth CR, Siebert U (2009). Good Research Practices for Comparative Effectiveness Research: Analytic Methods to Improve Causal Inference from Nonrandomized Studies of Treatment Effects Using Secondary Data Sources: The ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part III. Value in Health 12(8):1062-73. https://www.ispor.org/heor-resources/good-practices-for-outcomes-research/report | ISPOR-sponsored effort to provide guidance on quality observational research at a more detailed level than previous checklists (three-part manuscript series). |
2010 | GRACE | Dreyer NA, Schneeweiss S, McNeil B, et al (2010). GRACE Principles: Recognizing high-quality observational studies of comparative effectiveness. American Journal of Managed Care 16(6):467-471. Dreyer NA, Velentgas P, Westrich K, et al (2014). The GRACE Checklist for Rating the Quality of Observational Studies of Comparative Effectiveness: A Tale of Hope and Caution. Journal of Managed Care Pharmacy 20(3):301-08. Dreyer NA, Bryant A, Velentgas P (2016). The GRACE Checklist: A Validated Assessment Tool for High Quality Observational Studies of Comparative Effectiveness. Journal of Managed Care and Specialty Pharmacy. | Principles and a validated checklist for recognizing and rating the quality of observational comparative effectiveness studies. |