



      Chapter 2: Causal Inference and Comparative Effectiveness: A Foundation

       2.1 Introduction

       2.2 Causation

       2.3 From R.A. Fisher to Modern Causal Inference Analyses

       2.3.1 Fisher’s Randomized Experiment

       2.3.2 Neyman’s Potential Outcome Notation

       2.3.3 Rubin’s Causal Model

       2.3.4 Pearl’s Causal Model

       2.4 Estimands

       2.5 Totality of Evidence: Replication, Exploratory, and Sensitivity Analyses

       2.6 Summary

       References

      In this chapter, we introduce the basic concept of causation and the history and development of causal inference methods, including two popular causal frameworks: Rubin’s Causal Model (RCM) and Pearl’s Causal Model (PCM). This includes the core assumptions necessary for standard causal inference analyses, a discussion of estimands, and directed acyclic graphs (DAGs). Lastly, we discuss the strength of evidence needed to justify inferring a causal relationship between an intervention and outcome of interest in non-randomized studies. The goal of this chapter is to provide the theoretical background behind the causal inference methods that are discussed and implemented in later chapters. Unlike the rest of the book, this chapter is a theoretical discussion and contains no SAS code or specific analytical methods. Reading this chapter is not necessary if your main interest is the application of the methods for inferring causation.

      In health care research, it is often of interest to identify whether an intervention is “causally” related to a sequence of outcomes. For example, in a comparative effectiveness study, the objective is to assess whether a particular drug intervention is efficacious (for example, better disease control, improved patient satisfaction, superior tolerability, lower health care resource use or medical cost) for the target patient population in real world settings. Before defining causation, let us first point out the difference between causation and association (or correlation). For example, we have observed global warming over the past decade, and during the same period the GDP of the United States increased by an average of 2% per year. Can we claim that global warming is the cause of the US GDP increase, or vice versa? Not necessarily. The observation indicates only that global warming was present while the US GDP was increasing. Therefore, “global warming” and “US GDP increase” are two correlated or associated events, but there is little or no evidence suggesting a direct causal relationship between them.

      The discussion regarding the definition of “causation” has been ongoing for centuries among philosophers. We borrow the ideas from the 18th century Scottish philosopher David Hume to define causation: causation is the relation that holds between two temporally simultaneous or successive events when the first event (the cause) brings about the other (the effect). According to Hume, when we say that “A causes B” (for example, fire causes smoke), we mean that

      ● A is “constantly conjoined” with B;

      ● B follows A and not vice versa;

      ● there is a “necessary connection” between A and B such that whenever an A occurs, a B must follow.

      Here we present a hypothetical example to illustrate a “causal effect.” Assume that a subject has a choice to take drug A (T=1) or not (T=0), and the outcome of interest Y is a binary variable (1 = better, 0 = not better). There are four possible scenarios that we could observe. (See Table 2.1.)

      Table 2.1: Possible Causal Effect Scenarios

1. The subject took A and got better: T=1, Y=1 (actual outcome)
2. The subject took A and did not get better: T=1, Y=0 (actual outcome)
3. The subject did not take A and got better: T=0, Y=1 (actual outcome)
4. The subject did not take A and did not get better: T=0, Y=0 (actual outcome)

      If we