
Was the time to onset, between taking the drug and the start of the liver injury, short enough to be suggestive of a causal link, or was it so long that suspicion is rather low? Was there significant alcohol consumption, underlying infectious hepatitis, or another plausible cause of hepatic compromise? Was the patient in perfect health, or was he or she at high risk of developing viral, alcoholic, or other hepatitis? Is there a known background rate of hepatic dysfunction in the exposed population (or subpopulation)?

      According to these and other diagnostic criteria, the PV clinician will roughly assess causality using categories such as definite, probable, possible, or unlikely, or some other causality grading system. When enough detail is available and the information is sufficiently complete, these judgments are feasible, and they matter. We then say these reports are valid and (it is hoped) of high quality. Various techniques described elsewhere in this Manual are used to judge whether the signal is strong enough to be worth publicizing or, if weaker, requires only continued surveillance.
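      To make the idea concrete, the sketch below maps a few of these yes/no clinical questions to a rough causality category. It is purely illustrative: the questions, weights, and cut-offs are invented for demonstration (loosely in the spirit of published scales such as the Naranjo algorithm) and carry no regulatory standing.

```python
# Illustrative only: a toy causality grader. The weights and thresholds
# below are invented for demonstration, not taken from any official scale.

def grade_causality(plausible_time_to_onset: bool,
                    alternative_cause: bool,
                    positive_dechallenge: bool,
                    positive_rechallenge: bool) -> str:
    """Map a few yes/no clinical observations to a rough causality category."""
    score = 0
    score += 2 if plausible_time_to_onset else -1   # temporal association
    score += -2 if alternative_cause else 1         # e.g., alcohol, viral hepatitis
    score += 1 if positive_dechallenge else 0       # AE abated when drug stopped
    score += 2 if positive_rechallenge else 0       # AE recurred on re-exposure
    if score >= 5:
        return "definite"
    if score >= 3:
        return "probable"
    if score >= 1:
        return "possible"
    return "unlikely"

print(grade_causality(True, False, True, False))    # -> "probable"
```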

      Note: Many causal relationship assessment methods have been published over the last several decades. One particularly interesting reference was published by an epidemiologist in 1965.1

      The next question is the frequency (or risk) of occurrence of this reported AE. In other words, what is the probability that the next patient exposed to drug X will develop severe liver disease? One in 10, 1 in 100, 1 in 1,000 people treated? To put it another way: how many patients would need to be treated for one additional liver injury to be observed, a quantity also known as the “Number Needed to Harm” (NNH): 10, 100, 1,000? This is obviously important to know.
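      The NNH is simply the reciprocal of the absolute risk increase: the risk of the event in treated patients minus the background risk in a comparison group. A minimal sketch, using hypothetical rates:

```python
# Number Needed to Harm (NNH) = 1 / absolute risk increase.
# The rates below are hypothetical, purely for illustration.

risk_treated = 0.020   # 2.0% of treated patients develop the liver injury
risk_control = 0.005   # 0.5% background rate in the comparison group

absolute_risk_increase = risk_treated - risk_control  # 0.015
nnh = 1 / absolute_risk_increase                      # ~67

print(f"NNH = {nnh:.0f}")  # about 67 patients treated per additional injury
```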

      Over the years, many attempts have been made to apply statistics and quantitative epidemiologic methods to case series of AEs, primarily spontaneously reported ones, in order to inform causality. The results have generally not met expectations.

      It is necessary to differentiate between AE reports received in clinical trials (experimental) or epidemiologic studies (observational) and those received spontaneously or in a solicited manner (surveillance or monitoring systems). The use of statistics is well described and defined for data generated in formal clinical trials. The patient populations are largely under the control of the investigator or researcher. The methodology for efficacy and safety analysis is well described and largely agreed on. Placebo- and comparator-controlled trials give clear pictures of the occurrence rates of AEs, and significance values and confidence intervals can be determined and used to draw conclusions (at least for efficacy criteria). However, clinical trials are usually not designed with sufficient statistical power to support safety judgments beyond descriptive statistics.
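      As an illustration of the kind of calculation that is straightforward in the trial setting, the sketch below computes an AE incidence rate with a 95% confidence interval using the Wilson score interval; the counts are hypothetical.

```python
# A minimal sketch: AE incidence rate from trial data with a 95%
# Wilson score confidence interval. The counts below are hypothetical.
import math

def wilson_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

events, n = 12, 400          # e.g., 12 patients with the AE out of 400 treated
low, high = wilson_ci(events, n)
print(f"incidence = {events/n:.1%}, 95% CI {low:.1%} to {high:.1%}")
```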

      Different types of trials have different purposes in the world of pharmaceuticals. Randomized clinical trials are experimental and test one drug against another, against placebo, or both. Pharmacoepidemiology studies observe the use of drugs already on the market (in most cases). Pharmacovigilance studies are done for surveillance or monitoring of drug use and toxicity. Each type of study has a different, though sometimes overlapping, use.

      The safety and efficacy data in clinical trials are usually very solid, because data integrity is good to excellent. In addition, the data collected are usually complete. Because patients are seen by the investigator at periodic intervals, and because the investigator and his or her staff question the patient about AEs, it is believed that few AEs are missed, especially serious or dramatic ones. Incidence rates of AEs calculated from these data are held to be valid and useful.

      Spontaneous reports received from routine healthcare practice are a different situation entirely. The data are unsolicited in most cases and may come from consumers or healthcare professionals. Follow-up for additional information is variable, and source documents (e.g., laboratory reports, office and hospital records, autopsy reports) are not always available because of privacy issues, busy physicians or pharmacists unable to supply records from multiple sources, patients not wanting to disclose information, and so on. If no healthcare professional was involved, such as when a patient uses an OTC product, the data usually cannot be verified. Hence, the data integrity of individual spontaneous reports is variable and inconsistent. There may also be duplicate reports if more than one person reports the same case, each unbeknownst to the others.
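      Duplicate detection is therefore a routine step in processing spontaneous reports. The sketch below shows a deliberately naive screen that keys on a handful of fields; production PV databases use far more sophisticated probabilistic record linkage, and the field choices and records here are invented.

```python
# Illustrative only: a naive duplicate screen for spontaneous reports,
# keying on a few demographic and clinical fields. All data are invented.

from collections import defaultdict

reports = [
    {"id": "A1", "drug": "drugx", "event": "hepatitis", "sex": "F", "age": 54, "country": "US"},
    {"id": "B7", "drug": "drugx", "event": "hepatitis", "sex": "F", "age": 54, "country": "US"},
    {"id": "C3", "drug": "drugx", "event": "rash",      "sex": "M", "age": 30, "country": "DE"},
]

groups = defaultdict(list)
for r in reports:
    key = (r["drug"], r["event"], r["sex"], r["age"], r["country"])
    groups[key].append(r["id"])

for key, ids in groups.items():
    if len(ids) > 1:
        print("possible duplicates:", ids)   # -> ['A1', 'B7']
```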

Case Report or Individual Case Safety Report

      A case report, also called an “individual case safety report” (ICSR), is a clinical observation of a patient who received a drug and experienced one or more AEs. The most common paper formats for presenting a case report are the MedWatch form and the CIOMS I form. The electronic equivalent is the E2B report, an electronic file containing all the elements of the ICSR that is transmitted to a health authority, a company, or elsewhere. Sometimes cases are published as short reports in medical journals; some of these have already been reported to health authorities and some have not. Such cases are picked up in the periodic review of the medical literature done by companies.
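      Conceptually, a valid ICSR rests on four minimum elements: an identifiable patient, an identifiable reporter, a suspect drug, and an adverse event. The sketch below models just those elements; it is a simplified illustration, not the actual E2B, MedWatch, or CIOMS I structure, and all field names are invented.

```python
# A simplified sketch of the core elements of an ICSR. This is NOT the
# actual E2B schema -- just an illustration of the four minimum elements
# a valid case generally requires.

from dataclasses import dataclass, field

@dataclass
class ICSR:
    patient: str          # identifiable patient (e.g., initials, age, sex)
    reporter: str         # identifiable reporter (e.g., physician, consumer)
    suspect_drug: str     # the medicinal product suspected
    adverse_events: list[str] = field(default_factory=list)

    def is_valid(self) -> bool:
        """A case is generally reportable only if all four elements exist."""
        return all([self.patient, self.reporter, self.suspect_drug,
                    self.adverse_events])

case = ICSR("F, 54", "hepatologist", "Drug X", ["acute liver injury"])
print(case.is_valid())   # True
```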

Aggregate Reports

      Aggregate reports are descriptions, or compilations and analyses, of a group of patients exposed to a drug (or sometimes more than one drug, e.g., combination products) and the AEs and other safety issues, such as medication errors or quality problems. There are multiple standard formats, of which the Periodic Safety Update Report (PSUR) is the main one. The US aggregate reports are called, sometimes confusingly, NDA Periodic Reports, Periodic Reports, and PADERs (Periodic Adverse Drug Experience Reports). To worsen the situation, the term PADER is also occasionally used to refer to PSURs. And the FDA also accepts PSURs. Whatever they are called, companies are obliged to prepare them in a serious and careful manner and to submit them on time to the concerned health authorities.

      There are multiple biases and limitations involved in the quantitative analysis of spontaneously reported data that can result in case information and trends that do not truly represent the real safety profile of the drug. Two phenomena that act on spontaneously reported data are worth examining: (1) the Weber effect and (2) secular effects, described below.

      (1) Weber Effect: The Weber effect, also called the product life cycle effect, describes the phenomenon of increased voluntary reporting after the initial launch of a new drug. “Voluntary reporting of adverse events for a new drug within an established drug class does not proceed at a uniform rate and may be much higher in the first year or two of the drug’s introduction”.2

      This means that for a period of time after launch (from 6 months to as long as 2 years), there will be a large number of spontaneously reported AEs/adverse drug reactions (ADRs), which then tapers down to steady-state levels after the effect is over. It is to be distinguished from secular effects. The Weber effect has been seen in multiple other situations since the original report.3
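      In quantitative terms, a Weber-like pattern would show up in quarterly report counts as an early peak followed by a decline toward a steady state. The toy check below illustrates the idea; the counts and the 1.5× threshold are invented for demonstration.

```python
# A toy check for a Weber-like pattern in quarterly spontaneous report
# counts: an early peak followed by a decline toward a steady state.
# The counts and the 1.5x threshold below are invented for illustration.

quarterly_counts = [120, 310, 280, 190, 150, 140, 135, 138]  # quarters since launch

peak = max(quarterly_counts[:4])              # early post-launch peak
steady = sum(quarterly_counts[-4:]) / 4       # rough steady-state level

if peak > 1.5 * steady:
    print(f"Weber-like pattern: early peak {peak} vs steady state ~{steady:.0f}")
else:
    print("No obvious early peak; counts may simply rise and plateau")
```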

      However, newer information has thrown this phenomenon into question. A study published in 2014 looked at the ADRs of 62 drugs in FDA’s FAERS database, examining the reporting pattern for each over the 4 years after its FDA approval date. The results showed: “the general AE reporting pattern observed in this study appears to consist simply of increasing case counts over the first three quarters after approval followed by relatively constant counts thereafter.” Thus, there was not a spurt at the beginning followed by a drop; rather, report volume rose over the first few quarters and then remained relatively constant. A few drugs did seem to show a Weber effect.

      The authors conclude: “Our results suggest that most of the modern adverse event reporting into FAERS does not follow the pattern described by Weber. Factors that may have contributed to this finding include large increases in the volume of AE reports since the Weber effect was described, as well as a concerted effort by the FDA to increase awareness regarding the utility of post-marketing AE reporting.”4

      Another study, published in 2017, looked at 15 oncology drugs over the 5 years after approval and also found no evidence of the Weber effect.5

      In addition, safety data sourced from customer engagement programs (e.g., patient support programs and disease management programs) may have blunted the original effect.

      So,