A pretest-posttest design alone cannot show that the medication caused this reduction in children’s symptoms. It is possible that other factors, besides medication, might better explain the results of this study. Internal validity refers to the degree to which we can say that manipulation of an independent variable (e.g., treatment) causes a corresponding change in a dependent variable (e.g., children’s outcomes). When other factors can explain children’s outcomes, researchers say these factors threaten the internal validity of the study. There are several threats to internal validity that limit the causal inferences we can make from quasi-experimental research (Kazdin, 2017).

      First, maturation can compromise the internal validity of a study. Maturation refers to changes in the child that occur because of the passage of time. For example, as children’s brains mature, they show greater capacity for attention, concentration, and impulse control. It is possible that all children, even those who do not receive treatment, will show a reduction in ADHD symptoms simply due to this brain maturation. Unless researchers compare children who receive treatment with children in a control group who do not, the effects of treatment cannot be distinguished from maturation alone.

      Second, environmental factors can threaten the internal validity of pretest-posttest studies. Environmental factors include changes in the child’s family (e.g., divorce), school (e.g., a new teacher), or peer group (e.g., best friend moves away). Environmental factors also include major events (e.g., an economic downturn, the COVID-19 pandemic) or more subtle changes in the child’s surroundings. For example, if researchers assessed children’s symptoms at pretest during the school year and at posttest during the summer, parents might report fewer attention problems over time. However, this apparent improvement in attention might be explained by the fact that inattention is less problematic during summer vacation than during the academic year. Without a control group for comparison, these environmental changes, rather than the treatment, might explain some of the study’s results.

      A third threat to internal validity is repeated testing. The act of repeatedly assessing children can cause them to show improvement over time. For example, if children know that their parents and teachers are monitoring their behavior, they might try to act more attentive or obedient. Similarly, if parents and teachers are repeatedly asked to rate children’s behavior, they might pay more attention to signs of improvement. Without a comparison group, it is possible that some of the benefits of treatment are simply due to the fact that children were monitored and tested multiple times.

      Fourth, attrition can threaten the internal validity of a study. Attrition refers to the loss of participants over the course of the study. Attrition usually occurs because participants decide to withdraw from the study or simply stop attending treatment sessions. When a large percentage of participants in the treatment group withdraw from the study, it threatens the study’s internal validity. For example, Döpfner and colleagues (2011) found that 75% of children who completed their study showed a significant reduction in ADHD symptoms. However, this figure likely overestimates the percentage of children who actually benefited from the medication. Closer inspection of the data showed that 6% of children in the original sample withdrew from the study prematurely, perhaps because the medication was not effective for them or because it produced unpleasant side effects. The apparent benefits of the medication are thus partially explained by the fact that these children withdrew from the study.
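      To see how attrition can inflate an apparent response rate, consider a brief worked calculation. The sketch below assumes a hypothetical enrollment of 100 children and applies the 75% and 6% figures reported above; the enrollment number and variable names are illustrative, not taken from the original study.

```python
# Hypothetical illustration of how attrition can inflate a response rate.
# The 75% (responders among completers) and 6% (withdrawals) figures come
# from the study described above; the enrollment of 100 is assumed.

enrolled = 100
withdrew = round(enrolled * 0.06)        # 6 children left prematurely
completers = enrolled - withdrew         # 94 children finished the study
responders = int(completers * 0.75)      # 70 completers showed improvement

# A completers-only analysis ignores the children who dropped out.
completers_rate = responders / completers        # ~74.5%

# An intent-to-treat analysis counts every enrolled child and treats
# withdrawals as non-responders (a conservative assumption).
intent_to_treat_rate = responders / enrolled     # 70.0%

print(f"Completers-only response rate: {completers_rate:.1%}")
print(f"Intent-to-treat response rate: {intent_to_treat_rate:.1%}")
```

      The gap between the two rates grows as attrition grows, which is why counting only the children who finish a study tends to overstate a treatment’s benefits.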

      Nonequivalent Groups

      A nonequivalent groups study is a more sophisticated quasi-experimental study in which researchers compare participants in treatment and control groups, but participants are not randomly assigned to these groups. Because the researchers do not randomly assign participants to the two groups, the groups are nonequivalent—that is, they may differ in some way before treatment. Therefore, if the researchers notice differences between groups after treatment, they cannot definitively say that treatment caused those differences.

      For example, researchers wanted to see if medication for ADHD in childhood might reduce the likelihood of alcohol and drug use problems in adolescence. Ideally, the researchers would randomly assign children to either a treatment group that receives medication or to a control group that does not. Then, several years later, they could compare the prevalence of substance use problems among adolescents in the two groups.

      However, the researchers could not randomly assign children to treatment and control groups because it would be unethical to withhold treatment from children with ADHD for so long. Instead, the researchers compared a group of adolescents with ADHD who took medication for this condition as children to another group of adolescents with ADHD who had no history of using medication to manage their symptoms. The researchers found that youths with ADHD who took medication were less likely to use alcohol and other drugs than youths with ADHD who never used medication (Hammerness, Petty, Faraone, & Biederman, 2017).

      Because their study included a control group, there are fewer threats to its internal validity. For example, it is unlikely that maturation explains differences in adolescents’ substance use because both groups of adolescents were assessed at the same point in their development. Similarly, the dropout rate of participants in both groups was roughly equal, ruling out attrition as a possible explanation for their findings. However, the use of nonequivalent groups can introduce a special threat to internal validity called selection bias.

      Selection bias refers to a systematic difference between the treatment and control groups that emerges when groups are not randomly assigned. Because their study lacked random assignment, youths in the treatment and control groups were nonequivalent at the beginning of the study. It is possible that subtle differences in demographic variables, family background, or attitudes toward medication existed between the two groups at the beginning of the study. Therefore, the differences in substance use seen at the end of the study might be explained by these subtle differences rather than the medication itself.
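      The logic of selection bias can be made concrete with a small simulation. In the sketch below, a hidden family-support variable influences both whether a child receives medication and the adolescent outcome, while the medication itself has no true effect; all variable names and effect sizes are invented for illustration.

```python
import math
import random

random.seed(42)

def simulate_child():
    """One child: family support drives both group membership and outcome."""
    family_support = random.gauss(0, 1)                 # hidden confounder
    # Families with more support are more likely to seek medication.
    medicated = random.random() < 1 / (1 + math.exp(-family_support))
    # The outcome depends ONLY on family support in this simulation;
    # the medication has no true effect.
    substance_use = random.random() < 1 / (1 + math.exp(family_support + 0.5))
    return medicated, substance_use

children = [simulate_child() for _ in range(10_000)]
medicated_group = [use for med, use in children if med]
control_group = [use for med, use in children if not med]

# The groups differ even though the true treatment effect is zero:
print(f"Substance use, medicated group:   "
      f"{sum(medicated_group) / len(medicated_group):.1%}")
print(f"Substance use, unmedicated group: "
      f"{sum(control_group) / len(control_group):.1%}")
```

      Because the nonequivalent groups differ on the hidden variable before treatment begins, the simulation produces a group difference in substance use even though the medication does nothing. This is exactly the inferential problem selection bias creates.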

      Single Subject Studies

      A single subject study is a quasi-experimental study in which one participant’s behavior is assessed across time, usually with and without treatment. Single subject studies are frequently used when a therapist wants to assess the effectiveness of treatment with a particular client. Because single subject studies involve only one participant, they lack a control group and random assignment. However, the pattern of behavior shown by children when treatment is applied, compared to when it is absent or withdrawn, can suggest the effectiveness of the intervention.

      The simplest way to evaluate treatment effectiveness in a single subject study is to use an AB design. In this approach, we measure the frequency or severity of a child’s behavior before and after treatment. “A” refers to the level of behavior at baseline. “B” refers to the level of behavior when the intervention is applied. For example, a school psychologist might ask a teacher to estimate the percentage of time a student is attentive during math class each day for 2 weeks. The first week (A) serves as a baseline measure of the child’s attention. During the second week (B), the teacher uses a sticker chart to reinforce on-task behavior during class. She might praise the child and award a sticker for on-task behavior. Increased attention from phase A to phase B indicates that the treatment is effective (Gast & Baekey, 2016).
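      A minimal sketch of how the school psychologist might summarize the teacher’s daily ratings is shown below; the attention percentages are invented for illustration.

```python
# Hypothetical daily ratings: percentage of math class spent on task.
baseline = [42, 38, 45, 40, 44]       # Phase A: week 1, no intervention
treatment = [61, 68, 72, 70, 75]      # Phase B: week 2, sticker chart

mean_a = sum(baseline) / len(baseline)       # 41.8
mean_b = sum(treatment) / len(treatment)     # 69.2

print(f"Baseline (A) mean attention:  {mean_a:.1f}%")
print(f"Treatment (B) mean attention: {mean_b:.1f}%")
print(f"Change from A to B:           {mean_b - mean_a:+.1f} points")
```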

      A more sophisticated way to assess treatment effectiveness is to use a reversal (ABAB) design. The teacher rates behavior at baseline (A) and when the treatment is applied (B). Then, she temporarily withdraws treatment (i.e., stops using the sticker chart) and notices any change in the child’s behavior (the second A). If the treatment is responsible for improvement, then discontinuing the sticker chart should result in a temporary decrease in attention. Finally, the teacher would reinstate the treatment (the second B). If the sticker chart is effective, the child’s time on task should increase once again (Figure 3.5).

      A line graph shows the child’s daily attention across the baseline, treatment, withdrawal, and reinstatement phases.

      Figure 3.5 ■ ABAB Reversal Design

      Note: This pattern of results indicates that the treatment is effective in increasing the child’s on-task behavior.
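      Extending the earlier sketch, the reversal pattern in Figure 3.5 can be checked directly from phase means. The numbers below are invented for illustration; a treatment effect is suggested when attention rises with the sticker chart, falls when it is withdrawn, and rises again when it is reinstated.

```python
# Hypothetical phase means for an ABAB reversal design.
phases = {
    "A1 (baseline)":      [42, 38, 45, 40, 44],
    "B1 (sticker chart)": [61, 68, 72, 70, 75],
    "A2 (withdrawal)":    [50, 46, 44, 43, 45],
    "B2 (reinstatement)": [66, 71, 74, 73, 77],
}
means = {name: sum(days) / len(days) for name, days in phases.items()}
for name, mean in means.items():
    print(f"{name}: {mean:.1f}% on task")

# A reversal pattern suggests the sticker chart, rather than maturation
# or repeated testing, is responsible for the change in behavior.
a1, b1, a2, b2 = means.values()
reversal = b1 > a1 and a2 < b1 and b2 > a2
print("Pattern consistent with an effective treatment:", reversal)
```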