Robert Weis

Introduction to Abnormal Child and Adolescent Psychology



be open-label or blind. In an open-label study, participants and researchers know which participants are receiving treatment and which have been assigned to the control group. An open-label study gets its name because, in medical research, participants can see the name of the medication they are receiving on the label of the bottle. Open-label studies increase the chance that people’s biases might influence the study’s outcome. For example, children might alter their behavior or parents might perceive greater improvement if they know that children are receiving treatment.

      In a double-blind study, neither participants nor researchers know to which group participants have been assigned. Double-blind studies reduce bias by treating both groups of participants equally and keeping researchers in the dark regarding each participant’s treatment status.

       Review

       An experiment is a specific type of study in which participants are randomly assigned to two or more groups, at least one independent variable is manipulated in one of the groups, and all other factors are held constant. Experiments allow us to make statements about causality.

       Attention-placebo and treatment as usual (TAU) control groups can reduce the placebo effect, that is, people’s tendency to improve because they know they are receiving treatment and they expect it to work.

       Double-blind studies reduce biases in experiments. Neither researchers nor participants know to which group each participant has been assigned.

      How Do Psychologists Replicate Studies?

      Meta-Analysis

      A final goal of psychological research is replication. We have greater confidence in the results of studies if our findings are reproducible. Replication is especially important in studies that evaluate the efficacy of treatment. We want to be confident that treatment is likely to work before we recommend it to families (Schmidt, 2017).

      Not all studies investigating the same phenomena yield identical results. One study may show that therapy greatly improves children’s functioning, another study may indicate that it is only moderately helpful, and a third study may suggest no benefit whatsoever. The studies may also differ in their number of participants and the manner in which they measured children’s outcomes. How can we combine the results of these studies to determine the overall efficacy of therapy and make a decision about its usefulness?

      Meta-analysis is a widely used statistical technique to combine the results of multiple research studies into an overall, numerical result (Del Re & Fluckiger, 2018). The result of each study is converted into a single metric called an effect size. The effect size (ES) reflects the magnitude of the difference between the treatment group and the control group at the end of the study. Here is its formula:

      ES = (Mtreatment group – Mcontrol group) / SD

      First, we calculate the difference between the mean score of children in the treatment group and the mean score of children in the control group. Then, we divide this difference by the standard deviation of scores (SD), a measure of variability. The result is a single number that reflects how many standard deviations the treatment and control groups are apart. Positive scores indicate that children in the treatment group fared better than children in the control group and therapy was helpful. Negative scores indicate that children in the control group experienced better outcomes than children in the treatment group and therapy was harmful (Hoyt & Del Re, 2018).
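The effect size calculation described above can be sketched in a few lines of code. The scores and group names below are made up for illustration; the example assumes an outcome scale on which higher scores reflect better functioning, so a positive result means the treatment group fared better.

```python
def effect_size(mean_treatment, mean_control, sd):
    """Standardized mean difference: how many standard deviations
    apart the treatment and control groups are at the end of the study."""
    return (mean_treatment - mean_control) / sd

# Hypothetical end-of-study means on a functioning scale (higher = better):
# treated children average 65, control children average 58,
# with a standard deviation of 10.
es = effect_size(65, 58, 10)
print(round(es, 2))  # 0.7 — a medium-to-large positive effect
```

Dividing by the standard deviation puts studies that used different measurement scales onto a common metric, which is what makes their results comparable.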

      We can combine the results of multiple studies by calculating the weighted average effect size. Studies are weighted based on their number of participants, so large studies influence the average more than smaller studies. As a rule of thumb, effect sizes of .2 or less are considered “small,” .5 are “medium,” and .8 or greater are “large” (Ferguson, 2017).
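The weighting rule described above, in which each study's effect size counts in proportion to its number of participants, can be sketched as follows. The three studies and their numbers are invented for illustration.

```python
# Illustrative (made-up) effect sizes and sample sizes from three studies.
studies = [
    {"es": 0.9, "n": 40},   # small study, large effect
    {"es": 0.5, "n": 200},  # large study, medium effect
    {"es": 0.2, "n": 100},  # mid-sized study, small effect
]

def weighted_average_es(studies):
    """Average the effect sizes, weighting each study by its sample size
    so that larger studies influence the result more."""
    total_n = sum(s["n"] for s in studies)
    return sum(s["es"] * s["n"] for s in studies) / total_n

print(round(weighted_average_es(studies), 3))  # 0.459 — a medium overall effect
```

Note that the large 200-participant study pulls the average toward its medium effect, even though the small study reported the biggest effect. (Published meta-analyses often weight by the inverse of each study's variance rather than by raw sample size, but the principle is the same.)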

      Examples of Meta-Analysis

      Figure 3.4 shows the results of meta-analyses examining the effects of neurofeedback, behavior therapy, and medication on children with ADHD. These meta-analyses combine the results of hundreds of studies involving thousands of children with this disorder. Consequently, the results can be used to make decisions regarding which form of treatment is most likely to help youths with ADHD (Cortese et al., 2017; Fabiano et al., 2010; Faraone & Buitelaar, 2010).

      [Figure: A vertical bar graph shows the effect size of various treatments for ADHD: neurofeedback, behavior therapy, methylphenidate, and amphetamine.]

      Figure 3.4 ■ Meta-Analysis for the Treatment of ADHD in Children

      Note: Meta-analysis is used to combine the results of many studies into a single effect size. This meta-analysis shows the effects of various treatments for ADHD compared to placebo. Whereas neurofeedback has a small effect on children’s symptoms, behavior therapy and medication have medium to large effects.

      The bars in Figure 3.4 show the average weighted effect size for each form of treatment compared to placebo. Overall, neurofeedback has a small (and nonsignificant) effect on ADHD symptoms, behavior therapy has a medium effect, and medication has a large effect. Consequently, evidence-based practice indicates that clinicians should use behavior therapy and/or medication to treat children with ADHD, because these treatments are most likely to help (Evans et al., 2019).

       Review

       Meta-analysis is a statistical technique that researchers use to combine the results of many studies into a single, numerical result.

       Meta-analysis yields an effect size (ES) that tells us how much of an effect the treatment had on children’s outcomes. It often reflects the number of standard deviations the treatment and control groups are apart at the end of the study.

       As a general rule, effect sizes of .2 or less are small, .5 are medium, and .8 or greater are large.

      What Are Quasi-Experimental Studies?

      Experiments allow us to infer causal relationships between variables because participants are randomly assigned. Sometimes, however, researchers are not able to conduct true experiments because random assignment is not possible. Instead, researchers conduct quasi-experiments. In a quasi-experimental study, researchers manipulate an independent variable (e.g., provide treatment) and note changes in a dependent variable (e.g., children’s outcome). However, they do not randomly assign participants to different groups, so they cannot infer that the treatment caused those outcomes. The term “quasi” means “looks like.” A quasi-experimental study looks like a true experiment, but it lacks an experiment’s essential ingredient: random assignment.

      Let’s look at three of the most common types of quasi-experimental studies used in the field of abnormal child psychology: pretest-posttest studies, nonequivalent groups studies, and single case studies.

      Pretest-Posttest Studies

      A pretest-posttest study is a quasi-experimental study in which the same group of participants is measured at least twice: at baseline (before treatment) and at the end of the study (after treatment). Because all participants receive treatment, there is no control group and, therefore, random assignment is not possible.

      For example, researchers conducted a pretest-posttest study investigating the effects of stimulant medication on children’s ADHD symptoms. They administered stimulant medication to a large sample of children with ADHD for approximately 12 weeks. To assess outcomes, they asked clinicians, parents, and teachers to rate children’s ADHD symptoms at the beginning and end of the study. Overall, 75% of children showed a significant decrease in symptoms (Döpfner, Görtz-Dorten, Breuer, & Rothenberger, 2011).

      Because the study lacked a control group, the researchers