Questions
As we have discussed, different research designs are better suited for certain types of EIP questions. As an alternative to presenting multiple research hierarchies, some researchers use a matrix to represent the fit between various EIP questions and the research designs best equipped to answer them. A matrix emphasizes that multiple designs can be used for each type of question, but that some designs are stronger than others depending on the question type. Table 3.2 is an example of such a matrix, which we've created using the four EIP questions and the study designs we have briefly explored in this chapter.
The larger checks indicate the study designs that are best suited to answer each EIP question, and the smaller checks indicate designs that are less well suited, or less commonly used, to answer that question. Designs without a large or small check do not provide good evidence to answer the EIP question. For example, when answering questions about factors that predict desirable and undesirable outcomes, correlational studies are the design of choice and therefore are marked with a large check, as are systematic reviews that combine the results of such studies to answer prognosis and risk questions. Sometimes the results of experimental or quasi-experimental studies are used to determine risk and prognosis as well, especially when large studies collect extensive data about participants in order to examine factors related to risks and benefits, not just the treatments received; each of these designs is therefore marked with a small check. Qualitative studies and single-case designs are neither well suited to answering risk and prognosis questions nor commonly used for this purpose, so they are not marked with a check.
TABLE 3.2 Matrix of Research Designs by Research Questions
| EIP Question | Qualitative | Experimental | Quasi-Experimental | Single Case | Correlational | Systematic Reviews or Meta-analyses |
|---|---|---|---|---|---|---|
| What factors predict desirable and undesirable outcomes? | | √ | √ | | √ | √ |
| What can I learn about clients, service delivery, and targets of intervention from the experiences of others? | √ | | | | √ | √ |
| What assessment tools should be used? | √ | √ | | | √ | √ |
| What intervention, program, or policy has the best effects? | | √ | √ | √ | √ | √ |
You should keep in mind that this is not an exhaustive list of types of research studies you might encounter and that this table assumes that these study designs are executed with a high level of quality. As you read on in this book, you'll learn a lot more about these and other study designs and how to judge the quality of the research evidence related to specific EIP questions.
3.3.6 Philosophical Objections to the Foregoing Hierarchy: Fashionable Nonsense
Several decades ago, it started to become fashionable among some academics to raise philosophical objections to the traditional scientific method and the pursuit of logic and objectivity in trying to depict social reality. Among other things, they dismissed the value of using experimental design logic and unbiased, validated measures as ways to assess the effects of interventions.
Although various writings have debunked these arguments as “fashionable nonsense” (Sokal & Bricmont, 1998), some writers continue to espouse them, and you might encounter claims that the foregoing hierarchy is obsolete. Our feeling is that these controversies have been largely laid to rest, so we describe them only briefly here.
Using such terms as postmodernism and social constructivism to label their philosophy, some have argued that social reality is unknowable and that objectivity is impossible and not worth pursuing. They correctly point out that each individual has his or her own subjective take on social reality. But from that well-known fact they leap to the conclusion that multiple subjective realities are all we have, and that because each of us perceives social reality somewhat differently, an objective social reality does not exist. Critics have depicted this philosophy as relativistic because it holds that truth is in the eye of the beholder and therefore unknowable, and that all ways of knowing are equally valid. We agree that we can never be completely objective and value free, but it does not follow that, because perfect objectivity is an unattainable ideal, we should not even try to minimize the extent to which our biases influence our findings. Furthermore, if relativists believe it is impossible to assess social reality objectively and that anyone's subjective take on external reality is just as valid as anyone else's, how can they proclaim their own view of social reality to be the correct one? In other words, relativists argue that all views about social reality are equally valid, but that our view that it is knowable is not as good as their view that it is unknowable. Say what?!
Some have argued that an emphasis on objectivity and logic in research is just a way for the powers that be to keep down those who are less fortunate, and that if a research study aims to produce findings that support noble causes, then it does not matter how biased its design or data collection methods might be. Critics point out, however, that the idea that there is no objective truth actually works against the aim of empowering the disenfranchised. If no take on social reality is better than any other, then on what grounds can advocates of social change criticize the views of the power elite? For example, during the Trump administration, spokespeople at times dismissed facts they didn't like by proclaiming to have their own “alternative facts.” Those spokespeople were dismissing objective facts as just someone else's version of reality, one with no more credence than the Trump version of reality.
You may or may not encounter philosophical objections such as these, but there certainly continue to be political leaders and others who call the value of science and objectivity into question. We have found that these controversies are raised less and less frequently by experts in EIP and that, for the most part, practitioners and researchers alike are keenly focused on how to use research to support the best possible services for clients and communities. While philosophical objections