Allen Rubin

Practitioner's Guide to Using Research for Evidence-Informed Practice

      The current and more comprehensive definition of EIP – one that is more consistent with definitions that are prominent in the current human service professions literature – views EIP as a process, as follows: EIP is a process for making practice decisions in which practitioners integrate the best research evidence available with their practice expertise and with client attributes, values, preferences, and circumstances. In other words, practice decisions should be informed by, and not necessarily based on, research evidence. Thus, opposing EIP essentially means opposing being informed by scientific evidence!

      In the EIP process, practitioners locate and appraise credible evidence as an essential part, but not the only basis, for practice decisions. The evidence does not dictate the practice. Practitioner expertise such as knowledge of the local service context, agency capacity, and available resources, as well as experience with the communities and populations served, must be considered. In addition, clients are integral parts of the decision-making process in collaboration with the practitioner. Indeed, it's hard to imagine an intervention that would work if the client refuses to participate!

      Moreover, although these decisions often pertain to choosing interventions and how to provide them, they also pertain to practice questions that do not directly address interventions. For example, practitioners might seek evidence about client needs, what measures to use in assessment and diagnosis, when inpatient treatment or discharge is appropriate, cultural influences on clients, whether a child should be placed in foster care, and so on. They might even want to seek evidence about what social justice causes to support. In that connection, there are six broad categories of EIP questions, as follows:

      1 What factors best predict desirable or undesirable outcomes?

      2 What can I learn about clients, service delivery, and targets of intervention from the experiences of others?

      3 What assessment tool should be used?

      4 Which intervention, program, or policy has the best effects?

      5 What are the costs of interventions, policies, and tools?

      6 What are the potential harmful effects of interventions, policies, and tools?

      Let's now examine each of the preceding six types of questions. We'll be returning to these questions throughout this book.

      Suppose you are a child welfare administrator or caseworker and want to minimize the odds of unsuccessful foster-care placements, such as placements that are short-lived, that subject children to further abuse, or that exacerbate their attachment problems. Your EIP question might be: “What factors best distinguish between successful and unsuccessful foster-care placements?” The type of research evidence you would seek to answer your question (and thus inform practice decisions about placing children in foster care) likely would come from case-control studies and other forms of correlational studies that are discussed in Chapter 9 of this book.

      A child welfare administrator might also be concerned about the high rate of turnover among direct-service practitioners in the agency, and thus might pose the following EIP question: “What factors best predict turnover among child welfare direct-care providers?” For example, is it best to hire providers who have completed specialized training programs in child welfare or taken electives in it? Or will such employees have such idealistic expectations that they will be more likely to experience burnout and turnover when they confront the disparity between their ideals and the service realities of the bureaucracy? Quite a few studies have addressed these questions, and as an evidence-informed practitioner, you would want to know about them.

      If you administer a shelter for homeless people, you might want to find out why so many homeless people refuse to use shelter services. You may suspect that the experience of living in a shelter is less attractive than other options. Perhaps your EIP question would be: “What is it like to stay in a shelter?” Perhaps you've noticed that among those who do use your shelter there are almost no women. Your EIP question might therefore be modified as follows: “What is it like for women to stay in a shelter?” To answer those questions, you might read various qualitative studies that employed in-depth, open-ended interviews of homeless people that include questions about shelter utilization. Equally valuable might be qualitative studies in which researchers themselves lived on the streets among the homeless for a while as a way to observe and experience the plight of being homeless, what it's like to sleep in a shelter, and the meanings shelters have for homeless people.

      Direct-service practitioners, too, might have EIP questions about their clients' experiences. As mentioned previously, one of the most important factors influencing service effectiveness is the quality of the practitioner-client relationship, and that factor might have more influence on treatment outcome than the choices practitioners make about what particular interventions to employ. We also know that one of the most important aspects of a practitioner's relationship skills is empathy. It seems reasonable to suppose that the better the practitioner's understanding of what it's like to have had the client's experiences – what it's like to have walked in the client's shoes, so to speak – the more empathy the practitioner is likely to convey in relating to the client.