WHAT IS EVIDENCE-BASED THERAPY? 


Psychologists, Therapy, and the Evidence

Generally speaking, psychologists should ensure that their therapeutic interventions are guided by sound reasoning, careful judgment, and a theoretically coherent method of conceptualizing the issues a client is looking to address in therapy. Furthermore, if the therapy we provide is to be of the highest quality and standard, it must be effective beyond superficial "placebo" or "expectancy" effects. I think all psychologists would agree with these points and would advocate for this kind of responsible practice. This scientifically minded approach to therapy is also, in many ways, what separates the majority of psychologists from the plethora of guru or "new age" therapists who practice untested forms of therapy grounded in neither science nor logic.

In an effort to hold themselves accountable, and to determine the most effective treatments for clients, psychologists have in recent years conducted studies to evaluate different modes of therapy and their treatment outcomes. This clinical research has led to the emergence of so-called "Evidence-Based Treatment," or what is sometimes called "Evidence-Based Practice," "Empirically Validated Treatments," or a handful of other variants. Some of the earliest studies evaluated the effectiveness of Cognitive-Behavioural Therapy (CBT). Researchers did so by randomly assigning clients with a very specific diagnosis (e.g., depression) to a course of treatment involving either CBT or a generic, less active "supportive therapy." The results from these studies seemed overwhelmingly positive, indicating that CBT was the more effective therapeutic approach (Dobson & Dobson, 2009). For a long time, CBT was the only therapeutic approach being researched, and it was not long before many began advocating for CBT as the first line of treatment for depression, anxiety, and a variety of other issues. In fact, for many years, saying that one practices "Evidence-Based Therapy" was often equivalent to saying that one works from a CBT model of therapy - to a certain extent, this bias is still prevalent today.

To summarize, the early research seemed to suggest that if we were to practice in accordance with the scientific evidence, we should all become Cognitive-Behavioural Therapists. To do otherwise would appear to be "unscientific" and perhaps even unethical. But clinicians who practiced other models of therapy were suspicious of these bold claims, and critical thinkers began to question the interpretation of the so-called "evidence." In the end, the apparent superiority of CBT would turn out to be a claim holding little water. I will briefly discuss some of the critical issues below.

 

Research and "Manualized" Therapies

A question that one might reasonably ask is: why were advocates of those other therapy approaches so slow in conducting their own empirical research? This may be difficult to explain to non-psychologists, but part of the answer has to do with the extent to which an approach has been "manualized." CBT, for example, lends itself very well to empirical research because it is a manualized form of therapy: psychologists can essentially follow a step-by-step manual for how to apply their specific therapy approach. This ensures that every client receives essentially the same form of therapy. It may not sound like a big deal, but it is extremely important for research purposes, given the need for standardization and the necessity of being able to "control for" the specific variables researchers are interested in (in this case, CBT versus a generic supportive therapy). In the best-case scenario, every research subject or client will be as similar to one another as possible (e.g., all clients will share a common diagnosis of "depression"), and every aspect of the research will be the same for each individual except the therapy treatment, which the researcher manipulates through randomized assignment while observing the effect - in this case, treatment outcome. What I am describing here is the basic outline for conducting Randomized Controlled Trials (RCTs), which are thought to be the gold standard in psychotherapy research.
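To make the logic of an RCT concrete, here is a minimal sketch in Python. The sample size, client names, and group labels are hypothetical, purely for illustration; it is not a depiction of any particular study.

    import random

    # Hypothetical illustration of the RCT logic described above: clients who
    # share a single diagnosis are randomly assigned to either the manualized
    # treatment (CBT) or the generic supportive-therapy control.
    random.seed(42)  # fixed seed so the illustration is reproducible

    clients = [f"client_{i:02d}" for i in range(1, 21)]  # 20 clients, all diagnosed with "depression"
    random.shuffle(clients)

    groups = {
        "CBT": clients[:10],                 # the manipulated variable: which therapy is delivered
        "supportive_therapy": clients[10:],  # everything else is held as constant as possible
    }

    # After treatment, each group's outcomes (e.g., mean change on a depression
    # scale) would be compared; random assignment is what licenses attributing
    # any group difference to the therapy itself rather than to the clients.
    for name, members in groups.items():
        print(name, members)

The point of the shuffle is that any pre-existing client differences are spread across both groups by chance, so they cannot systematically favour one treatment.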

"Okay," one might say, "that sounds fine, but what prevents other approaches from conducting research in the same way?" Well, some therapies are not so easy to manualize or standardize. Most psychodynamic therapies, for example, do not involve rigid step-by-step approaches in which individuals presenting with "depression" are treated more or less the same. Psychodynamic therapies may instead put a great deal of emphasis on the specific developmental and attachment history of the individual, in addition to the specific ways they present themselves within relationships - both in the therapy room and with important others in their lives. Furthermore, most dynamic therapists would argue that a person may meet criteria for depression for very different reasons, so one cannot necessarily categorize individuals by diagnosis. These theoretical differences make it much harder for psychodynamic therapies to engage in empirical research, though efforts have been made to manualize and research these more individually tailored or fluid approaches. Short-Term Dynamic Psychotherapy (STDP), for example, has been shown to be just as effective as other therapies that have been actively promoted as "empirically supported" (Shedler, 2010). Many psychodynamic investigators have also challenged the status quo in mainstream psychology by arguing against the use of Randomized Controlled Trials (RCTs) in efficacy research. They point instead to the importance of exploring the validity of treatment approaches using qualitative research and clinical case studies (Malan & Della Selva, 2006).

In short, manualized therapies are better suited to empirical research because they can be standardized and contrasted within randomized controlled trials, making it easier to determine conclusively whether they are truly effective. However, it does not follow that "non-manualized" approaches are necessarily ineffective. Though these other approaches may be at an empirical disadvantage, since they do not easily lend themselves to a cookie-cutter approach to therapy, the emerging research suggests that these other bona fide approaches (e.g., psychodynamic and interpersonal therapies) are every bit as effective (Charman, 2003).

 

A Sterile Research Environment

Many critics point out that findings obtained within a standardized and excessively controlled research program are unlikely to generalize to the more dynamic, real-world environments of private practice clinics. Remember that the study designs of the "empirically based therapy" research typically assume that disorders are homogeneous and clearly demarcated. In reality, people often meet criteria for multiple diagnoses at any one time. These studies also categorize individuals based on diagnosis, which may fail to account for individual differences - in my experience, a person may meet criteria for "anxiety" for very different underlying reasons, and in my opinion such people ought not to be treated the same. These "empirical" studies also rely on the process of standardization - researchers must use a very specific therapeutic approach that is essentially the same for everyone. While research studies must focus on strict standardization, real-world therapists rarely practice so inflexibly, so again, the results of these studies may only apply to those who strictly adhere to a "theoretically pure" model of therapy, which in practice is almost impossible. Though well-intended, I am sure, the uncritical interpretation of the "evidence based" literature has, in my estimation, the unfortunate effect of treating clients according to their diagnostic labels, while failing to appreciate the very real human qualities of our clients, their unique histories, and their specific life circumstances.

 

Ignoring Non-Specific Factors

Some critics suggest that the "evidence-based" movement makes the specific therapy approach sound more important than it really is, while ignoring other factors that are perhaps more crucial. The critical psychotherapy research has shown, for example, that the specific therapeutic approach may account for as little as 2-15% of the therapeutic outcome. This compares to an estimated 30% attributed to the quality of the therapeutic relationship, 15% attributed to client expectancy effects, and 40% attributed to client variables and other therapeutic factors (Parker & Fletcher, 2007). These so-called "common" or "non-specific" factors therefore account for far more variability in outcome than the specific therapy approach does.
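A quick back-of-the-envelope tally makes the imbalance plain. The sketch below simply re-uses the estimates cited above, taking "technique" at the very top of its 2-15% range; the figures are rough estimates, not precise measurements.

    # Illustrative arithmetic only, using the outcome-variance estimates
    # cited above (Parker & Fletcher, 2007).
    variance_shares = {
        "specific technique": 0.15,               # upper bound of the 2-15% estimate
        "therapeutic relationship": 0.30,
        "client expectancy (placebo)": 0.15,
        "client variables & other factors": 0.40,
    }

    non_specific = sum(share for factor, share in variance_shares.items()
                       if factor != "specific technique")

    print(f"Specific technique:   at most {variance_shares['specific technique']:.0%}")
    print(f"Non-specific factors: roughly {non_specific:.0%} combined")
    # Even granting technique its most generous estimate, the non-specific
    # factors account for several times as much of the outcome variance.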

In addition to the above, critics point out that the "empirically-based" research is not able to control for non-specific therapy factors during the investigative process. We can see the problem more clearly if we compare the RCTs within the psychotherapy research to those involving psychotropic medication. During drug trials, for example, experiments are set up so that subjects/patients are randomly assigned to either the "active" drug or a "placebo" group. The best experiments are those in which neither the researcher nor the patient is aware of which treatment was received until the end of the study. These are appropriately called "double-blind" experiments, and they are beneficial in that they cancel out expectancy effects and researcher bias. But there is no such thing as a double-blind experiment in the therapy research. The therapist or researcher is fully aware that: 1) they are giving the treatment being tested, and 2) the therapy they are providing is "active." Parker & Fletcher (2007) point out that both of these factors are likely to increase therapist motivation and numerous non-specific therapy factors. Therapists providing the more passive supportive therapy are likely to compromise these non-specific factors and the therapeutic alliance - factors that contribute strongly to therapy outcome. Therapists providing the supportive therapy "control group" are also unlikely to be offering their preferred mode of therapy, which is another variable linked to treatment outcome. For example, Blatt et al. (1996) found that the most highly effective psychologists in studies contrasting specific therapies appear to be those who provided treatment consistent with their preferred mode of therapy - regardless of what treatment group they were assigned to.
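For readers unfamiliar with blinding, here is a minimal sketch of why a drug trial can be double-blinded while a therapy trial cannot. Everything in it (patient names, "kit" labels, group sizes) is hypothetical, for illustration only.

    import random

    # Hypothetical sketch of the double-blind logic described above: a third
    # party generates the allocation and hands everyone else only coded labels.
    random.seed(7)

    patients = [f"patient_{i:02d}" for i in range(1, 11)]
    treatments = ["active"] * 5 + ["placebo"] * 5
    random.shuffle(treatments)

    allocation_key = dict(zip(patients, treatments))  # sealed away until the study ends
    blinded_labels = {p: f"kit_{i:03d}" for i, p in enumerate(patients)}  # all anyone sees

    # Identical-looking pills make this masking possible. A therapist, by
    # contrast, always knows which therapy they are delivering, so therapist
    # expectancy and motivation can never be cancelled out in psychotherapy RCTs.
    print(blinded_labels)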

There are many practitioners out there advertising that they provide "empirically supported" or "empirically validated" therapy. It is unfortunate that these terms carry so many hidden meanings and problematic assumptions, because the average consumer is unlikely to know the difference. I think it is therefore necessary for the careful consumer to ask what "empirically based" really means. In the broadest sense, I think the term applies to all psychologists; in the narrower sense, however, especially as it concerns the apparent superiority of CBT, it may be very misleading, as it does not hold up to critical evaluation of the so-called evidence.

 

References

Blatt, S., Sanislow, C., Zuroff, D., & Pilkonis, P. (1996). Characteristics of effective therapists: Further analyses of data from the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology, 64(6), 1276-1284.

Charman, D. (2003). Paradigms in current psychotherapy research: A critique and the case for evidence-based psychodynamic psychotherapy research. Australian Psychologist, 36(1), 39-45.

Dobson, D. & Dobson, K. (2009). Evidence-based practice of cognitive-behavioral therapy. New York, NY: Guilford.

Malan, D. & Della Selva, P. C. (2006). Lives transformed: A revolutionary method of dynamic psychotherapy. London: Karnac Books.

Parker, G. & Fletcher, G. (2007). Treating depression with the evidence-based psychotherapies: a critique of the evidence. Acta Psychiatrica Scandinavica, 115, 352-359.

Shedler, J. (2010). The efficacy of psychodynamic psychotherapy. American Psychologist, 65(2), 98-109.

Worrall, J. (2010). Evidence: philosophy of science meets medicine. Journal of Evaluation in Clinical Practice, 16, 356-362.