There are those who embrace routine outcome monitoring (ROM), and those who avoid it like the plague.
On one side of the fence, skeptical practitioners brandish a crucifix at any client-focused outcome measure; on the other, ROM enthusiasts treat outcome measures like the second coming, believing they can supersede decision making about the treatment process.
The adamant Non-ROMer would say, “How can a simple outcome measure tell me whether my client is benefiting from treatment and how effective I am? Besides, change takes a long time to happen, and it’s gonna get worse before it gets better.”
Meanwhile, the rookie ROMer would say, “The outcome measure is sufficient to inform me whether my client is benefiting from therapy and how effective I am. Change happens early all the time, and it won’t get worse before it gets better.”
Like all fundamentalism, such rigidity snaps easily under pressure. Gregory Bateson’s words remind us that the test of our stability is how flexible we are.
We cannot farm out decision making to quantified measures alone, nor can we rely on intuition alone, no matter how experienced we may be. Often, our intuition is like Homer Simpson looking in the mirror (see the previous post on Why Our Self-Assessment Might Be a Delusion of Reality): highly susceptible to overconfidence bias. For instance, data in and of itself cannot dictate whether one should continue or end treatment.
To make better clinical judgments, our task is to hold the tension of opposites. Rather than rely solely on borrowed evidence from remote clinical trials that barely represent the clientele we see, we can develop our own native evidence and use it to hone our intuitive lens.
Contrary to intuition, here’s what we know from psychotherapy outcome studies:
1. Change happens early rather than later, and
2. It doesn’t often get worse before it gets better.
This doesn’t mean your client will never experience a delay or a dip. Life happens. Averages do not negate the individual. But it would be unwise to see the individual and ignore the average base rates.
Here’s more of what we know from cumulative evidence:
3. We are practically blind when our clients deteriorate in our care, and
4. Differences in therapist skills account for more of the variance in outcomes than treatment models do.
Identifying deterioration is straightforward: start becoming a ROMer. Improving your performance is not so straightforward (more on this in later posts). For now, it’s important to note that an individual client’s outcome does not tell you how effective you are as a therapist. You would need more than N = 1, 2, or even 3 (at least 30) to draw any general conclusion about your effectiveness.
Our task is to hold the science and the art, and conjointly with our clients, weave a coherent narrative about the impact of our work together. There is science in the arts, and there is art in the sciences.
1. Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner. What an experiment! When put to the test, the pundits we deem top forecasters aren’t as good as they are made out to be. Instead, ordinary “non-experts” rise to the occasion to become “superforecasters.”
2. The Great Psychotherapy Debate (2nd ed.) by Bruce Wampold & Zac Imel, 2015. Even if you own the classic 2001 1st edition of this book, it is a must-read for anyone truly interested in the field of psychotherapy.
3. The Art of Thinking Clearly by Rolf Dobelli. If you liked Daniel Kahneman’s classic Thinking, Fast and Slow, you are going to appreciate this book. Every chapter offers a bite-size approach to counteracting our inherent biases. It’s chock-full of good advice.