Yesterday I talked about adolescents with BPD using technology to aid in therapy and recovery. Along that vein, this article discusses the possibility of computerizing therapy as a whole. Personally I think this is a horrible idea, but let’s see what they have to say. It’s always good to be aware of what is going on in the psychiatric community.
Why Machines May Be The Best Therapists
Robots can be trusted more than humans—especially where memories are concerned.
Published on February 23, 2013 by Christopher Badcock, Ph.D. in The Imprinted Brain
Research by Elizabeth Loftus over thirty years established that eye-witnesses’ recall of incidents could be influenced by the language of their interrogation: for example, using words like “smash” in relation to a car accident instead of “bump” or “hit” causes witnesses to report higher speeds and more serious damage. But more recent research has revealed that this so-called misinformation effect is not found if a robot does the questioning.
One group with a human interviewer and another with a NAO robot interviewer were asked identical questions that introduced false information about a crime the subjects had viewed. When posed by humans, the questions caused the accuracy of the witnesses’ recall to drop by 40 per cent compared with those who did not receive misinformation, as the former group “remembered” objects that were never there. But misinformation presented by the robot had no such effect, despite the fact that the scripts were identical and that the experimenters told the human interviewers “to be as robotic as possible”.
The explanation presumably lies in the fact that, although the 23-inch-high android robot has eyes, a synthesized voice, and is capable of gestures, it is not able to bring the subtle expressions to an interview that a human being could—and is certainly not capable of sophisticated mentalistic responses that might exert further, even more sensitive effects on those being interviewed. As the lead researcher points out in New Scientist (9 February, p. 21), “We have good strong mental models of humans, but we don’t have good models of robots.”
In fact, we relate to them rather as we would to aliens, and to the extent that robots like these mimic what we might expect of an encounter with an alien, they have the same “autistic” effect: diminishing mentalism but encouraging the kind of mechanistic, computer-like memory you typically find in autistic savants like Kim Peek.
Elizabeth Loftus went on to research so-called “false memory syndrome” and did much to discredit the paranoia of child sex abuse witch-hunts. But these remarkable findings suggest that, were psychotherapy to be entrusted to suitably programmed computers, there would be much less risk of false memories being reported in the first place. And if being interviewed by a robot makes such a difference to the accuracy and objectivity of a person’s memory, what more could be expected where other aspects of mentalism were concerned, such as emotion, sociability, subjectivity, and self-consciousness? At the very least, a mechanistic psychotherapist would counter-balance the hyper-mentalism of psychotics, and even autistic clients might relate to it much better than to a human one.
It is now widely recognized that classical psychoanalysis is not an effective treatment for psychotics. Indeed, as a recent account points out: “The classic psychoanalytic approach (including free association and having the patient lying prone on a couch with the therapist out of sight) is contra-indicated.” Furthermore, “Therapists who work with schizophrenia patients need to have a high level of frustration tolerance and not have a need to derive narcissistic gratification from the patient’s efforts or progress.” Clearly, the role of the psychotherapist—and perhaps that of the psychoanalyst especially—is open to abuse and exploitation by the therapist for whatever reason—and there are a lot of them!
But no conceivable computerized psychotherapist would be subject to similar temptations. On the contrary, intelligent interfaces that might develop into computer psychotherapists could exploit their very weaknesses where absence of real human motives, memories, needs, emotions, and ego were concerned to guarantee levels of objectivity, impartiality, and rationality to which few if any human psychotherapists could aspire. At the very least, their never-tiring silicon circuits would certainly guarantee a high level of frustration tolerance, and narcissistic gratification is something that only a hyper-mentalizing human being with an agenda of personal aggrandizement would seek!
Instead, like an alien intelligence from outer space, the machine mind would be ideally qualified to explore human mentality with an objectivity, detachment, and impartiality that no human being could ever achieve. Even better, the wholly mechanistic basis of the machine’s mind would mean that it was ideally tailored to help where psychotics need help the most: in de-hyper-mentalizing and re-balancing their cognitive configuration in the mechanistic direction.
So not just in general terms and in relation to the human race as a whole might the alien invasion of the future—intelligent, Turing-tested machines—be crucial to our understanding of ourselves: it could also transform individual psychotherapy and give those who needed it unique and otherwise unobtainable insights into themselves—something psychoanalysis always promised, but seldom if ever delivered.
(With thanks to Steven M. Silverstein, whose remarkable research on blindness and risk of psychosis was the subject of a previous post.)
So what do you think about that, my fellow Borderline friend? How would you feel about talking to a robot shrink? Any feasibility here? Okay, I kind of feel like this article was written with a bit of tongue in cheek. At least I hope it was, but considering its source, and the fact that it was written by a prominent Doctor of Psychiatry and published, I fear it was not.
On an intellectual level, I get it. It also makes me understand why some women are drawn to sociopaths. From a purely analytic standpoint, yes, for assessment and diagnosis, sure, robots might be an excellent aid. A tool in the therapeutic process. But as the therapist itself? Hell no. It would be an utter failure. Especially for people like us.
They could spit out a pre-ordered list of behavioral techniques and advice when certain keywords are triggered, but other than that???
How would a robot offer true guidance? How would a robot offer comfort and security? How would a robot establish a therapeutic relationship? AGAIN, this concept of therapy ONLY works on the idea that people will take quickly and easily to therapy and comply. What part of psychotic or disordered makes you believe that this will happen? At all?
Not to mention, I would just feel a little bit silly talking to a box with eyes. I’d rather put in the work with a real human being. Especially in the case of BPD, it’s really important that we learn how to develop healthy, functioning relationship skills. Having a robot therapist is pretty much the definition of how NOT to do this, haha.
As a screening tool? Sure. A diagnostic aid? Great! As an actual therapist? Don’t be lazy.
What do you think?