Friday, May 24, 2013

Why Machines May Be The Best Therapists

Yesterday I talked about adolescents with BPD using technology to aid in therapy and recovery. Along that vein is this article, which discusses the possibility of computerizing therapy as a whole. Personally I think this is a horrible idea, but let’s see what they have to say. It’s always good to be aware of what is going on in the psychiatric community.

Why Machines May Be The Best Therapists

Robots can be trusted more than humans—especially where memories are concerned.
Published on February 23, 2013 by Christopher Badcock, Ph.D. in The Imprinted Brain

Research by Elizabeth Loftus over thirty years established that eye-witnesses’ recall of incidents could be influenced by the language of their interrogation: for example, using words like “smash” in relation to a car accident instead of “bump” or “hit” causes witnesses to report higher speeds and more serious damage. But more recent research has revealed that this so-called misinformation effect is not found if a robot does the questioning.

One group of subjects was questioned by a human interviewer and another by a NAO robot; both were asked identical questions that introduced false information about a crime the subjects had viewed. When posed by humans, the questions caused the witnesses’ accuracy of recall to drop by 40 per cent compared with those who did not receive misinformation, as the former group “remembered” objects that were never there. But misinformation presented by the robot had no such effect, despite the fact that the scripts were identical and that the experimenters told the human interviewers “to be as robotic as possible”.

The explanation presumably lies in the fact that, although the 23-inch-high android robot has eyes, a synthesized voice, and is capable of gestures, it is not able to bring the subtle expressions to an interview that a human being could—and is certainly not capable of sophisticated mentalistic responses that might exert further, even more sensitive effects on those being interviewed. As the lead researcher points out in New Scientist (9 February, p. 21), “We have good strong mental models of humans, but we don’t have good models of robots.”

In fact, we relate to them rather as we would to aliens, and to the extent that robots like these mimic what we might expect of an encounter with an alien, they have the same “autistic” effect: diminishing mentalism but encouraging the kind of mechanistic, computer-like memory you typically find in autistic savants like Kim Peek.

Elizabeth Loftus went on to research so-called “false memory syndrome” and did much to discredit the paranoia of child sex abuse witch-hunts. But these remarkable findings suggest that, were psychotherapy to be entrusted to suitably programmed computers, there would be much less risk of false memories being reported in the first place. And if being interviewed by a robot makes such a difference to the accuracy and objectivity of a person’s memory, what more could be expected where other aspects of mentalism were concerned, such as emotion, sociability, subjectivity, and self-consciousness? At the very least, a mechanistic psychotherapist would counter-balance the hyper-mentalism of psychotics, and even autistic clients might relate to it much better than to a human one.

It is now widely recognized that classical psychoanalysis is not an effective treatment for psychotics. Indeed, as a recent account points out: “The classic psychoanalytic approach (including free association and having the patient lying prone on a couch with the therapist out of sight) is contra-indicated.” Furthermore, “Therapists who work with schizophrenia patients need to have a high level of frustration tolerance and not have a need to derive narcissistic gratification from the patient’s efforts or progress.” Clearly, the role of the psychotherapist—and perhaps that of the psychoanalyst especially—is open to abuse and exploitation by the therapist for whatever reason—and there are a lot of them!

But no conceivable computerized psychotherapist would be subject to similar temptations. On the contrary, intelligent interfaces that might develop into computer psychotherapists could exploit their very weaknesses where absence of real human motives, memories, needs, emotions, and ego were concerned to guarantee levels of objectivity, impartiality, and rationality to which few if any human psychotherapists could aspire. At the very least, their never-tiring silicon circuits would certainly guarantee a high level of frustration tolerance, and narcissistic gratification is something that only a hyper-mentalizing human being with an agenda of personal aggrandizement would seek!

Instead, like an alien intelligence from outer space, the machine mind would be ideally qualified to explore human mentality with an objectivity, detachment, and impartiality that no human being could ever achieve. Even better, the wholly mechanistic basis of the machine’s mind would mean that it was ideally tailored to help where psychotics need help the most: in de-hyper-mentalizing and re-balancing their cognitive configuration in the mechanistic direction.

So not just in general terms, and in relation to the human race as a whole might the alien invasion of the future—intelligent, Turing-tested machines—be crucial where our understanding of ourselves is concerned: it could transform individual psychotherapy and give those who needed it unique and otherwise unobtainable insights into themselves—something psychoanalysis always promised, but seldom if ever delivered.

(With thanks to Steven M. Silverstein, whose remarkable research on blindness and risk of psychosis was the subject of a previous post.)


So what do you think about that, my fellow Borderline friend? How would you feel about talking to a robot shrink? Any feasibility here? Okay, I kind of feel like this article was written a bit tongue in cheek. At least I hope it was, except that considering its source, and the fact that it was written and published by a prominent doctor of psychiatry, I fear it was not.

On an intellectual level, I get it. It also makes me understand why some women are drawn to sociopaths. From a purely analytic standpoint, yes, for assessment and diagnosis, sure, robots might be an excellent aid. A tool in the therapeutic process. But the therapist themselves? Hell no. They would be an utter failure. Especially for people like us.

They could spit out a pre-ordered list of behavioral techniques and advice when certain keywords are triggered, but other than that???

How would a robot offer true guidance? How would a robot offer comfort and security? How would a robot establish a therapeutic relationship? AGAIN, this concept of therapy ONLY works on the idea that people will take quickly and easily to the idea of therapy and comply quickly. What part of psychotic or disordered makes you believe that this will happen? At all?

Not to mention, I would just feel a little bit silly talking to a box with eyes. I’d rather put in the work with a real human being. Especially in the case of BPD, it’s really important that we learn how to develop healthy, functioning relationship skills. Having a robot therapist is pretty much the definition of how NOT to do this, haha.

As a screening tool? Sure. A diagnostic aid? Great! As an actual therapist? Don’t be lazy.

What do you think? 


  1. Have you ever thought that you might not need therapy? That whatever disorder you have is just who you are? Maybe sleeping around, being the center of attention, lying, and manipulating is just an evolutionary advantage for people who act this way.

    I certainly can see the benefits of this. You will get a lot of guys wanting to be your protector or try to save you. It will suck for them because some of them will actually love you, but some of them will probably deserve the shit storm you put them through as well. Just a thought and yes I am bitter.

    1. I've covered this before myself! I'm a Non who actually thinks a lot of the typical BPD sensitivities, even the scale of their trigger responses, are a great deal more rational than psychiatrists like to pretend. It seems to me that you have intelligent, perceptive people who are Natural and True waging war against a sense of "shame" instilled by early influences, people and religions.

      Honestly, if there is no Higher Power and religious morality is revealed to be nothing more than a way to control people, the Borderline's survival instincts are advantageous in MANY ways. The depression is the result of Gravity - the Curse of Knowledge mixed with Shame-experience - Bitterness about the painful reality of an IMPOSED existence.

      If a judgemental God-like entity DOES exist, and there IS a purpose to life, it complicates things a little - but if God is all things, It must be a non-judgemental entity AS WELL AS a judgemental one (at once both and neither, or something more). Bipolar. Following this line of logic, in some ways the Borderline mentality seems CLOSER to the Spiritual Ideal than the Nons who aspire to it ... the world is separated by those who KNOW they have personality "disorders", and those who don't. Surely?

      Or would the religious believer suggest the Borderline is trapped in Limbo? Isn't that just a cop-out though?

      Anyway, as far as I can tell, the opposite to the Borderline is someone ignorant, or repressed to the point of virtual mechanisation. Which I suppose is how this comment ties in loosely with the article (thank you as always, Haven!!)

    2. Not sure what you are getting at with god. I was talking about evolution not god. I don't know what real emotions people with BPD actually experience, but I think jealousy and hate are emotions that they are capable of. Everything else is questionable. The other emotions are acted out in service of their narcissistic desires. So in this sense, if we were to talk about god and whatnot, I would compare those labeled with BPD to demons.

      Take my comments lightly though because I don't live with BPD. I just thought I knew someone who has BPD and this is just from observation.

    3. Haha that's okay, I was probably being unclear - I'm a fairly indifferent agnostic myself, but the people who I know with BPD have often emphasised that they feel the gravity of the God question a lot. Mainly because it's the opposite to the purely evolution-based angle you were mentioning. Essentially, these people suggested that BPD cycling between idealization and devaluation could be turning on the idea that evolutionary success and religious/spiritual success seem to be at odds. So when selecting partners they might swing between the angelic caregiver and the devilish darwinian ... and take on the role of the other in each instance. It's black and white. Either we're alone and out for ourselves, or we're not - if you see what I mean.

      Saying that, evolution obviously happened. What I mean is, a lot of BPD issues come down to "IS there a point?". To be or not to be. Etc. etc.

    4. i.e. God is the difference between whether the strong are weak or the weak are strong. Whether someone who loves you is worth your time or wasting your time. Whether the Borderline is an angel or a devil for acting upon their impulses - whether their impulses are natural or unnatural.

    5. Thanks for the clarification ACE. The person with BPD I know used to be all about Christianity, but has long since abandoned it... I guess much like everything else.

      Maybe to them god is a hope for a better future/afterlife or a way to forgive their sins or the feeling that they are somehow flawed or bad (I don't know if they actually feel this way or it's just more talk). Maybe it is just another front to make them appear like they are trying to change or again are the victims of circumstances. Who knows... they lie so much.

  2. Either the above is philosophically brilliant or totally wack. I have an IQ of 130 and I don't get any of it.
    Anyways, about robots and therapy, there is a lot of testing going on involving US veterans with symptoms of PTSD. Results show that subjects are more prone to opening up to a computer than to a real person, although this might be due to the military's view on such problems. The researchers do acknowledge that a computer doesn't replace a human for the therapy itself, but that it is a useful tool in making a diagnosis, as a computer is much more accurate in picking up small tell-tale signs in a patient.

  3. They were questioning whether we aren't all overpathologizing natural human behaviour. It wasn't directly linked to the article but to the blog as a whole and whether seeking recovery or therapy is a waste of time.

    In my opinion a robot therapist wouldn't be providing anything that you couldn't research by yourself online. It would be like a lie detector giving advice or one of those online FAQs that gives auto-suggestions but not real answers. The article assumes that patients with PDs go to therapists for information and not for anonymous comfort. Not necessarily true.

    1. That's true. On the other hand the problem with online research is that people start self-diagnosing and assuming stuff that might not be there. The idea of a robot is that it can remember everything that has already been said with pinpoint accuracy so that inconsistencies can be weeded out. But with the whole comfort thing that's where it goes wrong I guess. The technology is still evolving but I still don't see the AI being so good that it can give the comfort of a real human. The robots and computers they are testing now are all remote controlled by human therapists anyway.

  4. Yes, maybe robots could be used to diagnose PDs beyond all reasonable doubt according to set criteria that they can objectively evaluate - but I don't think therapists struggle with that side of things when their patients are actually ready to be honest about their behaviour. I think the point is that people see a therapist to feel connected with another HUMAN under controlled conditions as much as they do to hear the opinion of an impartial outsider. If a patient knew that his or her therapist wasn't human, it wouldn't matter how advanced the AI was, they'd feel like they were alone in the room. The whole experience would suddenly seem farcical and self-indulgent (if it didn’t already!). It would be like talking to an all-knowing, emotionless doppelganger: the ultimate distortion!

    Like Haven says, if anything, the article serves as a reminder of why some disordered personalities are drawn to humans with sociopathic characteristics. The ruthless logic and truth-telling can be very appealing. Literal inhumanity would be one step too far, however, as there would be nothing "real" to chase. No truly personal solution. Human and computer logic are completely different, after all, because computer logic is fixed. The robot therapist would be completely incapable of a sudden moment of human insight; its guidance would be entirely reliant on pre-existing research data - the arrogant presumptions of the psychotherapy community. Whether they’re fooling themselves or not, people walk into mental health clinics believing that their problems are unique, and want more than a label and a set of tips. Of course, the fact that a human psychotherapist will almost certainly talk to them about “re-PROGRAMMING their thoughts” makes a joke of the whole process.

    But this is what I was getting at in my post a few days ago – people trying to file each other away into neat little boxes seems comically soulless whichever way you slice it. Human psychologists are just repeaters, repeating what they’ve read in books, making a living out of intellectualization and guess-work – in many ways they’re just as lost as the people they’re trying to help, they just don’t bring the same emotional baggage into the room when they attend a session. When you trace it ALL back, it all comes down to whether there’s a POINT to our lives or not, and how that would affect our priorities – whether people have anything to feel ashamed about, and whether “disorder” actually exists outside of the scientific box. As a Non, I’m attracted to the hypersensitivities of people with PDs, but rather more sceptical that there’s actually a RECOVERY to be made. I wonder if they’ll happen upon The Truth somewhere in their emotional turmoil – and they might wonder the same about me deducing something with my more detached sort of logic.

    But The Truth is a game for heads and hearts, not for cogs or factory parts.

    … … Hey, anyone here looking for a robot therapist who can throw rhyming couplets at you for an hour? Because I could do with the work.

    1. "The Truth"? What truth? There is none of that. It is not about finding "The Truth", because it doesn't exist. And that's certainly something that they tell you very early in the therapeutic process. What they do offer, though, are handholds for people to be able to cope with a certain set of problems. Handholds that come forth out of a lot of generalizing research and a few assumptions, but which are in the end tailor-made for the specific problems of a specific individual. Any therapist worth his/her salt will do it that way, instead of parroting whatever assumptions there are out there. You should know that, instead of throwing everything onto a big heap.

      About AI, computers do learn, and they learn very quickly. Right now the AI isn't good enough, but I am very curious what it will be like in a decade's time, when logic isn't as fixed for robots as it is now. What's more, tests with robots have shown that humans can become as emotionally attached to them as they can to a human.

    2. In the much bigger sense of the phrase, The Truth MUST exist, one way or the other. We all have our own subjective way of looking at things, but unless you can convince yourself that you’re the only person alive, there ARE absolute truths beyond that – about life, the universe and so on. Our personalities, however “disordered”, all came from somewhere. That’s the aspect of the sciences I like – that they’re trying to get to the bottom of everything (unlike blind faith). Therapeutic diagnoses, meanwhile, deal with the minutiae of human behaviour by collecting and categorising individuals. On a practical level this makes sense, because they offer “handholds for people to cope with a certain set of problems”, just like you say. But the idea of a robot therapist (the subject of the article) introduces a situation where the generalized research and assumptions could only be employed in a perfectly fixed capacity. The robot therapist’s effectiveness would depend entirely on the current level of human understanding about mental health – and all possible manifestations and eventualities being programmed into/accessible to the AI.

      You mention how any therapist worth his or her salt will tailor the procedure to the specific individual rather than parroting assumptions - I agree with that completely, which is why I commented on people going to therapists in order to establish a beneficial human connection (rather than simply to access an impersonal resource). My point is that AI is incapable of anything OTHER than parroting assumptions, no matter how detailed those assumptions might be. Computers are programmed by humans, so the depth of their logic will always be dependent on the knowledge of the programmer. Robots may become the most EFFICIENT way of processing existing assumptions but, like I say above, they’re incapable of having a moment of true insight like humans are. Machines only have epiphanies in Sci-Fi films.

      So then supposing, in a hypothetical situation, that there was a moment where humans came to understand everything about the brain, personality disorders and so on, and robots became the most efficient at processing that information – we wouldn’t need therapy any more! The solutions would become common knowledge almost as soon as they were discovered, like with all things these days. It would become a science of Truth rather than the well-intentioned pseudo-science that it currently is - two people in a room, hashing out issues over a table, hoping for a moment of clarity. The day a robot becomes the ideal therapist by any objective standard, therapy will no longer be required.

      But yes, go to a HUMAN therapist by all means, if you think it’s worth your time and money. There’s nothing wrong with that, and no-one can deny you that choice. It’s the robot therapist idea I’m really taking on here.

    3. Oh sorry, I just noticed your comment about humans becoming emotionally attached to robots! I’m sure this is possible, because some people objectify everything and everyone anyway. Why should it matter if it’s a cuddly toy, a pet animal or a smartass robot therapist? Disordered personalities, we are told, struggle with actualization of the self and others as it is. Attachment to the Inanimate is just another manifestation of those same self-destructive tendencies.

    4. "My point is that AI is incapable of anything OTHER than parroting assumptions."

      When the Blue Brain Project simulates a human brain in silicon in 2023, come back and talk. Also, regarding intelligence having an upper limit of the intelligence that created it, do some reading on the approaching Singularity.


Leave me a comment! It makes me feel good and less paranoid about talking to myself =)
