
The latest issue of the BACP’s Therapy Today magazine focusses on the rise of Artificial Intelligence (AI). There has obviously been a huge rise in the prevalence of AI in the last couple of years. I mostly hear people worried about the impact of this. Many industries, including therapy, are concerned about AI in some way replacing them. I see a lot of discussion online too about AI potentially reducing our ability to think critically. This reminds me of the move from paper maps to sat nav, and the worries that drivers simply follow the sat nav’s instructions rather than pay attention to where they’re going. Or the dark side of cumulative online reviews, where it becomes impossible for some people to choose a restaurant (for example) until they can find the ‘best’ overall rating. There can be a nostalgia, for those who are old enough to remember, for how real-life navigation (driving to a destination or walking through town) would involve much more human interaction. There’s a sense, at least from older generations, that the internet has made us less connected rather than more. With AI, there’s also a rising awareness of its environmental impact, both in terms of the use of rare minerals and of water, and of the ethics of where it gathers its information.
As well as being sympathetic to these concerns, I also have a defensive response to the rise of AI. My immediate thought is that it can’t ever really replace therapists – or similarly human endeavours like making art. Only last year, however, the images, music and video generated by AI were uncanny and obviously fake (e.g. hands with too many fingers); now it is getting harder to tell them apart from the real thing. Similarly, chatbots only a couple of years ago were obviously limited, whereas it’s now being reported that many people find solace and life-changing impact in ‘conversations’ with AI about their problems. As a sci-fi fan, and someone who does remember life before we all had the internet in our pockets, I can’t help but think about the potential dangers of the rise of the machines. But this is perhaps my bias, and while the dark side certainly exists, therapists may need to start engaging with the AI issue rather than ignoring it or waving it away as simply a bad development.
It’s often cited that the main factor in successful therapy is the relationship between client and therapist, and it’s usually taken for granted that this is a human-human relationship. Other-than-human beings certainly play a part in therapy too, though perhaps one that is often overlooked. This can be a literal therapy animal in the room, the relationships the client and therapist have with other animals and other-than-human beings (past and present), metaphor, or the threat of climate disaster. Again, my initial thought is that such a relationship could never be recreated with a virtual/artificial being, although sci-fi gives plenty of examples where it can. That being said, in fiction ‘AI’ usually refers to independent intelligence, and even emotion, to a degree which is still very much science fiction. ChatGPT does not know the feel of wind in its hair, or the relief of a thunderstorm after weeks of hot weather. But we love to see faces in clouds. Perhaps if ‘AI’ co-opts enough human experience it can pretend well enough that it does know how these things feel, and even generate a poem about it. Humans are arguably biased to seek meaning in the meaningless. We’re also not used to seeing language used by beings without the capability for thought. If the environmental impact can be addressed, which is crucial, is the technology itself harmful if it can help people feel more understood, or understand themselves better?
It seems ironic that people are seeking connection with AI when it feels as though the internet may be one of the causes of increased loneliness and disconnection in the first place. Although it must be said, there is not necessarily enough research to show that feelings of loneliness have increased over the last few decades. You can see trends of divorce rates going up and membership of clubs going down, so isolation has possibly risen. Whether that means people feel lonelier is another matter, which we may not be able to determine. Loneliness is not the same as isolation, which is why overall trends in loneliness may not have increased as dramatically during COVID-19 as you might think.
The rise of AI reminds me of the schizoid experience that R.D. Laing writes about so well in The Divided Self. For some people, whom Laing describes as schizoid, feeling ‘robotic’ or seeing others as unfeeling robots may act as a useful defence against the perceived/actual dangers of connecting with others. Some autistic people, and otherwise neurodivergent people, have shared experiences of feeling like, or coming across to others as, robotic – although this is not every autistic person’s experience. The author Chloe shared how AI has helped her understand her own mind through the metaphor of Large Language Models (LLMs) like ChatGPT. LLMs gather large amounts of language data in order to come up with a reasonable response to an input. That’s one way of thinking about how humans learn too. Transactional Analysis (TA), a psychoanalytic theory and method of therapy developed in the 1950s, refers to communication exchanges as ‘transactions’. The idea is that we have certain ‘ego states’ (‘parent’, ‘adult’ or ‘child’) and that being aware of these can help improve communication. The more we’re aware (the more data we gather), the more effective our response can be, just like an LLM such as ChatGPT.
Models are only models, of course; as I wrote previously, the map is not the territory. We are not robots who act only from specific and defined ego states, even if that is a helpful model for understanding human communication. If I think of myself as robotic, I can become split from my ‘inner’ self, and from my body. I would be privileging mind over body and spirit. If I disconnect from body and spirit (or ‘instinct’ if you prefer), these things become projected outwards onto Others (human or otherwise). Normal parts of me like anger, snot, fear and sweat become truly separated from my experience, so as to seem entirely Foreign and ‘not me’. Truly ‘knowing’ that I can ‘only’ be one way in order to be accepted (e.g. nice) can lead to a separation of the ‘not nice’ parts of me, which do not go away but may start knocking louder at the basement door. These can come out in unexpected ways which I may not associate with ‘me’.
The Therapy Today article talks about the possibility of therapists embracing AI to some extent in their work with clients. It can also be useful for therapists as a marketing tool and as a way of automating emails, and I often get spam emails about the uses of AI for note-taking – although I’m wary about issues of confidentiality. And perhaps cynically, I wonder, in the transaction with AI, what they (the large companies that run these models) are getting in exchange for our participation. I agree with the article that therapists should be open about their use of AI, and questioning of its accuracy, as it often hallucinates, is overly sycophantic, or just gets things wrong. As the article notes, though, 51% of participants with depression felt better after speaking with a chatbot. I suspect that the role of the therapist will always continue in some way, even if we have to adapt to a rapidly shifting therapeutic environment.
The hallucination of AI reminds me of the inaccuracy of human perception. In many ways we also ‘hallucinate’ reality to create smooth, uninterrupted vision: through not seeing our blind spot (where the optic nerve meets the eye), the filling-in of our peripheral vision, change blindness and so on. We only experience reality through our limited senses. Then of course there are the stories we live by, which inform the reality we perceive. Drawing on TA again, we have certain ‘scripts’ that we learn when we’re very young – that the world is a certain way, and therefore that we should play our designated part. For example, “others can’t be trusted, so I have to be self-sufficient”. This is one person’s truth, and may be very different from someone else’s. The theory is that we develop these scripts early in our lives and carry them through into adulthood. Most people don’t question their way of being unless the script ‘breaks down’ in some fundamental way. In the example above, if that person becomes seriously ill they may need to rely on others for help, which offers a fundamental challenge to their way of being/script.
This isn’t to say that, just because we hallucinate too, AI is more understandable or acceptable. But it is an example of how the metaphor of AI may shine a light on understanding ourselves better. My dislike of AI may be partly about the environmental damage it seems to cause. But it’s my responsibility to make sure I’m not also projecting my own damage to the environment onto AI, and to be aware if I’m projecting all my ‘robotic’ elements onto actual robots. It may also be helpful to understand why I might dislike AI from a script perspective; e.g. if I believe “I shouldn’t take up too much space”, and I see corporations taking up (and destroying) huge amounts of space, it makes sense why I would find that objectionable. It doesn’t mean it’s not an issue, but it is useful to see why it’s an issue for me personally, as it helps me decide how I want to act.