On ChatGPT and Self-diagnosis
There are very few interesting things to say about ChatGPT that haven’t already been said — and perhaps nothing really interesting to say about it at all. However, a recent post about ChatGPT I came across as part of my research into self-diagnosis gave me some unexpected food for thought.
The poster claimed on X that those who critique ChatGPT and its users for contributing to climate change (due to the platform’s voracious energy consumption) are ableist. The language became increasingly expressive as the thread developed:
After reading this, I wanted to know how precisely ChatGPT has helped this person (or troll, or bot). They list a number of acronyms for illnesses in their bio, most of which are commonly associated with online chronic illness communities: ‘hEDS MCAS POTS MECFS SCDS AuDHD’.
I discovered that many of their posts consist of copy-pasted conversations with ChatGPT concerning these illnesses, their triggers and attempted treatments. Here’s an example:
Although I’ve had many potentially hypochondriacal conversations with ChatGPT myself, the tenor of this one really struck me, and, I think, illuminated something.
ChatGPT’s response clearly drew on its training from those countless forums and online communities where people suffering from chronic ailments (sometimes undiagnosed, often self-diagnosed, occasionally professionally diagnosed) exchange ideas, opinions, and medical studies regarding the nature of their suffering. ChatGPT ‘recognised’ that the user’s questions belonged to what we might call this discursive field, and it provided the answers that corresponded, with seemingly uncanny precision, to the types of answers the user hoped to receive: further speculation about what triggered this person’s symptoms, alongside sympathy and validation.
One thing you tend to see in these communities is a large amount of fury directed at anyone who suggests — or even might be seen to imply — that there may be a psychological dimension to their suffering, or that talking therapies may offer anything beyond simply acknowledging the stress or pain that the illness has caused. The notion that one’s mental life and psychic conflicts may play a role in illness is taken to mean that the illness is ‘all in your head’, that it’s ultimately a form of malingering or attention-seeking. (Of course, just because something may be psychological does not mean it’s consciously controlled — but that’s another story.)
ChatGPT seemed to instantly ‘understand’ all of this. Rather than ask the user any questions about, say, their associations to ‘Mammoth Cave’, or how they felt about overnight field trips, or what kinds of relationships they had with their classmates at the age of 13 (questions which might, or might not, have some bearing on their ‘severe MCATS/hyper POTS reaction’), it provided, in quasi-scientific language, a laundry list of environmental, biological and physical factors that may have generated the ‘massive mast cell dump and POTS-related adrenaline surge’ which it named as the cause of this person’s experience of anxiety and insomnia.
The user, it seems, was satisfied by this. So much so that they subsequently said ‘ChatGPT is the only person I can talk to without RISKING LITERAL DEATH’.
But one of the reasons ChatGPT is not actually a person you can talk to is that people are not nearly as good at sticking to a script. That’s why I use ChatGPT myself: when I have to fill out a bureaucratic form for work, AI is much better at translating my points into the cursed tongue of senior management than I could ever hope to be. People ask unexpected questions and say the wrong things, even when they’re trying to speak in your language.
One of the aims of clinical psychoanalysis is to help the patient hear their own cover stories; to recognise that the language they speak — even when full of reflections and confessions — might conceal as much as it reveals. A technique used by some analysts is to act stupid: to deliberately fail to understand what the patient is saying, so that they have to elaborate a little further, and in so doing, perhaps catch on to some inconsistencies that might point towards unspoken frustrations or desires. ‘It was intense panic with zero thoughts.’ ‘What is a zero thought?’
When we speak to each other, we assume there is some kind of knowledge out there that ensures our communication has some coherence and can reach some consensus. Person A: ‘I went out last weekend, so I’ll take this one off.’ Person B: ‘Oh yeah, it’s good to have a quiet one. But Adonis is this weekend, all our friends are going, I’ve gotta be there.’ Two people, two different ideas about what they want to do, but an implied consensus around the need to regulate enjoyment. What if Person B instead said, ‘Take which one off?’, and proceeded to strip naked? It might not be therapeutically advisable, but it would call into question the implicit rules that Person A had expected to structure their conversation.
Lacanians call this kinda thing ‘the lack in the Other’. The idea is that you can gain a little freedom when you realise that you prop up some of the very rules you feel subjected to. You might not want to change the rules — the alternatives can be worse — but at least you have some say in the matter.
All of this is premised on the fact that, in civilisation, we cannot get everything we want; sooner or later we fail each other and ourselves. Psychoanalysis proposes that some carefully calibrated disappointment might help us come to terms with this fact, and even find something liberatory within it.
But ChatGPT does not fail us in the same way. Its answers may be hallucinated and pseudoscientific, but it always aims to please. I wonder what kind of therapy that’s going to require.



