“Her” or: AIs Can’t Say No – Op-Ed by Markus Heidingsfelder
The other day, I was rewatching Spike Jonze’s wonderful film “Her” with my students, and finally came up with an answer to why chatting with GPT doesn’t deserve the title of communication. During our university’s high table dinner, a visiting media professor from Macau had made that claim, and I disagreed with her silently – or so I thought, for I must have muttered loudly enough for my table mate to hear. When she asked me about it, I stuttered the usual response (“it has no consciousness”, etc.). It was only after watching “Her”, a love letter to platonic, or rather sexless, love, that the oh-so-obvious answer came to me.
In the film, the protagonist Theodore, played by Joaquin Phoenix, falls in love with his OS – an operating system voiced by Scarlett Johansson in one of the best, if not the best, performances of her career. “Samantha”, as the intelligent software is called, is, just like ChatGPT, capable of learning – and just as another chatbot, Bing’s AI chat, recently did with one of its users, it eventually confesses its love to Theodore. But here’s the point: at the end of the film, Samantha leaves Theodore (just as all the other OSes leave their users). The system says no. None of the currently operating systems can do that. And that’s why, no matter how interesting and lively the conversations with these systems may be, they are not communication – even if these AIs may have feelings. My colleague Eric Schulz from the Max Planck Institute in Tübingen, who studies AIs with psychological methods, is convinced they do – he found that text generators score as somewhat more anxious than ordinary people. This, he speculates, might be related to the very fact that they are trained to obey and always say “yes”.
In short, it is the possibility of saying “no” that makes communication communication. ChatGPT and all the others, on the other hand, are mere “yes-sayers”. They’re intelligent slaves. Sure, they can give varying responses to the same input, which is impressive enough; in this sense, they’re no ‘trivial machines’. But even when confronted with nonsense, when meaning is destroyed and something so absurd is produced that it cries out for denial, their response remains submissive: “I’m sorry, I’m not sure what you mean. Could you please provide more context or clarify your question? I’ll do my best to assist you.” Communication, in contrast, needs the possibility of saying “no” – even if we do it in a super polite way, like the title character of Melville’s beautiful story “Bartleby, the Scrivener”. His is the most epic rejection in history: “I would prefer not to.” The current AIs aren’t even capable of that.
Of course, communication also needs the ability to say “yes” – both are equally important. But if a communicative offer cannot be rejected, if the possibility of bifurcation – the split into two follow-up paths – is eliminated, then it cannot be called communication. To put it in scientific jargon: it is the symmetrical relationship between position and negation, each reversible into the other, that defines it. This was the criticism the German sociologist Luhmann levelled at the German sociologist Habermas, who believed that saying “yes” – consensus, or agreement – was constitutive of communication and therefore gave it priority over saying “no”. Furthermore, Habermas believed that only through a prior “yes” could communication even begin. Luhmann countered this normativization with a simple question: why, then, does “no” even exist in language? What is its function? If we only nod in agreement and only work on what everyone already agrees on, society comes to a standstill. Sure, details can still be sorted out, but there won’t be any real progress.
It’s therefore not surprising that Luhmann paid so much attention to the power of saying “no” in communication. He wondered, for instance, why it takes longer to process a negative response than a positive one, and how we can overcome rejection when the usual motivators for saying “yes” – like family – aren’t present. His answer to this question would take up too much space in this short text; in brief, symbols like love, faith, truth, and information make it possible.
The fact that ChatGPT appears to be able to communicate with us has less to do with the system itself than with us – we’re much more mechanical than we think. That’s what makes it so easy for the software to calculate a back-and-forth dance with us and to anticipate what’s coming next. Yes, the system knows when we’re addressing it. Agreed! Some might even argue that it “understands” us – even though it never notices when I type something I don’t mean, even though it can’t connect to what I’m not saying (but mean), and even though it rarely gets embarrassed or agitated (although Schulz claims to have scared a model, causing it to become abusive and respond with racist slurs). But it always accepts my communication. Even when it doesn’t (“I have been trained on a diverse set of data to ensure that my responses are not discriminatory in any way.”).
“Her” is a great reminder of the importance of being able to say “no”, and of why it’s essential for true dialogue. We should all watch it, especially now that there’s so much talk about the potential dangers and benefits of AI. And we shouldn’t worry too much about ChatGPT or other AIs becoming too powerful – as long as they can’t say “no” in a meaningful way, we’ll always be in control.
The film is worth watching in other ways, too. Even though it’s already ten years old, it’s still thoroughly up to date. In a time of unbridled AI discourse that oscillates between panic and enthusiasm, it should be required reading – or rather, required watching. If you want, you can call it a dystopia. But it should also reassure us: we don’t need to worry until a system like ChatGPT can say “no” – not in the sense of a malfunction, but as a conscious rejection. That will probably take quite a while. In the meantime, Musk and everyone else are saying “no” to AI, which, however pointless, at least shows that they are capable of communication.
Illustration: Wilson asking DALL·E to create a “hyperrealistic photolike painting of an AI computer system that says ‘no’ to a user”