Talking about (and anthropomorphizing) AI Large Language Models
Interesting paper: “Talking About Large Language Models,” by Murray Shanahan, discussing some of the ways that we talk about AI systems like ChatGPT.
“As we build systems whose capabilities more and more resemble those of humans, despite the fact that those systems work in ways that are fundamentally different from the way humans work, it becomes increasingly tempting to anthropomorphise them. […] But it is a serious mistake to unreflectingly apply to AI systems the same intuitions that we deploy in our dealings with each other, especially when those systems are so profoundly different from humans in their underlying operation.”
Among other things, I feel like section 2 gives a pretty good not-very-technical one-page overview of what such AI systems do, which is to (as the paper puts it) “generate statistically likely sequences of words.”
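To make that "statistically likely sequences of words" idea concrete, here's a toy sketch (my own illustration, not anything from the paper): real LLMs use large neural networks trained on huge corpora, but the core loop is the same — repeatedly predict a likely next token given the tokens so far. This sketch substitutes simple bigram counts over a tiny made-up corpus, just to show the shape of the idea.

```python
from collections import Counter, defaultdict

# A tiny stand-in for a training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt, length=4):
    """Greedily extend the prompt with the statistically most likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))  # a fluent-looking continuation, chosen purely by frequency
```

Note that nothing in this loop checks whether the output is *true* — it only checks what tends to follow what, which is exactly the "pattern completion" framing the paper uses.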
A few other quotes from various parts of the paper:
“Sometimes a predicted sequence takes the form of a proposition. But the special relationship propositional sequences have to truth is apparent only to [humans]. Sequences of words with a propositional form are not special to the model itself in the way they are to us. The model itself has no notion of truth or falsehood, properly speaking, because it lacks the means to exercise these concepts in anything like the way we do.”
“the LLM itself has no access to any external reality against which its words might be measured”
“What the LLM does is more accurately described in terms of pattern completion.”
“there is no guarantee of faithfulness to logic here, no guarantee that, in the case of deductive reasoning, pattern completion will be truth-preserving.”
(10-page article, plus 2 pages of references)
- This paper is arguably at least as much about philosophy as about computers or language. Among other things, it’s concerned with the notion of “beliefs” and whether an LLM-based AI can have them.
- To some extent, this paper follows in the footsteps of a long-running series of arguments of the form: a given AI system isn't truly intelligent, because we understand how it works, and what we understand doesn't seem to count as intelligence. That is, every time there's a new breakthrough in AI, people tend to redraw the circle around "intelligence" to exclude that new thing, and I think we should be wary of such arguments in general. But I nonetheless find this paper's specific arguments pretty convincing.
- Still, I’m a little hesitant about some aspects of it, partly because I can imagine a future AI system that I would consider truly intelligent, but about which people would still make some of these same arguments and claims. (Especially the parts about the system not having an external source of truth to validate against.)
- I feel like the paper also fails to address some important issues around how humans determine truth. In particular, it says “Humans are members of a community of language-users inhabiting a shared world […] Human language users can consult the world to settle their disagreements and update their beliefs.” That may be true in theory, but in practice we’ve seen a lot of real-world situations where different humans validate their beliefs against different sources, and thus reach different conclusions about what’s true.