Google’s conversational AI, the Turing Test, and equity

In early 2020, Google posted about Meena, “a Conversational Agent that Can Chat About…Anything.” The post gets fairly technical, but it also includes two brief sample dialogues in which, as far as I’m concerned, Meena pretty much passes the Turing Test. (In one dialogue, Meena makes a couple of on-topic puns; in the other, it converses about TV shows.)

(Aside: the last time I looked at results from the annual Turing Test contest (the Loebner Prize) was years ago, but at the time those results were really disappointing; they had the same kinds of flaws that Eliza had, with very obvious mistakes of a kind that I doubt any human would make. Edited to add: After posting this, I looked at the online version of Kuki, the chatbot that’s won the Loebner Prize for the past several years, and I see that it’s still doing the Eliza thing where it plugs text from what you wrote into its responses, often in awkwardly ungrammatical ways.)
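
To illustrate what I mean by “the Eliza thing”: here’s a toy sketch in Python of that pattern-and-substitution approach, where the bot captures a chunk of your text and splices it back into a canned reply. The patterns and reflection rules are made up for illustration; they aren’t taken from Eliza or Kuki.

```python
import re

# Minimal Eliza-style responder: match a pattern in the user's input and
# splice the captured text back into a template. The rules below are
# invented for illustration only.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones in the captured text."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Splicing raw user text into a template is what produces the
            # awkward, sometimes ungrammatical replies mentioned above.
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel like my code never works"))
# -> "Why do you feel like your code never works?"
```

The point is how little understanding this requires: the program never models what you said, it just reuses your words, which is why the seams show.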

Last week, Google revealed LaMDA, their latest conversational-dialogue system, and it’s even more impressive than Meena. That new post doesn’t get technical, but it does explain some interesting stuff about the difficulties in making AI conversation feel natural.

In a related 6-minute video, Google CEO Sundar Pichai shows a demonstration in which LaMDA takes on a couple of roles: first it responds as if it were Pluto, then as if it were a paper airplane. (CNET’s title for the video is incorrect; in both dialogues, LaMDA is talking with a human, not talking to itself.) Google’s choice to include spoken words as well as text in the demo is good for accessibility, but it may make the achievement a little unclear; I saw another video in which commentators seemed to think that Google was demoing text-to-speech. So to be clear, the point here is that LaMDA can carry on a coherent conversation with a human.

As Sundar points out at the end of the video, LaMDA is not perfect. He briefly shows a couple of errors that it made in alternate versions of the demo dialogues. But even so, I’m really really impressed.

But this topic also demands some discussion of the ethical issues involved. I don’t know whether Dr. Timnit Gebru saw Meena or LaMDA before Google fired her, but both projects seem likely to be related to the work she was doing on the ethical ramifications of training very large language models. For example, if LaMDA was trained on social media posts (I have no idea whether it was, but other AI projects have been), then it may have picked up an awful lot of the bad stuff that appears all over social media.

To their credit, both Sundar and the LaMDA blog post do mention issues around equity and bias. Google is very aware of those issues.

But Google would be in a better position to deal with those issues, both technically and morally, if it hadn’t fired Dr. Gebru and others.
