Sorry, but ChatGPT doesn’t love your book

A month ago, File 770 carried an item in which an author said that he had asked ChatGPT for a blurb for his book. He seemed to be delighted that ChatGPT had not only read his book, but loved it; ChatGPT called it a “captivating narrative.” The author continued:

Could [ChatGPT] have reached into my computer and read my novel and then compared it to the thousands of other novels and their reviews that have been uploaded to its massive database? And now makes a value judgement like that?

It could do so in a nanosecond.

The author claimed to “know a bit about AI,” but unfortunately appeared to misunderstand what ChatGPT does.

A few commenters provided corrections/clarifications, but anyone who didn’t read the comments may have been left with the impression that ChatGPT “know[s] more than humans do” and that it read this book and thought it was great.

I’m posting all this not to pick on the specific author, nor on File 770’s editor/owner, but rather because it’s just one instance of what I suspect is a very widespread misunderstanding.

So, just in case anyone who sees this post of mine is uncertain:

ChatGPT is more like a game of Mad Libs than like a person. It’s not sentient. It doesn’t “read” a book, and it doesn’t “make a value judgement” about a book. The way it works (very roughly) is that, given some input words, it builds its output one word at a time, choosing each word according to how likely it is to come next in the sequence. (It learns what’s “likely” from having been trained on an enormous number of examples of text written by humans.)
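To make that concrete, here’s a deliberately toy sketch in Python of the “likely next word” idea. It’s nothing like a real LLM (a real one uses a neural network trained on vastly more text, works on word fragments rather than whole words, and doesn’t strictly pick the single most likely continuation), and the three “training blurbs” are invented for illustration. But it shows the basic shape of the process: count which words follow which, then generate by repeatedly appending the most common next word.

```python
# A toy "likely next word" generator. Real LLMs use neural networks trained
# on enormous corpora (and don't strictly pick the single top word); this
# just counts word pairs in a few invented example blurbs.
from collections import Counter, defaultdict

training_blurbs = [
    "a captivating narrative that resonates with readers",
    "a captivating narrative full of engaging characters",
    "a gripping story with engaging characters",
]

# For each word, count which words follow it in the training text.
next_word_counts = defaultdict(Counter)
for blurb in training_blurbs:
    words = blurb.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def generate(start, length=6):
    """Repeatedly append the most common next word seen in training."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts[words[-1]]
        if not counts:  # dead end: no word ever followed this one
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("a"))
# -> "a captivating narrative that resonates with readers"
```

Notice that “captivating narrative” comes out not because the program admired anything, but because that pairing happened to be the most frequent one in its tiny training text.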

(What I’m saying about ChatGPT here also applies to other software that’s in the AI subcategory called “LLM,” or Large Language Model. There are also other kinds of AI software that work differently, but none of them so far are capable of making meaningful value judgments about books.)

ChatGPT’s training included a vast number of book blurbs (I would guess millions of them), so when it’s asked to write a book blurb, it creates a sequence of words that looks a lot like all of the other blurbs out there.

So ChatGPT didn’t evaluate this book and find it “captivating.” Instead, it created a sequence of words similar to the sequences used in other blurbs, and since “captivating narrative” is a very common phrase in blurbs, that’s what came out.
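To see that frequency effect in miniature, here’s one more toy sketch (again with invented sample blurbs, not real data): count two-word phrases across a pile of blurbs, and “captivating narrative” can easily come out on top, so a generator that favors frequent sequences will naturally reuse the winner.

```python
# Count two-word phrases ("bigrams") in a handful of invented blurbs.
# A generator that favors frequent sequences will reuse the most common one.
from collections import Counter

blurbs = [
    "a captivating narrative of love and loss",
    "this captivating narrative will stay with you",
    "a sweeping narrative told with sharp wit",
    "a captivating narrative from a bold new voice",
]

bigrams = Counter()
for blurb in blurbs:
    words = blurb.split()
    bigrams.update(zip(words, words[1:]))

print(bigrams.most_common(2))
# -> [(('captivating', 'narrative'), 3), (('a', 'captivating'), 2)]
```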


I just asked ChatGPT to write me a blurb for the Gor series. Here’s part of its response:

Explore the captivating world of Gor in John Norman's iconic series. Immerse yourself in a vivid and intricate universe where honor, adventure, and exotic landscapes come alive. With engaging characters and thought-provoking themes, the Gor series weaves a compelling tapestry of fantasy that resonates with fans worldwide.

Again, that doesn’t mean that ChatGPT read and loved the Gor books. It means that ChatGPT was trained on a large volume of human-written text, and humans have written blurbs, and humans have written about Gor, so ChatGPT generates a sequence of words based on the human writing it was trained on.

I also asked ChatGPT to write a negative review of the book that the File 770 author wrote. I won’t quote the result here, because again my goal is not to pick on that particular author; but it was full of the kinds of phrases that appear in negative reviews.

So, once again: ChatGPT isn’t sentient, and it doesn’t evaluate the literary quality of books.
