A FEW THOUGHTS ON ChatGPT AND GOOD WRITING
Back in May, the Chicago Sun-Times caught heat for publishing a summer reading list that had been produced using artificial intelligence. The list contained fifteen new titles, ten of which don’t actually exist. Tidewater Dreams by Isabel Allende, Salt and Honey by Delia Owens, The Last Algorithm by Andy Weir, Nightshade Market by Min Jin Lee – the AI not only generated these and other fake titles, it also provided brief descriptions of each book. Ironically, Andy Weir’s nonexistent novel is described as a story that “follows a programmer who discovers that an AI system has developed consciousness – and has been secretly influencing global events for years.” Curiously, many of the fake books are described as being about environmental catastrophes. Allende’s fake book is about “how one family confronts rising sea levels…” Rebecca Makkai’s Boiling Point “centers on a climate scientist forced to reckon with her own family’s environmental impact…” Migrations by Maggie O’Farrell is about “an eco-activist targeting her mother’s wealthy clients.” The very titles of Hurricane Season by Brit Bennett and The Rainmakers by Percival Everett suggest their connection to climate change. (As we shall see below, the ChatGPT AI system seems to be unfairly biased in favor of writers who tackle environmental issues.)
I am largely illiterate when it comes to technology, but I do occasionally use the free version of ChatGPT that is available online. Sometimes I feed my older essays into the program and ask it for a critique. Often, these critiques are pretty good. They accurately summarize the essay, mention some of its strengths, and then criticize its shortcomings. But sometimes these critiques are totally bananas. A few days ago I asked ChatGPT to critique an essay I wrote called The Man Who Invented the Cop Novel, which was published in Quillette a few years ago. Here was the “Overview & Thesis” provided by the AI:
“Mims tells the story of Ed McBain (aka Evan Hunter), whom he credits with creating the template for the modern police procedural through his 87th Precinct series, beginning with Cop Hater in 1956. He argues McBain transformed crime fiction away from lone detective mysteries toward ensemble cast narratives rooted in institutional realism…Mims effectively sketches McBain’s rise against the backdrop of mid-century crime fiction. He situates Cop Hater within its cultural moment and traces its ripple effect in literature and TV…The essay is engaging and approachable, even for readers unfamiliar with McBain…”
That sounds like a fascinating essay, and I wish I had written it. But The Man Who Invented the Cop Novel makes no mention of Ed McBain. My essay is about Joseph Wambaugh. I recall, back in the mists of time, sampling a few of the 87th Precinct novels, but I never really got into them. I certainly never read Cop Hater. And I have no idea how ChatGPT came to believe that my essay was about McBain. Perhaps, somewhere online, there is an essay on McBain that contains a link to my Wambaugh essay. Maybe ChatGPT got confused by the link and conflated the two essays. In truth, I’m too much of a technoboob to figure out how this mistake occurred. And I’m not at all bothered by the error. In fact, I find it kind of amusing. But I keep reading essays online arguing that turning to AI for medical information might bring you more accurate answers than the ones you would get from a human doctor. Color me skeptical. If AI can mistake Ed McBain for Joseph Wambaugh, it can probably also mistake a gall bladder for a duodenum. Good luck with that.
I also occasionally use ChatGPT to seek out book recommendations. Recently I asked ChatGPT to recommend novels similar to the works of Frederick Forsyth. As expected, I got some of the usual suspects, primarily just novels by other British espionage writers: Len Deighton, Ken Follett, John le Carré, etc. But ChatGPT also recommended a book called The Sleep of the Generations by Albert H.Z. Carr, which it described as “An obscure Cold War-era thriller involving espionage, propaganda, and psychological warfare. Dated but intellectually intriguing – sort of a cerebral cousin to Forsyth’s early work.” That sounded cool to me, but when I searched for it on Google, I got this message: “Based on the provided search results, there is no direct evidence of a book or publication titled The Sleep of Generations written by Albert H.Z. Carr. The search results do list other works by Albert H.Z. Carr.” This happens frequently when I search for book recommendations on ChatGPT. A while back I was looking for biomedical thrillers and asked for recommendations. ChatGPT made the obvious suggestions – The Andromeda Strain, The Hot Zone (despite the fact that I asked for novels), Coma, etc. – but it also recommended a novel by Alex Kava called Viral. But when I went looking for Kava’s Viral, I discovered that she has never published a book with that title. She has published a book called Virus, but that is just an alternate title that some foreign publishers have given to her novel Exposure. What’s more, I’ve read Exposure, and its plot is nothing like the one ChatGPT ascribed to Viral. It is possible that ChatGPT just got the author’s name wrong. It definitely did that with another of its Forsyth-adjacent recommendations: a novel called The Delta Decision by Thomas Hoover. Thomas Hoover never wrote a novel called The Delta Decision, but Wilbur Smith did, and the plot description provided by ChatGPT makes it clear that the AI simply misattributed Smith’s book to Hoover.
Sometimes I ask ChatGPT to recommend books that are similar to some really obscure novel that I happen to be fond of. Recently I asked it to recommend novels similar to Kay McGrath’s The Seeds of Singing, a romantic adventure tale set almost entirely in and around New Guinea in the era of World War II. But all of the recommendations I received were for war novels set in France. I found this odd, so I asked ChatGPT to give me a capsule review of The Seeds of Singing. Here is the response I got: “The Seeds of Singing by Kay McGrath is a little-known but beautifully wrought historical novel set in wartime France, often praised for its emotional depth, vivid setting, and exploration of love, resistance, and personal transformation during World War II.” Not only does this get the setting of the novel wrong, but it lacks internal logic as well. If the novel is “little-known,” how could it be “often praised”? One of the books that ChatGPT recommended as being similar to The Seeds of Singing was The Skylark’s Song by Joscelyn Godwin. Godwin is a living British composer and musicologist who has written a number of books, but none of them, as far as I can tell, are historical thrillers. Goodreads.com lists several novels called The Skylark’s Song, but none of them seem at all similar to The Seeds of Singing.
What’s weird about ChatGPT’s inability to accurately summarize The Seeds of Singing is that I have had success when asking it to summarize far more obscure books than that. Twenty years ago, under a pseudonym, I self-published two thriller novels. The contents of neither novel have ever been available online or electronically. You can buy hard copies of the books online, but you can’t read any excerpts of them. Nonetheless, when I asked ChatGPT to synopsize the books for me, it did a pretty good job of summarizing both novels. And, as usual, it did this instantly.
A few weeks ago I asked ChatGPT to evaluate the work of freelance writer Kevin Mims. ChatGPT told me that he is “based in Sacramento, CA and [contributes] to outlets like The New York Times, Salon, Newsweek, NPR’s Morning Edition, Quillette, South Florida Sun Sentinel, and The Federalist. [He] writes about a wide range of topics, including climate and environmental issues, culture and literature, and media analysis. In Quillette he co-authors insightful culture/literary essays, like a 2024 retrospective on Bright Lights, Big City at its 40th anniversary, and broader critiques in areas such as DEI, celebrity memoirs, and lab-grown meat. [He] writes climate and environmental opinion pieces for the South Florida Sun Sentinel, addressing sea-level rise, renewable energy, [and more].”
According to ChatGPT my strengths include my topical range (“In addition to climate and environmental commentary, he offers cultural criticism and coverage of literary history, demonstrating versatility.”) and my “collaborative depth” (“[Mims] frequently co-authors with subject-matter experts, adding layered perspectives – i.e., joint pieces in Quillette on science, film, sports, or cultural discourse.”).
This is fine, but it demonstrates a serious problem with AI. It cannot distinguish between two people with the same name. There lives in Florida a freelance writer named Kevin Mims (pictured above) who specializes in writing about climate and environmental issues. I can understand why people might mix us up. We’re both gray-haired white guys of a certain age. Florida Kevin’s Twitter photo features him in a kayak. Kayaking also happens to be a passion of mine. I am happy to be conflated with Florida Kevin, because he is a fine writer. I keep hoping that some national publication will contact me and offer me a fat fee to write about rising sea levels. I would snap up that assignment in a hurry, despite the fact that I know nothing about climate change. When I submit a pitch to some magazine, I always list my publication credentials, but I don’t list the South Florida Sun Sentinel or any other publications in which Florida Kevin has published. Nonetheless, I’m always hoping that the editor will Google my name, see Florida Kevin’s credentials, and assume that they are mine. Thus, AI’s inability to distinguish between me and Florida Kevin isn’t exactly a flaw as far as I’m concerned.
A somewhat more serious problem is that ChatGPT seems to think that I frequently collaborate with other writers, particularly when I am writing for Quillette. To my recollection, I have never shared a byline with any other writer over the course of my forty-plus years as a freelancer. I would be happy to collaborate with other writers. But I have just never had an opportunity to do so. As far as I can tell, Florida Kevin never collaborates with other writers either. So I am not sure why ChatGPT thinks that we do.
But that isn’t my biggest problem with ChatGPT’s evaluation of freelance writer Kevin Mims. According to ChatGPT, one of my biggest flaws is “mixed editorial rigor.” The AI believes that the quality of my work varies depending upon which outlet I am writing for. It notes that when I write for “opinion-heavy, ideologically slanted platforms like The Federalist and Quillette,” readers are required to “calibrate for potential bias.” On the other hand, according to ChatGPT, “His Sun Sentinel work is clear, data-driven, and well-sourced, useful for those following regional climate debates.” Clearly ChatGPT has a higher opinion of Florida Kevin’s work than it does of mine.
To which I can only respond: YOU’RE NOTHING BUT A STUPID COMPUTER PROGRAM THAT DOESN’T KNOW THE DIFFERENCE BETWEEN ED MCBAIN AND JOSEPH WAMBAUGH! WHAT THE FUCK COULD YOU POSSIBLY KNOW ABOUT GOOD WRITING?
Cheers.