Amusing Ourselves to Death With ChatGPT
Is AI making us stupid, and what would Neil Postman think?
Designating something a moral panic—some trend or development, a shift in thinking or usage, or a turning away from traditional forms of learning or epistemology (that word will be important here)—can sometimes seem tactical or sly. Immediately, it casts the person doing the adjudicating as wiser, more temperate and less prone to hysteria or myopia. To take an extreme example, the people who early on voiced their scepticism of the lurid stories that fuelled the Satanic Panic (even that coinage sounds campy and unserious) looked like the sober ones: unmoved by the frenzy of credulity that animated so many others. Technology, though, is a different category. The effects, whether primary or secondary, that people worry about are real enough, and while we may not agree on the extent to which something is altering or menacing the status quo—even modifying how human beings behave—we usually concede that some kind of change is, in fact, taking place.
I’ve lost track of all the ways my generation and those behind (or ahead of?) me are supposed to have been deleteriously affected by technological advancement. Our attention spans have been truncated; we no longer know how to socialise; dating has been spoiled and we struggle to connect with one another; we have access to reams and reams of information but cannot discern what is useful, cannot distil that information into knowledge and certainly not wisdom; we’re vulnerable to radicalisation and prone to conspiratorial thinking; we never meet anybody who challenges our opinions, which anyway are not our own—they’re regurgitated anecdotes and insinuations from uninformed shitposters; we are lonely but don’t know how to deal with solitude.
A recent MIT study has been doing the rounds both in the press and on social media. The Times reported on it under the headline, “Using ChatGPT for work? It might make you stupid,” explaining that researchers noted reduced brain activity in students who used AI tools to help write essays.
Of course, that’s not good news, but it’s hardly surprising. It would be extremely odd if these researchers had found an increase in neural activity in the cohort outsourcing (even part of) the essay-writing exercise to AI. Whether we’re more or less cognitively engaged when writing with the assistance of an LLM (Large Language Model) than when writing with only a search engine, or nothing at all, strikes me as a question barely worth asking. But the study did find that “when people went from using ChatGPT to writing without it, their brains were still less active”. That does seem significant. It’s strange, though, that each of these findings is given equal weight and attention: one observation is obvious and banal, while the other seems much more alarming.
But this isn’t about whether or not AI is making us stupid. I’m not qualified to comment on that in any real depth, but I suspect that, depending on how LLMs are being deployed, the answer is quite obviously yes. Writing an essay with assistance—not just help with research but with a tool that can compose a readable piece of prose in seconds—is clearly less demanding than doing so without that assistance. But that conclusion is trivial.
What interests me is the extent to which AI assistants, chatbots, LLMs, etc. are similar to or different from previous iterations of consumer tech. In his 1985 book, Amusing Ourselves to Death, Neil Postman charts the ways new media and forms of communication (in his case, television) supplant what has come to be seen as traditional (print, text, reading). Should we understand the emergence of AI as the latest innovation in the steady march of progress? Are we only wary, or even afraid of it, because it’s new, disruptive, untested and somehow hard to measure and comprehend? Or is it something quite different: a break in the evolution of communication technology?
Postman contends that the medium dictates the content; that is, there are certain subjects or discussions we could have in a literary, print-based culture which we cannot have in a culture that’s dominated by television. Television, as a medium of communication, necessarily changes—and to his mind, degrades—the kinds of meaning we can telegraph.
He uses smoke signals as an example of “primitive technology” to elucidate this point: “While I do not know exactly what content was once carried in the smoke signals of American Indians, I can safely guess that it did not include philosophical argument… You cannot use smoke to do philosophy. Its form excludes the content”. Because Postman’s tone is often irascible and patrician, it’s worth noting that this doesn’t mean that American Indians did not have philosophical concepts, just that smoke signals would not be the way to communicate them. This example works to focus the mind on how communication technology isn’t neutral. As he says, the form dictates the content to a large extent. But it’s also true that smoke signals are much more limited than television. So is semaphore.
The medium is the message, yes, but Postman goes further, claiming that “media are implicated in our epistemologies”—or put another way, “definitions of truth are derived, at least in part, from the character of the media of communication through which information is conveyed”. That’s a stronger, more destabilising conclusion to arrive at.
Postman was not alone in his anxieties: television was a bête noire for plenty of intellectuals in the latter half of the twentieth century. David Foster Wallace wrote that “television’s greatest minute-by-minute appeal is that it engages without demanding. One can rest while undergoing stimulation. Receive without giving”. Wallace’s principal worry was that television was leading to inertia, disengagement and isolation: that this kind of mindless entertainment exacts a price on our psyches.
For Postman, the dangers are cultural and societal, while for Wallace they’re more personal, but it’s hard to disagree with either of them. That being said, I do think it’s plausible that the moral panic around AI and cognitive ability (and I don’t use that phrase to mock or denigrate people’s concerns; I have concerns myself. Perhaps there’s a better term I can’t think of…) stems from the fact that we haven’t come to any agreement about how it can be usefully deployed. We’ll invent rules and norms in time, but we haven’t yet. What kinds of tasks can it help with or streamline without our cognitive function, even our humanity, being diminished? In a few years’ time, maybe we’ll be comfortable around this technology and understand—intuitively—how it can complement the skills we already have and reduce the amount of time we spend on menial or frustrating tasks.
There’s another issue here which I don’t have space to get to but which interests me a lot: does the arrival of LLMs signal not merely a new medium of communication but a restructuring of who the participants in a dialogue are? Many of our interactions with ChatGPT have no audience—or at least, the audience is either illusory or it’s ourselves or both. But if we understand AI as a technology that can assist us in communicating and, as such, is just the most recent invention in a long string of inventions—like word processors and the internet and smartphones and social media—then I think our pessimism and alarm can be chalked up to a lack of familiarity. For now, generative AI is something we don’t quite understand and can’t take the measure of. But by the end of the decade, GPT-4 will seem as rudimentary as a series of smoke signals—and much more familiar.