The start of term is fast approaching. Parents are starting to worry about packed lunches, uniforms and textbooks. School-leavers heading for university are wondering what welcome week will be like. And some professors, especially in the humanities, are anxiously wondering how to handle students who are already more adept at using large language models (LLMs) than they are.
They have reason to be worried. As Ian Bogost, a professor of film, media and computer science at Washington University in St. Louis, puts it: “If the first year of AI college ended with a feeling of disappointment, the situation has now descended into absurdity. Teachers struggle to carry on teaching while wondering whether they are grading students or computers, while a never-ending arms race of AI cheating and detection is waged behind the scenes.”
As expected, that arms race is already heating up. The Wall Street Journal recently reported that “OpenAI has a way to reliably detect if someone is using ChatGPT to write an essay or research paper, but the company has not made it public, despite widespread concerns that students are using artificial intelligence to cheat.” This refusal has infuriated those sectors of academia that fondly imagine the “cheating” problem must have a technical solution. They clearly have not read the Association for Computing Machinery’s statement on principles for developing systems to detect generative AI content, which reads: “Reliably detecting the output of generative AI systems without embedded watermarks is beyond the current state of the art and is unlikely to change within a foreseeable timeframe.” And while digital watermarks can be useful, they bring problems of their own.
The LLM is a particularly pressing problem for the humanities because the essay is a critical pedagogical tool in teaching students how to research, think, and write. Perhaps more importantly, the essay also plays a central role in grading. Unfortunately, the LLM threatens to make this venerable pedagogy unviable. And there is no technological solution in sight.
The good news is that the problem is not insurmountable if educators in these fields are willing to rethink and adapt their teaching methods to new realities. Alternative pedagogies are available. But adopting them will require two changes of mind, if not a change of heart.
The first is the recognition that the LLM is, as the eminent Berkeley psychologist Alison Gopnik puts it, a “cultural technology”, like writing, printing, libraries and internet search. In other words, the LLM is a tool for augmenting human beings, not replacing them.
Second, and perhaps more important, we need to instil in students the importance of writing as a process. I think it was EM Forster who said that there are two kinds of writers: those who know what they think and write it down, and those who find out what they think by trying to write. The vast majority of humanity belongs to the latter group, which is why the process of writing is so good for the intellect. Writing forces you to learn how to construct a coherent line of argument, select relevant evidence, find useful sources and inspiration and, most importantly, express yourself in clear, readable prose. For many, that is neither easy nor natural. That is why students turn to ChatGPT when asked to write 500 words introducing themselves to their classmates.
Rather than trying to “integrate” AI into the classroom, the American academic Josh Brake, who has written sensibly about engaging with AI, thinks it is worth making the value of writing as an intellectual activity abundantly clear to students: “If students don’t already understand the value of writing as a thinking process, then they’ll naturally be interested in outsourcing that labour to LLMs. And if writing (or any other work) is really just about the deliverable, then why not? If the means to an end don’t matter, then why not outsource it?”
Ultimately, the problems that LLMs pose for academia can be solved, but doing so will require new thinking and different approaches to teaching and learning in some areas. The bigger problem is the glacial pace at which universities move. I know this from experience. In October 1995, the American academic Eli Noam published a remarkably prescient article in the journal Science called “Electronics and the Dim Future of the University”. Between 1998 and 2001, I asked every UK vice-chancellor and senior university leader I met what they thought of it, and was invariably met with blank looks.
Still, things have improved since then: at least now everyone knows about ChatGPT.
What I'm Reading
Online crime
Ed West wrote a fascinating blog post about sentences handed down for online posts during the Southport stabbing riots, highlighting the inconsistencies in the UK justice system.
Bannon’s dharma
The Boston Review has a fascinating interview with documentarian Errol Morris about Steve Bannon's dangerous “dharma” — his sense of being part of the inevitable unfolding of history.
Online forgetting
MIT Technology Review has a poignant piece by Niall Firth about the efforts to preserve digital history in a world of ever-growing data.