The Case for Luddism Against ChatGPT
“I’ll just use ChatGPT to write it.” That was the first thing I heard from a student on campus this January. They weren’t addressing me but answering a classmate who had asked how they could possibly go out that night given an assignment’s looming deadline. It wasn’t an auspicious start to the new semester—at least from my perspective as a humanities scholar who believes that our job is to help students acquire the messy skills of information collection, critical evaluation and meaningful synthesis. Historically, we’ve tried to achieve this through the essay form.
There are probably few faculty lounges around the world that haven’t witnessed exasperated discussions about what the availability of generative AI to the wider public entails. Instructors are frantically sharing examples of its outputs and wondering how to cope with the new technology. There’s nothing close to a consensus, only confusion.
Meanwhile, plenty of people who are not educators have some very strong feelings about what this all means (especially those who stand to benefit from the hype discourse that oozes out of Silicon Valley). According to Kevin Scott, Microsoft’s chief technology officer, teachers are freaking out for no reason, for “it’s a pedagogical mistake to think that the essay, like the artifact that you get at the end, is the important thing about what it is you’re trying to teach a student.” He also said that even “a toddler can teach someone how to do something,” which is apparently the relationship students will have with tools like ChatGPT: a hypothetical student, call him Charlie, will teach the AI how to write the essay he wants. (It is worth noting that Microsoft has invested billions of dollars in OpenAI, the company behind ChatGPT.)
Scott not only misunderstands what the essay assignment seeks to achieve; he also ignores the problems in our educational landscape that can transform ChatGPT into a pedagogical monster. Given those prospects, I would like to make a case for Luddism. By resisting technologies like ChatGPT, we can produce the kind of conflict that highlights the importance of what we are teaching and focuses the discussion on the resources we actually need to teach well.
Why the Essay?
Let us return to Scott’s “pedagogical mistake.” So many assumptions. So many questions raised. How effective will Charlie be in coaching the AI if he has no experience with the basic mechanics of crafting a compelling essay? Scott also seems to think of the essay as an end product rather than as a process (unsurprising, given his background in a field where the “deliverable” is king). But as any writing instructor will tell you, the value of an essay lies not in its final form but in the work the writer puts into it along the way. As clichéd as it sounds, writing is thinking, and it’s thinking iteratively.
This has been true of the essay genre from its very origin, which is usually traced back to the French philosopher Michel de Montaigne (1533–1592). Deeply versed in the classics, Montaigne would use passages from the likes of Plutarch and Cicero as starting points to develop his own understanding of what it meant to be human. Through these engaged readings, he produced essays with titles like “Of freedom of conscience” and “How we cry and laugh for the same thing,” constantly revisiting and revising them over time.
In French, an essai is an “attempt,” a “try.” Montaigne, who gave the genre its name, chose that word deliberately. Influenced by the Stoics, the Frenchman was a model of epistemic humility. He knew that he was not capable of arriving at any grand “truth” (for that was reserved for God) but realized that by thinking through writing he could construct some tangible meaning concerning the problems that were troubling him. His most famous essai at making sense of his place in the world was “Of cannibals,” in which he inverted the average European assumption that the recently encountered Tupinambá people of Brazil were barbarians for practicing cannibalism. In a rhetorical tour de force, Montaigne asked whether the Tupinambá wouldn’t also see the Europeans, who themselves waged violent wars and practiced cruel forms of torture, as the real savages. More than an example of cultural relativism (“each man calls barbarism whatever is not his own practice”), the essay reveals Montaigne’s discerning mind, for he exposes the unreliability of travelers’ accounts. “They never show you things as they are, but bend and disguise them according to the way they have seen them,” he wrote.
Academic essays today follow a more rigid framework than those Montaigne wrote. They usually demand a clear thesis-and-support structure and abstain from excessive self-exploration. Nevertheless, they retain the core that Montaigne instituted in the genre—tentative discovery. The key to any good academic essay is to start from a puzzle that has no clear and obvious answer, one that lends itself to multiple interpretations. That’s why writing instructors everywhere encourage their students to start from “why” or “how” questions. Why did the French Revolution happen? How does income inequality shape domestic politics? There are myriad compelling answers to these questions. “Why” and “how” open up universes that then need to be put into order. How does one do that?
To write a good essay (or offer a compelling solution to the puzzle), one must collect reliable evidence (research), reflect on the significance of that evidence and interpret it in relation to a larger context (analysis), consider the implications of those findings and propose a significant argument (thesis), and present all of it in an intelligible manner (structure). More than an end in itself, the essay is a means to get students to practice those skills. They might never write a text longer than 500 words after graduation. But if they practice these skills enough times, then maybe they will develop a more critical eye when deciding what to do with the avalanche of information out there (ironically, more and more of it produced by AI). The academic essay is a laudable form because it prompts students to think about the different elements necessary for rational discourse. In writing essays, they learn to distinguish between facts and arguments and come to realize that one can only arrive at compelling versions of the latter through nuanced analysis of the former. That is a desirable goal, especially in democratic societies whose health depends on informed citizens who can debate in reciprocally intelligible ways.
Why Luddism?
What does something like ChatGPT bring to the table when it comes to the academic essay? I’m not sure. Granted, the technology is effective at constructing logical and coherent sentences—something many students have trouble with. But to assume that students can critically improve on ChatGPT’s output without some prior solid foundation is, from my perspective, a stretch. How will they be able to tell whether the AI’s analysis is worth anything if they have never practiced much analysis themselves?
Writing in The New York Times, Kevin Roose (who also doesn’t have a background in education) makes the case that educators “should thoughtfully embrace ChatGPT as a teaching aid.” Banning the technology is doomed to fail, Roose argues, and measures to do so will only produce an adversarial dynamic between students and teachers. Instead, we should be thinking of ways to integrate the technology into the classroom—from having it produce outlines for an essay that students then go on to write on their own to asking students to evaluate the answers ChatGPT hands them (for those answers can be wrong). The problem, though, is that the outline activity circumvents a critical thinking skill (structuring arguments), while the correction activity runs counter to the developers’ end goal (making ChatGPT increasingly reliable).
While Scott is all in and Roose is thoughtfully in, I would like to make the case for all-out resistance to the technology—at least at this moment. This is a case for Luddism, or for what the philosopher of technology Langdon Winner labeled “epistemological Luddism.”
The original Luddites were textile workers in Britain who destroyed some of the new machines that accompanied early industrialization. Popular depictions have painted the Luddites as technophobes who resisted progress. But the works of historians from E. P. Thompson to François Jarrige have shown that these machine breakers were quite adept with technology (they were skilled artisans, after all) and that machine breaking was informed by a keen awareness that one cannot separate the technological from the social. For whom did the automated loom bring progress? Certainly not the unemployed weaver or the communities built around that mode of production. Machine breaking was often the last resort of workers who enjoyed few political rights: as E. J. Hobsbawm put it, it was a form of “collective bargaining by riot.”
Today we are witnessing a kind of Luddite revival, with books coming out with titles like Breaking Things at Work: The Luddites Are Right About Why You Hate Your Job and prestigious journals like Nature publishing articles titled “In Praise of Luddism.” This is in part due to the troubles stemming from Silicon Valley’s uncontrollable growth. But it is also part of a longer intellectual movement, whose first major theorization goes back to Winner’s 1977 book, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Published in the wake of more catastrophizing reflections on modern technology (like Jacques Ellul’s The Technological Society and Herbert Marcuse’s One-Dimensional Man), Winner’s book sought to develop a critical apparatus to make sense of a social world increasingly under the spell of technology. One of the things he proposed was adopting “epistemological Luddism” as a heuristic. As he explained, “in certain instances it may be useful to dismantle or unplug a technological system in order to create the space and opportunity for learning.” Doing so would allow us to evaluate more carefully how a technology and the social environment it’s embedded in condition our behavior—a necessary exercise to decide whether a technology is appropriate for a given place and time. In short, epistemological Luddism helps us maintain our autonomy in the face of autonomous technology.
Being a Luddite when it comes to ChatGPT is necessary because the balance is overwhelmingly tilted toward Silicon Valley and those who have invested vast sums in creating a market for educational technologies. The way ChatGPT and other technologies developed by OpenAI are being implemented is quite concerning. OpenAI claims its mission is to democratize AI. But a truly democratic adoption model wouldn’t unleash the technology on the world and make it available to whoever wants it. That mistakes the market for democracy (tellingly, OpenAI pivoted from its nonprofit origins to a for-profit model as soon as the market showed interest in its product). It’s also an approach that prizes efficiency over other values. Given the market pressures we all live under, we are prone to adopt a technology that cuts corners and gets us to a “deliverable” faster.
But what is lost in that process? Or, to be even more provocative, will it even make us more efficient? Most of the written content available on the internet is garbage, and something like ChatGPT is only going to make it easier to produce exponentially more garbage. Inefficiency might be just as likely an outcome as efficiency. Soon we will spend more and more of our precious time sorting the dwindling wheat from the mounting chaff in the internet’s endless content pit. Have we not learned anything from the unintended consequences of the utopian schemes concocted in Silicon Valley? Targeted advertising was supposed to streamline funding models, making content accessible to all while catering to our consumer desires. Instead, what we got was a crisis in journalism, the scourge of pop-up ads and surveillance capitalism. Social media was supposed to bring people together and democratize public discourse. Instead, what we got were filter bubbles that fomented polarization and the spread of fake news at unprecedented scale.
Avoiding Monsters
Roose is right about one thing, though: when it comes to educational spaces, resisting ChatGPT is bound to produce conflict. But that is precisely why we need to resist, for conflict creates room for inquiry and brings values into sharper relief (it can be pedagogically useful in that way). Much like Victor Frankenstein’s creation, who in Mary Shelley’s 1818 novel only becomes a monster because his creator refused to care for him, the main problem with ChatGPT is that it has been unleashed into an educational landscape ripe for the making of monsters. Resisting is therefore necessary to highlight what needs to change in that landscape.
Just as the automation of textile production benefited some and devastated others, we should expect ChatGPT to have a disparate impact on students. Those attending places like Princeton University, where the intensive first-year writing seminar is capped at 12 students per session, are lucky enough to receive a highly personalized educational experience that could allow for the careful integration of tools like ChatGPT. The same cannot be said for those attending colleges that rely on overworked adjuncts responsible for multiple large course sections. The affordances to use the product in constructive ways just aren’t there.
Much of the money invested in educational technology in the past decades—from plagiarism-detection software to MOOCs—has chased the larger profit margins that come with scalability. ChatGPT makes clear the pedagogical limits of that goal. The Modern Language Association’s guidelines stipulate that writing courses should not exceed 20 students and that faculty should teach no more than 60 students per term. Why? Because writing-intensive classes need to be small enough for instructors to offer personalized feedback. This is especially important because it creates buy-in from those who matter most: the students. Why would students who are getting little to no feedback on their work not resort to ChatGPT to write their essays? Why would they make themselves vulnerable on the page (for writing is indeed a very vulnerable act) if they know that the instructor reading it doesn’t have the resources to offer generous feedback? Better to let the machine produce a bland text than to take risks finding your own idiosyncratic voice. The problem is not just the technology but the context in which the technology embeds itself. If students feel like they are just another cog in the university assembly line, then they will submit generic, machine-produced essays.
The logical conclusion to this arrangement will be like the old Soviet joke: students pretend to learn and write; instructors pretend to teach and evaluate. We will have autonomous universities, so to speak, but spaces of very little human autonomy.