A Standard Disclaimer for AI-Curious Academics
I've noticed that when "pro-AI" academics—and by that I mean academics who don't reject AI out of hand for all purposes and are willing to experiment with the technology—write about AI, they generally feel the need to start with a disclaimer. The disclaimer establishes our bona fides as serious thinkers and liberals in good standing. It acknowledges our colleagues' main critiques of AI in the academic context (student cheating and threats to deep thinking, the death of writing and creativity, plagiarism and copyright concerns, environmental impacts, mass unemployment, rising inequality, the outsized political standing of tech billionaires, and a reasonable skepticism born of earlier hype cycles like crypto) and then moves on to say something like, "but LLMs are a reality and we need to engage with them… here's what I'm doing."
Here are three examples, the last one my own:
This plea for pluralism comes from a sense that battles between enthusiasts and critics of AI are not that important relative to larger problems. I'm talking about the sense that something is off in the functioning of institutions of higher education, and has been for a while now… Those who work in higher education and believe in its value should unify around academic freedom, public support for research, and the safety and success of students. But the need for unity should not extend to whether and how teachers use the technology. (Rob Nelson, AI Log)
To start, I think it's important to acknowledge that there's good reason for smart people to be skeptical. AI has historically been associated with grandiose claims, most of which have rarely panned out. At the same time, recent AI developments run against the grain of the intuitions most people form about how new technologies evolve and change. As a society, we just aren't used to things moving this rapidly. (Benjamin Breen, Generative History)
I think it's safe to say that most historians and archivists are skeptical—even resistant—to AI, and for good reasons. The extractive training practices, the environmental costs, the annoying hype cycle, and the threat to human expertise are all real. I share these concerns. But this is a use case I think we need to take seriously. (me, right here on Found History)
Three different writers, essentially the same opening gambit.
These disclaimers, including my own, are offered sincerely. Most of us writing this kind of thing really do sympathize with the critiques and share the values of our skeptical colleagues. We just think the best response to the problems of AI is to try to understand it better and hopefully steer it in directions consistent with our shared values and shared political, social, cultural, and educational commitments.
But disclaimers are tedious to write. And, frankly, they should go without saying. We are, by and large, talking to colleagues who know us, who have read our other work, and who should understand that our engagement with AI is not an abandonment of the humanistic project but a continuation of it by other means. The preemptive throat-clearing can sometimes feel less like good-faith intellectual caution and more like the academic equivalent of showing our papers.
So in the spirit of good software development—don't repeat yourself—I propose we standardize. Below is a block of boilerplate that those of us in the cautiously AI-curious camp can prepend to anything we write about the subject. Consider it open source. Pull requests welcome.
Before proceeding, the author wishes to acknowledge the following:
The training of large language models has involved extractive labor practices and the unlicensed use of copyrighted material. Their operation consumes significant energy and water. Their deployment threatens to displace workers, deskill professions, and concentrate wealth further in the hands of a small number of technology companies whose founders now wield disproportionate political influence. In educational settings, they raise legitimate concerns about cheating, the erosion of deep reading and writing, and the devaluation of the slow cognitive work that humanistic training is meant to cultivate. The hype surrounding them echoes the hype that surrounded crypto, the metaverse, and a long line of technologies that did not deliver on their promises.
The author takes these concerns seriously and shares many of them. The author values human creativity, human learning, and human labor, and believes that scholarly traditions developed over centuries are not obstacles to human flourishing but constitutive of it.
None of what follows should be read as dismissing these concerns or as enthusiasm for the companies, personalities, or business models associated with this technology. It should be read instead as an attempt to understand a technology that is, for better or worse, already shaping the institutions we work in, and to steer it—where we can—toward ends consistent with those traditions and values.
I'm only half joking. The serious point is that the ritual disclaimer has become something that people doing legitimately interesting experimental work feel obliged to spend time on, and time, as we all know, is a precious commodity in academia today. Meanwhile, the rejectionist position gets to assume its own bona fides and skip straight to the argument. I'd like to believe that among colleagues, we could extend each other the benefit of the doubt. I'd like to believe that engaging seriously with a new technology—including being willing to be wrong about it—is itself a humanistic act, in the tradition of scholars who have always had to figure out what to do with earlier new tools and new media.