AI Inverts the Disciplinary Hierarchy
Anthropic co-founder Daniela Amodei recently told Fortune that the humanities will be more important than ever in an age of AI. I think she's right. I'd go further. AI isn't just making the humanities more valuable. It's turning the entire hierarchy of academic usefulness on its head.
Since at least the turn of the century, computer science has been viewed as more practical than the humanities. For students, learning to code meant job security. For faculty, teaching computer science meant resources and prestige within the university—prestige that had gradually leaked away from the humanities departments.
But when computer programming no longer means writing code, but instead giving high-level direction and having AI write the code, it becomes more important to express yourself clearly than to know syntax. The humanities have always taught how to think through ambiguity, articulate complex ideas, and communicate what you want. Computer science taught that too, but it necessarily had to spend more time on the nuts and bolts. Now that the nuts and bolts have been automated, humanists have a leg up.
But the inversion goes deeper. Even within the humanities, I'd argue the hierarchy has flipped.
Philosophy was long considered the most impractical discipline in the liberal arts, pure abstraction with no obvious application. Now, as questions of consciousness, personhood, and agency come to the fore with AI, philosophy may be the most practically relevant field in the university. What does it mean for something to be conscious? What constitutes intelligence? What are the ethical implications of creating non-human minds? These have become engineering decisions.
Religious studies deals with similar questions of meaning, ethics, and what it means to be human, questions that become urgent when we're creating non-human intelligences. What is the moral status of an AI? What responsibilities do we have toward created beings? These are theological questions dressed in technological clothing.
Art history and literary criticism deal with aesthetics, questions of what makes something beautiful, meaningful, or worth creating. An AI can generate images and text, but it can't tell you whether those outputs matter or why. That kind of judgment requires the deep engagement with aesthetic traditions and real people that the humanities provide.
Meanwhile, those fields within the humanities that were once considered somewhat more practical, like government, law, and (as a historian I'm sorry to say) history, turn out to be surprisingly amenable to AI. ChatGPT can summarize a Supreme Court decision, analyze polling data, explain parliamentary procedure, draft a basic contract, or synthesize narratives from case law. Legal research and document review, once a lucrative specialty for junior associates at law firms, are exactly the kind of structured information retrieval and synthesis that AI excels at. It's humbling to include my own field on this list, but historical research that involves pattern recognition across large bodies of already digitized documents? Also quite amenable to AI. Of course, history has a built-in defense that many other fields lack: research that involves undigitized collections. Though we should be honest with ourselves and admit that ever fewer graduate students and junior scholars have the luxury of spending months on end in the archives. Many dissertations today are written substantially, if not mostly, with sources found online.
I'm not arguing that government, law, or history are now useless, or that philosophy majors are about to inherit the earth. But AI is good at the kind of information processing, structured analysis, and narrative synthesis that until recently made these fields seem more practical than, say, poetry. On the other hand, the disciplines we dismissed as most impractical—philosophy, religion, art history, literature—deal with precisely the questions AI can't answer for us. What is consciousness? What is beauty? What is the good?
The hierarchy has inverted. The "useless" has become essential, and the "practical" has become automated.
Maybe this should teach us something about how we talk about academic disciplines. For decades, we've sorted fields into "useful" and "impractical," often with brutal consequences for funding, enrollment, and respect. We knew which majors led to jobs and which led to unemployment. We knew which departments deserved resources and which were luxuries we couldn't afford.
But if AI can invert these hierarchies this quickly—if philosophy becomes more relevant than computer science, if art history matters more than law—then maybe our categories were never as stable as we thought. Maybe what seems "useful" is just what seems useful right now, in this particular economic and technological moment. Maybe we should be more cautious about defunding, dismissing, or disrespecting fields just because we can't immediately see their application.
The university isn't a trade school. But we've spent decades pretending it is, and directing students toward "practical" majors while treating the humanities as indulgences. AI is showing us what happens when the practical becomes automated. Suddenly we need the people who spent years thinking about consciousness, beauty, meaning, and ethics. Suddenly the "useless" knowledge is the knowledge we don't have.
This doesn't mean every field deserves equal enrollment or equal funding. It suggests we should reserve judgment about which subjects matter and which don't. The next technological shift might invert the hierarchy again. And we should probably stop acting like we know which knowledge will and won't be essential twenty years from now.