Learning from Firefox: Building Better AI for Digital Humanities and Cultural Heritage
In "Wanting Not To Want AI," Anil Dash recounts his recent experience serving as MC at the Mozilla Festival, where it seems the hot topic in the hallways was, unsurprisingly, AI. In his post, Dash articulates a tension in the Mozilla community that I've been watching play out in digital humanities and library technology circles for the past year or so.
The Mozilla community (which, like the digital humanities and library communities, is grounded in ethical technology development) is deeply skeptical of "Big AI" and its extractive training practices, privacy violations, environmental recklessness, and plutocratic economics. But Mozilla is also, fundamentally, a browser company, and the fact of the matter is that hundreds of millions of people are using AI tools through their browsers every day. When Dash pointed this out to people at the Mozilla Festival, he reports that they were often quick to acknowledge the first set of facts (AI companies do bad things) but would hesitate to acknowledge the second (people like AI):
Virtually everyone shared some version of what I'd articulated as the majority view on AI, which is approximately that LLMs can be interesting as a technology, but that Big Tech, and especially Big AI, are decidedly awful and people are very motivated to stop them from committing their worst harms upon the vulnerable. But. Another reality that people were a little more quiet in acknowledging, and sometimes reluctant to engage with out loud, is the reality that hundreds of millions of people are using the major AI tools every day.
It is true, Dash admits, that AI is sometimes "foisted" on users by corporate bosses, big tech companies, and the media. But it is also true that there are hundreds of millions of people who are choosing to engage with these platforms voluntarily. The fact of the matter is that many people seem to like AI. The question for those of us who care about copyright, privacy, fair labor practices, educating our children, and human creativity and flourishing is what we are going to do about it.
The Firefox Precedent
Dash's argument for what to do draws on a historical parallel from Mozilla's own past and one that should resonate deeply with the digital humanities and library communities. In the early 2000s, Mozilla built Firefox as a better, safer, more privacy-focused alternative to Microsoft's Internet Explorer. Mozilla didn't respond to Internet Explorer's pop-up ads and privacy leaks by telling people to stop using web browsers. It made a new web browser that people actually wanted to use and could trust. Dash proposes that the Mozilla community mount a parallel response to the challenge of AI:
I don't know why today's Firefox users, even if they're the most rabid anti-AI zealots in the world, don't say, "well, even if I hate AI, I want to make sure Firefox is good at protecting the privacy of AI users so I can recommend it to my friends and family who use AI". I have to assume it's because they're in denial about the fact that their friends and family are using these platforms.
Replace "Firefox users" with "digital humanists" or "librarians" and you have our dilemma. Whether we like it or not, we have colleagues, students, and patrons who are using AI. We can either ignore that reality and lose influence over how AI develops, or we can actively shape alternatives that embody our values.
Three Responses
Dash proposes three specific strategies to the Mozilla community that translate remarkably well to our context:
- "Just give people the 'shut off all AI features' button." This is about respecting user agency. For Firefox, it could be a button in the toolbar. For us, it means building systems where AI assistance is genuinely optional, where opting out doesn't mean reduced functionality or second-class citizenship. It means not following the tech industry pattern of making AI "features" impossible to avoid.
- "Market Firefox as 'The best AI browser for people who hate Big AI.'" Because most ordinary people don't actively think about technology ethics all that much, Dash suggests Mozilla should position Firefox as an explicit alternative. This would both raise awareness of the harms of AI and offer a concrete alternative for people to choose. For our community, this likewise means actively promoting tools and platforms that offer AI capabilities without the extractive practices, privacy violations, and labor exploitation of Big AI. We should be loud about our alternatives.
- "Remind people that there isn't 'a Firefox'—everyone is Firefox." Dash argues that the open source nature of Firefox is itself empowering, that people can build their own responses to Big AI's harms using the underlying platform. We share this opportunity. The digital humanities and library technology communities have decades of experience building open, collaborative, community-owned infrastructure. We already have the tools upon which we and our users can build ethical alternatives.
From Protest to Practice
The core of Dash's message is that we can't afford to be purely reactive. Protesting Big AI's harms is necessary but insufficient. We need to build alternatives that people actually want to use.
This doesn't mean abandoning our critiques. It means channeling them into construction. It means acknowledging that AI is here, that people find it useful, and then asking ourselves some hard questions:
- How do we build AI tools that respect copyright and creator rights from the ground up?
- What would discovery systems look like if they were designed with library and archival ethics as primary constraints rather than afterthoughts?
- How do we create AI assistants for research and teaching that augment rather than replace human expertise?
- What institutional and architectural models let libraries and archives deploy AI capabilities while maintaining control over their data and serving their communities' actual needs?
These aren't hypothetical questions. They're engineering problems, policy challenges, and design opportunities. They require the same kind of collaborative, community-driven work that built the tools we already rely on.
The Work Ahead
Many of us are outraged by AI's extractive training practices, its environmental costs, its threat to human creativity and labor, and what's happening to teaching and learning. But we should also allow ourselves to be excited by what these tools might enable for humanities scholarship and cultural heritage work if implemented responsibly. We should let ourselves see the potential for making our collections more accessible and for augmenting the work of understaffed institutions.
What's more, if we take Dash's argument seriously—if we accept that the best response to Big AI is building better alternatives—then we will see that we're uniquely equipped to do exactly that. Like Mozilla, we already have the technical knowledge, the community structures, the institutional relationships, and most importantly, the ethical frameworks to guide this work. We're starting from decades of open platform development, values-driven design, and active resistance to big tech.
What we need now is to move from reactive protest to proactive construction of AI tools that truly serve humanities research, education, and cultural heritage. Mozilla and Firefox have always been an inspiration in this work. When my Digital Scholar colleagues set out to build a web-first alternative to older reference management tools like EndNote, they turned to the open infrastructure of Firefox. Initially released as Firefox Scholar, that Firefox-based tool would eventually become Zotero. Around the same time Zotero was being built, a few others of us were working on an oral history of Mozilla. The lessons we learned in those interviews shaped our thinking in building Omeka and its community.
Once again, we can draw lessons from our fellow travelers at Mozilla. The histories of both Mozilla and digital humanities show that you can build alternatives to big tech that people actually use. Firefox didn't outlast Internet Explorer by being morally superior. It won by being better.