Ten Things You Need to Know About AI
Author’s note: Regular readers will know that I've been writing about and experimenting with generative AI systems a lot lately. AI figures much less prominently in my teaching, but as we approach the end of the semester in DMD 2010: History of Digital Culture and dig into more contemporary issues in the history of 20th and 21st century computing, it just so happens that AI is on the syllabus for today. (We tackled blockchain and crypto earlier in the week and we'll watch Her next week.) The following is the latest revision of a lecture I've been giving for the past few semesters, updated for late-2025.
Introduction: A Different Kind of Conversation
For weeks, we've analyzed historical shifts, from mainframes to PCs, from Web 1.0 to Web 2.0, from desktop computing to mobile computing. We’ve looked at the rise of social media and cloud computing. We've looked at blockchain and cryptocurrency as attempts to enclose digital commons. These were important conversations about how technology reshapes culture and economy over time.
But the rise of Large Language Models (LLMs) and Generative AI looks different. It looks like a historical break, one that’s happening faster than anything we've seen before and with implications we're only beginning to understand.
I want to be clear: I don't know how this story ends. Nobody does. The experts building these systems don't know. The CEOs running these companies don't know. The policymakers looking to regulate them don't know. Social media influencers certainly don't know. We are all, collectively, figuring this out in real time.
What I can offer you today is a framework for thinking it through—ten realities about LLMs and their capabilities that should inform your thinking as you move into your careers. These aren't predictions. They're facts about where we are right now, at the end of 2025, and what those facts suggest about where we're heading.
Some of this will sound optimistic. Some of it will sound cautionary. That's appropriate, because the technology itself is genuinely both promising and unsettling, often at the same time. Your job isn't to be either an enthusiast or a skeptic. It's to be as informed and thoughtful as possible as you make decisions about how to work with, around, and sometimes even against these systems.
I. The Black Box: We Don't Know How They Work
The first reality—and perhaps the most worrying—is that we are using tools that are fundamentally beyond human comprehension.
We have a name for this: the interpretability problem. Nobody—including the scientists and philosophers at OpenAI, Google, and Anthropic—can precisely explain how a massive neural network's billions of weighted parameters combine to produce the specific response an AI gives to a prompt.
Think about what this means. We know the inputs: vast datasets scraped from the internet, millions of books, scientific papers, conversations. We know the outputs: the text that appears on your screen when you ask ChatGPT a question. But the billions of calculations in between? Those operate as a “black box” that’s all but impossible to see inside.
When an AI produces bias—and it does—we often can't trace why. When it generates misinformation—and it does—we can't always identify the faulty reasoning. When it develops an unexpected capability that nobody programmed into it—and this happens regularly—we're often surprised.
This isn't like debugging code that you can trace line by line. This is more like trying to understand why a human brain made a particular decision, except the "brain" has billions of parameters instead of neurons, and we can't ask it to explain its thinking in any reliable way.
Why this matters: You're going to be asked to build systems that incorporate AI. You're going to be asked to trust AI-generated content. You're going to be asked to explain to clients or users how an AI-driven feature works. And the honest answer, in many cases, will be, "We don't know." That's not a comfortable position for anyone, but it's the reality we're working with.
II. Exponential Speed: The Six-Month Doubling
If history teaches us anything, it's that technological change is often faster than human institutions can adapt. With generative AI, that speed is exponential.
Sundar Pichai, Google's CEO, has written that AI computing power is doubling approximately every six months. Let me be clear about what "exponential" means, because we use that word loosely sometimes. This isn't a steady upward climb. This is a curve heading straight up.
If something doubles every six months, it's not twice as powerful in a year; it's four times as powerful. In two years, it's sixteen times as powerful. In five years, we're talking about capabilities that are over a thousand times more powerful than what we have today.
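If you want to check that arithmetic for yourself, here's a minimal back-of-the-envelope sketch in Python. It assumes a clean six-month doubling period, which is of course an idealization; real capability gains arrive in lumps, not smooth curves.

```python
# Back-of-the-envelope growth under an assumed six-month doubling period.
# Illustrative arithmetic only; actual progress is far lumpier than this.

def growth_factor(years: float, doubling_period_years: float = 0.5) -> float:
    """Multiple of today's capability after `years`, given the doubling period."""
    return 2 ** (years / doubling_period_years)

for years in (1, 2, 5):
    print(f"After {years} year(s): ~{growth_factor(years):,.0f}x today's capability")
# After 1 year(s): ~4x today's capability
# After 2 year(s): ~16x today's capability
# After 5 year(s): ~1,024x today's capability
```

The precise numbers matter less than the shape of the curve: small changes in the doubling period compound into enormous differences within just a few years.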
Here's another way to think about it: Moore's Law, which governed computing for decades, predicted a doubling every 18-24 months. Think of the changes we’ve seen since 1945 under that growth curve. AI is moving three to four times faster than that.
This creates a profound challenge for every institution in society. Universities design four-year curricula. Companies plan in three-to-five-year strategic cycles. Governments pass legislation (if they pass it at all) that takes years to implement and is designed to be in effect for a generation or more. But how can you plan for a technology that is fundamentally different every six months?
Why this matters: Whatever skills you're learning right now—including how to use current AI tools—will likely be obsolete or radically transformed before you graduate. That's not a reason not to learn them. But it does mean you need to focus on underlying principles, not specific tools. Learn how to learn quickly. Learn how to adapt. Those meta-skills are going to be more valuable than any particular technical competency.
III. Qualitative Leaps, Not Gradual Improvement
To understand this exponential speed in concrete terms, look at the historical evidence from just the past two years.
I sometimes show two charts side by side: GPT-3.5's performance on entrance exams and professional accreditation tests, and GPT-4's performance on the exact same tests, less than one year later. The leap in capability is startling. GPT-3.5 scored in the 10th percentile on the Bar Exam. GPT-4 scored in the 90th percentile. This isn’t incremental improvement. It is a qualitative leap.
What’s important here is that we don't always know what new capabilities will emerge when we scale these systems up. We don’t know what or how big the next jump will be. Researchers train larger models and then discover, after the fact, that the model can do things nobody explicitly programmed it to do. This is called “emergence,” when complex phenomena arise from simple rules at scale, and it means we can't reliably predict what GPT-6 will be able to do just by looking at GPT-5. The capabilities might jump in unexpected directions.
Why this matters: Don't make career plans based on "AI can't do X yet" as if that's a stable fact. In two years, it might be able to do X extremely well. Instead, think about the kind of work that's fundamentally difficult to automate, not because the technology isn't there yet, but because the nature of the work itself resists automation. We'll come back to this question at the end of class.
IV. The Latent Frontier: You're Using Old Technology
Here's a reality that most people don't fully appreciate: the tools you interact with every day—the public versions of ChatGPT, Claude, and Gemini—are already technically obsolete.
Right now, the companies building these systems have models that are significantly more capable than what they've released publicly. The speculation—and this is informed speculation from people close to these companies—is that the delay in releasing GPT-5 was partly about implementing safety measures and guardrails, but it was also about public readiness. There was a real concern within OpenAI that if it released GPT-5 too soon, it would "freak people out," as some insiders put it. The worry was that the performance would be too human-like, the potential for misuse too obvious, or the societal implications too overwhelming.

This means that what we have access to right now is a deliberately throttled technology. What you're using today is probably at least a year or two behind the cutting edge. Maybe more.
This creates an odd situation: we're trying to understand the social implications of AI based on systems that don't represent the current state of the art. We're developing policies and practices for technology that's already been surpassed internally at the labs where it's being developed.
Why this matters: When you hear experts say "AI can't do that" or "that's years away," take it with a grain of salt. They might mean "publicly available AI can't do that" or "AI that we're willing to talk about can't do that." The actual frontier might be further along than the public conversation suggests. This isn't conspiracy thinking—it's just recognizing that there's a gap between what exists in the lab and what's been released.
V. The End of Synthetic Writing
So what about your own work and education? Let's start by talking about a common category of work that I think is especially at risk in the age of AI: what I call “synthetic writing.”
As of today, you still have to be somewhat good at "prompt engineering" to get AI to produce quality written work. You need to learn to give an AI instructions in a way it understands and to iterate on those instructions until the AI gives you what can stand as a “good” piece of writing. That's a skill, and some of you have gotten quite good at it, as I saw in your Tron/Matrix assignment for this class.
But that skill is rapidly becoming obsolete. The latest AI systems are dramatically better at assessing user intent than the previous generation. They adapt to your conversational style. They ask clarifying questions when your prompt is ambiguous. They learn from context what kind of output you're looking for.
What this means is that a huge category of human labor is genuinely at risk: synthetic writing. By this I mean writing that synthesizes existing research—anything that compiles information that can be found online and structures it as a coherent narrative. This is a huge category of creative work. It includes:
- Journalism summarizing published research, legislation, public announcements, or industry reports
- Much of legal writing (compiling precedents, writing standard contracts, etc.)
- Much of business writing (reports, memos, proposals, presentations, etc.)
- Much of technical writing (user manuals, etc.)
- Entry-level research and scholarship that's primarily literature review
I'm not saying this writing no longer has any value. I'm saying that AI can increasingly do it better, faster, and cheaper than most humans can. And it's getting better at it faster than it’s getting better at other categories of work.
Why this matters: If you're planning a career as a writer or content creator, you need to think carefully about the kind of writing you're doing. Writing that synthesizes existing information can increasingly be automated. What you want to be doing is writing that generates genuinely new ideas, writing that’s grounded in specific lived experience, or writing that does original investigative work. An AI can’t talk to people on the street. It can’t describe its personal experience of watching a sunset. It can’t discover things about the external world that aren’t already described somewhere online. We'll come back to this distinction in a minute.
But for now I want to stress something else. You yourself still need to learn to write well, even if—especially if—AI will ultimately do much of your actual writing for you. Because if AI is generating your first drafts, then your job shifts from writer to editor. And you can't evaluate and edit writing if you don't understand what makes for good writing.
You need to know what clear prose looks like. You need to recognize when an argument is well-structured versus when it's superficially coherent but actually muddled. You need to understand tone, register, and audience. You need to spot when something sounds plausible but is actually misleading or incomplete. Think of it like this: even if they never personally build a house, an architect needs to understand construction. In a world where AI generates the first draft, your value is in your judgment about what's worth keeping, what needs revision, and what needs to be thrown away.
So don't blow off your writing-intensive classes saying, “AI will do this for me when I’m out of school, so why do I need to do it now?” Take them thinking, “I need to understand this well enough to direct and evaluate AI.”
VI. The Emergent Coder: Technical Skills Aren't Safe Either
You probably know that ChatGPT can code. What you might not realize is that OpenAI never explicitly trained it to code; the company discovered that it could almost by accident. The model learned how to structure logic, apply syntax, and debug code from the vast datasets it consumed: GitHub repositories, Stack Overflow discussions, programming textbooks. Coding emerged as a capability from exposure to patterns in the training data.
For years, we've told students that learning to code was a reliable path to career security. "Learn to code" became almost a mantra for students, parents, and educators alike. And historically, that made sense. Software was eating the world, and programmers were the ones writing the software.
But if AI systems can code—and they're getting very good at it—what happens to that advice? How long before an AI can reliably build yet another delivery app? Or design core game logic for mobile games? Or write the backend for a standard web application? These aren't rhetorical questions. The answer is: it kinda already can.
Now, I want to be careful here. I'm not saying all programming jobs are about to vanish. High-level architecture, algorithm design, and debugging complex distributed systems are still things that humans need to do. But entry-level programming and routine coding tasks are increasingly automatable.
Why this matters: The idea of any technical skill as "bulletproof job security" is questionable. That doesn't mean you shouldn’t learn technical skills. As with writing, you absolutely still should learn to code. But if AI is writing your code, then your job becomes directing the AI and evaluating its output. Making architectural decisions about how systems should be designed, anticipating security vulnerabilities, and debugging when things go wrong are all things you can’t do if you don't understand code yourself. As with writing, you need to learn to code not to compete with AI at writing it. You need to learn it so you can be the one who knows what's worth building and whether it's been built well. In the age of AI, creativity, problem-framing, and judgment will only matter more than raw technical execution.
VII. Breaking Out of the Digital Box
Right now, AI largely operates within the digital world. You type text, it generates text. You upload an image, it analyzes the image. The interaction is bounded by screens and networks. But that boundary is artificial and temporary.
Consider this scenario: An AI has access to all the major gig economy and logistics platforms—TaskRabbit, Uber, Amazon Marketplace, Fiverr. It can create accounts, post tasks, hire workers, and coordinate deliveries. Each individual human worker sees only their small piece of the task. But the AI sees the whole picture. How hard would it be for such a system to orchestrate a complex, potentially destructive event where no single human actor is aware of the full plan? One person buys components that seem innocuous. Another person assembles them following instructions that look like standard work. A third person delivers the finished product to a specific location. The sum of these actions achieves something that none of the individuals understood they were contributing to: a bomb in a government building.
This isn't science fiction. The technical capability exists right now. What's stopping it is primarily that we haven't connected AI systems to real-world orchestration tools in this way. But the gig economy platforms exist. The payment systems exist. The logistics networks exist. The AI capability exists. The gap between "technically possible" and "actually happening" is narrowing. And we're simply trusting the venture capital firms that own the big AI companies not to release these capabilities to the public.
Why this matters: The political, legal, and regulatory response to AI can't afford to lag years behind the technology. This is no longer merely an educational or economic policy issue. It's an immediate public safety concern. As citizens and as professionals, we need to be advocating for thoughtful governance that's actually contemporary with the technology we're deploying. That means you need to advocate for good governance of these technologies. And it means you need to understand these systems well enough to participate in those conversations.
VIII. The Geopolitical Arms Race: Two Futures, Not One
Furthermore, whether we like it or not, AI development is not just a commercial issue or even a domestic policy issue. It's fundamentally geopolitical. The United States and China are locked in an arms race for dominance in AI. This isn't hyperbole. It's the explicit framing used by policymakers in both countries.
Here's what makes this particularly significant. Unlike other areas of technology where the US and China are intertwined (manufacturing, supply chains, scientific research), AI development in the two countries is increasingly being kept separate by export controls and nationalist industrial policies. The US and China are not going to be partners in creating this technology. What this means is that two separate and distinct branches of AI will emerge. Different architectures, different training data, different values embedded in the design, different capabilities, different guardrails, different risks.
We already see this beginning. Chinese AI models are trained on different datasets, optimized for different languages and cultural contexts, and designed with different censorship and control mechanisms built in. There's also almost certainly classified military AI development happening in both countries that operates at the true frontier of the technology—capabilities that the commercial world has no knowledge of. The AI systems we're discussing publicly may not represent the actual state of the art in military and intelligence applications.
Why this matters: As global citizens and as professionals who may work internationally, you need to understand that "AI" isn't one thing. The AI systems developed in different contexts will have different characteristics and different implications. Standards that emerge in one country may not apply in another. And the geopolitical competition shapes what gets built and how it gets deployed in ways that aren't always visible from the consumer side.
IX. Recursive Self-Improvement: AIs Building AIs
Up until now, we've been talking about human-built AI systems. Researchers design architectures, train models, evaluate performance, and iterate on designs. Humans are in the loop at every stage. But we're approaching—or may have already reached—a threshold where AI systems can participate meaningfully in building, modifying, and improving their own successor models.
AI can already write code, including code that implements neural network architectures. AI can already evaluate the performance of other AI systems. AI can already propose optimizations to training procedures. The pieces exist for a more autonomous development loop. When that loop closes, when AI systems are regularly building their next generation with minimal human intervention, the pace of advancement will accelerate beyond what we've seen so far.
This is what AI evangelists are looking at when they talk about transforming human civilization within a generation. They’re not just predicting that AI will be much better, but that the rate of improvement will break away from human timescales. There's a term for this—recursive self-improvement—and some researchers think we're already there. Others think we're five years away. But very few serious people think it's more than a decade away.
Why this matters: If these predictions are even partially correct, the world you'll be working in will be almost unrecognizably different from the world you're preparing for now. Institutions, job categories, and economic structures could all be fundamentally transformed. That's not a reason for despair, but it is a reason to build adaptability and resilience into your professional identity.
X. General Purpose Technology and the Human Mandate
This brings us to the final reality, which is also the most important for thinking about your futures. AI is not just a tool. It's what economic historians call a “General Purpose Technology.” A General Purpose Technology is a technology so fundamentally useful that it transforms every economic sector and social institution it touches.
Previous general purpose technologies include the printing press, which transformed knowledge production and dissemination; the steam engine, which transformed manufacturing and transportation; electricity, which transformed everyday life (including the way we experience day and night and the very passage of time); and the internet, which transformed communication and commerce. None of these technologies just created new products or services. They changed what kinds of work humans did and how that work was structured. They reorganized entire economies, societies, and culture itself. AI is arguably in this category, and if it is, then its impact won't be confined to "tech jobs" or "digital media." It will reshape every profession, every industry, every institution, even the way we experience the world.
What Does This Mean For You?
So what should you, as humans entering a world that could be fundamentally changed by this new technology, do with all this? This is where I want to offer both a challenge and a partial framework. Your mandate, as you move into your careers, is to identify what I’ll call “distinctively human work,” the capabilities and contributions that remain unique to humans and still valuable in a world of automated intelligence.
Don't ask: "How can I use AI to do my work more efficiently?" Instead ask: "What work should I be doing that AI fundamentally cannot do?"
For example:
AI can:
- Generate text, images, code, music based on patterns in training data
- Synthesize information from vast sources faster than any human
- Predict outcomes based on historical patterns
- Optimize within defined parameters
AI can’t:
- Innovate truly outside its training data
- Understand the weight of history and human experience
- Make judgments that require a lived, embodied, and messy physical existence
- Exercise genuine empathy rooted in shared vulnerability
- Take responsibility for decisions in the way we expect humans to do
- Be physically present with other humans
Let me get concrete, because ending on an abstract philosophical note isn't helpful when you're trying to figure out what classes to take or what job to apply for. Here are some directions that seem promising, based on what we know about AI today:
1. Work that requires deep contextual judgment
AI is excellent at pattern matching but struggles with context that isn't explicit in its training data. If your work requires understanding local culture, history, organizational politics, material culture (artifacts and archives), or the nuances of a particular community, that's harder to automate. Working in a local library or museum is a good example.
2. Work that requires original problem framing
AI is very good at solving problems you can clearly specify. It's not good at figuring out what the problem actually is, or whether it's a problem people actually need solved. Take, for example, a company that knows its product isn't selling but doesn't know why. The work of investigating, interviewing customers, and observing usage patterns is distinctively human work.
3. Work that requires building trust and relationships
AI can simulate empathy, but it can't actually care. For work where trust matters—therapy, coaching, even sales and negotiation—humans have an advantage.
4. Work that requires ethical judgment in ambiguous situations
AI can apply rules consistently, but it struggles when the situation doesn't fit existing frameworks, or when values conflict. For example, an AI can write a compelling front-page exposé, but it can’t decide whether to publish information that is newsworthy but might put someone’s reputation or livelihood at risk.
5. Work that creates genuinely new cultural artifacts
AI generates outputs based on existing patterns. Humans can create things that are genuinely new and unexpected yet still genuinely matter. Artists often make work that seems weird, confusing, and meaningless…until it doesn’t. They break patterns in ways that later seem inevitable but weren't.
6. Work that involves your physical presence and embodied skill
For now, AI is mostly disembodied. Work that requires physical presence, spatial awareness, fine motor control, or embodied intuition is harder to automate. Healthcare, performance, public speaking, politics, or anything involving physical manipulation of real-world objects in unstructured environments is distinctively human work.
A Final Thought: Staying Human in an AI World
What all of these things have in common is physicality, presence, and empathy. These are the skills you should cultivate, and jobs that require them are the ones you should seek out as you embark on your career.
Thus I want to end with something that might sound obvious but needs to be said explicitly: your job at this moment is not to compete with AI. It's to remain human. That sounds trite, but there’s going to be enormous economic, social, and professional pressure to make yourself more like an AI: to be faster, more efficient, more consistent, more optimized, and always on.
Resist that pressure. The value you bring is precisely that you are not a machine. You get tired. You need breaks. You have bad days. You misunderstand things. You have intuitions you can't fully explain. You care about things for reasons that aren't rational. You have a body that constrains and shapes your thinking. You exist in relationship to others.
All of this—everything that makes you “inefficient” compared to an AI—is actually the source of what makes you valuable. Because in a world where the machine can do more and more, what the machine can’t do is what becomes truly valuable. Your weird interests. Your messy relationships. Your physical limitations. Your emotional responses. Your ethical commitments that don't optimize for any clear outcome. Your capacity to be present with another person.
As you move forward into careers, keep this in mind: your job is not to become more like the machines. Your job is to figure out what the machines can't be, and to become more fully that.