Does the iPhone Work in the Age of AI?

Apple's focus on perfection is at odds with the messiness of generative AI
Apple's new Liquid Glass user interface

Apple’s AI efforts are floundering. Apple Intelligence, which debuted more than a year ago, is still only marginally more useful than the Siri of ten years ago. (Genmoji, anyone?) Just yesterday, Apple announced that it is replacing its AI chief with a Google and Microsoft veteran.

The conventional wisdom is that Apple was simply caught wrong-footed by the quick rise of AI or, more generously, that it is taking a more deliberate, privacy-first approach than its peers. Maybe. But it has been more than three years since ChatGPT broke out, and Apple has been trying hard, with practically unlimited resources, to catch up. In that time, both DeepSeek and Grok have gone from zero-to-competitive with the frontier models of OpenAI, Anthropic, and Google. Even the far more staid Microsoft has responded better—transforming from a company once mocked for Clippy into a company that owns a significant chunk of OpenAI and has integrated Copilot across its entire product line. Meanwhile, my iPhone still can’t guess that I’m trying to type “instability” when I mistakenly hit “o” instead of “i” for the first letter but then get every other letter after that (-n-s-t-a-b-i-l-i-t-y) correct.

Perhaps this is just mismanagement or a mismatched company culture. Maybe Apple comes out with something tomorrow that blows us all away.

But here’s a thought: maybe the iPhone is just poorly suited to this kind of technology.

The UI-less Technology

Given the sophistication of the technology, it’s truly remarkable how simple our AI chatbot interfaces are. Most people interact with AI via a text box on a more-or-less blank screen. The biggest development in AI user interfaces over the last year has been their integration with the command line. You don’t so much navigate AI as converse with it. You don’t tap through a hierarchy of screens to accomplish a task; you describe what you want, and it figures out how to do it. Moreover, as agentic AI becomes more capable, the screen may become even less central to the experience.

Apple was perfectly positioned to dominate the visual and tactile era—the world of clicking around with a mouse or swiping around with a finger across the glowing surface of a screen. The company’s legendary expertise in user interface design gave it an enormous advantage in the age of the web and then the smartphone. But perhaps this expertise just doesn’t translate to an AI world in which you simply talk to your technology and it does stuff for you.

The iPhone is so controlled, so integrated—so perfect—that perhaps it’s just constitutionally unsuited to the openness of the AI model. The iPhone is built around beautiful icons, elegant animations, and carefully considered layouts. Apple’s Human Interface Guidelines explicitly stress hierarchy, harmony, and consistency as core values. AI just wants to do things, usually invisibly. The iPhone experience is about control and precision—you tap exactly where you want and see exactly what the app designer intended. The AI experience is about delegation and ambiguity—you express an intention in natural language, and the system interprets it and executes.

Take Siri. It has been around for over a decade now, and it has never really taken hold among iOS users. Siri seems incredibly impoverished now that ChatGPT is in widespread use. But viewed from the perspective of 2015, Siri was actually quite remarkable. It could set timers, send messages, make calls, check the weather, search the web, play music, and get directions—all by voice. And yet nobody really used it very much.

The standard explanation is that Siri was too limited, too prone to misunderstanding. And that’s true. But maybe that’s not the whole story. Maybe the deeper problem is a mismatch between the form factor and the interaction model. The iPhone just feels like something that wants to be held and touched and looked at—not something that wants to be in conversation. When you’re holding a beautiful object with a gorgeous screen, talking to it feels… wrong. You want to use it, not talk to it.

This might explain why Siri worked best for quick, hands-free commands (setting timers while cooking, playing music while driving) but never became a primary way of interacting with the device. The iPhone’s fundamental nature kept pulling users back to the screen.

A Fundamental Mismatch

But there’s an even deeper tension here. Apple’s entire design philosophy is built around anticipating user needs and presenting a curated, controlled experience. The App Store model, which has been Apple’s greatest business success, is premised on the idea that Apple knows best which software you should run on your device and how it should behave.

But generative AI isn't like that. It is inherently unpredictable. That’s the point. You ask it something and you don’t know exactly what you’re going to get. The output is emergent and probabilistic. An LLM doesn’t present you with three carefully designed options. It generates a response on the fly that has never existed before.

How do you design a perfect Apple experience around something that is, by its nature, imperfect and unpredictable? How do you maintain the famous Apple polish when the core technology is a black box that might say anything?

The companies that are winning in AI right now—OpenAI, Anthropic, and Google—have embraced this messiness. They ship products that are obviously incomplete and sometimes fail in embarrassing ways. That’s not how Apple works. Apple announces a product when it’s ready, polished, and perfect. The current AI development model is fundamentally in tension with the big Apple reveal.

Perhaps Apple can overcome this. It certainly has the resources and the talent. And one could argue that Apple’s tight integration is actually an advantage for AI. An agent that can access your calendar, email, messages, photos, and health data through Apple’s unified ecosystem could be far more powerful than one siloed in a single app. Apple’s privacy-focused, on-device approach might also appeal to users nervous about sending their lives to OpenAI’s servers.

But the history of technology suggests caution. Clayton Christensen’s The Innovator’s Dilemma documented how successful companies repeatedly fail to adapt to disruptive change—not because they’re stupid or lack resources, but because their existing strengths become weaknesses in a changed environment. The very things that made the iPhone revolutionary (the seamless integration and controlled ecosystem) may be exactly what makes it difficult to adapt to an AI-first world.

I’m not predicting Apple’s demise. The company has surprised skeptics before—with the Mac, the iPod, and the iPhone, to name a few. And it’s possible that the current AI enthusiasm is overblown. But I keep coming back to my iPhone keyboard, unable to guess “instability” from “onstability,” and I wonder: is this a company that’s about to crack AI, or a company whose fundamental product philosophy is wrong for an AI world?