Over the last year, AI-enabled design tools have meaningfully changed how quickly teams can produce artifacts. We can generate flows, screens, variants, and prototypes at a pace that would have been unthinkable even eighteen months ago. That progress is real—and worth celebrating.
But speed, on its own, is not a strategy.
What I’m seeing in executive conversations is genuine excitement about velocity, paired with a lack of precision about what kind of speed we’re actually buying. Most AI enthusiasm today is enthusiasm for production acceleration—how fast teams can make things. The harder question is whether that production speed is translating into decision speed—faster, better decisions grounded in user reality.
Those two are not the same.
Production speed reduces the cost of making artifacts. Decision speed reduces uncertainty at the moment choices are made. One improves throughput. The other improves outcomes. Confusing them creates a subtle but expensive risk: teams feel confident sooner without actually being more correct.
There’s an unspoken assumption embedded in many AI conversations: Faster making → faster iterating → better outcomes.
What’s missing from that loop is the actor.
When we say “iterate faster,” who are we iterating with—users, or ourselves?
AI dramatically increases internal velocity: how quickly teams align, revise, and move ideas forward inside the building. What it does not automatically increase is external learning: how quickly assumptions are validated with real customers, in real contexts, with real consequences.
That gap matters. Internal velocity without external learning doesn’t create insight—it creates rehearsal. Teams move quickly, confidently, and coherently… in directions that may or may not map to reality.
This gap between internal velocity and external learning shows up wherever an organization depends on functions that carry signal from outside the building—UX, customer success, sales, support, research. These roles exist precisely to create grip: the connection to reality that keeps speed from becoming drift. I’ve spent twenty-five years on the UX side of this equation, and the pattern is consistent. When production timelines compress, the functions that slow down to listen get pressure to speed up and match. But that’s a category error. Their value isn’t in keeping pace—it’s in keeping contact.
Which raises the real question: AI is a force multiplier—but what are we multiplying? Insight, or opinion?
If AI is accelerating decisions that are grounded in validated user understanding, that’s leverage. If it’s accelerating decisions that haven’t been tested outside the room, that’s confidence debt. And confidence debt, like technical debt, always comes due—just at a less convenient time. I’ve seen this pattern across fintech, insurance, and enterprise software: teams ship in half the time, celebrate the velocity, then spend three quarters untangling assumptions that were never tested. The rework isn’t dramatic—it’s quiet. A feature that doesn’t get adopted. A flow that requires constant support intervention. A roadmap that keeps revisiting the same problem because it was never actually solved.
There’s a simple diagnostic leaders can apply immediately: If an iteration loop doesn’t include a real user signal, we’re not iterating—we’re rehearsing.
For any AI-accelerated design work, we should be able to answer three questions:
- What assumption are we testing?
- Who outside this room can confirm or refute it?
- How quickly will that signal come back?
Those aren’t UX questions. They’re governance questions. When AI compresses production timelines, the assumptions embedded in what we’re building get locked in faster. That makes the decision about when and how we validate a strategic choice, not a research preference. Who owns that decision—and how it’s resourced—is a leadership responsibility, not a team-level workflow issue.
AI has made us incredibly fast at moving inside the organization. The opportunity—and responsibility—now is to ensure we’re equally fast at moving toward reality. When speed is paired with learning, AI pays off. When it isn’t, it just helps us get confidently wrong sooner.
That’s the distinction that matters.
