AI Speed Isn’t the Goal. Decision Confidence Is.

Over the last year, AI-enabled design tools have meaningfully changed how quickly teams can produce artifacts. We can generate flows, screens, variants, and prototypes at a pace that would have been unthinkable even eighteen months ago. That progress is real—and worth celebrating.

But speed, on its own, is not a strategy.

What I’m seeing in executive conversations is genuine excitement about velocity, paired with a lack of precision about what kind of speed we’re actually buying. Most AI enthusiasm today is enthusiasm for production acceleration—how fast teams can make things. The harder question is whether that production speed is translating into decision speed—faster, better decisions grounded in user reality.

Those two are not the same.

Production speed reduces the cost of making artifacts. Decision speed reduces uncertainty at the moment choices are made. One improves throughput. The other improves outcomes. Confusing them creates a subtle but expensive risk: teams feel confident sooner without actually being more correct.

There’s an unspoken assumption embedded in many AI conversations: Faster making → faster iterating → better outcomes.

What’s missing from that loop is the actor.

When we say “iterate faster,” who are we iterating with—users, or ourselves?

AI dramatically increases internal velocity: how quickly teams align, revise, and move ideas forward inside the building. What it does not automatically increase is external learning: how quickly assumptions are validated with real customers, in real contexts, with real consequences.

That gap matters. Internal velocity without external learning doesn’t create insight—it creates rehearsal. Teams move quickly, confidently, and coherently… in directions that may or may not map to reality.

This gap between internal velocity and external learning shows up wherever an organization depends on functions that carry signal from outside the building—UX, customer success, sales, support, research. These roles exist precisely to create grip: the connection to reality that keeps speed from becoming drift. I’ve spent twenty-five years on the UX side of this equation, and the pattern is consistent. When production timelines compress, the functions that slow down to listen get pressure to speed up and match. But that’s a category error. Their value isn’t in keeping pace—it’s in keeping contact.

Which raises the real question: AI is a force multiplier—but what are we multiplying? Insight, or opinion?

If AI is accelerating decisions that are grounded in validated user understanding, that’s leverage. If it’s accelerating decisions that haven’t been tested outside the room, that’s confidence debt. And confidence debt, like technical debt, always comes due—just at a less convenient time. I’ve seen this pattern across fintech, insurance, and enterprise software: teams ship in half the time, celebrate the velocity, then spend three quarters untangling assumptions that were never tested. The rework isn’t dramatic—it’s quiet. A feature that doesn’t get adopted. A flow that requires constant support intervention. A roadmap that keeps revisiting the same problem because it was never actually solved.

There’s a simple diagnostic leaders can apply immediately: If an iteration loop doesn’t include a real user signal, we’re not iterating—we’re rehearsing.

For any AI-accelerated design work, we should be able to answer three questions:

  1. What assumption are we testing?
  2. Who outside this room can confirm or refute it?
  3. How quickly will that signal come back?

Those aren’t UX questions. They’re governance questions. When AI compresses production timelines, the assumptions embedded in what we’re building get locked in faster. That makes the decision about when and how we validate a strategic choice, not a research preference. Who owns that decision—and how it’s resourced—is a leadership responsibility, not a team-level workflow issue.

AI has made us incredibly fast at moving inside the organization. The opportunity—and responsibility—now is to ensure we’re equally fast at moving toward reality. When speed is paired with learning, AI pays off. When it isn’t, it just helps us get confidently wrong sooner.

That’s the distinction that matters.

Why UX Work Still Struggles to Influence Decisions

I’ve spent twenty-five years leading UX across fintech, cybersecurity, enterprise software, and insurance tech. And I keep seeing the same pattern: good UX work—solid research, thoughtful design, real effort—fails to shape decisions in meaningful ways.

It’s not because the work is bad. It’s because most organizations aren’t ready for what the work asks of them.

I’ve watched teams genuinely invest in discovery, engage in design exploration, even agree with what they’re seeing—only to quietly move forward with the original plan anyway. Not because they don’t care. Not because they don’t “get UX.” But because uncertainty is uncomfortable, and most organizations are built to resolve discomfort quickly.

UX introduces pause. Organizations reward momentum.

When UX work creates tension—between speed and rigor, roadmap and reality—what I usually see isn’t outright rejection. It’s erosion. Insight gets acknowledged but softened. Design intent gets diluted. Decisions get reframed as “pragmatic” when they’re really just familiar.

Early in my career, I thought the answer was more research. Clearer artifacts. More “actionable” deliverables. I’ve learned that volume isn’t the issue. Capacity is.

Capacity to sit with ambiguity. Capacity to question assumptions without panicking. Capacity to let understanding actually change direction.

You can see the breakdowns when that capacity isn’t there:

  • Research lives in decks, not in decisions
  • Design intent makes sense in concept but falls apart in delivery
  • Teams want solutions before they’ve aligned on the problem

Those aren’t process failures. They’re human ones.

Organizations, like people, develop coping mechanisms under pressure. Metrics, velocity, and quick decisions start to feel safer than slowing down to think. Certainty becomes more comfortable than clarity.

What I’ve learned works differently

UX doesn’t need to fight harder to be heard. UX needs to function as infrastructure, not output.

I think about it as three connected capabilities:

Understanding is where we listen—really listen—to what’s true for users, even when it’s inconvenient. Research surfaces patterns, tensions, and signals we might prefer not to see. This breaks down when teams hear insights but aren’t ready to sit with them.

Interpretation is where understanding becomes shared meaning. Design frames the problem, makes tradeoffs visible, and turns insight into intent. This breaks down when we rush to solutions before open questions are actually resolved.

Follow-through is the hardest part. It’s about protecting intent when pressure arrives—deadlines, scope, competing priorities. This is where leadership matters most. When stress rises, meaning either holds or dissolves.

UX rarely breaks at handoffs. It breaks at sense-making gaps—when research is acknowledged but not integrated, when design is appreciated but not protected, when delivery optimizes speed over understanding.

The real work

The organizations I’ve seen do this well share one thing: leadership willing to tolerate discomfort long enough for insight to become understanding.

UX maturity isn’t about process sophistication or headcount. It’s an organization’s ability to make meaning together—without panic, ego, or false certainty.

That’s not a tooling problem. It’s a leadership one.

And after twenty-five years, I’m more convinced than ever that building that capacity is the actual work.