The 80% Problem

Why Most Leaders Think They’re Using AI — And Why That’s the Dangerous Part

Joerg with Claude, 3.3.2026

A friend of mine runs a business in the classifieds industry. He attended an industry conference recently — the kind where the CEOs and managing directors of major marketplace businesses come together to compare notes, share strategies, and talk about what’s next. During one of the sessions, someone ran a live poll. The question was simple: how are you using AI in your daily work?

Fewer than 20% of these leaders — people running significant digital businesses, people whose platforms literally depend on matching supply and demand through algorithms — were using AI for anything beyond search and text improvement. Better queries. Nicer copy. The remaining 80% were, functionally, using the most powerful technology shift of the decade as a spell-checker with ambition.

That number stopped me. Not because it surprised me — it didn’t. But because I recognized it in myself, in my colleagues, in the founders I talk to every week. We’ve all started saying we “use AI.” But what most of us mean by that is something far more shallow than what’s actually possible. And the gap between those two things is where the real story is.

The Illusion of Adoption

“I use AI” has become the new “I have a website.” It’s technically true for almost everyone, and it tells you almost nothing about how someone actually operates. In the early 2000s, having a website could mean you had an e-commerce engine processing millions of transactions — or it could mean you had a digital brochure someone built in FrontPage. Both were “online.” The difference in business impact was a hundredfold.

We’re in the same moment with AI. A CEO who asks ChatGPT to polish a board update email and a CEO who has AI woven into how their team makes decisions, creates content, processes information, and evaluates opportunities are both “using AI.” But they’re operating in fundamentally different universes.

The problem isn’t that people are doing it wrong. The problem is that most people don’t realize there are levels. They’ve reached the first plateau and mistaken it for the summit.

Three Levels of AI Integration

I find it useful to think about AI adoption in three levels. Not as a rigid hierarchy, but as a way of asking yourself an honest question: where am I, really?

Level 1: AI as a tool. This is where most people are. You use AI the way you’d use a better search engine or a writing assistant. You ask it questions. You have it clean up text. You might use it to summarize a document. It saves time on discrete tasks, but it hasn’t changed how you think about your work. Your workflows, your decision-making, your creative process — all essentially unchanged. AI is bolted on to the existing way of doing things. This is the 80% from that conference room.

Level 2: AI as a workflow. Here, AI is embedded in how you operate. It’s not a tool you pick up for one-off tasks — it’s part of the rhythm. You might use it to prepare for every meeting, to process inbound deal flow, to draft first versions of analysis that you then sharpen with your own judgment, to run scenarios before making decisions. The difference is integration. AI shapes the process itself, not just individual outputs within it. Fewer people are here than claim to be.

Level 3: AI as a strategy. This is where it gets genuinely different. At this level, AI doesn’t just change how you do what you do — it changes what you can do. You’re building new capabilities that weren’t possible before. You’re rethinking what your organization should even be doing, because the frontier of what’s achievable has shifted. Klarna didn’t just use AI to write better customer service responses — they rearchitected how customer service works as a function. Shopify’s CEO told his team that proving a job can’t be done by AI is now a prerequisite before requesting new headcount. That’s not using a tool. That’s reorganizing around a new reality.

The Compounding Gap

Here’s what makes this urgent: the distance between the levels isn’t static. It’s compounding.

Every day a team operates at Level 2 or 3, they learn something about how to use AI better. They discover edge cases, develop intuitions, build institutional knowledge about what works. Their prompts get sharper. Their integration gets deeper. Their sense of what’s possible expands. This compounds. A team that’s been operating at Level 2 for six months has a qualitative advantage that’s difficult to replicate by starting later — not because the technology is gated, but because the organizational learning is.

Meanwhile, teams at Level 1 aren’t standing still in any absolute sense. They’re getting incrementally more productive — their emails are a bit better, their research is a bit faster. But relative to what’s possible, they’re falling behind at an accelerating rate. And the dangerous part is: they don’t feel it. Level 1 feels productive. It feels modern. You’re “using AI.” The feedback loop that would tell you you’re underperforming doesn’t kick in until someone at Level 2 or 3 starts competing with you directly.

I saw this exact pattern with mobile. The companies that treated mobile as “a smaller screen for our website” versus the ones that rethought their product around mobile-native behavior — the gap opened slowly, then all at once. We’re in the “slowly” phase with AI right now. The “all at once” part is coming.

AI Doesn’t Lower the Bar — It Raises the Stakes

I’ve been thinking a lot recently about where excellence lives in this new environment. What I keep coming back to is this: AI doesn’t make excellence easier. It makes mediocrity faster.

When anyone can generate a functional first draft, a decent analysis, a passable strategy document — the floor rises. But the ceiling rises faster, and only for the people who bring real judgment to the process. Taste. Conviction. The ability to recognize when something is good enough versus when it’s genuinely right. These human qualities become more valuable, not less, as execution gets cheaper.

This is the paradox of AI in professional work: the people who benefit most from it are the ones who need it least in any naive sense. A senior leader with 20 years of pattern recognition and deep domain expertise, paired with AI, becomes extraordinarily powerful — because they know what to ask for, they can evaluate what comes back, and they can direct the iteration toward something that actually matters. A junior person using AI can produce impressive-looking outputs without the judgment to know whether those outputs are substantively right. The appearance of competence diverges from actual competence.

This means that AI integration isn’t a technology question. It’s a judgment question. And that changes who needs to lead the adoption. It can’t be delegated to the IT department or the “innovation team.” The people who need to be using AI most aggressively are the ones with the most experience and the sharpest judgment — the leaders themselves. Which brings us back to that conference room, and the 80%.

The Honest Mirror

I want to be honest about something: I’m not writing this from the summit. I’m somewhere on the path between Level 2 and Level 3, and on some days I slip back to Level 1. I see the same pattern in my colleagues, in the investors I respect, in the founders who are building the future while using yesterday’s workflows to manage their own organizations.

This piece itself is an example of what Level 2 looks like in practice. It started with a conversation with a friend. The observation lodged somewhere. I brought it to Claude — not to “write an article” but to think through a framework. We went back and forth. The structure emerged from dialogue, not from a prompt. The judgment about what mattered, what was honest, what was worth saying — that was mine. The synthesis speed, the structural pressure, the ability to iterate on the argument — that was the AI. Neither of us could have produced this alone. That’s not a footnote about process. That’s the whole point.

The question isn’t whether you should use AI. That debate is over. The question is whether you’re using it in a way that actually changes how you work — or whether you’ve adopted the language of AI integration while leaving your actual operating system untouched.

What Moving Up Actually Looks Like

A few patterns I’ve noticed in the people and organizations that are genuinely operating at Level 2 and above:

They’ve moved from occasional use to habitual use. AI isn’t something they open for special occasions. It’s the first thing they reach for in the morning. They prepare for meetings with it. They process complex information through it. They think alongside it. The shift from “I should try AI for this” to “Of course I’m using AI for this” is the fundamental transition.

They’ve invested in their own context. The most effective users have built persistent context — custom instructions, knowledge bases, workflow templates that carry their institutional knowledge into every interaction. They’re not starting from scratch each time. The AI knows their style, their frameworks, their decision criteria. This is the difference between using a taxi and owning a car that knows your commute.

They’ve stopped treating AI outputs as final. Level 1 users accept or reject AI outputs. Level 2+ users treat them as first drafts in a conversation. They push back. They refine. They ask “what are you missing?” and “what would the counterargument be?” The quality of the output is directly proportional to the quality of the human judgment directing it.

They’ve made it organizational, not personal. The most impactful adoption I’ve seen isn’t one person using AI brilliantly. It’s a team that has collectively rethought how they work. Shared prompts. Shared workflows. Shared understanding of where AI helps and where human judgment is irreplaceable. The individual gains are linear. The organizational gains compound.

The Shadow Side

A few honest caveats, because frameworks that don’t name their own failure modes aren’t frameworks — they’re sales pitches.

The level framework can become its own performance. People love categorizing themselves, and there’s a risk that “level” language becomes another thing to signal rather than something to honestly assess. A leader who describes elaborate AI workflows but still makes all their real decisions the old way hasn’t moved up. They’ve just gotten better at describing Level 2.

Speed without direction just burns fuel faster. AI makes everything faster, including going in the wrong direction. Organizations that integrate AI deeply into bad strategies will execute those bad strategies more efficiently. The technology amplifies whatever’s already there — which means judgment about what to do matters even more than before. AI integration without strategic clarity is a performance multiplier applied to the wrong function.

Not everything needs AI. There are conversations, decisions, and creative acts where the value is in the slowness. In the human friction. In the unoptimized moment where something unexpected happens. Over-integrating AI can smooth out exactly the textures that make work meaningful and organizations human. The art is knowing which processes to accelerate and which to protect.

The Real Question

The technology will keep advancing. The tools will get better, cheaper, more capable. None of that is within our control. What’s within our control is the question that room full of classifieds leaders was really asking, even if they didn’t frame it this way: do we have the honesty to admit where we actually are, and the discipline to close the gap?

The 80% problem isn’t a technology problem. It’s a leadership problem. The same leaders who would never accept being in the bottom quintile of their industry on revenue growth or customer satisfaction are comfortably in the bottom quintile of AI integration — and they don’t know it, because everyone around them is in the same place.

That’s what makes this moment both uncomfortable and exciting. The bar is still low enough that genuine commitment can create real distance. But it won’t stay low for long. The compounding has already started.

Companion piece to: Where Does Excellence Live? (Joerg with Claude, 3.3.2026)
