In the last post, I introduced the idea of generative versus extractive thinking. Generative asks "what are we building?" Extractive asks "how do I get more from what I have?" Both are necessary. But we've been stuck in extractive mode for 25 years, and it's no longer doing us any favors.
Now I want to talk about the elephant in the room: AI.
Because here's a thing I've been saying to people lately that seems to light them up in a way I didn't expect:
AI is fundamentally extractive.
That's not an insult. It's not positive or negative. It's a fact and it's a design constraint. And if you don't understand it, you're going to make some misguided decisions and potentially expensive mistakes.
What an LLM Actually Is
An LLM — a large language model, the thing under the hood of ChatGPT and Claude and Copilot and all the rest — is, at a fundamental level, a machine trained on the sum of what humans have already created. Every book. Every blog post. Every Stack Overflow answer. Every GitHub repo. Every Wikipedia article.
It takes all of that existing human thought and it recombines it. It synthesizes. It pattern-matches across an enormous corpus of stuff that already exists. And it's extraordinarily good at this. Crazy good. The things it can do with existing knowledge are genuinely amazing.
But every metaphor it produces, every framework it suggests, every connection it makes — those were assembled from pieces that other humans originated. It makes "new" things by taking a starting point (the prompt) and then looking at the probability of the next thing, and the next, and the next. It is, at its core, an extraction engine that runs on randomness and probability. The most sophisticated extraction engine ever built.
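That loop — start from a prompt, sample the probable next thing, repeat — is simple enough to sketch. This is a deliberately toy model (a hand-written table of next-word probabilities, not a neural network), but the sampling loop is the same shape as what a real LLM does at generation time:

```python
import random

# Toy "model": for each word, the probabilities of what comes next.
# A real LLM learns these distributions with a neural network over a
# huge vocabulary and long contexts; this table is made up for illustration.
NEXT = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.2, "ran": 0.8},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def generate(prompt, seed=None):
    """Start from the prompt, then repeatedly sample the next word
    from the probability table until we hit the end marker."""
    rng = random.Random(seed)
    tokens = [prompt]
    while tokens[-1] != "end":
        dist = NEXT[tokens[-1]]
        words, weights = zip(*dist.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(generate("the", seed=42))
```

Notice what's in the code and what isn't: everything it can ever say was already in the table. The randomness produces variety, but never a word that wasn't put there by someone else. That's the extraction.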
It can riff on everything that exists. But it (probably) cannot originate what comes next on its own.
If you remember the improv comedy idea from last time: AI is the world's greatest "yes, and" performer — but pretty much only for scenes that have already been written or are at least well-understood. It can take everything humanity has ever said and done and find brilliant new combinations. What it can't do is walk on stage with something nobody's ever seen before.
That has to come from a person.
The Formula
This leads to something I think about constantly:
Generative human + extractive tool = something neither could produce alone.
Extractive human + extractive tool = a very efficient path to probably nowhere.
Same tool. Radically different outcomes. The variable isn't the AI. It's the human.
An organization that uses AI to amplify human creativity — to organize ideas, to find connections, to accelerate the stuff that humans originate — that organization is going to do extraordinary things. The AI helps the humans go faster, think more clearly, connect more dots. But the spark — the curiosity, the lived experience, the observation that no model could originate because it comes from actually being alive in the world — that comes from the person.
An organization that uses AI to replace human thinking — "let the AI generate the content, let the AI write the code, let the AI do the thinking" — that organization is asking an extractive tool to be generative. And they're getting exactly what you'd expect: competent, soulless output. Fast, efficient, going nowhere interesting.
Ideas are about to become more valuable than ever.
Oh and also...garbage in, garbage out.
Garbage In, Garbage Out (At Scale)
Here's where this gets practical.
I've been a software consultant for 28 years. I walk into companies in trouble — delayed projects, architectural disasters, teams that can't ship. And there's a pattern I see everywhere right now: companies trying to layer AI on top of not-so-great processes.
Their requirements are vague. Their priorities are unclear, which leaves the organization and its teams unfocused. Their teams can't agree on what "done" means. Their work-in-progress is out of control. Their deployment pipeline is held together with duct tape and prayers. Communication skills are iffy all around.
And now they're adding AI coders to the mix.
Here's what I tell them: AI helps you go faster. But if you've got a problem in your software development process, AI's just going to deliver those problems faster.
Bad requirements, fed to an AI coder, produce bad code. Granted, that bad code gets delivered impressively fast, but it's still bad code. Weak prioritization, amplified by AI productivity, produces a flood of features that maybe no one needed or asked for — really fast. An untestable architecture, with AI-generated code pouring into it, becomes a faster-moving dumpster fire.
AI doesn't fix garbage in, garbage out. It automates it.
If you know Goldratt's Theory of Constraints, you know what happens next: once you optimize the current constraint, the bottleneck shifts somewhere else.
The Vibe Coding Fallacy
There's a term floating around — "vibe coding." The idea that you can just describe what you want to an AI, and it'll build it for you. And for certain things, this actually works. If you're prototyping a small, new application from scratch, vibe coding is real. You can move incredibly fast. It's genuinely exciting.
But here's what nobody's talking about: vibe architecture isn't real. Vibe maintenance isn't real. Vibe debugging isn't real. Vibe backlog refinement isn't real. Vibe generating new ideas isn't real.
The moment you move past "build me a small, well-defined, well-understood new thing" into "maintain this thing, evolve this thing, fix this thing, figure out what to build next" — all the hard problems that were always hard remain hard. They might even be harder, because now you've got a codebase that was generated fast by a tool that doesn't really understand your business context, your users' actual needs, or the twelve technical constraints nobody wrote down.
I call this "comprehension debt." It's technical debt's evil twin. The code works. The tests pass. But nobody really knows why it works the way it does. Six months from now, when that feature needs to change? Good luck.
The AI that wrote it doesn't remember writing it. It's not even the same AI. Every time you give an agent a task, you get a new instance of the agent. Whatever is behind that chat interface is totally new — no real memory of your codebase, your team's conventions, or the twelve conversations that shaped the original design.
Imagine onboarding a new contractor for every single task. That's pretty much what you're doing.
But actually, it's even a little bit worse than that. It's like getting disconnected from a customer service call, calling back to the same 800 number, and getting a completely different agent. Maybe not even in the same call center. And if you're lucky, there's a note in your account... but in all likelihood you're reestablishing the context from scratch.
Every task. Every time.
And the humans who reviewed that code? Since they didn't write it, they may never have truly understood it in the first place.
Coding ceases to be the constraint. Everything around the coding activity becomes the constraint.
The Part Nobody's Talking About
Here's the conversation I keep having with CTOs and VPs of Engineering and tech leads:
"We're going to spin up AI coding agents. They'll crank out features. We'll ship faster."
And I say: "Great. Who's reviewing the code?"
Silence.
Because you've got 25 developers who are already struggling to coordinate with each other. Now you're adding 20 AI agents to the mix. Each one creating branches, generating code, requesting reviews. Your pull request queue just went from manageable to 'we need to hire three people just to review AI output.'
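The arithmetic behind that queue is worth doing out loud. Every number here beyond the 25 developers and 20 agents is made up for illustration — plug in your own:

```python
# Back-of-envelope review-capacity math. All rates are assumptions;
# substitute your team's actual numbers.
developers = 25           # humans, who also do the reviewing
ai_agents = 20
prs_per_dev_week = 3      # PRs a human opens per week (assumed)
prs_per_agent_week = 10   # PRs an agent opens per week (assumed)
reviews_per_dev_week = 6  # thorough reviews a human can do per week (assumed)

incoming = developers * prs_per_dev_week + ai_agents * prs_per_agent_week
capacity = developers * reviews_per_dev_week

print(f"PRs opened per week:   {incoming}")
print(f"Reviews per week:      {capacity}")
print(f"Queue growth per week: {incoming - capacity}")
```

With these made-up rates, the queue grows by more than a hundred PRs every week. The exact numbers don't matter; the shape does. Reviewing, not coding, is now the constraint.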
And even if they review it — even if they catch the bugs and verify the logic — they won't understand it the way they'd understand code they wrote themselves. You're accumulating comprehension debt with every merged PR.
Downstream of review is actually verifying the feature. Do you trust the unit tests the AI created? Are there hidden performance problems? When a human tester checks that feature, does it actually, ya know, work?
So even if your coders understand the contents of the pull requests, can your testers keep up? And then after that, are you delivering faster than your stakeholders can absorb?
In short, are you shipping features faster than anyone can tell whether they're the right features? Because building the wrong things really efficiently isn't success. It's just faster failure.
This is extractive thinking applied to an extractive tool. It's optimization squared. And it produces exactly what you'd expect: faster coding output, roughly the same problems, and quite possibly new problems on top.
So What's the Generative Use of AI?
I use AI every day. Extensively. And I think it's one of the most powerful tools I've ever encountered. But the way I use it looks very little like what most organizations are talking about.
I use AI as a thinking partner. I bring the ideas — from 28 years of consulting, from conversations with clients, from observations about what's working and what's broken. The AI helps me organize those ideas, find connections between them, structure them into something coherent. It's the auxiliary brain I've always wanted. It helps me think faster and more clearly about things I've already observed.
This essay series? I'm using AI to help me write it. (It's the best editor ever.) But every insight in here — the Gen X Eeyore thing, the improv comedy connection, the accountant who can't create revenue, the observation that people have stopped raising their hands in retros — those came from being a human in the world for 50-some years. The AI helped me organize and articulate them. It helped mine the ore. But it didn't dig the mine. And it didn't even decide to found the mining company.
In short, nothing would have happened if I hadn't had the idea and started the conversation. And that's the right relationship. The human brings the generative energy. The AI provides extractive support. Together, you get something neither could produce alone.
But flip it — ask the AI to be generative while the human just manages the output — and you get the gray building from Essay 1. Packing paperclips into boxes under fluorescent lights. All output. No life.
The Question Every Leader Needs to Ask
Before you add AI to your team, to your process, to your product — ask this:
Are we using AI to amplify something generative? Or are we using AI to accelerate something extractive?
If your organization already has clear vision, curious people, good requirements, and a culture where people raise their hands — AI is going to be incredible for you. It'll amplify all of that virtuous goodness.
If your organization is numb, running on fumes, optimizing what's left of the last good idea someone had three years ago — AI is going to automate the numbness. Faster. More efficiently. With better dashboards. Supercharged numbness.
The tool doesn't determine the outcome. The orientation does.
What Comes Next
This is Essay 2 in a series. I'm going to get more practical from here — specific patterns I see in struggling organizations, the diagnostic questions I ask when I walk in the door, and the leadership and communication skills that actually move teams from extractive to generative.
But I wanted to establish this idea first, because it changes how you think about everything else: AI is the most powerful extractive tool ever built. That makes it extraordinarily valuable. And it makes the generative capacity of your humans — their curiosity, their ideas, their ability to present new information — more important than it's ever been.
The scene still needs new information. AI can't provide it. Your people can.
If they haven't stopped raising their hands.
—Ben