I woke up this morning and read an article by Hayden Field in The Verge about the AI free ride being over. It's something I've been thinking about a lot lately, along with the common contention that the days of human work are just about over. I'm deeply skeptical of this and tried to come up with some numbers.
A Series of Bubbles
I was lucky enough to start my career by graduating into the middle of the internet tech bubble. That was ~1997 - 2001-ish. Back then, some really unprofitable companies were being spun up and run because mountains of venture capital (VC) money was chasing the internet craze.
There was a lot of garbage that got funded and then cratered. (Anyone remember Kozmo.com? omg that was the best.) There were some things that came out of it that 20+ years later continue to endure.
One meta thing that happened was that suddenly the world had a way to do knowledge work across timezones. And I remember around 2000 that my peers and I were starting to worry that "outsourcing to India" was going to mean the end of our jobs.
Ultimately, that turned out not to be the case. A lot of work went to India and other places around the globe — but a lot of it stayed, too. There was an idea that "omg just think of how much we'll accomplish by being able to have people working around the planet 24/7". Kinda like the whole idea of the sun never setting on the British Empire. But it didn't quite work out that way — it was more like there was always someone available to help you out with a problem, because chances were high that SOMEONE was awake somewhere.
Around 2008 there was another bubbly moment: the sub-prime mortgage mess. That exploded, and with it came a new bunch of regulatory work. Dodd-Frank compliance and the like.
Why Did the Bubbles Burst?
All of these bubbles popped because some core idea — some kind of logical capstone — crumbled. And when that happened, the finance peeps just didn't want to put any more money into X, Y, or Z. Which makes sense — they're not chuckin' money at things just for their health. They're looking to make a profit and see some kind of return on investment (ROI).
Let's fast forward to the modern day. AI sure seems like a bubble. And everyone in my industry is afraid that AI is going to steal our jobs and every other white-collar, non-trades job on the planet.
But here's the problem: AI companies aren't charging enough to cover their costs and make a reasonable (or sometimes ANY) profit.
This means that AI-inspired layoffs and AI-powered fear are happening based on the assumption that AI inference cost stays at current levels. And those levels have, for all intents and purposes, been free for multiple years.
So we all see these AI chat clients and AI video generators and stuff and it's legitimately amazing. But we aren't paying much or anything for it. Behind the scenes, datacenters are being built and run, and compute capacity is being practically LIT ON FIRE, driven significantly by investor money.
I Was Told There Wouldn't Be Math. (Here's the math.)
So how big is the AI subsidy? Let me show you.
I pay $200/month for Claude's Max 20x plan. And I use the heck out of it. Claude has become my auxiliary brain, editor, and software development partner.
At standard API rates, the quota I get is worth about $2,680/month. That's a 13x subsidy. Anthropic is effectively handing me $2,480 of free tokens every month because their business model encourages me to stay on the platform. For the value that I get out of it, it's an excellent deal.
Now imagine a CEO who wants to swap a human worker for an AI. Even at the subsidized $200/month price, users often complain that they burn through their token quota inside of a couple hours. One person on Reddit said they frequently burned through their 5-hour token quota in 70 minutes. Rounding that out to 90 minutes per 5-hour allotment, covering a full 8-hour workday would take roughly 3.3x more capacity.
At unsubsidized API rates, that's north of $100,000 per year in tokens alone. Per AI "worker." And that's before you count the human who's still required to direct, supervise, and review the AI's work.
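If you want to check my numbers, here's the back-of-the-envelope math as a quick script. The inputs are my own rough, self-reported figures (and one Redditor's anecdote), not anything official from Anthropic:

```python
# Rough inputs from the post -- self-reported, not official pricing math.
plan_price = 200    # $/month for the Claude Max 20x plan
api_value = 2680    # approximate $/month value of that quota at API rates

subsidy_ratio = api_value / plan_price   # dollars of tokens per dollar paid
free_tokens = api_value - plan_price     # effective monthly subsidy in dollars

# One Redditor's 5-hour quota lasted ~70 minutes; round up to 90 minutes.
# Sustaining a full 8-hour workday then takes 5h / 1.5h = ~3.3x the capacity.
capacity_multiplier = 5 / 1.5
annual_cost = api_value * capacity_multiplier * 12   # unsubsidized $/year

print(f"Subsidy: ~{subsidy_ratio:.1f}x (${free_tokens:,}/month in free tokens)")
print(f"Unsubsidized annual token cost per AI 'worker': ~${annual_cost:,.0f}")
```

That lands at roughly $107,200 a year, which is where the "north of $100,000" figure comes from.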
To me, that math doesn't look like "AI is going to steal all our jobs" territory. Granted, it's back-of-the-envelope math, but it looks pretty expensive to me. And once you start doing that math, it's clear the AI industry can't go on like this. They're giving away mountains of compute ("inference") for pennies.
Economic Gravity Kicks In
They're going to HAVE TO start charging people. And when they start charging what it's actually worth, there'll be an inevitable economic reckoning. There's only so long that VC investors are going to subsidize the clear violations of the laws of economic gravity.
The AI world isn't quite running on a freemium model, but once the free version goes away or gets turned way down, fewer people are going to use it. And for businesses, once vendors start charging what it actually costs, it won't be just a minor line item anymore. It'll actually get analyzed, and corporate CFOs are going to start noticing that AI doesn't quite replace human employees.
Plus there's the problem that AI isn't getting cheaper to run the way most tech tends to. Moore's Law doesn't seem to apply here. Yeah, sure...maybe companies start making bespoke chips that are really good at running their models — but that doesn't fix the problem, it only delays the inevitable.
Scalability Realities
In fact, it's worse than that. AI capability gains follow what's called a power law — each meaningful improvement in model quality requires roughly 10x more compute, data, and parameters than the last one. So as the models get smarter, the unit economics get worse, not better. And modern reasoning models make it uglier still — they burn tens of thousands of tokens talking to themselves before producing an answer. Every query. Every user. Forever.
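To make that concrete, here's a toy sketch of how the bill compounds. The 10x-per-step figure is the rough rule of thumb above, not a measured constant:

```python
# Toy model: if each step up in model quality costs ~10x the compute
# of the previous one, the bill compounds geometrically.
step_cost_ratio = 10  # assumed compute multiplier per quality step

for step in range(5):
    compute = step_cost_ratio ** step
    print(f"quality step {step}: ~{compute:,}x baseline compute")
```

Four steps in, you're paying 10,000x the original compute for improvements that feel incremental to the user.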
And then there's the energy bill.
I can write code, diagnose a broken process, coach a team, and review a PR on a bowl of rice and a cup of coffee. My brain runs on about 20 watts. A data center's energy usage is better measured in kilowatts and megawatts. The energy differential between human cognition and AI cognition is off the charts. It's not even close.
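To put a hypothetical number on that gap (the 1 MW figure here is an assumption for illustration, not a measurement of any particular data center):

```python
brain_watts = 20              # commonly cited estimate for the human brain
datacenter_watts = 1_000_000  # hypothetical 1 MW inference cluster (assumed)

ratio = datacenter_watts / brain_watts
print(f"Energy differential: ~{ratio:,.0f}x")
```

Even if that 1 MW guess is off by an order of magnitude in either direction, the gap stays enormous.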
What This Actually Means
Now, I'm not saying that AI is going to go poof. But I think the doom scenarios just plain aren't going to play out in the same way that everyone thinks.
In times of change, always try to think about what stays the same. After running the numbers, I'd say it'll still be pretty darned recognizable.
Humans aren't gone yet.