The AI Productivity Illusion
People think AI is making us more productive. Every tech conference, every earnings call, every startup pitch deck repeats the same mantra: AI is a transformational productivity tool that will reshape work, unlock human potential, and drive economic growth.
They’re wrong.
What we’re experiencing isn’t productivity. It’s a massive VC-funded subsidy masquerading as technological progress. And when the subsidy ends (and it will end), we’re going to discover that most AI deployments are economic value destroyers, not creators.
This isn’t cynicism. It’s math.
The Economics of AI: A $5,000 Problem Sold as a $100 Solution
Here’s how AI economics actually work in 2026:
You pay $100/month for an AI application. Let’s say it’s a B2B SaaS tool that uses AI to automate customer support or generate marketing content.
That company burns approximately $500/month on compute costs from AWS or Azure to serve you. They’re losing $400 per customer per month, but they’re okay with it because they’re “building market share” and “demonstrating PMF to investors.”
But it gets worse.
Amazon and Microsoft are burning roughly $5,000 per month to acquire GPU capacity from NVIDIA to provide that compute power. They’re eating a 10x loss on every dollar of compute they sell.
So the actual economic stack looks like this:
You pay: $100
Real cost to app company: $500
Real cost to cloud provider: $5,000
Subsidy required: $4,900 per user per month
The only reason you pay $100 instead of $5,000 is that venture capital is absorbing a 50x cost difference. This isn’t a sustainable business model. It’s a transfer payment from Sand Hill Road to end users, mediated through a complex supply chain.
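To make the arithmetic concrete, here is that stack as a quick sketch. All figures are the hypotheticals above, not measured costs:

```python
# Illustrative only: the article's hypothetical per-user monthly stack.
USER_PRICE = 100        # what the end user pays the AI app
APP_COMPUTE_COST = 500  # what the app pays the cloud provider
CLOUD_GPU_COST = 5_000  # what the cloud provider pays for GPU capacity

# Losses absorbed at each layer of the stack
app_loss = APP_COMPUTE_COST - USER_PRICE        # app company's loss per user
cloud_loss = CLOUD_GPU_COST - APP_COMPUTE_COST  # cloud provider's loss per user

# Total subsidy = true cost minus what the user actually pays
subsidy = CLOUD_GPU_COST - USER_PRICE
cost_multiple = CLOUD_GPU_COST / USER_PRICE

print(f"Subsidy per user per month: ${subsidy:,}")       # $4,900
print(f"True cost is {cost_multiple:.0f}x the price")    # 50x
```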
When I explain this to execs, they usually say: “But costs will come down as models get more efficient and GPU supply increases.”
Maybe. But that’s not what the incentive structure suggests.
Why Costs Won’t Fall Fast Enough
The AI cost curve is fighting against three powerful forces:
1. The Capability Treadmill
Every time models get cheaper, companies demand more capability. GPT-3.5 got cheap, so everyone moved to GPT-4. GPT-4 will get cheap, and everyone will move to whatever comes next. Claude Opus costs more than Sonnet, but companies pay the premium because they want the marginal improvement.
This is the same pattern we saw with cloud computing. AWS gets cheaper, but companies just consume more compute. The cost savings get competed away through feature expansion.
2. The Context Window Arms Race
Remember when 4K tokens was a long context window? Now we’re at 200K+. Every doubling of the context window roughly doubles compute costs. As AI applications get more sophisticated (ingesting entire codebases, processing multiple documents, maintaining long conversation histories), context requirements explode.
More context = more compute = higher costs.
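A toy illustration of that claim, assuming compute cost grows linearly with context length. This is a simplification; real serving costs also depend on architecture and batching:

```python
# Sketch: compute cost relative to a 4K-token baseline, assuming
# cost scales linearly with context length (a simplification).
BASE_TOKENS = 4_000  # the "old" long context window
BASE_COST = 1.0      # normalized compute cost at 4K tokens

def relative_cost(tokens: int) -> float:
    """Compute cost relative to the 4K-token baseline."""
    return BASE_COST * tokens / BASE_TOKENS

for tokens in (4_000, 32_000, 200_000):
    print(f"{tokens:>7,} tokens -> {relative_cost(tokens):.0f}x baseline cost")
```

At 200K tokens, the same linear assumption puts you at 50x the baseline compute per request.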
3. The Multi-Modal Explosion
Text was expensive. Then we added images. Now we’re adding video, audio, and real-time processing. Each modality multiplies the compute requirements.
The startups building AI video generators aren’t just burning money on compute. They’re burning industrial quantities of money. Some are spending $10-20M per month on GPU costs alone, pre-revenue.
So no, costs aren’t coming down fast enough to save the economics. If anything, the capability demands are growing faster than the efficiency gains.
The Individual Illusion: Why You Think You’re Productive
At the individual level, AI feels incredibly productive. I use Claude to draft documents, analyze data, and think through complex problems. It genuinely saves me time.
But here’s the mental framework error: I’m confusing personal time savings with economic productivity.
Real productivity means: Output value generated / Resources consumed
When I use AI:
I generate more output (good)
But I consume $500 worth of subsidized resources to do it (bad)
And I only pay $25/month (artificially low)
If the subsidy disappeared tomorrow and my Claude subscription became $300-400/month to reflect true costs, would I still subscribe? Maybe for some use cases. Definitely not for others.
That “maybe” is the entire problem.
The AI productivity narrative works when someone else is paying the real cost. It falls apart when you have to pay it yourself.
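To see the gap between felt productivity and economic productivity, here is a sketch using the numbers above plus one assumption of mine: that the extra output is worth $200/month (purely hypothetical).

```python
# Productivity per the article's definition: output value / resources consumed.
def productivity(output_value: float, resources_consumed: float) -> float:
    return output_value / resources_consumed

MONTHLY_OUTPUT_VALUE = 200  # hypothetical: value of the extra output produced
SUBSCRIPTION_PRICE = 25     # what the user actually pays per month
TRUE_RESOURCE_COST = 500    # subsidized compute actually consumed per month

perceived = productivity(MONTHLY_OUTPUT_VALUE, SUBSCRIPTION_PRICE)  # vs sticker price
actual = productivity(MONTHLY_OUTPUT_VALUE, TRUE_RESOURCE_COST)     # vs real cost

print(f"Perceived productivity: {perceived:.1f}x")  # looks great
print(f"Actual productivity: {actual:.1f}x")        # value destroyed
```

At the sticker price the user appears to create $8 of value per dollar spent; measured against the true resource cost, every dollar of compute returns 40 cents.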
The Enterprise Reality: Where AI Goes to Die
Now scale this individual illusion to enterprises and governments, and the economics get truly horrifying.
I’ve run AI implementations for both startups and large organizations. The difference isn’t just speed; it’s fundamental economics.
Small Company Example:
Company: 50-person B2B SaaS startup
Implementation: AI-powered SEO content engine
Timeline: 30 days from decision to production
Cost: $5K in setup, $3K/month ongoing
Result: 300% increase in organic traffic, 150% increase in qualified leads
ROI: Positive within 90 days
Large Enterprise Example:
Company: International hotel chain, 5,000+ employees
Implementation: Same AI-powered SEO system
Timeline: 3 months from pilot to... nothing
Cost: $8K in consulting, $2K in software, $5K in internal resources
Result: Pilot failed, project canceled
ROI: Negative infinity
What’s the difference? Why did the exact same technology succeed in one context and fail in another?
The Internality Problem: Organizations Optimizing Against Themselves
Traditional economics focuses on externalities: costs imposed on others. Pollution is an externality. Secondhand smoke is an externality.
But there’s another failure mode: internalities. These are costs people impose on their future selves through bad decisions or lack of self-control.
Smokers understand intellectually that cigarettes are bad. But they start anyway because teenagers don’t think about their 50-year-old selves. Then they can’t quit because addiction overrides rational planning.
Large organizations have massive AI internality problems.
Here’s what happens:
Phase 1: Resistance
AI is identified as strategically important
Multiple stakeholders claim ownership
Each department wants control but none want accountability
Governance processes designed for software purchases are applied to AI
Projects get stuck in “alignment” for months
Phase 2: Pilot Purgatory
After 6-12 months, a small pilot is approved
Budget is constrained to minimize risk
Pilot is designed to prove value before scaling
But the pilot is too small to generate meaningful results
And it’s measured on the wrong metrics
Phase 3: The Failure Trap
Pilot shows “promising” but not “transformative” results
Stakeholders disagree on whether to continue
More analysis is commissioned
Original champions get promoted or leave
New stakeholders want “their” approach
Project dies quietly
This pattern repeats across 90% of enterprise AI initiatives. And it’s not because the technology doesn’t work. It’s because large organizations have governance structures that are optimized for preventing bad decisions rather than making good decisions.
The internality is this: Organizations know AI is strategically important. But their decision-making processes make it nearly impossible to deploy AI effectively. They’re optimizing against their own long-term interests.
The Hidden Cost Structure: What Enterprises Actually Pay
Let’s break down what a large enterprise actually spends on AI adoption:
Direct Costs:
Software/API fees: $500K - $2M/year
Infrastructure: $200K - $1M/year
Consulting/implementation: $1M - $5M one-time
Subtotal: $2-8M in year one
Indirect Costs:
Internal resources (product, engineering, ops): $2-5M/year
Training and change management: $500K - $2M one-time
Process redesign: $1-3M
Opportunity cost of executive attention: Incalculable
Subtotal: $4-10M in year one
Total: $6-18M in year one for a meaningful AI transformation program
Now ask: What’s the return?
For most enterprises, it’s murky at best. They can point to efficiency gains (“our customer service team handles 20% more tickets”), but they can’t demonstrate clear ROI.
Why? Because the productivity gains are real but small, while the costs are massive and growing.
And remember: those software/API costs are artificially low because of VC subsidies. If you remove the subsidy, the economics get worse by an order of magnitude.
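The cost ranges above can be totaled, and the subsidy-removal point made explicit. A sketch, with one-time items counted as year-one spend and the “order of magnitude” taken as a 10x rise in software/API fees; the raw sums land near the rounded subtotals above:

```python
# (low, high) dollar ranges from the breakdown above; one-time
# items are counted as year-one spend alongside annual ones.
direct = {
    "software_api":   (500_000, 2_000_000),    # per year, VC-subsidized
    "infrastructure": (200_000, 1_000_000),    # per year
    "consulting":     (1_000_000, 5_000_000),  # one-time
}
indirect = {
    "internal_resources": (2_000_000, 5_000_000),  # per year
    "training":           (500_000, 2_000_000),    # one-time
    "process_redesign":   (1_000_000, 3_000_000),  # one-time
}

def total(ranges: dict) -> tuple:
    """Sum the low and high ends of each range."""
    lows, highs = zip(*ranges.values())
    return sum(lows), sum(highs)

lo = total(direct)[0] + total(indirect)[0]
hi = total(direct)[1] + total(indirect)[1]
print(f"Year-one total: ${lo/1e6:.1f}M - ${hi/1e6:.1f}M")

# If the subsidy ends and software/API fees rise ~10x (the
# "order of magnitude"), only that line item gets repriced:
unsub_lo = lo + direct["software_api"][0] * 9
unsub_hi = hi + direct["software_api"][1] * 9
print(f"Unsubsidized: ${unsub_lo/1e6:.1f}M - ${unsub_hi/1e6:.1f}M")
```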
The Mental Framework: Externalities vs. Internalities
To understand where AI policy needs to go, you need to understand both externalities and internalities.
Externalities are costs imposed on others:
Biased hiring algorithms that discriminate against protected groups
Surveillance AI that erodes privacy for entire populations
Social media algorithms that optimize for engagement at the cost of societal cohesion
Job displacement that creates social costs (unemployment, retraining, safety net)
Internalities are costs organizations impose on their future selves:
Adopting AI too slowly and falling behind competitors
Adopting AI too quickly without proper governance and failing catastrophically
Building dependency on subsidized services that become unaffordable
Reorganizing around AI capabilities that don’t actually deliver value
Good policy needs to address both.
Right now, we’re failing on both dimensions. We’re allowing massive externalities to accumulate (algorithmic bias, privacy erosion, labor displacement) while simultaneously enabling internalities (organizations making bad long-term decisions based on artificially cheap AI).
The VC Subsidy as a Massive Externality
Here’s the key insight: the VC subsidy itself is a form of externality, a cost being dumped on the future.
When VCs fund money-losing AI companies, they’re not being charitable. They’re making a calculated bet that:
Some of these companies will achieve monopoly/oligopoly positions
Once entrenched, they can raise prices to profitable levels
Customers will be locked in and unable to leave
VCs will extract their returns during this transition
But this creates a systemic externality. Organizations are making strategic decisions (reorganizing, retraining, building dependencies) based on artificially cheap AI. When prices rise to sustainable levels, they’ll face a brutal choice:
Pay the real cost and destroy their economics
Rip out AI and lose the capabilities they’ve built around it
Shut down entirely
The companies that moved slowly and cautiously will actually be better positioned than the early adopters. The laggards won’t have built expensive dependencies on subsidized services.
This is the opposite of how technology adoption usually works. Usually, first-movers win. In AI, first-movers are building technical debt on borrowed money.
Why 90% of AI Pilots Fail
The failure rate isn’t a bug; it’s a feature of the underlying economics.
AI pilots fail for three primary reasons:
1. The Value Isn’t There Yet
Despite the hype, AI is still relatively narrow. It’s great at specific tasks (text generation, image recognition, pattern matching) but struggles with:
Complex reasoning across domains
Tasks requiring deep contextual understanding
Situations where errors are costly
Problems requiring true creativity or judgment
Most enterprise use cases fall into these categories. So pilots show marginal improvements, not transformational change.
2. The Integration Costs Are Underestimated
Every AI pilot requires:
Data preparation and cleaning
Integration with existing systems
Workflow redesign
User training and change management
Ongoing monitoring and refinement
These costs are typically 5-10x the software costs. And they’re pure expense: they don’t scale, they don’t compound, they just consume resources.
3. The Organizational Antibodies Are Strong
Organizations are optimized for stability, not change. Every AI implementation threatens someone’s job, budget, or political position. The resistance is rational from an individual perspective, even if it’s destructive from an organizational perspective.
So pilots get slow-rolled, starved of resources, or measured on impossible metrics. The failure is baked in from the start.
The Policy Imperative: Tax and Subsidy as Steering Mechanisms
If we let the market self-correct, we’re looking at a catastrophic adjustment when VC funding slows and AI costs rise to sustainable levels. Enterprises and governments will be blindsided.
We need policy intervention now. And we have two primary levers: tax and subsidy.
The goal isn’t to pick winners or losers. It’s to correct the market distortion and align AI incentives with long-term societal value.
The Subsidy Framework: Where AI Creates Genuine Public Value
Government should subsidize AI deployment where it generates positive externalities—where social benefit exceeds private return.
Healthcare AI: Cancer Detection
Imagine an AI system that improves early cancer detection rates by 20%. The private return to the hospital is marginal (they get paid for scans regardless). But the social return is massive:
Lives saved
Reduced treatment costs (early detection is cheaper)
Increased productivity from healthier population
Reduced suffering
This is a textbook case for subsidy. Government should pay for deployment in smaller hospitals and clinics that couldn’t otherwise afford it.
Education AI: Personalized Tutoring
An AI tutor that adapts to individual learning styles could dramatically improve educational outcomes, especially for disadvantaged students. The private return is limited (parents can’t pay much). But the social return is enormous:
Better-educated workforce
Reduced inequality
Higher future tax revenue
Lower social costs (crime, welfare)
Subsidize deployment in public schools and underserved communities.
Climate AI: Modeling and Optimization
AI for climate modeling, grid optimization, and resource management generates massive positive externalities. The private return is often low or non-existent. But the social value is existential.
Heavy subsidy, broad deployment.
Government Services: Bureaucratic Automation
This is where subsidy makes the most sense. Government has enormous amounts of routine work that AI can automate:
Processing applications
Answering citizen inquiries
Analyzing regulations for compliance
Managing public records
The private sector won’t solve this (no profit motive). But the social value is significant:
Lower taxes (fewer employees needed)
Faster service (citizens wait less)
Better allocation of human workers to complex cases
Modernized government that actually works
This is how you modernize a government in 90 days instead of 10 years. You subsidize AI deployment for routine work and redeploy humans to judgment-intensive tasks.
The Tax Framework: Where AI Creates Social Harm
Tax should be deployed where AI creates negative externalities—where private gain comes at social cost.
Social Media AI: Engagement Optimization
Facebook, TikTok, and YouTube use AI to optimize for engagement. The private return is massive (more engagement = more ads = more revenue). But the social cost is also massive:
Mental health damage (especially in teens)
Political polarization
Misinformation spread
Erosion of shared reality
Tax this heavily. Make companies internalize the social cost they’re externalizing.
The tax structure could be simple: revenue from AI-driven engagement minus documented social value. If you can’t prove your algorithm creates societal benefit, you pay the tax.
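That structure can be sketched in a few lines. The 30% rate and the dollar figures here are placeholders of mine, not proposed numbers:

```python
# Sketch of the proposed engagement tax: the tax base is AI-driven
# engagement revenue minus documented social value. The 30% rate
# is a placeholder, not a proposal.
def engagement_tax(ai_revenue: float, documented_social_value: float,
                   rate: float = 0.3) -> float:
    """Tax owed on AI-driven engagement revenue not offset by
    documented social value; no tax if value covers revenue."""
    taxable = max(ai_revenue - documented_social_value, 0.0)
    return taxable * rate

# A platform with $1B in AI-driven engagement revenue that can
# document only $100M of social benefit:
print(f"Tax owed: ${engagement_tax(1e9, 1e8):,.0f}")

# Prove enough benefit and the liability disappears:
print(f"Tax owed: ${engagement_tax(1e9, 1.2e9):,.0f}")
```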
Surveillance AI: Privacy Erosion
AI-powered surveillance creates a massive negative externality: the erosion of privacy and civil liberties. Whether it’s governments tracking citizens or corporations tracking consumers, the social cost is real.
Tax AI surveillance systems based on:
Number of people surveilled
Sensitivity of data collected
Duration of retention
Scope of inference
Make privacy erosion expensive. Force organizations to internalize the cost they’re imposing on society.
Automated HR: Bias and Discrimination
AI hiring systems have been repeatedly shown to perpetuate and amplify bias. The private return is positive (cheaper than human recruiters). But the social cost is discrimination at scale.
Tax AI HR systems unless they can demonstrate:
Regular bias audits
Transparency in decision-making
Accountability for errors
Measurably better outcomes than human processes
If you can’t prove your system is better than humans, you pay for the risk you’re creating.
Job Displacement Without Transition
When AI automates jobs, companies capture the productivity gains while workers bear the adjustment costs. This is a classic externality.
Tax AI deployments based on job displacement, with exemptions for:
Retraining programs funded by the company
Gradual transition (not mass layoffs)
Creation of new roles requiring human judgment
Demonstrated productivity sharing with workers
The goal isn’t to prevent automation. It’s to make companies internalize the transition costs they’re currently externalizing.
The Tiered System: Carrots and Sticks Based on Impact
The smartest policy approach is a tiered system that combines tax and subsidy:
Tier 1: Public Good (Heavy Subsidy)
Healthcare diagnostics
Educational tools
Climate solutions
Public infrastructure
Government services
Scientific research
Tier 2: Productive Private Use (Tax Neutral)
Manufacturing automation with safety improvements
Productivity tools for knowledge workers
Supply chain optimization
Quality control systems
Infrastructure management
Tier 3: Mixed Impact (Light Tax)
Entertainment AI (no social harm, but no social benefit)
Convenience services
Consumer applications
Gaming and media
Tier 4: Social Harm (Heavy Tax)
Engagement optimization
Surveillance systems
Bias-prone decision systems
Job displacement without transition
Privacy-invasive applications
The tier determines your tax/subsidy rate. The specifics would vary by sector and use case, but the framework is clear: align private incentives with social value.
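The framework reduces to a lookup table. The rates below are placeholders for illustration (negative = subsidy, positive = tax), not proposed numbers:

```python
# Sketch of the four-tier framework as a rate table.
# Rates are placeholders: negative = subsidy, positive = tax.
TIER_RATES = {
    1: -0.50,  # public good: heavy subsidy
    2:  0.00,  # productive private use: tax neutral
    3:  0.05,  # mixed impact: light tax
    4:  0.40,  # social harm: heavy tax
}

def net_policy_cost(ai_spend: float, tier: int) -> float:
    """Net tax (positive) or subsidy (negative) applied to AI spend."""
    return ai_spend * TIER_RATES[tier]

# A $10M/year healthcare diagnostics deployment (Tier 1) vs a
# $10M/year engagement-optimization system (Tier 4):
print(net_policy_cost(10_000_000, 1))
print(net_policy_cost(10_000_000, 4))
```

The same dollar of AI spend nets a $5M subsidy in Tier 1 and a $4M tax bill in Tier 4 under these placeholder rates; the point is the sign flip, not the magnitudes.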
The Governance Challenge: Who Runs This?
AI policy is too complex for any single department. You need:
Treasury: Tax design and collection, subsidy disbursement, fiscal impact analysis
Tech Regulator: Technical standards, audit methodologies, certification processes
Labor Department: Workforce impact monitoring, transition assistance, retraining programs
Economic Policy Council: Coordination across departments, strategic direction, international alignment
Academic/Independent Bodies: Research on impact, unbiased analysis, public transparency
But critically, you need a central coordinating body—call it an AI Economic Council—that has the authority to:
Set strategic direction
Resolve conflicts between departments
Adjust policy based on evidence
Report directly to political leadership
Maintain independence from industry capture
The council should be staffed by people who understand both AI technology and economic policy. Not pure technologists (who undervalue social impact). Not pure economists (who undervalue technical constraints). But people who can think across both domains.
International Coordination: Why This Can’t Be One Country
Here’s the problem: AI is globally competitive. If the US implements smart tax/subsidy policy but China and Europe don’t, we risk:
Companies moving AI operations to lower-tax jurisdictions
Brain drain to countries with looser regulation
Competitive disadvantage for US firms
Inability to capture the benefits of good policy
This requires international coordination. Not full harmonization (different countries have different values), but at least:
Common frameworks for measuring AI impact
Minimum standards for harmful applications
Information sharing on what works
Coordination on subsidies to avoid races to the bottom
The model is climate policy: national implementation with international coordination.
Small countries can actually win here. If you’re Estonia, Singapore, or Israel, you can move faster than the US or China. Implement smart policy, attract AI companies doing genuine public good, and become a hub for high-value AI work.
Size isn’t destiny. Speed and smart policy are.
The Productivity Paradox: Why Cheap AI Might Be Worse Than Expensive AI
Here’s a counterintuitive insight: The VC subsidy making AI cheap might actually be harmful to productivity.
When something is cheap, we use it wastefully. When something is expensive, we use it carefully.
If AI cost its true economic price ($5,000/month instead of $100), organizations would:
Deploy it only where the value genuinely exceeds the cost
Invest more in making those deployments successful
Build sustainable business models instead of dependency traps
Focus on high-value use cases instead of nice-to-haves
The subsidy creates moral hazard. It encourages wasteful deployment, unsustainable business models, and hollowed-out organizations dependent on services they can’t afford.
Raising AI prices to sustainable levels might actually increase productivity by forcing better allocation of resources.
The Coming Reckoning: What Happens When Subsidies End
The VC subsidy can’t last forever. At some point (maybe 2-3 years out, maybe 5-7), the money will tighten and AI companies will need to charge sustainable prices.
When that happens:
Scenario 1: The Soft Landing
Prices rise gradually
Efficiency improvements offset some of the increase
Organizations adjust and maintain critical AI deployments
Less critical uses are eliminated
We end up with more focused, higher-value AI adoption
Scenario 2: The Hard Crash
Prices spike suddenly
Organizations are locked into dependencies they can’t afford
Mass elimination of AI tools
Productivity losses from removing systems people rely on
Economic disruption, job losses in AI sector
Regulatory backlash and knee-jerk policy responses
Which scenario we get depends entirely on whether we implement smart policy now.
If we guide AI adoption toward genuine value creation, we get the soft landing. If we let the subsidy bubble inflate further, we get the crash.
The Bottom Line: What Actually Matters
AI’s productivity boom is largely a VC-subsidized illusion. We’re burning capital to create the appearance of progress.
Real productivity means generating more value than you consume. Right now, AI is doing the opposite at scale.
But this doesn’t mean AI can’t be productive. It means we need to build the economic and policy infrastructure to ensure it actually is.
That requires:
Honest accounting of AI’s true costs and benefits
Smart subsidies for applications with positive externalities
Appropriate taxes for applications with negative externalities
Coordinated governance across departments and countries
Long-term thinking that prioritizes sustainable value over short-term hype
The organizations winning with AI aren’t the ones deploying it everywhere. They’re the ones deploying it strategically, where the value genuinely exceeds the cost.
The countries that will win the AI race aren’t the ones with the most AI companies or the biggest compute clusters. They’re the ones that align AI incentives with long-term societal value.
This is applied AI economics for people who actually run things. Not hype. Not fear. Just the math.
And the math says: We’re building a productivity bubble on borrowed money. The question isn’t whether it will pop. It’s whether we’ll have the wisdom to deflate it gradually, or the foolishness to let it explode.
The Full-Stack Capitalist
writes about AI, economics, distribution, and operator systems. This is economic thinking for people who build and run things: not consultants, not academics, but operators navigating the AI transition in real time.

