AI and the Imminent Death of Capitalism
Successful AGI and capitalism can't coexist. What will happen when AI companies are successful?
Welcome to the Scarlet Ink newsletter. I'm Dave Anderson, an ex-Amazon Tech Director and GM. Each week I write a newsletter article on tech industry careers and tactical leadership advice.
As I mentioned last week, I wanted to write a brief digression about AI and capitalism. It was something I’d been thinking about, and I felt like sharing.
Except as I typed my brief digression, it grew into a full article. So that’s what happened.

What I’m going to talk about is why AI (and the related robotics investments) feels different to me from previous industrial advances. I sense the cracks in capitalism beginning to show.
And I’m delighted that I don’t have the power or responsibility to fix it. Because this feels like an incredibly complex issue to solve.
Post-writing author note: Due to the complexity and weirdness of this topic (that’s a technical term), I invited some tech industry friends to read the draft of this article before publishing to see what I might be missing. Unfortunately, they all agreed with the conclusion. And they said that I probably didn’t emphasize enough how bad things might get.
First, I’m not a crazy hippie.
I’m not a poor young kid who doesn’t believe in selling my time to the man.
I sold my time to the man for quite a profit. I’ve done capitalism well. I got a decent education. I got a good job, then got a better one. I enjoyed it when I grew into a middle management position in a big corporation, which allowed us to save a bunch of money. We could now live off our invested capital if I weren’t already supporting our family through my growing media empire. (That’s what you can call this, right?)
I also think that the competitive force that drives innovation in capitalism is a critical component that makes human culture work. It’s evolution for our financial and production systems.
In fact, it’s a big part of what I think made Amazon successful. We would sometimes have 2-3 teams building the same thing. Amazon leadership wouldn’t decide the winner. They’d just wait and see who made a successful business and then cut off the funding of the other teams. It’s cutthroat but sort of brilliant as well.
Replacing human labor wasn’t always bad.
One of the defenses of potential AI job losses is, “We’ve done this before. It’ll be ok.”
And we have indeed lost human jobs in the past. We replaced plenty of manual agricultural work with machines, and people found new jobs building buggy whips. Then we replaced buggy whips and buggies, and those people got jobs making cars.
The great thing about this progression is that at each stage, our ability to make things scaled. A single farmer could make 2x, and then 5x, and then 10x as much food. This meant that food prices dropped like a stone. In 1947, we spent 23% of our income on food. By 2023, that was down to 7.1%.
This scaling of manufacturing (and corresponding decrease in price) has continued across industries.
Families were excited to be able to afford one car, and yet many homes today have a car for every driver. When I was young, many homes had a single family TV. It was usually color (because I’m not that old), but it was exotic to have more TVs or one of those huge projection TVs. These days, plenty of families have a TV in every major room of their homes. And you can get a 100” TV for significantly less than the price of a 21” TV in the past.
We swim in luxury. Those earning minimum wage frequently walk around with a computer in their pocket, which is connected to the entire world’s information network. It’s pretty awesome how much we’ve improved the standard of living for the developed world.
The fact that anyone still doesn’t have enough food isn’t a manufacturing problem (as it was in the past); it’s a problem of political will. We simply don’t want to solve it; it’s not that we can’t afford to.
Every time we’ve scaled our ability to make goods and services, we’ve increased our consumption. We have TVs for everyone, phones for everyone, cars for everyone, and too much food for most people. By increasing our consumption, we’ve created opportunities for electronics stores and shipping companies and Uber drivers and fast food restaurants.

This time, it’s different.
This is where the rubber meets the road, as they say.
Today’s top tech companies and top investing firms are frantically and excitedly trying to replace all forms of human work.
They’re paying incredible amounts to entice top employees to help them win. Recent articles report that Meta lured away one of Apple’s top AI people with a pay package reportedly worth around $200 million.
Nvidia recently crossed the $4 trillion mark on the back of AI investments and growth, becoming the world’s most valuable company. The next most valuable? Microsoft, Apple, Amazon, Google, Meta. And what do they have in common? They’re all spending tens of billions on AI software and hardware.
Is this a bubble? Are they frantically spending money so they don’t look bad to their shareholders?
Well, that’s the many, many trillion-dollar question. And it comes down to two potential outcomes.
Either someone succeeds in building general-purpose AI (AGI), or they don’t.
As a side note, general-purpose AI has many definitions, but here’s roughly how I’d describe it. Currently, software is written for a very specific purpose. You have doorbell software for Ring smart doorbells, phone call routing software for cell phones, and gradebook software for schools. Every bit of software is written for a single purpose. So every time you want to do something new, you write new software.
The idea behind AGI is that we could build AI that could learn rather than be rewritten. This means that you’d (at a high level) be able to point your AI at a task, and with some feedback it’d figure things out. Like a human does, except potentially faster, better and cheaper.
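To make that distinction concrete, here’s a deliberately toy sketch in Python. Both functions are invented for illustration (nobody ships a gradebook or trains an AI this way): the first is purpose-built software with hand-written rules, the second gets its behavior from examples and feedback, which is the rough idea behind “learning” systems.

# A toy contrast, not real AI: purpose-built rules vs. behavior learned from feedback.

def letter_grade(score: float) -> str:
    # Purpose-built software: a human wrote these rules for exactly one task.
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

def fit_threshold(examples: list[tuple[float, bool]], steps: int = 1000) -> float:
    # "Learning" stand-in: nudge a pass/fail threshold until it matches the
    # (score, passed) examples we were given, instead of hard-coding the rule.
    threshold = 50.0
    for _ in range(steps):
        for score, passed in examples:
            predicted = score >= threshold
            if predicted and not passed:
                threshold += 0.1  # too lenient; raise the bar
            elif not predicted and passed:
                threshold -= 0.1  # too strict; lower the bar
    return threshold

print(letter_grade(84))  # "B" -- fixed behavior, forever
feedback = [(95, True), (88, True), (72, True), (65, False), (40, False)]
print(round(fit_threshold(feedback), 1))  # lands somewhere between 65 and 72

The point isn’t the code; it’s that the second version can be pointed at a different set of examples and pick up different behavior, while the first can only ever do the one thing it was written to do.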
If they don’t build general-purpose AI (AGI)?
Imagine that we continue to slightly improve LLMs for many years, but progress stalls. It turns out we’ve hit a local maximum. That’s not impossible. We don’t know for a fact that current designs are capable of doing much more than they’re doing now. It’s just that, so far, more processing power has meant more intelligence.
But imagine that stops being true. It’d be another tech bubble. Tulip mania, you might say. We’d see a stock market crash at some point once investor sentiment turned sour.
But the world? Capitalism? It could continue as expected.
If they succeed in building general-purpose AI (AGI)?
As I said, these companies are the most valuable companies in the world for a clear reason. You only invest at these valuations if you expect potentially massive returns.
Their stated intent is to build AI that can be used for anything. I can't, in fact, name anything that they don’t expect AI to take over. Except perhaps sports. I don’t know that we’d want to watch robots play baseball. Actually, I would rather not watch baseball either. So I’m not a great judge of how AI might influence sports.
They’re trying to build AI that will do our legal work, accounting, marketing, coding, and art creation. It will create music, drive trucks, and handle customer service.
In the physical world, they’re using the same approaches to try to build general purpose robots. These robots will build homes, clean homes, mine materials, build products, and ship those products to their final destinations.
I’m not technologically ignorant. I know that we’re not there tomorrow, or even next year. I’ll readily agree that many AI tech leaders are extremely optimistic when they announce their projected timelines.
As of today, AI writes mediocre music and drives us around in extremely limited locations, and good luck getting a general-purpose robot to pack an egg into a cardboard box. Doctors can teleoperate surgical robots, but the robots certainly aren’t operating autonomously.
But reaching this AGI step doesn’t require super-advanced, Terminator-level AI that becomes sentient. It just requires AI that can learn and do tasks well enough, roughly the way a human can.
Humans aren’t perfect at driving. But if an AI can be slightly better than humans? And we can improve from there? Then the idea of human drivers as a career path is absolutely doomed. And if AI can be slightly better than humans at writing legal documents? Then human legal document writers are doomed. And if AI can be slightly better than humans at taking out your appendix? You can see where I’m going with this.
We’re doomed, because AI scales.
This is all because AI scales. We can add new hardware, make AI cheaper to run, and have multiple AIs work together to solve problems. The great promise of software has always been that it financially scales well.
Humans are inefficient. We need training. We quit when we get annoyed and have to be replaced. We can’t work 24/7. We want raises. Compared to AI, humans are terrible employees.
The winners in capitalism are those who scale. Small shops become multinational conglomerates. AWS can host millions of websites far more cheaply than any small shop could.
If someone invents AI that can replace human workers in entire industries? That’s the best-scaling business model of them all. Imagine you don’t just build a self-driving AI that takes over every driving job in the world. What if the same AI can also handle mining copper? And cutting meat in a deli? And writing marketing slogans? Woo, all the money in the world for that AI company. Which explains why AI is driving a huge percentage of stock market valuations these days, and why individual people are being paid literally hundreds of millions to build it.
In general, you could argue that current capitalism is ‘winner takes a lot’, where Amazon scales and takes a larger percentage of various markets. And then another company fights to break Amazon’s hold on something like cloud databases. And a startup fights to break into those markets as well.
A generalized AI could be ‘winner takes all’, where it truly does everything. Unfortunately, in this scenario, only other AIs would be cheap / skilled / scaling enough to compete in this marketplace.
This job replacement was fine when we replaced carriage makers with car manufacturing. They were different jobs, but they were jobs all the same.
But what happens if humans are simply obsolete in the job market?
The consequences.
In the past, Walmart heirs might get rich, but the company still employs over 2 million people. We don’t like the pay of those lower-skilled jobs, but at least there are jobs. And even those retail checkout jobs allow you to consume products and services at levels that people in 1900 could only fantasize about. While not perfect, it works.
Current large companies make their money (and got rich) by making humans efficient. The scaling is still human scaling. Agricultural machines allowed a single farmer to generate more value. Amazon fulfillment machines allow a single employee to package up more boxes than past shipping workers. Facebook’s marketing platform allows marketers to reach 100x more customers than they could in the past.
Capitalism is all about deploying money to generate more money. I purchase tiny portions of companies, and in turn I get a portion of their profits (or frequently, I can just sell my share in the company for a profit later as they’ve since grown).
These companies take their revenue and spend it in various ways. They buy more hardware, for example, which makes the hardware companies money. They employ more people, who conveniently spend their paychecks to buy more products from more companies. We used to joke while working at Amazon that we simply reinvested our income back into our own company by buying everything there.
Every improvement in the past made humans generate more value per hour of work. But this AI revolution is all about taking humans out of the equation.
If they’re successful, we’re obsolete as workers. Why would someone with capital spend it on a human when an AI could surely do it better? When AI scales?
Keep in mind that humans are the capital distributors in our economy. How does money work when only the richest individuals (those running AI) make any money? Who will buy all these cheap products?
We can’t have our cake and eat it too.
What I mean by that is that I don’t think inventing AGI and human jobs (capitalism) can coexist.
Because the line we’re repeatedly fed is that they can coexist, and I don’t buy it.
“Our AGI will do all coding and designing and product marketing and blah blah blah. It’ll be PhD level intelligent and do everything faster and better and cheaper than humans.”
Slight pause.
“But don’t worry! Every time in the past when we’ve invented a new technology which destroyed jobs, humans always found something new to do! I’m sure people will create new opportunities for themselves to earn a living. And imagine how wonderful the future will be when everything is so cheap!”
In my mind, we either have AGI or we have jobs. Because, as I said in the above consequences section, any decent AGI means that humans as workers will become an obsolete concept.
Are you familiar with the transitive property? If A = B and B = C, then A = C.
Our AI companies are trying to create AGI. AGI means we don’t have jobs.
This means, through the magic of transitive properties, that a massive percentage of our economy is frantically working to make certain that no humans have jobs in the future. Which feels awfully weird. And concerning.
I have always believed in the ability of smart humans to solve any problem. We certainly have some of the best and brightest working on AI. Unfortunately, the problem they’re working on has a serious consequence.
I don’t want to get carried away. There are colossal potential positives. A true, advanced AGI could lead to the end of hunger, disease, and want. We could live in a utopia where robots serve us while we spend our days on our favorite hobbies. That’d be nice.
But I’m cautious. Industrialization of the world arguably took a couple of hundred years. It was an extremely slow (and painful and bloody) process.
This AI revolution may come much more quickly. And human society isn’t super great at pivoting fast. This year we’re testing Waymo cars in a few new cities. It’s not absolutely impossible that 20 years from now, we’re removing the last human drivers from the roads. And that’s just one industry.
I don’t trust our governments farther than I can throw those geriatric white guys. I could see us not preparing for this upcoming disaster. I’m fairly sure there was a buggy manufacturer somewhere who went bankrupt and starved to death because he couldn’t figure out how to get a car manufacturing job. What happens when all humans find that their meat bodies are simply obsolete?
Looking at things like climate change or national budgets, I don’t think human political systems handle long-term risks very well. I suspect that we won’t come up with a suitable solution until disaster slaps us in the face.
I don’t have any final answers.
For now, I don’t see anything changing.
AI companies will frantically race to ensure that their AI attempts don’t fall behind other AI companies.
Governments will support their tech companies because they’d hate to fall behind as global economic powers.
And we’ll have to cross our fingers and see what happens. Maybe AI improvements will plateau. Maybe governments will begin to test UBI and other methods of dealing with mass unemployment. But we should at least approach the future with our eyes wide open.
As a current Amazon engineer, I'm going to respectfully disagree. And I'm not surprised that your tech industry friends agreed with you.
Since when is AI progress linear? Booms followed by winters are the norm in the space.
I use AI all day, every day, mostly for coding; it's an amazing use case for an LLM. But writing code is the easiest and least differentiating part of being an engineer. It's also the least time-consuming.
So far I've not seen anything that leads me to believe an AI can come up with creative ways to delight customers or save costs, persuade stakeholders to invest and work on that idea, build detailed designs considering multiple teams' constraints, and deploy and monitor that thing safely and responsibly.
Can AI make me more productive as an engineer? Absolutely. It's an amplifier. Maybe that means I need 4 people instead of 8 on my team, so I can combine two teams into one. But eventually there's too much code, too much infrastructure, and too many partners. I think we're due for a long AI winter, during which people can work out how to even integrate current AI into their products (because no, almost no one is doing that successfully today).
Thanks for a cheery start to the week!