As a current Amazon engineer, I'm going to respectfully disagree. And I'm not surprised that your tech industry friends agreed with you.
Since when is AI progress linear? Booms followed by winters are the norm in the space.
I use AI all day, every day, mostly for coding; it's an amazing use case for an LLM. But writing code is the easiest and least differentiating part of being an engineer. It's also the least time-consuming.
So far I've not seen anything that leads me to believe an AI can come up with creative ways to delight customers or save costs, persuade stakeholders to invest and work on that idea, build detailed designs considering multiple teams' constraints, and deploy and monitor that thing safely and responsibly.
Can AI make me more productive as an engineer? Absolutely. It's an amplifier. Maybe that means I need 4 people instead of 8 on my team, so I can combine two teams into one. But eventually there's too much code, too much infrastructure, and too many partners. I think we're due for a long AI winter, during which people can work out how to even integrate current AI into their products (because no, almost no one is doing that successfully today).
I'm not necessarily disagreeing. I have a suspicion that the current designs of LLMs can replicate text / code / images / video without understanding them, which means you'll get some cool parrot effects, but not true creativity.
What I was getting at, in some ways, is that we all might be better off if my suspicion is right (which generally would lead to AI companies crashing & burning at some point, but at least capitalism is retained).
You might not have seen AI come up with such "creativity" or the other things you described because you are likely working with a general-purpose LLM trained on the data/code of the world, not one steeped in high-quality Amazon internal knowledge.
What you described looks to me like exactly what an LLM (or, better still, a small language model, SLM: https://arxiv.org/pdf/2506.02153) can do. If it is trained or fine-tuned on hyper-specific industry or company/organization data and patterns, and invoked at runtime with appropriate contextual data, it could orchestrate everything you mentioned above. That is basically what an engineer or a set of engineers (say, L6s or L7s) do in their brains; there is nothing more to it. A general-purpose LLM just may not be aware of that context deeply enough to generate a meaningful response.
If the model is trained/fine-tuned with:
1. The expected mental/cultural model, e.g. Amazon LPs and bar-raising examples of successfully using such LPs within an organization to drive business impact.
2. Innovative success stories of delighting customers and saving costs, and the process behind them (looking around corners, decision records, etc.), not just the final result.
3. Amazon organization hierarchy, people, teams, roles, levels, and responsibilities.
4. Organization customers, personas, budgets/spending, pain points, etc.
5. Team products and services, and the dependency graph from RIP/RMS, etc.
6. Stakeholders and their metadata from phonetool, e.g. Director of XYZ responsible for ABC, Senior Engineer in the ABC team responsible for Product foo and Service bar, PM in the DEF team, etc.
7. Awareness of internal tools and how to call them, e.g. meetings.amazon, Chime, Slack, SIM, Gitfarm/CRUX, Pipelines and Pipeline agents, Dogma, launch manager, etc.; awareness of processes, e.g. OP1/OP2; awareness of mechanisms, e.g. Working Backwards, 1-pagers, PR/FAQs, etc., with many examples of each, both good and bad.
8. What safe deployment and monitoring mean for specific service pipelines, etc.
And if you invoke it with the right input data, it will do something like what you described; a rough sketch of that setup is below.
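To make that concrete, here is a minimal sketch in Python of what I mean, assuming a hypothetical prompt/completion JSONL fine-tuning format and entirely made-up internal records (the record names, fields, and context values are illustrative assumptions, not real Amazon systems or APIs): one part assembles the organization-specific corpus, the other injects contextual data at invocation time.

```python
# Hypothetical sketch: build a fine-tuning corpus from org-specific records
# and assemble a runtime prompt that injects organizational context.
# All names and fields below are illustrative, not real internal systems.
import json
from dataclasses import dataclass
from typing import List


@dataclass
class OrgRecord:
    """One piece of internal knowledge (an LP example, decision record, etc.)."""
    category: str   # e.g. "leadership-principle-example", "decision-record"
    prompt: str     # the situation as an engineer would frame it
    response: str   # the reasoning/outcome we want the model to learn


def to_finetune_jsonl(records: List[OrgRecord], path: str) -> None:
    """Write records in a simple chat-style JSONL format for fine-tuning."""
    with open(path, "w", encoding="utf-8") as f:
        for r in records:
            f.write(json.dumps({
                "messages": [
                    {"role": "user", "content": f"[{r.category}] {r.prompt}"},
                    {"role": "assistant", "content": r.response},
                ]
            }) + "\n")


def build_runtime_prompt(task: str, org_context: dict) -> str:
    """Inject org hierarchy, dependency graph, stakeholders, etc. at call time."""
    context_block = json.dumps(org_context, indent=2)
    return (
        "You are an engineering assistant for this organization.\n"
        f"Organizational context:\n{context_block}\n\n"
        f"Task: {task}\n"
        "Propose a design, the stakeholders to persuade, and a safe rollout plan."
    )


if __name__ == "__main__":
    records = [
        OrgRecord(
            category="decision-record",
            prompt="Checkout latency regressed after the cart-service migration.",
            response="Root cause was N+1 calls to the pricing service; fixed by "
                     "batching and adding a p99 latency alarm before full rollout.",
        ),
    ]
    to_finetune_jsonl(records, "org_finetune.jsonl")
    print(build_runtime_prompt(
        task="Reduce cart abandonment for mobile customers",
        org_context={"team": "Checkout",
                     "dependencies": ["pricing", "inventory"],
                     "stakeholders": ["Director of XYZ", "PM in DEF team"]},
    ))
```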
And guess what: I have tested this idea with Claude as part of a product I am building, and it came up with interesting, creative ideas. See a simplistic example I quickly spun up: https://claude.ai/share/0e764e94-5752-4426-b3e7-ee19eb9c7888
I am an ex-Amazon engineer by the way.
Thanks for a cheery start to the week!
I think your criticism of capitalism (its self-destructive nature) was articulated in similar form by Karl Marx 150 years ago. And yet here we are. Does that mean you are wrong? Not necessarily. But perhaps it helps to position your ideas within the larger context of historical economic thought.
Sobering article, Dave. I've sometimes wondered where the rush to AGI will lead and what world my kids will inherit. Many of my assumptions about how things work are based on the current model of capitalism (i.e., school/training -> job -> income/financial security, etc.). It feels like the system is already changing (e.g., the AI job apocalypse impacting entry-level jobs). I can't envision what the future looks like, but it does feel like we're creating problems. I try to be an optimist and think about how much more valued the human element will become, but if people don't have work, who will pay for any of it? Good time to learn homesteading... :)
How is this for a fun comparison: The expansion of the Roman Republic led to an influx of free labor - slaves from conquered lands who were imported by wealthy landowners. Slaves replaced farmers and artisans who could not compete with free labor. Wealth kept accumulating, but only for elites. Poverty and civil unrest eventually led to the collapse of the republic, and authoritarian rulers took over - wealthy people with private armies, who battled each other for power.
You're cheerful too!
I couldn't agree more with you on this point. What really worries me is that the people in charge don't see that we are digging our own graves. It's very disturbing: what will happen to our kids, and to their kids, when, as you said, they can't scale at the same rate as AI? Every human has to relearn everything from birth while growing up; with AI, all of that is skipped, so we don't need humans anymore! The whole human race is at stake, in my opinion.
Great article!
I'd argue that many tasks and bundles of tasks in every industry could already be automated with current AI if the right people spent a few months focused on that category of domain and technical knowledge.
This means the problem isn't the technology - it's the adoption rate, which suggests we should be even more concerned about the long-run societal consequences, as some of the downsides would already be here if the adoption rate were faster.
How many tasks does an average business do in a day that are more complicated than Math Olympiad problems? If you can use Claude Code to get reasonable results in software engineering, do we really still think financial services or healthcare administration will require hundreds of thousands of people employed globally in 5 years' time?
What an outstanding post, as always. Imagine what will happen in the world if people no longer need to work because AI is doing things in a fundamentally different way, one where we don't understand what it's doing or its reasoning.
Multiple critiques, disagreeing with your conclusion. Maybe I'm too much of a human-optimist, but:
* You are incorrectly assuming that reaching AGI depends on LLM progress. Many experts contend that LLM progress has already stalled and that other technologies will be required to achieve AGI. This would still be your AGI scenario, but the features you assume of AGI may be very different from the ones you write about here, which affects the conclusion.
* You assume capitalism is static and doesn't evolve. Historically, there are large differences between capitalism at the dawn of the industrial revolution, in the early 20th century, and today, or between countries and cultures, such as French capitalism vs. US capitalism. It is not a stretch to imagine a very capitalistic society with different AGIs competing in the market, since consumer preferences may not align with winner-takes-all thinking. Not to mention that even in capitalist systems, monopolies are legally discouraged and split up when necessary.
* Your definition of AGI is based on adding learning to current LLMs. We have had machine learning, with proper learning capabilities, for decades; but no AI expert would claim that "just" combining ML with an LLM qualifies as AGI. Maybe some salespeople in the industry sound like that, but it is technically very imprecise.
* Your description of capitalism is only "there are human jobs". A better description would include at least "there are human bosses and human workers, human producers and human consumers". We have already had AI producers injected, and it is still very much a capitalist situation; meanwhile, the AI-generated content industries have only raised the appreciation for human-generated content, and they look like a race to the bottom, which historically is not a winning position in capitalism. Unless your future description includes a wide number of AI consumers with a budget to spend, and AI bosses accumulating capital, with decision-making power and legal ownership of the money, it is at best a very incomplete description. So far, what AI extremists project is every human being the boss of many AIs.
* Last for this comment, but not least: you imply AI-controlled robots substituting for physical jobs, and hardware scaling exponentially like software. And all the infrastructure behind them, like factories and AWS, scaling the same way, without humans.