"AI will write good code because it is economically advantageous to do so. Per our definition of good code, good code is easy to understand and modify from the reduced complexity."
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't mean efforts to make it so will succeed. LLMs might never become proficient at producing "good" code, for the same reasons that LLMs perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for quick-and-dirty human code, there will be a place for slop code; likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on, as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
seamossfet
I find most developers fall into one of two camps:
1. You treat your code as a means to an end to make a product for a user.
2. You treat the code itself as your craft, with the product being a vector for your craft.
The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good your code is.
The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
With that said, I do have respect for people in the latter camp. But they're generally best fit for projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries other devs depend on, etc.).
I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
Animats
> Meanwhile, the complexity of the average piece of software is drastically increasing. ... The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle.
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
stephc_int13
The economic angle is not as clear cut as the authors seem to think.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebase.
dang
The authors updated their title so I've updated it here too. Previous title was "Good code will still win" - but it was leading to too much superficial discussion based entirely on the phrase "good code" in the title. It's amazing how titles do that!
(Confession: "good code will still win" was my suggestion - IIRC they originally had "Is AI slop the future?". You win some, you lose some.)
jumploops
“John Ousterhout [..] argues that good code is:
- Simple and easy to understand
- Easy to modify”
In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:
> Simple is robust
I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.
Agentic tools will happily build anything you want, the key is knowing what you want!
reese_john
Why build each new airplane with the care and precision of a Rolls-Royce? In the early 1970s, Kelly Johnson and I [Ben Rich] had dinner in Los Angeles with the great Soviet aerodynamicist Alexander Tupolev, designer of their backfire Bear bomber. 'You Americans build airplanes like a Rolex watch,' he told us. 'Knock it off the night table and it stops ticking. We build airplanes like a cheap alarm clock. But knock it off the table and still it wakes you up.'...The Soviets, he explained, built brute-force machines that could withstand awful weather and primitive landing fields. Everything was ruthlessly sacrificed to cut costs, including pilot safety.
We don't need to be ruthless to save costs, but why build the luxury model when the Chevy would do just as well? Build it right the first time, but don't build it to last forever. - Ben Rich in Skunk Works
skybrian
I think for this to work you need some kind of complexity budget. AIs are good at optimizing, but you need to give them the right goals.
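A "complexity budget" could in principle be made concrete as an automated gate on generated code. As a hypothetical sketch (the threshold and names are illustrative, and branch counting is only a crude proxy for cyclomatic complexity):

```python
import ast

# Node types that roughly correspond to extra control-flow paths.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp,
                ast.IfExp, ast.ExceptHandler, ast.With)

def over_budget(source: str, budget: int = 10) -> list[str]:
    """Return names of functions whose branch count exceeds the budget."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            if branches > budget:
                offenders.append(node.name)
    return offenders
```

A check like this could run in CI or inside an agent loop, rejecting output that blows the budget rather than trusting the model to self-regulate.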
pagecalm
Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
socalgal2
When has this ever been true?
Did the best processor win? No, x86 is trash.
Did the best computer language win? No (not that you can pick a best).
The same is true pretty much everywhere else outside computers, with rare exceptions.
pizzly
The current iteration of models doesn't write clean code by itself, but future ones will. The problem, in my view, is extremely similar to agentic/vibe coding: instead of optimizing for results, you can optimize for clean code. The demand is there; clean code will lead to fewer bugs, faster-running code, and fewer tokens used (thus lower cost) when understanding the code from a fresh session. It makes sense that the first generation of vibe coding focused on results first and not clean code. Am I missing something?
fnoef
I wish it were true, but it sounds like copium. I bet garment makers or artisan woodworkers said the same when big-box cheap retail came. I bet they said "people value quality", etc., but in the end, outside of a group of people who have principles, everyone else floods their home with H&M and crap from Temu.
So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
simianwords
People are not emotionally ready to accept that certain layers of abstraction don’t need as much care and effort if they can be automated.
We are at the point where a single class can be dirty, but the API of the classes should be clean. There's no point reviewing the internals of a class anymore; I'm more or less sure they will work as intended.
The next step is the microservice itself: the API of the microservice should be clean, but the internals can be whatever. We are 10% of the way there.
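The "clean API, disposable internals" idea above can be illustrated with a minimal sketch (the class and its fields are hypothetical, chosen only to show the boundary):

```python
# All review effort goes into the public contract; the internals are
# treated as replaceable by a human or a model, so long as the contract
# keeps holding.
class SlugCache:
    """Public contract: get(title) returns a stable, lowercase URL slug."""

    def __init__(self) -> None:
        self._m = {}  # messy internals: a bare dict cache, unreviewed

    def get(self, title: str) -> str:
        k = title.strip().lower()
        if k not in self._m:
            self._m[k] = "-".join(k.split())
        return self._m[k]
```

Under this view, tests pin the `get` contract, and the body below the docstring could be regenerated wholesale without anyone reading it.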
xnx
If "good code" == "useful code", then yes.
People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under its expected conditions.
vb-8448
Good code wasn't winning even before the AI slop era!
The pattern was always: ship fast, fix/document later; but when "later" comes, it's "don't touch what is working".
So far nothing has changed, and I bet it won't change in the future either.
Ancalagon
Maybe the models get better on the code side but I thought slop referred to any AI generated text or imagery? It’s hard to see how most of the internet’s written words won’t be slop, especially when there’s no binding compiler contract like in code.
Tiberium
Electron won even in the pre-LLM era, I sure wonder why.
ezekg
The background pattern really makes it hard to read, just fyi. I'd make the content have a white bg if you absolutely must use the pattern.
personality1
I wish I could write beautiful good code, every part of me wants it, but I'm forced to deliver as fast as I can.
muskstinks
... for now.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.
Thanks to LLMs, we are now able to codify humans, and while it's not clear how fast this is happening, I no longer believe my skills are unique.
A small hobby application cost me $11 over the weekend and took me 3 hours to 'build', while I would probably have needed 2-3 days for it.
And we are still limited by resources and normal human progress. Claude teams are still experimental, and things like gastown or orchestrator architectures are not that established and consume quite a lot of tokens.
We have not even had time yet to build optimized models; Claude Code still understands A LOT of languages (human languages and programming languages).
I don't think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. The business doesn't. Even fellow co-workers don't care about, or don't understand, good code.
Even the infamous GTA 5 Online startup bug went unnoticed for ages (quadratic complexity in loading a config file made startup take forever, until someone outside Rockstar found it and Rockstar fixed it).
We also have plenty of code where it doesn't matter as long as it works: offline apps, scripts, research scripts, etc.
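The GTA loading bug was an accidental quadratic: each parsing step re-scanned the whole remaining buffer. A minimal Python analog of that class of bug (an assumed shape for illustration, not Rockstar's actual code):

```python
def parse_tokens_quadratic(data: str) -> list[str]:
    # Copies the entire remaining tail on every token, so total work
    # grows as O(n^2) with input size, even though results look fine.
    tokens = []
    rest = data
    while rest:
        token, _, rest = rest.partition(",")
        tokens.append(token)
    return tokens

def parse_tokens_linear(data: str) -> list[str]:
    # One pass over the buffer: O(n).
    return data.split(",")
```

On a small config both return the same tokens, which is exactly why the quadratic version ships and only blows up years later on multi-megabyte inputs.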
rbbydotdev
The wrinkle here is what exactly “win” means.
ahussain
My prediction is that we'll start to see a whole new layer of abstraction to help us write high-quality code with LLMs: new programming languages, new toolchains, stricter typechecking, built-in feedback loops, etc.
The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
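One of the ideas above, a built-in feedback loop, can be sketched as a gate that only accepts model output which passes a test suite. This is a hedged sketch with hypothetical names, and it assumes the candidate code is trusted enough to `exec` (a real toolchain would sandbox it):

```python
def accept_candidate(source: str, fn_name: str, cases: list) -> bool:
    """Exec candidate source, then check (args, expected) test cases.

    Returns False on any syntax error, runtime error, or failed case,
    so broken generations are rejected instead of merged.
    """
    ns: dict = {}
    try:
        exec(source, ns)
        fn = ns[fn_name]
        return all(fn(*args) == expected for args, expected in cases)
    except Exception:
        return False
```

The point is that the loop, not the human reviewer, absorbs the first wave of slop: the model can retry until `accept_candidate` returns True.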
sublinear
> economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting, and that time is about the same as writing it by hand. Language models reduce the need to manually type something, because that's what they are designed to do, but that doesn't mean faster or better code.
AI is a rubber duck that can talk back. It's also a natural-language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else, because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
yshamrei
good code does not earn money =)
darenr
Everyone's talking about AI, but let's posit that today's coding models are as good as an SDE somewhere on the performance/experience distribution, maybe in the lower quartile. Can we also posit that this will improve, and that over time the coding models will equal and then better the median software engineer? It's not like SDEs aren't also churning out poor-quality code - "it worked for me", "what tests?", "O(what?)", etc. We've all worked with them.
The difference is that over the years, while tooling and process have dramatically improved, SDEs have not improved much; junior engineers still make the same mistakes. The assumption (not yet proven, but the whole bubble is based on it) is that models will continue to improve, eventually leaving behind human SDEs (and people in other domains: lawyers, doctors, etc.). If this happens, these arguments I keep seeing on HN about AI slop will all be moot.
Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).
Maybe, I don't know, but all these people saying "AI is slop, rah rah humans" are just engaging in wishful thinking. Let's admit it: we don't know how it will play out. There are people like Dario and Sam who naturally are cheerleading for AI, and then there's the HN collective who hate every new release of macOS and every AI model, just on principle! I understand the fear; anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels. Things are changing; AI is the plough, the railway, the transistor...
I'm tired of the debate. My experience is that AI (Gemini for me) is awesome. We all have gaps in our knowledge/skills (but not Gemini); AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on; those that don't see value in AI are doing it wrong.
As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.
RcouF1uZ4gsC
The existence and ubiquity of bash scripts make me doubt this.
seniorThrowaway
This submission is basically an ad.
7e
None of this is true. Models will soon scale to several million tokens of context. That, combined with the accumulated experience of millions of feedback cycles, will make software a solved problem for machines, even as humans remain dumb. Yes, even complex software. Complex software is actually better because it is, generally, faster with more features. It's smarter. Like a jet fighter, the more complex it is, the more capable it is.
"AI will write good code because it is economically advantageous to do so. Per our definition of good code, good code is easy to understand and modify from the reduced complexity."
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make this so will succeed. LLM's might never become proficient at producing "good" code for the same reasons that LLM's perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
I find most developers fall into one of two camps:
1. You treat your code as a means to an end to make a product for a user.
2. You treat the code itself as your craft, with the product being a vector for your craft.
The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good your code is.
The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
With that said, I do have respect for people in the latter camp. But they're generally best fit for projects where that level of craftsmanship is actually useful (think: mission critical software, libraries us other devs depend on, etc).
I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
Meanwhile, the complexity of the average piece of software is drastically increasing. ... The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle.
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
The economic angle is not as clear cut as the authors seem to think.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebases.
The authors updated their title so I've updated it here too. Previous title was "Good code will still win" - but it was leading to too much superficial discussion based entirely on the phrase "good code" in the title. It's amazing how titles do that!
(Confession: "good code will still win" was my suggestion- IIRC they originally had "Is AI slop the future?". You win some you lose some.)
“John Ousterhout [..] argues that good code is:
- Simple and easy to understand
- Easy to modify”
In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:
> Simple is robust
I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.
Agentic tools will happily build anything you want, the key is knowing what you want!
I think for this to work you need some kind of complexity budget. AI's are good at optimizing but you need to give them the right goals.
Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
When has this ever been true
Did the best processor win? no x86 is trash
Did the best computer language win? no (not that you can can pick a best)
The same is true pretty much everywhere else outside computers, with rare exception.
The current iteration of models don't write clean code by itself but future ones will. The problem in my view is extremely similar to agentic/vibe coding. Instead of optimizing for results you can optimize for clean code. The demand is there, clean code will lead to less bugs, faster running code and less tokens used (thus less cost) when understanding the code from a fresh session. It makes sense that the first generation of vibe coding focused on the results first and not clean code. Am I missing something?
I wish it was true, but it sounds like copium. I bet garment makers, or artisan woodworkers said the same when big store cheap retails came. I bet they said "people value quality and etc", but in the end, outside of a group of people who has principles, everyone else floods their home with H&Ms and crap from Temu.
So yeah, good code might win among small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management don't care, you have two choices: "embrace" slop, or risk staying jobless in a though market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for 200$ a month with Claude.
People are not emotionally ready to accept that certain layers of abstraction don’t need as much care and effort if they can be automated.
We are at the point where a single class can be dirty but the API of the classes should be clean. There’s no point reviewing the internals of a class anymore. I’m more or less sure that they would work as intended.
Next step is that of a micro service itself. The api of that micro service should be clean but internals may be however. We are 10% here.
If "good code" == "useful code", then yes.
People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under conditions.
Good code wasn't winning even before the ai slop era!
The pattern was always: ship fast, fix/document later, but when "later" comes "don't touch what is working".
To date nothing changed yet, I bet it won't change even in the future.
Maybe the models get better on the code side but I thought slop referred to any AI generated text or imagery? It’s hard to see how most of the internet’s written words won’t be slop, especially when there’s no binding compiler contract like in code.
Electron won even in the pre-LLM era, I sure wonder why.
The background pattern really makes it hard to read, just fyi. I'd make the content have a white bg if you absolutely must use the pattern.
I wish I could write beautiful good code, every part of me wants it, but I'm forced to deliver as fast as I can.
... for now.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out and we are now in the phase of the biggest centralized reinforcement loop ever existed: everyone using ai for writing and giving it feedback.
We are, thanks to LLMs, able now to codify humans and while its not clear how fast this is, i do not believe anymore that my skills are unique.
A small hobby application costed me 11 dollars on the weekend and took me 3h to 'build' while i would have probably needed 2-3 days for it.
And we are still limited by resources and normal human progress. Like claude team is still exerpimental. Things like gastown or orchestrator architecture/structure is not that estabslihed and consumed quite a lot of tokens.
We have not even had time yet to build optimzed models. Claude code still understand A LOT of languages (human languages and programming languages)
Do not think anyone really cares about code quality. I do but i'm a software engineere. Everyone around me doesn't. Business doesn't. Even fellow co-workers don't or don't understand good code.
Even stupid things like the GTA 5 Online (or was it RDR2?) startup code wasn't found for ages (there was some algo complexity in loading some config file which took ages until someone non rockstar found it and rockstar fixed it).
We also have plenty of code were it doesn't matter as long as it works. Offline apps, scripts, research scripts etc.
The wrinkle here is what exactly “win” means
My prediction is that we'll start to see a whole new layer of abstraction to help us write high quality code with LLMs - meaning new programming languages, new toolchains, stricter typechecking, in-built feedback loops etc.
The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
> economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
good code do not earn money =)
Everyone's talking about AI, but let's posit that today's coding models are as good as a SDE on the performance/experience distribution, maybe in the lower quartile, but can we also posit that this will improve and over time the coding models equal and then better the median software engineer? It's not like SDE's are not also churning out poor quality code "it worked for me", "what tests?" "O(what?)", etc, we've all worked with them.
The difference is that over the years while tooling and process have dramatically improved, SDE's have not improved much, junior engineers still make the same mistakes. The assumption is that (not yet proven, but the whole bubble is based on this) that models will continue to improve - eventually leaving behind human SDEs (or other domain people, lawyers, doctors, etc) - if this happens these arguments I keep seeing on HN about AI slop will all be moot.
Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).
Maybe, I don't know, but all these people saying AI is slop, Ra Ra Humans is just wishful thinking. Let's admit it, we don't know how it will play out. There's people like Dario and Sam who naturally are cheerleading for AI, then there's the HN collective who hate every new release of MacOS and every AI model, just on principle! I understand the fear, anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels, things are changing, AI is the plough, the railway, the transistor...
I'm tired on the debate, my experience is that AI (Gemini for me) is awesome, we all have gaps in our knowledge/skills (but not Gemini), AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on, those that don't see value in AI are doing it wrong.
As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.
The existence and ubiquity of bash scripts make me doubt this.
this submission is basically an ad
None of this is true. Models will soon scale to several million tokens of context. That, combined with the combined experience of millions of feedback cycles, will make software a solved problem for machines, even as humans remain dumb. Yes, even complex software. Complex software is actually better because it is, generally, faster with more features. It’s smarter. Like a jet fighter, the more complex it is, the more capable it is.