I think this is a reasonable decision (although maybe increasingly insufficient).
It doesn't really matter what your stance on AI is; the problem is the increased review burden on OSS maintainers.
In the past, the code itself was a sort of proof of effort - you would need to invest some time and effort in your PRs, otherwise they would be easily dismissed at a glance. That is no longer the case, as LLMs can quickly generate PRs that look superficially correct. Effort can still have been put into those PRs, but there is no way to tell without spending time on a more detailed review.
Policies like this help decrease that review burden by outright rejecting what can be identified as LLM-generated code at a glance. That is probably a fair bit today, but it might get harder over time, so I suspect we will eventually see a shift towards more trust-based models, where you cannot submit PRs unless you have been approved in advance somehow.
Even if we assume LLMs would consistently generate code of good enough quality, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.
lukaslalinsky
I think we will be getting into an interesting situation soon, where project maintainers use LLMs because they truly are useful in many cases, but will ban contributors from doing so, because they can't review how well the user guided the LLM.
Zig has a similar stance with its no-LLM policy:
https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy
> any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed
Note the word "clearly". Oddly, as a native English speaker, I find that this term makes the policy less strict. What about submarine LLM submissions?
I have no beef with Redox OS. I wish them well. This feels like the newest form of OSS virtue signaling.
qsera
I think clients who care about getting good software will eventually require that LLMs are not directly used during the development.
I think one way to think about the use of LLMs is to compare a dynamically typed language with a functional, statically typed one. Functional programming languages with static typing make it harder to implement a solution without understanding the problem and developing an intuition for it.
Programming languages with dynamic typing, on the other hand, will let you create a (partial) solution with a lesser understanding of the problem.
LLMs make it even easier to implement even more partial solutions while understanding even less of the problem (actually, zero understanding is required).
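The contrast can be illustrated with a small, hypothetical example (the function name and error format here are my own invention): in a statically typed language like Rust, the type system forces whoever writes the code to model the failure case up front, whereas a looser language would let a partial solution ship and fail only at runtime.

```rust
// Hypothetical illustration: parsing a port number from user input.
// In a dynamically typed language, a "partial" solution could just
// return the parsed value and crash at runtime on bad input. Here,
// the Result type in the signature forces every caller to acknowledge
// that parsing can fail before they can touch the value.
fn parse_port(input: &str) -> Result<u16, String> {
    input
        .trim()
        .parse::<u16>() // u16 also rules out ports > 65535 by construction
        .map_err(|e| format!("invalid port {:?}: {}", input, e))
}

fn main() {
    // The caller must handle both outcomes to get at the value.
    match parse_port("8080") {
        Ok(port) => println!("listening on {}", port),
        Err(msg) => eprintln!("error: {}", msg),
    }
}
```

The point of the analogy: to even get this to compile, the author has to have understood that the input can be malformed and that ports have a bounded range - exactly the kind of problem understanding the comment above argues LLM-driven development lets you skip.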
If I am a client who wants reliable software, then I want a competent programmer to
1. actually understand the problem,
2. and then come up with a solution.
The first part is really important to me. Using an LLM means that I cannot count on 1 being done, so I would not want the contractor to use LLMs.
khalic
The LLM ban is unenforceable, they must know this. Is it to scare off the most obvious stuff and have a way to kick people off easily in case of incomplete evidence?
jacquesm
Hiring managers could help here: if you feel that someone's open source contributions are important for your hiring decision, make it plain that the only thing that counts as a positive is being a core contributor. Drive-by contributions should not count for anything, even if accepted.
stuaxo
We need LLMs that have a certificate of origin.
For instance a GPL LLM trained only on GPL code where the source data is all known, and the output is all GPL.
It could be done with a distributed effort.
tkel
Glad to see they are applying some rigor. I've started removing AI-heavy projects from my dependency tree.
hparadiz
I am 100% certain that upstream code Redox OS relies on already contains LLM-generated code.
BirAdam
So... my prediction is that they will either have to close off their dev process or start using LLMs to filter contributions, in an attempt to detect submissions from LLMs.
dev_l1x_be
In my experience, with the right set of guardrails, LLMs can deliver high quality code. One interesting aspect is doing security reviews and formal verification with agents, which has proven very useful in practice:
https://www.datadoghq.com/blog/ai/harness-first-agents/
I wonder why people spam OSS with AI slop pull requests in the first place.
Are they really that delusional to think that their AI slop has any value to the project?
Do they think acting like a complete prick and increasing the burden for the maintainers will get them a job offer?
I guess interacting with a sycophantic LLM for hours truly rots the brain.
To spell it out: No, your AI generated code has zero value. Actually less than that because generating it helped destroy the environment.
If the problem could be solved by using an LLM and the maintainers wanted to, they could prompt it themselves and get much better results than you do because they actually know the code. And no AI will not help you "get into open source". You don't learn shit from spamming open source projects.
The-Ludwig
Hm, wondering how to enforce this rule.
Rules without any means of enforcement can put honest people at a disadvantage.
hagen8
They will sooner or later change that policy or get very slow in keeping up.
algoth1
What would constitute "clearly LLM generated", though?
dana321
Generating small chunks of code with LLMs to save time works well; as long as you can read and understand the code, I don't see what the problem is.
api
AI has the potential to level the playing field somewhat between open source and commercial software and SaaS that can afford armies of expensive paid developers.
Time consuming work can be done quickly at a fraction of the cost or even almost free with open weights LLMs.
scotty79
I see a lot of OSS forks in the future, where people just use LLMs to fix their issues in a fork without going through maintainers. Or even do full LLM rewrites of smaller stuff.
estsauver
They're certainly welcome to do whatever they like, and for a microkernel-based OS it might make sense - I think the output from a lot of LLMs is probably pretty "meh".
I think part of the battle is actually just getting people to identify which LLM made it, to understand whether someone's contribution is good or not. A JavaScript project with contributions from Opus 4.6 will probably be pretty good, but if someone is using Mistral Small via the chat app, it's probably just a waste of time.
flanked-evergl
Spiritually Amish
menaerus
Let someone from the Redox team go read [1], [2], and [3]. If they still insist on keeping their position then ... well. The industry is being redefined as we speak, and everyone pushing back is really pushing against themselves.
[1] https://www.datadoghq.com/blog/ai/harness-first-agents/
[2] https://www.datadoghq.com/blog/ai/fully-autonomous-optimizat...
[3] https://www.datadoghq.com/blog/engineering/self-optimizing-s...
P.S. I know this will be downvoted to death but I'll leave it here anyway for folks who want to keep their eyes wide open.
aleph_minus_one
While I am more on the AI-hater side, I don't consider this to be a good idea:
"any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed"
For example:
- What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?
- What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?
baq
While I appreciate the morality and ethics of this choice, the current trend means projects going in this direction are making themselves irrelevant (don't bother quipping about how relevant Redox is today, thanks). E.g. top security researchers are now using LLMs to find new RCEs and local privilege escalations; no reason why the models couldn't fix these, too - and that's only the security surface.
IOW I think this stance is ethically good, but technically irresponsible.
lifis
Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.
What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer, must be correct and well written, and the AI usage must be precisely disclosed.
What they should ban is people posting AI-generated code without mentioning it or replying "I don't know, the AI did it like that" to questions.