In the last few years, the capabilities of coding tools have exploded.
As those capabilities have expanded, contributors and maintainers have
more and more questions about how and when to apply those
capabilities.
Add new Documentation to guide contributors on how to best use kernel
development tools, new and old.
Note, though, there are fundamentally no new or unique rules in this
new document. It clarifies expectations that the kernel community has
had for many years. For example, researchers are already asked to
disclose the tools they use to find issues by
Documentation/process/researcher-guidelines.rst. This new document
just reiterates existing best practices for development tooling.
In short: Please show your work and make sure your contribution is
easy to review.
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Shuah Khan <shuah@kernel.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: SeongJae Park <sj@kernel.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: NeilBrown <neilb@ownmail.net>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: workflows@vger.kernel.org
Cc: ksummit@lists.linux.dev
--
There has been a ton of feedback since v2. Thanks everyone! I've
tried to respect all of the feedback, but some of it has been
contradictory and I haven't been able to incorporate everything.
Please speak up if I missed something important here.
Changes from v2:
* Mention testing (Shuah)
* Remove "very", rename LLM => coding assistant (Dan)
* More formatting sprucing up and minor typos (Miguel)
* Make changelog and text less flashy (Christian)
* Tone down critical=>helpful (Neil)
* Wording/formatting tweaks (Randy)
Changes from v1:
* Rename to generated-content.rst and add to documentation index.
(Jon)
* Rework subject to align with the new filename
* Replace commercial names with generic ones. (Jon)
* Be consistent about punctuation at the end of bullets for whole
sentences. (Miguel)
* Formatting sprucing up and minor typos (Miguel)
This document was a collaborative effort from all the members of
the TAB. I just reformatted it into .rst and wrote the changelog.
---
Documentation/process/generated-content.rst | 97 +++++++++++++++++++++
Documentation/process/index.rst | 1 +
2 files changed, 98 insertions(+)
create mode 100644 Documentation/process/generated-content.rst
diff --git a/Documentation/process/generated-content.rst b/Documentation/process/generated-content.rst
new file mode 100644
index 000000000000..917d6e93c66d
--- /dev/null
+++ b/Documentation/process/generated-content.rst
@@ -0,0 +1,97 @@
+============================================
+Kernel Guidelines for Tool-Generated Content
+============================================
+
+Purpose
+=======
+
+Kernel contributors have been using tooling to generate contributions
+for a long time. These tools can increase the volume of contributions.
+At the same time, reviewer and maintainer bandwidth is a scarce
+resource. Understanding which portions of a contribution come from
+humans versus tools is helpful to maintain those resources and keep
+kernel development healthy.
+
+The goal here is to clarify community expectations around tools. This
+lets everyone become more productive while also maintaining high
+degrees of trust between submitters and reviewers.
+
+Out of Scope
+============
+
+These guidelines do not apply to tools that make trivial tweaks to
+preexisting content. Nor do they pertain to AI tooling that helps with
+menial tasks. Some examples:
+
+ - Spelling and grammar fix ups, like rephrasing to imperative voice
+ - Typing aids like identifier completion, common boilerplate or
+ trivial pattern completion
+ - Purely mechanical transformations like variable renaming
+ - Reformatting, like running Lindent, ``clang-format`` or
+ ``rustfmt``
+
+Even if your tool use is out of scope, always consider whether knowing
+about the tool you used would help a reviewer evaluate your
+contribution.
+
+In Scope
+========
+
+These guidelines apply when a meaningful amount of content in a kernel
+contribution was not written by a person in the Signed-off-by chain,
+but was instead created by a tool.
+
+Detection of a problem and testing the fix for it is also part of the
+development process; if a tool was used to find a problem addressed by
+a change, that should be noted in the changelog. This not only gives
+credit where it is due, it also helps fellow developers find out about
+these tools.
+
+Some examples:
+ - Any tool-suggested fix such as ``checkpatch.pl --fix``
+ - Coccinelle scripts
+ - A chatbot generated a new function in your patch to sort list entries.
+ - A .c file in the patch was originally generated by a coding
+ assistant but cleaned up by hand.
+ - The changelog was generated by handing the patch to a generative AI
+ tool and asking it to write the changelog.
+ - The changelog was translated from another language.
+
+If in doubt, choose transparency and assume these guidelines apply to
+your contribution.
+
+Guidelines
+==========
+
+First, read the Developer's Certificate of Origin:
+Documentation/process/submitting-patches.rst. Its rules are simple
+and have been in place for a long time. They have covered many
+tool-generated contributions. Ensure that you understand your entire
+submission and are prepared to respond to review comments.
+
+Second, when making a contribution, be transparent about the origin of
+content in cover letters and changelogs. You can be more transparent
+by adding information like this:
+
+ - What tools were used?
+ - The input to the tools you used, like the Coccinelle source script.
+ - If code was largely generated from a single prompt or a short set
+ of prompts, include those prompts. For longer sessions, include a
+ summary of the prompts and the nature of the resulting assistance.
+ - Which portions of the content were affected by that tool?
+ - How is the submission tested and what tools were used to test the
+ fix?
+
+As with all contributions, individual maintainers have discretion to
+choose how they handle the contribution. For example, they might:
+
+ - Treat it just like any other contribution.
+ - Reject it outright.
+ - Treat the contribution specially, like reviewing it with extra
+ scrutiny or at a lower priority than human-generated content.
+ - Suggest a better prompt instead of suggesting specific code changes.
+ - Ask for some other special steps, like asking the contributor to
+ elaborate on how the tool or model was trained.
+ - Ask the submitter to explain in more detail about the contribution
+ so that the maintainer can feel comfortable that the submitter fully
+ understands how the code works.
diff --git a/Documentation/process/index.rst b/Documentation/process/index.rst
index aa12f2660194..e1a8a31389f5 100644
--- a/Documentation/process/index.rst
+++ b/Documentation/process/index.rst
@@ -68,6 +68,7 @@ beyond).
stable-kernel-rules
management-style
researcher-guidelines
+ generated-content
Dealing with bugs
-----------------
--
2.34.1
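The transparency the document asks for can be made concrete in the changelog itself. As a hypothetical sketch (the subject line, file names, and the "Generated-by:" trailer are illustrative, not mandated by the document), a contributor might record tool provenance like this:

```shell
# Hypothetical sketch: after letting a tool produce a mechanical fix,
# record the provenance in the changelog so reviewers see it up front.
# The "Generated-by:" trailer name is illustrative, not mandated.
cat > commit-msg.txt <<'EOF'
x86/foo: fix whitespace reported by checkpatch

Fix spacing issues flagged by scripts/checkpatch.pl.

Generated-by: scripts/checkpatch.pl --fix-inplace
Signed-off-by: Jane Developer <jane@example.com>
EOF

# The provenance line is now greppable by reviewers and tooling:
grep '^Generated-by:' commit-msg.txt
```

A trailer like this keeps the disclosure machine-readable, so anyone reviewing later can see at a glance which part of the submission came from a tool.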
On Tue, 6 Jan 2026 12:51:05 -0800 Dave Hansen <dave.hansen@linux.intel.com> wrote:

[...]

Nit. Apparently this is v4, not v3? I show v3 from
https://lore.kernel.org/20251114183528.1239900-1-dave.hansen@linux.intel.com

Thanks,
SJ
On Tue, Jan 06, 2026 at 12:51:05PM -0800, Dave Hansen wrote:
> In the last few years, the capabilities of coding tools have exploded.
[...]
> Please speak up if I missed something important here.

Well you ignored my two previous proposals AFAICT so :) [0, 1]

[0]: https://lore.kernel.org/all/c8d9f4fc-332f-4df8-9620-e0e2aa6dc0e9@lucifer.local/
[1]: https://lore.kernel.org/all/11eaf7fa-27d0-4a57-abf0-5f24c918966c@lucifer.local/

I guess I'll reiterate them below for what it's worth.

> +The goal here is to clarify community expectations around tools. This
> +lets everyone become more productive while also maintaining high
> +degrees of trust between submitters and reviewers.

I feel that LLMs are not like any other tools but in fact represent
something entirely new, in that you can end-to-end send patches using
this tooling with little to no knowledge, and the asymmetry between
maintainer resource and the possible slurry of submissions that might
arise makes this very significantly different.

I know Linus had the cute interpretation of it 'just being another tool'
but never before have people been able to do this.

So I think this continues to be something that should be underlined, and
for it to be put more forthrightly that if such 'slop' series are sent
they can be dismissed without further discussion.

This is my primary concern with these tools, and this document is far
too hand-wavey about it in my view + doesn't really address that at all.

> + - Treat it just like any other contribution.
> + - Reject it outright.

This is really not correct, it's simply not acceptable in the community
to reject series outright without justification. Yes perhaps people do
that, but it's really not something that's accepted.

So again trying to squeeze this into the cute 'hey it's just like any
other tooling!' box doesn't work. We should highlight that this is
something _different_ from other such series.

Again, I feel the document fails to highlight the biggest concern around
LLMs.

Thanks,
Lorenzo
On 1/7/26 10:12, Lorenzo Stoakes wrote:
...
> I know Linus had the cute interpretation of it 'just being another tool'
> but never before have people been able to do this.

I respect your position here. But I'm not sure how to reconcile:

	LLMs are just another tool
and
	LLMs are not just another tool

:)

Let's look at it another way: What we all *want* for the kernel is
simplicity. Simple rules, simple documentation, simple code. The
simplest way to deal with the LLM onslaught is to pray that our existing
rules will suffice.

For now, I think the existing rules are holding. We have the luxury of
treating LLMs like any other tool. That could change any day because
some new tool comes along that's better at spamming patches at us. I
think that's the point you're trying to make: the dam might break any
day and we should be prepared for it.

Is that what it boils down to?

>> +As with all contributions, individual maintainers have discretion to
>> +choose how they handle the contribution. For example, they might:
>> +
>> + - Treat it just like any other contribution.
>> + - Reject it outright.
>
> This is really not correct, it's simply not acceptable in the community to
> reject series outright without justification. Yes perhaps people do that,
> but it's really not something that's accepted.

I'm not quite sure how this gives maintainers a new ability to reject
things without justification, or encourages them to reject
tool-generated code in a new way.

Let's say something generated by "checkpatch.pl --fix" that's trying to
patch arch/x86/foo.c lands in my inbox. I personally think it's OK for
me as a maintainer to say: "No thanks, checkpatch has burned me too many
times in foo.c and I don't trust its output there." To me, that's
rejecting it outright.

Could you explain a bit how this might encourage bad maintainer behavior?
On Wed, Jan 07, 2026 at 11:18:52AM -0800, Dave Hansen wrote:
> I respect your position here. But I'm not sure how to reconcile:
>
> 	LLMs are just another tool
> and
> 	LLMs are not just another tool
>
> :)

Well I'm not asking you to reconcile that, I'm providing my point of
view which disagrees with the first position and makes a case for the
second.

Isn't review about feedback both positive and negative? Obviously if
this was intended to simply inform the community of the committee's
decision then apologies for misinterpreting it.

I would simply argue that LLMs are not another tool on the basis of the
drastic negative impact they have had in very many areas, for which you
need only take a cursory glance at the world to observe. Thinking LLMs
are 'just another tool' is to say effectively that the kernel is immune
from this. Which seems to me a silly position.

> Let's look at it another way: What we all *want* for the kernel is
> simplicity. Simple rules, simple documentation, simple code. The
> simplest way to deal with the LLM onslaught is to pray that our existing
> rules will suffice.

I'm not sure we really have rules quite as clearly as you say, as
subsystems differ greatly in what they do. For one, mm merges patches
unless adverse review is received. Which means a sudden influx of LLM
series is likely to lead to real problems. Not all subsystems are alike
like this.

One rule that seems consistent is that arbitrary dismissal of series is
seriously frowned upon. The document claims otherwise.

> For now, I think the existing rules are holding. We have the luxury of
> treating LLMs like any other tool.

We're noticing a lot more LLM slop than we used to. It is becoming more
and more of an issue.

Secondly, as I said in my MS thread and maybe even in a previous version
of this one (can't remember) - I fear that once it becomes public that
we are open to LLM patches, the floodgates will open. The kernel has a
thorny reputation of people pushing back, which probably plays some role
in holding that off.

And it's not like I'm asking for much, I'm not asking you to rewrite the
document, or take an entirely different approach, I'm just saying that
we should highlight that:

1. LLMs _allow you to send patches end-to-end without expertise_.

2. As a result, even though the community (rightly) strongly disapproves
   of blanket dismissals of series, if we suspect AI slop [I think it's
   useful to actually use that term], maintainers can reject it out of
   hand.

Point 2 is absolutely a new thing in my view.

> That could change any day because some new tool comes along that's
> better at spamming patches at us. I think that's the point you're
> trying to make: the dam might break any day and we should be prepared
> for it.
>
> Is that what it boils down to?

I feel I've answered that above.

> I'm not quite sure how this gives maintainers a new ability to reject
> things without justification, or encourages them to reject
> tool-generated code in a new way.
>
> Let's say something generated by "checkpatch.pl --fix" that's trying to
> patch arch/x86/foo.c lands in my inbox. I personally think it's OK for
> me as a maintainer to say: "No thanks, checkpatch has burned me too many
> times in foo.c and I don't trust its output there." To me, that's
> rejecting it outright.
>
> Could you explain a bit how this might encourage bad maintainer behavior?

I really don't understand your question or why you're formulating this
to be about bad maintainer behaviour?

It's generally frowned upon in the kernel to outright reject series
without technical justification. I really don't see how you can say that
is not the case?

LLM generated series won't be a trivial checkpatch.pl --fix change;
you've given a trivially identifiable case that you could absolutely
justify.

Again, I'm not really asking for much here. As a maintainer I am (very)
concerned about the asymmetry between what can be submitted vs. review
resource. And to me being able to reference this document and to say
'sorry, this appears to be AI slop so we can't accept it' would be
really useful.

Referencing a document that tries very hard to say 'NOP' isn't quite so
useful.

Thanks,
Lorenzo
> And it's not like I'm asking for much, I'm not asking you to rewrite the
> document, or take an entirely different approach, I'm just saying that we
> should highlight that:
>
> 1. LLMs _allow you to send patches end-to-end without expertise_.

As somebody who reviews a lot of networking patches, i already see lots
of human generated patches without expertise. So LLMs might increase the
volume of such patches, but the concept itself is not new, and does not
require LLMs.

> 2. As a result, even though the community (rightly) strongly disapproves
>    of blanket dismissals of series, if we suspect AI slop [I think it's
>    useful to actually use that term], maintainers can reject it out of
>    hand.

And i do blanket dismiss all but one such patch from an author, and i
try to teach that author how to get that one patch into shape, in the
hope they can learn the processes and apply them to their other patches.
Sometimes the effort works, and you get new developers joining the
community; sometimes it is a lost cause, and they go away after having
their patches repeatedly rejected.

So i don't think using LLMs makes a difference here. I've seen the same
issue with blindly fixing checkpatch warnings, sparse warnings, and
other static analysis tool warnings. I just see LLMs as another such
tool.

> Point 2 is absolutely a new thing in my view.

And i would disagree with this statement, it is not new, it already
happens.

	Andrew
On Thu, Jan 08, 2026 at 02:41:20PM +0100, Andrew Lunn wrote:
> As somebody who reviews a lot of networking patches, i already see
> lots of human generated patches without expertise.

I mean we all have :)

> [...] but the concept itself is not new, and does not require LLMs.

The difference is the order of magnitude possible. There's a real
barrier to entry for clueless people, and there's a linearity in time
taken to generate submissions.

LLMs don't change the problem, they change the magnitude.

> And i do blanket dismiss all but one such patch from an author, [...]
>
> > Point 2 is absolutely a new thing in my view.
>
> And i would disagree with this statement, it is not new, it already
> happens.

Well this is the thing - it varies by subsystem. In mm it's really not
like this.

At any rate, given you disagree - the document suggesting that
maintainers may dismiss out of hand shouldn't be in any way
controversial :)

I have submitted an incremental diff to make concrete what I'm
suggesting at [0].

[0]: https://lore.kernel.org/ksummit/611c4a95-cbf2-492c-a991-e54042cf226a@lucifer.local/

Cheers,
Lorenzo
On 1/7/26 13:15, Lorenzo Stoakes wrote:
> Thinking LLMs are 'just another tool' is to say effectively that the kernel
> is immune from this. Which seems to me a silly position.

I had a good chat with Lorenzo on IRC. I had it in my head that he
wanted a really different document than the one I posted. After talking,
it sounds like he had some much more modest changes in mind.

I caught him at the end of his day, but I think he's planning to send
out a small diff on top of what I posted so I can get a better idea of
what he wants to see tweaked.
On Wed, Jan 07, 2026 at 04:20:04PM -0800, Dave Hansen wrote:
> On 1/7/26 13:15, Lorenzo Stoakes wrote:
> > Thinking LLMs are 'just another tool' is to say effectively that the kernel
> > is immune from this. Which seems to me a silly position.
>
> I had a good chat with Lorenzo on IRC. I had it in my head that he
> wanted a really different document than the one I posted. After talking,
> it sounds like he had some much more modest changes in mind. I caught
> him at the end of his day, but I think he's planning to send out a small
> diff on top of what I posted so I can get a better idea of what he wants
> to see tweaked.
I enclose the suggested incremental change below.
Cheers, Lorenzo
----8<----
From ccefc4da6b929914c754c2f898b0eb17d7fb3ebd Mon Sep 17 00:00:00 2001
From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Date: Thu, 8 Jan 2026 11:55:10 +0000
Subject: [PATCH] suggestion
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Documentation/process/generated-content.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/Documentation/process/generated-content.rst b/Documentation/process/generated-content.rst
index 917d6e93c66d..1423ed9d971d 100644
--- a/Documentation/process/generated-content.rst
+++ b/Documentation/process/generated-content.rst
@@ -95,3 +95,11 @@ choose how they handle the contribution. For example, they might:
- Ask the submitter to explain in more detail about the contribution
so that the maintainer can feel comfortable that the submitter fully
understands how the code works.
+
+If tools permit you to generate series entirely automatically, expect
+additional scrutiny.
+
+As with the output of any tooling, maintainers will not tolerate 'slop' -
+you are expected to understand and to be able to defend everything you
+submit. If you are unable to do so, maintainers may choose to reject your
+series outright.
--
2.52.0
On Thu, Jan 08, 2026 at 11:56:19AM +0000, Lorenzo Stoakes wrote:
>diff --git a/Documentation/process/generated-content.rst b/Documentation/process/generated-content.rst
>index 917d6e93c66d..1423ed9d971d 100644
>--- a/Documentation/process/generated-content.rst
>+++ b/Documentation/process/generated-content.rst
>@@ -95,3 +95,11 @@ choose how they handle the contribution. For example, they might:
> - Ask the submitter to explain in more detail about the contribution
> so that the maintainer can feel comfortable that the submitter fully
> understands how the code works.
>+
>+If tools permit you to generate series entirely automatically, expect
>+additional scrutiny.
>+
>+As with the output of any tooling, maintainers will not tolerate 'slop' -
Could you define what "slop" in the context of a kernel patch means? Clearly
it's not just innocent error, but it's not clear to me what line needs to be
crossed for a mistake to turn into "slop".
>+you are expected to understand and to be able to defend everything you
>+submit. If you are unable to do so, maintainers may choose to reject your
>+series outright.
We already have something like this in Documentation/process/howto.rst:
"Before making any actual modifications to the Linux kernel code, it is
imperative to understand how the code in question works."
I suppose that we can restate the same here, but what's the purpose? To
put it in front of whatever media outlets might be looking?
--
Thanks,
Sasha
On 1/8/26 08:42, Sasha Levin wrote:
> I suppose that we can restate the same here, but what's the purpose? To
> put it in front of whatever media outlets might be looking?

Yeah, that's my only objection to adding the new hunk that James and
Lorenzo were suggesting. It's arguably covered earlier in _this_
document:

> +Guidelines
> +==========
....
> +tool-generated contributions. Ensure that you understand your entire
> +submission and are prepared to respond to review comments.

But, if folks feel it's that important a point, I guess mentioning it
twice-ish is OK.
On Thu, Jan 8, 2026 at 5:42 PM Sasha Levin <sashal@kernel.org> wrote:
>
> We already have something like this in Documentation/process/howto.rst:
>
> "Before making any actual modifications to the Linux kernel code, it is
> imperative to understand how the code in question works."
The patch already mentions something similar as well:
Ensure that you understand your entire submission and are prepared
to respond to review comments.
And then talks about the maintainers' discretion and rejecting etc. in
the bullet list at the bottom, so it seems fairly clear to me, i.e.
that patches may get "rejected outright" if one cannot explain the
submitted series.
Cheers,
Miguel
On Thu, Jan 08, 2026 at 07:27:17PM +0100, Miguel Ojeda wrote:
> On Thu, Jan 8, 2026 at 5:42 PM Sasha Levin <sashal@kernel.org> wrote:
> >
> > We already have something like this in Documentation/process/howto.rst:
> >
> > "Before making any actual modifications to the Linux kernel code, it is
> > imperative to understand how the code in question works."
>
> The patch already mentions something similar as well:
>
>     Ensure that you understand your entire submission and are prepared
>     to respond to review comments.
>
> And then talks about the maintainers discretion and rejecting etc. at
> the bullet list at the bottom, so it seems fairly clear to me, i.e.
> that patches may get "rejected outright" if one cannot explain the
> submitted series.

I understand that of course. I feel I said it already but perhaps I
wasn't clear.

The issue is that this is put very softly and in such a way as to lose
emphasis:

'You _can_ be more transparent by adding information like this:...'

'As with all contributions, individual maintainers have discretion to
choose how they handle the contribution. For example, they _might_:'

'[They might] Ask the submitter to explain in more detail about the
contribution so that the maintainer can _feel comfortable_ that the
submitter fully understands how the code works.'

All of this is a little weak and reads like 'please, if you could take
the trouble, we'd love it if you'd maybe abide by this'.

The point is to say very clearly - we won't accept slop.

For all the various arguments I've seen on here, none have amounted to
us being happy to, so I hope that it's not too egregious to ask for
that kind of emphasis.

> Cheers,
> Miguel

Thanks, Lorenzo
On Thu, Jan 8, 2026 at 8:28 PM Lorenzo Stoakes
<lorenzo.stoakes@oracle.com> wrote:
>
> 'You _can_ be more transparent by adding information like this:...'
I am not a native speaker, but my reading of that "can" was that it is
suggesting ways to be more transparent that may or may not apply in
particular cases, but the requirement of being transparent was already
established by the previous sentence:
Second, when making a contribution, be transparent about
the origin of content in cover letters and changelogs.
Which is reinforced by another imperative in the bullet point about prompts:
If code was largely generated from a single or short set of
prompts, include those prompts.
Similarly, I read those other "might"s you quote like a set of things
that could happen or not (and is not exhaustive) in particular cases
and/or depending on the maintainer etc.
At least that is my reading, and as far as I understood the TAB
discussions, the goal of this patch was to document that non-trivial
tool usage needs to be disclosed, including LLM use, and to me the
patch already did that, but perhaps the wording can be more direct.
I hope that clarifies a bit...
Cheers,
Miguel
On Fri, Jan 09, 2026 at 05:30:17PM +0100, Miguel Ojeda wrote:
> On Thu, Jan 8, 2026 at 8:28 PM Lorenzo Stoakes
> <lorenzo.stoakes@oracle.com> wrote:
> >
> > 'You _can_ be more transparent by adding information like this:...'
>
> I am not a native speaker, but my reading of that "can" was that it is
> suggesting ways to be more transparent that may or may not apply in
> particular cases, but the requirement of being transparent was already
> established by the previous sentence:
>
>     Second, when making a contribution, be transparent about
>     the origin of content in cover letters and changelogs.
>
> Which is reinforced by another imperative in the bullet point about prompts:
>
>     If code was largely generated from a single or short set of
>     prompts, include those prompts.
>
> Similarly, I read those other "might"s you quote like a set of things
> that could happen or not (and is not exhaustive) in particular cases
> and/or depending on the maintainer etc.

Right, I mean I'm not disputing the logic of it, and the document _is_
well written.

> At least that is my reading, and as far as I understood the TAB
> discussions, the goal of this patch was to document that non-trivial
> tool usage needs to be disclosed, including LLM use, and to me the
> patch already did that, but perhaps the wording can be more direct.

Yes, exactly. Really for me the whole thing is about emphasis.

The current version of my proposal is (hopefully) reaching towards
quorum as it takes into account feedback from Dave, Jens, Steven, and
others probably who I forget here (apologies) - see [0] - so I'm hoping
that this _should_ be acceptable as a means of establishing that
emphasis without disrupting the overall aims of the document?

It pleasingly is applicable to _all_ tooling and doesn't take a
'position' per se on LLMs specifically.

[0]: https://lore.kernel.org/all/1273cff8-b114-4381-bbfe-aa228ce0d20d@lucifer.local/

> I hope that clarifies a bit...

Yes indeed :) thanks for that!

> Cheers,
> Miguel

Cheers, Lorenzo
On Thu, Jan 08, 2026 at 07:28:13PM +0000, Lorenzo Stoakes wrote:
> On Thu, Jan 08, 2026 at 07:27:17PM +0100, Miguel Ojeda wrote:
> > On Thu, Jan 8, 2026 at 5:42 PM Sasha Levin <sashal@kernel.org> wrote:
> > >
> > > We already have something like this in Documentation/process/howto.rst:

Sorry, I missed here that you referenced another document.

I think it's useful to have the emphasis I mentioned in a single place
so people can be referred there as to our expectations re:
tool-generated code. People are far more likely to miss things if
located elsewhere.

So if we have emphasis on this there, it should make it even more
sensible to have emphasis here too.

Thanks, Lorenzo
On Thu, Jan 08, 2026 at 11:42:49AM -0500, Sasha Levin wrote:
> On Thu, Jan 08, 2026 at 11:56:19AM +0000, Lorenzo Stoakes wrote:
[...]
> > +As with the output of any tooling, maintainers will not tolerate 'slop' -
>
> Could you define what "slop" in the context of a kernel patch means? Clearly
> it's not just innocent error, but it's not clear to me what line needs to be
> crossed for a mistake to turn into "slop".

I accepted James's suggested alternative in this thread.

> > +you are expected to understand and to be able to defend everything you
> > +submit. If you are unable to do so, maintainers may choose to reject your
> > +series outright.
>
> We already have something like this in Documentation/process/howto.rst:
>
> "Before making any actual modifications to the Linux kernel code, it is
> imperative to understand how the code in question works."
>
> I suppose that we can restate the same here, but whats the purpose? to put it
> in front of whatever media outlets might be looking?

I feel I've already addressed this in the thread.

> --
> Thanks,
> Sasha

Thanks, Lorenzo
On Thu, 2026-01-08 at 11:56 +0000, Lorenzo Stoakes wrote:
> On Wed, Jan 07, 2026 at 04:20:04PM -0800, Dave Hansen wrote:
> > On 1/7/26 13:15, Lorenzo Stoakes wrote:
> > > Thinking LLMs are 'just another tool' is to say effectively that
> > > the kernel is immune from this. Which seems to me a silly position.
> >
> > I had a good chat with Lorenzo on IRC. I had it in my head that he
> > wanted a really different document than the one I posted. After
> > talking, it sounds like he had some much more modest changes in
> > mind. I caught him at the end of his day, but I think he's planning
> > to send out a small diff on top of what I posted so I can get a
> > better idea of what he wants to see tweaked.
>
> I enclose the suggested incremental change below.
>
> Cheers, Lorenzo
>
> ----8<----
> From ccefc4da6b929914c754c2f898b0eb17d7fb3ebd Mon Sep 17 00:00:00 2001
> From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Date: Thu, 8 Jan 2026 11:55:10 +0000
> Subject: [PATCH] suggestion
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
>  Documentation/process/generated-content.rst | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/Documentation/process/generated-content.rst b/Documentation/process/generated-content.rst
> index 917d6e93c66d..1423ed9d971d 100644
> --- a/Documentation/process/generated-content.rst
> +++ b/Documentation/process/generated-content.rst
> @@ -95,3 +95,11 @@ choose how they handle the contribution. For example, they might:
>  - Ask the submitter to explain in more detail about the contribution
>    so that the maintainer can feel comfortable that the submitter fully
>    understands how the code works.
> +
> +If tools permit you to generate series entirely automatically, expect
> +additional scrutiny.
> +
> +As with the output of any tooling, maintainers will not tolerate 'slop' -

Just delete this phrase (partly because it's very tied to a
non-standard and very recent use of the word slop, but mostly because
it doesn't add anything actionable to the reader).

> +you are expected to understand and to be able to defend everything you
> +submit. If you are unable to do so, maintainers may choose to reject your
> +series outright.

And I think the addition would apply to any tool used to generate a
patch set, whether AI or not.

Regards,

James
On Thu, Jan 08, 2026 at 08:17:09AM -0500, James Bottomley wrote:
> On Thu, 2026-01-08 at 11:56 +0000, Lorenzo Stoakes wrote:
[...]
> > +As with the output of any tooling, maintainers will not tolerate 'slop' -
>
> Just delete this phrase (partly because it's very tied to a non-
> standard and very recent use of the word slop, but mostly because it
> doesn't add anything actionable to the reader).

I mean I'm not expecting this to land given Linus's position :)

But if removing this sentence allowed the below in, sure.

However personally I think it's very important to say 'slop' here. It's
more so to make it abundantly clear that the kernel takes the position
that we don't accept it.

Nothing else here really does make that clear in my opinion; it's all
far too gently worded.

This is with an eye to press reporting also (they've already reported,
again, on Linus's position that AI tools are just tools, which I think
only helps propagate the idea that the kernel is open-for-business for
AI in general, slop or otherwise).

> > +you are expected to understand and to be able to defend everything you
> > +submit. If you are unable to do so, maintainers may choose to reject your
> > +series outright.
>
> And I think the addition would apply to any tool used to generate a
> patch set, whether AI or not.

Right, yes agreed.

> Regards,
>
> James

Cheers, Lorenzo
On Thu, 2026-01-08 at 13:56 +0000, Lorenzo Stoakes wrote:
> On Thu, Jan 08, 2026 at 08:17:09AM -0500, James Bottomley wrote:
> > On Thu, 2026-01-08 at 11:56 +0000, Lorenzo Stoakes wrote:
[...]
> > > +As with the output of any tooling, maintainers will not tolerate 'slop' -
> >
> > Just delete this phrase (partly because it's very tied to a non-
> > standard and very recent use of the word slop, but mostly because
> > it doesn't add anything actionable to the reader).
>
> I mean I'm not expecting this to land given Linus's position :)
>
> But if removing this sentence allowed the below in, sure.
>
> However personally I think it's very important to say 'slop' here.
> It's more so to make it abundantly clear that the kernel takes the
> position that we don't accept it.

Perhaps I can help clarify. You're using the word "slop" to mean output
of tools that is actually wrong ... which can happen to any tool, not
just AI. And you want any statement to include that explicitly.

I'm saying anything you can't explain won't be accepted, which, I
think, necessarily includes any output the tool gets wrong. But I don't
object to saying this in a more generic form, so how about this as the
compromise

---
+As with the output of any tooling,

The result can be incorrect or inappropriate so

+you are expected to understand and to be able to defend everything you
+submit. If you are unable to do so, maintainers may choose to reject
+your series outright.
---

Regards,

James
On Thu, Jan 08, 2026 at 10:58:08AM -0500, James Bottomley wrote:
> On Thu, 2026-01-08 at 13:56 +0000, Lorenzo Stoakes wrote:
[...]
> Perhaps I can help clarify. You're using the word "slop" to mean
> output of tools that is actually wrong ... which can happen to any
> tool, not just AI. And you want any statement to include that
> explicitly.
>
> I'm saying anything you can't explain won't be accepted, which, I
> think, necessarily includes any output the tool gets wrong. But I
> don't object to saying this in a more generic form, so how about this
> as the compromise
>
> ---
> +As with the output of any tooling,
>
> The result can be incorrect or inappropriate so

LGTM! :)

> +you are expected to understand and to be able to defend everything you
> +submit. If you are unable to do so, maintainers may choose to reject
> +your series outright.
> ---
>
> Regards,
>
> James

Cheers, Lorenzo
On 1/8/26 08:35, Lorenzo Stoakes wrote:
<snip>
>> +As with the output of any tooling,
>>
>> The result can be incorrect or inappropriate so
>
> LGTM! :)
...
I tweaked James's version a wee bit, but I think I left the message in
place. How does this hunk look?
@@ -95,3 +95,8 @@ choose how they handle the contribution. For example, they might:
- Ask the submitter to explain in more detail about the contribution
so that the maintainer can feel comfortable that the submitter fully
understands how the code works.
+
+Finally, always be prepared for tooling that produces incorrect or
+inappropriate content. Make sure you understand and are able to
+defend everything you submit. If you are unable to do so, maintainers
+may choose to reject your series outright.
On Thu, Jan 08, 2026 at 11:10:40AM -0800, Dave Hansen wrote:
> On 1/8/26 08:35, Lorenzo Stoakes wrote:
> <snip>
> >> +As with the output of any tooling,
> >>
> >> The result can be incorrect or inappropriate so
> >
> > LGTM! :)
> ...
>
> I tweaked James's version a wee bit, but I think I left the message in
> place. How does this hunk look?
>
> @@ -95,3 +95,8 @@ choose how they handle the contribution. For example, they might:
>  - Ask the submitter to explain in more detail about the contribution
>    so that the maintainer can feel comfortable that the submitter fully
>    understands how the code works.
> +
> +Finally, always be prepared for tooling that produces incorrect or
> +inappropriate content. Make sure you understand and are able to
> +defend everything you submit. If you are unable to do so, maintainers
> +may choose to reject your series outright.

I feel like this formulation waters it down so much as to lose the
emphasis which was the entire point of it.

I'm also not sure why we're losing the scrutiny part?

Something like:

+If tools permit you to generate series entirely automatically, expect
+additional scrutiny.
+
+As with the output of any tooling, the result may be incorrect or
+inappropriate, so you are expected to understand and to be able to defend
+everything you submit. If you are unable to do so, maintainers may choose
+to reject your series outright.

?
On 1/8/26 12:23 PM, Lorenzo Stoakes wrote:
>> @@ -95,3 +95,8 @@ choose how they handle the contribution. For example, they might:
>>  - Ask the submitter to explain in more detail about the contribution
>>    so that the maintainer can feel comfortable that the submitter fully
>>    understands how the code works.
>> +
>> +Finally, always be prepared for tooling that produces incorrect or
>> +inappropriate content. Make sure you understand and are able to
>> +defend everything you submit. If you are unable to do so, maintainers
>> +may choose to reject your series outright.
>
> I feel like this formulation waters it down so much as to lose the
> emphasis which was the entire point of it.
>
> I'm also not sure why we're losing the scrutiny part?
>
> Something like:
>
> +If tools permit you to generate series entirely automatically, expect
> +additional scrutiny.
> +
> +As with the output of any tooling, the result may be incorrect or
> +inappropriate, so you are expected to understand and to be able to defend
> +everything you submit. If you are unable to do so, maintainers may choose
> +to reject your series outright.

Eh, why not some variant of:

"If you are unable to do so, then don't submit the resulting changes."

Talking only for myself, I have ZERO interest in receiving code from
someone that doesn't even understand what it does. And it'd be better
to NOT waste my or anyone else's time if that's the level of the
submission.

-- 
Jens Axboe
* Jens Axboe <axboe@kernel.dk> [260108 15:54]:
> On 1/8/26 12:23 PM, Lorenzo Stoakes wrote:
[...]
> Eh, why not some variant of:
>
> "If you are unable to do so, then don't submit the resulting changes."
>
> Talking only for myself, I have ZERO interest in receiving code from
> someone that doesn't even understand what it does. And it'd be better
> to NOT waste my or anyone else's time if that's the level of the
> submission.

Yes, agreed.

If I cannot understand it and the author is clueless about the patch,
then I'm going to be way more grumpy than the wording of that
statement.

I'd assume the submitter would just get the AI to answer it anyways,
since that's fitting with the level of the submission.

Thanks,
Liam
On Thu, Jan 08, 2026 at 04:04:39PM -0500, Liam R. Howlett wrote:
> * Jens Axboe <axboe@kernel.dk> [260108 15:54]:
[...]
> > Talking only for myself, I have ZERO interest in receiving code from
> > someone that doesn't even understand what it does. And it'd be better
> > to NOT waste my or anyone else's time if that's the level of the
> > submission.
>
> Yes, agreed.
>
> If I cannot understand it and the author is clueless about the patch,
> then I'm going to be way more grumpy than the wording of that
> statement.
>
> I'd assume the submitter would just get the AI to answer it anyways,
> since that's fitting with the level of the submission.

Yes. That has happened to me. I asked the submitter, how do you know
this is true? And the v2 had a long AI-generated explanation which
quoted a spec from an AI hallucination.

I like Dave's document, but the first paragraph should be to not send
AI slop.

regards,
dan carpenter
On Fri, Jan 09, 2026 at 08:29:58AM +0300, Dan Carpenter wrote:
> On Thu, Jan 08, 2026 at 04:04:39PM -0500, Liam R. Howlett wrote:
[...]
> Yes. That has happened to me. I asked the submitter, how do you know
> this is true? And the v2 had a long AI-generated explanation which
> quoted a spec from an AI hallucination.
>
> I like Dave's document, but the first paragraph should be to not send
> AI slop.

This is the entire point of my push back here :)

I'd prefer us to be truly emphatic with a 'NO SLOP PLEASE' as the
opener and using that term, but I'm compromising because... well, you
saw Linus's position, right?

I do find it... naive to think that we won't experience this. For one,
it's already started; for another, people working on open source
projects like Postgres may have something to say, e.g. [0]...

[0]: https://mastodon.social/@AndresFreundTec/115860496055796941

Do we really want to provide a milquetoast document that is lovely and
agreeable and reading it doesn't explicitly say no slop, that _will_ be
reported on like that?

Note that Linus's position on this has been reported as essentially
'Linus says AI tools are like other tools and you are STUPID if you
think otherwise, they are FINE' - which is not what he said, but does
that matter?

Do we really truly think doing that is going to have no impact on
people sending us crap? There are a bunch of well-meaning but
less-talented people who try to do kernel stuff; we've all seen it and
dealt with it. These same people _will_ pay attention to this kind of
thing and try it on.

Yes, we can't do anything about bad faith people who'll ignore
everything. But in that case being able to point at the doc will make
life practically _easier_.

Either way I think it's important we have something vaguely emphatic
there.

Which is why I'm tiring myself out with this thread when I have a lot
of other things to do :)

> regards,
> dan carpenter

Cheers, Lorenzo
On Fri, Jan 09, 2026 at 07:54:31AM +0000, Lorenzo Stoakes wrote:
> On Fri, Jan 09, 2026 at 08:29:58AM +0300, Dan Carpenter wrote:
[...]
> > I like Dave's document, but the first paragraph should be to not send
> > AI slop.
>
> This is the entire point of my push back here :)
>
> I'd prefer us to be truly emphatic with a 'NO SLOP PLEASE' as the
> opener and using that term, but I'm compromising because... well, you
> saw Linus's position, right?

I just don't think the word "slop" should be used, because while it may
be very clear to you, and may be clearly defined in some communities,
me, I'm just guessing what you mean by it.
On Sat, Jan 10, 2026 at 09:25:36AM -0600, Serge E. Hallyn wrote:
> I just don't think the word "slop" should be used, because while it may
> be very clear to you, and may be clearly defined in some communities,
> me, I'm just guessing what you mean by it.

https://www.merriam-webster.com/wordplay/word-of-the-year

Picked up by AP and widely reported on by news organisations, e.g.:

https://www.cbc.ca/news/entertainment/slop-word-of-the-year-9.7015916
https://www.pbs.org/newshour/nation/merriam-websters-word-of-the-year-for-2025-is-ais-slop
https://arstechnica.com/ai/2025/12/merriam-webster-crowns-slop-word-of-the-year-as-ai-content-floods-internet/

It's widely known.
On Sat, 2026-01-10 at 15:52 +0000, Matthew Wilcox wrote: > On Sat, Jan 10, 2026 at 09:25:36AM -0600, Serge E. Hallyn wrote: > > I just don't think the word "slop" should be used, because while it > > may be very clear to you, and may be clearly defined in some > > communities, me, I'm just guessing what you mean by it. > > https://www.merriam-webster.com/wordplay/word-of-the-year Just because it's the word of the year this year doesn't mean people will remember what it means even after a few years. "Rawdog" was the OED word of the year in 2024 ... that's losing its resonance and who of the under 30 crowd knows what the 2000 word of the year "chad" means? The point of the formulation I proposed (without mentioning slop) was to be generic and retain its meaning over time. Regards, James
On Sat, Jan 10, 2026 at 11:02:19AM -0500, James Bottomley wrote: > On Sat, 2026-01-10 at 15:52 +0000, Matthew Wilcox wrote: > > On Sat, Jan 10, 2026 at 09:25:36AM -0600, Serge E. Hallyn wrote: > > > I just don't think the word "slop" should be used, because while it > > > may be very clear to you, and may be clearly defined in some > > > communities, me, I'm just guessing what you mean by it. > > > > https://www.merriam-webster.com/wordplay/word-of-the-year > > Just because it's the word of the year this year doesn't mean people > will remember what it means even after a few years. "Rawdog" was the > OED word of the year in 2024 ... that's losing its resonance and who of > the under 30 crowd knows what the 2000 word of the year "chad" means? > The point of the formulation I proposed (without mentioning slop) was > to be generic and retain its meaning over time. Slop means you produced the patches in such quantity that you don't have time to review the output before sending it. This isn't a totally new thing, people have used clang-format to reformat a whole driver and it's clear they didn't look at the output. Even for bug reports, the truth is that no one reads mass bug reports. I occasionally send mass bug reports if I create a new warning. No one ever reads them. regards, dan carpenter
On Sat, 10 Jan 2026 11:02:19 -0500 James Bottomley <James.Bottomley@HansenPartnership.com> wrote: > On Sat, 2026-01-10 at 15:52 +0000, Matthew Wilcox wrote: > > On Sat, Jan 10, 2026 at 09:25:36AM -0600, Serge E. Hallyn wrote: > > > I just don't think the word "slop" should be used, because while it > > > may be very clear to you, and may be clearly defined in some > > > communities, me, I'm just guessing what you mean by it. > > > > https://www.merriam-webster.com/wordplay/word-of-the-year > > Just because it's the word of the year this year doesn't mean people > will remember what it means even after a few years. "Rawdog" was the > OED word of the year in 2024 ... that's losing its resonance and who of > the under 30 crowd knows what the 2000 word of the year "chad" means? > The point of the formulation I proposed (without mentioning slop) was > to be generic and retain its meaning over time. I agree with James here. "Slop" may be well known today, but it is still a slang word. It may easily lose its meaning in the future, and I don't think "slang" words should be used in the document. -- Steve
On Fri, Jan 09, 2026 at 07:54:31AM +0000, Lorenzo Stoakes wrote: > On Fri, Jan 09, 2026 at 08:29:58AM +0300, Dan Carpenter wrote: > > On Thu, Jan 08, 2026 at 04:04:39PM -0500, Liam R. Howlett wrote: > > > * Jens Axboe <axboe@kernel.dk> [260108 15:54]: > > > > On 1/8/26 12:23 PM, Lorenzo Stoakes wrote: > > > > >> @@ -95,3 +95,8 @@ choose how they handle the contribution. For example, they might: > > > > >> - Ask the submitter to explain in more detail about the contribution > > > > >> so that the maintainer can feel comfortable that the submitter fully > > > > >> understands how the code works. > > > > >> + > > > > >> +Finally, always be prepared for tooling that produces incorrect or > > > > >> +inappropriate content. Make sure you understand and to be able to > > > > >> +defend everything you submit. If you are unable to do so, maintainers > > > > >> +may choose to reject your series outright. > > > > >> > > > > > > > > > > I feel like this formulation waters it down so much as to lose the emphasis > > > > > which was the entire point of it. > > > > > > > > > > I'm also not sure why we're losing the scrutiny part? > > > > > > > > > > Something like: > > > > > > > > > > +If tools permit you to generate series entirely automatically, expect > > > > > +additional scrutiny. > > > > > + > > > > > +As with the output of any tooling, the result maybe incorrect or > > > > > +inappropriate, so you are expected to understand and to be able to defend > > > > > +everything you submit. If you are unable to do so, maintainers may choose > > > > > +to reject your series outright. > > > > > > > > Eh, why not some variant of: > > > > > > > > "If you are unable to do so, then don't submit the resulting changes." > > > > > > > > Talking only for myself, I have ZERO interest in receiving code from > > > > someone that doesn't even understand what it does. And it'd be better to > > > > NOT waste my or anyone elses time if that's the level of the submission. > > > > > > Yes, agreed. 
> > > > > > > Yeah. Me too. > > > > > If I cannot understand it and the author is clueless about the patch, > > > then I'm going to be way more grumpy than the wording of that statement. > > > > > > I'd assume the submitter would just get the ai to answer it anyways > > > since that's fitting with the level of the submission. > > > > Yes. That has happened to me. I asked the submitter how do you know > > this is true? And the v2 had a long AI generated explanation which quoted > > a spec from an AI hallucination. > > > > I like Dave's document but the first paragraph should be to not send AI > > slop. > > This is the entire point of my push back here :) > > I'd prefer us to be truly emphatic with a 'NO SLOP PLEASE' as the opener and > using that term, but I'm compromising because... well you saw Linus's position > right? > > I do find it... naive to think that we won't experience this. For one it's > already started, for another people working on open source projects like > Postgres may have something to say e.g. [0]... > > [0]:https://mastodon.social/@AndresFreundTec/115860496055796941 > > Do we really want to provide a milquetoast document that is lovely and agreeable > and reading it doesn't explicitly say no slop that _will_ be reported on like that? > > Note that Linus's position on this has been reported as essentially 'Linus says > AI tools are like other tools and you are STUPID if you think otherwise they are > FINE' - which is not what he said, but does that matter? > > Do we really truly think doing that is going to have no impact on people sending > us crap? There are a bunch of well-meaning but less-talented people who try to > do kernel stuff, we've all seen it and dealt with it. These same people _will_ > pay attention to this kind of thing and try it on. > > Yes we can't do anything about bad faith people who'll ignore everything. But in > that case being able to point at the doc will make life practically _easier_. 
> > Either way I think it's important we have something vaguely emphatic there. > > Which is why I'm tiring myself out with this thread when I have a lot of other > things to do :) Thank you for that. As a lurker in this mail thread, I really appreciate your efforts as they're saving the time I would need to argue as strongly as you do :-) While I agree with the argument that kernel documentation should not cover every single hypothetical case that one could come up with, the issue at hand here is real (based on the multiple people who have replied saying they have seen it happen), and I don't think anyone expects the problem to disappear magically given the industry trend. It is also absolutely true that actors with questionable ethics will not care about the documentation. I do see value in being able to point developers acting in good faith to the rules, but an even more important point in my opinion is the message your proposal gives to maintainers. On a side note, I wonder if this is symptomatic of an erosion of trust in this conflictual world, with some maintainers increasingly fearing they will be forced or overridden. -- Regards, Laurent Pinchart
On Fri, 9 Jan 2026 10:54:46 +0200 Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote: > > Which is why I'm tiring myself out with this thread when I have a lot of other > > things to do :) > > Thank you for that. As a lurker in this mail thread, I really appreciate > your efforts as they're saving the time I would need to argue as > strongly as you do :-) And even though I'm arguing with Lorenzo, I appreciate him giving his feedback. I'm not at all frustrated with him, and his arguments help me understand my own ideas about this document. > > While I agree with the argument that kernel documentation should not > cover every single hypothetical case that one could come up with, the > issue at hand here is real (based on the multiple people who have > replied saying they have seen it happen), and I don't think anyone > expects the problem to disappear magically given the industry trend. > > It is also absolutely true that actors with questionable ethics will not > care about the documentation. I do see value in being able to point > developers acting in good faith to the rules, but an even more important > point in my opinion is the message your proposal gives to maintainers. I'm actually not against a document that is all about AI slop. I'm just against hijacking this document into being that. This wasn't the purpose of this document. This started in the TAB, where we began discussing it (and I was supposed to be the one to write the first version, but thankfully Dave did a great job of getting it going). The focus was to document what we currently do in practice when it comes to tool-generated content. Notice that the subject of this document doesn't even mention AI. I personally (and I hope others do too) want to keep this document focused on transparency when it comes to tool-generated content, which also includes testing and such. Now, in the future there may be a need for a harsher document to cover AI slop. I just don't want it to be this document. 
I don't think AI is just another tool, but in this document it is, as the focus was to talk about all tooling that generates patches (which is everything from sed scripts to AI). I don't want this document to be focused on AI at all. If you want something to point to when you receive AI slop, create a separate document that is for that purpose only. It will keep this document clearer and also be more useful to the one that needs to read the AI slop document, as it will be explicitly for them. -- Steve
On Fri, Jan 9, 2026 at 4:50 PM Steven Rostedt <rostedt@goodmis.org> wrote: > > In the TAB, where we started discussing this (and I was > supposed to be the one that wrote the first version, but thankfully Dave > did a great job at getting it going). The focus was to be to document what > we currently do in practice when it comes to tool-generated content. Yes, that matches my understanding of the TAB discussions. Cheers, Miguel
On Fri, Jan 09, 2026 at 10:51:04AM -0500, Steven Rostedt wrote: > On Fri, 9 Jan 2026 10:54:46 +0200 > Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote: > > > > Which is why I'm tiring myself out with this thread when I have a lot of other > > > things to do :) > > > > Thank you for that. As a lurker in this mail thread, I really appreciate > > your efforts as they're saving the time I would need to argue as > > strongly as you do :-) > > And even though I'm arguing with Lorenzo, I appreciate him giving his > feedback. I'm not at all frustrated with him, and his arguments help me > understand my own ideas about this document. And to reciprocate - I'm not frustrated or upset with you or anybody else here or even Linus ;) I see this as healthy debate and that's all I wanted here. Civil disagreement is a vital part of a healthy community IMO! > > > > > While I agree with the argument that kernel documentation should not > > cover every single hypothetical case that one could come up with, the > > issue at hand here is real (based on the multiple people who have > > replied saying they have seen it happen), and I don't think anyone > > expects the problem to disappear magically given the industry trend. > > > > It is also absolutely true that actors with questionable ethics will not > > care about the documentation. I do see value in being able to point > > developers acting in good faith to the rules, but an even more important > > point in my opinion is the message your proposal gives to maintainers. > > I'm actually not against a document that is all about AI slop. I'm just > against hijacking this document into being that. This wasn't the purpose of > this document. In the TAB, where we started discussing this (and I was > supposed to be the one that wrote the first version, but thankfully Dave > did a great job at getting it going). The focus was to be to document what > we currently do in practice when it comes to tool-generated content. 
Notice > that the subject of this document doesn't even mention AI. > > I personally (and I hope others do too) want to keep this document focused > on transparency when it comes to tool-generated content which also includes > testing and such. > > Now, in the future there may be a need for a harsher document to cover AI > slop. I just don't want it to be this document. > > I don't think AI is just another tool, but in this document it is, as the > focus was to talk about all tooling that generates patches (which is > everything from sed scripts to AI). I don't want this document to be > focused on AI at all. > > If you want something to point to when you receive AI slop, create a > separate document that is for that purpose only. It will keep this document > clearer and also be more useful to the one that needs to read the AI slop > document, as it will be explicitly for them. Sure to the above, but it seems (...?) you are ok with my addition to the document which hopefully is tempered enough to provide the emphasis I'm looking for (note I say - all tools - even if LLMs are the most obvious example) - without being so strident as to seem out of scope? > > -- Steve Cheers, Lorenzo
On Fri, 9 Jan 2026 15:55:01 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > Sure to the above, but it seems (...?) you are ok with my addition to the > document which hopefully is tempered enough to provide the emphasis I'm > looking for (note I say - all tools - even if LLMs are the most obvious > exmaple) - without being so strident as to seem out of scope? Yes I liked the last example. As I stated, this discussion helped me understand the issues I had with what you wanted to add. I wanted this document to be just as applicable to checkpatch and sed scripts as it is to LLMs. My fear was it was becoming too focused on AI where those that are submitting checkpatch and coccinelle scripts will think this doesn't apply to them. -- Steve
On 1/8/26 11:23, Lorenzo Stoakes wrote: > I'm also not sure why we're losing the scrutiny part? > > Something like: > > +If tools permit you to generate series entirely automatically, expect > +additional scrutiny. The reason I resisted integrating this is it tries to draw too specific a line in the sand. Someone could rightfully read that and say they don't expect additional scrutiny because the entire series was not automatically generated. What I want to say is: the more automation your tool provides, the more scrutiny you get. Maybe: Expect increasing amounts of maintainer scrutiny on contributions that were increasingly generated by tooling.
On Thu, 8 Jan 2026 11:50:29 -0800 Dave Hansen <dave@sr71.net> wrote: > On 1/8/26 11:23, Lorenzo Stoakes wrote: > > I'm also not sure why we're losing the scrutiny part? > > > > Something like: > > > > +If tools permit you to generate series entirely automatically, expect > > +additional scrutiny. > > The reason I resisted integrating this is it tries to draw too specific > a line in the sand. Someone could rightfully read that and say they > don't expect additional scrutiny because the entire series was not > automatically generated. > > What I want to say is: the more automation your tool provides, the more > scrutiny you get. Maybe: > > Expect increasing amounts of maintainer scrutiny on > contributions that were increasingly generated by tooling. Honestly that just sounds "grumpy" to me ;-) How about something like: All tooling is prone to make mistakes that differ from mistakes generated by humans. A maintainer may push back harder on submissions that were entirely or partially generated by tooling and expect the submitter to demonstrate that even the generated code was verified to be accurate. -- Steve
+cc Jens as I reference him On Thu, Jan 08, 2026 at 03:14:37PM -0500, Steven Rostedt wrote: > On Thu, 8 Jan 2026 11:50:29 -0800 > Dave Hansen <dave@sr71.net> wrote: > > > On 1/8/26 11:23, Lorenzo Stoakes wrote: > > > I'm also not sure why we're losing the scrutiny part? > > > > > > Something like: > > > > > > +If tools permit you to generate series entirely automatically, expect > > > +additional scrutiny. > > > > The reason I resisted integrating this is it tries to draw too specific > > a line in the sand. Someone could rightfully read that and say they > > don't expect additional scrutiny because the entire series was not > > automatically generated. I mean you are making an absolutely valid point, I'd say that'd be a rather silly conclusion to take, but we have to be wary of 'lawyering' the doc here. > > > > What I want to say is: the more automation your tool provides, the more > > scrutiny you get. Maybe: > > > > Expect increasing amounts of maintainer scrutiny on > > contributions that were increasingly generated by tooling. > > Honestly that just sounds "grumpy" to me ;-) > > How about something like: > > All tooling is prone to make mistakes that differ from mistakes > generated by humans. A maintainer may push back harder on > submissions that were entirely or partially generated by tooling > and expect the submitter to demonstrate that even the generated > code was verified to be accurate. > > -- Steve I don't really read that as grumpy, I understand wanting to be agreeable but sometimes it's appropriate to be emphatic, which is the entire purpose of this amendment. Taking into account Jens's input too: +If tools permit you to generate series automatically, expect +additional scrutiny in proportion to how much of it was generated. + +As with the output of any tooling, the result may be incorrect or +inappropriate, so you are expected to understand and to be able to defend +everything you submit. 
If you are unable to do so, then don't submit the +resulting changes. + +If you do so anyway, maintainers are entitled to reject your series without +detailed review. Does this work? As per Dan later in this thread I do truly wish we could have (yes in all caps) 'NO SLOP PLEASE'. But I am compromising on that ;) Cheers, Lorenzo
On Fri, Jan 09, 2026 at 07:48:35AM +0000, Lorenzo Stoakes wrote: > +cc Jens as reference him > > On Thu, Jan 08, 2026 at 03:14:37PM -0500, Steven Rostedt wrote: > > On Thu, 8 Jan 2026 11:50:29 -0800 > > Dave Hansen <dave@sr71.net> wrote: > > > > > On 1/8/26 11:23, Lorenzo Stoakes wrote: > > > > I'm also not sure why we're losing the scrutiny part? > > > > > > > > Something like: > > > > > > > > +If tools permit you to generate series entirely automatically, expect > > > > +additional scrutiny. > > > > > > The reason I resisted integrating this is it tries to draw too specific > > > a line in the sand. Someone could rightfully read that and say they > > > don't expect additional scrutiny because the entire series was not > > > automatically generated. > > I mean you are making an absolutely valid point, I'd say that'd be a rather > silly conclusion to take, but we have to be wary of 'lawyering' the doc > here. > > > > > > > What I want to say is: the more automation your tool provides, the more > > > scrutiny you get. Maybe: > > > > > > Expect increasing amounts of maintainer scrutiny on > > > contributions that were increasingly generated by tooling. > > > > Honestly that just sounds "grumpy" to me ;-) > > > > How about something like: > > > > All tooling is prone to make mistakes that differ from mistakes > > generated by humans. A maintainer may push back harder on > > submissions that were entirely or partially generated by tooling > > and expect the submitter to demonstrate that even the generated > > code was verified to be accurate. > > > > -- Steve > > I don't really read that as grumpy, I understand wanting to be agreeable > but sometimes it's appropriate to be emphatic, which is the entire purpose > of this amendment. > > Taking into account Jens's input too: > > +If tools permit you to generate series automatically, expect > +additional scrutiny in proportion to how much of it was generated. 
> + > +As with the output of any tooling, the result maybe incorrect or > +inappropriate, so you are expected to understand and to be able to defend > +everything you submit. If you are unable to do so, then don't submit the > +resulting changes. > + > +If you do so anyway, maintainers are entitled to reject your series without > +detailed review. This is too subtle. In real life if we suspect a patchset is AI Slop, then we're going to reject the whole thing immediately. No one is going to review all fifteen patches one by one as if we're searching through monkey poo for edible grains of corn. The AI slop patches I've seen were not bad actors. Someone saw a TODO in the file and thought that AI could solve it. The patch compiled, it was formatted correctly and the commit message sounded confident so they sent it. To me the audience for this is maybe a team working on AI and they don't have any kernel developers on staff so they assume they're being helpful sending unreviewed patches. The message should be that every patch needs to be reviewed carefully before it is sent upstream. I've been asked to review patches like this in the past. Get outside help if you need to, but every patch needs to be reviewed. regards, dan carpenter
Dan, thanks for taking care of this. My overall not-strongly-held take is that we shouldn't try to be overly proscriptive at this stage. Wait and see if a problematic pattern emerges and then deal with it. But my main reason for weighing in: I haven't yet seen evidence that the LLMs produce useful kernel changes, but AI is looking to be useful at finding bugs. If an AI-generated bug report comes in the form of a purported code fix then it's "thanks for the bug report", delete the email then get in and fix the issue in our usual way. As we work through these issues, please let's not accidentally do anything which impedes our ability to receive AI-generated bug reports. If that means having to deal with poor fixes for those bugs then so be it - the benefit of the bug report outweighs the cost of discarding the purported fix.
On Fri, 9 Jan 2026 10:34:35 -0800 Andrew Morton <akpm@linux-foundation.org> wrote: > As we work through these issues, please let's not accidentally do > anything which impedes our ability to receive AI-generated bug reports. > If that means having to deal with poor fixes for those bugs then so be > it - the benefit of the bug report outweighs the cost of discarding the > purported fix. I agree with this statement. I just said that I find AI a much better bug finder than code creator: https://lore.kernel.org/all/20260109111929.2010949e@gandalf.local.home/ -- Steve
On Fri, Jan 09, 2026 at 02:00:39PM +0300, Dan Carpenter wrote: > On Fri, Jan 09, 2026 at 07:48:35AM +0000, Lorenzo Stoakes wrote: > > +cc Jens as reference him > > > > On Thu, Jan 08, 2026 at 03:14:37PM -0500, Steven Rostedt wrote: > > > On Thu, 8 Jan 2026 11:50:29 -0800 > > > Dave Hansen <dave@sr71.net> wrote: > > > > > > > On 1/8/26 11:23, Lorenzo Stoakes wrote: > > > > > I'm also not sure why we're losing the scrutiny part? > > > > > > > > > > Something like: > > > > > > > > > > +If tools permit you to generate series entirely automatically, expect > > > > > +additional scrutiny. > > > > > > > > The reason I resisted integrating this is it tries to draw too specific > > > > a line in the sand. Someone could rightfully read that and say they > > > > don't expect additional scrutiny because the entire series was not > > > > automatically generated. > > > > I mean you are making an absolutely valid point, I'd say that'd be a rather > > silly conclusion to take, but we have to be wary of 'lawyering' the doc > > here. > > > > > > > > > > What I want to say is: the more automation your tool provides, the more > > > > scrutiny you get. Maybe: > > > > > > > > Expect increasing amounts of maintainer scrutiny on > > > > contributions that were increasingly generated by tooling. > > > > > > Honestly that just sounds "grumpy" to me ;-) > > > > > > How about something like: > > > > > > All tooling is prone to make mistakes that differ from mistakes > > > generated by humans. A maintainer may push back harder on > > > submissions that were entirely or partially generated by tooling > > > and expect the submitter to demonstrate that even the generated > > > code was verified to be accurate. > > > > > > -- Steve > > > > I don't really read that as grumpy, I understand wanting to be agreeable > > but sometimes it's appropriate to be emphatic, which is the entire purpose > > of this amendment. 
> > > > Taking into account Jens's input too: > > > > +If tools permit you to generate series automatically, expect > > +additional scrutiny in proportion to how much of it was generated. > > + > > +As with the output of any tooling, the result maybe incorrect or > > +inappropriate, so you are expected to understand and to be able to defend > > +everything you submit. If you are unable to do so, then don't submit the > > +resulting changes. > > + > > +If you do so anyway, maintainers are entitled to reject your series without > > +detailed review. > > This is too subtle. In real life if we suspect a patchset is AI Slop, > then we're going to reject the whole thing immediately. No one is > going to review all fifteen patches one by one as if we're searching > through monkey poo for edible grains of corn. I'm trying to compromise as the general direction on this document is to be very soft (see the suggested edits so far). I get why, but the entire purpose of this amendment is to put emphasis and really to stand up as a community and to say clearly this isn't something we want. > > The AI slop patches I've seen were not bad actors. Someone saw a > TODO in the file and thought that AI could solve it. The patch > compiled, it was formatted correctly and the commit message sounded > confident so they sent it. Yes exactly this. Exactly. I've said it elsewhere, but: a. People who have good intentions who will take this as a green light to just send out fully LLM generated stuff. b. Press coverage (it's already happening) will essentially signal it's a green light on this. For e.g.: https://www.phoronix.com/news/Torvalds-Linux-Kernel-AI-Slop https://www.theregister.com/2026/01/08/linus_versus_llms_ai_slop_docs/?td=rt-3a > > To me the audience for this is maybe a team working on AI and they > don't have any kernel developers on staff so they assume they're being > helpful sending unreviewed patches. 
The message should be that every > patch needs to be reviewed carefully before it is sent upstream. I've > been asked to review patches like this in the past. Get outside help > if you need to, but every patch needs to be reviewed. Yes exactly. But also it's useful when dealing even with bad actors to point at the community _actually taking a position_. And frankly on waiting for it to 'get worse' (i.e. to get like basically the rest of open source) - I have little faith the document really will be updated to say anything forthright at least at any speed, and by then it'll be too little too late. The idea the kernel community taking a position doesn't have any impact is simply false. I think far too much of the thinking here is in terms of how computers work, and too little about how people do. > > regards, > dan carpenter Cheers, Lorenzo
On Fri, 9 Jan 2026 11:25:57 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > I don't really read that as grumpy, I understand wanting to be agreeable > > > but sometimes it's appropriate to be emphatic, which is the entire purpose > > > of this amendment. > > > > > > Taking into account Jens's input too: > > > > > > +If tools permit you to generate series automatically, expect > > > +additional scrutiny in proportion to how much of it was generated. > > > + > > > +As with the output of any tooling, the result maybe incorrect or > > > +inappropriate, so you are expected to understand and to be able to defend > > > +everything you submit. If you are unable to do so, then don't submit the > > > +resulting changes. > > > + > > > +If you do so anyway, maintainers are entitled to reject your series without > > > +detailed review. I like it. > > > > This is too subtle. In real life if we suspect a patchset is AI Slop, > > then we're going to reject the whole thing immediately. No one is > > going to review all fifteen patches one by one as if we're searching > > through monkey poo for edible grains of corn. I'll repeat here what I mentioned in my other email. Those that send the slop are NOT GOING TO READ THIS. The ones that are going to read this are the ones trying to do the right thing. I don't think this is too subtle. It basically tells honest contributors what to expect. It doesn't have to be a "Do this or else!" document. > > I'm trying to compromise as the general direction on this document is to be > very soft (see the suggested edits so far). > > I get why, but the entire purpose of this amendment is to put emphasis and > really to stand up as a community and to say clearly this isn't something > we want. As I mentioned before. This is to clarify what we expect. Some people may be harsher on AI slop than others. We don't need to write this document in the tone of those that hate AI slop the most. 
I want the tone to be aimed at people who want to know how to submit something. Not a tone at those that are going to be doing it wrong *because they didn't read any documents*. > > > > > The AI slop patches I've seen were not bad actors. Someone saw a > > TODO in the file and thought that AI could solve it. The patch > > compiled, it was formatted correctly and the commit message sounded > > confident so they sent it. > > Yes exactly this. Exactly. > > I've said it elsewhere, but: > > a. People who have good intentions who will take this as a green light to > just send out fully LLM generated stuff. I'm pretty sure this document does not express that. Even when being more "soft". > b. Press coverage (it's already happening) will essentially signal it's a > green light on this. > > For e.g.: > https://www.phoronix.com/news/Torvalds-Linux-Kernel-AI-Slop > https://www.theregister.com/2026/01/08/linus_versus_llms_ai_slop_docs/?td=rt-3a Reading the comments appears to show that most people think AI is mostly over hyped. > > > > To me the audience for this is maybe a team working on AI and they > > don't have any kernel developers on staff so they assume they're being > > helpful sending unreviewed patches. The message should be that every > > patch needs to be reviewed carefully before it is sent upstream. I've > > been asked to review patches like this in the past. Get outside help > > if you need to, but every patch needs to be reviewed. And those people are exactly who will likely not read this document! > > Yes exactly. > > But also it's useful when dealing even with bad actors to point at the > community _actually taking a postiion_. As I stated before. This wasn't the purpose of the document. > > And frankly on waiting for it to 'get worse' (i.e. to get like basically > the rest of open source) - I have little faith the document really will be > updated to say anything forthright at least at any speed, and by then it'll > be too little too late. 
Honestly, if it gets worse, I would suggest creating a separate document specifically about AI. This document is just writing down the unwritten rules we already have for tool-generated content. This document includes coccinelle and checkpatch. If we need an "AI slop go away!" document, that should be a separate one. Feel free to create that and submit an RFC ;-) -- Steve > > The idea that the kernel community taking a position doesn't have any impact is > simply false. > > I think far too much thinking in terms of how computers work is going on here, > and too little about how people are.
On Fri, Jan 09, 2026 at 10:39:24AM -0500, Steven Rostedt wrote: > On Fri, 9 Jan 2026 11:25:57 +0000 > Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > > > I don't really read that as grumpy, I understand wanting to be agreeable > > > > but sometimes it's appropriate to be emphatic, which is the entire purpose > > > > of this amendment. > > > > > > > > Taking into account Jens's input too: > > > > > > > > +If tools permit you to generate series automatically, expect > > > > +additional scrutiny in proportion to how much of it was generated. > > > > + > > > > +As with the output of any tooling, the result may be incorrect or > > > > +inappropriate, so you are expected to understand and to be able to defend > > > > +everything you submit. If you are unable to do so, then don't submit the > > > > +resulting changes. > > > > + > > > > +If you do so anyway, maintainers are entitled to reject your series without > > > > +detailed review. > > I like it. Hmm, you like my version but then below argue against every point I make in favour of it? I'm confused? Did you mean to say you liked a suggested other revision or... really this one? :) If so and Dave likes it too then LGTM, pending any Linus/other veto. For the rest of your email - a lawyer would say 'asked and answered'. I've responded to every point of yours there about 3 times apiece across the thread and I don't think it's a good use of time to loop around on things! Cheers, Lorenzo
On 1/9/26 07:48, Lorenzo Stoakes wrote: >>>>> +If tools permit you to generate series automatically, expect >>>>> +additional scrutiny in proportion to how much of it was generated. >>>>> + >>>>> +As with the output of any tooling, the result may be incorrect or >>>>> +inappropriate, so you are expected to understand and to be able to defend >>>>> +everything you submit. If you are unable to do so, then don't submit the >>>>> +resulting changes. >>>>> + >>>>> +If you do so anyway, maintainers are entitled to reject your series without >>>>> +detailed review. >> I like it. > Hmm, you like my version but then below argue against every point I make in > favour of it? I'm confused? > > Did you mean to say you liked a suggested other revision or... really this > one? 🙂 > > If so and Dave likes it too then LGTM, pending any Linus/other veto. Looks good to me too! I'll try to get a v5 out with this later today.
On Fri, 9 Jan 2026 15:48:49 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > On Fri, Jan 09, 2026 at 10:39:24AM -0500, Steven Rostedt wrote: > > On Fri, 9 Jan 2026 11:25:57 +0000 > > Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > > > > > I don't really read that as grumpy, I understand wanting to be agreeable > > > > > but sometimes it's appropriate to be emphatic, which is the entire purpose > > > > > of this amendment. > > > > > > > > > > Taking into account Jens's input too: > > > > > > > > > > +If tools permit you to generate series automatically, expect > > > > > +additional scrutiny in proportion to how much of it was generated. > > > > > + > > > > > +As with the output of any tooling, the result may be incorrect or > > > > > +inappropriate, so you are expected to understand and to be able to defend > > > > > +everything you submit. If you are unable to do so, then don't submit the > > > > > +resulting changes. > > > > > + > > > > > +If you do so anyway, maintainers are entitled to reject your series without > > > > > +detailed review. > > > > I like it. > > Hmm, you like my version but then below argue against every point I make in > favour of it? I'm confused? I don't see how it's contradictory to what I expressed later. > > Did you mean to say you liked a suggested other revision or... really this > one? :) I like this one, as it relates to any automated tooling (checkpatch and coccinelle too, not just AI). Because I do believe this is documenting exactly what we do today and have been doing for years. I always scrutinize tooling more than when someone wrote it. Because using tooling myself, there's always that strange corner case that causes the tooling to do something you didn't expect. Whereas humans usually make the mistakes that you do expect ;-) > > If so and Dave likes it too then LGTM, pending any Linus/other veto. > > For the rest of your email - a lawyer would say 'asked and answered'. 
> I've responded to every point of yours there about 3 times apiece across the > thread and I don't think it's a good use of time to loop around on things! I believe that you think I disagree more than what I actually do disagree with ;-) -- Steve
On Fri, Jan 09, 2026 at 11:03:47AM -0500, Steven Rostedt wrote: > On Fri, 9 Jan 2026 15:48:49 +0000 > Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > On Fri, Jan 09, 2026 at 10:39:24AM -0500, Steven Rostedt wrote: > > > On Fri, 9 Jan 2026 11:25:57 +0000 > > > Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > > > > > > > I don't really read that as grumpy, I understand wanting to be agreeable > > > > > > but sometimes it's appropriate to be emphatic, which is the entire purpose > > > > > > of this amendment. > > > > > > > > > > > > Taking into account Jens's input too: > > > > > > > > > > > > +If tools permit you to generate series automatically, expect > > > > > > +additional scrutiny in proportion to how much of it was generated. > > > > > > + > > > > > > +As with the output of any tooling, the result may be incorrect or > > > > > > +inappropriate, so you are expected to understand and to be able to defend > > > > > > +everything you submit. If you are unable to do so, then don't submit the > > > > > > +resulting changes. > > > > > > + > > > > > > +If you do so anyway, maintainers are entitled to reject your series without > > > > > > +detailed review. > > > > > > I like it. > > > > Hmm, you like my version but then below argue against every point I make in > > favour of it? I'm confused? > > I don't see how it's contradictory to what I expressed later. Haha I should stop arguing with you then and just nod and shake your hand ;) OK then I'm good with the above! Dave - that LGTY? > > > > > Did you mean to say you liked a suggested other revision or... really this > > one? :) > > I like this one, as it relates to any automated tooling (checkpatch and > coccinelle too, not just AI). Because I do believe this is documenting > exactly what we do today and have been doing for years. > > I always scrutinize tooling more than when someone wrote it. 
> Because using tooling myself, there's always that strange corner case that causes the > tooling to do something you didn't expect. Whereas humans usually make the > mistakes that you do expect ;-) Sure, well, it's actually somewhat unexpected to me that this happens to cover all of that nicely too. Obviously the same thing applies to _any_ tooling! > > > > > If so and Dave likes it too then LGTM, pending any Linus/other veto. > > > > For the rest of your email - a lawyer would say 'asked and answered'. > > I've responded to every point of yours there about 3 times apiece across the > > thread and I don't think it's a good use of time to loop around on things! > > I believe that you think I disagree more than what I actually do disagree with ;-) *Nods and shakes hand* ;) > > -- Steve > Cheers, Lorenzo
On Thu, Jan 08, 2026 at 03:14:37PM -0500, Steven Rostedt wrote: > On Thu, 8 Jan 2026 11:50:29 -0800 > Dave Hansen <dave@sr71.net> wrote: > > > On 1/8/26 11:23, Lorenzo Stoakes wrote: > > > I'm also not sure why we're losing the scrutiny part? > > > > > > Something like: > > > > > > +If tools permit you to generate series entirely automatically, expect > > > +additional scrutiny. > > > > The reason I resisted integrating this is it tries to draw too specific > > a line in the sand. Someone could rightfully read that and say they > > don't expect additional scrutiny because the entire series was not > > automatically generated. > > > > What I want to say is: the more automation your tool provides, the more > > scrutiny you get. Maybe: > > > > Expect increasing amounts of maintainer scrutiny on > > contributions that were increasingly generated by tooling. > > Honestly that just sounds "grumpy" to me ;-) > > How about something like: > > All tooling is prone to make mistakes that differ from mistakes > generated by humans. A maintainer may push back harder on > submissions that were entirely or partially generated by tooling > and expect the submitter to demonstrate that even the generated > code was verified to be accurate. > > -- Steve It's better to have a grumpy document, instead of grumpy emails. We need it to sound grumpy and it needs to be the first paragraph. AI Slop: AI can generate a ton of patches automatically which creates a burden on the upstream maintainers. The maintainers need to review every line of every patch and they expect the submitters to demonstrate that even the generated code was verified to be accurate. If you are unsure of whether a patch is appropriate then do not send it. NO AI SLOP! Of course, sensible people don't need to be told this stuff, but there are well intentioned people who need it explained. regards, dan carpenter
On Fri, Jan 09, 2026 at 08:42:56AM +0300, Dan Carpenter wrote: > On Thu, Jan 08, 2026 at 03:14:37PM -0500, Steven Rostedt wrote: > > On Thu, 8 Jan 2026 11:50:29 -0800 > > Dave Hansen <dave@sr71.net> wrote: > > > > > On 1/8/26 11:23, Lorenzo Stoakes wrote: > > > > I'm also not sure why we're losing the scrutiny part? > > > > > > > > Something like: > > > > > > > > +If tools permit you to generate series entirely automatically, expect > > > > +additional scrutiny. > > > > > > The reason I resisted integrating this is it tries to draw too specific > > > a line in the sand. Someone could rightfully read that and say they > > > don't expect additional scrutiny because the entire series was not > > > automatically generated. > > > > > > What I want to say is: the more automation your tool provides, the more > > > scrutiny you get. Maybe: > > > > > > Expect increasing amounts of maintainer scrutiny on > > > contributions that were increasingly generated by tooling. > > > > Honestly that just sounds "grumpy" to me ;-) > > > > How about something like: > > > > All tooling is prone to make mistakes that differ from mistakes > > generated by humans. A maintainer may push back harder on > > submissions that were entirely or partially generated by tooling > > and expect the submitter to demonstrate that even the generated > > code was verified to be accurate. > > > > -- Steve > > It's better to have a grumpy document, instead of grumpy emails. We > need it to sound grumpy and it needs to be the first paragraph. > > AI Slop: AI can generate a ton of patches automatically which creates a > burden on the upstream maintainers. The maintainers need to review > every line of every patch and they expect the submitters to demonstrate > that even the generated code was verified to be accurate. If you are > unsure of whether a patch is appropriate then do not send it. NO AI > SLOP! 
> > Of course, sensible people don't need to be told this stuff, but there > are well intentioned people who need it explained. > > regards, > dan carpenter > Exactly. Every version of watering it down just makes it meaningless noise. The point is to emphasise this.
On Fri, 9 Jan 2026 07:28:01 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > It's better to have a grumpy document, instead of grumpy emails. We > > need it to sound grumpy and it needs to be the first paragraph. I disagree. Specifically because of what Linus had said (see below). > > > > AI Slop: AI can generate a ton of patches automatically which creates a > > burden on the upstream maintainers. The maintainers need to review > > every line of every patch and they expect the submitters to demonstrate > > that even the generated code was verified to be accurate. If you are > > unsure of whether a patch is appropriate then do not send it. NO AI > > SLOP! > > > > Of course, sensible people don't need to be told this stuff, but there > > are well intentioned people who need it explained. > > > > regards, > > dan carpenter > > > > Exactly. > > Every version of watering it down just makes it meaningless noise. The point is > to emphasise this. The thing is, the AI slop sending culprits are not going to be the ones to read this. It's the people who want to do the right thing that this document is focused on and that's why I think it should be more welcoming. That said, I just started looking at your other email and that does look better. I'll reply there. -- Steve
On Fri, Jan 09, 2026 at 10:28:46AM -0500, Steven Rostedt wrote: > On Fri, 9 Jan 2026 07:28:01 +0000 > Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > > It's better to have a grumpy document, instead of grumpy emails. We > > > need it to sound grumpy and it needs to be the first paragraph. > > I disagree. Specifically because of what Linus had said (see below). > > > > > > > AI Slop: AI can generate a ton of patches automatically which creates a > > > burden on the upstream maintainers. The maintainers need to review > > > every line of every patch and they expect the submitters to demonstrate > > > that even the generated code was verified to be accurate. If you are > > > unsure of whether a patch is appropriate then do not send it. NO AI > > > SLOP! > > > > > > Of course, sensible people don't need to be told this stuff, but there > > > are well intentioned people who need it explained. > > > > > > regards, > > > dan carpenter > > > > > > > Exactly. > > > > Every version of watering it down just makes it meaningless noise. The point is > > to emphasise this. > > The thing is, the AI slop sending culprits are not going to be the ones to > read this. It's the people who want to do the right thing that this > document is focused on and that's why I think it should be more welcoming. I think you and Linus are wrong about this. There is a class of 'good intent, bad results' people who will absolutely do this _and_ pay attention to the document. I expect you as a maintainer must have run into this, I know I have! And given how inaccurate that Register article was, I think you can see that having something clear matters from that perspective too, in practice. > > That said, I just started looking at your other email and that does look > better. I'll reply there. Thanks! > > -- Steve Cheers, Lorenzo
On Thu, Jan 08, 2026 at 08:17:09AM -0500, James Bottomley wrote: > > +you are expected to understand and to be able to defend everything > > you > > +submit. If you are unable to do so, maintainers may choose to reject > > your > > +series outright. > > And I think the addition would apply to any tool used to generate a > patch set whether AI or not. Exactly. I saw my share of "fix checkpatch warning" slop. This is no different. -- MST
On Thu, 8 Jan 2026, Michael S. Tsirkin wrote: > On Thu, Jan 08, 2026 at 08:17:09AM -0500, James Bottomley wrote: > > > +you are expected to understand and to be able to defend everything > > > you > > > +submit. If you are unable to do so, maintainers may choose to reject > > > your > > > +series outright. > > > > And I think the addition would apply to any tool used to generate a > > patch set whether AI or not. > > Exactly. I saw my share of "fix checkpatch warning" slop. This is no > different. I guess that most maintainers can easily recognize a patch that was motivated by checkpatch, Coccinelle, smatch etc. Then the review can be informed by previous experience with the tool. Will the same be the case for AI? Or does it not matter? julia
On Thu, Jan 08, 2026 at 03:48:14PM +0100, Julia Lawall wrote: > > > On Thu, 8 Jan 2026, Michael S. Tsirkin wrote: > > > On Thu, Jan 08, 2026 at 08:17:09AM -0500, James Bottomley wrote: > > > > +you are expected to understand and to be able to defend everything > > > > you > > > > +submit. If you are unable to do so, maintainers may choose to reject > > > > your > > > > +series outright. > > > > > > And I think the addition would apply to any tool used to generate a > > > patch set whether AI or not. > > > > Exactly. I saw my share of "fix checkpatch warning" slop. This is no > > different. > > I guess that most maintainers can easily recognize a patch that was > motivated by checkpatch, Coccinelle, smatch etc. Then the review can be > informed by previous experience with the tool. Will the same be the case > for AI? Or does it not matter? > > julia It is not the issue that checkpatch motivated something. The issue is that a lot of people don't understand that "checkpatch complained" is not motivation enough to make a change. I expect this holds for all tools. -- MST
On Thu, Jan 08, 2026 at 09:01:09AM -0500, Michael S. Tsirkin wrote: > On Thu, Jan 08, 2026 at 08:17:09AM -0500, James Bottomley wrote: > > > +you are expected to understand and to be able to defend everything > > > you > > > +submit. If you are unable to do so, maintainers may choose to reject > > > your > > > +series outright. > > > > And I think the addition would apply to any tool used to generate a > > patch set whether AI or not. > > Exactly. I saw my share of "fix checkpatch warning" slop. This is no > different. I'm a maintainer too and have seen these kinds of things as well as many variations on a theme of 'bad series'. An analogous thing might be to ask anybody working in education how these tools differ from all others students have used previously. Checkpatch fixes and the like are relatively easy to identify and can only ever be trivial changes which can be reasonably dismissed. Whereas LLMs can generate entirely novel series that can't so easily be dismissed, though the sudden appearance of a new person with completely new code can be identified. At any rate, even if you feel this is exactly the same, you surely therefore cannot object to the suggested changes in [0] which would amount in your view then to the same kind of dismissal you might give to a checkpatch --fix series? The suggested change gives latitude to the maintainer to dismiss out of hand should the pattern be obvious, or to use the nuclear weapon against slop of asking somebody to explain the series (an LLM-generated explanation should be fairly easy to spot in this case also...) My motive here is the asymmetry between maintainer resource/patch influx which is already at critical levels in at least some areas of mm. An uptick would be a big problem right now. Thanks, Lorenzo [0]: https://lore.kernel.org/ksummit/611c4a95-cbf2-492c-a991-e54042cf226a@lucifer.local/ > > -- > MST > Cheers, Lorenzo
On Thu, Jan 08, 2026 at 02:24:55PM +0000, Lorenzo Stoakes wrote: > > At any rate, even if you feel this is exactly the same, you surely therefore > cannot object to the suggested changes in [0] which would amount in your view > then to the same kind of dismissal you might give to a checkpatch --fix series? I have no problem with the suggested changes. I am especially happy that they say "As with the output of any tooling". -- MST
On Thu, Jan 08, 2026 at 09:28:09AM -0500, Michael S. Tsirkin wrote: > On Thu, Jan 08, 2026 at 02:24:55PM +0000, Lorenzo Stoakes wrote: > > > > At any rate, even if you feel this is exactly the same, you surely therefore > > cannot object to the suggested changes in [0] which would amount in your view > > then to the same kind of dismissal you might give to a checkpatch --fix series? > > I have no problem with the suggested changes. I am especially happy that > they say "As with the output of any tooling". See? I can compromise... ;)
On Wed, Jan 07, 2026 at 04:20:04PM -0800, Dave Hansen wrote: > On 1/7/26 13:15, Lorenzo Stoakes wrote: > > Thinking LLMs are 'just another tool' is to say effectively that the kernel > > is immune from this. Which seems to me a silly position. > > I had a good chat with Lorenzo on IRC. I had it in my head that he > wanted a really different document than the one I posted. After talking, > it sounds like he had some much more modest changes in mind. I caught > him at the end of his day, but I think he's planning to send out a small > diff on top of what I posted so I can get a better idea of what he wants > to see tweaked. Ack thanks Dave, FWIW if that'd be useful I can do that just to really clarify what I mean here rather than hand wave. Will take a look at doing that a little later. Cheers, Lorenzo
On Wed, 7 Jan 2026 at 13:20, Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> Thinking LLMs are 'just another tool' is to say effectively that the kernel
> is immune from this. Which seems to me a silly position.
No. Your position is the silly one.
There is *zero* point in talking about AI slop. That's just plain stupid.
Why? Because the AI slop people aren't going to document their patches
as such. That's such an obvious truism that I don't understand why
anybody even brings up AI slop.
So stop this idiocy.
The documentation is for good actors, and pretending anything else is
pointless posturing.
As I said in private elsewhere, I do *not* want any kernel development
documentation to be some AI statement. We have enough people on both
sides of the "sky is falling" and "it's going to revolutionize
software engineering", I don't want some kernel development docs to
take either stance.
It's why I strongly want this to be that "just a tool" statement.
And the AI slop issue is *NOT* going to be solved with documentation,
and anybody who thinks it is either just naive, or wants to "make a
statement".
Neither of which is a good reason for documentation.
Linus
+cc Chris as I mention him :) On Wed, Jan 07, 2026 at 04:06:35PM -0800, Linus Torvalds wrote: > On Wed, 7 Jan 2026 at 13:20, Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > > Thinking LLMs are 'just another tool' is to say effectively that the kernel > > is immune from this. Which seems to me a silly position. > > No. Your position is the silly one. > > There is *zero* point in talking about AI slop. That's just plain stupid. > > Why? Because the AI slop people aren't going to document their patches > as such. That's such an obvious truism that I don't understand why > anybody even brings up AI slop. The point is: a. For the tech press to not gleefully report that the kernel just accepts AI patches now since hey it's just another tool. b. To be able to refer back to the document when rejecting series. As to point a., as I said before in other threads, I remain concerned that the second the tech press says 'the kernel accepts AI patches now' we'll see an influx. It's sad we have to think about that, but it's a fact of life. > > So stop this idiocy. > > The documentation is for good actors, and pretending anything else is > pointless posturing. I mean with respect, if the document is saying in effect 'hey, everything's the same, relax', what's the point of the document again? > > As I said in private elsewhere, I do *not* want any kernel development > documentation to be some AI statement. We have enough people on both > sides of the "sky is falling" and "it's going to revolutionize > software engineering", I don't want some kernel development docs to > take either stance. To be clear I am actually quite optimistic about AI tooling in some areas, most notably review (Chris Mason is doing some great work on this for instance! :) My suggestions are _not_ taking either position. They are just there to address points a and b above, while otherwise retaining the same exact position as the document currently does. 
(I actually feel the rest of the document is good, as I said in v1 review, Dave + of course the other reviewers did a good job.) > > It's why I strongly want this to be that "just a tool" statement. > > And the AI slop issue is *NOT* going to be solved with documentation, > and anybody who thinks it is either just naive, or wants to "make a > statement". I mean, not sure I said we'd be solving AI slop here :) if we could solve it with a document that'd be great, but I'm not that naive/hopeful obviously. Dave asked me to send an incremental patch to the documentation to be entirely clear as to what change I'm suggesting, I am happy to do that FWIW. Perhaps that'll make my suggestion a little clearer. > > Neither of which is a good reason for documentation. > > Linus Thanks, Lorenzo
Lorenzo Stoakes wrote: [..] > And it's not like I'm asking for much, I'm not asking you to rewrite the > document, or take an entirely different approach, I'm just saying that we > should highlight that: > > 1. LLMs _allow you to send patches end-to-end without expertise_. > > 2. As a result, even though the community (rightly) strongly disapproves of > blanket dismissals of series, if we suspect AI slop [I think it's useful > to actually use that term], maintainers can reject it out of hand. > > Point 2 is absolutely a new thing in my view. I worry what this sentiment does to the health of the project. Is "hunting for slop" really what we want to be doing? When the accusation is false, what then? If the goal of the wording change is to give cover and license for that kind of activity, I have a hard time seeing that as good for the project. It has always been the case that problematic submitters put stress on maintainer bandwidth. Having a name for one class of potential maintainer stress in a process document does not advance the status quo. A maintainer is trusted to maintain the code and has always been able to give feedback of "I don't like it, leaves a bad taste", "I don't trust it does what it claims", or "I don't trust you, $submitter, to be able to maintain the implications of this proposal long term". That feedback is not strictly technical, but it is more actionable than "this is AI slop".
On Wed, Jan 07, 2026 at 03:50:30PM -0800, dan.j.williams@intel.com wrote: > Lorenzo Stoakes wrote: > [..] > > And it's not like I'm asking for much, I'm not asking you to rewrite the > > document, or take an entirely different approach, I'm just saying that we > > should highlight that: > > > > 1. LLMs _allow you to send patches end-to-end without expertise_. > > > > 2. As a result, even though the community (rightly) strongly disapproves of > > blanket dismissals of series, if we suspect AI slop [I think it's useful > > to actually use that term], maintainers can reject it out of hand. > > > > Point 2 is absolutely a new thing in my view. > > I worry what this sentiment does to the health of the project. Is > "hunting for slop" really what we want to be doing? When the accusation > is false, what then? Yeah that's a very good point, and we don't want a witch hunt. In fact in practice already I've had discussions with other maintainers about series that seemed to have LLM elements in them (entirely in good faith I might add). Really I'm talking about series that are _very clearly_ slop. And it's about the asymmetry between maintainer resource and the capacity for people to send mountains of code. The ability to send things completely end-to-end is the big difference here vs. other tooling. > > If the goal of the wording change is to give cover and license for that > kind of activity, I have a hard time seeing that as good for the > project. I agree entirely, and I absolutely do not want that. > > It has always been the case that problematic submitters put stress on > maintainer bandwidth. Having a name for one class of potential > maintainer stress in a process document does not advance the status quo. 
> > A maintainer is trusted to maintain the code and has always been able > to give feedback of "I don't like it, leaves a bad taste", "I don't > trust it does what it claims", or "I don't trust you, $submitter, to be > able to maintain the implications of this proposal long term". That > feedback is not strictly technical, but it is more actionable than "this > is AI slop". I really don't think it is the case that maintainers can simply dismiss an entire series like that. The reason why is that, unlike e.g. a coccinelle script or something, this won't be doing just cleanups, or fixing scope, or whatever. LLMs can uniquely allow you to send a series that is entirely novel, introducing new functionality or making significant changes. For good reason, the community frowns upon just-rejecting that kind of series without providing technical feedback. There's a spectrum of opinions on these tools - on the extreme positive side you have people who'd say we _should_ accept such series, or at least review them in detail each time. On the extreme negative side, people would say you should reject anything like this altogether even if you don't state that an LLM helped you. I think you'd probably agree both extremes are silly, but even many moderate positions would leave the 'should we review these in detail' rather blurry. And thus it isn't entirely clear that a maintainer dismissing these kinds of series out of hand wouldn't be violating the norm of 'don't reject series without technical reasoning'. It would therefore be useful for the document to make it clear that they in fact can. Otherwise I fear we don't have an answer for the asymmetry issue. And as I said to Linus, I think it'd be useful to be able to reference the document in doing so. Cheers, Lorenzo
On Thu, Jan 8, 2026 at 11:29 AM Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > I really don't think it is the case that maintainers can simply dismiss an > entire series like that. I think Dan was referring to all kinds of series, i.e. maintainers have leeway to reject proposals, whether they are big or small and whether they are new features or cleanups. After all, the project works by trusting maintainers to do the right thing (i.e. the best they can with the information and time at their disposal), but sometimes there may not be concrete technical reasons. For instance, sometimes it is just a matter of bandwidth -- if maintainers cannot maintain the code, and no one (that is trusted to some degree) is willing to do so, then it would be a bad idea to take the code anyway, even if the feature is great, whether LLM-generated or not. That is also why it is often said that it is a good idea to contact maintainers/community before developing completely a new feature etc. etc. So if a subsystem suddenly starts to get an onslaught of series like you warn about, then they cannot be expected to review and give technical feedback to everything, and they will need to prioritize somehow (e.g. fixes), or try to get more maintainers, or raise the issue in ksummit, etc. At least, that is my take, i.e. we need to allow maintainers to adjust as things come. And, of course, as a community, we can always reassess as conditions change. Cheers, Miguel
On Thu, Jan 08, 2026 at 12:43:50PM +0100, Miguel Ojeda wrote: > On Thu, Jan 8, 2026 at 11:29 AM Lorenzo Stoakes > <lorenzo.stoakes@oracle.com> wrote: > > > > I really don't think it is the case that maintainers can simply dismiss an > > entire series like that. > > I think Dan was referring to all kinds of series, i.e. maintainers > have leeway to reject proposals, whether they are big or small and > whether they are new features or cleanups. Sure, but I would say it's reasonable that there's a norm in place that rejecting series outright that aren't _trivially_ dismissible is bad if no technical objection is given, right? The issue with LLMs is you can generate entirely novel series that aren't so trivially dismissible but could very well have other signals to hand e.g. brand new person, never done any kernel work before, sends a bunch of series at once etc. So maybe it's worth highlighting this? > > After all, the project works by trusting maintainers to do the right > thing (i.e. the best they can with the information and time at their > disposal), but sometimes there may not be concrete technical reasons. > > For instance, sometimes it is just a matter of bandwidth -- if > maintainers cannot maintain the code, and no one (that is trusted to > some degree) is willing to do so, then it would be a bad idea to take > the code anyway, even if the feature is great, whether LLM-generated > or not. Haha well mm does just merge stuff even if there isn't review capacity :) which I am arguing against presently as a very silly (and unworkable) thing to do. But that's another debate... > > That is also why it is often said that it is a good idea to contact > maintainers/community before developing completely a new feature etc. > etc. Yes, and we've seen in mm people come to the community with a huge new patchset that is rejected. 
But it almost inevitably has _technical feedback_ as part of that rejection, feedback that took time, something that the asymmetry of slop wouldn't permit so clearly. > > So if a subsystem suddenly starts to get an onslaught of series like > you warn about, then they cannot be expected to review and give > technical feedback to everything, and they will need to prioritize > somehow (e.g. fixes), or try to get more maintainers, or raise the > issue in ksummit, etc. Right, but we also need to be able to take the sensible approach of simply not tolerating it. I mean if the contention is we already in effect can do this, then surely there's no harm in providing emphasis in the document, no? > > At least, that is my take, i.e. we need to allow maintainers to adjust > as things come. And, of course, as a community, we can always reassess > as conditions change. See my other point about the tail-wags-the-dog effect when an official kernel policy document appears to say 'open for business for LLMs'. Linus has already been quoted in the press, I believe, with his 'LLMs are just like any other tool'. I wish we didn't have to think about that, but we do. Anyway, I'm submitting my suggested delta shortly. It's really not all that much different from the rest, just putting some emphasis on the slop aspect. > > Cheers, > Miguel Thanks, Lorenzo
On Wed, 2026-01-07 at 21:15 +0000, Lorenzo Stoakes wrote: > On Wed, Jan 07, 2026 at 11:18:52AM -0800, Dave Hansen wrote: > > On 1/7/26 10:12, Lorenzo Stoakes wrote: > > ... > > > I know Linus had the cute interpretation of it 'just being > > > another tool' but never before have people been able to do this. > > > > I respect your position here. But I'm not sure how to reconcile: > > > > LLMs are just another tool > > and > > LLMs are not just another tool > > > > :) > > Well I'm not asking you to reconcile that, I'm providing my point of > view which disagrees with the first position and makes a case for the > second. Isn't review about feedback both positive and negative? > > Obviously if this was intended to simply inform the community of the > committee's decision then apologies for misinterpreting it. > > I would simply argue that LLMs are not another tool on the basis of > the drastic negative impact it's had in very many areas, for which you > need only take a cursory glance at the world to observe. > > Thinking LLMs are 'just another tool' is to say effectively that the > kernel is immune from this. Which seems to me a silly position. All tools are double-edged, and the better a tool is, the more problematic its harmful uses become, but people often use them anyway because of the beneficial uses. You don't, for instance, classify chainsaws as not another tool because they can be used to deforest the Amazon. All the document is saying is that we start from the place of treating AI like any other tool and, like any other tool, if it proves to cause way more problems than it solves, then we can move on to other things. There are other tools we've tried and abandoned (like compiling the kernel with c++), so this really isn't any different. Regards, James
On Wed, Jan 07, 2026 at 05:39:48PM -0500, James Bottomley wrote: > On Wed, 2026-01-07 at 21:15 +0000, Lorenzo Stoakes wrote: > > On Wed, Jan 07, 2026 at 11:18:52AM -0800, Dave Hansen wrote: > > > On 1/7/26 10:12, Lorenzo Stoakes wrote: > > > ... > > > > I know Linus had the cute interpretation of it 'just being > > > > another tool' but never before have people been able to do this. > > > > > > I respect your position here. But I'm not sure how to reconcile: > > > > > > LLMs are just another tool > > > and > > > LLMs are not just another tool > > > > > > :) > > > > Well I'm not asking you to reconcile that, I'm providing my point of > > view which disagrees with the first position and makes a case for the > > second. Isn't review about feedback both positive and negative? > > > > Obviously if this was intended to simply inform the community of the > > committee's decision then apologies for misinterpreting it. > > > > I would simply argue that LLMs are not another tool on the basis of > > the drastic negative impact it's had in very many areas, for which you > > need only take a cursory glance at the world to observe. > > > > Thinking LLMs are 'just another tool' is to say effectively that the > > kernel is immune from this. Which seems to me a silly position. > > All tools are double-edged and the better a tool is the more > problematic its harmful uses become but people often use them anyway > because of the beneficial uses. You don't for instance classify > chainsaws as not another tool because they can be used to deforest the > Amazon. All the document is saying is that we start from the place of > treating AI like any other tool and, like any other tool, if it proves > to cause way more problems than it solves, then we can move on to > other things. There are other tools we've tried and abandoned (like > compiling the kernel with c++), so this really isn't any different.
I mean, using the same analogy, I'd say the existing norms are designed for spoons; you'd probably not want to apply the same ones to a chainsaw :) > > Regards, > > James >
On Wed, 7 Jan 2026 21:15:17 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > I would simply argue that LLMs are not another tool on the basis of the > drastic negative impact it's had in very many areas, for which you need only > take a cursory glance at the world to observe. > > Thinking LLMs are 'just another tool' is to say effectively that the kernel > is immune from this. Which seems to me a silly position. But has this started to become a real problem with the kernel today? > > > > > Let's look at it another way: What we all *want* for the kernel is > > simplicity. Simple rules, simple documentation, simple code. The > > simplest way to deal with the LLM onslaught is to pray that our existing > > rules will suffice. > > I'm not sure we really have rules quite as clearly as you say, as > subsystems differ greatly in what they do. > > For one mm merges patches unless adverse review is received. Which means a > sudden influx of LLM series is likely to lead to real problems. Not all > subsystems are alike like this. But has this happened yet? > > One rule that seems consistent is that arbitrary dismissal of series is > seriously frowned upon. If it is AI slop coming in, you can say, "unless you can prove to me that you understand this series and there's nothing wrong with it, I'm rejecting it." If the series looks good, then what's the issue? But if it's AI slop and it's obvious the person behind the code doesn't understand what they are submitting, that could even be rationale for sending that person to your /dev/null folder. > > The document claims otherwise. > > > > > For now, I think the existing rules are holding. We have the luxury of > > We're noticing a lot more LLM slop than we used to. It is becoming more and > more of an issue. Are you noticing this in submissions?
> > Secondly, as I said in my MS thread and maybe even in a previous version of > this one (can't remember) - I fear that once it becomes public that we are > open to LLM patches, the floodgates will open. This document is not about addressing anything that we fear will happen. It is only to state our current view of how things work today. If the floodgates do open and we get inundated with AI slop, then we can most definitely update this document to have a bit more teeth. But one thing I learned in my decade on the TAB is: don't worry about things you are afraid might happen; just make sure you address what is currently happening. Especially when it's easy to update the rules. > > The kernel has a thorny reputation of people pushing back, which probably > plays some role in holding that off. > > And it's not like I'm asking for much, I'm not asking you to rewrite the > document, or take an entirely different approach, I'm just saying that we > should highlight that: > > 1. LLMs _allow you to send patches end-to-end without expertise_. Why does this need to be added to the document? I think we should only be addressing how we handle tool-generated content. > > 2. As a result, even though the community (rightly) strongly disapproves of > blanket dismissals of series, if we suspect AI slop [I think it's useful > to actually use that term], maintainers can reject it out of hand. > > Point 2 is absolutely a new thing in my view. I don't believe that is necessary. I reject patches outright all the time. Especially checkpatch "fixes" on code that is already in the tree. I just say: "checkpatch is for patches, not accepted content. If it's not a real bug, don't use checkpatch." If the AI code is decent, why reject it? If it's slop, then yeah, you have a lot of reasons to reject it. > > > treating LLMs like any other tool. That could change any day because > > some new tool comes along that's better at spamming patches at us.
I > > think that's the point you're trying to make is that the dam might break > > any day and we should be prepared for it. > > > > Is that what it boils down to? > > I feel I've answered that above. > > > > > >> +As with all contributions, individual maintainers have discretion to > > >> +choose how they handle the contribution. For example, they might: > > >> + > > >> + - Treat it just like any other contribution. > > >> + - Reject it outright. > > > > > > This is really not correct, it's simply not acceptable in the community to > > > reject series outright without justification. Yes perhaps people do that, > > > but it's really not something that's accepted. > > > > I'm not quite sure how this gives maintainers a new ability to reject > > things without justification, or encourages them to reject > > tool-generated code in a new way. > > > > Let's say something generated by "checkpatch.pl --fix" that's trying to > > patch arch/x86/foo.c lands in my inbox. I personally think it's OK for > > me as a maintainer to say: "No thanks, checkpatch has burned me too many > > times in foo.c and I don't trust its output there." To me, that's > > rejecting it outright. > > > > Could you explain a bit how this might encourage bad maintainer behavior? > > I really don't understand your question or why you're formulating this to > be about bad maintainer behaviour? > > It's generally frowned upon in the kernel to outright reject series without > technical justification. I really don't see how you can say that is not the > case? If it's AI slop, then I'm sure you could easily find lots of technical justifications for rejecting it. Why do we need to explicitly state it here?. > > LLM generated series won't be a trivial checkpatch.pl --fix change, you've > given a trivially identifiable case that you could absolutely justify. Is it trivial just because it's checkpatch? I gave another example above too. But if AI slop is coming in, I'm sure there's lots of reasons to reject it. 
Are you saying that if there's good AI code coming in (I wouldn't call it slop then) that you want to outright reject it too? > > Again, I'm not really asking for much here. As a maintainer I am (very) > concerned about the asymmetry between what can be submitted vs. review > resource. > > And to me being able to reference this document and to say 'sorry this > appears to be AI slop so we can't accept it' would be really useful. Then why not come up with a list of reasons AI slop is bad and make a boilerplate and send that every time? It would basically state that if you submit AI code, the burden is on the submitter to prove that they understand the code. Or would you like that explicitly stated in this document? Something like: - If you submit any type of tool-generated code, then it is the responsibility of the submitter to prove to the maintainer that they understand the code that they are submitting. Otherwise the maintainer may simply reject the changes outright. ? > > Referencing a document that tries very hard to say 'NOP' isn't quite so > useful.
On Wed, Jan 07, 2026 at 04:58:16PM -0500, Steven Rostedt wrote: > On Wed, 7 Jan 2026 21:15:17 +0000 > Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > > I would simply argue that LLMs are not another tool on the basis of the > > drastic negative impact it's had in very many areas, for which you need only > > take a cursory glance at the world to observe. > > > > Thinking LLMs are 'just another tool' is to say effectively that the kernel > > is immune from this. Which seems to me a silly position. > > But has this started to become a real problem with the kernel today? It's becoming a problem. And as I said to Linus I seriously worry about what news coverage of the kernel's stance on these kinds of series will do. > > > > > > > > > Let's look at it another way: What we all *want* for the kernel is > > > simplicity. Simple rules, simple documentation, simple code. The > > > simplest way to deal with the LLM onslaught is to pray that our existing > > > rules will suffice. > > > > I'm not sure we really have rules quite as clearly as you say, as > > subsystems differ greatly in what they do. > > > > For one mm merges patches unless adverse review is received. Which means a > > sudden influx of LLM series is likely to lead to real problems. Not all > > subsystems are alike like this. > > But has this happened yet? You're doing the 'repeat for emphasis' thing here, which I respect as a useful literary tool :) but addressed above. > > > > > One rule that seems consistent is that arbitrary dismissal of series is > > seriously frowned upon. > > If it is AI slop coming in, you can say, "unless you can prove to me that > you understand this series and there's nothing wrong with it, I'm rejecting > it." > > If the series looks good, then what's the issue? But if it's AI slop and > it's obvious the person behind the code doesn't understand what they are > submitting, that could even be rationale for sending that person to your > /dev/null folder.
Right, sure, but I feel this sits outside of current norms; I made a case for it in my reply to Dan [0]. [0]: https://lore.kernel.org/ksummit/12d910d5-0937-4aba-976c-9872289d21a4@lucifer.local/ > > > > > The document claims otherwise. > > > > > > > > For now, I think the existing rules are holding. We have the luxury of > > > > We're noticing a lot more LLM slop than we used to. It is becoming more and > > more of an issue. > > Are you noticing this in submissions? Yes. > > > > > Secondly, as I said in my MS thread and maybe even in a previous version of > > this one (can't remember) - I fear that once it becomes public that we are > > open to LLM patches, the floodgates will open. > > This document is not about addressing anything that we fear will happen. It > is only to state our current view of how things work today. > > If the floodgates do open and we get inundated with AI slop, then we can > most definitely update this document to have a bit more teeth. > > But one thing I learned in my decade on the TAB is: don't worry about > things you are afraid might happen; just make sure you address what is > currently happening. Especially when it's easy to update the rules. I mean, why are we even writing the document at all in that case :) why did this discussion come up at the maintainer's summit, etc.? I think it's sensible to establish a clear policy on how we deal with this _ahead of time_. And as I said to Linus (and previously in discussions on this) I fear the press reporting 'linux kernel welcomes AI submissions, sees it like any other tool'. So the tail could wag the dog here. And is it really problematic to simply underline that that doesn't mean we are ok with the unique ability of LLMs to allow submissions end-to-end in bulk? Again, I'll send an incremental change showing what I actually want to change here. Maybe that'll clarify my intent.
> > > > > > The kernel has a thorny reputation of people pushing back, which probably > > plays some role in holding that off. > > > > And it's not like I'm asking for much, I'm not asking you to rewrite the > > document, or take an entirely different approach, I'm just saying that we > > should highlight that: > > > > 1. LLMs _allow you to send patches end-to-end without expertise_. > > Why does this need to be added to the document? I think we should only be > addressing how we handle tool-generated content. Because of maintainer/review asymmetry and this being a uniquely new situation which attacks that. > > > > > 2. As a result, even though the community (rightly) strongly disapproves of > > blanket dismissals of series, if we suspect AI slop [I think it's useful > > to actually use that term], maintainers can reject it out of hand. > > > > Point 2 is absolutely a new thing in my view. > > I don't believe that is necessary. I reject patches outright all the time. > Especially checkpatch "fixes" on code that is already in the tree. I just > say: "checkpatch is for patches, not accepted content. If it's not a real > bug, don't use checkpatch." I find it interesting that both examples given here are of trivially rejectable things that nobody would object to. Again, see my reply to Dan for an argument as to why I feel this is different. > > If the AI code is decent, why reject it? If it's slop, then yeah, you have > a lot of reasons to reject it. Because it takes time to review to determine that it's decent even if it might be obvious it's entirely AI-generated in the first place? > > > > > > treating LLMs like any other tool. That could change any day because > > > some new tool comes along that's better at spamming patches at us. I > > > think that's the point you're trying to make is that the dam might break > > > any day and we should be prepared for it. > > > > > > Is that what it boils down to? > > > > I feel I've answered that above.
> > > > > > > > >> +As with all contributions, individual maintainers have discretion to > > > >> +choose how they handle the contribution. For example, they might: > > > >> + > > > >> + - Treat it just like any other contribution. > > > >> + - Reject it outright. > > > > > > > > This is really not correct, it's simply not acceptable in the community to > > > > reject series outright without justification. Yes perhaps people do that, > > > > but it's really not something that's accepted. > > > > > > I'm not quite sure how this gives maintainers a new ability to reject > > > things without justification, or encourages them to reject > > > tool-generated code in a new way. > > > > > > Let's say something generated by "checkpatch.pl --fix" that's trying to > > > patch arch/x86/foo.c lands in my inbox. I personally think it's OK for > > > me as a maintainer to say: "No thanks, checkpatch has burned me too many > > > times in foo.c and I don't trust its output there." To me, that's > > > rejecting it outright. > > > > > > Could you explain a bit how this might encourage bad maintainer behavior? > > > > I really don't understand your question or why you're formulating this to > > be about bad maintainer behaviour? > > > > It's generally frowned upon in the kernel to outright reject series without > > technical justification. I really don't see how you can say that is not the > > case? > > If it's AI slop, then I'm sure you could easily find lots of technical > justifications for rejecting it. Why do we need to explicitly state it > here? Aha! Now you've honed in on _exactly_ the problem. To find the technical justification, you'd need to read through the series, and with the asymmetry of maintainer/submitter resource this is an issue. > > > > > LLM generated series won't be a trivial checkpatch.pl --fix change, you've > > given a trivially identifiable case that you could absolutely justify. > > Is it trivial just because it's checkpatch?
I gave another example above > too. But if AI slop is coming in, I'm sure there's lots of reasons to > reject it. I mean, come on, Steve :) yes, it is trivial. Apologies, but I didn't pick up on the other example above? > > Are you saying that if there's good AI code coming in (I wouldn't call it > slop then) that you want to outright reject it too? No, I'm saying that maintainers should be able to reserve that right in order not to be overwhelmed. > > > > > Again, I'm not really asking for much here. As a maintainer I am (very) > > concerned about the asymmetry between what can be submitted vs. review > > resource. > > > > And to me being able to reference this document and to say 'sorry this > > appears to be AI slop so we can't accept it' would be really useful. > > Then why not come up with a list of reasons AI slop is bad and make a > boilerplate and send that every time? It would basically state that if you submit > AI code, the burden is on the submitter to prove that they understand the > code. Or would you like that explicitly stated in this document? Something > like: > > - If you submit any type of tool-generated code, then it is the > responsibility of the submitter to prove to the maintainer that they > understand the code that they are submitting. Otherwise the maintainer > may simply reject the changes outright. > > ? I mean of course I wholeheartedly agree with that. But to some degree we already have that: + - Ask the submitter to explain in more detail about the contribution + so that the maintainer can feel comfortable that the submitter fully + understands how the code works. I think it'd be most useful to actually show what change I'd like in a diff, which I'll send in a little while. It's more about emphasis than really radically changing anything in the document. > > > > > Referencing a document that tries very hard to say 'NOP' isn't quite so > > useful.
> > I don't think this document's goal was to be a pointer to show people why > you are rejecting AI submissions. This is just a guideline to how > tool-generated code should be submitted. It might not be the goal, but it establishes kernel policy even if the desire seems to be to say 'NOP', and it would be useful for maintainers on the ground. If nobody references kernel policy in how they do things, then what is the use of kernel policy? > > It's about how things work today. It's not about how things will work going > forward with AI submissions. That document is for another day. I feel I've addressed this above, but we're already mentioning things that pertain to possible AI slop. I don't think the position here can both be 'well we already address this with existing rules' and 'we have no need to address this at all' at the same time. And shouldn't we perhaps take a defensive position to make it abundantly clear that we won't tolerate this _ahead of time_? I obviously take Linus's point that many slop producers couldn't care less about norms or documentation, but given the impact on press reporting and the _general sense_ of what the kernel will tolerate, plus those who _will_ think they're abiding by the norms, it actually will have a practical impact. > > -- Steve Cheers, Lorenzo
On Thu, 8 Jan 2026 11:29:47 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > But one thing I learned in my decade on the TAB is: don't worry about > > things you are afraid might happen; just make sure you address what is > > currently happening. Especially when it's easy to update the rules. > > I mean why are we even writing the document at all in that case :) why did this > discussion come up at the maintainer's summit, etc. What started this discussion was me reading about an AI patch that was submitted and accepted without the maintainer knowing that the patch was 100% created by AI. That maintainer just happened to be me! I made a stink about not disclosing the fact that the patch was generated by AI. I wanted full transparency. A long discussion started there where we noticed that we have no written policy on transparency of tooling used to create patches and wanted to fix that. That was the reason this all started, but it expanded to "Oh we need to document our policy on AI too". That was an afterthought. See why I'm still pushing to only document what our current policy is. > > I think it's sensible to establish a clear policy on how we deal with this > _ahead of time_. Why? We don't know what is going to happen. We are only assuming things are going to be a problem, when they may never be. > > And as I said to Linus (and previously in discussions on this) I fear the > press reporting 'linux kernel welcomes AI submissions, sees it like any > other tool'. But this document doesn't even say that. It's only expressing in writing what our policy is on transparency of using tooling, where AI is just one more tool. AI submissions have already happened. They are only accepted after the normal process is followed. -- Steve
On Thu, Jan 08, 2026 at 01:19:26PM -0500, Steven Rostedt wrote: > On Thu, 8 Jan 2026 11:29:47 +0000 > Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > > But one thing I learned about my decade on the TAB, is don't worry about > > > things you are afraid might happen, just make sure you address what is > > > currently happening. Especially when it's easy to update the rules. > > > > I mean why are we even writing the document at all in that case :) why did this > > discussion come up at the maintainer's summit, etc. > > What happened that started this discussion was me reading about an AI patch > that was submitted and accepted without the maintainer knowing that the > patch was 100% created by AI. That maintainer just happened to be me! I > made a stink about not disclosing the fact that the patch was generated by > AI. I wanted full transparency. > > A long discussion started there where we noticed that we have no written > policy on transparency of tooling used to create patches and wanted to fix > that. That was the reason this all started, but it expanded to "Oh we need > to document our policy on AI too". That was an afterthought. > > See why I'm still pushing to only document what our current policy is. Hm, not sure I can square that with 'these rules already existed'. Were they unwritten rules...? I mean from my + outside world's perspective it kicked off from Sasha sending the patch adding config files for LLM tooling, then the MS thread(s), then this thread. Though obviously you mentioned that occasion there. > > > > > I think it's sensible to establish a clear policy on how we deal with this > > _ahead of time_. > > Why? We don't know what is going to happen. We are only assuming things are > going to be a problem, where it may never be. I mean google 'AI slop'. If you think the kernel is mysteriously immune to it, I'd be curious as to the justification.
As a maintainer I find it mildly irritating that you'd be so resistant to very small changes to the document to put a little more emphasis on this and instead ask me to wait until I'm overwhelmed. It's not really a huge ask. > > > > > And as I said to Linus (and previously in discussions on this) I fear the > > press reporting 'linux kernel welcomes AI submissions, sees it like any > > other tool'. > > But this document doesn't even say that. It's only expressing in writing > what our policy is on transparency of using tooling where AI is just one > more tool. AI submissions have already been done. It's only accepted after > the normal process is followed. Honestly you really think that people are looking at this as a general 'tools' thing and not about AI? Really? I mean have you _read_ kernel reporting lately, especially the more tabloid clickbaity stuff? > > -- Steve Honestly this is all moot as Linus has made his position clear. But I wanted to be heard. Thanks, Lorenzo
On Tue, Jan 06, 2026 at 12:51:05PM -0800, Dave Hansen wrote: > In the last few years, the capabilities of coding tools have exploded. > As those capabilities have expanded, contributors and maintainers have > more and more questions about how and when to apply those > capabilities. > > Add new Documentation to guide contributors on how to best use kernel > development tools, new and old. > > Note, though, there are fundamentally no new or unique rules in this > new document. It clarifies expectations that the kernel community has > had for many years. For example, researchers are already asked to > disclose the tools they use to find issues by > Documentation/process/researcher-guidelines.rst. This new document > just reiterates existing best practices for development tooling. > > In short: Please show your work and make sure your contribution is > easy to review. > > Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> > Reviewed-by: Shuah Khan <shuah@kernel.org> > Reviewed-by: Kees Cook <kees@kernel.org> > Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> > Reviewed-by: Miguel Ojeda <ojeda@kernel.org> > Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> > Reviewed-by: SeongJae Park <sj@kernel.org> > Reviewed-by: Dan Williams <dan.j.williams@intel.com> > Reviewed-by: Steven Rostedt <rostedt@goodmis.org> > Cc: NeilBrown <neilb@ownmail.net> > Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > Cc: Dan Williams <dan.j.williams@intel.com> > Cc: Theodore Ts'o <tytso@mit.edu> > Cc: Sasha Levin <sashal@kernel.org> > Cc: Jonathan Corbet <corbet@lwn.net> > Cc: Vlastimil Babka <vbabka@suse.cz> > Cc: workflows@vger.kernel.org > Cc: ksummit@lists.linux.dev The "Ask for some other special steps, like asking the contributor to elaborate on how the tool or model was trained" covers my copyright concerns, so: Reviewed-by: Paul E. McKenney <paulmck@kernel.org> > -- > > There has been a ton of feedback since v2. Thanks everyone! 
I've > tried to respect all of the feedback, but some of it has been > contradictory and I haven't been able to incorporate everything. > > Please speak up if I missed something important here. > > Changes from v2: > * Mention testing (Shuah) > * Remove "very", rename LLM => coding assistant (Dan) > * More formatting sprucing up and minor typos (Miguel) > * Make changelog and text less flashy (Christian) > * Tone down critical=>helpful (Neil) > * Wording/formatting tweaks (Randy) > > Changes from v1: > * Rename to generated-content.rst and add to documentation index. > (Jon) > * Rework subject to align with the new filename > * Replace commercial names with generic ones. (Jon) > * Be consistent about punctuation at the end of bullets for whole > sentences. (Miguel) > * Formatting sprucing up and minor typos (Miguel) > > This document was a collaborative effort from all the members of > the TAB. I just reformatted it into .rst and wrote the changelog. > --- > Documentation/process/generated-content.rst | 97 +++++++++++++++++++++ > Documentation/process/index.rst | 1 + > 2 files changed, 98 insertions(+) > create mode 100644 Documentation/process/generated-content.rst > > diff --git a/Documentation/process/generated-content.rst b/Documentation/process/generated-content.rst > new file mode 100644 > index 000000000000..917d6e93c66d > --- /dev/null > +++ b/Documentation/process/generated-content.rst > @@ -0,0 +1,97 @@ > +============================================ > +Kernel Guidelines for Tool-Generated Content > +============================================ > + > +Purpose > +======= > + > +Kernel contributors have been using tooling to generate contributions > +for a long time. These tools can increase the volume of contributions. > +At the same time, reviewer and maintainer bandwidth is a scarce > +resource. Understanding which portions of a contribution come from > +humans versus tools is helpful to maintain those resources and keep > +kernel development healthy. 
> + > +The goal here is to clarify community expectations around tools. This > +lets everyone become more productive while also maintaining high > +degrees of trust between submitters and reviewers. > + > +Out of Scope > +============ > + > +These guidelines do not apply to tools that make trivial tweaks to > +preexisting content. Nor do they pertain to AI tooling that helps with > +menial tasks. Some examples: > + > + - Spelling and grammar fix ups, like rephrasing to imperative voice > + - Typing aids like identifier completion, common boilerplate or > + trivial pattern completion > + - Purely mechanical transformations like variable renaming > + - Reformatting, like running Lindent, ``clang-format`` or > + ``rust-fmt`` > + > +Even if your tool use is out of scope, you should still always consider > +if it would help reviewing your contribution if the reviewer knows > +about the tool that you used. > + > +In Scope > +======== > + > +These guidelines apply when a meaningful amount of content in a kernel > +contribution was not written by a person in the Signed-off-by chain, > +but was instead created by a tool. > + > +Detection of a problem and testing the fix for it is also part of the > +development process; if a tool was used to find a problem addressed by > +a change, that should be noted in the changelog. This not only gives > +credit where it is due, it also helps fellow developers find out about > +these tools. > + > +Some examples: > + - Any tool-suggested fix such as ``checkpatch.pl --fix`` > + - Coccinelle scripts > + - A chatbot generated a new function in your patch to sort list entries. > + - A .c file in the patch was originally generated by a coding > + assistant but cleaned up by hand. > + - The changelog was generated by handing the patch to a generative AI > + tool and asking it to write the changelog. > + - The changelog was translated from another language. 
+
+If in doubt, choose transparency and assume these guidelines apply to
+your contribution.
+
+Guidelines
+==========
+
+First, read the Developer's Certificate of Origin:
+Documentation/process/submitting-patches.rst. Its rules are simple
+and have been in place for a long time. They have covered many
+tool-generated contributions. Ensure that you understand your entire
+submission and are prepared to respond to review comments.
+
+Second, when making a contribution, be transparent about the origin of
+content in cover letters and changelogs. You can be more transparent
+by adding information like this:
+
+ - What tools were used?
+ - The input to the tools you used, like the Coccinelle source script.
+ - If code was largely generated from a single or short set of
+   prompts, include those prompts. For longer sessions, include a
+   summary of the prompts and the nature of resulting assistance.
+ - Which portions of the content were affected by that tool?
+ - How is the submission tested and what tools were used to test the
+   fix?
+
+As with all contributions, individual maintainers have discretion to
+choose how they handle the contribution. For example, they might:
+
+ - Treat it just like any other contribution.
+ - Reject it outright.
+ - Treat the contribution specially like reviewing with extra scrutiny,
+   or at a lower priority than human-generated content.
+ - Suggest a better prompt instead of suggesting specific code changes.
+ - Ask for some other special steps, like asking the contributor to
+   elaborate on how the tool or model was trained.
+ - Ask the submitter to explain in more detail about the contribution
+   so that the maintainer can feel comfortable that the submitter fully
+   understands how the code works.
diff --git a/Documentation/process/index.rst b/Documentation/process/index.rst
index aa12f2660194..e1a8a31389f5 100644
--- a/Documentation/process/index.rst
+++ b/Documentation/process/index.rst
@@ -68,6 +68,7 @@ beyond).
    stable-kernel-rules
    management-style
    researcher-guidelines
+   generated-content
 
 Dealing with bugs
 -----------------
-- 
2.34.1
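As a note for readers of this thread: the transparency checklist in the document above could translate into changelog text along these lines. This is a purely illustrative sketch; the subject line, tool names, and wording are invented here and are not a mandated format.

```text
mm/foo: fix out-of-bounds read in foo_lookup()

...

The initial version of this fix was drafted by a coding assistant
from a short prompt describing the KASAN report; the locking was
then reviewed and reworked by hand.  The bug was found with
syzkaller, and the fix was tested by re-running the reproducer
and the mm selftests.
```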