This patch series adds unified configuration and documentation for AI
coding assistants working with the Linux kernel codebase. As AI tools
become increasingly common in software development, it's important to
establish clear guidelines for their use in kernel development.

The series consists of two patches:

1. The first patch adds unified configuration files for various AI
   coding assistants (Claude, GitHub Copilot, Cursor, Codeium, Continue,
   Windsurf, and Aider). These are all symlinked to a central
   documentation file to ensure consistency across tools.

2. The second patch adds the actual rules and documentation that guide
   AI assistants on Linux kernel development practices, including:
   - Following kernel coding standards
   - Respecting the development process
   - Properly attributing AI-generated contributions
   - Understanding licensing requirements

The examples below demonstrate how these guidelines work in practice,
showing proper AI attribution in commits and the assistant's
understanding of kernel documentation requirements.

All AI assistants are required to identify themselves in commits using
Co-developed-by tags, ensuring full transparency about AI involvement
in code development.

"Hello world" test:

$ claude -p "do you need to attribute your commits?"
Yes, according to the CLAUDE.md file in this Linux kernel repository, I
must identify myself as an AI assistant in commits by including:

```
Co-developed-by: Claude claude-opus-4-20250514
```

I should NOT add a Signed-off-by tag - only you (the human developer)
should add that, as it represents a legal certification.

Example patch creation with Claude Code:

$ claude -p "Fix the dont -> don't typo in @Documentation/power/opp.rst. Commit the result"
Done! The typo has been fixed and committed.

And the resulting patch (no manual edits):

From 7613561f430920803d945b3cd53aa81440d4f6f0 Mon Sep 17 00:00:00 2001
From: Sasha Levin <sashal@kernel.org>
Date: Fri, 25 Jul 2025 13:38:43 -0400
Subject: [PATCH] Documentation: power: opp: Fix typo dont -> don't

Fix a typo in the OPP documentation where "dont" was missing an
apostrophe.

Co-developed-by: Claude claude-opus-4-20250514
---
 Documentation/power/opp.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/power/opp.rst b/Documentation/power/opp.rst
index 1b7f1d854f14a..3eae4b955a63b 100644
--- a/Documentation/power/opp.rst
+++ b/Documentation/power/opp.rst
@@ -149,7 +149,7 @@ dev_pm_opp_find_freq_exact
 	opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
 	dev_pm_opp_put(opp);

-	/* dont operate on the pointer.. just do a sanity check.. */
+	/* don't operate on the pointer.. just do a sanity check.. */
 	if (IS_ERR(opp)) {
 		pr_err("frequency not disabled!\n");
 		/* trigger appropriate actions.. */

Sasha Levin (2):
  AI: Add unified AI coding assistant configuration
  AI: Add initial set of rules and docs

 .aider.conf.yml                 |  1 +
 .codeium/instructions.md        |  1 +
 .continue/context.md            |  1 +
 .cursorrules                    |  1 +
 .github/copilot-instructions.md |  1 +
 .windsurfrules                  |  1 +
 CLAUDE.md                       |  1 +
 Documentation/AI/main.md        | 71 +++++++++++++++++++++++++++++++++
 8 files changed, 78 insertions(+)
 create mode 120000 .aider.conf.yml
 create mode 120000 .codeium/instructions.md
 create mode 120000 .continue/context.md
 create mode 120000 .cursorrules
 create mode 120000 .github/copilot-instructions.md
 create mode 120000 .windsurfrules
 create mode 120000 CLAUDE.md
 create mode 100644 Documentation/AI/main.md

--
2.39.5
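For concreteness, the layout described by the diffstat above could be
recreated with something like the sketch below. The file list and the
120000 (symlink) modes come from the series itself; the relative link
targets are an assumption based on "symlinked to a central
documentation file":

```sh
# Sketch of the layout implied by the diffstat; run from the root of
# the kernel tree. The link targets are assumed, not taken verbatim
# from the patches.
mkdir -p Documentation/AI .github .codeium .continue
touch Documentation/AI/main.md   # the central rules document

ln -s Documentation/AI/main.md CLAUDE.md
ln -s Documentation/AI/main.md .cursorrules
ln -s Documentation/AI/main.md .windsurfrules
ln -s Documentation/AI/main.md .aider.conf.yml
ln -s ../Documentation/AI/main.md .github/copilot-instructions.md
ln -s ../Documentation/AI/main.md .codeium/instructions.md
ln -s ../Documentation/AI/main.md .continue/context.md
```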
Em Fri, 25 Jul 2025 13:53:56 -0400
Sasha Levin <sashal@kernel.org> escreveu:

> This patch series adds unified configuration and documentation for AI
> coding assistants working with the Linux kernel codebase. As AI tools
> become increasingly common in software development, it's important to
> establish clear guidelines for their use in kernel development.

...

> The series consists of two patches:
>
> 1. The first patch adds unified configuration files for various AI
>    coding assistants (Claude, GitHub Copilot, Cursor, Codeium, Continue,
>    Windsurf, and Aider). These are all symlinked to a central
>    documentation file to ensure consistency across tools.

Agreed with the goal; however, people may be using other LLM tools,
like ChatGPT and DeepSeek, as a development aid without directly
writing patches with them. Some LLM tools can be very useful when
handling bugs that require searching the web for possible alternatives.
That kind of usage is no different from using a search engine like
DuckDuckGo, Google, Yahoo, ...

LLMs can also be used to write unit tests/functional tests, where the
actual patch can be independent of any LLM coding, if the testing
toolchain is not submitted.

> 2. The second patch adds the actual rules and documentation that guide
>    AI assistants on Linux kernel development practices, including:
>    - Following kernel coding standards
>    - Respecting the development process
>    - Properly attributing AI-generated contributions
>    - Understanding licensing requirements
>
> The examples below demonstrate how these guidelines work in practice,
> showing proper AI attribution in commits and the assistant's
> understanding of kernel documentation requirements.
>
> All AI assistants are required to identify themselves in commits using
> Co-developed-by tags, ensuring full transparency about AI involvement
> in code development.
>
> "Hello world" test:
>
> $ claude -p "do you need to attribute your commits?"
> Yes, according to the CLAUDE.md file in this Linux kernel repository, I
> must identify myself as an AI assistant in commits by including:
>
> ```
> Co-developed-by: Claude claude-opus-4-20250514

Due to the above, I don't think that Co-developed-by is the best tag
here, nor am I sure whether we want a tag at all or just some text in
the patch description describing how AI was used. For instance, in
your example:

> -	/* dont operate on the pointer.. just do a sanity check.. */
> +	/* don't operate on the pointer.. just do a sanity check.. */

the LLM is not developing anything. It is just doing what any other
spellchecker tool would do. We don't have any tags to document that,
although we sometimes add a comment saying that we used cocci or some
other tool to identify (and/or autocorrect) coding style and/or
grammar/syntax issues.

Thanks,
Mauro
> Example patch creation with Claude Code:
>
> $ claude -p "Fix the dont -> don't typo in @Documentation/power/opp.rst. Commit the result"
> Done! The typo has been fixed and committed.

Is this actually how people use AI agents? I've never thought of asking
an agent to write a whole patch and commit the result.

The way that I've seen it is things like GitHub Copilot within VS Code,
where there are inline suggestions. It's kinda like clangd, except it
suggests corrections to your mistakes instead of just underlining them
with red squiggles.

Like if you messed up an argument and passed a pointer when it was
supposed to be a pointer to a pointer, it will give you a little
tooltip correction. But this is long before you would be ready to
actually commit a patch; heck, it's before even testing it (obviously).

The actual committing action would be by running 'git commit'. So I
don't see how these tags could end up in there.
On July 26, 2025 2:07:40 AM PDT, Mario Limonciello <superm1@kernel.org> wrote:
>> Example patch creation with Claude Code:
>>
>> $ claude -p "Fix the dont -> don't typo in @Documentation/power/opp.rst. Commit the result"
>> Done! The typo has been fixed and committed.
>
>Is this actually how people use AI agents? I've never thought of asking
>an agent to write a whole patch and commit the result.

Yeah! I've been using the interactive modes (e.g. just the "claude"
command). The insight from a friend of mine is to have the agent update
its own knowledge base regularly. For example, for a first-time session,
using Claude I would run "claude" in the root of the kernel tree, and
then write:

  /init
  Find and read the coding style and submitting patches documentation in the Documentation/ directory
  Always use a build output directory, like "claude-build". This must always be specified with the O= option for make
  This is a big build machine, so also always use the -j128 option for make
  Perform a build of the kernel
  Save anything new you've learned in your CLAUDE.md file
  Keep all new .md files and temp files in the agentic/ directory
  Examine git history of commits by Kees Cook over the last 5 years and write out a new .md file that describes his commit log writing style
  Use this for any commit messages we write
  Figure out how to run KUnit tests, the docs are in Documentation/
  Run the fortify KUnit tests
  Update your CLAUDE.md with anything new you've learned

(Note that I run the agent as an entirely separate user in a Docker
container.)

The agent acts like an intern who reads VERY quickly, but forgets
things regularly. So it has to be both reminded to save the new stuff
it learns (I break things into separate .md files), and then to read
it back again after it starts forgetting.

It still needs extensive hand-holding, and it's rare that I'm happy
with its commit logs, but it has been pretty helpful so far. That said,
I've been focusing on some smaller refactoring work just to feel out
how to use it. I have some hints that it would struggle with doing very
large-scale refactoring on its own, but it's nice to toss it at a huge
patch I generated via Coccinelle and say "build this across all our
configured archs and fix any compilation failures you encounter" and
then go have lunch. ;)

-Kees

--
Kees Cook
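A minimal sketch of that kind of sandboxing follows; the post doesn't
give the actual invocation, so the image name, user, and mount layout
here are all made up for illustration:

```sh
# Hypothetical wrapper: run the agent as its own unprivileged user in
# a throwaway container, with only the kernel tree mounted in.
docker run --rm -it \
    --user agent \
    -v "$HOME/linux:/home/agent/linux" \
    kernel-agent-sandbox:latest \
    sh -c 'cd /home/agent/linux && claude'
```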
On Sat, Jul 26, 2025 at 04:07:40AM -0500, Mario Limonciello wrote:
>>Example patch creation with Claude Code:
>>
>> $ claude -p "Fix the dont -> don't typo in @Documentation/power/opp.rst. Commit the result"
>> Done! The typo has been fixed and committed.
>
>Is this actually how people use AI agents? I've never thought of
>asking an agent to write a whole patch and commit the result.

Yup, there are a bunch of use cases for this, though this is an
oversimplified example (that's why I picked a simple spelling fix that
the agent could get right "the first time"). See Kees' post at
https://hachyderm.io/@kees/114907228284590439 , in particular the
buffer overflow fix where he logged his prompts.

>The way that I've seen it is things like GitHub Copilot within VS Code
>where there are inline suggestions. It's kinda like clangd, except it
>suggests corrections to your mistakes instead of just underlining them
>with red squiggles.
>
>Like if you messed up an argument and passed a pointer when it was
>supposed to be a pointer to a pointer, it will give you a little
>tooltip correction. But this is long before you would be ready to
>actually commit a patch; heck, it's before even testing it (obviously).
>
>The actual committing action would be by running 'git commit'. So I
>don't see how these tags could end up in there.

Even vscode has a commit feature which can help write commit messages
and actually commit them for you:

https://code.visualstudio.com/docs/sourcecontrol/overview

But yes, if you choose to manually handle the committing aspect then
you should also be manually attributing the agent :)

--
Thanks,
Sasha
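For the manual route, a sketch of what that attribution could look
like; the trailer value mirrors the cover letter's example, and
git's --trailer option needs git 2.32 or later:

```sh
# Stage the change the agent helped write, then commit with your own
# Signed-off-by (-s) plus the attribution trailer added by hand.
git add Documentation/power/opp.rst
git commit -s \
    --trailer "Co-developed-by: Claude claude-opus-4-20250514" \
    -m "Documentation: power: opp: Fix typo dont -> don't"
```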
On Fri, 25 Jul 2025 13:53:56 -0400 Sasha Levin wrote:
> Co-developed-by: Claude claude-opus-4-20250514
> ---
>  Documentation/power/opp.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

I think we should suggest that the tag goes under the "---"?
It's only relevant during review. Once the patch is committed,
whether the code was organic or generated by Corp XYZ's Banana AI
is just free advertising...
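For illustration, here is what that placement would look like on the
example patch from the cover letter (a sketch; the Signed-off-by is
added for completeness). git-am discards everything between the "---"
and the diff, so the tag would be visible on the list but dropped at
commit time:

```
Subject: [PATCH] Documentation: power: opp: Fix typo dont -> don't

Fix a typo in the OPP documentation where "dont" was missing an
apostrophe.

Signed-off-by: Sasha Levin <sashal@kernel.org>
---
Co-developed-by: Claude claude-opus-4-20250514

 Documentation/power/opp.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
```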
On Fri, 25 Jul 2025 11:41:14 -0700
Jakub Kicinski <kuba@kernel.org> wrote:

> On Fri, 25 Jul 2025 13:53:56 -0400 Sasha Levin wrote:
> > Co-developed-by: Claude claude-opus-4-20250514
> > ---
> >  Documentation/power/opp.rst | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
>
> I think we should suggest that the tag goes under the "---"?
> It's only relevant during review. Once the patch is committed,
> whether the code was organic or generated by Corp XYZ's Banana AI
> is just free advertising...

What's the difference between that and others using their corporate
email? I even add (Google) to my SoB to denote who is paying me to do
the work.

Also, I would argue that it would be useful in the change log: if
there's a bug in the generated code, you know who or *what* to blame,
especially if there is a pattern to be found.

-- Steve
On Fri, Jul 25, 2025 at 03:00:46PM -0400, Steven Rostedt wrote:
> Also, I would argue that it would be useful in the change log: if
> there's a bug in the generated code, you know who or *what* to blame,
> especially if there is a pattern to be found.

Yeah, this is where I feel like it's the most potentially useful. Since
these are distinctly code generators, we should include the info to
identify them. We include version numbers and such for the compilers
and linkers, though those are only informally included in commit logs
when dealing with specific problems.

Having had to do "find all commits from [set of authors]" research for
security audits, I would be very unhappy if I had to do this again in
the future for a specific agent (used by any author) and had to loop
lore into the process. Yes, it's *doable*, but it'd be very annoying.

--
Kees Cook
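With a consistent trailer in git history, that audit becomes a local
git query rather than a lore crawl. A sketch, assuming the trailer
format from the cover letter:

```sh
# Find every commit whose message mentions a given agent, across all
# authors, without leaving the local tree.
git log --no-merges --extended-regexp \
    --grep='^Co-developed-by: Claude' \
    --format='%h %an %s'

# Or restrict the search to one exact model version:
git log --oneline \
    --grep='Co-developed-by: Claude claude-opus-4-20250514'
```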
Kees Cook wrote:
> On Fri, Jul 25, 2025 at 03:00:46PM -0400, Steven Rostedt wrote:
> > Also, I would argue that it would be useful in the change log: if
> > there's a bug in the generated code, you know who or *what* to
> > blame, especially if there is a pattern to be found.
>
> Yeah, this is where I feel like it's the most potentially useful. Since
> these are distinctly code generators, we should include the info to
> identify them. We include version numbers and such for the compilers
> and linkers, though those are only informally included in commit logs
> when dealing with specific problems.
>
> Having had to do "find all commits from [set of authors]" research for
> security audits, I would be very unhappy if I had to do this again in
> the future for a specific agent (used by any author) and had to loop
> lore into the process. Yes, it's *doable*, but it'd be very annoying.

Oh, yes, that gives me pause. However, so too does the idea that AI
contributions, beyond the mere mechanical code completion of trusted
human developers, will become more prevalent. *If* it gets to that
point, I agree that this forensic ability is necessary. Now, getting to
that point also assumes that the "AI contribution review decimating
human reviewer bandwidth" problem has a mitigation.

So "doable, but very annoying" strikes me as a problem space where an
AI agent could help. It is not clear to me that a concise commit
trailer captures everything needed to help both the review problem and
the after-the-fact forensics problem, especially when model fine-tuning
and prompting are in play.
On July 28, 2025 8:57:21 AM PDT, dan.j.williams@intel.com wrote:
>Kees Cook wrote:
>> Having had to do "find all commits from [set of authors]" research for
>> security audits, I would be very unhappy if I had to do this again in
>> the future for a specific agent (used by any author) and had to loop
>> lore into the process. Yes, it's *doable*, but it'd be very annoying.
> [...]
>So "doable, but very annoying" strikes me as a problem space where an AI
>agent could help. It is not clear to me that a concise commit trailer
>captures everything needed to help both the review problem and the
>after-the-fact forensics problem, especially when model fine-tuning and
>prompting are in play.

Heh, good point. :) We can make all problems *with* agents into
problems *for* agents. :P

--
Kees Cook
On Fri, 25 Jul 2025 15:00:46 -0400 Steven Rostedt wrote:
> On Fri, 25 Jul 2025 11:41:14 -0700
> Jakub Kicinski <kuba@kernel.org> wrote:
> > On Fri, 25 Jul 2025 13:53:56 -0400 Sasha Levin wrote:
> > > Co-developed-by: Claude claude-opus-4-20250514
> > > ---
> > >  Documentation/power/opp.rst | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > I think we should suggest that the tag goes under the "---"?
> > It's only relevant during review. Once the patch is committed,
> > whether the code was organic or generated by Corp XYZ's Banana AI
> > is just free advertising...
>
> What's the difference between that and others using their corporate
> email? I even add (Google) to my SoB to denote who is paying me to do
> the work.

To be clear, that's not my main point. My main point is that the
information is of no proven use right now. As long as the committer
follows the best known practice of adding
Link: https://patch.msgid.link/... we can find the metadata later.

We never found the need to attach the exact version of smatch / sparse
/ cocci that found a bug or "wrote" a patch. Let's not overreact to the
AI tools.

> Also, I would argue that it would be useful in the change log: if
> there's a bug in the generated code, you know who or *what* to blame,
> especially if there is a pattern to be found.

This touches on the explainability of AI. Perhaps the metadata would be
interesting for XAI research... I'm not sure that's enough to justify
lugging those tags around in git history.
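A sketch of that recovery path, assuming the committer did add the
Link: trailer and that the b4 tool is available:

```sh
# Recover the original posting (and any metadata kept below the "---")
# for the commit at HEAD, using the Link: trailer and lore.kernel.org.
url=$(git log -1 --format='%(trailers:key=Link,valueonly)' HEAD)
msgid=${url##*/}   # patch.msgid.link URLs end in the message-id
b4 am "$msgid"     # b4 fetches the full thread from lore
```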
Jakub Kicinski wrote:
[..]
> To be clear, that's not my main point. My main point is that the
> information is of no proven use right now. As long as the committer
> follows the best known practice of adding
> Link: https://patch.msgid.link/... we can find the metadata later.
>
> We never found the need to attach the exact version of smatch / sparse
> / cocci that found a bug or "wrote" a patch. Let's not overreact to
> the AI tools.
>
> > Also, I would argue that it would be useful in the change log: if
> > there's a bug in the generated code, you know who or *what* to
> > blame, especially if there is a pattern to be found.
>
> This touches on the explainability of AI. Perhaps the metadata would
> be interesting for XAI research... I'm not sure that's enough to
> justify lugging those tags around in git history.

Agree. The "who to blame" is "Author:". They signed the DCO; they are
responsible for debugging what went wrong at any stage of the
development of a patch, per usual. We have a long history of debugging
tool problems without tracking tool versions in git history.
On Fri, 25 Jul 2025 13:34:32 -0700
<dan.j.williams@intel.com> wrote:

> > This touches on the explainability of AI. Perhaps the metadata would
> > be interesting for XAI research... I'm not sure that's enough to
> > justify lugging those tags around in git history.
>
> Agree. The "who to blame" is "Author:". They signed the DCO; they are
> responsible for debugging what went wrong at any stage of the
> development of a patch, per usual. We have a long history of debugging
> tool problems without tracking tool versions in git history.

My point about "who to blame" was not about the author of said code:
if two or more developers are using the same AI agent and then some
pattern of bugs appears only with that AI agent, then we know that the
AI agent is likely the culprit, and we can look for code by other
developers who used that same AI agent.

It's a way to track down a bug in a tool that is creating code, not a
way to move blame from a developer to the agent itself.

-- Steve
Em Mon, 28 Jul 2025 13:46:53 -0400
Steven Rostedt <rostedt@goodmis.org> escreveu:

> On Fri, 25 Jul 2025 13:34:32 -0700
> <dan.j.williams@intel.com> wrote:
>
> > > This touches on the explainability of AI. Perhaps the metadata
> > > would be interesting for XAI research... I'm not sure that's
> > > enough to justify lugging those tags around in git history.
> >
> > Agree. The "who to blame" is "Author:". They signed the DCO; they
> > are responsible for debugging what went wrong at any stage of the
> > development of a patch, per usual. We have a long history of
> > debugging tool problems without tracking tool versions in git
> > history.
>
> My point about "who to blame" was not about the author of said code:
> if two or more developers are using the same AI agent and then some
> pattern of bugs appears only with that AI agent, then we know that
> the AI agent is likely the culprit, and we can look for code by other
> developers who used that same AI agent.
>
> It's a way to track down a bug in a tool that is creating code, not a
> way to move blame from a developer to the agent itself.

I don't think you should blame the tool, just as you cannot blame gcc
for badly written code. Also, the same way a kernel maintainer needs to
know how to produce good code, someone using AI must learn how to
properly use the tool.

After all, at least at the current stage, AI is not intelligent.
Artificial "neurons" just sum up values from their neighbors, trying to
mimic what we know so far about neurons, which is not perfect. In
several respects, it is not much different from a stochastic analysis
that tries to converge on a result.

The entire process resembles the kind of system that can be analyzed
using control theory[1], like root locus analysis. Anyone that ever
played with that knows that sometimes the system is stable enough to
converge to the best results, but the convergence is affected by poles
and zeros: sometimes it might converge to a local minimum; sometimes it
can end up at a zero and diverge, producing completely bogus results.

In other words, the one who posted a bad patch is the one to blame,
together with the ones that reviewed it. AI is not a replacement for
real intelligence.

- Btw, if you want to play with that, I suggest using DeepSeek. Ensure
  that DeepThink (R1) is enabled, as it shows how that particular model
  tries to find a convergence.

- Out of curiosity, I asked two AIs for articles relating control
  theory to LLMs:

  Bhargava, A., Witkowski, C., Looi, S.-Z., & Thomson, M. (2023). What's
  the Magic Word? A Control Theory of LLM Prompting. arXiv preprint
  arXiv:2310.04444. URL: https://arxiv.org/abs/2310.04444

  Kevian, D., Syed, U., Guo, X., Havens, A., Dullerud, G., Seiler, P.,
  Qin, L., & Hu, B. (2024). Capabilities of Large Language Models in
  Control Engineering: A Benchmark Study. arXiv preprint
  arXiv:2404.03647. URL: https://arxiv.org/abs/2404.03647

  Maher, G. (2025). LLMPC: Large Language Model Predictive Control.
  arXiv preprint arXiv:2501.02486. URL: https://arxiv.org/abs/2501.02486

  Zahedifar, R. et al. (2025). LLM-Agent-Controller: A Universal
  Multi-Agent Large Language Model System as a Control Engineer.
  URL: https://arxiv.org/abs/2505.19567

  Zhang, Y. et al. Unveiling LLM Mechanisms Through Neural ODEs and
  Control Theory. URL: https://arxiv.org/abs/2406.16985

  Barfield, Woodrow (2021). A Systems and Control Theory Approach for
  Law and Artificial Intelligence: Demystifying the "Black-Box".
  URL: https://www.mdpi.com/2571-8800/4/4/41

  Zahedifar, R. et al. LLM-controller: Dynamic robot control adaptation
  using large language models.
  URL: https://www.sciencedirect.com/science/article/abs/pii/S0921889024002975

  Bhargava, A. Toward a Control Theory of LLMs (blog post).
  URL: https://aman-bhargava.com/ai/2023/12/17/towards-a-control-theory-of-LLMs.html

I didn't read them (yet).

Thanks,
Mauro
Em Tue, 29 Jul 2025 00:12:33 +0200
Mauro Carvalho Chehab <mchehab+huawei@kernel.org> escreveu:

> Em Mon, 28 Jul 2025 13:46:53 -0400
> Steven Rostedt <rostedt@goodmis.org> escreveu:
>
> > On Fri, 25 Jul 2025 13:34:32 -0700
> > <dan.j.williams@intel.com> wrote:
> >
> > > > This touches on the explainability of AI. Perhaps the metadata
> > > > would be interesting for XAI research... I'm not sure that's
> > > > enough to justify lugging those tags around in git history.
> > >
> > > Agree. The "who to blame" is "Author:". They signed the DCO; they
> > > are responsible for debugging what went wrong at any stage of the
> > > development of a patch, per usual. We have a long history of
> > > debugging tool problems without tracking tool versions in git
> > > history.
> >
> > My point about "who to blame" was not about the author of said code:
> > if two or more developers are using the same AI agent and then some
> > pattern of bugs appears only with that AI agent, then we know that
> > the AI agent is likely the culprit, and we can look for code by
> > other developers who used that same AI agent.
> >
> > It's a way to track down a bug in a tool that is creating code, not
> > a way to move blame from a developer to the agent itself.
>
> I don't think you should blame the tool, just as you cannot blame gcc
> for badly written code. Also, the same way a kernel maintainer needs
> to know how to produce good code, someone using AI must learn how to
> properly use the tool.
>
> After all, at least at the current stage, AI is not intelligent.

Heh, after re-reading my post, I realized that I could have been too
technical, especially for people not familiar with electrical
engineering and systems control theory(*).

What I'm trying to say is that, while AI is a great tool, it is just
another tool that tries to guess something. If you are lucky enough,
you'll get decent results, but one should never trust its results
blindly, as they are based on statistics: it will guess an answer that
will likely be the right one, but could also be completely off.

(*) Systems control theory is a field that studies system stability.
It can be used, for instance, to ensure that an electrical motor can
be properly controlled and provide precise movements. It is not limited
to mechanics, though: it can be used to explain other systems that have
any sort of feedback. In the light of control theory, AI training would
be mapped as a feedback loop.

Regards,
Mauro
Steven Rostedt wrote:
> On Fri, 25 Jul 2025 13:34:32 -0700
> <dan.j.williams@intel.com> wrote:
>
> > > This touches on the explainability of AI. Perhaps the metadata
> > > would be interesting for XAI research... I'm not sure that's
> > > enough to justify lugging those tags around in git history.
> >
> > Agree. The "who to blame" is "Author:". They signed the DCO; they
> > are responsible for debugging what went wrong at any stage of the
> > development of a patch, per usual. We have a long history of
> > debugging tool problems without tracking tool versions in git
> > history.
>
> My point about "who to blame" was not about the author of said code:
> if two or more developers are using the same AI agent and then some
> pattern of bugs appears only with that AI agent, then we know that
> the AI agent is likely the culprit, and we can look for code by other
> developers who used that same AI agent.
>
> It's a way to track down a bug in a tool that is creating code, not a
> way to move blame from a developer to the agent itself.

Between fine-tuning (the process of doing local training to emphasize /
de-emphasize some weights in the model) and prompt variability, the
signal from a patch trailer is diluted. If maintainers care about
commit-text conciseness for humans and traceability for AI, those
competing concerns will conflict above the "---" line in patches.
On Fri, Jul 25, 2025 at 01:34:32PM -0700, dan.j.williams@intel.com wrote:
>Jakub Kicinski wrote:
>[..]
>> To be clear, that's not my main point. My main point is that the
>> information is of no proven use right now. As long as the committer
>> follows the best known practice of adding
>> Link: https://patch.msgid.link/... we can find the metadata later.
>>
>> We never found the need to attach the exact version of smatch / sparse
>> / cocci that found a bug or "wrote" a patch. Let's not overreact to
>> the AI tools.
>>
>> > Also, I would argue that it would be useful in the change log: if
>> > there's a bug in the generated code, you know who or *what* to
>> > blame, especially if there is a pattern to be found.
>>
>> This touches on the explainability of AI. Perhaps the metadata would
>> be interesting for XAI research... I'm not sure that's enough to
>> justify lugging those tags around in git history.
>
>Agree. The "who to blame" is "Author:". They signed the DCO; they are
>responsible for debugging what went wrong at any stage of the
>development of a patch, per usual. We have a long history of debugging
>tool problems without tracking tool versions in git history.

And it would be great to avoid the potential "it wasn't me, it was the
AI!" or "whoops, I don't know how that exploitable issue ended up in my
patch, must have been the AI".

--
Thanks,
Sasha
* Jakub Kicinski (kuba@kernel.org) wrote:
> On Fri, 25 Jul 2025 15:00:46 -0400 Steven Rostedt wrote:
> > On Fri, 25 Jul 2025 11:41:14 -0700
> > Jakub Kicinski <kuba@kernel.org> wrote:
> > > On Fri, 25 Jul 2025 13:53:56 -0400 Sasha Levin wrote:
> > > > Co-developed-by: Claude claude-opus-4-20250514
> > > > ---
> > > >  Documentation/power/opp.rst | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > I think we should suggest that the tag goes under the "---"?
> > > It's only relevant during review. Once the patch is committed,
> > > whether the code was organic or generated by Corp XYZ's Banana AI
> > > is just free advertising...
> >
> > What's the difference between that and others using their corporate
> > email? I even add (Google) to my SoB to denote who is paying me to
> > do the work.
>
> To be clear, that's not my main point. My main point is that the
> information is of no proven use right now. As long as the committer
> follows the best known practice of adding
> Link: https://patch.msgid.link/... we can find the metadata later.
>
> We never found the need to attach the exact version of smatch / sparse
> / cocci that found a bug or "wrote" a patch. Let's not overreact to
> the AI tools.

People have done it (using inconsistent tags and comments) for things
like Coverity for years; some people worry a lot about AI, some not at
all. Adding a tag:

 a) Lets the people who worry keep track of what our mechanical
    overlords are doing.
 b) Lets reviewers who are wary of slop cast a careful eye.
 c) Gives the tools (and their developers) suitable credit. After all,
    machines need love too.

> > Also, I would argue that it would be useful in the change log: if
> > there's a bug in the generated code, you know who or *what* to
> > blame, especially if there is a pattern to be found.
>
> This touches on the explainability of AI. Perhaps the metadata would
> be interesting for XAI research... I'm not sure that's enough to
> justify lugging those tags around in git history.

We carry lots more random stuff in commit messages!

Dave
--
 -----Open up your eyes, open up your mind, open up your code -------
/ Dr. David Alan Gilbert | Running GNU/Linux | Happy  \
\ dave @ treblig.org     |                   | In Hex /
 \ ______________________|___________________|_______/
   http://www.treblig.org