From: Daniel P. Berrangé <berrange@redhat.com>
Bug reports from automated tools and AI agents are time consuming to
triage and have poor signal/noise ratio. Set strong expectations for
any reporters using such tools, in a (likely doomed) attempt to stem
the flow of poor quality reports.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 docs/bugs.rst            | 14 ++++++++++++++
 docs/securityprocess.rst |  4 ++++
 2 files changed, 18 insertions(+)
diff --git a/docs/bugs.rst b/docs/bugs.rst
index 5fd1970caf..e12a6c74ec 100644
--- a/docs/bugs.rst
+++ b/docs/bugs.rst
@@ -76,6 +76,20 @@ Linux Distribution specific bug reports
 like to have your procedure for filing bugs mentioned here, please mail the
 libvirt development list.
 
+Use of automated tools / AI agents
+----------------------------------
+
+If any automated tool / AI agent is used to identify a bug / security
+flaw, the following additional expectations apply when filing a report:
+
+- The tool / agent used **MUST** be clearly declared in the description
+- All stated facts **MUST** be validated as correct and free from AI
+ hallucinations prior to filing
+- The problem **MUST** be described against an upstream release that is
+ no more than 3 months old.
+- The problem **SHOULD** be analysed and accompanied with a proposed
+ patch that can be directly applied to current git
+
 How to file high quality bug reports
 ------------------------------------
diff --git a/docs/securityprocess.rst b/docs/securityprocess.rst
index 075679df74..b7695ddc59 100644
--- a/docs/securityprocess.rst
+++ b/docs/securityprocess.rst
@@ -27,6 +27,10 @@ and moderated for non-members. As such you will receive an auto-reply indicating
 the report is held for moderation. Postings by non-members will be approved by a
 moderator and the reporter copied on any replies.
 
+Refer to the `bug reporting <bugs.html#use-of-automated-tools-ai-agents>`__
+page for the *expectations around the use of automated tools and AI agents*,
+**prior** to filing any security report.
+
 Security notices
 ----------------
--
2.49.0
On 6/6/25 10:52, Daniel P. Berrangé via Devel wrote:
> From: Daniel P. Berrangé <berrange@redhat.com>
>
> Bug reports from automated tools and AI agents are time consuming

Maybe an orthogonal topic, but should we also discourage (if not ban)
people from sending patches generated by AI tools? For instance, Gentoo
has done so [1], and their foremost reason is the possible licensing
problem / copyright violation.

I've seen some people asking language models what this or that internal
function of ours does (when investigating code). And while I might have
preferences on that, it's probably okay. But letting LLMs that were
trained on who-knows-what generate pieces of code might pose a problem
once such code is merged.

OTOH - we have the Developer Certificate of Origin, which should mean
that the author is entitled to send a given patch.

1: https://wiki.gentoo.org/wiki/Project:Council/AI_policy

Michal
On Mon, Jun 09, 2025 at 03:06:00PM +0200, Michal Prívozník wrote:
> On 6/6/25 10:52, Daniel P. Berrangé via Devel wrote:
> > From: Daniel P. Berrangé <berrange@redhat.com>
> >
> > Bug reports from automated tools and AI agents are time consuming
>
> Maybe orthogonal topic, but should we also discourage (if not ban)
> people from sending patches generated by AI tools? For instance Gentoo
> has done so [1] and their foremost reason is possible licensing problem
> / copyright violation.
>
> I've seen some people asking some language models what does this or that
> internal function of ours do (when investigating code). And while I
> might have preferences on that, it's probably okay. But letting LLMs
> generate pieces of code that was trained on who-knows-what might pose
> problem once such code is merged.
>
> OTOH - we have Developer Certificate of Origin which should mean that
> the author can send given patch.

In QEMU I put forward the viewpoint that contributing under the DCO is
incompatible with the use of common AI content generators today, given
the inability to satisfy any of the DCO clauses for the AI portion of
the patch.

https://lists.nongnu.org/archive/html/qemu-devel/2025-06/msg00453.html

In theory we shouldn't need to state anything, but I expect people
wouldn't be thinking of the implications of the DCO rules when they
decide to use AI tools, hence the suggestion to document it in QEMU.

The QEMU proposal isn't merged yet, but we should keep an eye on it, as
the position would apply equally to libvirt as to QEMU.

With regards,
Daniel

-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On Fri, Jun 06, 2025 at 09:52:49 +0100, Daniel P. Berrangé via Devel wrote:
> From: Daniel P. Berrangé <berrange@redhat.com>
>
> Bug reports from automated tools and AI agents are time consuming to
> triage and have poor signal/noise ratio. Set strong expectations for
> any reporters using such tools, in a (likely doomed) attempt to stem
> the flow of poor quality reports.
>
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>  docs/bugs.rst            | 14 ++++++++++++++
>  docs/securityprocess.rst |  4 ++++
>  2 files changed, 18 insertions(+)
>
> diff --git a/docs/bugs.rst b/docs/bugs.rst
> index 5fd1970caf..e12a6c74ec 100644
> --- a/docs/bugs.rst
> +++ b/docs/bugs.rst
> @@ -76,6 +76,20 @@ Linux Distribution specific bug reports
>  like to have your procedure for filing bugs mentioned here, please mail the
>  libvirt development list.
>
> +Use of automated tools / AI agents
> +----------------------------------
> +
> +If any automated tool / AI agent is used to identify a bug / security
> +flaw, the following additional expectations apply when filing a report:
> +
> +- The tool / agent used **MUST** be clearly declared in the description
> +- All stated facts **MUST** be validated as correct and free from AI
> +  hallucinations prior to filing
> +- The problem **MUST** be described against an upstream release that is
> +  no more than 3 months old.
> +- The problem **SHOULD** be analysed and accompanied with a proposed
> +  patch that can be directly applied to current git

I'd also like to prohibit/avoid vague and too general statements. In the
last few low quality reports that I've seen, the problem statement and
reproducer were true only because they were too vague.

E.g. saying that "if you call this function with a NULL argument it will
crash" can be true, but if we're making sure elsewhere that it can't
happen, it's quite useless.

I'm not sure though how to formulate that.

Otherwise looks good, and even like this:

Reviewed-by: Peter Krempa <pkrempa@redhat.com>
On Fri, Jun 06, 2025 at 11:05:23AM +0200, Peter Krempa wrote:
> On Fri, Jun 06, 2025 at 09:52:49 +0100, Daniel P. Berrangé via Devel wrote:
> > From: Daniel P. Berrangé <berrange@redhat.com>
> >
> > Bug reports from automated tools and AI agents are time consuming to
> > triage and have poor signal/noise ratio. Set strong expectations for
> > any reporters using such tools, in a (likely doomed) attempt to stem
> > the flow of poor quality reports.
> >
> > Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> > ---
> >  docs/bugs.rst            | 14 ++++++++++++++
> >  docs/securityprocess.rst |  4 ++++
> >  2 files changed, 18 insertions(+)
> >
> > diff --git a/docs/bugs.rst b/docs/bugs.rst
> > index 5fd1970caf..e12a6c74ec 100644
> > --- a/docs/bugs.rst
> > +++ b/docs/bugs.rst
> > @@ -76,6 +76,20 @@ Linux Distribution specific bug reports
> >  like to have your procedure for filing bugs mentioned here, please mail the
> >  libvirt development list.
> >
> > +Use of automated tools / AI agents
> > +----------------------------------
> > +
> > +If any automated tool / AI agent is used to identify a bug / security
> > +flaw, the following additional expectations apply when filing a report:
> > +
> > +- The tool / agent used **MUST** be clearly declared in the description
> > +- All stated facts **MUST** be validated as correct and free from AI
> > +  hallucinations prior to filing
> > +- The problem **MUST** be described against an upstream release that is
> > +  no more than 3 months old.
> > +- The problem **SHOULD** be analysed and accompanied with a proposed
> > +  patch that can be directly applied to current git
>
> I'd also like to prohibit/avoid vague and too general statements.
> In the few last reports that were low quality that I've seen, the
> problem statement and reproducer were true because they were too vague.
>
> E.g. saying that "if you call this function with NULL argument it will
> crash" can be true, but if we're making sure that it can't happen
> elsewhere it's quite useless.
>
> I'm not sure though how to formulate that.

I figure that kind of vague / hand-wavy nonsense is often a characteristic
of AI output. By requiring use of AI to be declared upfront, when we see
such vague statements, we can just dismiss the bug or require the reporter
to explain properly.

With regards,
Daniel
On Fri, Jun 06, 2025 at 10:21:16 +0100, Daniel P. Berrangé wrote:
> On Fri, Jun 06, 2025 at 11:05:23AM +0200, Peter Krempa wrote:
> > On Fri, Jun 06, 2025 at 09:52:49 +0100, Daniel P. Berrangé via Devel wrote:
> > > From: Daniel P. Berrangé <berrange@redhat.com>
> > >
> > > Bug reports from automated tools and AI agents are time consuming to
> > > triage and have poor signal/noise ratio. Set strong expectations for
> > > any reporters using such tools, in a (likely doomed) attempt to stem
                                                                     ^^^^ [1]
> > > the flow of poor quality reports.

[...]

> > > +Use of automated tools / AI agents
> > > +----------------------------------
> > > +
> > > +If any automated tool / AI agent is used to identify a bug / security
> > > +flaw, the following additional expectations apply when filing a report:
> > > +
> > > +- The tool / agent used **MUST** be clearly declared in the description
> > > +- All stated facts **MUST** be validated as correct and free from AI
> > > +  hallucinations prior to filing
> > > +- The problem **MUST** be described against an upstream release that is
> > > +  no more than 3 months old.
> > > +- The problem **SHOULD** be analysed and accompanied with a proposed
> > > +  patch that can be directly applied to current git
> >
> > I'd also like to prohibit/avoid vague and too general statements.
> > In the few last reports that were low quality that I've seen, the
> > problem statement and reproducer were true because they were too vague.
> >
> > E.g. saying that "if you call this function with NULL argument it will
> > crash" can be true, but if we're making sure that it can't happen
> > elsewhere it's quite useless.
> >
> > I'm not sure though how to formulate that.
>
> I figure that kind of vague / hand-wavy nonsense is often a characteristic
> of AI output. By requiring use of AI to be declared upfront, when we see
> such vague statements, we can just dismiss the bug or require the reporter
> to explain properly.

Good point.

Also, as you point out [1], it's likely that slop submitters won't
conform to this either; mostly because they'd have to put effort into
reading it, which goes against the point of using slop generators.