These patches contain three changes to QEMU's code provenance policy
with respect to AI-generated content. I am sorting them from least to
most controversial.

First, I am emphasizing the intended scope: the policy is not about
content generators but about generated content (patch 1).

Second, I am adding procedural requirements and liability boundaries
to the exception process (patches 2-3). These changes give the process
a clear structure and clarify that it does not expand the maintainers'
responsibilities.

On top of these changes, however, I am also expanding the exception
process so that it is actually feasible to request and obtain an
exception. Requiring "clarity of the license and copyright status
for the tool's output" is almost asking for the impossible, a problem
also shared by other AI policies such as the Linux Foundation's
(https://www.linuxfoundation.org/legal/generative-ai). Therefore, add
a second case for an exception, limited but practical: "limited or
non-existent creative content" (patch 4).
Paolo
Paolo Bonzini (4):
docs/code-provenance: clarify scope very early
docs/code-provenance: make the exception process more prominent
docs/code-provenance: clarify the scope of AI exceptions
docs/code-provenance: make the exception process feasible
docs/devel/code-provenance.rst | 46 +++++++++++++++++++++++-----------
1 file changed, 31 insertions(+), 15 deletions(-)
--
2.51.0