[libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode

marcandre.lureau@redhat.com posted 1 patch 7 years, 1 month ago
git fetch https://github.com/patchew-project/libvirt tags/patchew/20170213115148.20405-1-marcandre.lureau@redhat.com
[libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode
Posted by marcandre.lureau@redhat.com 7 years, 1 month ago
From: Marc-André Lureau <marcandre.lureau@redhat.com>

I am working on a WIP series to add a QEMU Spice/virgl rendernode
option. Since rendernodes are not stable across reboots, I propose that
QEMU also accept a PCI address (other bus types may be added in the
future).

This is how I translated it to libvirt. I picked <gpu> over
<rendernode>, since it seems more generic. Comments welcome!

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 docs/formatdomain.html.in                          | 13 ++++++-
 docs/schemas/domaincommon.rng                      |  5 +++
 src/conf/domain_conf.c                             | 45 ++++++++++++++++++++--
 src/conf/domain_conf.h                             |  1 +
 src/qemu/qemu_capabilities.c                       |  2 +
 src/qemu/qemu_capabilities.h                       |  1 +
 src/qemu/qemu_command.c                            | 22 +++++++++++
 .../qemuxml2argv-video-virtio-gpu-spice-gl.args    |  2 +-
 .../qemuxml2argv-video-virtio-gpu-spice-gl.xml     |  6 ++-
 tests/qemuxml2argvtest.c                           |  1 +
 .../qemuxml2xmlout-video-virtio-gpu-spice-gl.xml   |  6 ++-
 11 files changed, 97 insertions(+), 7 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 0a115f5dc..5c0f80b3c 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -5619,7 +5619,11 @@ qemu-kvm -net nic,model=? /dev/null
   &lt;clipboard copypaste='no'/&gt;
   &lt;mouse mode='client'/&gt;
   &lt;filetransfer enable='no'/&gt;
-  &lt;gl enable='yes'/&gt;
+  &lt;gl enable='yes'&gt;
+    &lt;gpu&gt;
+      &lt;address type='pci' domain='0x0000' bus='0x06' slot='0x12' function='0x5'/&gt;
+    &lt;/gpu&gt;
+  &lt;/gl&gt;
 &lt;/graphics&gt;</pre>
             <p>
               Spice supports variable compression settings for audio, images and
@@ -5665,6 +5669,13 @@ qemu-kvm -net nic,model=? /dev/null
               the <code>gl</code> element, by setting the <code>enable</code>
               property. (QEMU only, <span class="since">since 1.3.3</span>).
             </p>
+            <p>
+              By default, QEMU picks the first available GPU for
+              rendering. A specific host GPU may be selected with the
+              <code>gpu</code> element, which accepts a PCI
+              <code>address</code> child element. (QEMU only, <span
+              class="since">since 3.1</span>).
+            </p>
           </dd>
           <dt><code>rdp</code></dt>
           <dd>
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index d715bff29..cc85e07d8 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -3033,6 +3033,11 @@
                 <attribute name="enable">
                   <ref name="virYesNo"/>
                 </attribute>
+                <optional>
+                  <element name="gpu">
+                    <ref name="address"/>
+                  </element>
+                </optional>
                 <empty/>
               </element>
             </optional>
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 1bc72a4e9..a18db6dd9 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1300,6 +1300,7 @@ void virDomainGraphicsDefFree(virDomainGraphicsDefPtr def)
         break;
 
     case VIR_DOMAIN_GRAPHICS_TYPE_SPICE:
+        virDomainDeviceInfoClear(&def->data.spice.gpu);
         VIR_FREE(def->data.spice.keymap);
         virDomainGraphicsAuthDefClear(&def->data.spice.auth);
         break;
@@ -12159,6 +12160,13 @@ virDomainGraphicsDefParseXMLSpice(virDomainGraphicsDefPtr def,
                 VIR_FREE(enable);
 
                 def->data.spice.gl = enableVal;
+
+                if (cur->children && cur->children->type == XML_ELEMENT_NODE &&
+                    xmlStrEqual(cur->children->name, BAD_CAST "gpu") &&
+                    virDomainDeviceInfoParseXML(cur->children, NULL,
+                                                &def->data.spice.gpu, flags) < 0)
+                    goto error;
+
             } else if (xmlStrEqual(cur->name, BAD_CAST "mouse")) {
                 char *mode = virXMLPropString(cur, "mode");
                 int modeVal;
@@ -22839,6 +22847,38 @@ virDomainGraphicsListenDefFormatAddr(virBufferPtr buf,
         virBufferAsprintf(buf, " listen='%s'", glisten->address);
 }
 
+static int
+virDomainSpiceGLDefFormat(virBufferPtr buf, virDomainGraphicsDefPtr def)
+{
+    virBuffer childrenBuf = VIR_BUFFER_INITIALIZER;
+    int indent = virBufferGetIndent(buf, false);
+
+    if (def->data.spice.gl == VIR_TRISTATE_BOOL_ABSENT) {
+        return 0;
+    }
+
+    virBufferAsprintf(buf, "<gl enable='%s'",
+                      virTristateBoolTypeToString(def->data.spice.gl));
+
+    virBufferAdjustIndent(&childrenBuf, indent + 4);
+    if (virDomainDeviceInfoFormat(&childrenBuf, &def->data.spice.gpu, 0) < 0) {
+        virBufferFreeAndReset(&childrenBuf);
+        return -1;
+    }
+    if (virBufferUse(&childrenBuf)) {
+        virBufferAddLit(buf, ">\n");
+        virBufferAdjustIndent(buf, 2);
+        virBufferAddLit(buf, "<gpu>\n");
+        virBufferAddBuffer(buf, &childrenBuf);
+        virBufferAddLit(buf, "</gpu>\n");
+        virBufferAdjustIndent(buf, -2);
+        virBufferAddLit(buf, "</gl>\n");
+    } else {
+        virBufferAddLit(buf, "/>\n");
+    }
+    virBufferFreeAndReset(&childrenBuf);
+    return 0;
+}
 
 static int
 virDomainGraphicsDefFormat(virBufferPtr buf,
@@ -23082,9 +23120,10 @@ virDomainGraphicsDefFormat(virBufferPtr buf,
         if (def->data.spice.filetransfer)
             virBufferAsprintf(buf, "<filetransfer enable='%s'/>\n",
                               virTristateBoolTypeToString(def->data.spice.filetransfer));
-        if (def->data.spice.gl)
-            virBufferAsprintf(buf, "<gl enable='%s'/>\n",
-                              virTristateBoolTypeToString(def->data.spice.gl));
+
+        if (virDomainSpiceGLDefFormat(buf, def) < 0) {
+            return -1;
+        }
     }
 
     if (children) {
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index dd79206f6..04bdf8914 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1544,6 +1544,7 @@ struct _virDomainGraphicsDef {
             virTristateBool copypaste;
             virTristateBool filetransfer;
             virTristateBool gl;
+            virDomainDeviceInfo gpu;
         } spice;
     } data;
     /* nListens, listens, and *port are only useful if type is vnc,
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 399e31447..e851eec7a 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -357,6 +357,7 @@ VIR_ENUM_IMPL(virQEMUCaps, QEMU_CAPS_LAST,
 
               "query-cpu-model-expansion", /* 245 */
               "virtio-net.host_mtu",
+              "spice-rendernode",
     );
 
 
@@ -2950,6 +2951,7 @@ static struct virQEMUCapsCommandLineProps virQEMUCapsCommandLine[] = {
     { "spice", "unix", QEMU_CAPS_SPICE_UNIX },
     { "drive", "throttling.bps-total-max-length", QEMU_CAPS_DRIVE_IOTUNE_MAX_LENGTH },
     { "drive", "throttling.group", QEMU_CAPS_DRIVE_IOTUNE_GROUP },
+    { "spice", "rendernode", QEMU_CAPS_SPICE_RENDERNODE },
 };
 
 static int
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 95bb67d44..0f998c473 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -393,6 +393,7 @@ typedef enum {
     /* 245 */
     QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION, /* qmp query-cpu-model-expansion */
     QEMU_CAPS_VIRTIO_NET_HOST_MTU, /* virtio-net-*.host_mtu */
+    QEMU_CAPS_SPICE_RENDERNODE, /* -spice rendernode */
 
     QEMU_CAPS_LAST /* this must always be the last item */
 } virQEMUCapsFlags;
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index c00a47a91..3c1de3862 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -7920,6 +7920,28 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
          * TristateSwitch helper */
         virBufferAsprintf(&opt, "gl=%s,",
                           virTristateSwitchTypeToString(graphics->data.spice.gl));
+
+        if (graphics->data.spice.gpu.type !=
+            VIR_DOMAIN_DEVICE_ADDRESS_TYPE_NONE) {
+            virDomainDeviceInfo *info = &graphics->data.spice.gpu;
+            char *devstr;
+
+            if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
+                virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                               _("This QEMU doesn't support spice OpenGL rendernode"));
+                goto error;
+            }
+            if (info->type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI) {
+                virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                               _("only 'pci' addresses are supported for the gpu device"));
+                goto error;
+            }
+            if (!(devstr = virDomainPCIAddressAsString(&info->addr.pci)))
+                goto error;
+
+            virBufferAsprintf(&opt, "rendernode=%s,", devstr);
+            VIR_FREE(devstr);
+        }
     }
 
     if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SEAMLESS_MIGRATION)) {
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-video-virtio-gpu-spice-gl.args b/tests/qemuxml2argvdata/qemuxml2argv-video-virtio-gpu-spice-gl.args
index f1ebb62e4..1e0728fb5 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-video-virtio-gpu-spice-gl.args
+++ b/tests/qemuxml2argvdata/qemuxml2argv-video-virtio-gpu-spice-gl.args
@@ -19,6 +19,6 @@ QEMU_AUDIO_DRV=spice \
 -drive file=/var/lib/libvirt/images/QEMUGuest1,format=qcow2,if=none,\
 id=drive-ide0-0-0,cache=none \
 -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
--spice port=0,gl=on \
+-spice port=0,gl=on,rendernode=0000:06:12.5 \
 -device virtio-gpu-pci,id=video0,virgl=on,bus=pci.0,addr=0x2 \
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-video-virtio-gpu-spice-gl.xml b/tests/qemuxml2argvdata/qemuxml2argv-video-virtio-gpu-spice-gl.xml
index b9c7c8af0..fdd3939d2 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-video-virtio-gpu-spice-gl.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-video-virtio-gpu-spice-gl.xml
@@ -26,7 +26,11 @@
     <input type='mouse' bus='ps2'/>
     <input type='keyboard' bus='ps2'/>
     <graphics type='spice' autoport='no'>
-      <gl enable='yes'/>
+      <gl enable='yes'>
+        <gpu>
+          <address type='pci' domain='0x0000' bus='0x06' slot='0x12' function='0x5'/>
+        </gpu>
+      </gl>
     </graphics>
     <video>
       <model type='virtio' heads='1'>
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 8d737fdc8..dcc094960 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -1722,6 +1722,7 @@ mymain(void)
             QEMU_CAPS_VIRTIO_GPU_VIRGL,
             QEMU_CAPS_SPICE,
             QEMU_CAPS_SPICE_GL,
+            QEMU_CAPS_SPICE_RENDERNODE,
             QEMU_CAPS_DEVICE_VIDEO_PRIMARY);
     DO_TEST("video-virtio-gpu-secondary",
             QEMU_CAPS_DEVICE_VIRTIO_GPU,
diff --git a/tests/qemuxml2xmloutdata/qemuxml2xmlout-video-virtio-gpu-spice-gl.xml b/tests/qemuxml2xmloutdata/qemuxml2xmlout-video-virtio-gpu-spice-gl.xml
index 9fb533ad9..89972d1fe 100644
--- a/tests/qemuxml2xmloutdata/qemuxml2xmlout-video-virtio-gpu-spice-gl.xml
+++ b/tests/qemuxml2xmloutdata/qemuxml2xmlout-video-virtio-gpu-spice-gl.xml
@@ -31,7 +31,11 @@
     <input type='keyboard' bus='ps2'/>
     <graphics type='spice'>
       <listen type='none'/>
-      <gl enable='yes'/>
+      <gl enable='yes'>
+        <gpu>
+          <address type='pci' domain='0x0000' bus='0x06' slot='0x12' function='0x5'/>
+        </gpu>
+      </gl>
     </graphics>
     <video>
       <model type='virtio' heads='1' primary='yes'>
-- 
2.11.0.295.gd7dffce1c.dirty

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode
Posted by Daniel P. Berrange 7 years, 1 month ago
On Mon, Feb 13, 2017 at 03:51:48PM +0400, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> I am working on a WIP series to add QEMU Spice/virgl rendernode option.
> Since rendernodes are not stable across reboots, I propose that QEMU
> accepts also a PCI address (other bus types may be added in the future).

Hmm, can you elaborate on this aspect ?  It feels like a parallel
to saying NIC device names are not stable, so we should configure
guests using PCI addresses instead of 'eth0', etc but we stuck with
using NIC names in libvirt on the basis that you can create udev
rules to ensure stable naming ?

So is there not a case to be made that if you want stable render
device names when multiple NICs are present, then you should use
udev to ensure a given device always maps to the same PCI dev.

> This is how I translated it to libvirt. I picked <gpu> over
> <rendernode>, since it seems more generic. Comments welcome!


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

Re: [libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode
Posted by Marc-André Lureau 7 years, 1 month ago
Hi

----- Original Message -----
> On Mon, Feb 13, 2017 at 03:51:48PM +0400, marcandre.lureau@redhat.com wrote:
> > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> > 
> > I am working on a WIP series to add QEMU Spice/virgl rendernode option.
> > Since rendernodes are not stable across reboots, I propose that QEMU
> > accepts also a PCI address (other bus types may be added in the future).
> 
> Hmm, can you elaborate on this aspect ?  It feels like a parallel
> to saying NIC device names are not stable, so we should configure
> guests using PCI addresses instead of 'eth0', etc but we stuck with
> using NIC names in libvirt on the basis that you can create udev
> rules to ensure stable naming ?
> 
> So is there not a case to be made that if you want stable render
> device names when multiple NICs are present, then you should use
> udev to ensure a given device always maps to the same PCI dev.

I thought it was simpler to use a PCI address (do you expect users to create udev rules for the GPUs?)

We could also allow both forms (this is what I'll propose in qemu, fyi: https://github.com/elmarco/qemu/commits/spice)

> > This is how I translated it to libvirt. I picked <gpu> over
> > <rendernode>, since it seems more generic. Comments welcome!
> 
> 
> Regards,
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
> 

Re: [libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode
Posted by Daniel P. Berrange 7 years, 1 month ago
On Mon, Feb 13, 2017 at 07:19:04AM -0500, Marc-André Lureau wrote:
> Hi
> 
> ----- Original Message -----
> > On Mon, Feb 13, 2017 at 03:51:48PM +0400, marcandre.lureau@redhat.com wrote:
> > > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> > > 
> > > I am working on a WIP series to add QEMU Spice/virgl rendernode option.
> > > Since rendernodes are not stable across reboots, I propose that QEMU
> > > accepts also a PCI address (other bus types may be added in the future).
> > 
> > Hmm, can you elaborate on this aspect ?  It feels like a parallel
> > to saying NIC device names are not stable, so we should configure
> > guests using PCI addresses instead of 'eth0', etc but we stuck with
> > using NIC names in libvirt on the basis that you can create udev
> > rules to ensure stable naming ?
> > 
> > So is there not a case to be made that if you want stable render
> > device names when multiple NICs are present, then you should use
> > udev to ensure a given device always maps to the same PCI dev.
> 
> I thought it was simpler to use a PCI address (do you expect users
> to create udev rules for the GPUs?)

Well most users will only have 1 GPU so surely this won't be a problem
in the common case.  Is it possible to get some stable naming rules into
udev upstream though, so all distros get stable names by default


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

Re: [libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode
Posted by Marc-André Lureau 7 years, 1 month ago
Hi

----- Original Message -----
> On Mon, Feb 13, 2017 at 07:19:04AM -0500, Marc-André Lureau wrote:
> > Hi
> > 
> > ----- Original Message -----
> > > On Mon, Feb 13, 2017 at 03:51:48PM +0400, marcandre.lureau@redhat.com
> > > wrote:
> > > > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> > > > 
> > > > I am working on a WIP series to add QEMU Spice/virgl rendernode option.
> > > > Since rendernodes are not stable across reboots, I propose that QEMU
> > > > accepts also a PCI address (other bus types may be added in the
> > > > future).
> > > 
> > > Hmm, can you elaborate on this aspect ?  It feels like a parallel
> > > to saying NIC device names are not stable, so we should configure
> > > guests using PCI addresses instead of 'eth0', etc but we stuck with
> > > using NIC names in libvirt on the basis that you can create udev
> > > rules to ensure stable naming ?
> > > 
> > > So is there not a case to be made that if you want stable render
> > > device names when multiple NICs are present, then you should use
> > > udev to ensure a given device always maps to the same PCI dev.
> > 
> > I thought it was simpler to use a PCI address (do you expect users
> > to create udev rules for the GPUs?)
> 
> Well most users will only have 1 GPU so surely this won't be a problem
> in the common case.  Is it possible to get some stable naming rules into
> udev upstream though, so all distros get stable names by default

Optimus is getting more and more mainstream, see recent Fedora desktop effort (fwiw I have a t460p nouveau/i915). I don't think a random user of such hw/laptop should have to create udev rules.

I suppose systemd-udev could learn to create stable paths with help from src/udev/udev-builtin-path_id.c. I will work on it. However, I have virt-manager code to look up GPU info/paths using libdrm, and it is unlikely that it will work with the udev rules. So I'll have to patch libdrm to support that too.
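
For reference, a stable name could in principle come from a udev rule along
these lines. This is only a sketch, assuming the drm subsystem exposes the
PCI parent via KERNELS and that a by-path style symlink name is acceptable;
it is not part of this patch:

```
# Sketch (hypothetical, not shipped anywhere): give the render node of
# the GPU at PCI 0000:06:12.5 a stable symlink under /dev/dri/by-path/.
KERNEL=="renderD*", SUBSYSTEM=="drm", KERNELS=="0000:06:12.5", SYMLINK+="dri/by-path/pci-0000:06:12.5-render"
```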

Re: [libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode
Posted by Daniel P. Berrange 7 years, 1 month ago
On Mon, Feb 13, 2017 at 08:08:20AM -0500, Marc-André Lureau wrote:
> Hi
> 
> ----- Original Message -----
> > On Mon, Feb 13, 2017 at 07:19:04AM -0500, Marc-André Lureau wrote:
> > > Hi
> > > 
> > > ----- Original Message -----
> > > > On Mon, Feb 13, 2017 at 03:51:48PM +0400, marcandre.lureau@redhat.com
> > > > wrote:
> > > > > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> > > > > 
> > > > > I am working on a WIP series to add QEMU Spice/virgl rendernode option.
> > > > > Since rendernodes are not stable across reboots, I propose that QEMU
> > > > > accepts also a PCI address (other bus types may be added in the
> > > > > future).
> > > > 
> > > > Hmm, can you elaborate on this aspect ?  It feels like a parallel
> > > > to saying NIC device names are not stable, so we should configure
> > > > guests using PCI addresses instead of 'eth0', etc but we stuck with
> > > > using NIC names in libvirt on the basis that you can create udev
> > > > rules to ensure stable naming ?
> > > > 
> > > > So is there not a case to be made that if you want stable render
> > > > device names when multiple NICs are present, then you should use
> > > > udev to ensure a given device always maps to the same PCI dev.
> > > 
> > > I thought it was simpler to use a PCI address (do you expect users
> > > to create udev rules for the GPUs?)
> > 
> > Well most users will only have 1 GPU so surely this won't be a problem
> > in the common case.  Is it possible to get some stable naming rules into
> > udev upstream though, so all distros get stable names by default
> 
> Optimus is getting more and more mainstream, see recent Fedora desktop effort (fwiw I have a t460p nouveau/i915). I don't think a random user of such hw/laptop should have to create udev rules.
> 
> I suppose systemd-udev could learn to create stable paths with help
> from src/udev/udev-builtin-path_id.c. I will work on it. However,
> I have virt-manager code to look up GPU info/paths using libdrm,
> and it is unlikely that it will work with the udev rules. So I'll
> have to patch libdrm to support that too.

The generic goal is that Libvirt should be providing enough information
for apps to be able to configure the guest, without resorting to side
channels like libdrm. This is to ensure that apps can manage guests
with no loss of functionality even when connected to a remote libvirt.
e.g. libdrm isn't going to be able to enumerate remote GPUs for
virt-manager, so the necessary info needs to be exposed by libvirt
via its virNodeDevice APIs. We can already identify the PCI device
and that it's a GPU device, but I imagine we're not reporting any
data about DRI render paths associated with the GPUs we report.
So I think that's a gap we'd need to fill.
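
Hypothetically, filling that gap could mean the node device XML growing a
drm capability that ties a render node to its PCI parent, something like
the following illustrative sketch (not existing libvirt output; names and
the sysfs path are made up for the example):

```xml
<device>
  <name>drm_renderD128</name>
  <path>/sys/devices/pci0000:00/0000:06:12.5/drm/renderD128</path>
  <parent>pci_0000_06_12_5</parent>
  <capability type='drm'>
    <type>render</type>
  </capability>
</device>
```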

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

Re: [libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode
Posted by Marc-André Lureau 7 years, 1 month ago
Hi

----- Original Message -----
> On Mon, Feb 13, 2017 at 08:08:20AM -0500, Marc-André Lureau wrote:
> > Hi
> > 
> > ----- Original Message -----
> > > On Mon, Feb 13, 2017 at 07:19:04AM -0500, Marc-André Lureau wrote:
> > > > Hi
> > > > 
> > > > ----- Original Message -----
> > > > > On Mon, Feb 13, 2017 at 03:51:48PM +0400, marcandre.lureau@redhat.com
> > > > > wrote:
> > > > > > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> > > > > > 
> > > > > > I am working on a WIP series to add QEMU Spice/virgl rendernode
> > > > > > option.
> > > > > > Since rendernodes are not stable across reboots, I propose that
> > > > > > QEMU
> > > > > > accepts also a PCI address (other bus types may be added in the
> > > > > > future).
> > > > > 
> > > > > Hmm, can you elaborate on this aspect ?  It feels like a parallel
> > > > > to saying NIC device names are not stable, so we should configure
> > > > > guests using PCI addresses instead of 'eth0', etc but we stuck with
> > > > > using NIC names in libvirt on the basis that you can create udev
> > > > > rules to ensure stable naming ?
> > > > > 
> > > > > So is there not a case to be made that if you want stable render
> > > > > device names when multiple NICs are present, then you should use
> > > > > udev to ensure a given device always maps to the same PCI dev.
> > > > 
> > > > I thought it was simpler to use a PCI address (do you expect users
> > > > to create udev rules for the GPUs?)
> > > 
> > > Well most users will only have 1 GPU so surely this won't be a problem
> > > in the common case.  Is it possible to get some stable naming rules into
> > > udev upstream though, so all distros get stable names by default
> > 
> > Optimus is getting more and more mainstream, see recent Fedora desktop
> > effort (fwiw I have a t460p nouveau/i915). I don't think a random user of
> > such hw/laptop should have to create udev rules.
> > 
> > I suppose systemd-udev could learn to create stable paths with help
> > from src/udev/udev-builtin-path_id.c. I will work on it. However,
> > I have virt-manager code to look up GPU info/paths using libdrm,
> > and it is unlikely that it will work with the udev rules. So I'll
> > have to patch libdrm to support that too.
> 
> The generic goal is that Libvirt should be providing enough information
> for apps to be able to configure the guest, without resorting to side
> channels like libdrm. This is to ensure that apps can manage guests
> with no loss of functionality even when connected to a remote libvirt.
> e.g. libdrm isn't going to be able to enumerate remote GPUs for
> virt-manager, so the necessary info needs to be exposed by libvirt
> via its virNodeDevice APIs. We can already identify the PCI device
> and that it's a GPU device, but I imagine we're not reporting any
> data about DRI render paths associated with the GPUs we report.
> So I think that's a gap we'd need to fill.

Ah that makes sense, I'll probably have to drop my wip libdrm virt-manager code (although I'd like virt-manager to pick the current display GPU by default, since it is less likely to have issues, perhaps it will still be useful).

So I assume it's fine for libvirt to link with libdrm (it's not really a graphical library, it's more a system-level library). I'll investigate the virNodeDevice changes.

thanks
