[libvirt] [PATCH 02/12] qemu: command: spice: Pick the first available DRM render node

Erik Skultety posted 12 patches 7 years, 2 months ago
There is a newer version of this series
Posted by Erik Skultety 7 years, 2 months ago
Up until now, we formatted 'rendernode=' onto the QEMU command line only
if the user specified it in the XML, otherwise we let QEMU pick one for
us. This causes permission issues, because by default the
/dev/dri/renderD<num> permissions are as follows:

crw-rw----. 1 root video

There's no reason why it shouldn't be libvirt picking the DRM render
node instead of QEMU; that way (and because we're using namespaces by
default), we can safely relabel the device within the namespace.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_command.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 23a6661c10..f6bee10d5c 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -8235,6 +8235,8 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
     }
 
     if (graphics->data.spice.gl == VIR_TRISTATE_BOOL_YES) {
+        char **rendernode = &graphics->data.spice.rendernode;
+
         if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_GL)) {
             virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                            _("This QEMU doesn't support spice OpenGL"));
@@ -8246,17 +8248,20 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
         virBufferAsprintf(&opt, "gl=%s,",
                           virTristateSwitchTypeToString(graphics->data.spice.gl));
 
-        if (graphics->data.spice.rendernode) {
-            if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
-                virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                               _("This QEMU doesn't support spice OpenGL rendernode"));
-                goto error;
-            }
-
-            virBufferAddLit(&opt, "rendernode=");
-            virQEMUBuildBufferEscapeComma(&opt, graphics->data.spice.rendernode);
-            virBufferAddLit(&opt, ",");
+        if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("This QEMU doesn't support spice OpenGL rendernode"));
+            goto error;
         }
+
+        /* pick the first DRM render node if none was specified */
+        if (!*rendernode &&
+            !(*rendernode = virHostGetDRMRenderNode()))
+            goto error;
+
+        virBufferAddLit(&opt, "rendernode=");
+        virQEMUBuildBufferEscapeComma(&opt, graphics->data.spice.rendernode);
+        virBufferAddLit(&opt, ",");
     }
 
     if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SEAMLESS_MIGRATION)) {
-- 
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 02/12] qemu: command: spice: Pick the first available DRM render node
Posted by Ján Tomko 7 years, 2 months ago
[I see you're taking the Peter approach to prefixes]

On Thu, Nov 22, 2018 at 05:36:00PM +0100, Erik Skultety wrote:
>Up until now, we formatted 'rendernode=' onto the QEMU command line only
>if the user specified it in the XML, otherwise we let QEMU pick one for
>us. This causes permission issues, because by default the
>/dev/dri/renderD<num> permissions are as follows:
>
>crw-rw----. 1 root video
>
>There's no reason why it shouldn't be libvirt picking the DRM render
>node instead of QEMU; that way (and because we're using namespaces by
>default), we can safely relabel the device within the namespace.
>
>Signed-off-by: Erik Skultety <eskultet@redhat.com>
>---
> src/qemu/qemu_command.c | 25 +++++++++++++++----------
> 1 file changed, 15 insertions(+), 10 deletions(-)
>
>diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
>index 23a6661c10..f6bee10d5c 100644
>--- a/src/qemu/qemu_command.c
>+++ b/src/qemu/qemu_command.c
>@@ -8235,6 +8235,8 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
>     }
>
>     if (graphics->data.spice.gl == VIR_TRISTATE_BOOL_YES) {
>+        char **rendernode = &graphics->data.spice.rendernode;
>+
>         if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_GL)) {
>             virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
>                            _("This QEMU doesn't support spice OpenGL"));
>@@ -8246,17 +8248,20 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
>         virBufferAsprintf(&opt, "gl=%s,",
>                           virTristateSwitchTypeToString(graphics->data.spice.gl));
>
>-        if (graphics->data.spice.rendernode) {
>-            if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
>-                virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
>-                               _("This QEMU doesn't support spice OpenGL rendernode"));
>-                goto error;
>-            }
>-
>-            virBufferAddLit(&opt, "rendernode=");
>-            virQEMUBuildBufferEscapeComma(&opt, graphics->data.spice.rendernode);
>-            virBufferAddLit(&opt, ",");
>+        if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
>+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
>+                           _("This QEMU doesn't support spice OpenGL rendernode"));
>+            goto error;
>         }
>+
>+        /* pick the first DRM render node if none was specified */
>+        if (!*rendernode &&
>+            !(*rendernode = virHostGetDRMRenderNode()))

Ideally, qemuBuild*CommandLine would not be affected by host state.
qemuProcessPrepareDomain is the place for live XML modifications.

Jano