This series of patches enables support for the Venus VirtIO-GPU Vulkan
driver by adding some features required by the driver:

- CONTEXT_INIT
- HOSTMEM
- RESOURCE_UUID
- BLOB_RESOURCES

In addition to these features, Venus capset support was required
together with the implementation for Virgl blob resource commands.

Antonio Caggiano (7):
  virtio-gpu: Handle resource blob commands
  virtio-gpu: CONTEXT_INIT feature
  virtio-gpu: Unrealize
  virtio-gpu: Resource UUID
  virtio-gpu: Support Venus capset
  virtio-gpu: Initialize Venus
  virtio-gpu: Get EGL Display callback

Dr. David Alan Gilbert (1):
  virtio: Add shared memory capability

Gerd Hoffmann (1):
  virtio-gpu: hostmem

 hw/display/trace-events                     |   1 +
 hw/display/virtio-gpu-base.c                |   7 +-
 hw/display/virtio-gpu-pci.c                 |  15 ++
 hw/display/virtio-gpu-virgl.c               | 230 +++++++++++++++++++-
 hw/display/virtio-gpu.c                     |  67 +++++-
 hw/display/virtio-vga.c                     |  33 ++-
 hw/virtio/virtio-pci.c                      |  18 ++
 include/hw/virtio/virtio-gpu-bswap.h        |  18 ++
 include/hw/virtio/virtio-gpu.h              |  21 ++
 include/hw/virtio/virtio-pci.h              |   4 +
 include/standard-headers/linux/virtio_gpu.h |   2 +
 meson.build                                 |   9 +
 12 files changed, 403 insertions(+), 22 deletions(-)

--
2.34.1
Antonio Caggiano <antonio.caggiano@collabora.com> writes:

> This series of patches enables support for the Venus VirtIO-GPU Vulkan
> driver by adding some features required by the driver:
>
> - CONTEXT_INIT
> - HOSTMEM
> - RESOURCE_UUID
> - BLOB_RESOURCES
>
> In addition to these features, Venus capset support was required
> together with the implementation for Virgl blob resource commands.

I managed to apply to current master but I needed a bunch of patches to
get it to compile with my old virgl:

--8<---------------cut here---------------start------------->8---
modified   hw/display/virtio-gpu-virgl.c
@@ -744,10 +744,12 @@ static int virgl_make_context_current(void *opaque, int scanout_idx,
                                       qctx);
 }

+#if VIRGL_RENDERER_CALLBACKS_VERSION >= 4
 static void *virgl_get_egl_display(void *opaque)
 {
     return eglGetCurrentDisplay();
 }
+#endif

 static struct virgl_renderer_callbacks virtio_gpu_3d_cbs = {
     .version = 4,
@@ -755,7 +757,9 @@ static struct virgl_renderer_callbacks virtio_gpu_3d_cbs = {
     .create_gl_context = virgl_create_context,
     .destroy_gl_context = virgl_destroy_context,
     .make_current = virgl_make_context_current,
+#if VIRGL_RENDERER_CALLBACKS_VERSION >= 4
     .get_egl_display = virgl_get_egl_display,
+#endif
 };

 static void virtio_gpu_print_stats(void *opaque)
@@ -813,7 +817,7 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
 {
     int ret;

-    ret = virgl_renderer_init(g, VIRGL_RENDERER_VENUS, &virtio_gpu_3d_cbs);
+    ret = virgl_renderer_init(g, 0 /* VIRGL_RENDERER_VENUS */, &virtio_gpu_3d_cbs);
     if (ret != 0) {
         error_report("virgl could not be initialized: %d", ret);
         return ret;

modified   hw/display/virtio-gpu.c
@@ -873,9 +873,12 @@ void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
 static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
                                        struct virtio_gpu_simple_resource *res)
 {
+
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
     if (res->mapped) {
         virtio_gpu_virgl_resource_unmap(g, res);
     }
+#endif

     virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
     res->iov = NULL;
--8<---------------cut 
here---------------end--------------->8---

However when I run it with:

gdb --args ./qemu-system-aarch64 \
  -cpu max,pauth-impdef=on \
  -machine type=virt,virtualization=on,gic-version=3 \
  -serial mon:stdio \
  -netdev user,id=unet,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=unet,id=virt-net,disable-legacy=on \
  -device virtio-scsi-pci,id=virt-scsi,disable-legacy=on \
  -blockdev driver=raw,node-name=hd,discard=unmap,file.driver=host_device,file.filename=/dev/zen-disk/debian-bullseye-arm64 \
  -device scsi-hd,drive=hd,id=virt-scsi-hd \
  -kernel $HOME/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image \
  -append "root=/dev/sda2" \
  -smp 4 -m 4096 \
  -object memory-backend-memfd,id=mem,size=4G,share=on \
  -numa node,memdev=mem \
  -device qemu-xhci \
  -device usb-tablet \
  -device usb-kbd -global virtio-mmio.force-legacy=false \
  -display gtk,gl=on -device virtio-gpu-pci

something must be broken because it asserts:

qemu-system-aarch64: ../../hw/core/qdev.c:282: qdev_realize: Assertion `!dev->realized && !dev->parent_bus' failed.

Thread 1 "qemu-system-aar" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff5309537 in __GI_abort () at abort.c:79
#2  0x00007ffff530940f in __assert_fail_base (fmt=0x7ffff54816a8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5555562da640 "!dev->realized && !dev->parent_bus", file=0x5555562da6a7 "../../hw/core/qdev.c", line=282, function=<optimized out>) at assert.c:92
#3  0x00007ffff5318662 in __GI___assert_fail (assertion=assertion@entry=0x5555562da640 "!dev->realized && !dev->parent_bus", file=file@entry=0x5555562da6a7 "../../hw/core/qdev.c", line=line@entry=282, function=function@entry=0x5555562da868 <__PRETTY_FUNCTION__.14> "qdev_realize") at assert.c:101
#4  0x0000555555f64b6f in qdev_realize (dev=dev@entry=0x555558251370, bus=<optimized out>, errp=errp@entry=0x7fffffffd670) at ../../hw/core/qdev.c:282
#5  0x0000555555bbecaa in virtio_gpu_pci_base_realize (vpci_dev=0x555558248fa0, errp=0x7fffffffd670) at ../../hw/display/virtio-gpu-pci.c:52
#6  0x0000555555a6048d in pci_qdev_realize (qdev=0x555558248fa0, errp=<optimized out>) at ../../hw/pci/pci.c:2043
#7  0x0000555555f6416e in device_set_realized (obj=<optimized out>, value=<optimized out>, errp=0x7fffffffd880) at ../../hw/core/qdev.c:510
#8  0x0000555555f67ea6 in property_set_bool (obj=0x555558248fa0, v=<optimized out>, name=<optimized out>, opaque=0x555556c35ab0, errp=0x7fffffffd880) at ../../qom/object.c:2285
#9  0x0000555555f6aee4 in object_property_set (obj=obj@entry=0x555558248fa0, name=name@entry=0x555556231289 "realized", v=v@entry=0x5555582545a0, errp=errp@entry=0x7fffffffd880) at ../../qom/object.c:1420
#10 0x0000555555f6e290 in object_property_set_qobject (obj=obj@entry=0x555558248fa0, name=name@entry=0x555556231289 "realized", value=value@entry=0x555558253390, errp=errp@entry=0x7fffffffd880) at ../../qom/qom-qobject.c:28
#11 0x0000555555f6b505 in object_property_set_bool (obj=0x555558248fa0, name=name@entry=0x555556231289 "realized", value=value@entry=true, errp=errp@entry=0x7fffffffd880) at ../../qom/object.c:1489
#12 0x0000555555f64aee in qdev_realize (dev=<optimized out>, bus=bus@entry=0x555557696f70, errp=errp@entry=0x7fffffffd880) at ../../hw/core/qdev.c:292
#13 0x0000555555b36d26 in qdev_device_add_from_qdict (opts=opts@entry=0x555557d52d40, from_json=from_json@entry=false, errp=0x7fffffffd880, errp@entry=0x555556b02790 <error_fatal>) at ../../softmmu/qdev-monitor.c:714
#14 0x0000555555b36e42 in qdev_device_add (opts=0x555556c31d20, errp=errp@entry=0x555556b02790 <error_fatal>) at ../../softmmu/qdev-monitor.c:733
#15 0x0000555555b38c4f in device_init_func (opaque=<optimized out>, opts=<optimized out>, errp=0x555556b02790 <error_fatal>) at ../../softmmu/vl.c:1143
#16 0x00005555560e6382 in qemu_opts_foreach (list=<optimized out>, func=func@entry=0x555555b38c40 <device_init_func>, opaque=opaque@entry=0x0, errp=0x555556b02790 <error_fatal>) at ../../util/qemu-option.c:1135
#17 0x0000555555b3b4ae in qemu_create_cli_devices () at ../../softmmu/vl.c:2539
#18 qmp_x_exit_preconfig (errp=<optimized out>) at ../../softmmu/vl.c:2607
#19 0x0000555555b3ee5d in qmp_x_exit_preconfig (errp=<optimized out>) at ../../softmmu/vl.c:2601
#20 qemu_init (argc=<optimized out>, argv=<optimized out>) at ../../softmmu/vl.c:3613
#21 0x00005555558b3fa9 in main (argc=<optimized out>, argv=<optimized out>) at
(gdb) p dev
$1 = (DeviceState *) 0x555558251370
(gdb) p *$
$2 = {parent_obj = {class = 0x555556e36b10, free = 0x0, properties = 0x555558204c60, ref = 2, parent = 0x555558248fa0}, id = 0x0, canonical_path = 0x0, realized = false, pending_deleted_event = false, pending_deleted_expires_ms = 0, opts = 0x0, hotplugged = 0, allow_unplug_during_migration = false, parent_bus = 0x5555582512e0, gpios = {lh_first = 0x0}, clocks = {lh_first = 0x0}, child_bus = {lh_first = 0x0}, num_child_bus = 0, instance_id_alias = -1, alias_required_for_version = 0, reset = {count = 0, hold_phase_pending = false, exit_phase_in_progress = false}, unplug_blockers = 0x0}
(gdb) p dev->realized
$3 = false
(gdb) p dev->parent_bus
$4 = (BusState *) 0x5555582512e0
(gdb) p *$
$5 = {obj = {class = 0x555556e192e0, free = 0x0, properties = 0x555558204aa0 = {[0x555558259d90 "hotplug-handler"] = 0x555558259ec0, [0x55555825a1e0 "child[0]"] = 0x55555825a180, [0x555558259d70 "realized"] = 0x555558259fe0}, ref = 2, parent = 0x555558248fa0}, parent = 0x555558248fa0, name = 0x55555825a040 "virtio-bus", hotplug_handler = 0x0, max_index = 1, realized = false, full = false, num_children = 1, children = {tqh_first = 0x55555825a120, tqh_circ = {tql_next = 0x55555825a120, tql_prev = 0x55555825a140}}, sibling = {le_next = 0x0, le_prev = 0x555558249010}, reset = {count = 0, hold_phase_pending = false, exit_phase_in_progress = false}}
(gdb)

>
> Antonio Caggiano (7):
>   virtio-gpu: Handle resource blob commands
>   virtio-gpu: CONTEXT_INIT feature
>   virtio-gpu: Unrealize
>   virtio-gpu: Resource UUID
>   virtio-gpu: Support Venus capset
>   virtio-gpu: Initialize Venus
>   virtio-gpu: Get EGL Display callback
>
> Dr. David Alan Gilbert (1):
>   virtio: Add shared memory capability
>
> Gerd Hoffmann (1):
>   virtio-gpu: hostmem
>
>  hw/display/trace-events                     |   1 +
>  hw/display/virtio-gpu-base.c                |   7 +-
>  hw/display/virtio-gpu-pci.c                 |  15 ++
>  hw/display/virtio-gpu-virgl.c               | 230 +++++++++++++++++++-
>  hw/display/virtio-gpu.c                     |  67 +++++-
>  hw/display/virtio-vga.c                     |  33 ++-
>  hw/virtio/virtio-pci.c                      |  18 ++
>  include/hw/virtio/virtio-gpu-bswap.h        |  18 ++
>  include/hw/virtio/virtio-gpu.h              |  21 ++
>  include/hw/virtio/virtio-pci.h              |   4 +
>  include/standard-headers/linux/virtio_gpu.h |   2 +
>  meson.build                                 |   9 +
>  12 files changed, 403 insertions(+), 22 deletions(-)

--
Alex Bennée
Virtualisation Tech Lead @ Linaro
Hello,

On 1/30/23 20:00, Alex Bennée wrote:
>
> Antonio Caggiano <antonio.caggiano@collabora.com> writes:
>
>> This series of patches enables support for the Venus VirtIO-GPU Vulkan
>> driver by adding some features required by the driver:
>>
>> - CONTEXT_INIT
>> - HOSTMEM
>> - RESOURCE_UUID
>> - BLOB_RESOURCES
>>
>> In addition to these features, Venus capset support was required
>> together with the implementation for Virgl blob resource commands.
>
> I managed to apply to current master but I needed a bunch of patches to
> get it to compile with my old virgl:

Thank you for reviewing and testing the patches! Antonio isn't working
on Venus anymore; I'm going to continue this effort. Last year we
stabilized some of the virglrenderer Venus APIs. This year Venus may
transition to supporting per-context fences only and require
initializing a render server, which will result in more changes to
QEMU. I'm going to wait a bit for Venus to settle down and then make a
v4.

In the end we will either need to add more #ifdefs if we want to keep
supporting older virglrenderer versions in QEMU, or bump the minimum
required virglrenderer version.

--
Best regards,
Dmitry
On Tue, Jan 31, 2023 at 3:15 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
>
> Hello,
>
> On 1/30/23 20:00, Alex Bennée wrote:
> >
> > Antonio Caggiano <antonio.caggiano@collabora.com> writes:
> >
[...]
> > I managed to apply to current master but I needed a bunch of patches to
> > get it to compile with my old virgl:
>
> Thank you for reviewing and testing the patches! Antonio isn't working
> on Venus anymore; I'm going to continue this effort. Last year we
> stabilized some of the virglrenderer Venus APIs. This year Venus may
> transition to supporting per-context fences only and require
> initializing a render server, which will result in more changes to
> QEMU. I'm going to wait a bit for Venus to settle down and then make
> a v4.
>
> In the end we will either need to add more #ifdefs if we want to keep
> supporting older virglrenderer versions in QEMU, or bump the minimum
> required virglrenderer version.

Hi Dmitry,

Thanks for working on this, it's great to see QEMU graphics moving
forward. I noticed a few things from your patchset:

1) Older versions of virglrenderer -- supported or not?

As you alluded to, there have been significant changes to virglrenderer
since the last QEMU graphics update. For example, the asynchronous
callback introduces an entirely different and incompatible way to
signal fence completion.

Notionally, QEMU must support older versions of virglrenderer, though
in practice I'm not sure how much that is true. If we want to keep up
the notion that older versions must be supported, you'll need:

a) virtio-gpu-virgl.c
b) virtio-gpu-virgl2.c (or an equivalent)

Similarly for the vhost-user paths (if you want to support that). If
older versions of virglrenderer don't need to be supported, then that
would simplify the amount of additional paths/#ifdefs.

2) Additional context type: gfxstream [i]?

One of the major motivations for adding context types in the virtio-gpu
spec was supporting gfxstream. gfxstream is used in the Android Studio
emulator (a variant of QEMU) [ii], among other places. That would move
the Android emulator closer to the goal of using upstream QEMU for
everything.

If (1) is resolved, I don't think it's actually too bad to add
gfxstream support. We just need an additional layer of dispatch between
virglrenderer and gfxstream (thus, virtio-gpu-virgl2.c would be renamed
virtio-gpu-context-types.c or something similar). The QEMU command line
will have to be modified to pass in the enabled context type
(--context={virgl, venus, gfxstream}). crosvm has been using the same
trick.

If (1) is resolved in v4, I would estimate adding gfxstream support
would at max take 1-2 months for a single engineer. I'm not saying
gfxstream need necessarily be a part of a v5 patch-stack, but given
this patch-stack has been around for 1 year plus, it certainly could
be. We can certainly design things in such a way that adding gfxstream
is easy subsequently.

The hardest part is actually package management (Debian) for gfxstream,
but those can be resolved.

I'm not sure exactly how QEMU accelerated graphics are utilized in
user-facing actual products currently, so not sure what the standard
is.

What do QEMU maintainers and users think about these issues,
particularly about the potential gfxstream addition in QEMU as a
context type? We are most interested in Android guests.

[i] https://android.googlesource.com/device/generic/vulkan-cereal/
[ii] https://developer.android.com/studio/run/emulator

>
> --
> Best regards,
> Dmitry
Hi Gurchetan

On Tue, Mar 7, 2023 at 2:41 AM Gurchetan Singh
<gurchetansingh@chromium.org> wrote:
>
> On Tue, Jan 31, 2023 at 3:15 PM Dmitry Osipenko
> <dmitry.osipenko@collabora.com> wrote:
> >
[...]
> > In the end we will either need to add more #ifdefs if we want to
> > keep supporting older virglrenderer versions in QEMU, or bump the
> > minimum required virglrenderer version.
>
> Hi Dmitry,
>
> Thanks for working on this, it's great to see QEMU graphics moving
> forward. I noticed a few things from your patchset:
>
> 1) Older versions of virglrenderer -- supported or not?
>
> As you alluded to, there have been significant changes to
> virglrenderer since the last QEMU graphics update. For example, the
> asynchronous callback introduces an entirely different and
> incompatible way to signal fence completion.
>
> Notionally, QEMU must support older versions of virglrenderer, though
> in practice I'm not sure how much that is true. If we want to keep up
> the notion that older versions must be supported, you'll need:
>
> a) virtio-gpu-virgl.c
> b) virtio-gpu-virgl2.c (or an equivalent)
>
> Similarly for the vhost-user paths (if you want to support that). If
> older versions of virglrenderer don't need to be supported, then that
> would simplify the amount of additional paths/#ifdefs.

We should support old versions of virgl (as described in
https://www.qemu.org/docs/master/about/build-platforms.html#linux-os-macos-freebsd-netbsd-openbsd).

Whether a new virtio-gpu-virgl2.c (or equivalent) is necessary, we
can't really tell without seeing the changes involved.

> 2) Additional context type: gfxstream [i]?
>
> One of the major motivations for adding context types in the
> virtio-gpu spec was supporting gfxstream. gfxstream is used in the
> Android Studio emulator (a variant of QEMU) [ii], among other places.
> That would move the Android emulator closer to the goal of using
> upstream QEMU for everything.

What is the advantage of using gfxstream over virgl? or zink+venus?

Only AOSP can run with virgl perhaps? I am not familiar with Android
development... I guess it doesn't make use of Mesa, and thus no virgl
at all?

> If (1) is resolved, I don't think it's actually too bad to add
> gfxstream support. We just need an additional layer of dispatch
> between virglrenderer and gfxstream (thus, virtio-gpu-virgl2.c would
> be renamed virtio-gpu-context-types.c or something similar). The QEMU
> command line will have to be modified to pass in the enabled context
> type (--context={virgl, venus, gfxstream}). crosvm has been using the
> same trick.
>
> If (1) is resolved in v4, I would estimate adding gfxstream support
> would at max take 1-2 months for a single engineer. I'm not saying
> gfxstream need necessarily be a part of a v5 patch-stack, but given
> this patch-stack has been around for 1 year plus, it certainly could
> be. We can certainly design things in such a way that adding
> gfxstream is easy subsequently.
>
> The hardest part is actually package management (Debian) for
> gfxstream, but those can be resolved.

It looks like gfxstream is actually offering an API similar to
virglrenderer (with "pipe_" prefix). I suppose the guest needs to be
configured in a special way then (how? without Mesa?).

> I'm not sure exactly how QEMU accelerated graphics are utilized in
> user-facing actual products currently, so not sure what the standard
> is.
>
> What do QEMU maintainers and users think about these issues,
> particularly about the potential gfxstream addition in QEMU as a
> context type? We are most interested in Android guests.

It would be great if the Android emulator was more aligned with
upstream QEMU development!

thanks

--
Marc-André Lureau
On Mon, Mar 13, 2023 at 5:58 AM Marc-André Lureau
<marcandre.lureau@gmail.com> wrote:
>
> Hi Gurchetan
>
[...]
> > Similarly for the vhost-user paths (if you want to support that). If
> > older versions of virglrenderer don't need to be supported, then that
> > would simplify the amount of additional paths/#ifdefs.
>
> We should support old versions of virgl (as described in
> https://www.qemu.org/docs/master/about/build-platforms.html#linux-os-macos-freebsd-netbsd-openbsd).
>
> Whether a new virtio-gpu-virgl2.c (or equivalent) is necessary, we
> can't really tell without seeing the changes involved.

Ack. Something to keep in mind as Dmitry refactors.

> > 2) Additional context type: gfxstream [i]?
> >
> > One of the major motivations for adding context types in the
> > virtio-gpu spec was supporting gfxstream. gfxstream is used in the
> > Android Studio emulator (a variant of QEMU) [ii], among other places.
> > That would move the Android emulator closer to the goal of using
> > upstream QEMU for everything.
>
> What is the advantage of using gfxstream over virgl? or zink+venus?

History/backstory:

gfxstream development has its roots in the development of the Android
Emulator (circa 2010). In those days, both DRM and Android were
relatively new and the communities didn't know much about each other. A
method was devised to auto-generate GLES calls (that's all Android
needed) and stream them over an interface very similar to pipe(..).
Host-generated IDs were used to track shareable buffers. That same
method used to auto-generate GLES was expanded to Vulkan, and support
for coherent memory was added.

In 2018 the Android Emulator was the first to ship CTS-compliant
virtualized Vulkan via downstream kernel interfaces, before work on
venus began. As virtio-gpu continued to mature, gfxstream was actually
the first to ship both blob resources [1] and context types [2] in
production via crosvm to form a completely upstreamable solution (I
consider AOSP to be an "upstream" as well).

[1] https://patchwork.kernel.org/project/dri-devel/cover/20200814024000.2485-1-gurchetansingh@chromium.org/
[2] https://lists.oasis-open.org/archives/virtio-dev/202108/msg00141.html

With this history out of the way, here are some advantages of gfxstream
GLES over virgl:

- gfxstream GLES actually has far fewer rendering artifacts than virgl
  since it's auto-generated and not hand-written. Using a Gallium
  command stream is lossy (partly since the GLES spec is ambiguous and
  drivers are buggy), and we always had better dEQP runs on gfxstream
  GLES than on virgl (especially on closed-source drivers).

- Better memory management: virgl makes heavy use of
  RESOURCE_CREATE_3D, which creates shadow buffers for every GL
  texture/buffer. gfxstream just uses a single guest memory buffer per
  DRM instance for buffer/texture upload. For example, gfxstream
  doesn't need the virtio-gpu shrinker as much [3] since it doesn't use
  as much guest memory. I know there have been recent fixes for this in
  virgl, but I'm talking from a design POV.

- Performance: In 2020, a vendor ran the GPU emulation stress test
  comparing virgl and gfxstream GLES. Here are some results:

    CVD: drm_virgl - 7.01 fps
    CVD: gfxstream - 43.68 fps

  I've seen similarly dramatic results with gfxbench/3D Mark on some
  automotive platforms.

- Multi-threaded by design: gfxstream GLES is multi-threaded by design.
  Each guest GL thread gets its own host thread to decode commands.
  virgl is single-threaded (before the asynchronous callback, which
  hasn't landed in QEMU yet).

- Cross-platform: Windows, MacOS, and Linux support (though with
  downstream QEMU patches unfortunately). virgl is more a Linux thing.

- Snapshots: Not possible with virgl. We don't intend to upstream live
  migration snapshot support in the initial CL, but that's something to
  note that users like.

gfxstream is the "native" solution for Android and is thus better
optimized, just like virgl is the native solution for Linux guests.

Re: Zink/ANGLE/venus versus ANGLE/gfxstream VK

venus in many ways has similar design characteristics as gfxstream VK
(auto-generated, multi-threaded). gfxstream VK has better
cross-platform support, shipping via the Android emulator and Google
Play Games [4] on PC. venus is designed with open-source Linux
platforms in mind, with Chromebook gaming as the initial use case [5].
That leads to different design decisions, mostly centered around
resource sharing/state-tracking. Snapshots are also a goal for
gfxstream VK, though not ready yet.

Both venus and gfxstream are Google-sponsored. There were meetings
between Android and ChromeOS bigwigs about gfxstream VK/venus in 2019,
and the outcome seemed to be "we'll share work where it makes sense,
but there might not be a one-size-fits-all solution".

Layering which passes CTS is expected to take quite a while, especially
for a cross-platform target such as the emulator. It would be great to
have gfxstream GLES support alone in the interim.

[3] https://lore.kernel.org/lkml/20230305221011.1404672-1-dmitry.osipenko@collabora.com/
[4] https://play.google.com/googleplaygames
[5] https://www.xda-developers.com/how-to-run-steam-chromebook/

> Only AOSP can run with virgl perhaps? I am not familiar with Android
> development... I guess it doesn't make use of Mesa, and thus no virgl
> at all?

Some AOSP targets (Cuttlefish) can use virgl along with gfxstream, just
for testing's sake. It's not hard to support both via crosvm, so we do
it.

https://source.android.com/docs/setup/create/cuttlefish-ref-gpu

The Android Emulator (the most relevant use case here) does ship
gfxstream when a developer uses Android Studio though, and plans to do
so in the future.

> > If (1) is resolved, I don't think it's actually too bad to add
> > gfxstream support. We just need an additional layer of dispatch
> > between virglrenderer and gfxstream (thus, virtio-gpu-virgl2.c would
> > be renamed virtio-gpu-context-types.c or something similar). The QEMU
> > command line will have to be modified to pass in the enabled context
> > type (--context={virgl, venus, gfxstream}). crosvm has been using the
> > same trick.
> >
> > If (1) is resolved in v4, I would estimate adding gfxstream support
> > would at max take 1-2 months for a single engineer. I'm not saying
> > gfxstream need necessarily be a part of a v5 patch-stack, but given
> > this patch-stack has been around for 1 year plus, it certainly could
> > be. We can certainly design things in such a way that adding
> > gfxstream is easy subsequently.
> >
> > The hardest part is actually package management (Debian) for
> > gfxstream, but those can be resolved.
>
> It looks like gfxstream is actually offering an API similar to
> virglrenderer (with "pipe_" prefix).

For gfxstream, my ideal solution would not use that "pipe_" API
directly from QEMU (though vulkan-cereal will be packaged properly).
Instead, I intend to package the "rutabaga_gfx_ffi.h" proxy library
over gfxstream [6]:

https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/main/rutabaga_gfx/ffi/src/include/rutabaga_gfx_ffi.h

The advantage with this approach is one gets Wayland passthrough [7]
with Linux guests, which is written in Rust, along with gfxstream. The
main issues are around Debian Rust packaging.

Rough sketch, here's what I think we might need:

a) virtio-gpu-virgl-legacy.c for older versions of virglrenderer
b) virtio-gpu-virgl2.c
c) virtio-gpu-rutabaga.c or virtio-gpu-gfxstream.c (depending on Rust
   packaging investigations)

Though Wayland passthrough is a "nice to have", upstreaming gfxstream
for the Android Emulator is the most important product goal. So if Rust
Debian packaging becomes too onerous (virtio-gpu-rutabaga.c), we can
backtrack to a simpler solution (virtio-gpu-gfxstream.c).

[6] it can also proxy virglrenderer calls too, but I'll leave that
decision to virglrenderer maintainers
[7] try out the feature here: https://crosvm.dev/book/devices/wayland.html

> I suppose the guest needs to be
> configured in a special way then (how? without Mesa?).

For AOSP, androidboot.hardware.vulkan and androidboot.hardware.egl
allow toggling of GLES and Vulkan impls. QEMU won't have to do anything
special given the way the launchers are designed (there's an equivalent
of a "virtmanager").

There needs to be logic around context selection for Linux guests. QEMU
needs a "--ctx_type={virgl, venus, drm, gfxstream}" argument. See
crosvm for an example:

https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/main/rutabaga_gfx/src/rutabaga_core.rs#910

This argument is important for the upcoming Linux "DRM native" context
types [8] as well.

[8] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658

> > I'm not sure exactly how QEMU accelerated graphics are utilized in
> > user-facing actual products currently, so not sure what the standard
> > is.
> >
> > What do QEMU maintainers and users think about these issues,
> > particularly about the potential gfxstream addition in QEMU as a
> > context type? We are most interested in Android guests.
>
> It would be great if the Android emulator was more aligned with
> upstream QEMU development!

Awesome! I envisage the initial gfxstream integration as just a first
step. With the graphics solution upstreamed, subsequent MacOS/Windows
specific patches will start to make more sense.

> thanks
>
> --
> Marc-André Lureau
On Mon, Mar 13, 2023 at 11:44 AM Gurchetan Singh <gurchetansingh@chromium.org> wrote:
> On Mon, Mar 13, 2023 at 5:58 AM Marc-André Lureau <marcandre.lureau@gmail.com> wrote:
> >
> > Hi Gurchetan
> >
> > On Tue, Mar 7, 2023 at 2:41 AM Gurchetan Singh <gurchetansingh@chromium.org> wrote:
> > >
> > > On Tue, Jan 31, 2023 at 3:15 PM Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> > > >
> > > > Hello,
> > > >
> > > > On 1/30/23 20:00, Alex Bennée wrote:
> > > > >
> > > > > Antonio Caggiano <antonio.caggiano@collabora.com> writes:
> > > > >
> > > > >> This series of patches enables support for the Venus VirtIO-GPU Vulkan driver by adding some features required by the driver:
> > > > >>
> > > > >> - CONTEXT_INIT
> > > > >> - HOSTMEM
> > > > >> - RESOURCE_UUID
> > > > >> - BLOB_RESOURCES
> > > > >>
> > > > >> In addition to these features, Venus capset support was required together with the implementation for Virgl blob resource commands.
> > > > >
> > > > > I managed to apply to current master but I needed a bunch of patches to get it to compile with my old virgl:
> > > >
> > > > Thank you for reviewing and testing the patches! Antonio isn't working on Venus anymore; I'm going to continue this effort. Last year we stabilized some of the virglrenderer Venus APIs; this year Venus may transition to supporting per-context fences only and require initializing a render server, which will result in more changes to QEMU. I'm going to wait a bit for Venus to settle down and then make a v4.
> > > >
> > > > In the end we will either need to add more #ifdefs if we want to keep supporting older virglrenderer versions in QEMU, or bump the minimum required virglrenderer version.
> > >
> > > Hi Dmitry,
> > >
> > > Thanks for working on this, it's great to see QEMU graphics moving forward.
> > > I noticed a few things from your patchset:
> > >
> > > 1) Older versions of virglrenderer -- supported or not?
> > >
> > > As you alluded to, there have been significant changes to virglrenderer since the last QEMU graphics update. For example, the asynchronous callback introduces an entirely different and incompatible way to signal fence completion.
> > >
> > > Notionally, QEMU must support older versions of virglrenderer, though in practice I'm not sure how much that is true. If we want to keep up the notion that older versions must be supported, you'll need:
> > >
> > > a) virtio-gpu-virgl.c
> > > b) virtio-gpu-virgl2.c (or an equivalent)
> > >
> > > Similarly for the vhost-user paths (if you want to support that). If older versions of virglrenderer don't need to be supported, then that would simplify the amount of additional paths/#ifdefs.
> >
> > We should support old versions of virgl (as described in https://www.qemu.org/docs/master/about/build-platforms.html#linux-os-macos-freebsd-netbsd-openbsd).
> >
> > Whether a new virtio-gpu-virgl2.c (or equivalent) is necessary, we can't really tell without seeing the changes involved.
>
> Ack. Something to keep in mind as Dmitry refactors.
>
> > > 2) Additional context type: gfxstream [i]?
> > >
> > > One of the major motivations for adding context types in the virtio-gpu spec was supporting gfxstream. gfxstream is used in the Android Studio emulator (a variant of QEMU) [ii], among other places. That would move the Android emulator closer to the goal of using upstream QEMU for everything.
> >
> > What is the advantage of using gfxstream over virgl? or zink+venus?
>
> History/backstory:
>
> gfxstream development has its roots in the development of the Android Emulator (circa 2010). In those days, both DRM and Android were relatively new and the communities didn't know much about each other.
> A method was devised to auto-generate GLES calls (that's all Android needed) and stream them over an interface very similar to pipe(..). Host-generated IDs were used to track shareable buffers.
>
> That same method used to auto-generate GLES was expanded to Vulkan, and support for coherent memory was added. In 2018 the Android Emulator was the first to ship CTS-compliant virtualized Vulkan via downstream kernel interfaces, before work on venus began.
>
> As virtio-gpu continued to mature, gfxstream was actually the first to ship both blob resources [1] and context types [2] in production via crosvm, to form a completely upstreamable solution (I consider AOSP to be an "upstream" as well).
>
> [1] https://patchwork.kernel.org/project/dri-devel/cover/20200814024000.2485-1-gurchetansingh@chromium.org/
> [2] https://lists.oasis-open.org/archives/virtio-dev/202108/msg00141.html
>
> With this history out of the way, here are some advantages of gfxstream GLES over virgl:
>
> - gfxstream GLES actually has far fewer rendering artifacts than virgl, since it's auto-generated and not hand-written. Using a Gallium command stream is lossy (partly since the GLES spec is ambiguous and drivers are buggy), and we always had better dEQP runs on gfxstream GLES than on virgl (especially on closed-source drivers).
>
> - Better memory management: virgl makes heavy use of RESOURCE_CREATE_3D, which creates shadow buffers for every GL texture/buffer. gfxstream just uses a single guest memory buffer per DRM instance for buffer/texture upload. For example, gfxstream doesn't need the virtio-gpu shrinker as much [3] since it doesn't use as much guest memory. I know there have been recent fixes for this in virgl, but I'm talking from a design POV.
>
> - Performance: In 2020, a vendor ran the GPU emulation stress test comparing virgl and gfxstream GLES.
> Here are some results:
>
> CVD: drm_virgl - 7.01 fps
> CVD: gfxstream - 43.68 fps
>
> I've seen similarly dramatic results with gfxbench/3D Mark on some automotive platforms.
>
> - Multi-threaded by design: gfxstream GLES is multi-threaded by design. Each guest GL thread gets its own host thread to decode commands. virgl is single-threaded (before the asynchronous callback, which hasn't landed in QEMU yet).
>
> - Cross-platform: Windows, macOS, and Linux support (though with downstream QEMU patches, unfortunately). virgl is more of a Linux thing.
>
> - Snapshots: Not possible with virgl. We don't intend to upstream live migration snapshot support in the initial CL, but that's something to note that users like.
>
> gfxstream is the "native" solution for Android and is thus better optimized, just like virgl is the native solution for Linux guests.
>
> Re: Zink/ANGLE/venus versus ANGLE/gfxstream VK
>
> venus in many ways has similar design characteristics to gfxstream VK (auto-generated, multi-threaded). gfxstream VK has better cross-platform support, shipping via the Android emulator and Google Play Games [4] on PC. venus is designed with open-source Linux platforms in mind, with Chromebook gaming as the initial use case [5].
>
> That leads to different design decisions, mostly centered around resource sharing/state-tracking. Snapshots are also a goal for gfxstream VK, though not ready yet.
>
> Both venus and gfxstream are Google-sponsored. There were meetings between Android and ChromeOS bigwigs about gfxstream VK/venus in 2019, and the outcome seemed to be "we'll share work where it makes sense, but there might not be a one-size-fits-all solution".
>
> Layering which passes CTS is expected to take quite a while, especially for a cross-platform target such as the emulator. It would be great to have gfxstream GLES support alone in the interim.
> [3] https://lore.kernel.org/lkml/20230305221011.1404672-1-dmitry.osipenko@collabora.com/
> [4] https://play.google.com/googleplaygames
> [5] https://www.xda-developers.com/how-to-run-steam-chromebook/
>
> > Only AOSP can run with virgl perhaps? I am not familiar with Android development.. I guess it doesn't make use of Mesa, and thus no virgl at all?
>
> Some AOSP targets (Cuttlefish) can use virgl along with gfxstream, just for testing's sake. It's not hard to support both via crosvm, so we do it.
>
> https://source.android.com/docs/setup/create/cuttlefish-ref-gpu
>
> The Android Emulator (the most relevant use case here) does ship gfxstream when a developer uses Android Studio, though, and plans to do so in the future.
>
> > > If (1) is resolved, I don't think it's actually too bad to add gfxstream support. We just need an additional layer of dispatch between virglrenderer and gfxstream (thus, virtio-gpu-virgl2.c would be renamed virtio-gpu-context-types.c or something similar). The QEMU command line will have to be modified to pass in the enabled context type (--context={virgl, venus, gfxstream}). crosvm has been using the same trick.
> > >
> > > If (1) is resolved in v4, I would estimate adding gfxstream support would take at most 1-2 months for a single engineer. I'm not saying gfxstream need necessarily be a part of a v5 patch-stack, but given this patch-stack has been around for 1 year plus, it certainly could be. We can certainly design things in such a way that adding gfxstream is easy subsequently.
> > >
> > > The hardest part is actually package management (Debian) for gfxstream, but that can be resolved.
> >
> > It looks like gfxstream is actually offering an API similar to virglrenderer (with "pipe_" prefix).
> For gfxstream, my ideal solution would not use that "pipe_" API directly from QEMU (though vulkan-cereal will be packaged properly). Instead, I intend to package the "rutabaga_gfx_ffi.h" proxy library over gfxstream [6]:
>
> https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/main/rutabaga_gfx/ffi/src/include/rutabaga_gfx_ffi.h
>
> The advantage with this approach is one gets Wayland passthrough [7] with Linux guests, which is written in Rust, along with gfxstream. The main issues are around Debian Rust packaging.
>
> Rough sketch, here's what I think we might need:
>
> a) virtio-gpu-virgl-legacy.c for older versions of virglrenderer
> b) virtio-gpu-virgl2.c
> c) virtio-gpu-rutabaga.c or virtio-gpu-gfxstream.c (depending on Rust packaging investigations)
>
> Though Wayland passthrough is a "nice to have", upstreaming gfxstream for the Android Emulator is the most important product goal. So if Rust Debian packaging becomes too onerous (virtio-gpu-rutabaga.c), we can backtrack to a simpler solution (virtio-gpu-gfxstream.c).
>
> [6] it can also proxy virglrenderer calls too, but I'll leave that decision to the virglrenderer maintainers
> [7] try out the feature here: https://crosvm.dev/book/devices/wayland.html
>
> > I suppose the guest needs to be configured in a special way then (how? without mesa?).
>
> For AOSP, androidboot.hardware.vulkan and androidboot.hardware.egl allow toggling of the GLES and Vulkan impls. QEMU won't have to do anything special given the way the launchers are designed (there's an equivalent of a "virtmanager").
>
> There needs to be logic around context selection for Linux guests. QEMU needs a "--ctx_type={virgl, venus, drm, gfxstream}" argument. See crosvm for an example:
>
> https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/main/rutabaga_gfx/src/rutabaga_core.rs#910
>
> This argument is important for the upcoming Linux "DRM native" context types [8] as well.
> [8] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658
>
> > > I'm not sure exactly how QEMU accelerated graphics are utilized in user-facing actual products currently, so not sure what the standard is.
> > >
> > > What do QEMU maintainers and users think about these issues, particularly about the potential gfxstream addition in QEMU as a context type? We are most interested in Android guests.
> >
> > It would be great if the Android emulator was more aligned with upstream QEMU development!
>
> Awesome! I envisage the initial gfxstream integration as just a first step. With the graphics solution upstreamed, subsequent macOS/Windows-specific patches will start to make more sense.

Okay, I think the next steps would actually be code, so you can see our vision. I have a few questions that will help with my RFC:

1) Packaging -- before or after?

gfxstream does not have a package in upstream Portage or Debian (though there are downstream implementations). Is it sufficient to have a versioned release (i.e., a Git tag) without the package before the change can be merged into QEMU?

Is packaging required before merging into QEMU?

2) Optional Rust dependencies

To achieve seamless Wayland windowing with the same implementation as crosvm, we'll need optional Rust dependencies. There actually has been interest in making Rust a non-optional dependency:

https://wiki.qemu.org/RustInQemu
https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg04589.html

I actually only want Rust as an optional dependency on Linux, Windows, and macOS -- where Rust support is quite good. Is there any problem with using a Rust library with a C API from QEMU?

3) Rust "Build-Depends" in Debian

This is mostly a question for the Debian packagers (CC: mjt@)

Any Rust package would likely depend on 10-30 additional packages (that's just the way Rust works), but they are all in Debian stable right now.
https://packages.debian.org/stable/rust/

I noticed when enabling virgl, there were complaints about a ton of UI libraries being pulled in.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=813658

That necessitated the creation of the `qemu-system-gui` package for people who don't need a UI. I want to make gfxstream a Suggested Package in qemu-system-gui, but that would potentially pull in the 10-30 additional Rust build dependencies I mentioned.

Would the 10-30 Rust build dependencies be problematic? I think QEMU already has hundreds right now.

Thanks!

> > thanks
> >
> > --
> > Marc-André Lureau
18.03.2023 00:40, Gurchetan Singh wrote:
[[big snip]]
> Okay, I think the next steps would actually be code, so you can see our vision. I have a few questions that will help with my RFC:
>
> 1) Packaging -- before or after?
>
> gfxstream does not have a package in upstream Portage or Debian (though there are downstream implementations). Is it sufficient to have a versioned release (i.e., a Git tag) without the package before the change can be merged into QEMU?
>
> Is packaging required before merging into QEMU?
>
> 2) Optional Rust dependencies
>
> To achieve seamless Wayland windowing with the same implementation as crosvm, we'll need optional Rust dependencies. There actually has been interest in making Rust a non-optional dependency:
>
> https://wiki.qemu.org/RustInQemu
> https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg04589.html
>
> I actually only want Rust as an optional dependency on Linux, Windows, and macOS -- where Rust support is quite good. Is there any problem with using a Rust library with a C API from QEMU?
>
> 3) Rust "Build-Depends" in Debian
>
> This is mostly a question for the Debian packagers (CC: mjt@)
>
> Any Rust package would likely depend on 10-30 additional packages (that's just the way Rust works), but they are all in Debian stable right now.
>
> https://packages.debian.org/stable/rust/
>
> I noticed when enabling virgl, there were complaints about a ton of UI libraries being pulled in.
>
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=813658
>
> That necessitated the creation of the `qemu-system-gui` package for people who don't need a UI. I want to make gfxstream a Suggested Package in qemu-system-gui, but that would potentially pull in the 10-30 additional Rust build dependencies I mentioned.
> Would the 10-30 Rust build dependencies be problematic? I think QEMU already has hundreds right now.

There's no reason to worry about *build*-time dependencies. There's also no big reason to worry about runtime dependencies, or about Suggests vs Recommends, etc. - this can all be adjusted when needed, packages can be split further, etc. - it is normal practice.

/mjt
On 3/13/23 15:58, Marc-André Lureau wrote:
...
>> 2) Additional context type: gfxstream [i]?
>>
>> One of the major motivations for adding context types in the virtio-gpu spec was supporting gfxstream. gfxstream is used in the Android Studio emulator (a variant of QEMU) [ii], among other places. That would move the Android emulator closer to the goal of using upstream QEMU for everything.
>
> What is the advantage of using gfxstream over virgl? or zink+venus?
>
> Only AOSP can run with virgl perhaps? I am not familiar with Android development.. I guess it doesn't make use of Mesa, and thus no virgl at all?

+1 I'm also very interested in getting an overview of gfxstream advantages over virgl and why the Android emulator can't move to use virgl+venus (shouldn't it just work out-of-the-box already?). Thanks!

--
Best regards,
Dmitry
If gfxstream is the Android-pipe-based transport, I think it's a legacy from before the switch to pure VirtIO for the new Cuttlefish models.

On Mon, 13 Mar 2023, 13:27 Dmitry Osipenko, <dmitry.osipenko@collabora.com> wrote:

> On 3/13/23 15:58, Marc-André Lureau wrote:
> ...
> >> 2) Additional context type: gfxstream [i]?
> >>
> >> One of the major motivations for adding context types in the virtio-gpu spec was supporting gfxstream. gfxstream is used in the Android Studio emulator (a variant of QEMU) [ii], among other places. That would move the Android emulator closer to the goal of using upstream QEMU for everything.
> >
> > What is the advantage of using gfxstream over virgl? or zink+venus?
> >
> > Only AOSP can run with virgl perhaps? I am not familiar with Android development.. I guess it doesn't make use of Mesa, and thus no virgl at all?
>
> +1 I'm also very interested in getting an overview of gfxstream advantages over virgl and why the Android emulator can't move to use virgl+venus (shouldn't it just work out-of-the-box already?). Thanks!
>
> --
> Best regards,
> Dmitry
On 3/13/23 17:51, Alex Bennée wrote:
> If gfxstream is the android pipe based transport I think it's a legacy from before the switch to pure VirtIO for the new Cuttlefish models.

Right, so older Android versions will only work using gfxstream. Good point, thanks.

--
Best regards,
Dmitry