[RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU

Jan Kiszka posted 3 patches 4 years, 5 months ago
git fetch https://github.com/patchew-project/qemu tags/patchew/cover.1573477032.git.jan.kiszka@siemens.com
There is a newer version of this series
[RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU
Posted by Jan Kiszka 4 years, 5 months ago
To get the ball rolling after my presentation of the topic at KVM Forum
[1] and many fruitful discussions around it, this is a first concrete
code series. As discussed, I'm starting with the IVSHMEM implementation
of a QEMU device and server. It's RFC because, besides specification and
implementation details, there will still be some decisions needed about
how to integrate the new version best into the existing code bases.

If you want to play with this, the basic setup of the shared memory
device is described in patches 1 and 3. The UIO driver and the
virtio-ivshmem prototype can be found at

    http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2

Accessing the device via UIO is trivial enough. If you want to use it
for virtio, the following is needed on the virtio console backend side,
in addition to the description in patch 3:

    modprobe uio_ivshmem
    echo "1af4 1110 1af4 1100 ffc003 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
    linux/tools/virtio/virtio-ivshmem-console /dev/uio0

And for virtio block:

    echo "1af4 1110 1af4 1100 ffc002 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
    linux/tools/virtio/virtio-ivshmem-block /dev/uio0 /path/to/disk.img
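
The six hexadecimal fields written to new_id follow the kernel's dynamic
PCI ID format: vendor, device, subvendor, subdevice, class, and class_mask.
Here 1af4:1110 is the ivshmem PCI ID, and the class values (ffc003 vs.
ffc002, fully matched via the ffffff mask) apparently distinguish the
advertised protocol. A small sketch that labels the fields of the console
ID string above:

```shell
# Field order of a PCI dynamic-ID string, as accepted by a driver's
# new_id sysfs attribute: vendor device subvendor subdevice class class_mask
id="1af4 1110 1af4 1100 ffc003 ffffff"
set -- $id
printf 'vendor=%s device=%s subvendor=%s subdevice=%s class=%s class_mask=%s\n' "$@"
```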

After that, you can start the QEMU frontend instance with the
virtio-ivshmem driver installed, which can then use the new /dev/hvc* or
/dev/vda* devices as usual.

Any feedback welcome!

Jan

PS: Let me know if I missed someone potentially interested in this topic
on CC - or if you would like to be dropped from the list.

PPS: The Jailhouse queues are currently out of sync with this one
regarding minor details, primarily the device ID. I will update them
once the general direction is clear.

[1] https://kvmforum2019.sched.com/event/TmxI

Jan Kiszka (3):
  hw/misc: Add implementation of ivshmem revision 2 device
  docs/specs: Add specification of ivshmem device revision 2
  contrib: Add server for ivshmem revision 2

 Makefile                                  |    3 +
 Makefile.objs                             |    1 +
 configure                                 |    1 +
 contrib/ivshmem2-server/Makefile.objs     |    1 +
 contrib/ivshmem2-server/ivshmem2-server.c |  462 ++++++++++++
 contrib/ivshmem2-server/ivshmem2-server.h |  158 +++++
 contrib/ivshmem2-server/main.c            |  313 +++++++++
 docs/specs/ivshmem-2-device-spec.md       |  333 +++++++++
 hw/misc/Makefile.objs                     |    2 +-
 hw/misc/ivshmem2.c                        | 1091 +++++++++++++++++++++++++++++
 include/hw/misc/ivshmem2.h                |   48 ++
 11 files changed, 2412 insertions(+), 1 deletion(-)
 create mode 100644 contrib/ivshmem2-server/Makefile.objs
 create mode 100644 contrib/ivshmem2-server/ivshmem2-server.c
 create mode 100644 contrib/ivshmem2-server/ivshmem2-server.h
 create mode 100644 contrib/ivshmem2-server/main.c
 create mode 100644 docs/specs/ivshmem-2-device-spec.md
 create mode 100644 hw/misc/ivshmem2.c
 create mode 100644 include/hw/misc/ivshmem2.h

-- 
2.16.4


Re: [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU
Posted by no-reply@patchew.org 4 years, 5 months ago
Patchew URL: https://patchew.org/QEMU/cover.1573477032.git.jan.kiszka@siemens.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Subject: [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU
Type: series
Message-id: cover.1573477032.git.jan.kiszka@siemens.com

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
45625de contrib: Add server for ivshmem revision 2
df18ce0 docs/specs: Add specification of ivshmem device revision 2
ff35318 hw/misc: Add implementation of ivshmem revision 2 device

=== OUTPUT BEGIN ===
1/3 Checking commit ff35318fdf84 (hw/misc: Add implementation of ivshmem revision 2 device)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#63: 
new file mode 100644

ERROR: return is not a function, parentheses are not required
#206: FILE: hw/misc/ivshmem2.c:139:
+    return (ivs->features & (1 << feature));

ERROR: memory barrier without comment
#250: FILE: hw/misc/ivshmem2.c:183:
+    smp_mb();

ERROR: braces {} are necessary for all arms of this statement
#625: FILE: hw/misc/ivshmem2.c:558:
+    if (msg->vector == 0)
[...]

WARNING: Block comments use a leading /* on a separate line
#775: FILE: hw/misc/ivshmem2.c:708:
+/* Select the MSI-X vectors used by device.

WARNING: Block comments use a trailing */ on a separate line
#777: FILE: hw/misc/ivshmem2.c:710:
+ * we just enable all vectors on init and after reset. */

total: 3 errors, 3 warnings, 1147 lines checked

Patch 1/3 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

2/3 Checking commit df18ce079161 (docs/specs: Add specification of ivshmem device revision 2)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#24: 
new file mode 100644

total: 0 errors, 1 warnings, 333 lines checked

Patch 2/3 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
3/3 Checking commit 45625def0d51 (contrib: Add server for ivshmem revision 2)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#77: 
new file mode 100644

WARNING: Block comments use a leading /* on a separate line
#174: FILE: contrib/ivshmem2-server/ivshmem2-server.c:86:
+/* free a peer when the server advertises a disconnection or when the

WARNING: Block comments use a trailing */ on a separate line
#175: FILE: contrib/ivshmem2-server/ivshmem2-server.c:87:
+ * server is freed */

ERROR: memory barrier without comment
#194: FILE: contrib/ivshmem2-server/ivshmem2-server.c:106:
+    smp_mb();

WARNING: Block comments use a leading /* on a separate line
#276: FILE: contrib/ivshmem2-server/ivshmem2-server.c:188:
+    /* XXX: this could use id allocation such as Linux IDA, or simply

WARNING: Block comments use a trailing */ on a separate line
#277: FILE: contrib/ivshmem2-server/ivshmem2-server.c:189:
+     * a free-list */

WARNING: Block comments use a leading /* on a separate line
#342: FILE: contrib/ivshmem2-server/ivshmem2-server.c:254:
+/* Try to ftruncate a file to next power of 2 of shmsize.

WARNING: Block comments use a trailing */ on a separate line
#346: FILE: contrib/ivshmem2-server/ivshmem2-server.c:258:
+ * shm_size value. */

WARNING: Block comments use a leading /* on a separate line
#619: FILE: contrib/ivshmem2-server/ivshmem2-server.h:63:
+    const char *shm_path;           /**< Path to the shared memory; path

WARNING: Block comments use * on subsequent lines
#620: FILE: contrib/ivshmem2-server/ivshmem2-server.h:64:
+    const char *shm_path;           /**< Path to the shared memory; path
+                                         corresponds to a POSIX shm name or a

WARNING: Block comments use a trailing */ on a separate line
#621: FILE: contrib/ivshmem2-server/ivshmem2-server.h:65:
+                                         hugetlbfs mount point. */

WARNING: Block comments use a leading /* on a separate line
#622: FILE: contrib/ivshmem2-server/ivshmem2-server.h:66:
+    bool use_shm_open;              /**< true to use shm_open, false for

WARNING: Block comments use * on subsequent lines
#623: FILE: contrib/ivshmem2-server/ivshmem2-server.h:67:
+    bool use_shm_open;              /**< true to use shm_open, false for
+                                         file-backed shared memory */

WARNING: Block comments use a trailing */ on a separate line
#623: FILE: contrib/ivshmem2-server/ivshmem2-server.h:67:
+                                         file-backed shared memory */

ERROR: spaces required around that '*' (ctx:VxV)
#742: FILE: contrib/ivshmem2-server/main.c:22:
+#define IVSHMEM_SERVER_DEFAULT_SHM_SIZE       (4*1024*1024)
                                                 ^

ERROR: spaces required around that '*' (ctx:VxV)
#742: FILE: contrib/ivshmem2-server/main.c:22:
+#define IVSHMEM_SERVER_DEFAULT_SHM_SIZE       (4*1024*1024)
                                                      ^

WARNING: Block comments use a leading /* on a separate line
#906: FILE: contrib/ivshmem2-server/main.c:186:
+/* wait for events on listening server unix socket and connected client

WARNING: Block comments use a trailing */ on a separate line
#907: FILE: contrib/ivshmem2-server/main.c:187:
+ * sockets */

WARNING: Block comments use a leading /* on a separate line
#977: FILE: contrib/ivshmem2-server/main.c:257:
+    /* Ignore SIGPIPE, see this link for more info:

WARNING: Block comments use a trailing */ on a separate line
#978: FILE: contrib/ivshmem2-server/main.c:258:
+     * http://www.mail-archive.com/libevent-users@monkey.org/msg01606.html */

total: 3 errors, 17 warnings, 963 lines checked

Patch 3/3 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/cover.1573477032.git.jan.kiszka@siemens.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
Re: [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU
Posted by Liang Yan 4 years, 5 months ago

On 11/11/19 7:57 AM, Jan Kiszka wrote:
> To get the ball rolling after my presentation of the topic at KVM Forum
> [1] and many fruitful discussions around it, this is a first concrete
> code series. As discussed, I'm starting with the IVSHMEM implementation
> of a QEMU device and server. It's RFC because, besides specification and
> implementation details, there will still be some decisions needed about
> how to integrate the new version best into the existing code bases.
> 
> If you want to play with this, the basic setup of the shared memory
> device is described in patch 1 and 3. UIO driver and also the
> virtio-ivshmem prototype can be found at
> 
>     http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2
> 
> Accessing the device via UIO is trivial enough. If you want to use it
> for virtio, this is additionally to the description in patch 3 needed on
> the virtio console backend side:
> 
>     modprobe uio_ivshmem
>     echo "1af4 1110 1af4 1100 ffc003 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
>     linux/tools/virtio/virtio-ivshmem-console /dev/uio0
> 
> And for virtio block:
> 
>     echo "1af4 1110 1af4 1100 ffc002 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
>     linux/tools/virtio/virtio-ivshmem-console /dev/uio0 /path/to/disk.img
> 
> After that, you can start the QEMU frontend instance with the
> virtio-ivshmem driver installed which can use the new /dev/hvc* or
> /dev/vda* as usual.
> 
> Any feedback welcome!

Hi Jan,

I have been playing with your code for the last few weeks, mostly studying
and testing it, of course. Really nice work. I have a few questions:

First, the QEMU part looks good. I tested between a couple of VMs, and the
device showed up correctly in all of them, but I had some problems when
trying to load the drivers. For example, with two VMs, vm1 and vm2, and the
ivshmem server started as you suggested: vm1 could load uio_ivshmem and
virtio_ivshmem correctly, while in vm2 uio_ivshmem loaded but "/dev/uio0"
did not show up, and virtio_ivshmem could not be loaded at all. This
persists even if I swap the load order of vm1 and vm2, and sometimes
resetting "virtio_ivshmem" crashes both vm1 and vm2. I am not sure whether
this is a bug or an "ivshmem mode" issue; I went through the ivshmem-server
code but did not find related information.

I started some code work recently, such as fixing code style issues and
some work based on the testing above. However, I know you are also
working on RFC v2, and since the server-client and client-client
protocols are not finalized yet either, things may change. I would much
appreciate it if you could fit me into your development schedule and
share some of your plans. :-) Maybe I could send some pull requests to
your GitHub repo?

I personally like this project a lot; there is a lot of potential and
many use cases for it, especially for devices like
ivshmem-net/ivshmem-block. Anyway, thanks for adding me to the list, and
I am looking forward to your reply.

Best,
Liang

> 
> Jan
> 
> PS: Let me know if I missed someone potentially interested in this topic
> on CC - or if you would like to be dropped from the list.
> 
> PPS: The Jailhouse queues are currently out of sync /wrt minor details
> of this one, primarily the device ID. Will update them when the general
> direction is clear.
> 
> [1] https://kvmforum2019.sched.com/event/TmxI
> 
> Jan Kiszka (3):
>   hw/misc: Add implementation of ivshmem revision 2 device
>   docs/specs: Add specification of ivshmem device revision 2
>   contrib: Add server for ivshmem revision 2
> 
>  Makefile                                  |    3 +
>  Makefile.objs                             |    1 +
>  configure                                 |    1 +
>  contrib/ivshmem2-server/Makefile.objs     |    1 +
>  contrib/ivshmem2-server/ivshmem2-server.c |  462 ++++++++++++
>  contrib/ivshmem2-server/ivshmem2-server.h |  158 +++++
>  contrib/ivshmem2-server/main.c            |  313 +++++++++
>  docs/specs/ivshmem-2-device-spec.md       |  333 +++++++++
>  hw/misc/Makefile.objs                     |    2 +-
>  hw/misc/ivshmem2.c                        | 1091 +++++++++++++++++++++++++++++
>  include/hw/misc/ivshmem2.h                |   48 ++
>  11 files changed, 2412 insertions(+), 1 deletion(-)
>  create mode 100644 contrib/ivshmem2-server/Makefile.objs
>  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.c
>  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.h
>  create mode 100644 contrib/ivshmem2-server/main.c
>  create mode 100644 docs/specs/ivshmem-2-device-spec.md
>  create mode 100644 hw/misc/ivshmem2.c
>  create mode 100644 include/hw/misc/ivshmem2.h
> 
Re: [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU
Posted by Jan Kiszka 4 years, 5 months ago
Hi Liang,

On 27.11.19 16:28, Liang Yan wrote:
> 
> 
> On 11/11/19 7:57 AM, Jan Kiszka wrote:
>> To get the ball rolling after my presentation of the topic at KVM Forum
>> [1] and many fruitful discussions around it, this is a first concrete
>> code series. As discussed, I'm starting with the IVSHMEM implementation
>> of a QEMU device and server. It's RFC because, besides specification and
>> implementation details, there will still be some decisions needed about
>> how to integrate the new version best into the existing code bases.
>>
>> If you want to play with this, the basic setup of the shared memory
>> device is described in patch 1 and 3. UIO driver and also the
>> virtio-ivshmem prototype can be found at
>>
>>      http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2
>>
>> Accessing the device via UIO is trivial enough. If you want to use it
>> for virtio, this is additionally to the description in patch 3 needed on
>> the virtio console backend side:
>>
>>      modprobe uio_ivshmem
>>      echo "1af4 1110 1af4 1100 ffc003 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
>>      linux/tools/virtio/virtio-ivshmem-console /dev/uio0
>>
>> And for virtio block:
>>
>>      echo "1af4 1110 1af4 1100 ffc002 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
>>      linux/tools/virtio/virtio-ivshmem-console /dev/uio0 /path/to/disk.img
>>
>> After that, you can start the QEMU frontend instance with the
>> virtio-ivshmem driver installed which can use the new /dev/hvc* or
>> /dev/vda* as usual.
>>
>> Any feedback welcome!
> 
> Hi, Jan,
> 
> I have been playing your code for last few weeks, mostly study and test,
> of course. Really nice work. I have a few questions here:
> 
> First, qemu part looks good, I tried test between couple VMs, and device
> could pop up correctly for all of them, but I had some problems when
> trying load driver. For example, if set up two VMs, vm1 and vm2, start
> ivshmem server as you suggested. vm1 could load uio_ivshmem and
> virtio_ivshmem correctly, vm2 could load uio_ivshmem but could not show
> up "/dev/uio0", virtio_ivshmem could not be loaded at all, these still
> exist even I switch the load sequence of vm1 and vm2, and sometimes
> reset "virtio_ivshmem" could crash both vm1 and vm2. Not quite sure this
> is bug or "Ivshmem Mode" issue, but I went through ivshmem-server code,
> did not related information.

If we are only talking about one ivshmem link and vm1 is the master,
there is no role for virtio_ivshmem on it as the backend: that is purely
a frontend driver. Vice versa for vm2: if you want to use its ivshmem
instance as a virtio frontend, uio_ivshmem plays no role.

The "crash" would be interesting to understand: Do you see kernel
panics in the guests? Or are they stuck? Or are the QEMU instances
stuck? Do you know that you can debug the guest kernels via gdb (and
the kernel's gdb scripts)?
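
For the stuck/crash cases, here is a minimal debugging sketch using QEMU's
gdbstub (standard QEMU and gdb options; file names and the port are generic
placeholders, not specific to this series):

```shell
# Start the guest with a gdb server; -s is shorthand for -gdb tcp::1234,
# and -S halts the CPUs until the debugger lets them continue.
qemu_cmd="qemu-system-x86_64 -s -S"

# Attach with a vmlinux that carries debug info; building the guest kernel
# with CONFIG_GDB_SCRIPTS=y additionally provides the lx-* helpers
# (lx-ps, lx-dmesg, ...) for inspecting a wedged or panicked kernel.
gdb_cmd="gdb vmlinux -ex 'target remote :1234'"

printf '%s\n' "$qemu_cmd" "$gdb_cmd"
```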

> 
> I started some code work recently, such as fix code style issues and
> some work based on above testing, however I know you are also working on
> RFC V2, beside the protocol between server-client and client-client is
> not finalized yet either, things may change, so much appreciate if you
> could squeeze me into your develop schedule and share with me some
> plans, :-)  Maybe I could send some pull request in your github repo?

I'm currently working on a refresh of the Jailhouse queue and the kernel 
patches to incorporate just two smaller changes:

  - use Siemens device ID
  - drop "features" register from ivshmem device

I have not yet touched the QEMU code for that, so there is no conflict
yet. I would then wait for your patches.

If it helps us to work on this together, I can push things to GitHub as
well. I will drop you a note when that is done. We should just present
the outcome frequently to the list as new series.

> 
> I personally like this project a lot, there would be a lot of potential
> and user case for it, especially some devices like
> ivshmem-net/ivshmem-block. Anyway, thanks for adding me to the list, and
> looking forward to your reply.

Thanks for the positive feedback. I'm looking forward to working on this
together!

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux

Re: [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU
Posted by Jan Kiszka 4 years, 4 months ago
On 27.11.19 18:19, Jan Kiszka wrote:
> Hi Liang,
> 
> On 27.11.19 16:28, Liang Yan wrote:
>>
>>
>> On 11/11/19 7:57 AM, Jan Kiszka wrote:
>>> To get the ball rolling after my presentation of the topic at KVM Forum
>>> [1] and many fruitful discussions around it, this is a first concrete
>>> code series. As discussed, I'm starting with the IVSHMEM implementation
>>> of a QEMU device and server. It's RFC because, besides specification and
>>> implementation details, there will still be some decisions needed about
>>> how to integrate the new version best into the existing code bases.
>>>
>>> If you want to play with this, the basic setup of the shared memory
>>> device is described in patch 1 and 3. UIO driver and also the
>>> virtio-ivshmem prototype can be found at
>>>
>>>      
>>> http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2 
>>>
>>>
>>> Accessing the device via UIO is trivial enough. If you want to use it
>>> for virtio, this is additionally to the description in patch 3 needed on
>>> the virtio console backend side:
>>>
>>>      modprobe uio_ivshmem
>>>      echo "1af4 1110 1af4 1100 ffc003 ffffff" > 
>>> /sys/bus/pci/drivers/uio_ivshmem/new_id
>>>      linux/tools/virtio/virtio-ivshmem-console /dev/uio0
>>>
>>> And for virtio block:
>>>
>>>      echo "1af4 1110 1af4 1100 ffc002 ffffff" > 
>>> /sys/bus/pci/drivers/uio_ivshmem/new_id
>>>      linux/tools/virtio/virtio-ivshmem-console /dev/uio0 
>>> /path/to/disk.img
>>>
>>> After that, you can start the QEMU frontend instance with the
>>> virtio-ivshmem driver installed which can use the new /dev/hvc* or
>>> /dev/vda* as usual.
>>>
>>> Any feedback welcome!
>>
>> Hi, Jan,
>>
>> I have been playing your code for last few weeks, mostly study and test,
>> of course. Really nice work. I have a few questions here:
>>
>> First, qemu part looks good, I tried test between couple VMs, and device
>> could pop up correctly for all of them, but I had some problems when
>> trying load driver. For example, if set up two VMs, vm1 and vm2, start
>> ivshmem server as you suggested. vm1 could load uio_ivshmem and
>> virtio_ivshmem correctly, vm2 could load uio_ivshmem but could not show
>> up "/dev/uio0", virtio_ivshmem could not be loaded at all, these still
>> exist even I switch the load sequence of vm1 and vm2, and sometimes
>> reset "virtio_ivshmem" could crash both vm1 and vm2. Not quite sure this
>> is bug or "Ivshmem Mode" issue, but I went through ivshmem-server code,
>> did not related information.
> 
> If we are only talking about one ivshmem link and vm1 is the master, 
> there is not role for virtio_ivshmem on it as backend. That is purely a 
> frontend driver. Vice versa for vm2: If you want to use its ivshmem 
> instance as virtio frontend, uio_ivshmem plays no role.
> 
> The "crash" is would be interesting to understand: Do you see kernel 
> panics of the guests? Or are they stuck? Or are the QEMU instances 
> stuck? Do you know that you can debug the guest kernels via gdb (and 
> gdb-scripts of the kernel)?
> 
>>
>> I started some code work recently, such as fix code style issues and
>> some work based on above testing, however I know you are also working on
>> RFC V2, beside the protocol between server-client and client-client is
>> not finalized yet either, things may change, so much appreciate if you
>> could squeeze me into your develop schedule and share with me some
>> plans, :-)  Maybe I could send some pull request in your github repo?
> 
> I'm currently working on a refresh of the Jailhouse queue and the kernel 
> patches to incorporate just two smaller changes:
> 
>   - use Siemens device ID
>   - drop "features" register from ivshmem device
> 
> I have not yet touched the QEMU code for that so far, thus no conflict 
> yet. I would wait for your patches then.
> 
> If it helps us to work on this together, I can push things to github as 
> well. Will drop you a note when done. We should just present the outcome 
> frequently as new series to the list.

I've updated my queues, mostly small changes, mostly to the kernel bits. 
Besides the already announced places, you can also find them as PR 
targets on

https://github.com/siemens/qemu/commits/wip/ivshmem2
https://github.com/siemens/linux/commits/queues/ivshmem2
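
For convenience, the commit-page links above can be turned into clone
commands; the .git clone URLs follow GitHub's usual convention and are
inferred here, not stated above:

```shell
# Print clone commands for the two PR-target branches listed above
# (repo paths and branch names taken from the links; .git suffix assumed).
for target in "siemens/qemu wip/ivshmem2" "siemens/linux queues/ivshmem2"; do
    set -- $target
    echo "git clone -b $2 https://github.com/$1.git"
done
```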

To give the whole thing broader coverage, I will now also move forward
and integrate the current state into Jailhouse, at the risk of having to
rework the interface there once again. But a number of users already
require the extended features (or are even using them), and this gives
nice test coverage of key components and properties.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux