[PATCH 0/6] Add ivshmem-flat device

Gustavo Romero posted 6 patches 2 months, 2 weeks ago
Patches applied successfully (tree, apply log)
git fetch https://github.com/patchew-project/qemu tags/patchew/20240222222218.2261956-1-gustavo.romero@linaro.org
Maintainers: Peter Maydell <peter.maydell@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, Thomas Huth <thuth@redhat.com>, Laurent Vivier <lvivier@redhat.com>
There is a newer version of this series
[PATCH 0/6] Add ivshmem-flat device
Posted by Gustavo Romero 2 months, 2 weeks ago
Since v1:
- Correct code style
- Correct trace event format strings
- Include minimum headers in ivshmem-flat.h
- Allow ivshmem_flat_recv_msg() to take NULL
- Factor out ivshmem_flat_connect_server()
- Split the controversial sysbus auto-wire code into a separate patch
- Document QDev interface

Since v2:
- Addressed all comments from Thomas Huth about qtest:
  1) Use of g_usleep + number of attempts for timeout
  2) Use of g_get_tmp_dir instead of hard-coded /tmp
  3) Test if the lm3s6965evb machine is available and skip the test if not
- Use qemu_irq_pulse() instead of two qemu_set_irq() calls
- Fixed all tests for new device options and IRQ name change
- Updated doc and commit messages regarding new/deleted device options
- Made the device options 'x-bus-address-iomem' and 'x-bus-address-shmem' mandatory

--

This patchset introduces a new device, ivshmem-flat, which is similar to the
current ivshmem device but does not require a PCI bus. It implements the
ivshmem status and control registers as MMRs (memory-mapped registers) and the
shared memory as a directly accessible memory region in the VM memory layout.
It's meant to be used on machines with Cortex-M MCUs, which usually lack a PCI
bus, e.g. lm3s6965evb and mps2-an385. Additionally, it has the benefit of
requiring only a tiny 'device driver', which is helpful on some RTOSes, like
Zephyr, that run on resource-constrained targets.
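
To give an idea of how small the guest 'device driver' can be, here is a
minimal, hypothetical bare-metal sketch of the guest-side interaction with
the device. It is not code from this series: the base address is only an
example (it must match the address the MMR block is mapped at), while the
register offsets and doorbell format follow the ivshmem specification.

  /* Hypothetical sketch, not code from this series. */
  #include <stdint.h>

  #define IVSHMEM_MMR_BASE  0x400ff000UL  /* example address only */
  #define IVSHMEM_REG(off)  (*(volatile uint32_t *)(IVSHMEM_MMR_BASE + (off)))

  #define INTRMASK    0x0   /* interrupt mask */
  #define INTRSTATUS  0x4   /* interrupt status */
  #define IVPOSITION  0x8   /* this peer's ID, assigned by the ivshmem server */
  #define DOORBELL    0xc   /* write (peer_id << 16) | vector to notify a peer */

  static uint32_t ivshmem_own_id(void)
  {
      return IVSHMEM_REG(IVPOSITION);
  }

  static void ivshmem_notify(uint16_t peer_id, uint16_t vector)
  {
      IVSHMEM_REG(DOORBELL) = ((uint32_t)peer_id << 16) | vector;
  }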

The patchset includes a QTest for the ivshmem-flat device. However, it's also
possible to experiment with it in two ways:

(a) using two Cortex-M VMs running Zephyr; or
(b) using one aarch64 VM running Linux with the ivshmem PCI device and another
    arm (Cortex-M) VM running Zephyr with the new ivshmem-flat device.

Please note that to run the ivshmem-flat QTests the following patch, which is
not yet committed to the tree, must be applied:

https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html

--

To experiment with (a), clone this Zephyr repo [0], set up the Zephyr build
environment [1], and follow the instructions in the 'ivshmem' sample main.c [2].

[0] https://github.com/gromero/zephyr/tree/ivshmem
[1] https://docs.zephyrproject.org/latest/develop/getting_started/index.html
[2] https://github.com/gromero/zephyr/commit/73fbd481e352b25ae5483ba5048a2182b90b7f00#diff-16fa1f481a49b995d0d1a62da37b9f33033f5ee477035e73465e7208521ddbe0R9-R70
[3] https://lore.kernel.org/qemu-devel/20231127052024.435743-1-gustavo.romero@linaro.org/

To experiment with (b):

$ git clone -b uio_ivshmem --single-branch https://github.com/gromero/linux.git
$ cd linux
$ wget https://people.linaro.org/~gustavo.romero/ivshmem/arm64_uio_ivshmem.config -O .config

If on an x86_64 machine, cross-compile the kernel, for instance:

$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j 36

Install the image in some directory, say ~/linux:

$ mkdir ~/linux
$ export INSTALL_PATH=~/linux
$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j 36 install

or, if you prefer, download the compiled image from:

$ wget https://people.linaro.org/~gustavo.romero/ivshmem/vmlinuz-6.6.0-rc1-g28f3f88ee261

... and then the rootfs:

$ wget https://people.linaro.org/~gustavo.romero/ivshmem/rootfs.qcow2

Now, build QEMU with this patchset applied:

$ mkdir build && cd build
$ ../configure --target-list=arm-softmmu,aarch64-softmmu
$ make -j 36

Start the ivshmem server:

$ contrib/ivshmem-server/ivshmem-server -F

Start the aarch64 VM + Linux + ivshmem PCI device:

$ ./qemu-system-aarch64 -kernel ~/linux/vmlinuz-6.6.0-rc1-g28f3f88ee261 -append "root=/dev/vda initrd=/bin/bash console=ttyAMA0,115200" -drive file=~/linux/rootfs.qcow2,media=disk,if=virtio -machine virt-6.2 -nographic -accel tcg -cpu cortex-a57 -m 8192 -netdev bridge,id=hostnet0,br=virbr0,helper=/usr/lib/qemu/qemu-bridge-helper -device pcie-root-port,port=8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:d9:d1:12,bus=pci.1,addr=0x0 -device ivshmem-doorbell,vectors=2,chardev=ivshmem -chardev socket,path=/tmp/ivshmem_socket,id=ivshmem

Log into the VM with user/pass: root/abc123. The kernel boot log should then
show:

[    2.656367] uio_ivshmem 0000:00:02.0: ivshmem-mmr at 0x0000000010203000, size 0x0000000000001000
[    2.656931] uio_ivshmem 0000:00:02.0: ivshmem-shmem at 0x0000008000000000, size 0x0000000000400000
[    2.662554] uio_ivshmem 0000:00:02.0: module successfully loaded

In another console, clone and build the Zephyr image from the 'uio_ivshmem' branch:

$ git clone -b uio_ivshmem --single-branch https://github.com/gromero/zephyr
$ west -v --verbose build -p always -b qemu_cortex_m3 ./samples/uio_ivshmem/

... and then start the arm VM + Zephyr image + ivshmem-flat device:

$ ./qemu-system-arm -machine lm3s6965evb -nographic -net none -chardev socket,path=/tmp/ivshmem_socket,id=ivshmem_flat -device ivshmem-flat,chardev=ivshmem_flat,x-irq-qompath='/machine/unattached/device[1]/nvic/unnamed-gpio-in[0]',x-bus-qompath='/sysbus' -kernel ~/zephyrproject/zephyr/build/qemu_cortex_m3/uio_ivshmem/zephyr/zephyr.elf

You should see something like:

*** Booting Zephyr OS build zephyr-v3.3.0-8350-gfb003e583600 ***
*** Board: qemu_cortex_m3
*** Installing direct IRQ handler for external IRQ0 (Exception #16)...
*** Enabling IRQ0 in the NVIC logic...
*** Received IVSHMEM PEER ID: 7
*** Waiting notification from peers to start...

Now, from the Linux terminal, notify the arm VM (use the "IVSHMEM PEER ID"
reported by Zephyr as the third argument; in this example, 7). The tool's
output should look like:

MMRs mapped at 0xffff8fb28000 in VMA.
shmem mapped at 0xffff8f728000 in VMA.
mmr0: 0 0
mmr1: 0 0
mmr2: 6 6
mmr3: 0 0
Data ok. 4194304 byte(s) checked.

The arm VM should report something like:

*** Got interrupt at vector 0!
*** Writting constant 0xb5b5b5b5 to shmem... done!
*** Notifying back peer ID 6 at vector 0...
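
For reference, the Linux-side notification boils down to a single doorbell
write through the UIO-mapped MMRs. The following is a minimal, hypothetical
sketch, not the actual test tool from the uio_ivshmem branch: the device
node, UIO map index, and argument handling are assumptions, and it relies on
the standard UIO convention that map N is mmap'ed at file offset N * page
size. The doorbell format follows the ivshmem specification.

  /* Hypothetical sketch, assuming uio_ivshmem exposes the MMRs as UIO map 0. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(int argc, char *argv[])
  {
      if (argc < 2) {
          fprintf(stderr, "usage: %s <peer-id>\n", argv[0]);
          return 1;
      }

      long pgsz = sysconf(_SC_PAGESIZE);
      uint16_t peer_id = (uint16_t)atoi(argv[1]);   /* e.g. 7, from Zephyr */

      int fd = open("/dev/uio0", O_RDWR);
      if (fd < 0) {
          perror("open /dev/uio0");
          return 1;
      }

      /* UIO maps region N at file offset N * page size; map 0 = the MMRs. */
      volatile uint32_t *mmr = mmap(NULL, pgsz, PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0 * pgsz);
      if (mmr == MAP_FAILED) {
          perror("mmap MMRs");
          return 1;
      }

      printf("own peer ID: %u\n", mmr[2]);          /* IVPOSITION at 0x8 */
      mmr[3] = ((uint32_t)peer_id << 16) | 0;       /* DOORBELL, vector 0 */
      return 0;
  }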

Cheers,
Gustavo

Gustavo Romero (6):
  hw/misc/ivshmem: Add ivshmem-flat device
  hw/misc/ivshmem-flat: Allow device to wire itself on sysbus
  hw/arm: Allow some machines to use the ivshmem-flat device
  hw/misc/ivshmem: Rename ivshmem to ivshmem-pci
  tests/qtest: Reorganize common code in ivshmem-test
  tests/qtest: Add ivshmem-flat test

 docs/system/devices/ivshmem-flat.rst |  90 +++++
 hw/arm/mps2.c                        |   3 +
 hw/arm/stellaris.c                   |   3 +
 hw/arm/virt.c                        |   2 +
 hw/core/sysbus-fdt.c                 |   2 +
 hw/misc/Kconfig                      |   5 +
 hw/misc/ivshmem-flat.c               | 531 +++++++++++++++++++++++++++
 hw/misc/{ivshmem.c => ivshmem-pci.c} |   0
 hw/misc/meson.build                  |   4 +-
 hw/misc/trace-events                 |  17 +
 include/hw/misc/ivshmem-flat.h       |  94 +++++
 tests/qtest/ivshmem-flat-test.c      | 338 +++++++++++++++++
 tests/qtest/ivshmem-test.c           | 113 +-----
 tests/qtest/ivshmem-utils.c          | 156 ++++++++
 tests/qtest/ivshmem-utils.h          |  56 +++
 tests/qtest/meson.build              |   8 +-
 16 files changed, 1312 insertions(+), 110 deletions(-)
 create mode 100644 docs/system/devices/ivshmem-flat.rst
 create mode 100644 hw/misc/ivshmem-flat.c
 rename hw/misc/{ivshmem.c => ivshmem-pci.c} (100%)
 create mode 100644 include/hw/misc/ivshmem-flat.h
 create mode 100644 tests/qtest/ivshmem-flat-test.c
 create mode 100644 tests/qtest/ivshmem-utils.c
 create mode 100644 tests/qtest/ivshmem-utils.h

-- 
2.34.1
Re: [PATCH 0/6] Add ivshmem-flat device
Posted by Markus Armbruster 2 months, 1 week ago
Gustavo Romero <gustavo.romero@linaro.org> writes:

[...]

> This patchset introduces a new device, ivshmem-flat, which is similar to the
> current ivshmem device but does not require a PCI bus. It implements the ivshmem
> status and control registers as MMRs and the shared memory as a directly
> accessible memory region in the VM memory layout. It's meant to be used on
> machines like those with Cortex-M MCUs, which usually lack a PCI bus, e.g.,
> lm3s6965evb and mps2-an385. Additionally, it has the benefit of requiring a tiny
> 'device driver,' which is helpful on some RTOSes, like Zephyr, that run on
> memory-constrained resource targets.
>
> The patchset includes a QTest for the ivshmem-flat device, however, it's also
> possible to experiment with it in two ways:
>
> (a) using two Cortex-M VMs running Zephyr; or
> (b) using one aarch64 VM running Linux with the ivshmem PCI device and another
>     arm (Cortex-M) VM running Zephyr with the new ivshmem-flat device.
>
> Please note that for running the ivshmem-flat QTests the following patch, which
> is not committed to the tree yet, must be applied:
>
> https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html

What problem are you trying to solve with ivshmem?

Shared memory is not a solution to any communication problem, it's
merely a building block for building such solutions: you invariably have
to layer some protocol on top.  What do you intend to put on top of
ivshmem?

[...]
Re: [PATCH 0/6] Add ivshmem-flat device
Posted by Gustavo Romero 2 weeks, 5 days ago
Hi Markus,

Thanks for your interest in the ivshmem-flat device.

Bill Mills (cc:ed) is the best person to answer your question,
so please find his answer below.

On 2/28/24 3:29 AM, Markus Armbruster wrote:
> Gustavo Romero <gustavo.romero@linaro.org> writes:
> 
> [...]
> 
>> This patchset introduces a new device, ivshmem-flat, which is similar to the
>> current ivshmem device but does not require a PCI bus. It implements the ivshmem
>> status and control registers as MMRs and the shared memory as a directly
>> accessible memory region in the VM memory layout. It's meant to be used on
>> machines like those with Cortex-M MCUs, which usually lack a PCI bus, e.g.,
>> lm3s6965evb and mps2-an385. Additionally, it has the benefit of requiring a tiny
>> 'device driver,' which is helpful on some RTOSes, like Zephyr, that run on
>> memory-constrained resource targets.
>>
>> The patchset includes a QTest for the ivshmem-flat device, however, it's also
>> possible to experiment with it in two ways:
>>
>> (a) using two Cortex-M VMs running Zephyr; or
>> (b) using one aarch64 VM running Linux with the ivshmem PCI device and another
>>      arm (Cortex-M) VM running Zephyr with the new ivshmem-flat device.
>>
>> Please note that for running the ivshmem-flat QTests the following patch, which
>> is not committed to the tree yet, must be applied:
>>
>> https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html
> 
> What problem are you trying to solve with ivshmem?
> 
> Shared memory is not a solution to any communication problem, it's
> merely a building block for building such solutions: you invariably have
> to layer some protocol on top.  What do you intend to put on top of
> ivshmem?

Actually, ivshmem is shared memory plus bi-directional notifications (in this case a doorbell register and an IRQ).

This is the fundamental requirement for many types of communication, but our interest is in the OpenAMP project [1].

All the OpenAMP project's communication is based on shared memory and bi-directional notification.  Often this is on an AMP SoC with Cortex-A cores plus Cortex-M or Cortex-R cores.  However, we are now expanding into PCIe-based AMP.  One example of this is an x86 host computer and a PCIe card with an Arm SoC.  Other examples include two systems with PCIe root complexes connected via a non-transparent bridge.

The existing PCI-based ivshmem lets us model these types of systems in a simple, generic way without worrying about the details of the RC/EP relationship or the details of a specific non-transparent bridge.  In fact, ivshmem looks to the two (or more) systems like a non-transparent bridge with its own memory (and no other memory access is allowed).

Right now we are testing this with RPMsg between two QEMU systems, where both systems are Cortex-A53 and both run Zephyr. [2]

We will expand this by switching one of the QEMU systems to either arm64 Linux or x86 Linux.

We (and others) are also working on a generic virtio transport that will work between any two systems as long as they have shared memory and bi-directional notifications.

Now for ivshmem-flat.  We want to expand this model to include MCU-like CPUs and RTOSes that don't have PCIe.  We focus on Cortex-M because every open source RTOS has an existing port for one of the Cortex-M machines already in QEMU.  However, they don't normally pick the same one.  If we added our own custom machine for this, the QEMU project would push back, and even if it were accepted we would have to do a port for each RTOS.  This would mean we would not test as many RTOSes.

The ivshmem-flat device is actually a good model for what a Cortex-M based PCIe card would look like.  The host system would see the connection as PCIe, but to the Cortex-M it would just appear as memory, MMRs for the doorbell, and an IRQ.

So even after we have a "roll your own machine definition from a file", I expect ivshmem and ivshmem-flat to still be very useful.

[1] https://www.openampproject.org/
[2] Work in progress here: https://github.com/OpenAMP/openamp-system-reference/tree/main/examples/zephyr/dual_qemu_ivshmem


Cheers,
Gustavo
Re: [PATCH 0/6] Add ivshmem-flat device
Posted by Markus Armbruster 2 weeks, 4 days ago
Gustavo Romero <gustavo.romero@linaro.org> writes:

> Hi Markus,
>
> Thanks for interesting in the ivshmem-flat device.
>
> Bill Mills (cc:ed) is the best person to answer your question,
> so please find his answer below.
>
> On 2/28/24 3:29 AM, Markus Armbruster wrote:
>> Gustavo Romero <gustavo.romero@linaro.org> writes:
>> 
>> [...]
>> 
>>> This patchset introduces a new device, ivshmem-flat, which is similar to the
>>> current ivshmem device but does not require a PCI bus. It implements the ivshmem
>>> status and control registers as MMRs and the shared memory as a directly
>>> accessible memory region in the VM memory layout. It's meant to be used on
>>> machines like those with Cortex-M MCUs, which usually lack a PCI bus, e.g.,
>>> lm3s6965evb and mps2-an385. Additionally, it has the benefit of requiring a tiny
>>> 'device driver,' which is helpful on some RTOSes, like Zephyr, that run on
>>> memory-constrained resource targets.
>>>
>>> The patchset includes a QTest for the ivshmem-flat device, however, it's also
>>> possible to experiment with it in two ways:
>>>
>>> (a) using two Cortex-M VMs running Zephyr; or
>>> (b) using one aarch64 VM running Linux with the ivshmem PCI device and another
>>>      arm (Cortex-M) VM running Zephyr with the new ivshmem-flat device.
>>>
>>> Please note that for running the ivshmem-flat QTests the following patch, which
>>> is not committed to the tree yet, must be applied:
>>>
>>> https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html
>> 
>> What problem are you trying to solve with ivshmem?
>> 
>> Shared memory is not a solution to any communication problem, it's
>> merely a building block for building such solutions: you invariably have
>> to layer some protocol on top.  What do you intend to put on top of
>> ivshmem?
>
> Actually ivshmem is shared memory and bi-direction notifications (in this case a doorbell register and an irq).

Yes, ivshmem-doorbell supports interrupts.  Doesn't change my argument.

> This is the fundamental requirement for many types of communication but our interest is for the OpenAMP project [1].
>
> All the OpenAMP project's communication is based on shared memory and bi-directional notification.  Often this is on a AMP SOC with Cortex-As and Cortex-Ms or Rs.  However we are now expanding into PCIe based AMP. One example of this is an x86 host computer and a PCIe card with an ARM SOC.  Other examples include two systems with PCIe root complex connected via a non-transparent bridge.
>
> The existing PCI based ivshmem lets us model these types of systems in a simple generic way without worrying about the details of the RC/EP relationship or the details of a specific non-transparent bridge.  In fact the ivshmem looks to the two (or more) systems like a non-transparent bridge with its own memory (and no other memory access is allowed).
>
> Right now we are testing this with RPMSG between two QEMU system where both systems are cortex-a53 and both running Zephyr. [2]
>
> We will expand this by switching one of the QEMU systems to either arm64 Linux or x86 Linux.

So you want to simulate a heterogeneous machine by connecting multiple
qemu-system-FOO processes via ivshmem, correct?

> We (and others) are also working on a generic virtio transport that will work between any two systems as long as they have shared memory and bi-directional notifications.

On top of or adjacent to ivshmem?

> Now for ivshmem-flat.  We want to expand this model to include MCU like CPUs and RTOS'es that don't have PCIe.  We focus on Cortex-M because every open source RTOS has an existing port for one of the Cortex-M machines already in QEMU.  However they don't normally pick the same one.  If we added our own custom machine for this, the QEMU project would push back and even if accepted we would have to do a port for each RTOS.  This would mean we would not test as many RTOSes.
>
> The ivshmem-flat is actually a good model for what a Cortex-M based PCIe card would look like.  The host system would see the connection as PCIe but to the Cortex-M it would just appear as memory, MMR's for the doorbell, and an IRQ.
>
> So even after we have a "roll your own machine definition from a file", I expect ivshmem and ivshmem-flat to still be very useful.
>
> [1] https://www.openampproject.org/
> [2] Work in progress here: https://github.com/OpenAMP/openamp-system-reference/tree/main/examples/zephyr/dual_qemu_ivshmem
Re: [PATCH 0/6] Add ivshmem-flat device
Posted by Bill Mills 2 weeks, 4 days ago
Hi Markus,

On 4/23/24 6:39 AM, Markus Armbruster wrote:
> Gustavo Romero <gustavo.romero@linaro.org> writes:
> 
>> Hi Markus,
>>
>> Thanks for interesting in the ivshmem-flat device.
>>
>> Bill Mills (cc:ed) is the best person to answer your question,
>> so please find his answer below.
>>
>> On 2/28/24 3:29 AM, Markus Armbruster wrote:
>>> Gustavo Romero <gustavo.romero@linaro.org> writes:
>>>
>>> [...]
>>>
>>>> This patchset introduces a new device, ivshmem-flat, which is similar to the
>>>> current ivshmem device but does not require a PCI bus. It implements the ivshmem
>>>> status and control registers as MMRs and the shared memory as a directly
>>>> accessible memory region in the VM memory layout. It's meant to be used on
>>>> machines like those with Cortex-M MCUs, which usually lack a PCI bus, e.g.,
>>>> lm3s6965evb and mps2-an385. Additionally, it has the benefit of requiring a tiny
>>>> 'device driver,' which is helpful on some RTOSes, like Zephyr, that run on
>>>> memory-constrained resource targets.
>>>>
>>>> The patchset includes a QTest for the ivshmem-flat device, however, it's also
>>>> possible to experiment with it in two ways:
>>>>
>>>> (a) using two Cortex-M VMs running Zephyr; or
>>>> (b) using one aarch64 VM running Linux with the ivshmem PCI device and another
>>>>       arm (Cortex-M) VM running Zephyr with the new ivshmem-flat device.
>>>>
>>>> Please note that for running the ivshmem-flat QTests the following patch, which
>>>> is not committed to the tree yet, must be applied:
>>>>
>>>> https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html
>>>
>>> What problem are you trying to solve with ivshmem?
>>>
>>> Shared memory is not a solution to any communication problem, it's
>>> merely a building block for building such solutions: you invariably have
>>> to layer some protocol on top.  What do you intend to put on top of
>>> ivshmem?
>>
>> Actually ivshmem is shared memory and bi-direction notifications (in this case a doorbell register and an irq).
> 
> Yes, ivshmem-doorbell supports interrupts.  Doesn't change my argument.
> 
>> This is the fundamental requirement for many types of communication but our interest is for the OpenAMP project [1].
>>
>> All the OpenAMP project's communication is based on shared memory and bi-directional notification.  Often this is on a AMP SOC with Cortex-As and Cortex-Ms or Rs.  However we are now expanding into PCIe based AMP. One example of this is an x86 host computer and a PCIe card with an ARM SOC.  Other examples include two systems with PCIe root complex connected via a non-transparent bridge.
>>
>> The existing PCI based ivshmem lets us model these types of systems in a simple generic way without worrying about the details of the RC/EP relationship or the details of a specific non-transparent bridge.  In fact the ivshmem looks to the two (or more) systems like a non-transparent bridge with its own memory (and no other memory access is allowed).
>>
>> Right now we are testing this with RPMSG between two QEMU system where both systems are cortex-a53 and both running Zephyr. [2]
>>
>> We will expand this by switching one of the QEMU systems to either arm64 Linux or x86 Linux.
> 
> So you want to simulate a heterogeneous machine by connecting multiple
> qemu-system-FOO processes via ivshmem, correct?

An AMP SoC is one use case.  A PCIe card with an embedded Cortex-M would 
be another.

> 
>> We (and others) are also working on a generic virtio transport that will work between any two systems as long as they have shared memory and bi-directional notifications.
> 
> On top of or adjacent to ivshmem?
> 

On top of ivshmem.  It is not the only use case but it is an important one.

I just gave a talk on this subject at EOSS.  If you would like to look 
at the slides, they are here:
https://sched.co/1aBFm

Thanks,
Bill

>> Now for ivshmem-flat.  We want to expand this model to include MCU like CPUs and RTOS'es that don't have PCIe.  We focus on Cortex-M because every open source RTOS has an existing port for one of the Cortex-M machines already in QEMU.  However they don't normally pick the same one.  If we added our own custom machine for this, the QEMU project would push back and even if accepted we would have to do a port for each RTOS.  This would mean we would not test as many RTOSes.
>>
>> The ivshmem-flat is actually a good model for what a Cortex-M based PCIe card would look like.  The host system would see the connection as PCIe but to the Cortex-M it would just appear as memory, MMR's for the doorbell, and an IRQ.
>>
>> So even after we have a "roll your own machine definition from a file", I expect ivshmem and ivshmem-flat to still be very useful.
>>
>> [1] https://www.openampproject.org/
>> [2] Work in progress here: https://github.com/OpenAMP/openamp-system-reference/tree/main/examples/zephyr/dual_qemu_ivshmem
> 

-- 
Bill Mills
Principal Technical Consultant, Linaro
+1-240-643-0836
TZ: US Eastern
Work Schedule:  Tues/Wed/Thur
Re: [PATCH 0/6] Add ivshmem-flat device
Posted by Markus Armbruster 2 weeks, 2 days ago
Bill Mills <bill.mills@linaro.org> writes:

> Hi Markus,
>
> On 4/23/24 6:39 AM, Markus Armbruster wrote:
>> Gustavo Romero <gustavo.romero@linaro.org> writes:
>> 
>>> Hi Markus,
>>>
>>> Thanks for interesting in the ivshmem-flat device.
>>>
>>> Bill Mills (cc:ed) is the best person to answer your question,
>>> so please find his answer below.
>>>
>>> On 2/28/24 3:29 AM, Markus Armbruster wrote:
>>>> Gustavo Romero <gustavo.romero@linaro.org> writes:
>>>>
>>>> [...]
>>>>
>>>>> This patchset introduces a new device, ivshmem-flat, which is similar to the
>>>>> current ivshmem device but does not require a PCI bus. It implements the ivshmem
>>>>> status and control registers as MMRs and the shared memory as a directly
>>>>> accessible memory region in the VM memory layout. It's meant to be used on
>>>>> machines like those with Cortex-M MCUs, which usually lack a PCI bus, e.g.,
>>>>> lm3s6965evb and mps2-an385. Additionally, it has the benefit of requiring a tiny
>>>>> 'device driver,' which is helpful on some RTOSes, like Zephyr, that run on
>>>>> memory-constrained resource targets.
>>>>>
>>>>> The patchset includes a QTest for the ivshmem-flat device, however, it's also
>>>>> possible to experiment with it in two ways:
>>>>>
>>>>> (a) using two Cortex-M VMs running Zephyr; or
>>>>> (b) using one aarch64 VM running Linux with the ivshmem PCI device and another
>>>>>       arm (Cortex-M) VM running Zephyr with the new ivshmem-flat device.
>>>>>
>>>>> Please note that for running the ivshmem-flat QTests the following patch, which
>>>>> is not committed to the tree yet, must be applied:
>>>>>
>>>>> https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html
>>>>
>>>> What problem are you trying to solve with ivshmem?
>>>>
>>>> Shared memory is not a solution to any communication problem, it's
>>>> merely a building block for building such solutions: you invariably have
>>>> to layer some protocol on top.  What do you intend to put on top of
>>>> ivshmem?
>>>
>>> Actually ivshmem is shared memory and bi-direction notifications (in this case a doorbell register and an irq).
>>
>> Yes, ivshmem-doorbell supports interrupts.  Doesn't change my argument.
>> 
>>> This is the fundamental requirement for many types of communication but our interest is for the OpenAMP project [1].
>>>
>>> All the OpenAMP project's communication is based on shared memory and bi-directional notification.  Often this is on a AMP SOC with Cortex-As and Cortex-Ms or Rs.  However we are now expanding into PCIe based AMP. One example of this is an x86 host computer and a PCIe card with an ARM SOC.  Other examples include two systems with PCIe root complex connected via a non-transparent bridge.
>>>
>>> The existing PCI based ivshmem lets us model these types of systems in a simple generic way without worrying about the details of the RC/EP relationship or the details of a specific non-transparent bridge.  In fact the ivshmem looks to the two (or more) systems like a non-transparent bridge with its own memory (and no other memory access is allowed).
>>>
>>> Right now we are testing this with RPMSG between two QEMU system where both systems are cortex-a53 and both running Zephyr. [2]
>>>
>>> We will expand this by switching one of the QEMU systems to either arm64 Linux or x86 Linux.
>> So you want to simulate a heterogeneous machine by connecting multiple
>> qemu-system-FOO processes via ivshmem, correct?
>
> An AMP SOC is one use case.  A PCIe card with an embedded Cortex-M would be another.
>
>> 
>>> We (and others) are also working on a generic virtio transport that will work between any two systems as long as they have shared memory and bi-directional notifications.
>> 
>> On top of or adjacent to ivshmem?
>
> On top of ivshmem.  It is not the only use case but it is an important one.

Interesting.

> I just gave a talk on this subject at EOSS.  If you would like to look at the slides they are here:
> https://sched.co/1aBFm

The talk's abstract:

    AMP Virtio: A New Virtio Transport for AMP Systems, with Focus on
    Zephyr, Linux, and Xen

    Asymmetric multiprocessing systems are common in automotive,
    Industrial, and mobile markets and are entering the data center
    market as well.  The OpenAMP project strives to make AMP systems
    easier and more standards based.  The OpenAMP project is working on
    a new Virtio transport layer that can be used between cores that do
    not share a hypervisor.  Example systems include: * AMP SoCs running
    Linux on Cortex-A and Zephyr on Cortex-M, * x86 and Arm
    systems connected via PCIe, both running Linux.  AMP Virtio can also
    be used in Xen and other hypervisors to reduce worst-case latency
    and increase freedom from interference (FFI).  These aspects are
    critical in real-time and functionally safe systems.  This
    presentation will cover:
    * Why Virtio for AMP systems
    * What are the problems with the existing virtio transports for AMP
      systems
    * Outline of the transport proposal
    * Prototype software for Zephyr, Linux, and Xen
    * Show various topologies and use cases for Device and Driver
      placement
    * Show portability to other RTOSes and hypervisors.

You're interested in systems that contain multiple cores.  You want to
create a virtio transport to let these cores talk to each other.

Let's talk physical hardware.  The transport needs to go over some kind
of device.  The device could be pretty smart and provide the virtio
transport, or it could be really dumb and provide just enough to let
software running on the core implement the virtio transport.  What do
you have in mind?

You need QEMU to emulate one or more of the devices you have in mind.

Note: emulation is about the guest-facing part of the QEMU device model.
We'll get to the host-facing part in a minute.

Smart device: we know how to emulate virtio devices with various
connectors, such as PCI, CCW, MMIO.

Dumb device: ivshmem-doorbell could serve as a virtual dumb device.
Note that a physical ivshmem would be a bad idea; its design is rather
poor.  Is this why you're interested in ivshmem?

As is, ivshmem-doorbell comes with a PCI connector.  You want an MMIO
connector.  Fair enough.

Once you have an actual dumb physical device, you're likely better off
emulating that instead of approximating it with ivshmem.

Approximating could still be useful as a stopgap.

I sidestepped an important problem so far: the "asymmetric" in AMP.
When your asymmetric system contains cores with different architectures,
QEMU can't emulate the entire system, because qemu-system-TARGET can
only emulate cores with achitecture TARGET.

I guess you want to work around this limitation by running multiple
qemu-system-TARGETs.  Trouble is you then need to connect them somehow.
ivshmem's host-facing part can connect its qemu-system-TARGET to other
processes, including other qemu-system-TARGETs, and the TARGETs need not
be identical.  Correct?

What if we had a qemu-system that wasn't limited to a single
architecture?  Would you still want to run multiple connected instances
then, or would you simply model your asymmetric system in a single one?

[...]