On 15.08.20 20:24, Julien Grall wrote:
> Hi Oleksandr,
Hi Julien.
>
> On 03/08/2020 19:21, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Hello all.
>>
>> The purpose of this patch series is to add IOREQ/DM support to Xen on
>> Arm. You can find an initial discussion at [1]. Xen on Arm requires an
>> implementation to forward guest MMIO accesses to a device model in
>> order to implement a virtio-mmio backend or even a mediator outside of
>> the hypervisor. As Xen on x86 already contains the required support,
>> this patch series tries to make it common and introduces Arm-specific
>> bits plus some new functionality. The patch series is based on Julien's
>> PoC "xen/arm: Add support for Guest IO forwarding to a device emulator".
>> Besides splitting the existing IOREQ/DM support and introducing the Arm
>> side, the patch series also includes virtio-mmio related changes
>> (toolstack) so that reviewers can see how the whole picture could look.
>> For a non-RFC, the IOREQ/DM and virtio-mmio support will be sent
>> separately.
>>
>> According to the initial discussion there are a few open
>> questions/concerns regarding security and performance of the VirtIO
>> solution:
>> 1. virtio-mmio vs virtio-pci, SPI vs MSI: different use-cases require
>>    different transports...
>> 2. the virtio backend is able to access all guest memory, so some kind
>>    of protection is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory
>>    & memcpys in guest'
>> 3. the interface between the toolstack and the 'out-of-qemu' virtio
>>    backend; avoid using Xenstore in the virtio backend if possible.
>> 4. a lot of 'foreign mappings' could lead to memory exhaustion; Julien
>>    has some ideas regarding that.
>>
>> All of them look valid and worth considering, but the first thing we
>> need on Arm is a mechanism to forward guest IO to a device emulator,
>> so let's focus on that first.
>>
>> ***
>>
>> Patch series [2] was rebased on the Xen v4.14 release and tested on a
>> Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with a virtio-mmio disk
>> backend (we will share it later) running in a driver domain and an
>> unmodified Linux guest using the existing virtio-blk driver (frontend).
>> No issues were observed. Guest domain 'reboot/destroy' use-cases work
>> properly. The patch series was only build-tested on x86.
>>
>> Please note that the build test passed for the following modes:
>> 1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
>> 2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
>> 3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
>> 4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set
>> 5. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set
>>
>> The build test didn't pass for Arm32 with 'CONFIG_IOREQ_SERVER=y' due
>> to the lack of 64-bit cmpxchg support on Arm32 (see the cmpxchg usage
>> in hvm_send_buffered_ioreq()).
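[ For the archives: the cmpxchg in question canonicalises the buffered
ioreq ring pointers, which overlay two 32-bit read/write pointers on a
single 64-bit word so both can be rebased with one compare-and-swap. A
rough, simplified sketch of that pattern (placeholder names and constant,
not the exact Xen code):

    /*
     * Simplified illustration only -- not the actual Xen code.
     * Two 32-bit ring pointers overlay a single 64-bit word so that
     * both can be rebased with one 64-bit compare-and-swap.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SLOT_NUM 511u   /* placeholder for IOREQ_BUFFER_SLOT_NUM */

    union ptrs {
        struct {
            uint32_t read_pointer;
            uint32_t write_pointer;
        };
        uint64_t full;
    };

    static void canonicalize(union ptrs *p)
    {
        union ptrs old = *p, new;

        while ( old.read_pointer >= SLOT_NUM )
        {
            uint32_t n = old.read_pointer / SLOT_NUM;

            /* Rebase both pointers by the same amount, atomically. */
            new.read_pointer  = old.read_pointer  - n * SLOT_NUM;
            new.write_pointer = old.write_pointer - n * SLOT_NUM;

            /* On failure old.full is refreshed and the loop retries. */
            if ( __atomic_compare_exchange_n(&p->full, &old.full, new.full,
                                             false, __ATOMIC_SEQ_CST,
                                             __ATOMIC_SEQ_CST) )
                break;
        }
    }

    int main(void)
    {
        union ptrs p = { .read_pointer = 2 * SLOT_NUM + 3,
                         .write_pointer = 2 * SLOT_NUM + 7 };

        canonicalize(&p);
        printf("read=%u write=%u\n", p.read_pointer, p.write_pointer);
        return 0;
    }

Without a 64-bit cmpxchg the two pointers cannot be updated together
atomically, hence the Arm32 build failure. ]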
>
> I have sent a patch to implement cmpxchg64() and guest_cmpxchg64()
> (see [1]).
>
> Cheers,
>
> [1]
> https://lore.kernel.org/xen-devel/20200815172143.1327-1-julien@xen.org/T/#u
Thank you! I have already build-tested it, no issues. I will update the
corresponding patch to select IOREQ_SERVER for "config ARM" instead of
"config ARM64".
--
Regards,
Oleksandr Tyshchenko