RE: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
Posted by Wei Chen 3 years, 5 months ago
Hi Oleksandr,

Thanks for sharing the virtio-disk backend. I have tested it on the Arm FVP_base platform.
We used Domain-0 to run the virtio disk backend. The backend disk is a loop device.
    "virtio_disks": [
        {
            "backend_domname": "Domain-0",
            "devid": 0,
            "disks": [
                {
                    "filename": "/dev/loop0"
                }
            ]
        }
    ],

It works fine and I've pasted some logs:

-------------------------------------------
Domain-0 logs:
main: read backend domid 0
(XEN) gnttab_mark_dirty not implemented yet
(XEN) domain_direct_pl011_init for domain#2
main: read frontend domid 2
  Info: connected to dom2

demu_seq_next: >XENSTORE_ATTACHED
demu_seq_next: domid = 2
demu_seq_next: filename[0] = /dev/loop0
demu_seq_next: readonly[0] = 0
demu_seq_next: base[0]     = 0x2000000
demu_seq_next: irq[0]      = 33
demu_seq_next: >XENCTRL_OPEN
demu_seq_next: >XENEVTCHN_OPEN
demu_seq_next: >XENFOREIGNMEMORY_OPEN
demu_seq_next: >XENDEVICEMODEL_OPEN
demu_initialize: 2 vCPU(s)
demu_seq_next: >SERVER_REGISTERED
demu_seq_next: ioservid = 0
demu_seq_next: >RESOURCE_MAPPED
demu_seq_next: shared_iopage = 0xffffae6de000
demu_seq_next: buffered_iopage = 0xffffae6dd000
demu_seq_next: >SERVER_ENABLED
demu_seq_next: >PORT_ARRAY_ALLOCATED
demu_seq_next: >EVTCHN_PORTS_BOUND
demu_seq_next: VCPU0: 3 -> 7
demu_seq_next: VCPU1: 5 -> 8
demu_seq_next: >EVTCHN_BUF_PORT_BOUND
demu_seq_next: 0 -> 9
demu_register_memory_space: 2000000 - 20001ff
  Info: (virtio/mmio.c) virtio_mmio_init:290: virtio-mmio.devices=0x200@0x2000000:33
demu_seq_next: >DEVICE_INITIALIZED
demu_seq_next: >INITIALIZED
IO request not ready
IO request not ready

----------------
Dom-U logs:
[    0.491037] xen:xen_evtchn: Event-channel device installed
[    0.493600] Initialising Xen pvcalls frontend driver
[    0.516807] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    0.525565] cacheinfo: Unable to detect cache hierarchy for CPU 0
[    0.562275] brd: module loaded
[    0.595300] loop: module loaded
[    0.683800] virtio_blk virtio0: [vda] 131072 512-byte logical blocks (67.1 MB/64.0 MiB)
[    0.684000] vda: detected capacity change from 0 to 67108864


/ # dd if=/dev/vda of=/dev/null bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (64.0MB) copied, 3.196242 seconds, 20.0MB/s
/ # dd if=/dev/zero of=/dev/vda bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (64.0MB) copied, 3.704594 seconds, 17.3MB/s
---------------------

The read/write seems OK in dom-U. The FVP platform is an emulator, so the performance numbers are not representative.
We will test it on real hardware like the N1SDP.

Thanks,
Wei Chen

----------------------------------------------------------------------------------
From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
Sent: November 1, 2020 5:11
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>; Alex Bennée <alex.bennee@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>; xen-devel <xen-devel@lists.xenproject.org>; Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Julien Grall <Julien.Grall@arm.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Tim Deegan <tim@xen.org>; Daniel De Graaf <dgdegra@tycho.nsa.gov>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Anthony PERARD <anthony.perard@citrix.com>; Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm



On Fri, Oct 30, 2020 at 1:34 PM Masami Hiramatsu <masami.hiramatsu@linaro.org> wrote:
Hi Oleksandr,
 
Hi Masami, all

[sorry for the possible format issue]
 
>> >
>> >       Could you tell me how can I test it?
>> >
>> >
>> > I assume it is due to the lack of the virtio-disk backend (which I haven't shared yet as I focused on the IOREQ/DM support on Arm in the
>> > first place).
>> > Could you wait a little bit, I am going to share it soon.
>>
>> Do you have a quick-and-dirty hack you can share in the meantime? Even
>> just on github as a special branch? It would be very useful to be able
>> to have a test-driver for the new feature.
>
> Well, I will provide a branch on github with our PoC virtio-disk backend by the end of this week. It will be possible to test this series with it.

Great! OK I'll be waiting for the PoC backend.

Thank you!

You can find the virtio-disk backend PoC (shared as is) at [1]. 
Brief description...

The virtio-disk backend PoC is a completely standalone entity (an IOREQ server) which emulates a virtio-mmio disk device.
It is based on code from DEMU [2] (for the IOREQ server parts), some code from kvmtool [3] to implement the virtio protocol and
disk operations over the underlying H/W, and Xenbus code to read its configuration from Xenstore
(it is configured via the domain config file). The last patch in this series (marked as RFC) adds the required bits to the libxl code.
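
To illustrate the DEMU-derived IOREQ server part, here is a rough sketch of the registration sequence that shows up as the
demu_seq_next states in Wei's log, using the public libxendevicemodel/libxenforeignmemory/libxenevtchn calls. This is not the
actual virtio-disk code: error handling, the second vCPU, the buffered ioreq ring and the virtio-mmio register emulation itself
are omitted, and the MMIO window parameters are only placeholders taken from the log.

/* Sketch only -- not the actual backend code. */
#include <sys/mman.h>
#include <xenctrl.h>
#include <xendevicemodel.h>
#include <xenforeignmemory.h>
#include <xenevtchn.h>
#include <xen/hvm/ioreq.h>

static void ioreq_server_sketch(domid_t domid, uint64_t base, uint64_t size)
{
    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    xenevtchn_handle *xce = xenevtchn_open(NULL, 0);
    xenforeignmemory_resource_handle *fres;
    ioservid_t ioservid;
    void *addr = NULL;

    /* >SERVER_REGISTERED: create an IOREQ server for the guest domain. */
    xendevicemodel_create_ioreq_server(dmod, domid,
                                       HVM_IOREQSRV_BUFIOREQ_ATOMIC, &ioservid);

    /*
     * >RESOURCE_MAPPED: map the bufioreq + ioreq pages via the acquire
     * interface (this is what needs IOCTL_PRIVCMD_MMAP_RESOURCE in the
     * backend domain's kernel, see below).  Frame 0 is the buffered page,
     * frame 1 the synchronous (shared) ioreq page.
     */
    fres = xenforeignmemory_map_resource(fmem, domid,
                                         XENMEM_resource_ioreq_server, ioservid,
                                         0, 2, &addr, PROT_READ | PROT_WRITE, 0);
    (void)fres; /* would be released with xenforeignmemory_unmap_resource() */
    shared_iopage_t *iopage = (void *)((char *)addr + XC_PAGE_SIZE);

    /* >SERVER_ENABLED + register the virtio-mmio window (e.g. 0x200@0x2000000). */
    xendevicemodel_set_ioreq_server_state(dmod, domid, ioservid, 1);
    xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, ioservid,
                                                1 /* MMIO */, base,
                                                base + size - 1);

    /* >EVTCHN_PORTS_BOUND: one port per vCPU; only vCPU0 shown here. */
    ioreq_t *ioreq = &iopage->vcpu_ioreq[0];
    int port = xenevtchn_bind_interdomain(xce, domid, ioreq->vp_eport);

    for (;;) {
        int p = xenevtchn_pending(xce);   /* block until the guest traps */
        xenevtchn_unmask(xce, p);

        if (ioreq->state != STATE_IOREQ_READY)
            continue;                     /* "IO request not ready" */

        ioreq->state = STATE_IOREQ_INPROCESS;
        /* ... decode ioreq->addr / dir / size and emulate the access ... */
        ioreq->state = STATE_IORESP_READY;
        xenevtchn_notify(xce, port);      /* tell Xen the response is ready */
    }
}

The two "IO request not ready" lines in Wei's log presumably come from exactly that state check: the event channel fired but the
shared slot was not yet in STATE_IOREQ_READY, which is harmless at start-up.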

Some notes...

The backend can be used with the current V2 IOREQ series [4] without any modifications; all you need is to enable
CONFIG_IOREQ_SERVER on Arm [5], since it is disabled by default within this series.
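
Assuming the option is exposed as a regular Kconfig symbol, enabling it amounts to setting
CONFIG_IOREQ_SERVER=y
in xen/.config (for example via "make -C xen menuconfig") and rebuilding Xen; otherwise just apply the commit at [5].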

Please note that in our system we run the backend in DomD (a driver domain). I haven't tested it in Dom0,
since in our system Dom0 is thin (without any H/W) and is only used to launch VMs, so there is no underlying block H/W.
But I expect it is possible to run it in Dom0 as well (at least there is nothing specific to a particular domain in the backend itself, nothing hardcoded).
If you are going to run the backend in a domain other than Dom0, you need to write your own FLASK policy for the backend (running in that domain)
to be able to issue DM-related requests, etc. For test purposes only, you could use this patch [6], which tweaks the Xen dummy policy (not for upstream).
  
As I mentioned elsewhere, you don't need to modify the guest Linux (DomU), just enable the VirtIO-related configs.
If I remember correctly, the following would be enough:
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
If I remember correctly, if your host Linux (Dom0 or DomD) version is >= 4.17 you don't need to modify it either.
Otherwise, you need to cherry-pick "xen/privcmd: add IOCTL_PRIVCMD_MMAP_RESOURCE" from upstream to be able
to use the acquire interface for the resource mapping.


We usually build the backend as part of the Yocto build process and run it as a systemd service,
but you can also build and run it manually (it should be launched before DomU creation).

There are no command line options at all. Everything is configured via the domain configuration file:
# This option is mandatory; it indicates that VirtIO is going to be used by the guest
virtio=1
# Example of domain configuration (two disks are assigned to the guest, the latter is in readonly mode):
vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
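
For context: these are the per-disk values (backend domain, file names, MMIO base, interrupt) that the libxl bits from the last
patch write into Xenstore and that the backend reads back at start-up; in Wei's log above they appear as the
demu_seq_next: filename[0]/base[0]/irq[0] lines (base 0x2000000, interrupt 33).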

Hope that helps. Feel free to ask questions if any.

[1] https://github.com/xen-troops/virtio-disk/commits/ioreq_v3
[2] https://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git;a=summary
[3] https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/
[4] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml3
[5] https://github.com/otyshchenko1/xen/commit/ee221102193f0422a240832edc41d73f6f3da923
[6] https://github.com/otyshchenko1/xen/commit/be868a63014b7aa6c9731d5692200d7f2f57c611

-- 
Regards,

Oleksandr Tyshchenko
Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
Posted by Oleksandr 3 years, 5 months ago
On 02.11.20 09:23, Wei Chen wrote:
> Hi Oleksandr,

Hi Wei.


>
> Thanks for sharing the virtio-disk backend. I have tested it on the Arm FVP_base platform.
> We used Domain-0 to run the virtio disk backend. The backend disk is a loop device.
> [...]
>
> The read/write seems OK in dom-U. The FVP platform is an emulator, so the performance numbers are not representative.
> We will test it on real hardware like the N1SDP.


This is really good news. Thank you for testing!


-- 
Regards,

Oleksandr Tyshchenko