[RFC patch 0/1] block: vhost-blk backend

Andrey Zhadchenko posted 1 patch 1 year, 9 months ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/20220725205527.313973-1-andrey.zhadchenko@virtuozzo.com
Maintainers: Kevin Wolf <kwolf@redhat.com>, Hanna Reitz <hreitz@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>
There is a newer version of this series
[RFC patch 0/1] block: vhost-blk backend
Posted by Andrey Zhadchenko 1 year, 9 months ago
Although QEMU virtio-blk is quite fast, there is still some room for
improvement. Disk latency can be reduced if we handle virtio-blk requests
in the host kernel, avoiding a lot of syscalls and context switches.

The biggest disadvantage of this vhost-blk flavor is that it is limited to the raw format.
Luckily, Kirill Thai has proposed a device-mapper driver for the QCOW2 format that attaches
files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html

Also, by using kernel modules we can bypass the iothread limitation and finally scale
block requests with the number of CPUs for high-performance devices. This is planned to be
implemented in the next version.

Linux kernel module part:
https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/

Test setups and results:
fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
QEMU drive options: cache=none
filesystem: xfs

SSD:
               | randread, IOPS | randwrite, IOPS |
Host           |          95.8k |           85.3k |
QEMU virtio    |          57.5k |           79.4k |
QEMU vhost-blk |          95.6k |           84.3k |

RAMDISK (vq == vcpu):
                 | randread, IOPS | randwrite, IOPS |
virtio, 1vcpu    |           123k |            129k |
virtio, 2vcpu    |      253k (??) |       250k (??) |
virtio, 4vcpu    |           158k |            154k |
vhost-blk, 1vcpu |           110k |            113k |
vhost-blk, 2vcpu |           247k |            252k |
vhost-blk, 4vcpu |           576k |            567k |

Andrey Zhadchenko (1):
  block: add vhost-blk backend

 configure                     |  13 ++
 hw/block/Kconfig              |   5 +
 hw/block/meson.build          |   1 +
 hw/block/vhost-blk.c          | 395 ++++++++++++++++++++++++++++++++++
 hw/virtio/meson.build         |   1 +
 hw/virtio/vhost-blk-pci.c     | 102 +++++++++
 include/hw/virtio/vhost-blk.h |  44 ++++
 linux-headers/linux/vhost.h   |   3 +
 8 files changed, 564 insertions(+)
 create mode 100644 hw/block/vhost-blk.c
 create mode 100644 hw/virtio/vhost-blk-pci.c
 create mode 100644 include/hw/virtio/vhost-blk.h

-- 
2.31.1
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Stefan Hajnoczi 1 year, 6 months ago
On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
> Although QEMU virtio-blk is quite fast, there is still some room for
> improvements. Disk latency can be reduced if we handle virito-blk requests
> in host kernel so we avoid a lot of syscalls and context switches.
> 
> The biggest disadvantage of this vhost-blk flavor is raw format.
> Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
> files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
> 
> Also by using kernel modules we can bypass iothread limitation and finaly scale
> block requests with cpus for high-performance devices. This is planned to be
> implemented in next version.
> 
> Linux kernel module part:
> https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/
> 
> test setups and results:
> fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128

> QEMU drive options: cache=none
> filesystem: xfs

Please post the full QEMU command-line so it's clear exactly what this
is benchmarking.

A preallocated raw image file is a good baseline with:

  --object iothread,id=iothread0 \
  --blockdev file,filename=test.img,cache.direct=on,aio=native,node-name=drive0 \
  --device virtio-blk-pci,drive=drive0,iothread=iothread0

(BTW QEMU's default vq size is 256 descriptors and the number of vqs is
the number of vCPUs.)
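
If it helps isolate variables, both can also be set explicitly on the device,
for example (the values here are just an illustration, not a recommendation):

  --device virtio-blk-pci,drive=drive0,iothread=iothread0,num-queues=4,queue-size=256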

> 
> SSD:
>                | randread, IOPS  | randwrite, IOPS |
> Host           |      95.8k	 |	85.3k	   |
> QEMU virtio    |      57.5k	 |	79.4k	   |
> QEMU vhost-blk |      95.6k	 |	84.3k	   |
> 
> RAMDISK (vq == vcpu):

With fio numjobs=vcpu here?

>                  | randread, IOPS | randwrite, IOPS |
> virtio, 1vcpu    |	123k	  |	 129k       |
> virtio, 2vcpu    |	253k (??) |	 250k (??)  |

QEMU's aio=threads (default) gets around the single IOThread. It beats
aio=native for this reason in some cases. Were you using aio=native or
aio=threads?

> virtio, 4vcpu    |	158k	  |	 154k       |
> vhost-blk, 1vcpu |	110k	  |	 113k       |
> vhost-blk, 2vcpu |	247k	  |	 252k       |
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Andrey Zhadchenko 1 year, 6 months ago

On 10/4/22 21:26, Stefan Hajnoczi wrote:
> On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
>> Although QEMU virtio-blk is quite fast, there is still some room for
>> improvements. Disk latency can be reduced if we handle virito-blk requests
>> in host kernel so we avoid a lot of syscalls and context switches.
>>
>> The biggest disadvantage of this vhost-blk flavor is raw format.
>> Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
>> files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
>>
>> Also by using kernel modules we can bypass iothread limitation and finaly scale
>> block requests with cpus for high-performance devices. This is planned to be
>> implemented in next version.
>>
>> Linux kernel module part:
>> https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/
>>
>> test setups and results:
>> fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
> 
>> QEMU drive options: cache=none
>> filesystem: xfs
> 
> Please post the full QEMU command-line so it's clear exactly what this
> is benchmarking.

The full command for vhost is this:
qemu-system-x86_64 \
-kernel bzImage -nographic -append "console=ttyS0 root=/dev/sdb rw systemd.unified_cgroup_hierarchy=0 nokaslr" \
-m 1024 -s --enable-kvm -smp $2 \
-drive id=main_drive,file=debian_sid.img,media=disk,format=raw \
-drive id=vhost_drive,file=$1,media=disk,format=raw,if=none \
-device vhost-blk-pci,drive=vhost_drive,num-threads=$3

(num-threads option for vhost-blk-pci was not used)

For virtio I used this:
qemu-system-x86_64 \
-kernel bzImage -nographic -append "console=ttyS0 root=/dev/sdb rw systemd.unified_cgroup_hierarchy=0 nokaslr" \
-m 1024 -s --enable-kvm -smp $2 \
-drive file=debian_sid.img,media=disk \
-drive file=$1,media=disk,if=virtio,cache=none,if=none,id=d1,aio=threads\
-device virtio-blk-pci,drive=d1

> 
> A preallocated raw image file is a good baseline with:
> 
>    --object iothread,id=iothread0 \
>    --blockdev file,filename=test.img,cache.direct=on,aio=native,node-name=drive0 \
>    --device virtio-blk-pci,drive=drive0,iothread=iothread0

The image I used was a preallocated qcow2 image set up with dm-qcow2,
because this vhost-blk version directly uses the bio interface and can't
work with regular files.

> 
> (BTW QEMU's default vq size is 256 descriptors and the number of vqs is
> the number of vCPUs.)
> 
>>
>> SSD:
>>                 | randread, IOPS  | randwrite, IOPS |
>> Host           |      95.8k	 |	85.3k	   |
>> QEMU virtio    |      57.5k	 |	79.4k	   |

Adding iothread0 and using a raw file instead of the qcow2 + dm-qcow2 setup
brings the numbers to
                   |      60.4k   |      84.3k      |

>> QEMU vhost-blk |      95.6k	 |	84.3k	   |
>>
>> RAMDISK (vq == vcpu):
> 
> With fio numjobs=vcpu here?

Yes

> 
>>                   | randread, IOPS | randwrite, IOPS |
>> virtio, 1vcpu    |	123k	  |	 129k       |
>> virtio, 2vcpu    |	253k (??) |	 250k (??)  |
> 
> QEMU's aio=threads (default) gets around the single IOThread. It beats
> aio=native for this reason in some cases. Were you using aio=native or
> aio=threads?

At some point I started specifying aio=threads (before that I did not
use this option). I am not sure exactly when. I will re-measure all
cases for the next submission.

> 
>> virtio, 4vcpu    |	158k	  |	 154k       |
>> vhost-blk, 1vcpu |	110k	  |	 113k       |
>> vhost-blk, 2vcpu |	247k	  |	 252k       |
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Stefan Hajnoczi 1 year, 6 months ago
On Wed, Oct 05, 2022 at 01:28:14PM +0300, Andrey Zhadchenko wrote:
> 
> 
> On 10/4/22 21:26, Stefan Hajnoczi wrote:
> > On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
> > > Although QEMU virtio-blk is quite fast, there is still some room for
> > > improvements. Disk latency can be reduced if we handle virito-blk requests
> > > in host kernel so we avoid a lot of syscalls and context switches.
> > > 
> > > The biggest disadvantage of this vhost-blk flavor is raw format.
> > > Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
> > > files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
> > > 
> > > Also by using kernel modules we can bypass iothread limitation and finaly scale
> > > block requests with cpus for high-performance devices. This is planned to be
> > > implemented in next version.
> > > 
> > > Linux kernel module part:
> > > https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/
> > > 
> > > test setups and results:
> > > fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
> > 
> > > QEMU drive options: cache=none
> > > filesystem: xfs
> > 
> > Please post the full QEMU command-line so it's clear exactly what this
> > is benchmarking.
> 
> The full command for vhost is this:
> qemu-system-x86_64 \
> -kernel bzImage -nographic -append "console=ttyS0 root=/dev/sdb rw
> systemd.unified_cgroup_hierarchy=0 nokaslr" \
> -m 1024 -s --enable-kvm -smp $2 \
> -drive id=main_drive,file=debian_sid.img,media=disk,format=raw \
> -drive id=vhost_drive,file=$1,media=disk,format=raw,if=none \

No cache=none because vhost-blk directly submits bios in the kernel?

> -device vhost-blk-pci,drive=vhost_drive,num-threads=$3
> 
> (num-threads option for vhost-blk-pci was not used)
> 
> For virtio I used this:
> qemu-system-x86_64 \
> -kernel bzImage -nographic -append "console=ttyS0 root=/dev/sdb rw
> systemd.unified_cgroup_hierarchy=0 nokaslr" \
> -m 1024 -s --enable-kvm -smp $2 \
> -drive file=debian_sid.img,media=disk \
> -drive file=$1,media=disk,if=virtio,cache=none,if=none,id=d1,aio=threads\
> -device virtio-blk-pci,drive=d1
> 
> > 
> > A preallocated raw image file is a good baseline with:
> > 
> >    --object iothread,id=iothread0 \
> >    --blockdev file,filename=test.img,cache.direct=on,aio=native,node-name=drive0 \
> >    --device virtio-blk-pci,drive=drive0,iothread=iothread0
> The image I used was preallocated qcow2 image set up with dm-qcow2 because
> this vhost-blk version directly uses bio interface and can't work with
> regular files.

I see. 

> 
> > 
> > (BTW QEMU's default vq size is 256 descriptors and the number of vqs is
> > the number of vCPUs.)
> > 
> > > 
> > > SSD:
> > >                 | randread, IOPS  | randwrite, IOPS |
> > > Host           |      95.8k	 |	85.3k	   |
> > > QEMU virtio    |      57.5k	 |	79.4k	   |
> 
> Adding iothread0 and using raw file instead of qcow2 + dm-qcow2 setup brings
> the numbers to
>                   |      60.4k   |      84.3k      |
> 
> > > QEMU vhost-blk |      95.6k	 |	84.3k	   |
> > > 
> > > RAMDISK (vq == vcpu):
> > 
> > With fio numjobs=vcpu here?
> 
> Yes
> 
> > 
> > >                   | randread, IOPS | randwrite, IOPS |
> > > virtio, 1vcpu    |	123k	  |	 129k       |
> > > virtio, 2vcpu    |	253k (??) |	 250k (??)  |
> > 
> > QEMU's aio=threads (default) gets around the single IOThread. It beats
> > aio=native for this reason in some cases. Were you using aio=native or
> > aio=threads?
> 
> At some point of time I started to specify aio=threads (and before that I
> did not use this option). I am not sure when exactly. I will re-measure all
> cases for the next submission.

aio=native is usually recommended. aio=threads is less optimized.

aio=native should have lower latency than aio=threads although it scales
worse on hosts with free CPUs because it's limited to a single thread.
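
For example, keeping the baseline command line above unchanged and only switching
the aio= setting on the file node isolates that variable (test.img is a placeholder):

  --blockdev file,filename=test.img,cache.direct=on,aio=native,node-name=drive0
  --blockdev file,filename=test.img,cache.direct=on,aio=threads,node-name=drive0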

> 
> > 
> > > virtio, 4vcpu    |	158k	  |	 154k       |
> > > vhost-blk, 1vcpu |	110k	  |	 113k       |
> > > vhost-blk, 2vcpu |	247k	  |	 252k       |
> 
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Stefan Hajnoczi 1 year, 6 months ago
On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
> Although QEMU virtio-blk is quite fast, there is still some room for
> improvements. Disk latency can be reduced if we handle virito-blk requests
> in host kernel so we avoid a lot of syscalls and context switches.
> 
> The biggest disadvantage of this vhost-blk flavor is raw format.
> Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
> files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
> 
> Also by using kernel modules we can bypass iothread limitation and finaly scale
> block requests with cpus for high-performance devices. This is planned to be
> implemented in next version.
> 
> Linux kernel module part:
> https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/
> 
> test setups and results:
> fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
> QEMU drive options: cache=none
> filesystem: xfs
> 
> SSD:
>                | randread, IOPS  | randwrite, IOPS |
> Host           |      95.8k	 |	85.3k	   |
> QEMU virtio    |      57.5k	 |	79.4k	   |
> QEMU vhost-blk |      95.6k	 |	84.3k	   |
> 
> RAMDISK (vq == vcpu):
>                  | randread, IOPS | randwrite, IOPS |
> virtio, 1vcpu    |	123k	  |	 129k       |
> virtio, 2vcpu    |	253k (??) |	 250k (??)  |
> virtio, 4vcpu    |	158k	  |	 154k       |
> vhost-blk, 1vcpu |	110k	  |	 113k       |
> vhost-blk, 2vcpu |	247k	  |	 252k       |
> vhost-blk, 4vcpu |	576k	  |	 567k       |
> 
> Andrey Zhadchenko (1):
>   block: add vhost-blk backend
> 
>  configure                     |  13 ++
>  hw/block/Kconfig              |   5 +
>  hw/block/meson.build          |   1 +
>  hw/block/vhost-blk.c          | 395 ++++++++++++++++++++++++++++++++++
>  hw/virtio/meson.build         |   1 +
>  hw/virtio/vhost-blk-pci.c     | 102 +++++++++
>  include/hw/virtio/vhost-blk.h |  44 ++++
>  linux-headers/linux/vhost.h   |   3 +
>  8 files changed, 564 insertions(+)
>  create mode 100644 hw/block/vhost-blk.c
>  create mode 100644 hw/virtio/vhost-blk-pci.c
>  create mode 100644 include/hw/virtio/vhost-blk.h

vhost-blk has been tried several times in the past. That doesn't mean it
cannot be merged this time, but past arguments should be addressed:

- What makes it necessary to move the code into the kernel? In the past
  the performance results were not very convincing. The fastest
  implementations actually tend to be userspace NVMe PCI drivers that
  bypass the kernel! Bypassing the VFS and submitting block requests
  directly was not a huge boost. The syscall/context switch argument
  sounds okay but the numbers didn't really show that kernel block I/O
  is much faster than userspace block I/O.

  I've asked for more details on the QEMU command-line to understand
  what your numbers show. Maybe something has changed since previous
  times when vhost-blk has been tried.

  The only argument I see is QEMU's current 1 IOThread per virtio-blk
  device limitation, which is currently being worked on. If that's the
  only reason for vhost-blk then is it worth doing all the work of
  getting vhost-blk shipped (kernel, QEMU, and libvirt changes)? It
  seems like a short-term solution.

- The security impact of bugs in kernel vhost-blk code is more serious
  than bugs in a QEMU userspace process.

- The management stack needs to be changed to use vhost-blk whereas
  QEMU can be optimized without affecting other layers.

Stefan
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Andrey Zhadchenko 1 year, 6 months ago

On 10/4/22 22:00, Stefan Hajnoczi wrote:
> On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
>> Although QEMU virtio-blk is quite fast, there is still some room for
>> improvements. Disk latency can be reduced if we handle virito-blk requests
>> in host kernel so we avoid a lot of syscalls and context switches.
>>
>> The biggest disadvantage of this vhost-blk flavor is raw format.
>> Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
>> files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
>>
>> Also by using kernel modules we can bypass iothread limitation and finaly scale
>> block requests with cpus for high-performance devices. This is planned to be
>> implemented in next version.
>>
>> Linux kernel module part:
>> https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/
>>
>> test setups and results:
>> fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
>> QEMU drive options: cache=none
>> filesystem: xfs
>>
>> SSD:
>>                 | randread, IOPS  | randwrite, IOPS |
>> Host           |      95.8k	 |	85.3k	   |
>> QEMU virtio    |      57.5k	 |	79.4k	   |
>> QEMU vhost-blk |      95.6k	 |	84.3k	   |
>>
>> RAMDISK (vq == vcpu):
>>                   | randread, IOPS | randwrite, IOPS |
>> virtio, 1vcpu    |	123k	  |	 129k       |
>> virtio, 2vcpu    |	253k (??) |	 250k (??)  |
>> virtio, 4vcpu    |	158k	  |	 154k       |
>> vhost-blk, 1vcpu |	110k	  |	 113k       |
>> vhost-blk, 2vcpu |	247k	  |	 252k       |
>> vhost-blk, 4vcpu |	576k	  |	 567k       |
>>
>> Andrey Zhadchenko (1):
>>    block: add vhost-blk backend
>>
>>   configure                     |  13 ++
>>   hw/block/Kconfig              |   5 +
>>   hw/block/meson.build          |   1 +
>>   hw/block/vhost-blk.c          | 395 ++++++++++++++++++++++++++++++++++
>>   hw/virtio/meson.build         |   1 +
>>   hw/virtio/vhost-blk-pci.c     | 102 +++++++++
>>   include/hw/virtio/vhost-blk.h |  44 ++++
>>   linux-headers/linux/vhost.h   |   3 +
>>   8 files changed, 564 insertions(+)
>>   create mode 100644 hw/block/vhost-blk.c
>>   create mode 100644 hw/virtio/vhost-blk-pci.c
>>   create mode 100644 include/hw/virtio/vhost-blk.h
> 
> vhost-blk has been tried several times in the past. That doesn't mean it
> cannot be merged this time, but past arguments should be addressed:
> 
> - What makes it necessary to move the code into the kernel? In the past
>    the performance results were not very convincing. The fastest
>    implementations actually tend to be userspace NVMe PCI drivers that
>    bypass the kernel! Bypassing the VFS and submitting block requests
>    directly was not a huge boost. The syscall/context switch argument
>    sounds okay but the numbers didn't really show that kernel block I/O
>    is much faster than userspace block I/O.
> 
>    I've asked for more details on the QEMU command-line to understand
>    what your numbers show. Maybe something has changed since previous
>    times when vhost-blk has been tried.
> 
>    The only argument I see is QEMU's current 1 IOThread per virtio-blk
>    device limitation, which is currently being worked on. If that's the
>    only reason for vhost-blk then is it worth doing all the work of
>    getting vhost-blk shipped (kernel, QEMU, and libvirt changes)? It
>    seems like a short-term solution.
> 
> - The security impact of bugs in kernel vhost-blk code is more serious
>    than bugs in a QEMU userspace process.
> 
> - The management stack needs to be changed to use vhost-blk whereas
>    QEMU can be optimized without affecting other layers.
> 
> Stefan

Indeed there have been several vhost-blk attempts, but from what I found in 
the mailing lists only Asias' attempt got some attention and discussion. 
Ramdisk performance results were great, but a ramdisk is more a benchmark 
than a real use case. I didn't find out why Asias dropped his version except 
a vague "he concluded the performance results were not worth it". Storage 
speed is very important for vhost-blk performance, as there is no point in 
cutting CPU costs from 1ms to 0.1ms if the request needs 50ms to complete 
on the actual disk. I think that 10 years ago NVMe was non-existent and 
SSD + SATA was probably a lot faster than HDD, but still not fast enough to 
make use of this technology.

The tests I did give me 60k IOPS randwrite for the VM and 95k for the host. 
And vhost-blk is able to negate the difference even when using only 1 
thread/vq/vcpu. And unlike the current QEMU single IOThread, it can be 
easily scaled with the number of CPUs/vCPUs. For sure this can be solved by 
lifting the IOThread limitations, but that will probably require an even more 
drastic amount of changes (and adding vhost-blk won't break old setups!).

Probably the only undisputed advantage of vhost-blk is the reduction in 
syscalls. And again, the benefit really depends on storage speed, as it 
should be somewhat comparable to syscall time. Also, I must note that this 
may be good for high-density servers with a lot of VMs. But for now I do 
not have exact numbers showing how much time we really win per request on 
average.

Overall, vhost-blk will only become more attractive as storage speeds 
increase.

Also I must note that all the arguments above apply to vdpa-blk as well. And 
unlike vhost-blk, which needs its own QEMU code, vdpa-blk can be set up with 
the generic virtio-vdpa QEMU code (I am not sure if it is merged yet, but 
still). Although vdpa-blk has its own problems for now.
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Stefan Hajnoczi 1 year, 6 months ago
On Wed, Oct 05, 2022 at 02:50:06PM +0300, Andrey Zhadchenko wrote:
> 
> 
> On 10/4/22 22:00, Stefan Hajnoczi wrote:
> > On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
> > > Although QEMU virtio-blk is quite fast, there is still some room for
> > > improvements. Disk latency can be reduced if we handle virito-blk requests
> > > in host kernel so we avoid a lot of syscalls and context switches.
> > > 
> > > The biggest disadvantage of this vhost-blk flavor is raw format.
> > > Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
> > > files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
> > > 
> > > Also by using kernel modules we can bypass iothread limitation and finaly scale
> > > block requests with cpus for high-performance devices. This is planned to be
> > > implemented in next version.
> > > 
> > > Linux kernel module part:
> > > https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/
> > > 
> > > test setups and results:
> > > fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
> > > QEMU drive options: cache=none
> > > filesystem: xfs
> > > 
> > > SSD:
> > >                 | randread, IOPS  | randwrite, IOPS |
> > > Host           |      95.8k	 |	85.3k	   |
> > > QEMU virtio    |      57.5k	 |	79.4k	   |
> > > QEMU vhost-blk |      95.6k	 |	84.3k	   |
> > > 
> > > RAMDISK (vq == vcpu):
> > >                   | randread, IOPS | randwrite, IOPS |
> > > virtio, 1vcpu    |	123k	  |	 129k       |
> > > virtio, 2vcpu    |	253k (??) |	 250k (??)  |
> > > virtio, 4vcpu    |	158k	  |	 154k       |
> > > vhost-blk, 1vcpu |	110k	  |	 113k       |
> > > vhost-blk, 2vcpu |	247k	  |	 252k       |
> > > vhost-blk, 4vcpu |	576k	  |	 567k       |
> > > 
> > > Andrey Zhadchenko (1):
> > >    block: add vhost-blk backend
> > > 
> > >   configure                     |  13 ++
> > >   hw/block/Kconfig              |   5 +
> > >   hw/block/meson.build          |   1 +
> > >   hw/block/vhost-blk.c          | 395 ++++++++++++++++++++++++++++++++++
> > >   hw/virtio/meson.build         |   1 +
> > >   hw/virtio/vhost-blk-pci.c     | 102 +++++++++
> > >   include/hw/virtio/vhost-blk.h |  44 ++++
> > >   linux-headers/linux/vhost.h   |   3 +
> > >   8 files changed, 564 insertions(+)
> > >   create mode 100644 hw/block/vhost-blk.c
> > >   create mode 100644 hw/virtio/vhost-blk-pci.c
> > >   create mode 100644 include/hw/virtio/vhost-blk.h
> > 
> > vhost-blk has been tried several times in the past. That doesn't mean it
> > cannot be merged this time, but past arguments should be addressed:
> > 
> > - What makes it necessary to move the code into the kernel? In the past
> >    the performance results were not very convincing. The fastest
> >    implementations actually tend to be userspace NVMe PCI drivers that
> >    bypass the kernel! Bypassing the VFS and submitting block requests
> >    directly was not a huge boost. The syscall/context switch argument
> >    sounds okay but the numbers didn't really show that kernel block I/O
> >    is much faster than userspace block I/O.
> > 
> >    I've asked for more details on the QEMU command-line to understand
> >    what your numbers show. Maybe something has changed since previous
> >    times when vhost-blk has been tried.
> > 
> >    The only argument I see is QEMU's current 1 IOThread per virtio-blk
> >    device limitation, which is currently being worked on. If that's the
> >    only reason for vhost-blk then is it worth doing all the work of
> >    getting vhost-blk shipped (kernel, QEMU, and libvirt changes)? It
> >    seems like a short-term solution.
> > 
> > - The security impact of bugs in kernel vhost-blk code is more serious
> >    than bugs in a QEMU userspace process.
> > 
> > - The management stack needs to be changed to use vhost-blk whereas
> >    QEMU can be optimized without affecting other layers.
> > 
> > Stefan
> 
> Indeed there was several vhost-blk attempts, but from what I found in
> mailing lists only the Asias attempt got some attention and discussion.
> Ramdisk performance results were great but ramdisk is more a benchmark than
> a real use. I didn't find out why Asias dropped his version except vague "He
> concluded performance results was not worth". The storage speed is very
> important for vhost-blk performance, as there is no point to cut cpu costs
> from 1ms to 0,1ms if the request need 50ms to proceed in the actual disk. I
> think that 10 years ago NVMI was non-existent and SSD + SATA was probably a
> lot faster than HDD but still not enough to utilize this technology.

Yes, it's possible that latency improvements are more noticeable now.
Thank you for posting the benchmark results. I will also run benchmarks
so we can compare vhost-blk with today's QEMU as well as multiqueue
IOThreads QEMU (for which I only have a hacky prototype) on a local NVMe
PCI SSD.

> The tests I did give me 60k IOPS randwrite for VM and 95k for host. And the
> vhost-blk is able to negate the difference even using only 1 thread/vq/vcpu.
> And unlinke current QEMU single IOThread it can be easily scaled with number
> of cpus/vcpus. For sure this can be solved by liftimg IOThread limitations
> but this will probably be even more disastrous amount of changes (and adding
> vhost-blk won't break old setups!).
> 
> Probably the only undisputed advantage of vhost-blk is syscalls reduction.
> And again the benefit really depends on a storage speed, as it should be
> somehow comparable with syscalls time. Also I must note that this may be
> good for high-density servers with a lot of VMs. But for now I did not have
> the exact numbers which show how much time we are really winning for a
> single request at average.
> 
> Overall vhost-blk will only become better along with the increase of storage
> speed.
> 
> Also I must note that all arguments above apply to vdpa-blk. And unlike
> vhost-blk, which needs it's own QEMU code, vdpa-blk can be setup with
> generic virtio-vdpa QEMU code (I am not sure if it is merged yet but still).
> Although vdpa-blk have it's own problems for now.

Yes, I think that's why Stefano hasn't pushed for a software vdpa-blk
device yet despite having played with it and is more focused on
hardware enablement. vdpa-blk has the same issues as vhost-blk.

Stefan
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Stefan Hajnoczi 1 year, 6 months ago
On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
> Although QEMU virtio-blk is quite fast, there is still some room for
> improvements. Disk latency can be reduced if we handle virito-blk requests
> in host kernel so we avoid a lot of syscalls and context switches.
> 
> The biggest disadvantage of this vhost-blk flavor is raw format.
> Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
> files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
> 
> Also by using kernel modules we can bypass iothread limitation and finaly scale
> block requests with cpus for high-performance devices. This is planned to be
> implemented in next version.

Hi Andrey,
Do you have a new version of this patch series that uses multiple
threads?

I have been playing with vq-IOThread mapping in QEMU and would like to
benchmark vhost-blk vs QEMU virtio-blk mq IOThreads:
https://gitlab.com/stefanha/qemu/-/tree/virtio-blk-mq-iothread-prototype

Thanks,
Stefan
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Andrey Zhadchenko 1 year, 6 months ago
On 10/4/22 21:13, Stefan Hajnoczi wrote:
> On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
>> Although QEMU virtio-blk is quite fast, there is still some room for
>> improvements. Disk latency can be reduced if we handle virito-blk requests
>> in host kernel so we avoid a lot of syscalls and context switches.
>>
>> The biggest disadvantage of this vhost-blk flavor is raw format.
>> Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
>> files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
>>
>> Also by using kernel modules we can bypass iothread limitation and finaly scale
>> block requests with cpus for high-performance devices. This is planned to be
>> implemented in next version.
> 
> Hi Andrey,
> Do you have a new version of this patch series that uses multiple
> threads?
> 
> I have been playing with vq-IOThread mapping in QEMU and would like to
> benchmark vhost-blk vs QEMU virtio-blk mq IOThreads:
> https://gitlab.com/stefanha/qemu/-/tree/virtio-blk-mq-iothread-prototype
> 
> Thanks,
> Stefan

Hi Stefan
For now my multi-threaded version is only available for the Red Hat 9 5.14.0 
kernel. If you really want, you can grab it from here: 
https://lists.openvz.org/pipermail/devel/2022-September/079951.html (kernel)
For the QEMU part, all you need is to add something like this to vhost_blk_start:

#define VHOST_SET_NWORKERS _IOW(VHOST_VIRTIO, 0x1F, int)
ioctl(s->vhostfd, VHOST_SET_NWORKERS, &nworkers);

Or you can wait a bit. I should be able to send the second version by the 
end of the week (Monday in the worst case).

Thanks,
Andrey
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Stefan Hajnoczi 1 year, 6 months ago
On Wed, Oct 05, 2022 at 12:14:18PM +0300, Andrey Zhadchenko wrote:
> On 10/4/22 21:13, Stefan Hajnoczi wrote:
> > On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
> > > Although QEMU virtio-blk is quite fast, there is still some room for
> > > improvements. Disk latency can be reduced if we handle virito-blk requests
> > > in host kernel so we avoid a lot of syscalls and context switches.
> > > 
> > > The biggest disadvantage of this vhost-blk flavor is raw format.
> > > Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
> > > files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
> > > 
> > > Also by using kernel modules we can bypass iothread limitation and finaly scale
> > > block requests with cpus for high-performance devices. This is planned to be
> > > implemented in next version.
> > 
> > Hi Andrey,
> > Do you have a new version of this patch series that uses multiple
> > threads?
> > 
> > I have been playing with vq-IOThread mapping in QEMU and would like to
> > benchmark vhost-blk vs QEMU virtio-blk mq IOThreads:
> > https://gitlab.com/stefanha/qemu/-/tree/virtio-blk-mq-iothread-prototype
> > 
> > Thanks,
> > Stefan
> 
> Hi Stefan
> For now my multi-threaded version is only available for Red Hat 9 5.14.0
> kernel. If you really want you can grab it from here:
> https://lists.openvz.org/pipermail/devel/2022-September/079951.html (kernel)
> For QEMU part all you need is adding to vhost_blk_start something like:
> 
> #define VHOST_SET_NWORKERS _IOW(VHOST_VIRTIO, 0x1F, int)
> ioctl(s->vhostfd, VHOST_SET_NWORKERS, &nworkers);
> 
> Or you can wait a bit. I should be able to send second versions by the end
> of the week (Monday in worst case).

Thanks, I will wait for the next revision.

Stefan
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Michael S. Tsirkin 1 year, 9 months ago
On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
> Although QEMU virtio-blk is quite fast, there is still some room for
> improvements. Disk latency can be reduced if we handle virito-blk requests
> in host kernel so we avoid a lot of syscalls and context switches.
> 
> The biggest disadvantage of this vhost-blk flavor is raw format.
> Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
> files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html

That one seems stalled. Do you plan to work on that too?

> Also by using kernel modules we can bypass iothread limitation and finaly scale
> block requests with cpus for high-performance devices. This is planned to be
> implemented in next version.
> 
> Linux kernel module part:
> https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/
> 
> test setups and results:
> fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
> QEMU drive options: cache=none
> filesystem: xfs
> 
> SSD:
>                | randread, IOPS  | randwrite, IOPS |
> Host           |      95.8k	 |	85.3k	   |
> QEMU virtio    |      57.5k	 |	79.4k	   |
> QEMU vhost-blk |      95.6k	 |	84.3k	   |
> 
> RAMDISK (vq == vcpu):
>                  | randread, IOPS | randwrite, IOPS |
> virtio, 1vcpu    |	123k	  |	 129k       |
> virtio, 2vcpu    |	253k (??) |	 250k (??)  |
> virtio, 4vcpu    |	158k	  |	 154k       |
> vhost-blk, 1vcpu |	110k	  |	 113k       |
> vhost-blk, 2vcpu |	247k	  |	 252k       |
> vhost-blk, 4vcpu |	576k	  |	 567k       |
> 
> Andrey Zhadchenko (1):
>   block: add vhost-blk backend


From the vhost/virtio side the patchset looks OK. But let's see what the
block devs think about it.


>  configure                     |  13 ++
>  hw/block/Kconfig              |   5 +
>  hw/block/meson.build          |   1 +
>  hw/block/vhost-blk.c          | 395 ++++++++++++++++++++++++++++++++++
>  hw/virtio/meson.build         |   1 +
>  hw/virtio/vhost-blk-pci.c     | 102 +++++++++
>  include/hw/virtio/vhost-blk.h |  44 ++++
>  linux-headers/linux/vhost.h   |   3 +
>  8 files changed, 564 insertions(+)
>  create mode 100644 hw/block/vhost-blk.c
>  create mode 100644 hw/virtio/vhost-blk-pci.c
>  create mode 100644 include/hw/virtio/vhost-blk.h
> 
> -- 
> 2.31.1
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Denis V. Lunev 1 year, 9 months ago
On 26.07.2022 15:51, Michael S. Tsirkin wrote:
> On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
>> Although QEMU virtio-blk is quite fast, there is still some room for
>> improvements. Disk latency can be reduced if we handle virito-blk requests
>> in host kernel so we avoid a lot of syscalls and context switches.
>>
>> The biggest disadvantage of this vhost-blk flavor is raw format.
>> Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
>> files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
> That one seems stalled. Do you plan to work on that too?
We have to. The difference in numbers, as you can see below, is quite
large. We have waited for this patch to be sent to keep pushing.

It should be noted that maybe the talk at OSS this year could also push things a bit.

Den


>> Also by using kernel modules we can bypass iothread limitation and finaly scale
>> block requests with cpus for high-performance devices. This is planned to be
>> implemented in next version.
>>
>> Linux kernel module part:
>> https://lore.kernel.org/kvm/20220725202753.298725-1-andrey.zhadchenko@virtuozzo.com/
>>
>> test setups and results:
>> fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
>> QEMU drive options: cache=none
>> filesystem: xfs
>>
>> SSD:
>>                 | randread, IOPS  | randwrite, IOPS |
>> Host           |      95.8k	 |	85.3k	   |
>> QEMU virtio    |      57.5k	 |	79.4k	   |
>> QEMU vhost-blk |      95.6k	 |	84.3k	   |
>>
>> RAMDISK (vq == vcpu):
>>                   | randread, IOPS | randwrite, IOPS |
>> virtio, 1vcpu    |	123k	  |	 129k       |
>> virtio, 2vcpu    |	253k (??) |	 250k (??)  |
>> virtio, 4vcpu    |	158k	  |	 154k       |
>> vhost-blk, 1vcpu |	110k	  |	 113k       |
>> vhost-blk, 2vcpu |	247k	  |	 252k       |
>> vhost-blk, 4vcpu |	576k	  |	 567k       |
>>
>> Andrey Zhadchenko (1):
>>    block: add vhost-blk backend
>
>  From vhost/virtio side the patchset looks ok. But let's see what do
> block devs think about it.
>
>
>>   configure                     |  13 ++
>>   hw/block/Kconfig              |   5 +
>>   hw/block/meson.build          |   1 +
>>   hw/block/vhost-blk.c          | 395 ++++++++++++++++++++++++++++++++++
>>   hw/virtio/meson.build         |   1 +
>>   hw/virtio/vhost-blk-pci.c     | 102 +++++++++
>>   include/hw/virtio/vhost-blk.h |  44 ++++
>>   linux-headers/linux/vhost.h   |   3 +
>>   8 files changed, 564 insertions(+)
>>   create mode 100644 hw/block/vhost-blk.c
>>   create mode 100644 hw/virtio/vhost-blk-pci.c
>>   create mode 100644 include/hw/virtio/vhost-blk.h
>>
>> -- 
>> 2.31.1
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Stefano Garzarella 1 year, 9 months ago
On Tue, Jul 26, 2022 at 04:15:48PM +0200, Denis V. Lunev wrote:
>On 26.07.2022 15:51, Michael S. Tsirkin wrote:
>>On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
>>>Although QEMU virtio-blk is quite fast, there is still some room for
>>>improvements. Disk latency can be reduced if we handle virito-blk requests
>>>in host kernel so we avoid a lot of syscalls and context switches.
>>>
>>>The biggest disadvantage of this vhost-blk flavor is raw format.
>>>Luckily Kirill Thai proposed device mapper driver for QCOW2 format to attach
>>>files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
>>That one seems stalled. Do you plan to work on that too?
>We have too. The difference in numbers, as you seen below is quite too
>much. We have waited for this patch to be sent to keep pushing.
>
>It should be noted that may be talk on OSS this year could also push a bit.

Cool, the results are similar to what I saw when I compared vhost-blk 
and io_uring passthrough with NVMe (slide 7 here: [1]).

About QEMU block layer support, we recently started to work on libblkio 
[2]. Stefan also sent an RFC [3] to implement the QEMU BlockDriver.
Currently it supports virtio-blk devices using vhost-vdpa and 
vhost-user.
We could add support for vhost (kernel) too, though we were 
thinking of leveraging vDPA to implement an in-kernel software device as 
well.

That way we could reuse a lot of the code to support both hardware and 
software accelerators.

In the talk [1] I describe the idea a little bit, and a few months ago I 
did a PoC (unsubmitted RFC) to see if it was feasible and the numbers 
were in line with vhost-blk.

Do you think we could join forces and just have an in-kernel vdpa-blk 
software device?

Of course we could have both vhost-blk and vdpa-blk, but with vDPA 
perhaps we can have one software stack to maintain for both HW and 
software accelerators.

Thanks,
Stefano

[1] 
https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat
[2] https://gitlab.com/libblkio/libblkio
[3] 
https://lore.kernel.org/qemu-devel/20220708041737.1768521-1-stefanha@redhat.com/
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Andrey Zhadchenko 1 year, 9 months ago
On 7/27/22 16:06, Stefano Garzarella wrote:
> On Tue, Jul 26, 2022 at 04:15:48PM +0200, Denis V. Lunev wrote:
>> On 26.07.2022 15:51, Michael S. Tsirkin wrote:
>>> On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
>>>> Although QEMU virtio-blk is quite fast, there is still some room for
>>>> improvements. Disk latency can be reduced if we handle virito-blk 
>>>> requests
>>>> in host kernel so we avoid a lot of syscalls and context switches.
>>>>
>>>> The biggest disadvantage of this vhost-blk flavor is raw format.
>>>> Luckily Kirill Thai proposed device mapper driver for QCOW2 format 
>>>> to attach
>>>> files as block devices: 
>>>> https://www.spinics.net/lists/kernel/msg4292965.html
>>> That one seems stalled. Do you plan to work on that too?
>> We have too. The difference in numbers, as you seen below is quite too
>> much. We have waited for this patch to be sent to keep pushing.
>>
>> It should be noted that may be talk on OSS this year could also push a 
>> bit.
> 
> Cool, the results are similar of what I saw when I compared vhost-blk 
> and io_uring passthrough with NVMe (Slide 7 here: [1]).
> 
> About QEMU block layer support, we recently started to work on libblkio 
> [2]. Stefan also sent an RFC [3] to implement the QEMU BlockDriver.
> Currently it supports virtio-blk devices using vhost-vdpa and vhost-user.
> We could add support for vhost (kernel) as well, though, we were 
> thinking of leveraging vDPA to implement in-kernel software device as well.
> 
> That way we could reuse a lot of the code to support both hardware and 
> software accelerators.
> 
> In the talk [1] I describe the idea a little bit, and a few months ago I 
> did a PoC (unsubmitted RFC) to see if it was feasible and the numbers 
> were in line with vhost-blk.
> 
> Do you think we could join forces and just have an in-kernel vdpa-blk 
> software device?

This seems worth trying. Why double the effort to do the same thing? Yet I 
would like to play a bit with your vdpa-blk PoC beforehand. Can you send 
it to me with some instructions on how to run it?

> 
> Of course we could have both vhost-blk and vdpa-blk, but with vDPA 
> perhaps we can have one software stack to maintain for both HW and 
> software accelerators.
> 
> Thanks,
> Stefano
> 
> [1] 
> https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat 
> 
> [2] https://gitlab.com/libblkio/libblkio
> [3] 
> https://lore.kernel.org/qemu-devel/20220708041737.1768521-1-stefanha@redhat.com/ 
> 
>
Re: [RFC patch 0/1] block: vhost-blk backend
Posted by Stefano Garzarella 1 year, 9 months ago
On Thu, Jul 28, 2022 at 7:28 AM Andrey Zhadchenko <andrey.zhadchenko@virtuozzo.com> wrote:
> On 7/27/22 16:06, Stefano Garzarella wrote:
> > On Tue, Jul 26, 2022 at 04:15:48PM +0200, Denis V. Lunev wrote:
> >> On 26.07.2022 15:51, Michael S. Tsirkin wrote:
> >>> On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
> >>>> Although QEMU virtio-blk is quite fast, there is still some room for
> >>>> improvements. Disk latency can be reduced if we handle virito-blk
> >>>> requests
> >>>> in host kernel so we avoid a lot of syscalls and context switches.
> >>>>
> >>>> The biggest disadvantage of this vhost-blk flavor is raw format.
> >>>> Luckily Kirill Thai proposed device mapper driver for QCOW2 format
> >>>> to attach
> >>>> files as block devices:
> >>>> https://www.spinics.net/lists/kernel/msg4292965.html
> >>> That one seems stalled. Do you plan to work on that too?
> >> We have too. The difference in numbers, as you seen below is quite too
> >> much. We have waited for this patch to be sent to keep pushing.
> >>
> >> It should be noted that may be talk on OSS this year could also push a
> >> bit.
> >
> > Cool, the results are similar of what I saw when I compared vhost-blk
> > and io_uring passthrough with NVMe (Slide 7 here: [1]).
> >
> > About QEMU block layer support, we recently started to work on libblkio
> > [2]. Stefan also sent an RFC [3] to implement the QEMU BlockDriver.
> > Currently it supports virtio-blk devices using vhost-vdpa and vhost-user.
> > We could add support for vhost (kernel) as well, though, we were
> > thinking of leveraging vDPA to implement in-kernel software device as well.
> >
> > That way we could reuse a lot of the code to support both hardware and
> > software accelerators.
> >
> > In the talk [1] I describe the idea a little bit, and a few months ago I
> > did a PoC (unsubmitted RFC) to see if it was feasible and the numbers
> > were in line with vhost-blk.
> >
> > Do you think we could join forces and just have an in-kernel vdpa-blk
> > software device?
>
> This seems worth trying. Why double the efforts to do the same. Yet I
> would like to play a bit with your vdpa-blk PoC beforehand.

Great :-)

> Can you send it to me with some instructions how to run it?

Yep, sure!

The PoC is available here: 
https://gitlab.com/sgarzarella/linux/-/tree/vdpa-sw-blk-poc

The tree was based on Linux v5.16, but I had some issues rebuilding with a 
newer gcc, so I rebased on v5.16.20 (not tested). Configs needed: 
CONFIG_VDPA_SW_BLOCK=m, CONFIG_VHOST_VDPA=m, plus dependencies.

It contains:
  - patches required for QEMU generic vhost-vdpa support
  - patches to support blk_mq_ops->poll() (to use io_uring iopoll) in
    the guest virtio-blk driver (I used the same kernel on guest and
    host)
  - some improvements for vringh (not completed, it could be a
    bottleneck)
  - vdpa-sw and vdpa-sw-blk patches (and hacks)

It is based on the vDPA simulator framework already merged upstream. The 
idea is to generalize the simulator to share the code between both 
software devices and simulators. The code needs a lot of work; I was 
focusing just on getting a working virtio-blk device emulation, so the 
generic part still needs more attention.
There are a couple of defines in the code to control polling.

About the vdpa-blk device, you need iproute2's vdpa tool available 
upstream:
  https://wiki.linuxfoundation.org/networking/iproute2

Once the device is instantiated (see instructions later), the backend 
(raw file or device) can be set through a device attribute (not robust, 
but it was a PoC): /sys/bus/vdpa/devices/$dev_name/backend_fd

I wrote a simple python script available here: 
https://github.com/stefano-garzarella/vm-build/blob/main/vm-tools/vdpa_set_backend_fd.py

For QEMU, we are working on libblkio to support both the slow path (when 
the QEMU block layer is needed) and the fast path (vqs passed directly to the 
device). For now libblkio supports only the slow path, so to test the fast 
path you can use Longpeng's patches (not yet merged upstream) with 
generic vhost-vdpa support: 
https://lore.kernel.org/qemu-devel/20220514041107.1980-1-longpeng2@huawei.com/

Steps:
  # load vDPA block in-kernel sw device module
  modprobe vdpa_sw_blk

  # load nvme module with poll_queues set if you want to use iopoll
  modprobe nvme poll_queues=15

  # instantiate a new vdpa-blk device
  vdpa dev add mgmtdev vdpasw_blk name blk0

  # set backend (/dev/nvme0n1)
  vdpa_set_backend_fd.py -b /dev/nvme0n1 blk0

  # load vhost vDPA bus ...
  modprobe vhost_vdpa

  # ... and vhost-vdpa device will appear
  ls -l /dev/vhost-vdpa-0
  crw-------. 1 root root 510, 0 Jul 28 17:06 /dev/vhost-vdpa-0

  # start QEMU patched with generic vhost-vdpa
  qemu-system-x86_64 ... \
  -device vhost-vdpa-device-pci,vhostdev=/dev/vhost-vdpa-0
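
As a final sanity check from inside the guest (assuming the disk shows up as
/dev/vda, which may differ on your setup), something like this should work:

  # check the virtio-blk disk is visible, then run a short fio job against it
  lsblk -d -o NAME,SIZE,TYPE
  fio --name=check --filename=/dev/vda --direct=1 --rw=randread --bs=4k \
      --ioengine=libaio --iodepth=128 --runtime=30 --time_based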

I haven't tested it recently, so I'm not sure it all works, but in the 
next few days I'll try. For anything else, feel free to reach me here or 
on IRC (sgarzare on #qemu).

Thanks,
Stefano