[PATCH] vhost-vdpa: fix assert !virtio_net_get_subqueue(nc)->async_tx.elem in virtio_net_reset

Si-Wei Liu posted 1 patch 1 year, 6 months ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/1664913563-3351-1-git-send-email-si-wei.liu@oracle.com
Maintainers: Jason Wang <jasowang@redhat.com>
net/vhost-vdpa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
[PATCH] vhost-vdpa: fix assert !virtio_net_get_subqueue(nc)->async_tx.elem in virtio_net_reset
Posted by Si-Wei Liu 1 year, 6 months ago
The cited commit has incorrect code in vhost_vdpa_receive() that returns
zero instead of the full packet size to the caller. This renders pending
packets unable to be freed, so they get clogged in the tx queue forever.
When the device is reset later on, the assertion failure below ensues:

0  0x00007f86d53bb387 in raise () from /lib64/libc.so.6
1  0x00007f86d53bca78 in abort () from /lib64/libc.so.6
2  0x00007f86d53b41a6 in __assert_fail_base () from /lib64/libc.so.6
3  0x00007f86d53b4252 in __assert_fail () from /lib64/libc.so.6
4  0x000055b8f6ff6fcc in virtio_net_reset (vdev=<optimized out>) at /usr/src/debug/qemu/hw/net/virtio-net.c:563
5  0x000055b8f7012fcf in virtio_reset (opaque=0x55b8faf881f0) at /usr/src/debug/qemu/hw/virtio/virtio.c:1993
6  0x000055b8f71f0086 in virtio_bus_reset (bus=bus@entry=0x55b8faf88178) at /usr/src/debug/qemu/hw/virtio/virtio-bus.c:102
7  0x000055b8f71f1620 in virtio_pci_reset (qdev=<optimized out>) at /usr/src/debug/qemu/hw/virtio/virtio-pci.c:1845
8  0x000055b8f6fafc6c in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, value=<optimized out>,
   size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...) at /usr/src/debug/qemu/memory.c:483
9  0x000055b8f6fadce9 in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f867e7fb7e8, size=size@entry=1,
   access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x55b8f6fafc20 <memory_region_write_accessor>,
   mr=0x55b8faf80a50, attrs=...) at /usr/src/debug/qemu/memory.c:544
10 0x000055b8f6fb1d0b in memory_region_dispatch_write (mr=mr@entry=0x55b8faf80a50, addr=addr@entry=20, data=0, op=<optimized out>,
   attrs=attrs@entry=...) at /usr/src/debug/qemu/memory.c:1470
11 0x000055b8f6f62ada in flatview_write_continue (fv=fv@entry=0x7f86ac04cd20, addr=addr@entry=549755813908, attrs=...,
   attrs@entry=..., buf=buf@entry=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=len@entry=1, addr1=20, l=1,
   mr=0x55b8faf80a50) at /usr/src/debug/qemu/exec.c:3266
12 0x000055b8f6f62c8f in flatview_write (fv=0x7f86ac04cd20, addr=549755813908, attrs=...,
   buf=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=1) at /usr/src/debug/qemu/exec.c:3306
13 0x000055b8f6f674cb in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>,
   len=<optimized out>) at /usr/src/debug/qemu/exec.c:3396
14 0x000055b8f6f67575 in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=...,
   buf=buf@entry=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=<optimized out>, is_write=<optimized out>)
   at /usr/src/debug/qemu/exec.c:3406
15 0x000055b8f6fc1cc8 in kvm_cpu_exec (cpu=cpu@entry=0x55b8f9aa0e10) at /usr/src/debug/qemu/accel/kvm/kvm-all.c:2410
16 0x000055b8f6fa5f5e in qemu_kvm_cpu_thread_fn (arg=0x55b8f9aa0e10) at /usr/src/debug/qemu/cpus.c:1318
17 0x000055b8f7336e16 in qemu_thread_start (args=0x55b8f9ac8480) at /usr/src/debug/qemu/util/qemu-thread-posix.c:519
18 0x00007f86d575aea5 in start_thread () from /lib64/libpthread.so.0
19 0x00007f86d5483b2d in clone () from /lib64/libc.so.6

Make vhost_vdpa_receive() return the size passed in as is, so that the
caller, qemu_deliver_packet_iov(), eventually propagates it back to
virtio_net_flush_tx(), which then releases pending packets from the
async_tx queue. This corresponds to the drop path where
qemu_sendv_packet_async() returns non-zero in virtio_net_flush_tx().
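
For reference, below is a minimal standalone model of the failure mode and of
the fix (a sketch only: the names flush_tx(), device_reset() and the program
as a whole are illustrative stand-ins for the virtio_net_flush_tx() /
virtio_net_reset() logic, not actual QEMU code):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>                 /* ssize_t */

typedef ssize_t (*receive_fn)(const uint8_t *buf, size_t size);

/* Models the buggy callback introduced by commit 846a1e85da64. */
static ssize_t receive_returning_zero(const uint8_t *buf, size_t size)
{
    (void)buf; (void)size;
    return 0;       /* "receiver busy": sender keeps the packet pending */
}

/* Models the callback after this patch. */
static ssize_t receive_returning_size(const uint8_t *buf, size_t size)
{
    (void)buf;
    return size;    /* packet considered consumed (dropped), freed at once */
}

/* Stand-in for the pending element virtio-net tracks per tx subqueue. */
static const uint8_t *async_tx_elem;

/* Rough stand-in for how the tx flush path treats the return value. */
static void flush_tx(receive_fn receive, const uint8_t *pkt, size_t len)
{
    if (receive(pkt, len) == 0) {
        async_tx_elem = pkt;    /* parked, waiting for a completion that the
                                   dummy vhost-vdpa callback never delivers */
    } else {
        async_tx_elem = NULL;   /* drop path: released immediately */
    }
}

/* Rough stand-in for the per-subqueue check performed on device reset. */
static void device_reset(void)
{
    assert(!async_tx_elem);
}

int main(void)
{
    static const uint8_t pkt[64];

    flush_tx(receive_returning_size, pkt, sizeof(pkt));
    device_reset();     /* passes: nothing left pending */

    flush_tx(receive_returning_zero, pkt, sizeof(pkt));
    device_reset();     /* aborts, mirroring the backtrace above */
    return 0;
}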

Fixes: 846a1e85da64 ("vdpa: Add dummy receive callback")
Cc: Eugenio Perez Martin <eperezma@redhat.com>
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
---
 net/vhost-vdpa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 4bc3fd0..182b3a1 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -211,7 +211,7 @@ static bool vhost_vdpa_check_peer_type(NetClientState *nc, ObjectClass *oc,
 static ssize_t vhost_vdpa_receive(NetClientState *nc, const uint8_t *buf,
                                   size_t size)
 {
-    return 0;
+    return size;
 }
 
 static NetClientInfo net_vhost_vdpa_info = {
-- 
1.8.3.1
Re: [PATCH] vhost-vdpa: fix assert !virtio_net_get_subqueue(nc)->async_tx.elem in virtio_net_reset
Posted by Eugenio Perez Martin 1 year, 6 months ago
On Tue, Oct 4, 2022 at 11:05 PM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
> The cited commit has incorrect code in vhost_vdpa_receive() that returns
> zero instead of the full packet size to the caller. This renders pending
> packets unable to be freed, so they get clogged in the tx queue forever.
> When the device is reset later on, the assertion failure below ensues:
>
> 0  0x00007f86d53bb387 in raise () from /lib64/libc.so.6
> 1  0x00007f86d53bca78 in abort () from /lib64/libc.so.6
> 2  0x00007f86d53b41a6 in __assert_fail_base () from /lib64/libc.so.6
> 3  0x00007f86d53b4252 in __assert_fail () from /lib64/libc.so.6
> 4  0x000055b8f6ff6fcc in virtio_net_reset (vdev=<optimized out>) at /usr/src/debug/qemu/hw/net/virtio-net.c:563
> 5  0x000055b8f7012fcf in virtio_reset (opaque=0x55b8faf881f0) at /usr/src/debug/qemu/hw/virtio/virtio.c:1993
> 6  0x000055b8f71f0086 in virtio_bus_reset (bus=bus@entry=0x55b8faf88178) at /usr/src/debug/qemu/hw/virtio/virtio-bus.c:102
> 7  0x000055b8f71f1620 in virtio_pci_reset (qdev=<optimized out>) at /usr/src/debug/qemu/hw/virtio/virtio-pci.c:1845
> 8  0x000055b8f6fafc6c in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, value=<optimized out>,
>    size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...) at /usr/src/debug/qemu/memory.c:483
> 9  0x000055b8f6fadce9 in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f867e7fb7e8, size=size@entry=1,
>    access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x55b8f6fafc20 <memory_region_write_accessor>,
>    mr=0x55b8faf80a50, attrs=...) at /usr/src/debug/qemu/memory.c:544
> 10 0x000055b8f6fb1d0b in memory_region_dispatch_write (mr=mr@entry=0x55b8faf80a50, addr=addr@entry=20, data=0, op=<optimized out>,
>    attrs=attrs@entry=...) at /usr/src/debug/qemu/memory.c:1470
> 11 0x000055b8f6f62ada in flatview_write_continue (fv=fv@entry=0x7f86ac04cd20, addr=addr@entry=549755813908, attrs=...,
>    attrs@entry=..., buf=buf@entry=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=len@entry=1, addr1=20, l=1,
>    mr=0x55b8faf80a50) at /usr/src/debug/qemu/exec.c:3266
> 12 0x000055b8f6f62c8f in flatview_write (fv=0x7f86ac04cd20, addr=549755813908, attrs=...,
>    buf=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=1) at /usr/src/debug/qemu/exec.c:3306
> 13 0x000055b8f6f674cb in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>,
>    len=<optimized out>) at /usr/src/debug/qemu/exec.c:3396
> 14 0x000055b8f6f67575 in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=...,
>    buf=buf@entry=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=<optimized out>, is_write=<optimized out>)
>    at /usr/src/debug/qemu/exec.c:3406
> 15 0x000055b8f6fc1cc8 in kvm_cpu_exec (cpu=cpu@entry=0x55b8f9aa0e10) at /usr/src/debug/qemu/accel/kvm/kvm-all.c:2410
> 16 0x000055b8f6fa5f5e in qemu_kvm_cpu_thread_fn (arg=0x55b8f9aa0e10) at /usr/src/debug/qemu/cpus.c:1318
> 17 0x000055b8f7336e16 in qemu_thread_start (args=0x55b8f9ac8480) at /usr/src/debug/qemu/util/qemu-thread-posix.c:519
> 18 0x00007f86d575aea5 in start_thread () from /lib64/libpthread.so.0
> 19 0x00007f86d5483b2d in clone () from /lib64/libc.so.6
>
> Make vhost_vdpa_receive() return the size passed in as is, so that the
> caller, qemu_deliver_packet_iov(), eventually propagates it back to
> virtio_net_flush_tx(), which then releases pending packets from the
> async_tx queue. This corresponds to the drop path where
> qemu_sendv_packet_async() returns non-zero in virtio_net_flush_tx().
>

Acked-by: Eugenio Pérez <eperezma@redhat.com>


> Fixes: 846a1e85da64 ("vdpa: Add dummy receive callback")
> Cc: Eugenio Perez Martin <eperezma@redhat.com>
> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
> ---
>  net/vhost-vdpa.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 4bc3fd0..182b3a1 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -211,7 +211,7 @@ static bool vhost_vdpa_check_peer_type(NetClientState *nc, ObjectClass *oc,
>  static ssize_t vhost_vdpa_receive(NetClientState *nc, const uint8_t *buf,
>                                    size_t size)
>  {
> -    return 0;
> +    return size;
>  }
>
>  static NetClientInfo net_vhost_vdpa_info = {
> --
> 1.8.3.1
>
Re: [PATCH] vhost-vdpa: fix assert !virtio_net_get_subqueue(nc)->async_tx.elem in virtio_net_reset
Posted by Si-Wei Liu 1 year, 6 months ago
Hi Jason,

This one is a simple one-line bug fix but seems to have been missed from 
the pull request. If there's a v2 of the PULL, I would appreciate it if 
you could piggyback it. Thanks in advance!

Regards,
-Siwei

On 10/7/2022 8:42 AM, Eugenio Perez Martin wrote:
> On Tue, Oct 4, 2022 at 11:05 PM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>> The cited commit has incorrect code in vhost_vdpa_receive() that returns
>> zero instead of the full packet size to the caller. This renders pending
>> packets unable to be freed, so they get clogged in the tx queue forever.
>> When the device is reset later on, the assertion failure below ensues:
>>
>> 0  0x00007f86d53bb387 in raise () from /lib64/libc.so.6
>> 1  0x00007f86d53bca78 in abort () from /lib64/libc.so.6
>> 2  0x00007f86d53b41a6 in __assert_fail_base () from /lib64/libc.so.6
>> 3  0x00007f86d53b4252 in __assert_fail () from /lib64/libc.so.6
>> 4  0x000055b8f6ff6fcc in virtio_net_reset (vdev=<optimized out>) at /usr/src/debug/qemu/hw/net/virtio-net.c:563
>> 5  0x000055b8f7012fcf in virtio_reset (opaque=0x55b8faf881f0) at /usr/src/debug/qemu/hw/virtio/virtio.c:1993
>> 6  0x000055b8f71f0086 in virtio_bus_reset (bus=bus@entry=0x55b8faf88178) at /usr/src/debug/qemu/hw/virtio/virtio-bus.c:102
>> 7  0x000055b8f71f1620 in virtio_pci_reset (qdev=<optimized out>) at /usr/src/debug/qemu/hw/virtio/virtio-pci.c:1845
>> 8  0x000055b8f6fafc6c in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, value=<optimized out>,
>>     size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...) at /usr/src/debug/qemu/memory.c:483
>> 9  0x000055b8f6fadce9 in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f867e7fb7e8, size=size@entry=1,
>>     access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x55b8f6fafc20 <memory_region_write_accessor>,
>>     mr=0x55b8faf80a50, attrs=...) at /usr/src/debug/qemu/memory.c:544
>> 10 0x000055b8f6fb1d0b in memory_region_dispatch_write (mr=mr@entry=0x55b8faf80a50, addr=addr@entry=20, data=0, op=<optimized out>,
>>     attrs=attrs@entry=...) at /usr/src/debug/qemu/memory.c:1470
>> 11 0x000055b8f6f62ada in flatview_write_continue (fv=fv@entry=0x7f86ac04cd20, addr=addr@entry=549755813908, attrs=...,
>>     attrs@entry=..., buf=buf@entry=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=len@entry=1, addr1=20, l=1,
>>     mr=0x55b8faf80a50) at /usr/src/debug/qemu/exec.c:3266
>> 12 0x000055b8f6f62c8f in flatview_write (fv=0x7f86ac04cd20, addr=549755813908, attrs=...,
>>     buf=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=1) at /usr/src/debug/qemu/exec.c:3306
>> 13 0x000055b8f6f674cb in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>,
>>     len=<optimized out>) at /usr/src/debug/qemu/exec.c:3396
>> 14 0x000055b8f6f67575 in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=...,
>>     buf=buf@entry=0x7f86d0223028 <Address 0x7f86d0223028 out of bounds>, len=<optimized out>, is_write=<optimized out>)
>>     at /usr/src/debug/qemu/exec.c:3406
>> 15 0x000055b8f6fc1cc8 in kvm_cpu_exec (cpu=cpu@entry=0x55b8f9aa0e10) at /usr/src/debug/qemu/accel/kvm/kvm-all.c:2410
>> 16 0x000055b8f6fa5f5e in qemu_kvm_cpu_thread_fn (arg=0x55b8f9aa0e10) at /usr/src/debug/qemu/cpus.c:1318
>> 17 0x000055b8f7336e16 in qemu_thread_start (args=0x55b8f9ac8480) at /usr/src/debug/qemu/util/qemu-thread-posix.c:519
>> 18 0x00007f86d575aea5 in start_thread () from /lib64/libpthread.so.0
>> 19 0x00007f86d5483b2d in clone () from /lib64/libc.so.6
>>
>> Make vhost_vdpa_receive() return the size passed in as is, so that the
>> caller, qemu_deliver_packet_iov(), eventually propagates it back to
>> virtio_net_flush_tx(), which then releases pending packets from the
>> async_tx queue. This corresponds to the drop path where
>> qemu_sendv_packet_async() returns non-zero in virtio_net_flush_tx().
>>
> Acked-by: Eugenio Pérez <eperezma@redhat.com>
>
>
>> Fixes: 846a1e85da64 ("vdpa: Add dummy receive callback")
>> Cc: Eugenio Perez Martin <eperezma@redhat.com>
>> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
>> ---
>>   net/vhost-vdpa.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>> index 4bc3fd0..182b3a1 100644
>> --- a/net/vhost-vdpa.c
>> +++ b/net/vhost-vdpa.c
>> @@ -211,7 +211,7 @@ static bool vhost_vdpa_check_peer_type(NetClientState *nc, ObjectClass *oc,
>>   static ssize_t vhost_vdpa_receive(NetClientState *nc, const uint8_t *buf,
>>                                     size_t size)
>>   {
>> -    return 0;
>> +    return size;
>>   }
>>
>>   static NetClientInfo net_vhost_vdpa_info = {
>> --
>> 1.8.3.1
>>
Re: [PATCH] vhost-vdpa: fix assert !virtio_net_get_subqueue(nc)->async_tx.elem in virtio_net_reset
Posted by Jason Wang 1 year, 5 months ago
On Sat, Oct 29, 2022 at 1:28 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
> Hi Jason,
>
> This one is a simple one-line bug fix but seems to have been missed from the pull request. If there's a v2 of the PULL, I would appreciate it if you could piggyback it. Thanks in advance!
>
> Regards,
> -Siwei
>

I've queued this for rc1.

Thanks