[PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()

Dmitrii Gavrilov posted 1 patch 6 months, 1 week ago
Patches applied successfully (tree, apply log)
git fetch https://github.com/patchew-project/qemu tags/patchew/20231103105602.90475-1-ds-gavr@yandex-team.ru
Maintainers: Paolo Bonzini <pbonzini@redhat.com>, "Daniel P. Berrangé" <berrange@redhat.com>, Eduardo Habkost <eduardo@habkost.net>
system/qdev-monitor.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
[PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Dmitrii Gavrilov 6 months, 1 week ago
The original goal of adding drain_call_rcu to qmp_device_add was to cover
the failure case of qdev_device_add. However, the call to drain_call_rcu was
misplaced in 7bed89958bfbf40df, so pending RCU callbacks were also waited for
on the happy path, degrading the overall performance of qmp_device_add.

This patch moves the drain_call_rcu call under the handling of a
qdev_device_add failure.

Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
---
 system/qdev-monitor.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
index 1b8005a..dc7b02d 100644
--- a/system/qdev-monitor.c
+++ b/system/qdev-monitor.c
@@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
         return;
     }
     dev = qdev_device_add(opts, errp);
-
-    /*
-     * Drain all pending RCU callbacks. This is done because
-     * some bus related operations can delay a device removal
-     * (in this case this can happen if device is added and then
-     * removed due to a configuration error)
-     * to a RCU callback, but user might expect that this interface
-     * will finish its job completely once qmp command returns result
-     * to the user
-     */
-    drain_call_rcu();
-
     if (!dev) {
+        /*
+         * Drain all pending RCU callbacks. This is done because
+         * some bus related operations can delay a device removal
+         * (in this case this can happen if device is added and then
+         * removed due to a configuration error)
+         * to a RCU callback, but user might expect that this interface
+         * will finish its job completely once qmp command returns result
+         * to the user
+         */
+        drain_call_rcu();
+
         qemu_opts_del(opts);
         return;
     }
-- 
2.34.1
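
For quick reference, the post-patch control flow of qmp_device_add(),
reconstructed only from the hunk above (code outside the hunk elided):

    dev = qdev_device_add(opts, errp);
    if (!dev) {
        /*
         * Failure path only: drain pending RCU callbacks so that a device
         * that was added and then removed because of a configuration error
         * is fully gone before the QMP command returns to the user.
         */
        drain_call_rcu();

        qemu_opts_del(opts);
        return;
    }
    /* Success path: no drain_call_rcu(), so no extra wait here. */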
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Igor Mammedov 1 week, 2 days ago
On Fri,  3 Nov 2023 13:56:02 +0300
Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:

Seems related to CPU hotplug issues,
CCing Boris for awareness.

> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. However, the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, so pending RCU callbacks were also waited for
> on the happy path, degrading the overall performance of qmp_device_add.
> 
> This patch moves the drain_call_rcu call under the handling of a
> qdev_device_add failure.
> 
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
> ---
>  system/qdev-monitor.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
> index 1b8005a..dc7b02d 100644
> --- a/system/qdev-monitor.c
> +++ b/system/qdev-monitor.c
> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
>          return;
>      }
>      dev = qdev_device_add(opts, errp);
> -
> -    /*
> -     * Drain all pending RCU callbacks. This is done because
> -     * some bus related operations can delay a device removal
> -     * (in this case this can happen if device is added and then
> -     * removed due to a configuration error)
> -     * to a RCU callback, but user might expect that this interface
> -     * will finish its job completely once qmp command returns result
> -     * to the user
> -     */
> -    drain_call_rcu();
> -
>      if (!dev) {
> +        /*
> +         * Drain all pending RCU callbacks. This is done because
> +         * some bus related operations can delay a device removal
> +         * (in this case this can happen if device is added and then
> +         * removed due to a configuration error)
> +         * to a RCU callback, but user might expect that this interface
> +         * will finish its job completely once qmp command returns result
> +         * to the user
> +         */
> +        drain_call_rcu();
> +
>          qemu_opts_del(opts);
>          return;
>      }
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by boris.ostrovsky@oracle.com 1 week, 2 days ago

On 4/30/24 10:27 AM, Igor Mammedov wrote:
> On Fri,  3 Nov 2023 13:56:02 +0300
> Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
> 
> Seems related to CPU hotplug issues,
> CCing Boris for awareness.

Thank you Igor.

This patch appears to change the timing in my test, which makes the problem 
much more difficult to reproduce. However, it can still be triggered if 
I insert a delay after qdev_device_add() which is roughly equivalent to 
what used to happen in drain_call_rcu().

(https://lore.kernel.org/kvm/534247e4-76d6-41d2-86c7-0155406ccd80@oracle.com/ 
for context)
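
Boris does not show his exact change, but a minimal sketch of such a delay
(the use of g_usleep() and the 100 ms value are assumptions, chosen only for
illustration) could look like this in system/qdev-monitor.c:

    dev = qdev_device_add(opts, errp);
    /*
     * Hypothetical reproduction aid, not from any posted patch: emulate
     * the wait that drain_call_rcu() used to impose on the success path.
     * The 100 ms value is made up.
     */
    g_usleep(100 * 1000);
    if (!dev) {
        ...
    }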



-boris
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Marc Hartmayer 1 week, 6 days ago
On Fri, Nov 03, 2023 at 01:56 PM +0300, Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. However, the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, so pending RCU callbacks were also waited for
> on the happy path, degrading the overall performance of qmp_device_add.
>
> This patch moves the drain_call_rcu call under the handling of a
> qdev_device_add failure.
>
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>

I don't know the exact reason, but this commit caused udev events to
show up much more slowly than before (~23s now vs. ~3s before) when a
virtio-scsi device is hotplugged (I’ve tested this only on s390x).
Importantly, this only happens when asynchronous SCSI scanning is
disabled in the *guest* kernel (scsi_mod.scan=sync or
CONFIG_SCSI_SCAN_ASYNC=n).

The `udevadm monitor` output captured while hotplugging the device
(using QEMU 012b170173bc ("system/qdev-monitor: move drain_call_rcu call
under if (!dev) in qmp_device_add()")):

…
KERNEL[2.166575] add      /devices/css0/0.0.0002/0.0.0002 (ccw)
KERNEL[2.166594] bind     /devices/css0/0.0.0002/0.0.0002 (ccw)
KERNEL[2.166826] add      /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
UDEV  [2.166846] add      /devices/css0/0.0.0002/0.0.0002 (ccw)
UDEV  [2.167013] bind     /devices/css0/0.0.0002/0.0.0002 (ccw)
KERNEL[2.167560] add      /devices/virtual/workqueue/scsi_tmf_0 (workqueue)
UDEV  [2.167977] add      /devices/virtual/workqueue/scsi_tmf_0 (workqueue)
KERNEL[2.167987] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0 (scsi)
KERNEL[2.167996] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/scsi_host/host0 (scsi_host)
KERNEL[2.169113] change   /0:0:0:0 (scsi)
UDEV  [2.169212] change   /0:0:0:0 (scsi)
KERNEL[2.199500] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0 (scsi)
KERNEL[2.199513] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
KERNEL[2.199523] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)
KERNEL[2.199532] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)
KERNEL[2.199564] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)
KERNEL[2.199586] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)
KERNEL[2.280482] add      /devices/virtual/bdi/8:0 (bdi)
UDEV  [2.280634] add      /devices/virtual/bdi/8:0 (bdi)
KERNEL[3.060145] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
KERNEL[3.060160] bind     /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
KERNEL[22.160147] bind     /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
KERNEL[22.160161] add      /bus/virtio/drivers/virtio_scsi (drivers)
KERNEL[22.160169] add      /module/virtio_scsi (module)
UDEV  [22.161078] add      /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
UDEV  [22.161339] add      /bus/virtio/drivers/virtio_scsi (drivers)
UDEV  [22.161860] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0 (scsi)
UDEV  [22.161869] add      /module/virtio_scsi (module)
UDEV  [22.161880] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0 (scsi)
UDEV  [22.161890] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/scsi_host/host0 (scsi_host)
UDEV  [22.161901] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
UDEV  [22.161911] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)
UDEV  [22.161924] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)
UDEV  [22.161937] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)
UDEV  [22.162123] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)
UDEV  [22.468924] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
UDEV  [22.473955] bind     /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
UDEV  [22.473970] bind     /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)


The `udevadm monitor` output without this commit (QEMU 9876359990dd ("hw/scsi/lsi53c895a: add timer to scripts processing")):

…
KERNEL[2.091114] add      /devices/virtual/workqueue/scsi_tmf_0 (workqueue)
UDEV  [2.091218] add      /devices/virtual/workqueue/scsi_tmf_0 (workqueue)
KERNEL[2.091408] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0 (scsi)
KERNEL[2.091418] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/scsi_host/host0 (scsi_host)
KERNEL[2.200461] bind     /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
KERNEL[2.200473] add      /bus/virtio/drivers/virtio_scsi (drivers)
KERNEL[2.200481] add      /module/virtio_scsi (module)
UDEV  [2.200634] add      /module/virtio_scsi (module)
UDEV  [2.200678] add      /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
UDEV  [2.200746] add      /bus/virtio/drivers/virtio_scsi (drivers)
UDEV  [2.200830] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0 (scsi)
UDEV  [2.200972] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/scsi_host/host0 (scsi_host)
UDEV  [2.201148] bind     /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
KERNEL[2.201699] change   /0:0:0:0 (scsi)
KERNEL[2.201734] change   /0:0:0:0 (scsi)
UDEV  [2.201815] change   /0:0:0:0 (scsi)
UDEV  [2.201888] change   /0:0:0:0 (scsi)
KERNEL[2.222062] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0 (scsi)
KERNEL[2.222074] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
KERNEL[2.222083] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)
KERNEL[2.222092] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)
KERNEL[2.222104] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)
KERNEL[2.222127] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)
UDEV  [2.222241] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0 (scsi)
UDEV  [2.222486] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
UDEV  [2.222667] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)
UDEV  [2.222715] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)
UDEV  [2.222877] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)
UDEV  [2.223116] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)
KERNEL[2.303063] add      /devices/virtual/bdi/8:0 (bdi)
UDEV  [2.303197] add      /devices/virtual/bdi/8:0 (bdi)
KERNEL[2.394175] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
KERNEL[2.394186] bind     /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
UDEV  [2.706054] add      /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
UDEV  [2.706075] bind     /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)

I’ve used host kernel 6.7.0-rc3-00033-ge72f947b4f0d and guest kernel
v6.5.0.

QEMU 'info qtree' output when the device was hotplugged:

bus: main-system-bus
  type System
  dev: s390-pcihost, id ""
    x-config-reg-migration-enabled = true
    bypass-iommu = false
    bus: s390-pcibus.0
      type s390-pcibus
    bus: pci.0
      type PCI
  dev: virtual-css-bridge, id ""
    css_dev_path = true
    bus: virtual-css
      type virtual-css-bus
      dev: virtio-scsi-ccw, id "scsi0"
        ioeventfd = true
        max_revision = 2 (0x2)
        devno = "fe.0.0002"
        dev_id = "fe.0.0002"
        subch_id = "fe.0.0002"
        bus: virtio-bus
          type virtio-ccw-bus
          dev: virtio-scsi-device, id ""
            num_queues = 1 (0x1)
            virtqueue_size = 256 (0x100)
            seg_max_adjust = true
            max_sectors = 65535 (0xffff)
            cmd_per_lun = 128 (0x80)
            hotplug = true
            param_change = true
            indirect_desc = true
            event_idx = true
            notify_on_empty = true
            any_layout = true
            iommu_platform = false
            packed = false
            queue_reset = true
            use-started = true
            use-disabled-flag = true
            x-disable-legacy-check = false
            bus: scsi0.0
              type SCSI
              dev: scsi-generic, id "hostdev0"
                drive = "libvirt-1-backend"
                share-rw = false
                io_timeout = 30 (0x1e)
                channel = 0 (0x0)
                scsi-id = 0 (0x0)
                lun = 0 (0x0)
…

Any ideas?

Thanks in advance.

Kind regards,
 Marc
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Dmitrii Gavrilov 1 week, 6 days ago
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Marc Hartmayer 1 week, 6 days ago
On Fri, Apr 26, 2024 at 11:57 AM +0300, Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
> 26.04.2024, 11:17, "Marc Hartmayer" <mhartmay@linux.ibm.com>:
>
>  On Fri, Nov 03, 2023 at 01:56 PM +0300, Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
>
>   The original goal of adding drain_call_rcu to qmp_device_add was to cover
>   the failure case of qdev_device_add. However, the call to drain_call_rcu was
>   misplaced in 7bed89958bfbf40df, so pending RCU callbacks were also waited for
>   on the happy path, degrading the overall performance of qmp_device_add.
>
>   This patch moves the drain_call_rcu call under the handling of a
>   qdev_device_add failure.
>
>   Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
>
>  I don't know the exact reason, but this commit caused udev events to
>  show up much more slowly than before (~23s now vs. ~3s before) when a
>  virtio-scsi device is hotplugged (I’ve tested this only on s390x).
>  Importantly, this only happens when asynchronous SCSI scanning is
>  disabled in the *guest* kernel (scsi_mod.scan=sync or
>  CONFIG_SCSI_SCAN_ASYNC=n).
>
>  The `udevadm monitor` output captured while hotplugging the device
>  (using QEMU 012b170173bc ("system/qdev-monitor: move drain_call_rcu call
>  under if (!dev) in qmp_device_add()")):
>

[…snip…]

>  Any ideas?
>
>  Thanks in advance.
>
>  Kind regards,
>   Marc
>
> Hello!
>  
> Thank you for mentioning this.
>  
> At first glance it seems that using SCSI in synchronous mode causes the global
> QEMU mutex to be held until the scanning phase is complete. Prior to 012b170173bc
> ("system/qdev-monitor: move drain_call_rcu call under if (!dev) in
> qmp_device_add()"), on each device addition the lock would be forcibly released,
> allowing callbacks (including udev ones) to be processed after a new device
> is added.
>  
> I'll try to investigate this further. But currently it looks to me like a
> performance vs. observability dilemma.
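
A minimal toy model of the interaction described in the quoted paragraph above
(plain pthreads, not QEMU code; all names are made up): one thread plays the
QMP command that holds a global lock through a long synchronous operation,
another plays work that needs the same lock, and a drain-like step releases
the lock while it waits, which is why the pre-patch code incidentally let that
work run earlier.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Work that needs the global lock (stand-in for event delivery). */
    static void *lock_waiter(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&big_lock);
        puts("waiter: finally got the lock");
        pthread_mutex_unlock(&big_lock);
        return NULL;
    }

    /* Stand-in for drain_call_rcu() as described above: wait for pending
     * work and release the global lock while doing so. */
    static void drain_like(void)
    {
        pthread_mutex_unlock(&big_lock);
        usleep(100 * 1000);             /* "wait for pending callbacks" */
        pthread_mutex_lock(&big_lock);
    }

    int main(void)
    {
        pthread_t t;

        pthread_mutex_lock(&big_lock);  /* "QMP command" takes the lock */
        pthread_create(&t, NULL, lock_waiter, NULL);

        puts("command: device added");
        drain_like();                   /* pre-patch: ran on success too,
                                           briefly releasing the lock */
        sleep(2);                       /* long synchronous work under the
                                           lock (stand-in for sync scan) */
        pthread_mutex_unlock(&big_lock);

        pthread_join(t, NULL);
        return 0;
    }

With the drain_like() call removed (the post-patch behaviour), the waiter only
gets the lock after the long synchronous work finishes, which matches the kind
of timing shift being discussed here.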

I tried the test on my local laptop (x86_64) and there seems to be no
issue (I used the kernel cmdline option scsi_mod.scan=sync for the
guest) - guest and host kernel == 6.8.7. But please double check.

>  
> Is the behaviour you are mentioning consistent?

Yep, at least for more than 50 iterations (I stopped the test then).

>  
> Best regards,
> Dmitrii
-- 
Kind regards / Beste Grüße
   Marc Hartmayer

IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Wolfgang Wendt
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Paolo Bonzini 2 months, 1 week ago
Queued, thanks.

Paolo
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Vladimir Sementsov-Ogievskiy 2 months, 1 week ago
ping.

Hi again!

Paolo, could you please take a look? Could we merge this? It still applies to the master branch.

On 03.11.23 13:56, Dmitrii Gavrilov wrote:
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. However, the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, so pending RCU callbacks were also waited for
> on the happy path, degrading the overall performance of qmp_device_add.
> 
> This patch moves the drain_call_rcu call under the handling of a
> qdev_device_add failure.
> 
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
> ---
>   system/qdev-monitor.c | 23 +++++++++++------------
>   1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
> index 1b8005a..dc7b02d 100644
> --- a/system/qdev-monitor.c
> +++ b/system/qdev-monitor.c
> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
>           return;
>       }
>       dev = qdev_device_add(opts, errp);
> -
> -    /*
> -     * Drain all pending RCU callbacks. This is done because
> -     * some bus related operations can delay a device removal
> -     * (in this case this can happen if device is added and then
> -     * removed due to a configuration error)
> -     * to a RCU callback, but user might expect that this interface
> -     * will finish its job completely once qmp command returns result
> -     * to the user
> -     */
> -    drain_call_rcu();
> -
>       if (!dev) {
> +        /*
> +         * Drain all pending RCU callbacks. This is done because
> +         * some bus related operations can delay a device removal
> +         * (in this case this can happen if device is added and then
> +         * removed due to a configuration error)
> +         * to a RCU callback, but user might expect that this interface
> +         * will finish its job completely once qmp command returns result
> +         * to the user
> +         */
> +        drain_call_rcu();
> +
>           qemu_opts_del(opts);
>           return;
>       }

-- 
Best regards,
Vladimir
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Michael S. Tsirkin 6 months ago
On Fri, Nov 03, 2023 at 01:56:02PM +0300, Dmitrii Gavrilov wrote:
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. However, the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, so pending RCU callbacks were also waited for
> on the happy path, degrading the overall performance of qmp_device_add.
> 
> This patch moves the drain_call_rcu call under the handling of a
> qdev_device_add failure.


Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

Also:

Fixes: 7bed89958b ("device_core: use drain_call_rcu in in qmp_device_add")
Cc: "Maxim Levitsky" <mlevitsk@redhat.com>


> 
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
> ---
>  system/qdev-monitor.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
> index 1b8005a..dc7b02d 100644
> --- a/system/qdev-monitor.c
> +++ b/system/qdev-monitor.c
> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
>          return;
>      }
>      dev = qdev_device_add(opts, errp);
> -
> -    /*
> -     * Drain all pending RCU callbacks. This is done because
> -     * some bus related operations can delay a device removal
> -     * (in this case this can happen if device is added and then
> -     * removed due to a configuration error)
> -     * to a RCU callback, but user might expect that this interface
> -     * will finish its job completely once qmp command returns result
> -     * to the user
> -     */
> -    drain_call_rcu();
> -
>      if (!dev) {
> +        /*
> +         * Drain all pending RCU callbacks. This is done because
> +         * some bus related operations can delay a device removal
> +         * (in this case this can happen if device is added and then
> +         * removed due to a configuration error)
> +         * to a RCU callback, but user might expect that this interface
> +         * will finish its job completely once qmp command returns result
> +         * to the user
> +         */
> +        drain_call_rcu();
> +
>          qemu_opts_del(opts);
>          return;
>      }
> -- 
> 2.34.1
> 
>
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Vladimir Sementsov-Ogievskiy 6 months ago
On 07.11.23 10:32, Michael S. Tsirkin wrote:
> On Fri, Nov 03, 2023 at 01:56:02PM +0300, Dmitrii Gavrilov wrote:
>> The original goal of adding drain_call_rcu to qmp_device_add was to cover
>> the failure case of qdev_device_add. However, the call to drain_call_rcu was
>> misplaced in 7bed89958bfbf40df, so pending RCU callbacks were also waited for
>> on the happy path, degrading the overall performance of qmp_device_add.
>>
>> This patch moves the drain_call_rcu call under the handling of a
>> qdev_device_add failure.
> 
> 
> Suggested-by: Michael S. Tsirkin <mst@redhat.com>

Right, sorry for missing that

> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

Thanks!

> 
> Also:
> 
> Fixes: 7bed89958b ("device_core: use drain_call_rcu in in qmp_device_add")
> Cc: "Maxim Levitsky" <mlevitsk@redhat.com>
> 
> 
>>
>> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
>> ---
>>   system/qdev-monitor.c | 23 +++++++++++------------
>>   1 file changed, 11 insertions(+), 12 deletions(-)
>>
>> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
>> index 1b8005a..dc7b02d 100644
>> --- a/system/qdev-monitor.c
>> +++ b/system/qdev-monitor.c
>> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
>>           return;
>>       }
>>       dev = qdev_device_add(opts, errp);
>> -
>> -    /*
>> -     * Drain all pending RCU callbacks. This is done because
>> -     * some bus related operations can delay a device removal
>> -     * (in this case this can happen if device is added and then
>> -     * removed due to a configuration error)
>> -     * to a RCU callback, but user might expect that this interface
>> -     * will finish its job completely once qmp command returns result
>> -     * to the user
>> -     */
>> -    drain_call_rcu();
>> -
>>       if (!dev) {
>> +        /*
>> +         * Drain all pending RCU callbacks. This is done because
>> +         * some bus related operations can delay a device removal
>> +         * (in this case this can happen if device is added and then
>> +         * removed due to a configuration error)
>> +         * to a RCU callback, but user might expect that this interface
>> +         * will finish its job completely once qmp command returns result
>> +         * to the user
>> +         */
>> +        drain_call_rcu();
>> +
>>           qemu_opts_del(opts);
>>           return;
>>       }
>> -- 
>> 2.34.1
>>
>>
> 

-- 
Best regards,
Vladimir
Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
Posted by Vladimir Sementsov-Ogievskiy 6 months ago
[add Michael]

On 03.11.23 13:56, Dmitrii Gavrilov wrote:
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. However, the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, so pending RCU callbacks were also waited for
> on the happy path, degrading the overall performance of qmp_device_add.
> 
> This patch moves the drain_call_rcu call under the handling of a
> qdev_device_add failure.
> 
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>


> ---
>   system/qdev-monitor.c | 23 +++++++++++------------
>   1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
> index 1b8005a..dc7b02d 100644
> --- a/system/qdev-monitor.c
> +++ b/system/qdev-monitor.c
> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
>           return;
>       }
>       dev = qdev_device_add(opts, errp);
> -
> -    /*
> -     * Drain all pending RCU callbacks. This is done because
> -     * some bus related operations can delay a device removal
> -     * (in this case this can happen if device is added and then
> -     * removed due to a configuration error)
> -     * to a RCU callback, but user might expect that this interface
> -     * will finish its job completely once qmp command returns result
> -     * to the user
> -     */
> -    drain_call_rcu();
> -
>       if (!dev) {
> +        /*
> +         * Drain all pending RCU callbacks. This is done because
> +         * some bus related operations can delay a device removal
> +         * (in this case this can happen if device is added and then
> +         * removed due to a configuration error)
> +         * to a RCU callback, but user might expect that this interface
> +         * will finish its job completely once qmp command returns result
> +         * to the user
> +         */
> +        drain_call_rcu();
> +
>           qemu_opts_del(opts);
>           return;
>       }

-- 
Best regards,
Vladimir