From nobody Sun Mar 22 15:40:02 2026
From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: Juraj Marcin, Kirti Wankhede, "Maciej S. Szmigiero",
 Daniel P. Berrangé, Joao Martins, Alex Williamson, Yishai Hadas,
 Fabiano Rosas, Pranav Tyagi, peterx@redhat.com, Zhiyi Guo,
 Markus Armbruster, Avihai Horon, Cédric Le Goater, Halil Pasic,
 Christian Borntraeger, Jason Herne, Eric Farman, Matthew Rosato,
 Richard Henderson, Ilya Leoshkevich, David Hildenbrand, Cornelia Huck,
 Eric Blake, Vladimir Sementsov-Ogievskiy, John Snow
Subject: [PATCH RFC 05/12] migration/treewide: Merge @state_pending_{exact|estimate} APIs
Date: Thu, 19 Mar 2026 19:12:55 -0400
Message-ID: <20260319231302.123135-6-peterx@redhat.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20260319231302.123135-1-peterx@redhat.com>
References: <20260319231302.123135-1-peterx@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

These two APIs largely duplicate each other; several users even register
the same function for both hooks.  Providing two hooks is also slightly
error prone, since it makes it easy for a module to report different
things through them.  In reality they should always report the same data;
the only difference is whether to take a fast path (possibly trading away
some accuracy) when the slow path might be too slow.

Merge them into one API, and instead provide a bool telling whether the
query is a fast query or not.  No functional change intended.

Also export qemu_savevm_query_pending(); new users of the query should
call this API directly.  That will happen very soon.

Cc: Halil Pasic
Cc: Christian Borntraeger
Cc: Jason Herne
Cc: Eric Farman
Cc: Matthew Rosato
Cc: Richard Henderson
Cc: Ilya Leoshkevich
Cc: David Hildenbrand
Cc: Cornelia Huck
Cc: Eric Blake
Cc: Vladimir Sementsov-Ogievskiy
Cc: John Snow
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/main.rst  |  5 ++-
 docs/devel/migration/vfio.rst  |  9 ++----
 include/migration/register.h   | 52 ++++++++++--------------------
 migration/savevm.h             |  3 ++
 hw/s390x/s390-stattrib.c       |  8 ++---
 hw/vfio/migration.c            | 58 +++++++++++++++++++---------------
 migration/block-dirty-bitmap.c |  9 ++----
 migration/ram.c                | 32 ++++++-------------
 migration/savevm.c             | 43 ++++++++++++-------------
 hw/vfio/trace-events           |  3 +-
 10 files changed, 93 insertions(+), 129 deletions(-)

diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index 234d280249..22c5910d5c 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -519,9 +519,8 @@ An iterative device must provide:
     data we must save.  The core migration code will use this to
     determine when to pause the CPUs and complete the migration.
 
-  - A ``state_pending_estimate`` function that indicates how much more
-    data we must save.  When the estimated amount is smaller than the
-    threshold, we call ``state_pending_exact``.
+  - A ``save_query_pending`` function that indicates how much more
+    data we must save.
 
   - A ``save_live_iterate`` function should send a chunk of data until
     the point that stream bandwidth limits tell it to stop.  Each call
diff --git a/docs/devel/migration/vfio.rst b/docs/devel/migration/vfio.rst
index 0790e5031d..33768c877c 100644
--- a/docs/devel/migration/vfio.rst
+++ b/docs/devel/migration/vfio.rst
@@ -50,13 +50,8 @@ VFIO implements the device hooks for the iterative approach as follows:
 * A ``load_setup`` function that sets the VFIO device on the destination in
   _RESUMING state.
 
-* A ``state_pending_estimate`` function that reports an estimate of the
-  remaining pre-copy data that the vendor driver has yet to save for the VFIO
-  device.
-
-* A ``state_pending_exact`` function that reads pending_bytes from the vendor
-  driver, which indicates the amount of data that the vendor driver has yet to
-  save for the VFIO device.
+* A ``save_query_pending`` function that reports the remaining pre-copy
+  data that the vendor driver has yet to save for the VFIO device.
 
 * An ``is_active_iterate`` function that indicates ``save_live_iterate`` is
   active only when the VFIO device is in pre-copy states.
diff --git a/include/migration/register.h b/include/migration/register.h
index d0f37f5f43..2320c3a981 100644
--- a/include/migration/register.h
+++ b/include/migration/register.h
@@ -16,6 +16,15 @@
 
 #include "hw/core/vmstate-if.h"
 
+typedef struct MigPendingData {
+    /* How many bytes are pending for precopy / stopcopy? */
+    uint64_t precopy_bytes;
+    /* How many bytes are pending that can be transferred in postcopy? */
+    uint64_t postcopy_bytes;
+    /* Is this a fastpath query (which can be inaccurate)? */
+    bool fastpath;
+} MigPendingData;
+
 /**
  * struct SaveVMHandlers: handler structure to finely control
  * migration of complex subsystems and devices, such as RAM, block and
@@ -197,46 +206,17 @@ typedef struct SaveVMHandlers {
     bool (*save_postcopy_prepare)(QEMUFile *f, void *opaque, Error **errp);
 
     /**
-     * @state_pending_estimate
-     *
-     * This estimates the remaining data to transfer
-     *
-     * Sum of @can_postcopy and @must_postcopy is the whole amount of
-     * pending data.
-     *
-     * @opaque: data pointer passed to register_savevm_live()
-     * @must_precopy: amount of data that must be migrated in precopy
-     *                or in stopped state, i.e. that must be migrated
-     *                before target start.
-     * @can_postcopy: amount of data that can be migrated in postcopy
-     *                or in stopped state, i.e. after target start.
-     *                Some can also be migrated during precopy (RAM).
-     *                Some must be migrated after source stops
-     *                (block-dirty-bitmap)
-     */
-    void (*state_pending_estimate)(void *opaque, uint64_t *must_precopy,
-                                   uint64_t *can_postcopy);
-
-    /**
-     * @state_pending_exact
-     *
-     * This calculates the exact remaining data to transfer
+     * @save_query_pending
      *
-     * Sum of @can_postcopy and @must_postcopy is the whole amount of
-     * pending data.
+     * This estimates the remaining data to transfer on the source side.
+     * Modules whose query can be slow should implement both a fastpath
+     * and a slowpath version of it (for more information please check
+     * the pending->fastpath field).
      *
      * @opaque: data pointer passed to register_savevm_live()
-     * @must_precopy: amount of data that must be migrated in precopy
-     *                or in stopped state, i.e. that must be migrated
-     *                before target start.
-     * @can_postcopy: amount of data that can be migrated in postcopy
-     *                or in stopped state, i.e. after target start.
-     *                Some can also be migrated during precopy (RAM).
-     *                Some must be migrated after source stops
-     *                (block-dirty-bitmap)
+     * @pending: pointer to a MigPendingData struct
      */
-    void (*state_pending_exact)(void *opaque, uint64_t *must_precopy,
-                                uint64_t *can_postcopy);
+    void (*save_query_pending)(void *opaque, MigPendingData *pending);
 
     /**
      * @load_state
diff --git a/migration/savevm.h b/migration/savevm.h
index b3d1e8a13c..b116933bce 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -14,6 +14,8 @@
 #ifndef MIGRATION_SAVEVM_H
 #define MIGRATION_SAVEVM_H
 
+#include "migration/register.h"
+
 #define QEMU_VM_FILE_MAGIC           0x5145564d
 #define QEMU_VM_FILE_VERSION_COMPAT  0x00000002
 #define QEMU_VM_FILE_VERSION         0x00000003
@@ -43,6 +45,7 @@ int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy);
 void qemu_savevm_state_cleanup(void);
 void qemu_savevm_state_complete_postcopy(QEMUFile *f);
 int qemu_savevm_state_complete_precopy(MigrationState *s);
+void qemu_savevm_query_pending(MigPendingData *pending, bool fastpath);
 void qemu_savevm_state_pending_exact(uint64_t *must_precopy,
                                      uint64_t *can_postcopy);
 void qemu_savevm_state_pending_estimate(uint64_t *must_precopy,
diff --git a/hw/s390x/s390-stattrib.c b/hw/s390x/s390-stattrib.c
index d808ece3b9..b1ec51c77a 100644
--- a/hw/s390x/s390-stattrib.c
+++ b/hw/s390x/s390-stattrib.c
@@ -187,15 +187,14 @@ static int cmma_save_setup(QEMUFile *f, void *opaque, Error **errp)
     return 0;
 }
 
-static void cmma_state_pending(void *opaque, uint64_t *must_precopy,
-                               uint64_t *can_postcopy)
+static void cmma_state_pending(void *opaque, MigPendingData *pending)
 {
     S390StAttribState *sas = S390_STATTRIB(opaque);
     S390StAttribClass *sac = S390_STATTRIB_GET_CLASS(sas);
     long long res = sac->get_dirtycount(sas);
 
     if (res >= 0) {
-        *must_precopy += res;
+        pending->precopy_bytes += res;
     }
 }
 
@@ -340,8 +339,7 @@ static SaveVMHandlers savevm_s390_stattrib_handlers = {
     .save_setup = cmma_save_setup,
     .save_live_iterate = cmma_save_iterate,
     .save_complete = cmma_save_complete,
-    .state_pending_exact = cmma_state_pending,
-    .state_pending_estimate = cmma_state_pending,
+    .save_query_pending = cmma_state_pending,
     .save_cleanup = cmma_save_cleanup,
     .load_state = cmma_load,
     .is_active = cmma_active,
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 827d3ded63..c054c749b0 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -570,42 +570,51 @@ static void vfio_save_cleanup(void *opaque)
     trace_vfio_save_cleanup(vbasedev->name);
 }
 
-static void vfio_state_pending_estimate(void *opaque, uint64_t *must_precopy,
-                                        uint64_t *can_postcopy)
+static void vfio_state_pending_sync(VFIODevice *vbasedev)
 {
-    VFIODevice *vbasedev = opaque;
     VFIOMigration *migration = vbasedev->migration;
 
-    if (!vfio_device_state_is_precopy(vbasedev)) {
-        return;
-    }
+    vfio_query_stop_copy_size(vbasedev);
 
-    *must_precopy +=
-        migration->precopy_init_size + migration->precopy_dirty_size;
+    if (vfio_device_state_is_precopy(vbasedev)) {
+        vfio_query_precopy_size(migration);
+    }
 
-    trace_vfio_state_pending_estimate(vbasedev->name, *must_precopy,
-                                      *can_postcopy,
-                                      migration->precopy_init_size,
-                                      migration->precopy_dirty_size);
+    /*
+     * In all cases, all PRECOPY data should be no more than STOPCOPY data.
+     * Otherwise we have a problem.  So far, only dump some errors.
+     */
+    if (migration->precopy_init_size + migration->precopy_dirty_size <
+        migration->stopcopy_size) {
+        error_report_once("%s: wrong pending data (init=%" PRIx64
+                          ", dirty=%"PRIx64", stop=%"PRIx64")",
+                          __func__, migration->precopy_init_size,
+                          migration->precopy_dirty_size,
+                          migration->stopcopy_size);
+    }
 }
 
-static void vfio_state_pending_exact(void *opaque, uint64_t *must_precopy,
-                                     uint64_t *can_postcopy)
+static void vfio_state_pending(void *opaque, MigPendingData *pending)
 {
     VFIODevice *vbasedev = opaque;
     VFIOMigration *migration = vbasedev->migration;
+    uint64_t remain;
 
-    vfio_query_stop_copy_size(vbasedev);
-    *must_precopy += migration->stopcopy_size;
-
-    if (vfio_device_state_is_precopy(vbasedev)) {
-        vfio_query_precopy_size(migration);
+    if (pending->fastpath) {
+        if (!vfio_device_state_is_precopy(vbasedev)) {
+            return;
+        }
+        remain = migration->precopy_init_size + migration->precopy_dirty_size;
+    } else {
+        vfio_state_pending_sync(vbasedev);
+        remain = migration->stopcopy_size;
     }
 
-    trace_vfio_state_pending_exact(vbasedev->name, *must_precopy, *can_postcopy,
-                                   migration->stopcopy_size,
-                                   migration->precopy_init_size,
-                                   migration->precopy_dirty_size);
+    pending->precopy_bytes += remain;
+
+    trace_vfio_state_pending(vbasedev->name, migration->stopcopy_size,
+                             migration->precopy_init_size,
+                             migration->precopy_dirty_size);
 }
 
 static bool vfio_is_active_iterate(void *opaque)
@@ -850,8 +859,7 @@ static const SaveVMHandlers savevm_vfio_handlers = {
     .save_prepare = vfio_save_prepare,
     .save_setup = vfio_save_setup,
     .save_cleanup = vfio_save_cleanup,
-    .state_pending_estimate = vfio_state_pending_estimate,
-    .state_pending_exact = vfio_state_pending_exact,
+    .save_query_pending = vfio_state_pending,
     .is_active_iterate = vfio_is_active_iterate,
     .save_live_iterate = vfio_save_iterate,
     .save_complete = vfio_save_complete_precopy,
diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
index a061aad817..376a9b43ac 100644
--- a/migration/block-dirty-bitmap.c
+++ b/migration/block-dirty-bitmap.c
@@ -766,9 +766,7 @@ static int dirty_bitmap_save_complete(QEMUFile *f, void *opaque)
     return 0;
 }
 
-static void dirty_bitmap_state_pending(void *opaque,
-                                       uint64_t *must_precopy,
-                                       uint64_t *can_postcopy)
+static void dirty_bitmap_state_pending(void *opaque, MigPendingData *data)
 {
     DBMSaveState *s = &((DBMState *)opaque)->save;
     SaveBitmapState *dbms;
@@ -788,7 +786,7 @@ static void dirty_bitmap_state_pending(void *opaque,
 
     trace_dirty_bitmap_state_pending(pending);
 
-    *can_postcopy += pending;
+    data->postcopy_bytes += pending;
 }
 
 /* First occurrence of this bitmap. It should be created if doesn't exist */
@@ -1250,8 +1248,7 @@ static SaveVMHandlers savevm_dirty_bitmap_handlers = {
     .save_setup = dirty_bitmap_save_setup,
     .save_complete = dirty_bitmap_save_complete,
     .has_postcopy = dirty_bitmap_has_postcopy,
-    .state_pending_exact = dirty_bitmap_state_pending,
-    .state_pending_estimate = dirty_bitmap_state_pending,
+    .save_query_pending = dirty_bitmap_state_pending,
     .save_live_iterate = dirty_bitmap_save_iterate,
     .is_active_iterate = dirty_bitmap_is_active_iterate,
     .load_state = dirty_bitmap_load,
diff --git a/migration/ram.c b/migration/ram.c
index 979751f61b..89f761a471 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3443,30 +3443,17 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     return qemu_fflush(f);
 }
 
-static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy,
-                                       uint64_t *can_postcopy)
-{
-    RAMState **temp = opaque;
-    RAMState *rs = *temp;
-
-    uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
-
-    if (migrate_postcopy_ram()) {
-        /* We can do postcopy, and all the data is postcopiable */
-        *can_postcopy += remaining_size;
-    } else {
-        *must_precopy += remaining_size;
-    }
-}
-
-static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
-                                    uint64_t *can_postcopy)
+static void ram_state_pending(void *opaque, MigPendingData *pending)
 {
     RAMState **temp = opaque;
     RAMState *rs = *temp;
     uint64_t remaining_size;
 
-    if (!migration_in_postcopy()) {
+    /*
+     * The bitmap sync can be skipped either with: (1) a fast query, or
+     * (2) postcopy already started (in which case no new dirty pages
+     * will be generated anymore).
+     */
+    if (!pending->fastpath && !migration_in_postcopy()) {
         bql_lock();
         WITH_RCU_READ_LOCK_GUARD() {
             migration_bitmap_sync_precopy(false);
@@ -3478,9 +3465,9 @@ static void ram_state_pending(void *opaque, uint64_t *must_precopy,
 
     if (migrate_postcopy_ram()) {
         /* We can do postcopy, and all the data is postcopiable */
-        *can_postcopy += remaining_size;
+        pending->postcopy_bytes += remaining_size;
     } else {
-        *must_precopy += remaining_size;
+        pending->precopy_bytes += remaining_size;
     }
 }
 
@@ -4703,8 +4690,7 @@ static SaveVMHandlers savevm_ram_handlers = {
     .save_live_iterate = ram_save_iterate,
     .save_complete = ram_save_complete,
     .has_postcopy = ram_has_postcopy,
-    .state_pending_exact = ram_state_pending_exact,
-    .state_pending_estimate = ram_state_pending_estimate,
+    .save_query_pending = ram_state_pending,
     .load_state = ram_load,
     .save_cleanup = ram_save_cleanup,
     .load_setup = ram_load_setup,
diff --git a/migration/savevm.c b/migration/savevm.c
index dd58f2a705..6268e68382 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1762,46 +1762,45 @@ int qemu_savevm_state_complete_precopy(MigrationState *s)
     return qemu_fflush(f);
 }
 
-/* Give an estimate of the amount left to be transferred,
- * the result is split into the amount for units that can and
- * for units that can't do postcopy.
- */
-void qemu_savevm_state_pending_estimate(uint64_t *must_precopy,
-                                        uint64_t *can_postcopy)
+void qemu_savevm_query_pending(MigPendingData *pending, bool fastpath)
 {
     SaveStateEntry *se;
 
-    *must_precopy = 0;
-    *can_postcopy = 0;
+    pending->precopy_bytes = 0;
+    pending->postcopy_bytes = 0;
+    pending->fastpath = fastpath;
 
     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
-        if (!se->ops || !se->ops->state_pending_estimate) {
+        if (!se->ops || !se->ops->save_query_pending) {
             continue;
         }
         if (!qemu_savevm_state_active(se)) {
             continue;
         }
-        se->ops->state_pending_estimate(se->opaque, must_precopy, can_postcopy);
+        se->ops->save_query_pending(se->opaque, pending);
     }
 }
 
+void qemu_savevm_state_pending_estimate(uint64_t *must_precopy,
+                                        uint64_t *can_postcopy)
+{
+    MigPendingData pending;
+
+    qemu_savevm_query_pending(&pending, true);
+
+    *must_precopy = pending.precopy_bytes;
+    *can_postcopy = pending.postcopy_bytes;
+}
+
 void qemu_savevm_state_pending_exact(uint64_t *must_precopy,
                                      uint64_t *can_postcopy)
 {
-    SaveStateEntry *se;
+    MigPendingData pending;
 
-    *must_precopy = 0;
-    *can_postcopy = 0;
+    qemu_savevm_query_pending(&pending, false);
 
-    QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
-        if (!se->ops || !se->ops->state_pending_exact) {
-            continue;
-        }
-        if (!qemu_savevm_state_active(se)) {
-            continue;
-        }
-        se->ops->state_pending_exact(se->opaque, must_precopy, can_postcopy);
-    }
+    *must_precopy = pending.precopy_bytes;
+    *can_postcopy = pending.postcopy_bytes;
 }
 
 void qemu_savevm_state_cleanup(void)
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 846e3625c5..7cf5a9eb2d 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -173,8 +173,7 @@ vfio_save_device_config_state(const char *name) " (%s)"
 vfio_save_iterate(const char *name, uint64_t precopy_init_size, uint64_t precopy_dirty_size) " (%s) precopy initial size %"PRIu64" precopy dirty size %"PRIu64
 vfio_save_iterate_start(const char *name) " (%s)"
 vfio_save_setup(const char *name, uint64_t data_buffer_size) " (%s) data buffer size %"PRIu64
-vfio_state_pending_estimate(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t precopy_init_size, uint64_t precopy_dirty_size) " (%s) precopy %"PRIu64" postcopy %"PRIu64" precopy initial size %"PRIu64" precopy dirty size %"PRIu64
-vfio_state_pending_exact(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t stopcopy_size, uint64_t precopy_init_size, uint64_t precopy_dirty_size) " (%s) precopy %"PRIu64" postcopy %"PRIu64" stopcopy size %"PRIu64" precopy initial size %"PRIu64" precopy dirty size %"PRIu64
+vfio_state_pending(const char *name, uint64_t stopcopy_size, uint64_t precopy_init_size, uint64_t precopy_dirty_size) " (%s) stopcopy size %"PRIu64" precopy initial size %"PRIu64" precopy dirty size %"PRIu64
 vfio_vmstate_change(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s"
 vfio_vmstate_change_prepare(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s"
 
-- 
2.50.1