From: Fiona Ebner <f.ebner@proxmox.com>
To: qemu-devel@nongnu.org
Cc: quintela@redhat.com, peterx@redhat.com, leobras@redhat.com,
    eblake@redhat.com, vsementsov@yandex-team.ru, jsnow@redhat.com,
    stefanha@redhat.com, fam@euphon.net, qemu-block@nongnu.org,
    pbonzini@redhat.com, t.lamprecht@proxmox.com
Subject: [PATCH v2] migration: hold the BQL during setup
Date: Thu, 25 May 2023 18:47:26 +0200
Message-Id: <20230525164726.45176-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.39.2

This is intended to be a semantic revert of commit 9b09503752
("migration: run setup callbacks out of big lock"). There have been so
many changes since that commit (e.g. a new setup callback,
dirty_bitmap_save_setup(), that also needs to be adapted now) that
it's easier to do the revert manually.

For snapshots, the bdrv_writev_vmstate() function is used during setup
(in the QIOChannelBlock backing the QEMUFile), but not holding the BQL
while calling it could lead to an assertion failure. To understand
how, first note the following:

1. Generated coroutine wrappers for block layer functions spawn the
   coroutine and use AIO_WAIT_WHILE()/aio_poll() to wait for it (a
   simplified sketch of this wrapper pattern follows below).
2. If the host OS switches threads at an inconvenient time, it can
   happen that a bottom half scheduled for the main thread's
   AioContext is executed as part of a vCPU thread's aio_poll().
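To illustrate point 1, here is a simplified sketch of the shape such a
generated wrapper takes. It is not the literal output of QEMU's wrapper
generator; foo(), foo_co() and FooCo are hypothetical placeholders,
while qemu_in_coroutine(), qemu_coroutine_create(),
qemu_coroutine_enter(), aio_wait_kick() and AIO_WAIT_WHILE() are the
real primitives:

    /* The actual coroutine_fn implementation, assumed to exist elsewhere. */
    int coroutine_fn foo_co(void);

    typedef struct FooCo {
        int ret;
        bool in_progress;
    } FooCo;

    /* Coroutine entry point; runs the real implementation, then kicks
     * any AIO_WAIT_WHILE() waiter. */
    static void coroutine_fn foo_co_entry(void *opaque)
    {
        FooCo *s = opaque;

        s->ret = foo_co();
        s->in_progress = false;
        aio_wait_kick();
    }

    /* Wrapper callable from outside coroutine context. */
    int foo(void)
    {
        FooCo s = { .in_progress = true };
        Coroutine *co;

        if (qemu_in_coroutine()) {
            return foo_co();
        }
        co = qemu_coroutine_create(foo_co_entry, &s);
        qemu_coroutine_enter(co);
        /*
         * When the calling thread is not the AioContext's home thread,
         * AIO_WAIT_WHILE() contains
         *     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
         * which is exactly the assertion that fails in the scenario
         * below once the BQL has been dropped.
         */
        AIO_WAIT_WHILE(qemu_get_aio_context(), s.in_progress);
        return s.ret;
    }

The assertion holds as long as the caller keeps the BQL, which is what
this patch ensures for the setup phase.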
An example leading to the assertion failure is as follows:

main thread:
1. A snapshot-save QMP command gets issued.
2. snapshot_save_job_bh() is scheduled.

vCPU thread:
3. aio_poll() for the main thread's AioContext is called (e.g. when
   the guest writes to a pflash device, as part of blk_pwrite, which
   is a generated coroutine wrapper).
4. snapshot_save_job_bh() is executed as part of aio_poll().
5. qemu_savevm_state() is called.
6. qemu_mutex_unlock_iothread() is called. Now
   qemu_get_current_aio_context() returns 0x0.
7. bdrv_writev_vmstate() is executed during the usual savevm setup via
   qemu_fflush(). But this function is a generated coroutine wrapper,
   so it uses AIO_WAIT_WHILE. There, the assertion
   assert(qemu_get_current_aio_context() == qemu_get_aio_context());
   will fail.

To fix it, ensure that the BQL is held during setup. While it would
only be needed for snapshots, adapting migration too avoids additional
logic for conditional locking/unlocking in the setup callbacks.
Writing the header could (in theory) also trigger qemu_fflush() and
thus bdrv_writev_vmstate(), so the locked section also covers the
qemu_savevm_state_header() call, even for migration, for consistency.

The comment in ram_init_bitmaps() was introduced by 4987783400
("migration: fix incorrect memory_global_dirty_log_start outside BQL")
and is removed because it referred to the qemu_mutex_lock_iothread()
call.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/register.h   | 2 +-
 migration/block-dirty-bitmap.c | 3 ---
 migration/block.c              | 5 -----
 migration/migration.c          | 6 ++++++
 migration/ram.c                | 3 ---
 migration/savevm.c             | 2 --
 6 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/include/migration/register.h b/include/migration/register.h
index a8dfd8fefd..ed8c42063b 100644
--- a/include/migration/register.h
+++ b/include/migration/register.h
@@ -20,6 +20,7 @@ typedef struct SaveVMHandlers {
     /* This runs inside the iothread lock. */
     SaveStateHandler *save_state;
 
+    int (*save_setup)(QEMUFile *f, void *opaque);
     void (*save_cleanup)(void *opaque);
     int (*save_live_complete_postcopy)(QEMUFile *f, void *opaque);
     int (*save_live_complete_precopy)(QEMUFile *f, void *opaque);
@@ -45,7 +46,6 @@ typedef struct SaveVMHandlers {
     int (*save_live_iterate)(QEMUFile *f, void *opaque);
 
     /* This runs outside the iothread lock! */
-    int (*save_setup)(QEMUFile *f, void *opaque);
     /* Note for save_live_pending:
      * must_precopy:
      *  - must be migrated in precopy or in stopped state
diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
index 032fc5f405..03cb2e72ee 100644
--- a/migration/block-dirty-bitmap.c
+++ b/migration/block-dirty-bitmap.c
@@ -1214,9 +1214,7 @@ static int dirty_bitmap_save_setup(QEMUFile *f, void *opaque)
     DBMSaveState *s = &((DBMState *)opaque)->save;
     SaveBitmapState *dbms = NULL;
 
-    qemu_mutex_lock_iothread();
     if (init_dirty_bitmap_migration(s) < 0) {
-        qemu_mutex_unlock_iothread();
         return -1;
     }
 
@@ -1224,7 +1222,6 @@ static int dirty_bitmap_save_setup(QEMUFile *f, void *opaque)
         send_bitmap_start(f, s, dbms);
     }
     qemu_put_bitmap_flags(f, DIRTY_BITMAP_MIG_FLAG_EOS);
-    qemu_mutex_unlock_iothread();
     return 0;
 }
 
diff --git a/migration/block.c b/migration/block.c
index b9580a6c7e..4df7862f07 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -722,18 +722,13 @@ static int block_save_setup(QEMUFile *f, void *opaque)
     trace_migration_block_save("setup", block_mig_state.submitted,
                                block_mig_state.transferred);
 
-    qemu_mutex_lock_iothread();
     ret = init_blk_migration(f);
     if (ret < 0) {
-        qemu_mutex_unlock_iothread();
         return ret;
     }
 
     /* start track dirty blocks */
     ret = set_dirty_tracking();
-
-    qemu_mutex_unlock_iothread();
-
     if (ret) {
         return ret;
     }
diff --git a/migration/migration.c b/migration/migration.c
index 5de7f734b9..c1b3c515e9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2929,7 +2929,9 @@ static void *migration_thread(void *opaque)
     object_ref(OBJECT(s));
     update_iteration_initial_status(s);
 
+    qemu_mutex_lock_iothread();
     qemu_savevm_state_header(s->to_dst_file);
+    qemu_mutex_unlock_iothread();
 
     /*
      * If we opened the return path, we need to make sure dst has it
@@ -2957,7 +2959,9 @@ static void *migration_thread(void *opaque)
         qemu_savevm_send_colo_enable(s->to_dst_file);
     }
 
+    qemu_mutex_lock_iothread();
     qemu_savevm_state_setup(s->to_dst_file);
+    qemu_mutex_unlock_iothread();
 
     qemu_savevm_wait_unplug(s, MIGRATION_STATUS_SETUP,
                             MIGRATION_STATUS_ACTIVE);
@@ -3068,8 +3072,10 @@ static void *bg_migration_thread(void *opaque)
     ram_write_tracking_prepare();
 #endif
 
+    qemu_mutex_lock_iothread();
     qemu_savevm_state_header(s->to_dst_file);
     qemu_savevm_state_setup(s->to_dst_file);
+    qemu_mutex_unlock_iothread();
 
     qemu_savevm_wait_unplug(s, MIGRATION_STATUS_SETUP,
                             MIGRATION_STATUS_ACTIVE);
diff --git a/migration/ram.c b/migration/ram.c
index 9fb076fa58..1a2edef527 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2881,8 +2881,6 @@ static void migration_bitmap_clear_discarded_pages(RAMState *rs)
 
 static void ram_init_bitmaps(RAMState *rs)
 {
-    /* For memory_global_dirty_log_start below. */
-    qemu_mutex_lock_iothread();
     qemu_mutex_lock_ramlist();
 
     WITH_RCU_READ_LOCK_GUARD() {
@@ -2894,7 +2892,6 @@ static void ram_init_bitmaps(RAMState *rs)
         }
     }
     qemu_mutex_unlock_ramlist();
-    qemu_mutex_unlock_iothread();
 
     /*
      * After an eventual first bitmap sync, fixup the initial bitmap
diff --git a/migration/savevm.c b/migration/savevm.c
index 03795ce8dc..be97770620 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1624,10 +1624,8 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
     memset(&compression_counters, 0, sizeof(compression_counters));
     ms->to_dst_file = f;
 
-    qemu_mutex_unlock_iothread();
     qemu_savevm_state_header(f);
     qemu_savevm_state_setup(f);
-    qemu_mutex_lock_iothread();
 
     while (qemu_file_get_error(f) == 0) {
         if (qemu_savevm_state_iterate(f, false) > 0) {
-- 
2.39.2