From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Andrea Arcangeli, Juan Quintela, Alexey Perevalov, peterx@redhat.com, "Dr. David Alan Gilbert"
Date: Fri, 28 Jul 2017 16:06:35 +0800
Message-Id: <1501229198-30588-27-git-send-email-peterx@redhat.com>
In-Reply-To: <1501229198-30588-1-git-send-email-peterx@redhat.com>
References: <1501229198-30588-1-git-send-email-peterx@redhat.com>
Subject: [Qemu-devel] [RFC 26/29] migration: synchronize dirty bitmap for resume

This patch implements the first part of the core RAM resume logic for postcopy; ram_resume_prepare() is provided for the work.

When a migration is interrupted by network failure, the dirty bitmap on the source side becomes meaningless: even if a dirty bit has been cleared, the page that was sent may still have been lost on its way to the destination. So instead of continuing the migration with the old dirty bitmap on the source, we ask the destination side to send back its received bitmap, then invert it to use as our initial dirty bitmap.
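The inversion step described above can be sketched as follows. This is a standalone illustration, not the actual QEMU code; the function and parameter names (dirty_bitmap_from_received, nbits) are hypothetical. Every bit set in the destination's received bitmap marks a page that arrived safely, so the source's new dirty bitmap is simply the complement, with any padding bits past the end of the block cleared:

```c
#include <stdint.h>
#include <stddef.h>

#define BITS_PER_WORD (8 * sizeof(unsigned long))

/*
 * Sketch: derive the initial dirty bitmap for a resumed postcopy
 * migration from the bitmap of pages the destination confirmed as
 * received.  A page the destination did NOT receive must be treated
 * as dirty and re-sent, hence the bitwise inversion.
 *
 * "nbits" is the number of pages in the ramblock.
 */
static void dirty_bitmap_from_received(unsigned long *dirty,
                                       const unsigned long *received,
                                       size_t nbits)
{
    size_t nwords = (nbits + BITS_PER_WORD - 1) / BITS_PER_WORD;

    for (size_t i = 0; i < nwords; i++) {
        dirty[i] = ~received[i];
    }

    /* Clear the padding bits beyond nbits in the last word */
    if (nbits % BITS_PER_WORD) {
        dirty[nwords - 1] &= (1UL << (nbits % BITS_PER_WORD)) - 1;
    }
}
```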
The source side send thread will issue the MIG_CMD_RECV_BITMAP requests, once per ramblock, to ask for the received bitmap. On the destination side, MIG_RP_MSG_RECV_BITMAP will be issued, along with the requested bitmap. The data will be received on the return-path thread of the source, and the main migration thread will be notified when all the ramblock bitmaps are synchronized.

One issue to be solved here is how to synchronize the source send thread with the return-path thread. A semaphore cannot really work here, since we cannot guarantee the order of wait/post (it is possible that the reply arrives very quickly, even before the send thread starts to wait). So a condition variable is used instead to make sure the ordering is always correct.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c  |  4 ++++
 migration/migration.h  |  4 ++++
 migration/ram.c        | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++
 migration/trace-events |  1 +
 4 files changed, 77 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 6cb0ad3..93fbc96 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1093,6 +1093,8 @@ static void migrate_fd_cleanup(void *opaque)
 
     qemu_sem_destroy(&s->postcopy_pause_sem);
     qemu_sem_destroy(&s->postcopy_pause_rp_sem);
+    qemu_mutex_destroy(&s->resume_lock);
+    qemu_cond_destroy(&s->resume_cond);
 }
 
 void migrate_fd_error(MigrationState *s, const Error *error)
@@ -1238,6 +1240,8 @@ MigrationState *migrate_init(void)
     s->error = NULL;
     qemu_sem_init(&s->postcopy_pause_sem, 0);
     qemu_sem_init(&s->postcopy_pause_rp_sem, 0);
+    qemu_mutex_init(&s->resume_lock);
+    qemu_cond_init(&s->resume_cond);
 
     migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
 
diff --git a/migration/migration.h b/migration/migration.h
index 2a3f905..c270f4c 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -159,6 +159,10 @@ struct MigrationState
     /* Needed by postcopy-pause state */
     QemuSemaphore postcopy_pause_sem;
     QemuSemaphore postcopy_pause_rp_sem;
+
+    /* Used to sync-up between main send thread and rp-thread */
+    QemuMutex resume_lock;
+    QemuCond resume_cond;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
diff --git a/migration/ram.c b/migration/ram.c
index d543483..c695b13 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -46,6 +46,7 @@
 #include "exec/ram_addr.h"
 #include "qemu/rcu_queue.h"
 #include "migration/colo.h"
+#include "savevm.h"
 
 /***********************************************************/
 /* ram save/restore */
@@ -256,6 +257,8 @@ struct RAMState {
     RAMBlock *last_req_rb;
     /* Queue of outstanding page requests from the destination */
     QemuMutex src_page_req_mutex;
+    /* Ramblock counts to sync dirty bitmap. Only used for recovery */
+    int ramblock_to_sync;
     QSIMPLEQ_HEAD(src_page_requests, RAMSrcPageRequest) src_page_requests;
 };
 typedef struct RAMState RAMState;
@@ -2731,6 +2734,57 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     return ret;
 }
 
+/* Sync all the dirty bitmap with destination VM. */
+static int ram_dirty_bitmap_sync_all(MigrationState *s, RAMState *rs)
+{
+    RAMBlock *block;
+    QEMUFile *file = s->to_dst_file;
+    int ramblock_count = 0;
+
+    trace_ram_dirty_bitmap_sync("start");
+
+    /*
+     * We need to take the resume lock to make sure that the send
+     * thread (current thread) and the rp-thread will do their work in
+     * order.
+     */
+    qemu_mutex_lock(&s->resume_lock);
+
+    /* Request for receive-bitmap for each block */
+    RAMBLOCK_FOREACH(block) {
+        ramblock_count++;
+        qemu_savevm_send_recv_bitmap(file, block->idstr);
+    }
+
+    /* Init the ramblock count to total */
+    atomic_set(&rs->ramblock_to_sync, ramblock_count);
+
+    trace_ram_dirty_bitmap_sync("wait-bitmap");
+
+    /* Wait until all the ramblocks' dirty bitmap synced */
+    while (rs->ramblock_to_sync) {
+        qemu_cond_wait(&s->resume_cond, &s->resume_lock);
+    }
+
+    trace_ram_dirty_bitmap_sync("completed");
+
+    qemu_mutex_unlock(&s->resume_lock);
+
+    return 0;
+}
+
+static void ram_dirty_bitmap_reload_notify(MigrationState *s)
+{
+    qemu_mutex_lock(&s->resume_lock);
+    atomic_dec(&ram_state->ramblock_to_sync);
+    if (ram_state->ramblock_to_sync == 0) {
+        /* Make sure the other thread gets the latest */
+        trace_ram_dirty_bitmap_sync("notify-send");
+        qemu_cond_signal(&s->resume_cond);
+    }
+    qemu_mutex_unlock(&s->resume_lock);
+}
+
 /*
  * Read the received bitmap, revert it as the initial dirty bitmap.
  * This is only used when the postcopy migration is paused but wants
@@ -2776,9 +2830,22 @@ int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *block)
 
     trace_ram_dirty_bitmap_reload(block->idstr);
 
+    /*
+     * We succeeded to sync bitmap for current ramblock. If this is
+     * the last one to sync, we need to notify the main send thread.
+     */
+    ram_dirty_bitmap_reload_notify(s);
+
     return 0;
 }
 
+static int ram_resume_prepare(MigrationState *s, void *opaque)
+{
+    RAMState *rs = *(RAMState **)opaque;
+
+    return ram_dirty_bitmap_sync_all(s, rs);
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,
@@ -2789,6 +2856,7 @@ static SaveVMHandlers savevm_ram_handlers = {
     .save_cleanup = ram_save_cleanup,
     .load_setup = ram_load_setup,
     .load_cleanup = ram_load_cleanup,
+    .resume_prepare = ram_resume_prepare,
 };
 
 void ram_mig_init(void)
diff --git a/migration/trace-events b/migration/trace-events
index 0fb2d1e..15ff1bf 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -80,6 +80,7 @@ ram_postcopy_send_discard_bitmap(void) ""
 ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: %" PRIx64 " host: %p"
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: %zx len: %zx"
 ram_dirty_bitmap_reload(char *str) "%s"
+ram_dirty_bitmap_sync(const char *str) "%s"
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
-- 
2.7.4
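The send-thread / return-path-thread handshake that the patch builds on the condition variable can be illustrated outside QEMU with plain pthreads. This is a hedged standalone sketch, not the QEMU implementation: all names (sync_all, rp_thread, ramblock_to_sync) are illustrative, and pthread_mutex/pthread_cond stand in for QEMU's qemu_mutex/qemu_cond wrappers. The point it demonstrates is the one made in the commit message: because the counter is checked under the lock, the handshake stays correct even if every reply arrives before the send thread starts waiting, which a bare semaphore wait/post pair cannot guarantee.

```c
#include <pthread.h>

/* Shared state guarded by resume_lock, mirroring the patch's design */
static pthread_mutex_t resume_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t resume_cond = PTHREAD_COND_INITIALIZER;
static int ramblock_to_sync;

/* Return-path side: one decrement per received bitmap reply */
static void *rp_thread(void *arg)
{
    int replies = *(int *)arg;

    for (int i = 0; i < replies; i++) {
        pthread_mutex_lock(&resume_lock);
        if (--ramblock_to_sync == 0) {
            /* Last reply processed: wake the send thread */
            pthread_cond_signal(&resume_cond);
        }
        pthread_mutex_unlock(&resume_lock);
    }
    return NULL;
}

/* Send-thread side: issue the requests, then wait for all replies */
static void sync_all(int nblocks)
{
    pthread_t rp;

    pthread_mutex_lock(&resume_lock);
    /* "Request" phase: set the expected reply count under the lock */
    ramblock_to_sync = nblocks;
    pthread_create(&rp, NULL, rp_thread, &nblocks);

    /*
     * Safe even if all replies land before we wait: the counter is
     * rechecked under the lock on every wakeup (and before sleeping).
     */
    while (ramblock_to_sync) {
        pthread_cond_wait(&resume_cond, &resume_lock);
    }
    pthread_mutex_unlock(&resume_lock);
    pthread_join(rp, NULL);
}
```

Compile and link with -pthread.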