From: Vladimir Sementsov-Ogievskiy
To: qemu-block@nongnu.org, qemu-devel@nongnu.org
Cc: kwolf@redhat.com, peter.maydell@linaro.org, vsementsov@virtuozzo.com,
    famz@redhat.com, lirans@il.ibm.com, quintela@redhat.com,
    jsnow@redhat.com, armbru@redhat.com, mreitz@redhat.com,
    stefanha@redhat.com, den@openvz.org, amit.shah@redhat.com,
    pbonzini@redhat.com, dgilbert@redhat.com
Subject: [Qemu-devel] [PATCH v7 03/16] migration: split common postcopy out of ram postcopy
Date: Mon, 10 Jul 2017 19:30:16 +0300
Message-Id: <20170710163029.129912-4-vsementsov@virtuozzo.com>
In-Reply-To: <20170710163029.129912-1-vsementsov@virtuozzo.com>
References: <20170710163029.129912-1-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.11.1

Split common postcopy stuff from ram postcopy stuff.

Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Dr. David Alan Gilbert
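
The new migrate_postcopy() helper is intentionally a thin wrapper around
migrate_postcopy_ram() for now: it gives the rest of the migration code a
single "is any postcopy user enabled?" predicate that later patches in this
series can widen to cover non-RAM postcopy users such as dirty bitmaps. As a
rough sketch only (migrate_dirty_bitmaps() is a hypothetical stand-in for
whatever capability check a later patch adds), the helper would eventually
become the union of all postcopy users:

bool migrate_postcopy(void)
{
    /* Every capability that needs the postcopy phase: postcopy-ram today,
     * with further users (e.g. dirty bitmaps) OR-ed in by later patches.
     */
    return migrate_postcopy_ram() || migrate_dirty_bitmaps();
}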
---
 migration/migration.c | 39 ++++++++++++++++++++++++++-------------
 migration/migration.h |  2 ++
 migration/savevm.c    | 48 +++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 67 insertions(+), 22 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 51ccd1a4c5..d3a2fd405a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1221,6 +1221,11 @@ bool migrate_postcopy_ram(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
 }
 
+bool migrate_postcopy(void)
+{
+    return migrate_postcopy_ram();
+}
+
 bool migrate_auto_converge(void)
 {
     MigrationState *s;
@@ -1577,9 +1582,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * need to tell the destination to throw any pages it's already received
      * that are dirty
      */
-    if (ram_postcopy_send_discard_bitmap(ms)) {
-        error_report("postcopy send discard bitmap failed");
-        goto fail;
+    if (migrate_postcopy_ram()) {
+        if (ram_postcopy_send_discard_bitmap(ms)) {
+            error_report("postcopy send discard bitmap failed");
+            goto fail;
+        }
     }
 
     /*
@@ -1588,8 +1595,10 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * wrap their state up here
      */
     qemu_file_set_rate_limit(ms->to_dst_file, INT64_MAX);
-    /* Ping just for debugging, helps line traces up */
-    qemu_savevm_send_ping(ms->to_dst_file, 2);
+    if (migrate_postcopy_ram()) {
+        /* Ping just for debugging, helps line traces up */
+        qemu_savevm_send_ping(ms->to_dst_file, 2);
+    }
 
     /*
      * While loading the device state we may trigger page transfer
@@ -1614,7 +1623,9 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
     qemu_savevm_send_postcopy_listen(fb);
 
     qemu_savevm_state_complete_precopy(fb, false, false);
-    qemu_savevm_send_ping(fb, 3);
+    if (migrate_postcopy_ram()) {
+        qemu_savevm_send_ping(fb, 3);
+    }
 
     qemu_savevm_send_postcopy_run(fb);
 
@@ -1649,11 +1660,13 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
 
     qemu_mutex_unlock_iothread();
 
-    /*
-     * Although this ping is just for debug, it could potentially be
-     * used for getting a better measurement of downtime at the source.
-     */
-    qemu_savevm_send_ping(ms->to_dst_file, 4);
+    if (migrate_postcopy_ram()) {
+        /*
+         * Although this ping is just for debug, it could potentially be
+         * used for getting a better measurement of downtime at the source.
+         */
+        qemu_savevm_send_ping(ms->to_dst_file, 4);
+    }
 
     if (migrate_release_ram()) {
         ram_postcopy_migrated_memory_release(ms);
@@ -1831,7 +1844,7 @@ static void *migration_thread(void *opaque)
         qemu_savevm_send_ping(s->to_dst_file, 1);
     }
 
-    if (migrate_postcopy_ram()) {
+    if (migrate_postcopy()) {
         /*
          * Tell the destination that we *might* want to do postcopy later;
          * if the other end can't do postcopy it should fail now, nice and
@@ -1864,7 +1877,7 @@ static void *migration_thread(void *opaque)
         if (pending_size && pending_size >= threshold_size) {
             /* Still a significant amount to transfer */
 
-            if (migrate_postcopy_ram() &&
+            if (migrate_postcopy() &&
                 s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
                 pend_nonpost <= threshold_size &&
                 atomic_read(&s->start_postcopy)) {
diff --git a/migration/migration.h b/migration/migration.h
index 148c9facbc..1d974bacce 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -165,6 +165,8 @@ bool migration_is_blocked(Error **errp);
 bool migration_in_postcopy(void);
 MigrationState *migrate_get_current(void);
 
+bool migrate_postcopy(void);
+
 bool migrate_release_ram(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
diff --git a/migration/savevm.c b/migration/savevm.c
index 37da83509f..cf79e1d3ac 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -89,7 +89,7 @@ static struct mig_cmd_args {
     [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
     [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
     [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
-    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
+    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
     [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
     [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
@@ -98,6 +98,23 @@ static struct mig_cmd_args {
     [MIG_CMD_MAX]              = { .len = -1, .name = "MAX" },
 };
 
+/* Note for MIG_CMD_POSTCOPY_ADVISE:
+ * The format of arguments is depending on postcopy mode:
+ * - postcopy RAM only
+ *      uint64_t host page size
+ *      uint64_t taget page size
+ *
+ * - postcopy RAM and postcopy dirty bitmaps
+ *      format is the same as for postcopy RAM only
+ *
+ * - postcopy dirty bitmaps only
+ *      Nothing. Command length field is 0.
+ *
+ * Be careful: adding a new postcopy entity with some other parameters should
+ * not break format self-description ability. Good way is to introduce some
+ * generic extendable format with an exception for two old entities.
+ */
+
 static int announce_self_create(uint8_t *buf,
                                 uint8_t *mac_addr)
 {
@@ -861,12 +878,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
 /* Send prior to any postcopy transfer */
 void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
-    uint64_t tmp[2];
-    tmp[0] = cpu_to_be64(ram_pagesize_summary());
-    tmp[1] = cpu_to_be64(qemu_target_page_size());
+    if (migrate_postcopy_ram()) {
+        uint64_t tmp[2];
+        tmp[0] = cpu_to_be64(ram_pagesize_summary());
+        tmp[1] = cpu_to_be64(qemu_target_page_size());
 
-    trace_qemu_savevm_send_postcopy_advise();
-    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
+        trace_qemu_savevm_send_postcopy_advise();
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
+                                 16, (uint8_t *)tmp);
+    } else {
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
+    }
 }
 
 /* Sent prior to starting the destination running in postcopy, discard pages
@@ -1352,6 +1374,10 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
         return -1;
     }
 
+    if (!migrate_postcopy_ram()) {
+        return 0;
+    }
+
     if (!postcopy_ram_supported_by_host()) {
         postcopy_state_set(POSTCOPY_INCOMING_NONE);
         return -1;
@@ -1562,7 +1588,9 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
          * A rare case, we entered listen without having to do any discards,
          * so do the setup that's normally done at the time of the 1st discard.
          */
-        postcopy_ram_prepare_discard(mis);
+        if (migrate_postcopy_ram()) {
+            postcopy_ram_prepare_discard(mis);
+        }
     }
 
     /*
@@ -1570,8 +1598,10 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
      * However, at this point the CPU shouldn't be running, and the IO
      * shouldn't be doing anything yet so don't actually expect requests
      */
-    if (postcopy_ram_enable_notify(mis)) {
-        return -1;
+    if (migrate_postcopy_ram()) {
+        if (postcopy_ram_enable_notify(mis)) {
+            return -1;
+        }
     }
 
     if (mis->have_listen_thread) {
-- 
2.11.1
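
With MIG_CMD_POSTCOPY_ADVISE now declared variable-length (.len = -1), the
command's payload on the wire is either the two big-endian 64-bit page-size
words (when postcopy RAM is enabled) or empty (when only non-RAM postcopy
users such as dirty bitmaps are in play). The stand-alone sketch below shows
how a receiver could interpret such a payload; it is illustrative only, not
the QEMU implementation, and the names AdviseInfo, be64_read and
parse_postcopy_advise are invented here:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool has_ram_params;       /* true when the 16-byte RAM payload is present */
    uint64_t host_page_sizes;  /* summary of host page sizes */
    uint64_t target_page_size;
} AdviseInfo;

/* Read a big-endian 64-bit value from a byte buffer. */
static uint64_t be64_read(const uint8_t *p)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++) {
        v = (v << 8) | p[i];
    }
    return v;
}

/*
 * Interpret a POSTCOPY_ADVISE payload of 'len' bytes: 16 bytes when
 * postcopy-ram is in use, 0 bytes when only non-RAM postcopy users are
 * enabled.  Returns 0 on success, -1 on an unrecognised format.
 */
static int parse_postcopy_advise(const uint8_t *buf, uint16_t len,
                                 AdviseInfo *info)
{
    if (len == 0) {
        info->has_ram_params = false;   /* nothing further to validate */
        return 0;
    }
    if (len != 2 * sizeof(uint64_t)) {
        return -1;
    }
    info->has_ram_params = true;
    info->host_page_sizes = be64_read(buf);
    info->target_page_size = be64_read(buf + 8);
    return 0;
}

On the send side this mirrors the patch's qemu_savevm_send_postcopy_advise():
length 16 with the two cpu_to_be64() words when migrate_postcopy_ram() is
true, length 0 with a NULL buffer otherwise.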