From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: lvivier@redhat.com, Vladimir Sementsov-Ogievskiy, dgilbert@redhat.com, peterx@redhat.com
Date: Fri, 22 Sep 2017 01:08:09 +0200
Message-Id: <20170921230812.7095-16-quintela@redhat.com>
In-Reply-To: <20170921230812.7095-1-quintela@redhat.com>
References: <20170921230812.7095-1-quintela@redhat.com>
Subject: [Qemu-devel] [PULL 15/18] migration: split common postcopy out of ram postcopy

From: Vladimir Sementsov-Ogievskiy

Split the common postcopy code out of the RAM-specific postcopy code:
postcopy_start() and the incoming postcopy handlers now perform the
RAM-only steps (discard bitmap, debug pings, userfault notification)
only when the postcopy-ram capability is set, and the new
migrate_postcopy() helper becomes the common check for any postcopy use.

Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Dr. David Alan Gilbert
Reviewed-by: Juan Quintela
Signed-off-by: Juan Quintela
---
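Editor's note, not part of the commit: migrate_postcopy() is the single common
check the migration thread uses for capability negotiation and for deciding
when to switch to postcopy, while the RAM-only actions stay behind
migrate_postcopy_ram(). A minimal sketch of how a later postcopy user could
hook into the helper follows; the migrate_dirty_bitmaps() check is
hypothetical here, since this patch only introduces the wrapper around
migrate_postcopy_ram().

/*
 * Sketch only: in this patch migrate_postcopy() simply returns
 * migrate_postcopy_ram().  A follow-up postcopy client would extend it to
 * mean "some postcopy entity is enabled"; migrate_dirty_bitmaps() below is
 * a hypothetical capability check, not code from this series.
 */
bool migrate_postcopy(void)
{
    return migrate_postcopy_ram() || migrate_dirty_bitmaps();
}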
 migration/migration.c | 39 ++++++++++++++++++++++++++-------------
 migration/migration.h |  2 ++
 migration/savevm.c    | 48 +++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 67 insertions(+), 22 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 067dd929c4..266cd39c36 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1443,6 +1443,11 @@ bool migrate_postcopy_ram(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
 }
 
+bool migrate_postcopy(void)
+{
+    return migrate_postcopy_ram();
+}
+
 bool migrate_auto_converge(void)
 {
     MigrationState *s;
@@ -1826,9 +1831,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * need to tell the destination to throw any pages it's already received
      * that are dirty
      */
-    if (ram_postcopy_send_discard_bitmap(ms)) {
-        error_report("postcopy send discard bitmap failed");
-        goto fail;
+    if (migrate_postcopy_ram()) {
+        if (ram_postcopy_send_discard_bitmap(ms)) {
+            error_report("postcopy send discard bitmap failed");
+            goto fail;
+        }
     }
 
     /*
@@ -1837,8 +1844,10 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * wrap their state up here
      */
     qemu_file_set_rate_limit(ms->to_dst_file, INT64_MAX);
-    /* Ping just for debugging, helps line traces up */
-    qemu_savevm_send_ping(ms->to_dst_file, 2);
+    if (migrate_postcopy_ram()) {
+        /* Ping just for debugging, helps line traces up */
+        qemu_savevm_send_ping(ms->to_dst_file, 2);
+    }
 
     /*
      * While loading the device state we may trigger page transfer
@@ -1863,7 +1872,9 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
     qemu_savevm_send_postcopy_listen(fb);
 
     qemu_savevm_state_complete_precopy(fb, false, false);
-    qemu_savevm_send_ping(fb, 3);
+    if (migrate_postcopy_ram()) {
+        qemu_savevm_send_ping(fb, 3);
+    }
 
     qemu_savevm_send_postcopy_run(fb);
 
@@ -1898,11 +1909,13 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
 
     qemu_mutex_unlock_iothread();
 
-    /*
-     * Although this ping is just for debug, it could potentially be
-     * used for getting a better measurement of downtime at the source.
-     */
-    qemu_savevm_send_ping(ms->to_dst_file, 4);
+    if (migrate_postcopy_ram()) {
+        /*
+         * Although this ping is just for debug, it could potentially be
+         * used for getting a better measurement of downtime at the source.
+         */
+        qemu_savevm_send_ping(ms->to_dst_file, 4);
+    }
 
     if (migrate_release_ram()) {
         ram_postcopy_migrated_memory_release(ms);
@@ -2080,7 +2093,7 @@ static void *migration_thread(void *opaque)
         qemu_savevm_send_ping(s->to_dst_file, 1);
     }
 
-    if (migrate_postcopy_ram()) {
+    if (migrate_postcopy()) {
         /*
          * Tell the destination that we *might* want to do postcopy later;
          * if the other end can't do postcopy it should fail now, nice and
@@ -2113,7 +2126,7 @@ static void *migration_thread(void *opaque)
         if (pending_size && pending_size >= threshold_size) {
             /* Still a significant amount to transfer */
 
-            if (migrate_postcopy_ram() &&
+            if (migrate_postcopy() &&
                 s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
                 pend_nonpost <= threshold_size &&
                 atomic_read(&s->start_postcopy)) {
diff --git a/migration/migration.h b/migration/migration.h
index 641290f51b..b83cceadc4 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -169,6 +169,8 @@ bool migration_is_blocked(Error **errp);
 bool migration_in_postcopy(void);
 MigrationState *migrate_get_current(void);
 
+bool migrate_postcopy(void);
+
 bool migrate_release_ram(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
diff --git a/migration/savevm.c b/migration/savevm.c
index 9a48b7b4cb..231474da34 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -89,7 +89,7 @@ static struct mig_cmd_args {
     [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
     [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
     [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
-    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
+    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
     [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
     [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
@@ -98,6 +98,23 @@ static struct mig_cmd_args {
     [MIG_CMD_MAX]              = { .len = -1, .name = "MAX" },
 };
 
+/* Note for MIG_CMD_POSTCOPY_ADVISE:
+ * The format of the arguments depends on the postcopy mode:
+ *   - postcopy RAM only
+ *       uint64_t host page size
+ *       uint64_t target page size
+ *
+ *   - postcopy RAM and postcopy dirty bitmaps
+ *       format is the same as for postcopy RAM only
+ *
+ *   - postcopy dirty bitmaps only
+ *       Nothing. The command length field is 0.
+ *
+ * Be careful: adding a new postcopy entity with other parameters should
+ * not break the format's self-description ability. A good way is to introduce
+ * a generic extensible format with an exception for the two old entities.
+ */
+
 static int announce_self_create(uint8_t *buf,
                                 uint8_t *mac_addr)
 {
@@ -861,12 +878,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
 /* Send prior to any postcopy transfer */
 void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
-    uint64_t tmp[2];
-    tmp[0] = cpu_to_be64(ram_pagesize_summary());
-    tmp[1] = cpu_to_be64(qemu_target_page_size());
+    if (migrate_postcopy_ram()) {
+        uint64_t tmp[2];
+        tmp[0] = cpu_to_be64(ram_pagesize_summary());
+        tmp[1] = cpu_to_be64(qemu_target_page_size());
 
-    trace_qemu_savevm_send_postcopy_advise();
-    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
+        trace_qemu_savevm_send_postcopy_advise();
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
+                                 16, (uint8_t *)tmp);
+    } else {
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
+    }
 }
 
 /* Sent prior to starting the destination running in postcopy, discard pages
@@ -1354,6 +1376,10 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
         return -1;
     }
 
+    if (!migrate_postcopy_ram()) {
+        return 0;
+    }
+
     if (!postcopy_ram_supported_by_host()) {
         postcopy_state_set(POSTCOPY_INCOMING_NONE);
         return -1;
@@ -1564,7 +1590,9 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
          * A rare case, we entered listen without having to do any discards,
          * so do the setup that's normally done at the time of the 1st discard.
          */
-        postcopy_ram_prepare_discard(mis);
+        if (migrate_postcopy_ram()) {
+            postcopy_ram_prepare_discard(mis);
+        }
     }
 
     /*
@@ -1572,8 +1600,10 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
      * However, at this point the CPU shouldn't be running, and the IO
      * shouldn't be doing anything yet so don't actually expect requests
      */
-    if (postcopy_ram_enable_notify(mis)) {
-        return -1;
+    if (migrate_postcopy_ram()) {
+        if (postcopy_ram_enable_notify(mis)) {
+            return -1;
+        }
     }
 
     if (mis->have_listen_thread) {
-- 
2.13.5
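Editor's note, not part of the patch: with .len = -1, MIG_CMD_POSTCOPY_ADVISE
becomes variable-length, so the destination can tell from the command length
alone which postcopy entities the source enabled: 16 bytes means RAM postcopy
(host page size summary plus target page size, as sent above), 0 bytes means
no RAM postcopy. A minimal sketch of that dispatch follows; it assumes the
migration/savevm.c context (QEMUFile, qemu_get_be64(), error_report()), and
the helper name and its length parameter are illustrative only, since in this
patch loadvm_postcopy_handle_advise() still reads the two fields itself after
the early migrate_postcopy_ram() check.

/*
 * Sketch only: postcopy_advise_check_lengths() is a hypothetical helper
 * showing how the advise payload could be dispatched on its length.
 */
static int postcopy_advise_check_lengths(QEMUFile *f, uint16_t len)
{
    uint64_t remote_pagesize_summary, remote_tps;

    switch (len) {
    case 0:
        /* No postcopy RAM advertised: nothing else in the payload. */
        return 0;
    case 16:
        /* Postcopy RAM: host page size summary + target page size. */
        remote_pagesize_summary = qemu_get_be64(f);
        remote_tps = qemu_get_be64(f);
        /* ... compare with the local ram_pagesize_summary() and
         * qemu_target_page_size(), as the existing handler does ... */
        (void)remote_pagesize_summary;
        (void)remote_tps;
        return 0;
    default:
        error_report("CMD_POSTCOPY_ADVISE: invalid length %d", len);
        return -1;
    }
}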