From: Jason Wang
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Jason Wang, zhanghailiang, Li Zhijian, Zhang Chen
Date: Wed, 26 Sep 2018 11:16:34 +0800
Message-Id: <20180926031650.8892-10-jasowang@redhat.com>
In-Reply-To: <20180926031650.8892-1-jasowang@redhat.com>
References: <20180926031650.8892-1-jasowang@redhat.com>
Subject: [Qemu-devel] [PULL 09/25] COLO: Flush memory data from ram cache

From: Zhang Chen

While the VM is running, the PVM may dirty some pages. Those dirty pages are
transferred to the SVM and stored into the SVM's RAM cache at the next
checkpoint, so the content of the SVM's RAM cache always matches the PVM's
memory right after a checkpoint.

Instead of flushing the whole RAM cache into the SVM's memory, do this in a
more efficient way: only flush the pages dirtied by the PVM since the last
checkpoint. In this way, the SVM's memory is kept identical to the PVM's.
In addition, the RAM cache must be flushed before the device state is loaded.

Signed-off-by: zhanghailiang
Signed-off-by: Li Zhijian
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Jason Wang
---
 migration/ram.c        | 37 +++++++++++++++++++++++++++++++++++++
 migration/trace-events |  2 ++
 2 files changed, 39 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 5929b61180..4d0c5e5501 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3925,6 +3925,39 @@ static bool postcopy_is_running(void)
     return ps >= POSTCOPY_INCOMING_LISTENING && ps < POSTCOPY_INCOMING_END;
 }
 
+/*
+ * Flush content of RAM cache into SVM's memory.
+ * Only flush the pages that are dirtied by PVM or SVM or both.
+ */
+static void colo_flush_ram_cache(void)
+{
+    RAMBlock *block = NULL;
+    void *dst_host;
+    void *src_host;
+    unsigned long offset = 0;
+
+    trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
+    rcu_read_lock();
+    block = QLIST_FIRST_RCU(&ram_list.blocks);
+
+    while (block) {
+        offset = migration_bitmap_find_dirty(ram_state, block, offset);
+
+        if (offset << TARGET_PAGE_BITS >= block->used_length) {
+            offset = 0;
+            block = QLIST_NEXT_RCU(block, next);
+        } else {
+            migration_bitmap_clear_dirty(ram_state, block, offset);
+            dst_host = block->host + (offset << TARGET_PAGE_BITS);
+            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
+            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+        }
+    }
+
+    rcu_read_unlock();
+    trace_colo_flush_ram_cache_end();
+}
+
 static int ram_load(QEMUFile *f, void *opaque, int version_id)
 {
     int flags = 0, ret = 0, invalid_flags = 0;
@@ -4101,6 +4134,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     ret |= wait_for_decompress_done();
     rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
+
+    if (!ret && migration_incoming_in_colo_state()) {
+        colo_flush_ram_cache();
+    }
     return ret;
 }
 
diff --git a/migration/trace-events b/migration/trace-events
index fa0ff3f3bf..bd2d0cd25a 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -102,6 +102,8 @@ ram_dirty_bitmap_sync_start(void) ""
 ram_dirty_bitmap_sync_wait(void) ""
 ram_dirty_bitmap_sync_complete(void) ""
 ram_state_resume_prepare(uint64_t v) "%" PRId64
+colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
+colo_flush_ram_cache_end(void) ""
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
-- 
2.17.1
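
For illustration only, the following minimal, self-contained sketch (plain C,
not QEMU code; the Block type, the page size, and the bitmap layout are
simplified assumptions) shows the idea the commit message describes: walk a
dirty bitmap and copy only the pages dirtied since the last checkpoint from
the cache into the live memory, clearing each bit as it is flushed.

/*
 * Hypothetical sketch of a dirty-bitmap-driven cache flush.
 * Not QEMU code: types, names and page size are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_BITS 12
#define PAGE_SIZE (1UL << PAGE_BITS)
#define BITS_PER_LONG (8 * sizeof(unsigned long))

typedef struct {
    uint8_t *host;         /* live memory (flush destination)            */
    uint8_t *cache;        /* RAM cache filled at checkpoint time        */
    unsigned long *dirty;  /* one bit per page, set when a page dirtied  */
    size_t used_length;    /* block size in bytes                        */
} Block;

/* Return the index of the next dirty page at or after 'start', or npages. */
static size_t find_next_dirty(const Block *b, size_t start)
{
    size_t npages = b->used_length >> PAGE_BITS;
    for (size_t i = start; i < npages; i++) {
        if (b->dirty[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG))) {
            return i;
        }
    }
    return npages;
}

/* Copy every dirty page from the cache into live memory, clearing its bit. */
static void flush_cache(Block *b)
{
    size_t npages = b->used_length >> PAGE_BITS;
    size_t page = 0;

    while ((page = find_next_dirty(b, page)) < npages) {
        b->dirty[page / BITS_PER_LONG] &= ~(1UL << (page % BITS_PER_LONG));
        memcpy(b->host + (page << PAGE_BITS),
               b->cache + (page << PAGE_BITS),
               PAGE_SIZE);
        page++;
    }
}

int main(void)
{
    enum { NPAGES = 4 };
    Block b = {
        .host = calloc(NPAGES, PAGE_SIZE),
        .cache = calloc(NPAGES, PAGE_SIZE),
        .dirty = calloc(1, sizeof(unsigned long)),
        .used_length = NPAGES * PAGE_SIZE,
    };

    /* Pretend page 2 was dirtied since the last checkpoint. */
    memset(b.cache + 2 * PAGE_SIZE, 0xAB, PAGE_SIZE);
    b.dirty[0] |= 1UL << 2;

    flush_cache(&b);
    printf("page 2 after flush: 0x%02x\n", b.host[2 * PAGE_SIZE]);
    return 0;
}

Only page 2 is copied here; clean pages are never touched, which mirrors the
motivation for flushing only the pages dirtied since the last checkpoint
rather than the whole cache.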