From: zhanghailiang <zhang.zhanghailiang@huawei.com>
To: qemu-devel@nongnu.org
Cc: zhanghailiang <zhang.zhanghailiang@huawei.com>, xiecl.fnst@cn.fujitsu.com, zhangchen.fnst@cn.fujitsu.com, lizhijian@cn.fujitsu.com, quintela@redhat.com
Subject: [Qemu-devel] [PATCH v2 09/18] COLO: Flush memory data from ram cache
Date: Sat, 22 Apr 2017 16:25:49 +0800
Message-ID: <1492849558-17540-10-git-send-email-zhang.zhanghailiang@huawei.com>
In-Reply-To: <1492849558-17540-1-git-send-email-zhang.zhanghailiang@huawei.com>
References: <1492849558-17540-1-git-send-email-zhang.zhanghailiang@huawei.com>

While the VM is running, the PVM (primary VM) may dirty some pages; we
transfer those dirty pages to the SVM (secondary VM) and store them in the
SVM's RAM cache at the next checkpoint. The content of the SVM's RAM cache
therefore always matches the PVM's memory right after a checkpoint.

Instead of flushing the entire content of the RAM cache into the SVM's
memory, we do this more efficiently: only flush the pages that the PVM has
dirtied since the last checkpoint. That is enough to keep the SVM's memory
identical to the PVM's. Note that the RAM cache must be flushed before the
device state is loaded.
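For illustration, a minimal self-contained model of this dirty-bitmap-driven
flush might look as follows. This is only a sketch, not part of the patch:
a single flat block and a one-word bitmap stand in for QEMU's RAMBlock list
and migration dirty bitmap, and the names Block, test_and_clear_dirty and
flush_ram_cache are hypothetical stand-ins for RAMBlock, the migration
bitmap helpers, and the colo_flush_ram_cache() added by the diff below.

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Hypothetical stand-in for a RAMBlock with an attached cache. */
typedef struct {
    uint8_t *host;         /* the SVM's live memory                     */
    uint8_t *cache;        /* mirror of the PVM's memory (colo_cache)   */
    unsigned long *dirty;  /* one bit per page, set when a page changed */
    size_t used_length;    /* size of the block in bytes                */
} Block;

/* Test a page's dirty bit and clear it, returning the old value. */
static bool test_and_clear_dirty(unsigned long *bm, unsigned long page)
{
    unsigned long mask = 1UL << (page % BITS_PER_LONG);
    unsigned long *word = &bm[page / BITS_PER_LONG];
    bool was_set = (*word & mask) != 0;

    *word &= ~mask;
    return was_set;
}

/* Copy only the dirtied pages from the cache into live memory. */
static void flush_ram_cache(Block *b)
{
    unsigned long npages = b->used_length >> PAGE_SHIFT;
    unsigned long page;

    for (page = 0; page < npages; page++) {
        if (test_and_clear_dirty(b->dirty, page)) {
            memcpy(b->host + (page << PAGE_SHIFT),
                   b->cache + (page << PAGE_SHIFT),
                   PAGE_SIZE);
        }
    }
}

int main(void)
{
    enum { NPAGES = 4 };
    static uint8_t host[NPAGES * PAGE_SIZE];
    static uint8_t cache[NPAGES * PAGE_SIZE];
    unsigned long dirty = 0;
    Block b = { host, cache, &dirty, sizeof(host) };

    /* Pretend the PVM dirtied page 2 since the last checkpoint. */
    memset(cache + 2 * PAGE_SIZE, 0xAB, PAGE_SIZE);
    dirty |= 1UL << 2;

    flush_ram_cache(&b);
    assert(host[2 * PAGE_SIZE] == 0xAB); /* dirty page was copied   */
    assert(host[0] == 0);                /* clean pages stay as-is  */
    assert(dirty == 0);                  /* bitmap is fully cleared */
    return 0;
}

The real patch does the same walk per RAMBlock under RCU, reusing
migration_bitmap_find_dirty()/migration_bitmap_clear_dirty() so that the
existing migration dirty bitmap drives the copy.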
Cc: Juan Quintela
Signed-off-by: zhanghailiang
Signed-off-by: Li Zhijian
Reviewed-by: Dr. David Alan Gilbert
---
 include/migration/migration.h |  1 +
 migration/ram.c               | 40 ++++++++++++++++++++++++++++++++++++++++
 migration/trace-events        |  2 ++
 3 files changed, 43 insertions(+)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index ba765eb..2aa7654 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -364,4 +364,5 @@ PostcopyState postcopy_state_set(PostcopyState new_state);
 /* ram cache */
 int colo_init_ram_cache(void);
 void colo_release_ram_cache(void);
+void colo_flush_ram_cache(void);
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index 0653a24..df10d4b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2602,6 +2602,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     bool postcopy_running = postcopy_state_get() >= POSTCOPY_INCOMING_LISTENING;
     /* ADVISE is earlier, it shows the source has the postcopy capability on */
     bool postcopy_advised = postcopy_state_get() >= POSTCOPY_INCOMING_ADVISE;
+    bool need_flush = false;

     seq_iter++;

@@ -2636,6 +2637,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
             /* After going into COLO, we should load the Page into colo_cache */
             if (migration_incoming_in_colo_state()) {
                 host = colo_cache_from_block_offset(block, addr);
+                need_flush = true;
             } else {
                 host = host_from_ram_block_offset(block, addr);
             }
@@ -2742,6 +2744,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     wait_for_decompress_done();
     rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
+
+    if (!ret && ram_cache_enable && need_flush) {
+        colo_flush_ram_cache();
+    }
     return ret;
 }

@@ -2810,6 +2816,40 @@ void colo_release_ram_cache(void)
     rcu_read_unlock();
 }

+/*
+ * Flush content of RAM cache into SVM's memory.
+ * Only flush the pages that have been dirtied by the PVM, the SVM, or both.
+ */
+void colo_flush_ram_cache(void)
+{
+    RAMBlock *block = NULL;
+    void *dst_host;
+    void *src_host;
+    unsigned long offset = 0;
+
+    trace_colo_flush_ram_cache_begin(ram_state.migration_dirty_pages);
+    rcu_read_lock();
+    block = QLIST_FIRST_RCU(&ram_list.blocks);
+
+    while (block) {
+        offset = migration_bitmap_find_dirty(&ram_state, block, offset);
+        migration_bitmap_clear_dirty(&ram_state, block, offset);
+
+        if (offset << TARGET_PAGE_BITS >= block->used_length) {
+            offset = 0;
+            block = QLIST_NEXT_RCU(block, next);
+        } else {
+            dst_host = block->host + (offset << TARGET_PAGE_BITS);
+            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
+            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+        }
+    }
+
+    rcu_read_unlock();
+    trace_colo_flush_ram_cache_end();
+    assert(ram_state.migration_dirty_pages == 0);
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_live_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,
diff --git a/migration/trace-events b/migration/trace-events
index b8f01a2..93f4337 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -72,6 +72,8 @@ ram_discard_range(const char *rbname, uint64_t start, size_t len) "%s: start: %"
 ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
 ram_postcopy_send_discard_bitmap(void) ""
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: %zx len: %zx"
+colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
+colo_flush_ram_cache_end(void) ""

 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
-- 
1.8.3.1