David Alan Gilbert (git)" To: qemu-devel@nongnu.org, xiaoguangrong@tencent.com, joserz@linux.ibm.com, wei@redhat.com, thuth@redhat.com, marcandre.lureau@redhat.com, fli@suse.com Date: Wed, 26 Sep 2018 18:12:25 +0100 Message-Id: <20180926171236.45203-6-dgilbert@redhat.com> In-Reply-To: <20180926171236.45203-1-dgilbert@redhat.com> References: <20180926171236.45203-1-dgilbert@redhat.com> X-Scanned-By: MIMEDefang 2.84 on 10.5.11.27 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.48]); Wed, 26 Sep 2018 17:12:57 +0000 (UTC) X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] X-Received-From: 209.132.183.28 Subject: [Qemu-devel] [PULL 05/16] migration: do not flush_compressed_data at the end of iteration X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: quintela@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RDMRC_1 RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Xiao Guangrong flush_compressed_data() needs to wait all compression threads to finish their work, after that all threads are free until the migration feeds new request to them, reducing its call can improve the throughput and use CPU resource more effectively We do not need to flush all threads at the end of iteration, the data can be kept locally until the memory block is changed or memory migration starts over in that case we will meet a dirtied page which may still exists in compression threads's ring Signed-off-by: Xiao Guangrong Reviewed-by: Juan Quintela Message-Id: <20180906070101.27280-2-xiaoguangrong@tencent.com> Signed-off-by: Juan Quintela Signed-off-by: Dr. David Alan Gilbert --- migration/ram.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 43360f6483..2c039892d3 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1996,17 +1996,22 @@ static bool find_dirty_block(RAMState *rs, PageSear= chStatus *pss, bool *again) pss->page =3D 0; pss->block =3D QLIST_NEXT_RCU(pss->block, next); if (!pss->block) { + /* + * If memory migration starts over, we will meet a dirtied page + * which may still exists in compression threads's ring, so we + * should flush the compressed data to make sure the new page + * is not overwritten by the old one in the destination. + * + * Also If xbzrle is on, stop using the data compression at th= is + * point. In theory, xbzrle can do better than compression. + */ + flush_compressed_data(rs); + /* Hit the end of the list */ pss->block =3D QLIST_FIRST_RCU(&ram_list.blocks); /* Flag that we've looped */ pss->complete_round =3D true; rs->ram_bulk_stage =3D false; - if (migrate_use_xbzrle()) { - /* If xbzrle is on, stop using the data compression at this - * point. In theory, xbzrle can do better than compression. - */ - flush_compressed_data(rs); - } } /* Didn't find anything this time, but try again on the new block = */ *again =3D true; @@ -3219,7 +3224,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque) } i++; } - flush_compressed_data(rs); rcu_read_unlock(); =20 /* --=20 2.17.1