From: Juan Quintela
To: qemu-devel@nongnu.org
Cc: lvivier@redhat.com, Xiao Guangrong, dgilbert@redhat.com, peterx@redhat.com
Date: Thu, 13 Sep 2018 14:53:36 +0200
Message-Id: <20180913125343.10912-6-quintela@redhat.com>
In-Reply-To: <20180913125343.10912-1-quintela@redhat.com>
References: <20180913125343.10912-1-quintela@redhat.com>
Subject: [Qemu-devel] [PULL 05/12] migration: do not flush_compressed_data at the end of iteration

From: Xiao Guangrong

flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads are idle until the migration
feeds new requests to them. Reducing the number of calls to it improves
throughput and uses CPU resources more effectively.

We do not need to flush all threads at the end of every iteration: the
compressed data can be kept locally until the memory block changes or
the memory migration starts over. In the latter case we will meet a
dirtied page that may still exist in a compression thread's ring.
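For readers without the migration code at hand, the control flow the
patch moves to looks roughly like the following. This is a minimal,
compilable sketch, not QEMU code: RAMStateSketch,
find_dirty_block_sketch() and the stand-in flush_compressed_data() are
hypothetical names that only mirror the shape of the real
find_dirty_block() path.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        bool complete_round;   /* set once the scan wraps past the last block */
        size_t cur_block;      /* index of the memory block being scanned */
        size_t nr_blocks;      /* total number of memory blocks */
    } RAMStateSketch;

    /* Stand-in for the real flush: it has to wait for every compression
     * thread to drain its ring, which is why calling it less often helps. */
    static void flush_compressed_data(RAMStateSketch *rs)
    {
        (void)rs;
    }

    /* One step of the dirty-block search.  The flush happens only when
     * the scan wraps around to the first block, not at the end of each
     * iteration. */
    static void find_dirty_block_sketch(RAMStateSketch *rs)
    {
        rs->cur_block++;
        if (rs->cur_block == rs->nr_blocks) {
            /* Starting over: a page dirtied during the previous round
             * may still sit compressed in a worker's ring, so flush it
             * out before the stale copy can overwrite the fresh page on
             * the destination. */
            flush_compressed_data(rs);
            rs->cur_block = 0;
            rs->complete_round = true;
        }
    }

    int main(void)
    {
        RAMStateSketch rs = { false, 0, 4 };
        int i;

        for (i = 0; i < 8; i++) {
            find_dirty_block_sketch(&rs);   /* flushes only on wrap-around */
        }
        return rs.complete_round ? 0 : 1;
    }

In the sketch the expensive synchronization point runs once per complete
round over the blocks instead of once per ram_save_iterate() call, which
is the throughput win the commit message describes.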
Signed-off-by: Xiao Guangrong
Reviewed-by: Juan Quintela
Message-Id: <20180906070101.27280-2-xiaoguangrong@tencent.com>
Signed-off-by: Juan Quintela
---
 migration/ram.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 2add09174d..e152831254 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1996,17 +1996,22 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
         pss->page = 0;
         pss->block = QLIST_NEXT_RCU(pss->block, next);
         if (!pss->block) {
+            /*
+             * If memory migration starts over, we will meet a dirtied page
+             * which may still exist in a compression thread's ring, so we
+             * should flush the compressed data to make sure the new page
+             * is not overwritten by the old one in the destination.
+             *
+             * Also, if xbzrle is on, stop using the data compression at this
+             * point. In theory, xbzrle can do better than compression.
+             */
+            flush_compressed_data(rs);
+
             /* Hit the end of the list */
             pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
             /* Flag that we've looped */
             pss->complete_round = true;
             rs->ram_bulk_stage = false;
-            if (migrate_use_xbzrle()) {
-                /* If xbzrle is on, stop using the data compression at this
-                 * point. In theory, xbzrle can do better than compression.
-                 */
-                flush_compressed_data(rs);
-            }
         }
         /* Didn't find anything this time, but try again on the new block */
         *again = true;
@@ -3219,7 +3224,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         }
         i++;
     }
-    flush_compressed_data(rs);
     rcu_read_unlock();

     /*
-- 
2.17.1