From: Jeff Cody
To: qemu-block@nongnu.org
Date: Mon, 18 Dec 2017 16:08:13 -0500
Message-Id: <20171218210819.31576-5-jcody@redhat.com>
In-Reply-To: <20171218210819.31576-1-jcody@redhat.com>
References: <20171218210819.31576-1-jcody@redhat.com>
Subject: [Qemu-devel] [PULL 04/10] backup: simplify non-dirty bits progress processing
Cc: peter.maydell@linaro.org, jcody@redhat.com, vsementsov@virtuozzo.com, jsnow@redhat.com, qemu-devel@nongnu.org

From: Vladimir Sementsov-Ogievskiy

Set fake progress for non-dirty clusters during copy_bitmap initialization. This simplifies the code and allows further refactoring.

This patch changes the user's view of backup progress, but formally nothing changes: the progress hops are just moved to the beginning. It is really a matter of perspective: when do we actually skip clusters? We can say at the very beginning that we skip these clusters, and not think about them later. Of course, if we go through the disk sequentially, it is natural to say that we skip the clusters between each copied portion and the portions to its left and right. But even now, copying progress is not sequential, because of write notifiers. Future patches will introduce a new backup architecture that copies in several coroutines in parallel, so it will make no sense to publish fake progress piecemeal, in parallel with other copying requests.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: John Snow
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Jeff Cody
Message-id: 20171012135313.227864-5-vsementsov@virtuozzo.com
Signed-off-by: Jeff Cody
---
 block/backup.c | 18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index b8901ea..8ee2200 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -369,7 +369,6 @@ static int coroutine_fn backup_run_incremental(BackupBlockJob *job)
     int64_t offset;
     int64_t cluster;
     int64_t end;
-    int64_t last_cluster = -1;
     BdrvDirtyBitmapIter *dbi;
 
     granularity = bdrv_dirty_bitmap_granularity(job->sync_bitmap);
@@ -380,12 +379,6 @@ static int coroutine_fn backup_run_incremental(BackupBlockJob *job)
     while ((offset = bdrv_dirty_iter_next(dbi)) >= 0) {
         cluster = offset / job->cluster_size;
 
-        /* Fake progress updates for any clusters we skipped */
-        if (cluster != last_cluster + 1) {
-            job->common.offset += ((cluster - last_cluster - 1) *
-                                   job->cluster_size);
-        }
-
         for (end = cluster + clusters_per_iter; cluster < end; cluster++) {
             do {
                 if (yield_and_check(job)) {
@@ -407,14 +400,6 @@ static int coroutine_fn backup_run_incremental(BackupBlockJob *job)
         if (granularity < job->cluster_size) {
             bdrv_set_dirty_iter(dbi, cluster * job->cluster_size);
         }
-
-        last_cluster = cluster - 1;
-    }
-
-    /* Play some final catchup with the progress meter */
-    end = DIV_ROUND_UP(job->common.len, job->cluster_size);
-    if (last_cluster + 1 < end) {
-        job->common.offset += ((end - last_cluster - 1) * job->cluster_size);
     }
 
 out:
@@ -456,6 +441,9 @@ static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
         bdrv_set_dirty_iter(dbi, next_cluster * job->cluster_size);
     }
 
+    job->common.offset = job->common.len -
+                         hbitmap_count(job->copy_bitmap) * job->cluster_size;
+
     bdrv_dirty_iter_free(dbi);
 }
 
-- 
2.9.5