From nobody Mon Apr 29 05:08:37 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=virtuozzo.com ARC-Seal: i=1; a=rsa-sha256; t=1574878945; cv=none; d=zohomail.com; s=zohoarc; b=KY9f2in4/wvJIHEHV5qDvj9tpyH/D0bfn1dOrx148AyM5/8Ne2AtQyWYPmhBGhNUvk7imgy3+u6Qt/1gVvrrlN18tXhh0IysH6gwQpVXIoA7TwkPYqvsC7amMGhVisEmNuhLvIU1eOOMzMKACq+5jsMchqsF+MX7hVTiPmZJzaU= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1574878945; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=9/Yz1yTb4IPw7H7YX1GI4BEIsw7181GDQE+jWcMBTHk=; b=KzwFHz7MoNG4DQeS+gJIeF5LM7f28mk97WQ0t4SqLXGXyuxaK7lc4zik+YOrQ6lftoM6H3yncXro1UOGHT/sj5sTPYeJf/wIsScxBMyDbecJRHkZwxQG3I0Z9I3p5LSjoXquW9jwb3EVFlnwoUvwSTy2ysnXVR8GyNTSqGrMN4E= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 15748789457891002.2518690630861; Wed, 27 Nov 2019 10:22:25 -0800 (PST) Received: from localhost ([::1]:41486 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ia1xI-0002QI-9Y for importer@patchew.org; Wed, 27 Nov 2019 13:22:24 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]:56393) by lists.gnu.org 
with esmtp (Exim 4.90_1) (envelope-from ) id 1ia1kA-0006dl-R6 for qemu-devel@nongnu.org; Wed, 27 Nov 2019 13:08:52 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1ia1k9-0003s0-Cc for qemu-devel@nongnu.org; Wed, 27 Nov 2019 13:08:50 -0500
Received: from relay.sw.ru ([185.231.240.75]:49970) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1ia1k5-0003ik-Tc; Wed, 27 Nov 2019 13:08:46 -0500
Received: from vovaso.qa.sw.ru ([10.94.3.0] helo=kvm.qa.sw.ru) by relay.sw.ru with esmtp (Exim 4.92.3) (envelope-from ) id 1ia1k0-0002UU-JC; Wed, 27 Nov 2019 21:08:40 +0300
From: Vladimir Sementsov-Ogievskiy
To: qemu-block@nongnu.org
Subject: [PATCH v2 1/7] block/block-copy: specialcase first copy_range request
Date: Wed, 27 Nov 2019 21:08:34 +0300
Message-Id: <20191127180840.11937-2-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20191127180840.11937-1-vsementsov@virtuozzo.com>
References: <20191127180840.11937-1-vsementsov@virtuozzo.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-detected-operating-system: by eggs.gnu.org: GNU/Linux 3.x [fuzzy]
X-Received-From: 185.231.240.75
X-BeenThere: qemu-devel@nongnu.org
X-Mailman-Version: 2.1.23
Precedence: list
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, qemu-devel@nongnu.org, mreitz@redhat.com, den@openvz.org, jsnow@redhat.com
Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org
Sender: "Qemu-devel"
Content-Type: text/plain; charset="utf-8"

In block_copy_do_copy we fall back to read+write if copy_range fails. In that case copy_size is larger than the limit defined for buffered I/O, and there is a corresponding comment. Still, backup copies data cluster by cluster, and most requests are limited to one cluster anyway, so the only source of such a badly limited request is the copy-before-write operation.
A further patch will move backup to use block_copy directly; then, in configurations where copy_range is not supported, the first request of each backup would be oversized. That is not good, so let's change it now. The fix is simple: just limit the first copy_range request like a buffer-based request. If it succeeds, set the larger copy_range limit.

Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
---
 block/block-copy.c | 41 ++++++++++++++++++++++++++++++-----------
 1 file changed, 30 insertions(+), 11 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 79798a1567..8602e2cae7 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -70,16 +70,19 @@ void block_copy_state_free(BlockCopyState *s)
     g_free(s);
 }
 
+static uint32_t block_copy_max_transfer(BdrvChild *source, BdrvChild *target)
+{
+    return MIN_NON_ZERO(INT_MAX,
+                        MIN_NON_ZERO(source->bs->bl.max_transfer,
+                                     target->bs->bl.max_transfer));
+}
+
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
                                      int64_t cluster_size,
                                      BdrvRequestFlags write_flags, Error **errp)
 {
     BlockCopyState *s;
     BdrvDirtyBitmap *copy_bitmap;
-    uint32_t max_transfer =
-        MIN_NON_ZERO(INT_MAX,
-                     MIN_NON_ZERO(source->bs->bl.max_transfer,
-                                  target->bs->bl.max_transfer));
 
     copy_bitmap = bdrv_create_dirty_bitmap(source->bs, cluster_size, NULL,
                                            errp);
@@ -99,7 +102,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
         .mem = shres_create(BLOCK_COPY_MAX_MEM),
     };
 
-    if (max_transfer < cluster_size) {
+    if (block_copy_max_transfer(source, target) < cluster_size) {
         /*
          * copy_range does not respect max_transfer. We don't want to bother
          * with requests smaller than block-copy cluster size, so fallback to
@@ -114,12 +117,11 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
         s->copy_size = cluster_size;
     } else {
         /*
-         * copy_range does not respect max_transfer (it's a TODO), so we factor
-         * that in here.
+         * We enable copy-range, but keep small copy_size, until first
+         * successful copy_range (look at block_copy_do_copy).
          */
         s->use_copy_range = true;
-        s->copy_size = MIN(MAX(cluster_size, BLOCK_COPY_MAX_COPY_RANGE),
-                           QEMU_ALIGN_DOWN(max_transfer, cluster_size));
+        s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
     }
 
     QLIST_INIT(&s->inflight_reqs);
@@ -168,7 +170,21 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
             s->use_copy_range = false;
             s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
             /* Fallback to read+write with allocated buffer */
-        } else {
+        } else if (s->use_copy_range) {
+            /*
+             * Successful copy-range. Now increase copy_size.
+             * copy_range does not respect max_transfer (it's a TODO), so we
+             * factor that in here.
+             *
+             * Note: we double-check s->use_copy_range for the case when
+             * parallel block-copy request unset it during previous
+             * bdrv_co_copy_range call.
+             */
+            s->copy_size =
+                    MIN(MAX(s->cluster_size, BLOCK_COPY_MAX_COPY_RANGE),
+                        QEMU_ALIGN_DOWN(block_copy_max_transfer(s->source,
+                                                                s->target),
+                                        s->cluster_size));
             goto out;
         }
     }
@@ -176,7 +192,10 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
     /*
      * In case of failed copy_range request above, we may proceed with buffered
      * request larger than BLOCK_COPY_MAX_BUFFER. Still, further requests will
-     * be properly limited, so don't care too much.
+     * be properly limited, so don't care too much. Moreover the most possible
+     * case (copy_range is unsupported for the configuration, so the very first
+     * copy_range request fails) is handled by setting large copy_size only
+     * after first successful copy_range.
      */
 
     bounce_buffer = qemu_blockalign(s->source->bs, nbytes);
-- 
2.21.0

From nobody Mon Apr 29 05:08:37 2024
From: Vladimir Sementsov-Ogievskiy
To: qemu-block@nongnu.org
Subject: [PATCH v2 2/7] block/block-copy: use block_status
Date: Wed, 27 Nov 2019 21:08:35 +0300
Message-Id: <20191127180840.11937-3-vsementsov@virtuozzo.com>
In-Reply-To: <20191127180840.11937-1-vsementsov@virtuozzo.com>
References: <20191127180840.11937-1-vsementsov@virtuozzo.com>
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, qemu-devel@nongnu.org, mreitz@redhat.com, den@openvz.org, jsnow@redhat.com

Use bdrv_block_status_above to choose an effective chunk size and to handle zeroes effectively. This replaces the check for merely being allocated or not, and drops the old code path for it.
Assistance from the backup job is dropped too, as caching block-status information is more difficult than just caching is-allocated information in our dirty bitmap, and the backup job is not a good place for this caching anyway.

Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
Reviewed-by: Max Reitz
---
 block/block-copy.c | 67 +++++++++++++++++++++++++++++++++++++---------
 block/trace-events |  1 +
 2 files changed, 55 insertions(+), 13 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 8602e2cae7..74295d93d5 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -152,7 +152,7 @@ void block_copy_set_callbacks(
  */
 static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
                                            int64_t start, int64_t end,
-                                           bool *error_is_read)
+                                           bool zeroes, bool *error_is_read)
 {
     int ret;
     int nbytes = MIN(end, s->len) - start;
@@ -162,6 +162,18 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
     assert(QEMU_IS_ALIGNED(end, s->cluster_size));
     assert(end < s->len || end == QEMU_ALIGN_UP(s->len, s->cluster_size));
 
+    if (zeroes) {
+        ret = bdrv_co_pwrite_zeroes(s->target, start, nbytes, s->write_flags &
+                                    ~BDRV_REQ_WRITE_COMPRESSED);
+        if (ret < 0) {
+            trace_block_copy_write_zeroes_fail(s, start, ret);
+            if (error_is_read) {
+                *error_is_read = false;
+            }
+        }
+        return ret;
+    }
+
     if (s->use_copy_range) {
         ret = bdrv_co_copy_range(s->source, start, s->target, start, nbytes,
                                  0, s->write_flags);
@@ -225,6 +237,34 @@ out:
     return ret;
 }
 
+static int block_copy_block_status(BlockCopyState *s, int64_t offset,
+                                   int64_t bytes, int64_t *pnum)
+{
+    int64_t num;
+    BlockDriverState *base;
+    int ret;
+
+    if (s->skip_unallocated && s->source->bs->backing) {
+        base = s->source->bs->backing->bs;
+    } else {
+        base = NULL;
+    }
+
+    ret = bdrv_block_status_above(s->source->bs, base, offset, bytes, &num,
+                                  NULL, NULL);
+    if (ret < 0 || num < s->cluster_size) {
+        num = s->cluster_size;
+        ret = BDRV_BLOCK_ALLOCATED | BDRV_BLOCK_DATA;
+    } else if (offset + num == s->len) {
+        num = QEMU_ALIGN_UP(num, s->cluster_size);
+    } else {
+        num = QEMU_ALIGN_DOWN(num, s->cluster_size);
+    }
+
+    *pnum = num;
+    return ret;
+}
+
 /*
  * Check if the cluster starting at offset is allocated or not.
  * return via pnum the number of contiguous clusters sharing this allocation.
@@ -301,7 +341,6 @@ int coroutine_fn block_copy(BlockCopyState *s,
 {
     int ret = 0;
     int64_t end = bytes + start; /* bytes */
-    int64_t status_bytes;
     BlockCopyInFlightReq req;
 
     /*
@@ -318,7 +357,7 @@ int coroutine_fn block_copy(BlockCopyState *s,
     block_copy_inflight_req_begin(s, &req, start, end);
 
     while (start < end) {
-        int64_t next_zero, chunk_end;
+        int64_t next_zero, chunk_end, status_bytes;
 
         if (!bdrv_dirty_bitmap_get(s->copy_bitmap, start)) {
             trace_block_copy_skip(s, start);
@@ -336,23 +375,25 @@ int coroutine_fn block_copy(BlockCopyState *s,
             chunk_end = next_zero;
         }
 
-        if (s->skip_unallocated) {
-            ret = block_copy_reset_unallocated(s, start, &status_bytes);
-            if (ret == 0) {
-                trace_block_copy_skip_range(s, start, status_bytes);
-                start += status_bytes;
-                continue;
-            }
-            /* Clamp to known allocated region */
-            chunk_end = MIN(chunk_end, start + status_bytes);
+        ret = block_copy_block_status(s, start, chunk_end - start,
+                                      &status_bytes);
+        if (s->skip_unallocated && !(ret & BDRV_BLOCK_ALLOCATED)) {
+            bdrv_reset_dirty_bitmap(s->copy_bitmap, start, status_bytes);
+            s->progress_reset_callback(s->progress_opaque);
+            trace_block_copy_skip_range(s, start, status_bytes);
+            start += status_bytes;
+            continue;
         }
 
+        chunk_end = MIN(chunk_end, start + status_bytes);
+
         trace_block_copy_process(s, start);
 
         bdrv_reset_dirty_bitmap(s->copy_bitmap, start, chunk_end - start);
 
         co_get_from_shres(s->mem, chunk_end - start);
-        ret = block_copy_do_copy(s, start, chunk_end, error_is_read);
+        ret = block_copy_do_copy(s, start, chunk_end, ret & BDRV_BLOCK_ZERO,
+                                 error_is_read);
         co_put_to_shres(s->mem, chunk_end - start);
         if (ret < 0) {
             bdrv_set_dirty_bitmap(s->copy_bitmap, start, chunk_end - start);
diff --git a/block/trace-events b/block/trace-events
index 6ba86decca..346537a1d2 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -48,6 +48,7 @@ block_copy_process(void *bcs, int64_t start) "bcs %p start %"PRId64
 block_copy_copy_range_fail(void *bcs, int64_t start, int ret) "bcs %p start %"PRId64" ret %d"
 block_copy_read_fail(void *bcs, int64_t start, int ret) "bcs %p start %"PRId64" ret %d"
 block_copy_write_fail(void *bcs, int64_t start, int ret) "bcs %p start %"PRId64" ret %d"
+block_copy_write_zeroes_fail(void *bcs, int64_t start, int ret) "bcs %p start %"PRId64" ret %d"
 
 # ../blockdev.c
 qmp_block_job_cancel(void *job) "job %p"
-- 
2.21.0

From nobody Mon Apr 29 05:08:37 2024
From: Vladimir Sementsov-Ogievskiy
To: qemu-block@nongnu.org
Subject: [PATCH v2 3/7] block/block-copy: factor out block_copy_find_inflight_req
Date: Wed, 27 Nov 2019 21:08:36 +0300
Message-Id: <20191127180840.11937-4-vsementsov@virtuozzo.com>
In-Reply-To: <20191127180840.11937-1-vsementsov@virtuozzo.com>
References: <20191127180840.11937-1-vsementsov@virtuozzo.com>
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, qemu-devel@nongnu.org, mreitz@redhat.com, den@openvz.org, jsnow@redhat.com

Split out block_copy_find_inflight_req so that it can be used separately.

Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
---
 block/block-copy.c | 31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 74295d93d5..94e7e855ef 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -24,23 +24,30 @@
 #define BLOCK_COPY_MAX_BUFFER (1 * MiB)
 #define BLOCK_COPY_MAX_MEM (128 * MiB)
 
+static BlockCopyInFlightReq *block_copy_find_inflight_req(BlockCopyState *s,
+                                                          int64_t start,
+                                                          int64_t end)
+{
+    BlockCopyInFlightReq *req;
+
+    QLIST_FOREACH(req, &s->inflight_reqs, list) {
+        if (end > req->start_byte && start < req->end_byte) {
+            return req;
+        }
+    }
+
+    return NULL;
+}
+
 static void coroutine_fn block_copy_wait_inflight_reqs(BlockCopyState *s,
                                                        int64_t start,
                                                        int64_t end)
 {
     BlockCopyInFlightReq *req;
-    bool waited;
-
-    do {
-        waited = false;
-        QLIST_FOREACH(req, &s->inflight_reqs, list) {
-            if (end > req->start_byte && start < req->end_byte) {
-                qemu_co_queue_wait(&req->wait_queue, NULL);
-                waited = true;
-                break;
-            }
-        }
-    } while (waited);
+
+    while ((req = block_copy_find_inflight_req(s, start, end))) {
+        qemu_co_queue_wait(&req->wait_queue, NULL);
+    }
 }
 
 static void block_copy_inflight_req_begin(BlockCopyState *s,
-- 
2.21.0

From nobody Mon Apr 29 05:08:37 2024
From: Vladimir Sementsov-Ogievskiy
To: qemu-block@nongnu.org
Subject: [PATCH v2 4/7] block/block-copy: refactor interfaces to use bytes instead of end
Date: Wed, 27 Nov 2019 21:08:37 +0300
Message-Id: <20191127180840.11937-5-vsementsov@virtuozzo.com>
In-Reply-To: <20191127180840.11937-1-vsementsov@virtuozzo.com>
References: <20191127180840.11937-1-vsementsov@virtuozzo.com>
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, qemu-devel@nongnu.org, mreitz@redhat.com, den@openvz.org, jsnow@redhat.com

We have a lot of "chunk_end - start" invocations; let's switch to a bytes/cur_bytes scheme instead.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
Reviewed-by: Max Reitz
---
 include/block/block-copy.h |  4 +--
 block/block-copy.c         | 68 ++++++++++++++++++++------------------
 2 files changed, 37 insertions(+), 35 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 0a161724d7..7321b3d305 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -19,8 +19,8 @@
 #include "qemu/co-shared-resource.h"
 
 typedef struct BlockCopyInFlightReq {
-    int64_t start_byte;
-    int64_t end_byte;
+    int64_t start;
+    int64_t bytes;
     QLIST_ENTRY(BlockCopyInFlightReq) list;
     CoQueue wait_queue; /* coroutines blocked on this request */
 } BlockCopyInFlightReq;
diff --git a/block/block-copy.c b/block/block-copy.c
index 94e7e855ef..cc273b6cb8 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -26,12 +26,12 @@
 
 static BlockCopyInFlightReq *block_copy_find_inflight_req(BlockCopyState *s,
                                                           int64_t start,
-                                                          int64_t end)
+                                                          int64_t bytes)
 {
     BlockCopyInFlightReq *req;
 
     QLIST_FOREACH(req, &s->inflight_reqs, list) {
-        if (end > req->start_byte && start < req->end_byte) {
+        if (start + bytes > req->start && start < req->start + req->bytes) {
             return req;
         }
     }
@@ -41,21 +41,21 @@ static BlockCopyInFlightReq *block_copy_find_inflight_req(BlockCopyState *s,
 
 static void coroutine_fn block_copy_wait_inflight_reqs(BlockCopyState *s,
                                                        int64_t start,
-                                                       int64_t end)
+                                                       int64_t bytes)
 {
     BlockCopyInFlightReq *req;
 
-    while ((req = block_copy_find_inflight_req(s, start, end))) {
+    while ((req = block_copy_find_inflight_req(s, start, bytes))) {
         qemu_co_queue_wait(&req->wait_queue, NULL);
     }
 }
 
 static void block_copy_inflight_req_begin(BlockCopyState *s,
                                           BlockCopyInFlightReq *req,
-                                          int64_t start, int64_t end)
+                                          int64_t start, int64_t bytes)
 {
-    req->start_byte = start;
-    req->end_byte = end;
+    req->start = start;
+    req->bytes = bytes;
     qemu_co_queue_init(&req->wait_queue);
     QLIST_INSERT_HEAD(&s->inflight_reqs, req, list);
 }
@@ -150,24 +150,26 @@ void block_copy_set_callbacks(
 /*
  * block_copy_do_copy
  *
- * Do copy of cluser-aligned chunk. @end is allowed to exceed s->len only to
- * cover last cluster when s->len is not aligned to clusters.
+ * Do copy of cluser-aligned chunk. Requested region is allowed to exceed s->len
+ * only to cover last cluster when s->len is not aligned to clusters.
  *
  * No sync here: nor bitmap neighter intersecting requests handling, only copy.
  *
 * Returns 0 on success.
 */
 static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
-                                           int64_t start, int64_t end,
+                                           int64_t start, int64_t bytes,
                                            bool zeroes, bool *error_is_read)
 {
     int ret;
-    int nbytes = MIN(end, s->len) - start;
+    int nbytes = MIN(start + bytes, s->len) - start;
     void *bounce_buffer = NULL;
 
+    assert(start >= 0 && bytes > 0 && INT64_MAX - start >= bytes);
     assert(QEMU_IS_ALIGNED(start, s->cluster_size));
-    assert(QEMU_IS_ALIGNED(end, s->cluster_size));
-    assert(end < s->len || end == QEMU_ALIGN_UP(s->len, s->cluster_size));
+    assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
+    assert(start + bytes <= s->len ||
+           start + bytes == QEMU_ALIGN_UP(s->len, s->cluster_size));
 
     if (zeroes) {
         ret = bdrv_co_pwrite_zeroes(s->target, start, nbytes, s->write_flags &
@@ -347,7 +349,6 @@ int coroutine_fn block_copy(BlockCopyState *s,
                            bool *error_is_read)
 {
     int ret = 0;
-    int64_t end = bytes + start; /* bytes */
     BlockCopyInFlightReq req;
 
     /*
@@ -358,58 +359,59 @@ int coroutine_fn block_copy(BlockCopyState *s,
                bdrv_get_aio_context(s->target->bs));
 
     assert(QEMU_IS_ALIGNED(start, s->cluster_size));
-    assert(QEMU_IS_ALIGNED(end, s->cluster_size));
+    assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
 
     block_copy_wait_inflight_reqs(s, start, bytes);
-    block_copy_inflight_req_begin(s, &req, start, end);
+    block_copy_inflight_req_begin(s, &req, start, bytes);
 
-    while (start < end) {
-        int64_t next_zero, chunk_end, status_bytes;
+    while (bytes) {
+        int64_t next_zero, cur_bytes, status_bytes;
 
         if (!bdrv_dirty_bitmap_get(s->copy_bitmap, start)) {
             trace_block_copy_skip(s, start);
             start += s->cluster_size;
+            bytes -= s->cluster_size;
             continue; /* already copied */
         }
 
-        chunk_end = MIN(end, start + s->copy_size);
+        cur_bytes = MIN(bytes, s->copy_size);
 
         next_zero = bdrv_dirty_bitmap_next_zero(s->copy_bitmap, start,
-                                                chunk_end - start);
+                                                cur_bytes);
         if (next_zero >= 0) {
             assert(next_zero > start); /* start is dirty */
-            assert(next_zero < chunk_end); /* no need to do MIN() */
-            chunk_end = next_zero;
+            assert(next_zero < start + cur_bytes); /* no need to do MIN() */
+            cur_bytes = next_zero - start;
         }
 
-        ret = block_copy_block_status(s, start, chunk_end - start,
-                                      &status_bytes);
+        ret = block_copy_block_status(s, start, cur_bytes, &status_bytes);
         if (s->skip_unallocated && !(ret & BDRV_BLOCK_ALLOCATED)) {
             bdrv_reset_dirty_bitmap(s->copy_bitmap, start, status_bytes);
             s->progress_reset_callback(s->progress_opaque);
             trace_block_copy_skip_range(s, start, status_bytes);
             start += status_bytes;
+            bytes -= status_bytes;
             continue;
         }
 
-        chunk_end = MIN(chunk_end, start + status_bytes);
+        cur_bytes = MIN(cur_bytes, status_bytes);
 
         trace_block_copy_process(s, start);
 
-        bdrv_reset_dirty_bitmap(s->copy_bitmap, start, chunk_end - start);
+        bdrv_reset_dirty_bitmap(s->copy_bitmap, start, cur_bytes);
 
-        co_get_from_shres(s->mem, chunk_end - start);
-        ret = block_copy_do_copy(s, start, chunk_end, ret & BDRV_BLOCK_ZERO,
+        co_get_from_shres(s->mem, cur_bytes);
+        ret = block_copy_do_copy(s, start, cur_bytes, ret & BDRV_BLOCK_ZERO,
                                  error_is_read);
-        co_put_to_shres(s->mem, chunk_end - start);
+        co_put_to_shres(s->mem, cur_bytes);
         if (ret < 0) {
-            bdrv_set_dirty_bitmap(s->copy_bitmap, start, chunk_end - start);
+            bdrv_set_dirty_bitmap(s->copy_bitmap, start, cur_bytes);
             break;
        }
 
-        s->progress_bytes_callback(chunk_end - start, s->progress_opaque);
-        start = chunk_end;
-        ret = 0;
+        s->progress_bytes_callback(cur_bytes, s->progress_opaque);
+        start += cur_bytes;
+        bytes -= cur_bytes;
     }
 
     block_copy_inflight_req_end(&req);
-- 
2.21.0

From nobody Mon Apr 29 05:08:37 2024
helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ia25i-0001nZ-UH for importer@patchew.org; Wed, 27 Nov 2019 13:31:06 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]:56434) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ia1kB-0006ew-O1 for qemu-devel@nongnu.org; Wed, 27 Nov 2019 13:08:53 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1ia1k9-0003t8-U4 for qemu-devel@nongnu.org; Wed, 27 Nov 2019 13:08:51 -0500 Received: from relay.sw.ru ([185.231.240.75]:49972) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1ia1k5-0003ii-Vv; Wed, 27 Nov 2019 13:08:46 -0500 Received: from vovaso.qa.sw.ru ([10.94.3.0] helo=kvm.qa.sw.ru) by relay.sw.ru with esmtp (Exim 4.92.3) (envelope-from ) id 1ia1k1-0002UU-5m; Wed, 27 Nov 2019 21:08:41 +0300 From: Vladimir Sementsov-Ogievskiy To: qemu-block@nongnu.org Subject: [PATCH v2 5/7] block/block-copy: rename start to offset in interfaces Date: Wed, 27 Nov 2019 21:08:38 +0300 Message-Id: <20191127180840.11937-6-vsementsov@virtuozzo.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20191127180840.11937-1-vsementsov@virtuozzo.com> References: <20191127180840.11937-1-vsementsov@virtuozzo.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: GNU/Linux 3.x [fuzzy] X-Received-From: 185.231.240.75 X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, qemu-devel@nongnu.org, mreitz@redhat.com, den@openvz.org, jsnow@redhat.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" offset/bytes pair is more usual naming in block layer, let's use it. 
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
Reviewed-by: Max Reitz
---
 include/block/block-copy.h |  2 +-
 block/block-copy.c         | 80 +++++++++++++++++++-------------------
 2 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 7321b3d305..d96b097267 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -19,7 +19,7 @@
 #include "qemu/co-shared-resource.h"
 
 typedef struct BlockCopyInFlightReq {
-    int64_t start;
+    int64_t offset;
     int64_t bytes;
     QLIST_ENTRY(BlockCopyInFlightReq) list;
     CoQueue wait_queue; /* coroutines blocked on this request */
diff --git a/block/block-copy.c b/block/block-copy.c
index cc273b6cb8..20068cd699 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -25,13 +25,13 @@
 #define BLOCK_COPY_MAX_MEM (128 * MiB)
 
 static BlockCopyInFlightReq *block_copy_find_inflight_req(BlockCopyState *s,
-                                                          int64_t start,
+                                                          int64_t offset,
                                                           int64_t bytes)
 {
     BlockCopyInFlightReq *req;
 
     QLIST_FOREACH(req, &s->inflight_reqs, list) {
-        if (start + bytes > req->start && start < req->start + req->bytes) {
+        if (offset + bytes > req->offset && offset < req->offset + req->bytes) {
             return req;
         }
     }
@@ -40,21 +40,21 @@ static BlockCopyInFlightReq *block_copy_find_inflight_req(BlockCopyState *s,
 }
 
 static void coroutine_fn block_copy_wait_inflight_reqs(BlockCopyState *s,
-                                                       int64_t start,
+                                                       int64_t offset,
                                                        int64_t bytes)
 {
     BlockCopyInFlightReq *req;
 
-    while ((req = block_copy_find_inflight_req(s, start, bytes))) {
+    while ((req = block_copy_find_inflight_req(s, offset, bytes))) {
         qemu_co_queue_wait(&req->wait_queue, NULL);
     }
 }
 
 static void block_copy_inflight_req_begin(BlockCopyState *s,
                                           BlockCopyInFlightReq *req,
-                                          int64_t start, int64_t bytes)
+                                          int64_t offset, int64_t bytes)
 {
-    req->start = start;
+    req->offset = offset;
     req->bytes = bytes;
     qemu_co_queue_init(&req->wait_queue);
     QLIST_INSERT_HEAD(&s->inflight_reqs, req, list);
@@ -158,24 +158,24 @@ void block_copy_set_callbacks(
  * Returns 0 on success.
  */
 static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
-                                           int64_t start, int64_t bytes,
+                                           int64_t offset, int64_t bytes,
                                            bool zeroes, bool *error_is_read)
 {
     int ret;
-    int nbytes = MIN(start + bytes, s->len) - start;
+    int nbytes = MIN(offset + bytes, s->len) - offset;
     void *bounce_buffer = NULL;
 
-    assert(start >= 0 && bytes > 0 && INT64_MAX - start >= bytes);
-    assert(QEMU_IS_ALIGNED(start, s->cluster_size));
+    assert(offset >= 0 && bytes > 0 && INT64_MAX - offset >= bytes);
+    assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
     assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
-    assert(start + bytes <= s->len ||
-           start + bytes == QEMU_ALIGN_UP(s->len, s->cluster_size));
+    assert(offset + bytes <= s->len ||
+           offset + bytes == QEMU_ALIGN_UP(s->len, s->cluster_size));
 
     if (zeroes) {
-        ret = bdrv_co_pwrite_zeroes(s->target, start, nbytes, s->write_flags &
+        ret = bdrv_co_pwrite_zeroes(s->target, offset, nbytes, s->write_flags &
                                     ~BDRV_REQ_WRITE_COMPRESSED);
         if (ret < 0) {
-            trace_block_copy_write_zeroes_fail(s, start, ret);
+            trace_block_copy_write_zeroes_fail(s, offset, ret);
             if (error_is_read) {
                 *error_is_read = false;
             }
@@ -184,10 +184,10 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
     }
 
     if (s->use_copy_range) {
-        ret = bdrv_co_copy_range(s->source, start, s->target, start, nbytes,
+        ret = bdrv_co_copy_range(s->source, offset, s->target, offset, nbytes,
                                  0, s->write_flags);
         if (ret < 0) {
-            trace_block_copy_copy_range_fail(s, start, ret);
+            trace_block_copy_copy_range_fail(s, offset, ret);
             s->use_copy_range = false;
             s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
             /* Fallback to read+write with allocated buffer */
@@ -221,19 +221,19 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
 
     bounce_buffer = qemu_blockalign(s->source->bs, nbytes);
 
-    ret = bdrv_co_pread(s->source, start, nbytes, bounce_buffer, 0);
+    ret = bdrv_co_pread(s->source, offset, nbytes, bounce_buffer, 0);
     if (ret < 0) {
-        trace_block_copy_read_fail(s, start, ret);
+        trace_block_copy_read_fail(s, offset, ret);
         if (error_is_read) {
             *error_is_read = true;
         }
         goto out;
     }
 
-    ret = bdrv_co_pwrite(s->target, start, nbytes, bounce_buffer,
+    ret = bdrv_co_pwrite(s->target, offset, nbytes, bounce_buffer,
                          s->write_flags);
     if (ret < 0) {
-        trace_block_copy_write_fail(s, start, ret);
+        trace_block_copy_write_fail(s, offset, ret);
         if (error_is_read) {
             *error_is_read = false;
         }
@@ -345,7 +345,7 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
 }
 
 int coroutine_fn block_copy(BlockCopyState *s,
-                            int64_t start, uint64_t bytes,
+                            int64_t offset, uint64_t bytes,
                             bool *error_is_read)
 {
     int ret = 0;
@@ -358,59 +358,59 @@ int coroutine_fn block_copy(BlockCopyState *s,
     assert(bdrv_get_aio_context(s->source->bs) ==
            bdrv_get_aio_context(s->target->bs));
 
-    assert(QEMU_IS_ALIGNED(start, s->cluster_size));
+    assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
     assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
 
-    block_copy_wait_inflight_reqs(s, start, bytes);
-    block_copy_inflight_req_begin(s, &req, start, bytes);
+    block_copy_wait_inflight_reqs(s, offset, bytes);
+    block_copy_inflight_req_begin(s, &req, offset, bytes);
 
     while (bytes) {
         int64_t next_zero, cur_bytes, status_bytes;
 
-        if (!bdrv_dirty_bitmap_get(s->copy_bitmap, start)) {
-            trace_block_copy_skip(s, start);
-            start += s->cluster_size;
+        if (!bdrv_dirty_bitmap_get(s->copy_bitmap, offset)) {
+            trace_block_copy_skip(s, offset);
+            offset += s->cluster_size;
             bytes -= s->cluster_size;
             continue; /* already copied */
         }
 
         cur_bytes = MIN(bytes, s->copy_size);
 
-        next_zero = bdrv_dirty_bitmap_next_zero(s->copy_bitmap, start,
+        next_zero = bdrv_dirty_bitmap_next_zero(s->copy_bitmap, offset,
                                                 cur_bytes);
         if (next_zero >= 0) {
-            assert(next_zero > start); /* start is dirty */
-            assert(next_zero < start + cur_bytes); /* no need to do MIN() */
-            cur_bytes = next_zero - start;
+            assert(next_zero > offset); /* offset is dirty */
+            assert(next_zero < offset + cur_bytes); /* no need to do MIN() */
+            cur_bytes = next_zero - offset;
         }
 
-        ret = block_copy_block_status(s, start, cur_bytes, &status_bytes);
+        ret = block_copy_block_status(s, offset, cur_bytes, &status_bytes);
         if (s->skip_unallocated && !(ret & BDRV_BLOCK_ALLOCATED)) {
-            bdrv_reset_dirty_bitmap(s->copy_bitmap, start, status_bytes);
+            bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, status_bytes);
             s->progress_reset_callback(s->progress_opaque);
-            trace_block_copy_skip_range(s, start, status_bytes);
-            start += status_bytes;
+            trace_block_copy_skip_range(s, offset, status_bytes);
+            offset += status_bytes;
             bytes -= status_bytes;
             continue;
         }
 
         cur_bytes = MIN(cur_bytes, status_bytes);
 
-        trace_block_copy_process(s, start);
+        trace_block_copy_process(s, offset);
 
-        bdrv_reset_dirty_bitmap(s->copy_bitmap, start, cur_bytes);
+        bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, cur_bytes);
 
         co_get_from_shres(s->mem, cur_bytes);
-        ret = block_copy_do_copy(s, start, cur_bytes, ret & BDRV_BLOCK_ZERO,
+        ret = block_copy_do_copy(s, offset, cur_bytes, ret & BDRV_BLOCK_ZERO,
                                  error_is_read);
         co_put_to_shres(s->mem, cur_bytes);
         if (ret < 0) {
-            bdrv_set_dirty_bitmap(s->copy_bitmap, start, cur_bytes);
+            bdrv_set_dirty_bitmap(s->copy_bitmap, offset, cur_bytes);
             break;
         }
 
         s->progress_bytes_callback(cur_bytes, s->progress_opaque);
-        start += cur_bytes;
+        offset += cur_bytes;
         bytes -= cur_bytes;
     }
 
-- 
2.21.0

From: Vladimir Sementsov-Ogievskiy
To: qemu-block@nongnu.org
Subject: [PATCH v2 6/7] block/block-copy: reduce intersecting request lock
Date: Wed, 27 Nov 2019 21:08:39 +0300
Message-Id: <20191127180840.11937-7-vsementsov@virtuozzo.com>
In-Reply-To: <20191127180840.11937-1-vsementsov@virtuozzo.com>

Currently, a block_copy operation locks the whole requested region. But
there is no reason to lock clusters that are already copied: it only
disturbs other parallel block_copy requests for no benefit.

Let's instead do the following: lock only the sub-region we are going to
operate on. Then, after copying all dirty sub-regions, wait for any
intersecting block-copy requests; if they failed, retry the newly
dirtied clusters.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
---
 block/block-copy.c | 116 +++++++++++++++++++++++++++++++++++++--------
 1 file changed, 95 insertions(+), 21 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 20068cd699..aca44b13fb 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -39,29 +39,62 @@ static BlockCopyInFlightReq *block_copy_find_inflight_req(BlockCopyState *s,
     return NULL;
 }
 
-static void coroutine_fn block_copy_wait_inflight_reqs(BlockCopyState *s,
-                                                       int64_t offset,
-                                                       int64_t bytes)
+/*
+ * If there are no intersecting requests return false. Otherwise, wait for the
+ * first found intersecting request to finish and return true.
+ */
+static bool coroutine_fn block_copy_wait_one(BlockCopyState *s, int64_t start,
+                                             int64_t end)
 {
-    BlockCopyInFlightReq *req;
+    BlockCopyInFlightReq *req = block_copy_find_inflight_req(s, start, end);
 
-    while ((req = block_copy_find_inflight_req(s, offset, bytes))) {
-        qemu_co_queue_wait(&req->wait_queue, NULL);
+    if (!req) {
+        return false;
     }
+
+    qemu_co_queue_wait(&req->wait_queue, NULL);
+
+    return true;
 }
 
+/* Called only on full-dirty region */
 static void block_copy_inflight_req_begin(BlockCopyState *s,
                                           BlockCopyInFlightReq *req,
                                           int64_t offset, int64_t bytes)
 {
+    assert(!block_copy_find_inflight_req(s, offset, bytes));
+
+    bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, bytes);
+
     req->offset = offset;
     req->bytes = bytes;
     qemu_co_queue_init(&req->wait_queue);
     QLIST_INSERT_HEAD(&s->inflight_reqs, req, list);
 }
 
-static void coroutine_fn block_copy_inflight_req_end(BlockCopyInFlightReq *req)
+static void coroutine_fn block_copy_inflight_req_shrink(BlockCopyState *s,
+        BlockCopyInFlightReq *req, int64_t new_bytes)
 {
+    if (new_bytes == req->bytes) {
+        return;
+    }
+
+    assert(new_bytes > 0 && new_bytes < req->bytes);
+
+    bdrv_set_dirty_bitmap(s->copy_bitmap,
+                          req->offset + new_bytes, req->bytes - new_bytes);
+
+    req->bytes = new_bytes;
+    qemu_co_queue_restart_all(&req->wait_queue);
+}
+
+static void coroutine_fn block_copy_inflight_req_end(BlockCopyState *s,
+                                                     BlockCopyInFlightReq *req,
+                                                     int ret)
+{
+    if (ret < 0) {
+        bdrv_set_dirty_bitmap(s->copy_bitmap, req->offset, req->bytes);
+    }
     QLIST_REMOVE(req, list);
     qemu_co_queue_restart_all(&req->wait_queue);
 }
@@ -344,12 +377,19 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
     return ret;
 }
 
-int coroutine_fn block_copy(BlockCopyState *s,
-                            int64_t offset, uint64_t bytes,
-                            bool *error_is_read)
+/*
+ * block_copy_dirty_clusters
+ *
+ * Copy dirty clusters in @start/@bytes range.
+ * Returns 1 if dirty clusters found and successfully copied, 0 if no dirty
+ * clusters found and -errno on failure.
+ */
+static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
+                                                  int64_t offset, int64_t bytes,
+                                                  bool *error_is_read)
 {
     int ret = 0;
-    BlockCopyInFlightReq req;
+    bool found_dirty = false;
 
     /*
      * block_copy() user is responsible for keeping source and target in same
@@ -361,10 +401,8 @@ int coroutine_fn block_copy(BlockCopyState *s,
     assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
     assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
 
-    block_copy_wait_inflight_reqs(s, offset, bytes);
-    block_copy_inflight_req_begin(s, &req, offset, bytes);
-
     while (bytes) {
+        BlockCopyInFlightReq req;
         int64_t next_zero, cur_bytes, status_bytes;
 
         if (!bdrv_dirty_bitmap_get(s->copy_bitmap, offset)) {
@@ -374,6 +412,8 @@ int coroutine_fn block_copy(BlockCopyState *s,
             continue; /* already copied */
         }
 
+        found_dirty = true;
+
         cur_bytes = MIN(bytes, s->copy_size);
 
         next_zero = bdrv_dirty_bitmap_next_zero(s->copy_bitmap, offset,
@@ -383,10 +423,12 @@ int coroutine_fn block_copy(BlockCopyState *s,
             assert(next_zero < offset + cur_bytes); /* no need to do MIN() */
             cur_bytes = next_zero - offset;
         }
+        block_copy_inflight_req_begin(s, &req, offset, cur_bytes);
 
         ret = block_copy_block_status(s, offset, cur_bytes, &status_bytes);
+        block_copy_inflight_req_shrink(s, &req, status_bytes);
         if (s->skip_unallocated && !(ret & BDRV_BLOCK_ALLOCATED)) {
-            bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, status_bytes);
+            block_copy_inflight_req_end(s, &req, 0);
             s->progress_reset_callback(s->progress_opaque);
             trace_block_copy_skip_range(s, offset, status_bytes);
             offset += status_bytes;
@@ -398,15 +440,13 @@ int coroutine_fn block_copy(BlockCopyState *s,
 
         trace_block_copy_process(s, offset);
 
-        bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, cur_bytes);
-
         co_get_from_shres(s->mem, cur_bytes);
         ret = block_copy_do_copy(s, offset, cur_bytes, ret & BDRV_BLOCK_ZERO,
                                  error_is_read);
         co_put_to_shres(s->mem, cur_bytes);
+        block_copy_inflight_req_end(s, &req, ret);
         if (ret < 0) {
-            bdrv_set_dirty_bitmap(s->copy_bitmap, offset, cur_bytes);
-            break;
+            return ret;
         }
 
         s->progress_bytes_callback(cur_bytes, s->progress_opaque);
@@ -414,7 +454,41 @@ int coroutine_fn block_copy(BlockCopyState *s,
         bytes -= cur_bytes;
     }
 
-    block_copy_inflight_req_end(&req);
+    return found_dirty;
+}
 
-    return ret;
+int coroutine_fn block_copy(BlockCopyState *s, int64_t start, uint64_t bytes,
+                            bool *error_is_read)
+{
+    while (true) {
+        int ret = block_copy_dirty_clusters(s, start, bytes, error_is_read);
+
+        if (ret < 0) {
+            /*
+             * IO operation failed, which means the whole block_copy request
+             * failed.
+             */
+            return ret;
+        }
+        if (ret) {
+            /*
+             * Something was copied, which means that there were yield points
+             * and some new dirty bits may have appeared (due to failed parallel
+             * block-copy requests).
+             */
+            continue;
+        }
+
+        /*
+         * Here ret == 0, which means that there are no dirty clusters in
+         * the requested region.
+         */
+
+        if (!block_copy_wait_one(s, start, bytes)) {
+            /* No dirty bits and nothing to wait for: the whole request is done */
+            break;
+        }
+    }
+
+    return 0;
+}
-- 
2.21.0

From: Vladimir Sementsov-Ogievskiy
To: qemu-block@nongnu.org
Subject: [PATCH v2 7/7] block/block-copy: hide structure definitions
Date: Wed, 27 Nov 2019 21:08:40 +0300
Message-Id: <20191127180840.11937-8-vsementsov@virtuozzo.com>
In-Reply-To: <20191127180840.11937-1-vsementsov@virtuozzo.com>

Hide the structure definitions and add an explicit API instead, to keep
an eye on the scope of the shared fields.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
Reviewed-by: Max Reitz
---
 include/block/block-copy.h | 57 +++------------------------------
 block/backup-top.c         |  6 ++--
 block/backup.c             | 27 ++++++++--------
 block/block-copy.c         | 64 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 86 insertions(+), 68 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index d96b097267..753fa663ac 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -18,61 +18,9 @@
 #include "block/block.h"
 #include "qemu/co-shared-resource.h"
 
-typedef struct BlockCopyInFlightReq {
-    int64_t offset;
-    int64_t bytes;
-    QLIST_ENTRY(BlockCopyInFlightReq) list;
-    CoQueue wait_queue; /* coroutines blocked on this request */
-} BlockCopyInFlightReq;
-
 typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
 typedef void (*ProgressResetCallbackFunc)(void *opaque);
-typedef struct BlockCopyState {
-    /*
-     * BdrvChild objects are not owned or managed by block-copy. They are
-     * provided by block-copy user and user is responsible for appropriate
-     * permissions on these children.
-     */
-    BdrvChild *source;
-    BdrvChild *target;
-    BdrvDirtyBitmap *copy_bitmap;
-    int64_t cluster_size;
-    bool use_copy_range;
-    int64_t copy_size;
-    uint64_t len;
-    QLIST_HEAD(, BlockCopyInFlightReq) inflight_reqs;
-
-    BdrvRequestFlags write_flags;
-
-    /*
-     * skip_unallocated:
-     *
-     * Used by sync=top jobs, which first scan the source node for unallocated
-     * areas and clear them in the copy_bitmap. During this process, the bitmap
-     * is thus not fully initialized: It may still have bits set for areas that
-     * are unallocated and should actually not be copied.
-     *
-     * This is indicated by skip_unallocated.
-     *
-     * In this case, block_copy() will query the source's allocation status,
-     * skip unallocated regions, clear them in the copy_bitmap, and invoke
-     * block_copy_reset_unallocated() every time it does.
-     */
-    bool skip_unallocated;
-
-    /* progress_bytes_callback: called when some copying progress is done. */
-    ProgressBytesCallbackFunc progress_bytes_callback;
-
-    /*
-     * progress_reset_callback: called when some bytes reset from copy_bitmap
-     * (see @skip_unallocated above). The callee is assumed to recalculate how
-     * many bytes remain based on the dirty bit count of copy_bitmap.
-     */
-    ProgressResetCallbackFunc progress_reset_callback;
-    void *progress_opaque;
-
-    SharedResource *mem;
-} BlockCopyState;
+typedef struct BlockCopyState BlockCopyState;
 
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
                                      int64_t cluster_size,
@@ -93,4 +41,7 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
 int coroutine_fn block_copy(BlockCopyState *s, int64_t start, uint64_t bytes,
                             bool *error_is_read);
 
+BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
+void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
+
 #endif /* BLOCK_COPY_H */
diff --git a/block/backup-top.c b/block/backup-top.c
index 7cdb1f8eba..1026628b57 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -38,6 +38,7 @@ typedef struct BDRVBackupTopState {
     BlockCopyState *bcs;
     BdrvChild *target;
     bool active;
+    int64_t cluster_size;
 } BDRVBackupTopState;
 
 static coroutine_fn int backup_top_co_preadv(
@@ -51,8 +52,8 @@ static coroutine_fn int backup_top_cbw(BlockDriverState *bs, uint64_t offset,
                                        uint64_t bytes)
 {
     BDRVBackupTopState *s = bs->opaque;
-    uint64_t end = QEMU_ALIGN_UP(offset + bytes, s->bcs->cluster_size);
-    uint64_t off = QEMU_ALIGN_DOWN(offset, s->bcs->cluster_size);
+    uint64_t end = QEMU_ALIGN_UP(offset + bytes, s->cluster_size);
+    uint64_t off = QEMU_ALIGN_DOWN(offset, s->cluster_size);
 
     return block_copy(s->bcs, off, end - off, NULL);
 }
@@ -227,6 +228,7 @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
         goto failed_after_append;
     }
 
+    state->cluster_size = cluster_size;
     state->bcs = block_copy_state_new(top->backing, state->target,
                                       cluster_size, write_flags, &local_err);
     if (local_err) {
diff --git a/block/backup.c b/block/backup.c
index cf62b1a38c..acab0d08da 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -48,6 +48,7 @@ typedef struct BackupBlockJob {
     int64_t cluster_size;
 
     BlockCopyState *bcs;
+    BdrvDirtyBitmap *bcs_bitmap;
 } BackupBlockJob;
 
 static const BlockJobDriver backup_job_driver;
@@ -63,7 +64,7 @@ static void backup_progress_bytes_callback(int64_t bytes, void *opaque)
 static void backup_progress_reset_callback(void *opaque)
 {
     BackupBlockJob *s = opaque;
-    uint64_t estimate = bdrv_get_dirty_count(s->bcs->copy_bitmap);
+    uint64_t estimate = bdrv_get_dirty_count(s->bcs_bitmap);
 
     job_progress_set_remaining(&s->common.job, estimate);
 }
@@ -111,8 +112,7 @@ static void backup_cleanup_sync_bitmap(BackupBlockJob *job, int ret)
 
     if (ret < 0 && job->bitmap_mode == BITMAP_SYNC_MODE_ALWAYS) {
         /* If we failed and synced, merge in the bits we didn't copy: */
-        bdrv_dirty_bitmap_merge_internal(bm, job->bcs->copy_bitmap,
-                                         NULL, true);
+        bdrv_dirty_bitmap_merge_internal(bm, job->bcs_bitmap, NULL, true);
     }
 }
 
@@ -151,7 +151,7 @@ void backup_do_checkpoint(BlockJob *job, Error **errp)
         return;
     }
 
-    bdrv_set_dirty_bitmap(backup_job->bcs->copy_bitmap, 0, backup_job->len);
+    bdrv_set_dirty_bitmap(backup_job->bcs_bitmap, 0, backup_job->len);
 }
 
 static BlockErrorAction backup_error_action(BackupBlockJob *job,
@@ -196,7 +196,7 @@ static int coroutine_fn backup_loop(BackupBlockJob *job)
     BdrvDirtyBitmapIter *bdbi;
     int ret = 0;
 
-    bdbi = bdrv_dirty_iter_new(job->bcs->copy_bitmap);
+    bdbi = bdrv_dirty_iter_new(job->bcs_bitmap);
     while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
         do {
             if (yield_and_check(job)) {
@@ -216,13 +216,13 @@ static int coroutine_fn backup_loop(BackupBlockJob *job)
     return ret;
 }
 
-static void backup_init_copy_bitmap(BackupBlockJob *job)
+static void backup_init_bcs_bitmap(BackupBlockJob *job)
 {
     bool ret;
     uint64_t estimate;
 
     if (job->sync_mode == MIRROR_SYNC_MODE_BITMAP) {
-        ret = bdrv_dirty_bitmap_merge_internal(job->bcs->copy_bitmap,
+        ret = bdrv_dirty_bitmap_merge_internal(job->bcs_bitmap,
                                                job->sync_bitmap,
                                                NULL, true);
         assert(ret);
@@ -232,12 +232,12 @@ static void backup_init_copy_bitmap(BackupBlockJob *job)
              * We can't hog the coroutine to initialize this thoroughly.
              * Set a flag and resume work when we are able to yield safely.
              */
-            job->bcs->skip_unallocated = true;
+            block_copy_set_skip_unallocated(job->bcs, true);
        }
-        bdrv_set_dirty_bitmap(job->bcs->copy_bitmap, 0, job->len);
+        bdrv_set_dirty_bitmap(job->bcs_bitmap, 0, job->len);
     }
 
-    estimate = bdrv_get_dirty_count(job->bcs->copy_bitmap);
+    estimate = bdrv_get_dirty_count(job->bcs_bitmap);
     job_progress_set_remaining(&job->common.job, estimate);
 }
 
@@ -246,7 +246,7 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
     BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
     int ret = 0;
 
-    backup_init_copy_bitmap(s);
+    backup_init_bcs_bitmap(s);
 
     if (s->sync_mode == MIRROR_SYNC_MODE_TOP) {
         int64_t offset = 0;
@@ -265,12 +265,12 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 
             offset += count;
         }
-        s->bcs->skip_unallocated = false;
+        block_copy_set_skip_unallocated(s->bcs, false);
     }
 
     if (s->sync_mode == MIRROR_SYNC_MODE_NONE) {
         /*
-         * All bits are set in copy_bitmap to allow any cluster to be copied.
+         * All bits are set in bcs_bitmap to allow any cluster to be copied.
          * This does not actually require them to be copied.
         */
        while (!job_is_cancelled(job)) {
@@ -458,6 +458,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->sync_bitmap = sync_bitmap;
     job->bitmap_mode = bitmap_mode;
     job->bcs = bcs;
+    job->bcs_bitmap = block_copy_dirty_bitmap(bcs);
     job->cluster_size = cluster_size;
     job->len = len;
 
diff --git a/block/block-copy.c b/block/block-copy.c
index aca44b13fb..7e14e86a2d 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -24,6 +24,60 @@
 #define BLOCK_COPY_MAX_BUFFER (1 * MiB)
 #define BLOCK_COPY_MAX_MEM (128 * MiB)
 
+typedef struct BlockCopyInFlightReq {
+    int64_t offset;
+    int64_t bytes;
+    QLIST_ENTRY(BlockCopyInFlightReq) list;
+    CoQueue wait_queue; /* coroutines blocked on this request */
+} BlockCopyInFlightReq;
+
+typedef struct BlockCopyState {
+    /*
+     * BdrvChild objects are not owned or managed by block-copy. They are
+     * provided by block-copy user and user is responsible for appropriate
+     * permissions on these children.
+     */
+    BdrvChild *source;
+    BdrvChild *target;
+    BdrvDirtyBitmap *copy_bitmap;
+    int64_t cluster_size;
+    bool use_copy_range;
+    int64_t copy_size;
+    uint64_t len;
+    QLIST_HEAD(, BlockCopyInFlightReq) inflight_reqs;
+
+    BdrvRequestFlags write_flags;
+
+    /*
+     * skip_unallocated:
+     *
+     * Used by sync=top jobs, which first scan the source node for unallocated
+     * areas and clear them in the copy_bitmap. During this process, the bitmap
+     * is thus not fully initialized: It may still have bits set for areas that
+     * are unallocated and should actually not be copied.
+     *
+     * This is indicated by skip_unallocated.
+     *
+     * In this case, block_copy() will query the source's allocation status,
+     * skip unallocated regions, clear them in the copy_bitmap, and invoke
+     * block_copy_reset_unallocated() every time it does.
+     */
+    bool skip_unallocated;
+
+    /* progress_bytes_callback: called when some copying progress is done. */
+    ProgressBytesCallbackFunc progress_bytes_callback;
+
+    /*
+     * progress_reset_callback: called when some bytes reset from copy_bitmap
+     * (see @skip_unallocated above). The callee is assumed to recalculate how
+     * many bytes remain based on the dirty bit count of copy_bitmap.
+     */
+    ProgressResetCallbackFunc progress_reset_callback;
+    void *progress_opaque;
+
+    SharedResource *mem;
+} BlockCopyState;
+
 static BlockCopyInFlightReq *block_copy_find_inflight_req(BlockCopyState *s,
                                                           int64_t offset,
                                                           int64_t bytes)
@@ -492,3 +546,13 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, uint64_t bytes,
 
     return 0;
 }
+
+BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
+{
+    return s->copy_bitmap;
+}
+
+void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip)
+{
+    s->skip_unallocated = skip;
+}
-- 
2.21.0