From: Andrey Drobyshev <andrey.drobyshev@virtuozzo.com>
To: qemu-block@nongnu.org, qemu-stable@nongnu.org
Cc: qemu-devel@nongnu.org, kwolf@redhat.com, hreitz@redhat.com,
    vsementsov@yandex-team.ru, eblake@redhat.com,
    andrey.drobyshev@virtuozzo.com, den@virtuozzo.com
Subject: [PATCH 2/3] block/io: align requests to subcluster_size
Date: Mon, 26 Jun 2023 19:08:33 +0300
Message-Id: <20230626160834.696680-3-andrey.drobyshev@virtuozzo.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20230626160834.696680-1-andrey.drobyshev@virtuozzo.com>
References: <20230626160834.696680-1-andrey.drobyshev@virtuozzo.com>

When the target image uses subclusters and we align a request during
copy-on-read, it makes sense to align to
subcluster_size rather than cluster_size.  Otherwise we end up with
unnecessary allocations.

This commit renames bdrv_round_to_clusters() to
bdrv_round_to_subclusters() and utilizes the subcluster_size field of
BlockDriverInfo to make the necessary alignments.  It affects
copy-on-read as well as the mirror job (which uses
bdrv_round_to_clusters()).

This change also fixes the following bug with a failing assert (covered
by the test in the subsequent commit):

qemu-img create -f qcow2 base.qcow2 64K
qemu-img create -f qcow2 -o extended_l2=on,backing_file=base.qcow2,backing_fmt=qcow2 img.qcow2 64K
qemu-io -c "write -P 0xaa 0 2K" img.qcow2
qemu-io -C -c "read -P 0x00 2K 62K" img.qcow2

qemu-io: ../block/io.c:1236: bdrv_co_do_copy_on_readv: Assertion `skip_bytes < pnum' failed.

Signed-off-by: Andrey Drobyshev
Reviewed-by: Denis V. Lunev
Reviewed-by: Eric Blake
---
 block/io.c               | 50 ++++++++++++++++++++--------------------
 block/mirror.c           |  8 ++++----
 include/block/block-io.h |  2 +-
 3 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/block/io.c b/block/io.c
index 30748f0b59..f1f8fad409 100644
--- a/block/io.c
+++ b/block/io.c
@@ -728,21 +728,21 @@ BdrvTrackedRequest *coroutine_fn bdrv_co_get_self_request(BlockDriverState *bs)
 }
 
 /**
- * Round a region to cluster boundaries
+ * Round a region to subcluster (if supported) or cluster boundaries
  */
 void coroutine_fn GRAPH_RDLOCK
-bdrv_round_to_clusters(BlockDriverState *bs, int64_t offset, int64_t bytes,
-                       int64_t *cluster_offset, int64_t *cluster_bytes)
+bdrv_round_to_subclusters(BlockDriverState *bs, int64_t offset, int64_t bytes,
+                          int64_t *align_offset, int64_t *align_bytes)
 {
     BlockDriverInfo bdi;
     IO_CODE();
-    if (bdrv_co_get_info(bs, &bdi) < 0 || bdi.cluster_size == 0) {
-        *cluster_offset = offset;
-        *cluster_bytes = bytes;
+    if (bdrv_co_get_info(bs, &bdi) < 0 || bdi.subcluster_size == 0) {
+        *align_offset = offset;
+        *align_bytes = bytes;
     } else {
-        int64_t c = bdi.cluster_size;
-        *cluster_offset = QEMU_ALIGN_DOWN(offset, c);
-        *cluster_bytes = QEMU_ALIGN_UP(offset - *cluster_offset + bytes, c);
+        int64_t c = bdi.subcluster_size;
+        *align_offset = QEMU_ALIGN_DOWN(offset, c);
+        *align_bytes = QEMU_ALIGN_UP(offset - *align_offset + bytes, c);
     }
 }
 
@@ -1168,8 +1168,8 @@ bdrv_co_do_copy_on_readv(BdrvChild *child, int64_t offset, int64_t bytes,
     void *bounce_buffer = NULL;
 
     BlockDriver *drv = bs->drv;
-    int64_t cluster_offset;
-    int64_t cluster_bytes;
+    int64_t align_offset;
+    int64_t align_bytes;
     int64_t skip_bytes;
     int ret;
     int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
@@ -1203,28 +1203,28 @@ bdrv_co_do_copy_on_readv(BdrvChild *child, int64_t offset, int64_t bytes,
      * BDRV_REQUEST_MAX_BYTES (even when the original read did not), which
      * is one reason we loop rather than doing it all at once.
      */
-    bdrv_round_to_clusters(bs, offset, bytes, &cluster_offset, &cluster_bytes);
-    skip_bytes = offset - cluster_offset;
+    bdrv_round_to_subclusters(bs, offset, bytes, &align_offset, &align_bytes);
+    skip_bytes = offset - align_offset;
 
     trace_bdrv_co_do_copy_on_readv(bs, offset, bytes,
-                                   cluster_offset, cluster_bytes);
+                                   align_offset, align_bytes);
 
-    while (cluster_bytes) {
+    while (align_bytes) {
         int64_t pnum;
 
         if (skip_write) {
             ret = 1; /* "already allocated", so nothing will be copied */
-            pnum = MIN(cluster_bytes, max_transfer);
+            pnum = MIN(align_bytes, max_transfer);
         } else {
-            ret = bdrv_is_allocated(bs, cluster_offset,
-                                    MIN(cluster_bytes, max_transfer), &pnum);
+            ret = bdrv_is_allocated(bs, align_offset,
+                                    MIN(align_bytes, max_transfer), &pnum);
             if (ret < 0) {
                 /*
                  * Safe to treat errors in querying allocation as if
                  * unallocated; we'll probably fail again soon on the
                  * read, but at least that will set a decent errno.
                  */
-                pnum = MIN(cluster_bytes, max_transfer);
+                pnum = MIN(align_bytes, max_transfer);
             }
 
             /* Stop at EOF if the image ends in the middle of the cluster */
@@ -1242,7 +1242,7 @@ bdrv_co_do_copy_on_readv(BdrvChild *child, int64_t offset, int64_t bytes,
             /* Must copy-on-read; use the bounce buffer */
             pnum = MIN(pnum, MAX_BOUNCE_BUFFER);
             if (!bounce_buffer) {
-                int64_t max_we_need = MAX(pnum, cluster_bytes - pnum);
+                int64_t max_we_need = MAX(pnum, align_bytes - pnum);
                 int64_t max_allowed = MIN(max_transfer, MAX_BOUNCE_BUFFER);
                 int64_t bounce_buffer_len = MIN(max_we_need, max_allowed);
 
@@ -1254,7 +1254,7 @@ bdrv_co_do_copy_on_readv(BdrvChild *child, int64_t offset, int64_t bytes,
             }
             qemu_iovec_init_buf(&local_qiov, bounce_buffer, pnum);
 
-            ret = bdrv_driver_preadv(bs, cluster_offset, pnum,
+            ret = bdrv_driver_preadv(bs, align_offset, pnum,
                                      &local_qiov, 0, 0);
             if (ret < 0) {
                 goto err;
@@ -1266,13 +1266,13 @@ bdrv_co_do_copy_on_readv(BdrvChild *child, int64_t offset, int64_t bytes,
                 /* FIXME: Should we (perhaps conditionally) be setting
                  * BDRV_REQ_MAY_UNMAP, if it will allow for a sparser copy
                  * that still correctly reads as zero? */
-                ret = bdrv_co_do_pwrite_zeroes(bs, cluster_offset, pnum,
+                ret = bdrv_co_do_pwrite_zeroes(bs, align_offset, pnum,
                                                BDRV_REQ_WRITE_UNCHANGED);
             } else {
                 /* This does not change the data on the disk, it is not
                  * necessary to flush even in cache=writethrough mode.
                  */
-                ret = bdrv_driver_pwritev(bs, cluster_offset, pnum,
+                ret = bdrv_driver_pwritev(bs, align_offset, pnum,
                                           &local_qiov, 0,
                                           BDRV_REQ_WRITE_UNCHANGED);
             }
@@ -1301,8 +1301,8 @@ bdrv_co_do_copy_on_readv(BdrvChild *child, int64_t offset, int64_t bytes,
             }
         }
 
-        cluster_offset += pnum;
-        cluster_bytes -= pnum;
+        align_offset += pnum;
+        align_bytes -= pnum;
         progress += pnum - skip_bytes;
         skip_bytes = 0;
     }
diff --git a/block/mirror.c b/block/mirror.c
index d3cacd1708..e213a892db 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -283,8 +283,8 @@ static int coroutine_fn mirror_cow_align(MirrorBlockJob *s, int64_t *offset,
     need_cow |= !test_bit((*offset + *bytes - 1) / s->granularity,
                           s->cow_bitmap);
     if (need_cow) {
-        bdrv_round_to_clusters(blk_bs(s->target), *offset, *bytes,
-                               &align_offset, &align_bytes);
+        bdrv_round_to_subclusters(blk_bs(s->target), *offset, *bytes,
+                                  &align_offset, &align_bytes);
     }
 
     if (align_bytes > max_bytes) {
@@ -576,8 +576,8 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
         int64_t target_offset;
         int64_t target_bytes;
         WITH_GRAPH_RDLOCK_GUARD() {
-            bdrv_round_to_clusters(blk_bs(s->target), offset, io_bytes,
-                                   &target_offset, &target_bytes);
+            bdrv_round_to_subclusters(blk_bs(s->target), offset, io_bytes,
+                                      &target_offset, &target_bytes);
         }
         if (target_offset == offset &&
             target_bytes == io_bytes) {
diff --git a/include/block/block-io.h b/include/block/block-io.h
index 43af816d75..1311bec5e2 100644
--- a/include/block/block-io.h
+++ b/include/block/block-io.h
@@ -189,7 +189,7 @@ bdrv_get_info(BlockDriverState *bs, BlockDriverInfo *bdi);
 ImageInfoSpecific *bdrv_get_specific_info(BlockDriverState *bs,
                                           Error **errp);
 BlockStatsSpecific *bdrv_get_specific_stats(BlockDriverState *bs);
-void bdrv_round_to_clusters(BlockDriverState *bs,
+void bdrv_round_to_subclusters(BlockDriverState *bs,
                             int64_t offset, int64_t bytes,
                             int64_t *cluster_offset, int64_t *cluster_bytes);
-- 
2.39.3