From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org
Date: Fri, 7 Jul 2017 19:07:46 +0200
Message-Id: <1499447335-6125-32-git-send-email-kwolf@redhat.com>
In-Reply-To: <1499447335-6125-1-git-send-email-kwolf@redhat.com>
References: <1499447335-6125-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 031/100] mirror: Switch mirror_do_read() to byte-based

From: Eric Blake <eblake@redhat.com>

We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.
Convert another internal function, preserving all existing semantics,
and adding one more assertion that things are still sector-aligned (so
that conversions to sectors in mirror_read_complete don't need to
round).

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/mirror.c | 74 ++++++++++++++++++++++++++--------------------------------
 1 file changed, 33 insertions(+), 41 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 0378bd2..262fddf 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -196,7 +196,7 @@ static inline int mirror_clip_sectors(MirrorBlockJob *s,
 /* Round offset and/or bytes to target cluster if COW is needed, and
  * return the offset of the adjusted tail against original. */
 static int mirror_cow_align(MirrorBlockJob *s, int64_t *offset,
-                            unsigned int *bytes)
+                            uint64_t *bytes)
 {
     bool need_cow;
     int ret = 0;
@@ -204,6 +204,7 @@ static int mirror_cow_align(MirrorBlockJob *s, int64_t *offset,
     unsigned int align_bytes = *bytes;
     int max_bytes = s->granularity * s->max_iov;
 
+    assert(*bytes < INT_MAX);
     need_cow = !test_bit(*offset / s->granularity, s->cow_bitmap);
     need_cow |= !test_bit((*offset + *bytes - 1) / s->granularity,
                           s->cow_bitmap);
@@ -238,59 +239,51 @@ static inline void mirror_wait_for_io(MirrorBlockJob *s)
 }
 
 /* Submit async read while handling COW.
- * Returns: The number of sectors copied after and including sector_num,
- *          excluding any sectors copied prior to sector_num due to alignment.
- *          This will be nb_sectors if no alignment is necessary, or
- *          (new_end - sector_num) if tail is rounded up or down due to
+ * Returns: The number of bytes copied after and including offset,
+ *          excluding any bytes copied prior to offset due to alignment.
+ *          This will be @bytes if no alignment is necessary, or
+ *          (new_end - offset) if tail is rounded up or down due to
  *          alignment or buffer limit.
  */
-static int mirror_do_read(MirrorBlockJob *s, int64_t sector_num,
-                          int nb_sectors)
+static uint64_t mirror_do_read(MirrorBlockJob *s, int64_t offset,
+                               uint64_t bytes)
 {
     BlockBackend *source = s->common.blk;
-    int sectors_per_chunk, nb_chunks;
-    int ret;
+    int nb_chunks;
+    uint64_t ret;
     MirrorOp *op;
-    int max_sectors;
+    uint64_t max_bytes;
 
-    sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
-    max_sectors = sectors_per_chunk * s->max_iov;
+    max_bytes = s->granularity * s->max_iov;
 
     /* We can only handle as much as buf_size at a time. */
-    nb_sectors = MIN(s->buf_size >> BDRV_SECTOR_BITS, nb_sectors);
-    nb_sectors = MIN(max_sectors, nb_sectors);
-    assert(nb_sectors);
-    assert(nb_sectors < BDRV_REQUEST_MAX_SECTORS);
-    ret = nb_sectors;
+    bytes = MIN(s->buf_size, MIN(max_bytes, bytes));
+    assert(bytes);
+    assert(bytes < BDRV_REQUEST_MAX_BYTES);
+    ret = bytes;
 
     if (s->cow_bitmap) {
-        int64_t offset = sector_num * BDRV_SECTOR_SIZE;
-        unsigned int bytes = nb_sectors * BDRV_SECTOR_SIZE;
-        int gap;
-
-        gap = mirror_cow_align(s, &offset, &bytes);
-        sector_num = offset / BDRV_SECTOR_SIZE;
-        nb_sectors = bytes / BDRV_SECTOR_SIZE;
-        ret += gap / BDRV_SECTOR_SIZE;
+        ret += mirror_cow_align(s, &offset, &bytes);
     }
-    assert(nb_sectors << BDRV_SECTOR_BITS <= s->buf_size);
-    /* The sector range must meet granularity because:
+    assert(bytes <= s->buf_size);
+    /* The offset is granularity-aligned because:
      * 1) Caller passes in aligned values;
      * 2) mirror_cow_align is used only when target cluster is larger.
      */
-    assert(!(sector_num % sectors_per_chunk));
-    nb_chunks = DIV_ROUND_UP(nb_sectors, sectors_per_chunk);
+    assert(QEMU_IS_ALIGNED(offset, s->granularity));
+    /* The range is sector-aligned, since bdrv_getlength() rounds up. */
+    assert(QEMU_IS_ALIGNED(bytes, BDRV_SECTOR_SIZE));
+    nb_chunks = DIV_ROUND_UP(bytes, s->granularity);
 
     while (s->buf_free_count < nb_chunks) {
-        trace_mirror_yield_in_flight(s, sector_num * BDRV_SECTOR_SIZE,
-                                     s->in_flight);
+        trace_mirror_yield_in_flight(s, offset, s->in_flight);
         mirror_wait_for_io(s);
     }
 
     /* Allocate a MirrorOp that is used as an AIO callback. */
     op = g_new(MirrorOp, 1);
     op->s = s;
-    op->offset = sector_num * BDRV_SECTOR_SIZE;
-    op->bytes = nb_sectors * BDRV_SECTOR_SIZE;
+    op->offset = offset;
+    op->bytes = bytes;
 
     /* Now make a QEMUIOVector taking enough granularity-sized chunks
      * from s->buf_free.
@@ -298,7 +291,7 @@ static int mirror_do_read(MirrorBlockJob *s, int64_t sector_num,
     qemu_iovec_init(&op->qiov, nb_chunks);
     while (nb_chunks-- > 0) {
         MirrorBuffer *buf = QSIMPLEQ_FIRST(&s->buf_free);
-        size_t remaining = nb_sectors * BDRV_SECTOR_SIZE - op->qiov.size;
+        size_t remaining = bytes - op->qiov.size;
 
         QSIMPLEQ_REMOVE_HEAD(&s->buf_free, next);
         s->buf_free_count--;
@@ -307,12 +300,10 @@ static int mirror_do_read(MirrorBlockJob *s, int64_t sector_num,
 
     /* Copy the dirty cluster. */
     s->in_flight++;
-    s->bytes_in_flight += nb_sectors * BDRV_SECTOR_SIZE;
-    trace_mirror_one_iteration(s, sector_num * BDRV_SECTOR_SIZE,
-                               nb_sectors * BDRV_SECTOR_SIZE);
+    s->bytes_in_flight += bytes;
+    trace_mirror_one_iteration(s, offset, bytes);
 
-    blk_aio_preadv(source, sector_num * BDRV_SECTOR_SIZE, &op->qiov, 0,
-                   mirror_read_complete, op);
+    blk_aio_preadv(source, offset, &op->qiov, 0, mirror_read_complete, op);
     return ret;
 }
 
@@ -460,8 +451,9 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
         io_sectors = mirror_clip_sectors(s, sector_num, io_sectors);
         switch (mirror_method) {
        case MIRROR_METHOD_COPY:
-            io_sectors = mirror_do_read(s, sector_num, io_sectors);
-            io_bytes_acct = io_sectors * BDRV_SECTOR_SIZE;
+            io_bytes_acct = mirror_do_read(s, sector_num * BDRV_SECTOR_SIZE,
+                                           io_sectors * BDRV_SECTOR_SIZE);
+            io_sectors = io_bytes_acct / BDRV_SECTOR_SIZE;
             break;
        case MIRROR_METHOD_ZERO:
        case MIRROR_METHOD_DISCARD:
-- 
1.8.3.1
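
For readers following the byte-based conversion, the stand-alone sketch below
(not QEMU code) illustrates the clamping and alignment logic that the new
mirror_do_read() relies on: the request is clamped to the copy buffer and to
granularity * max_iov, the offset must stay granularity-aligned and the byte
count sector-aligned, and the chunk count is a simple round-up division. MIN,
DIV_ROUND_UP and IS_ALIGNED here are local stand-ins for QEMU's macros, and
the granularity, buf_size, max_iov and request values are made up for the
example.

/* sketch.c - toy model of the byte-based clamping in mirror_do_read() */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MIN(a, b)          ((a) < (b) ? (a) : (b))
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define IS_ALIGNED(v, a)   (((v) % (a)) == 0)

#define SECTOR_SIZE 512

int main(void)
{
    uint64_t granularity = 65536;       /* one dirty-bitmap chunk           */
    int      max_iov     = 32;          /* iovec limit for one request      */
    uint64_t buf_size    = 1 << 20;     /* copy buffer available            */

    int64_t  offset = 3 * granularity;  /* caller passes aligned values     */
    uint64_t bytes  = 4 * 1024 * 1024;  /* request larger than both limits  */

    /* Clamp to what a single request may carry. */
    uint64_t max_bytes = granularity * max_iov;
    bytes = MIN(buf_size, MIN(max_bytes, bytes));

    /* The properties the patch asserts: offset stays chunk-aligned and the
     * byte count stays sector-aligned, so completion code can convert back
     * to sectors without rounding. */
    assert(IS_ALIGNED((uint64_t)offset, granularity));
    assert(IS_ALIGNED(bytes, SECTOR_SIZE));

    int nb_chunks = (int)DIV_ROUND_UP(bytes, granularity);
    printf("copy %" PRIu64 " bytes in %d chunk(s)\n", bytes, nb_chunks);
    return 0;
}

With the values above the request is clamped by buf_size (1 MiB), and the
program prints "copy 1048576 bytes in 16 chunk(s)".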