From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Date: Fri, 7 Jul 2017 19:07:47 +0200
Message-Id: <1499447335-6125-33-git-send-email-kwolf@redhat.com>
In-Reply-To: <1499447335-6125-1-git-send-email-kwolf@redhat.com>
References: <1499447335-6125-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 032/100] mirror: Switch mirror_iteration() to byte-based
Cc: kwolf@redhat.com, qemu-devel@nongnu.org

From: Eric Blake <eblake@redhat.com>

We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Change the internal loop
iteration of mirroring to track by bytes instead of sectors (although
we are still guaranteed that we iterate by steps that are both
sector-aligned and multiples of the granularity). Drop the now-unused
mirror_clip_sectors().

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/mirror.c | 105 +++++++++++++++++++++++++--------------------------------
 1 file changed, 46 insertions(+), 59 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 262fddf..b33f4bb 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -184,15 +184,6 @@ static inline int64_t mirror_clip_bytes(MirrorBlockJob *s,
     return MIN(bytes, s->bdev_length - offset);
 }
 
-/* Clip nb_sectors relative to sector_num to not exceed end-of-file */
-static inline int mirror_clip_sectors(MirrorBlockJob *s,
-                                      int64_t sector_num,
-                                      int nb_sectors)
-{
-    return MIN(nb_sectors,
-               s->bdev_length / BDRV_SECTOR_SIZE - sector_num);
-}
-
 /* Round offset and/or bytes to target cluster if COW is needed, and
  * return the offset of the adjusted tail against original. */
 static int mirror_cow_align(MirrorBlockJob *s, int64_t *offset,
@@ -336,30 +327,28 @@ static void mirror_do_zero_or_discard(MirrorBlockJob *s,
 static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
 {
     BlockDriverState *source = s->source;
-    int64_t sector_num, first_chunk;
+    int64_t offset, first_chunk;
     uint64_t delay_ns = 0;
     /* At least the first dirty chunk is mirrored in one iteration. */
     int nb_chunks = 1;
-    int64_t end = s->bdev_length / BDRV_SECTOR_SIZE;
     int sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
     bool write_zeroes_ok = bdrv_can_write_zeroes_with_unmap(blk_bs(s->target));
     int max_io_bytes = MAX(s->buf_size / MAX_IN_FLIGHT, MAX_IO_BYTES);
 
     bdrv_dirty_bitmap_lock(s->dirty_bitmap);
-    sector_num = bdrv_dirty_iter_next(s->dbi);
-    if (sector_num < 0) {
+    offset = bdrv_dirty_iter_next(s->dbi) * BDRV_SECTOR_SIZE;
+    if (offset < 0) {
         bdrv_set_dirty_iter(s->dbi, 0);
-        sector_num = bdrv_dirty_iter_next(s->dbi);
+        offset = bdrv_dirty_iter_next(s->dbi) * BDRV_SECTOR_SIZE;
         trace_mirror_restart_iter(s, bdrv_get_dirty_count(s->dirty_bitmap) *
                                   BDRV_SECTOR_SIZE);
-        assert(sector_num >= 0);
+        assert(offset >= 0);
     }
     bdrv_dirty_bitmap_unlock(s->dirty_bitmap);
 
-    first_chunk = sector_num / sectors_per_chunk;
+    first_chunk = offset / s->granularity;
     while (test_bit(first_chunk, s->in_flight_bitmap)) {
-        trace_mirror_yield_in_flight(s, sector_num * BDRV_SECTOR_SIZE,
-                                     s->in_flight);
+        trace_mirror_yield_in_flight(s, offset, s->in_flight);
         mirror_wait_for_io(s);
     }
 
@@ -368,25 +357,26 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
     /* Find the number of consective dirty chunks following the first dirty
      * one, and wait for in flight requests in them. */
     bdrv_dirty_bitmap_lock(s->dirty_bitmap);
-    while (nb_chunks * sectors_per_chunk < (s->buf_size >> BDRV_SECTOR_BITS)) {
+    while (nb_chunks * s->granularity < s->buf_size) {
         int64_t next_dirty;
-        int64_t next_sector = sector_num + nb_chunks * sectors_per_chunk;
-        int64_t next_chunk = next_sector / sectors_per_chunk;
-        if (next_sector >= end ||
-            !bdrv_get_dirty_locked(source, s->dirty_bitmap, next_sector)) {
+        int64_t next_offset = offset + nb_chunks * s->granularity;
+        int64_t next_chunk = next_offset / s->granularity;
+        if (next_offset >= s->bdev_length ||
+            !bdrv_get_dirty_locked(source, s->dirty_bitmap,
+                                   next_offset >> BDRV_SECTOR_BITS)) {
             break;
         }
         if (test_bit(next_chunk, s->in_flight_bitmap)) {
             break;
         }
 
-        next_dirty = bdrv_dirty_iter_next(s->dbi);
-        if (next_dirty > next_sector || next_dirty < 0) {
+        next_dirty = bdrv_dirty_iter_next(s->dbi) * BDRV_SECTOR_SIZE;
+        if (next_dirty > next_offset || next_dirty < 0) {
             /* The bitmap iterator's cache is stale, refresh it */
-            bdrv_set_dirty_iter(s->dbi, next_sector);
-            next_dirty = bdrv_dirty_iter_next(s->dbi);
+            bdrv_set_dirty_iter(s->dbi, next_offset >> BDRV_SECTOR_BITS);
+            next_dirty = bdrv_dirty_iter_next(s->dbi) * BDRV_SECTOR_SIZE;
         }
-        assert(next_dirty == next_sector);
+        assert(next_dirty == next_offset);
         nb_chunks++;
     }
 
@@ -394,14 +384,15 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
      * calling bdrv_get_block_status_above could yield - if some blocks are
      * marked dirty in this window, we need to know.
      */
-    bdrv_reset_dirty_bitmap_locked(s->dirty_bitmap, sector_num,
-                                   nb_chunks * sectors_per_chunk);
+    bdrv_reset_dirty_bitmap_locked(s->dirty_bitmap, offset >> BDRV_SECTOR_BITS,
+                                   nb_chunks * sectors_per_chunk);
     bdrv_dirty_bitmap_unlock(s->dirty_bitmap);
 
-    bitmap_set(s->in_flight_bitmap, sector_num / sectors_per_chunk, nb_chunks);
-    while (nb_chunks > 0 && sector_num < end) {
+    bitmap_set(s->in_flight_bitmap, offset / s->granularity, nb_chunks);
+    while (nb_chunks > 0 && offset < s->bdev_length) {
         int64_t ret;
         int io_sectors;
+        unsigned int io_bytes;
         int64_t io_bytes_acct;
         BlockDriverState *file;
         enum MirrorMethod {
@@ -410,28 +401,28 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
             MIRROR_METHOD_DISCARD
         } mirror_method = MIRROR_METHOD_COPY;
 
-        assert(!(sector_num % sectors_per_chunk));
-        ret = bdrv_get_block_status_above(source, NULL, sector_num,
+        assert(!(offset % s->granularity));
+        ret = bdrv_get_block_status_above(source, NULL,
+                                          offset >> BDRV_SECTOR_BITS,
                                           nb_chunks * sectors_per_chunk,
                                           &io_sectors, &file);
+        io_bytes = io_sectors * BDRV_SECTOR_SIZE;
         if (ret < 0) {
-            io_sectors = MIN(nb_chunks * sectors_per_chunk,
-                             max_io_bytes >> BDRV_SECTOR_BITS);
+            io_bytes = MIN(nb_chunks * s->granularity, max_io_bytes);
         } else if (ret & BDRV_BLOCK_DATA) {
-            io_sectors = MIN(io_sectors, max_io_bytes >> BDRV_SECTOR_BITS);
+            io_bytes = MIN(io_bytes, max_io_bytes);
         }
 
-        io_sectors -= io_sectors % sectors_per_chunk;
-        if (io_sectors < sectors_per_chunk) {
-            io_sectors = sectors_per_chunk;
+        io_bytes -= io_bytes % s->granularity;
+        if (io_bytes < s->granularity) {
+            io_bytes = s->granularity;
         } else if (ret >= 0 && !(ret & BDRV_BLOCK_DATA)) {
-            int64_t target_sector_num;
-            int target_nb_sectors;
-            bdrv_round_sectors_to_clusters(blk_bs(s->target), sector_num,
-                                           io_sectors, &target_sector_num,
-                                           &target_nb_sectors);
-            if (target_sector_num == sector_num &&
-                target_nb_sectors == io_sectors) {
+            int64_t target_offset;
+            unsigned int target_bytes;
+            bdrv_round_to_clusters(blk_bs(s->target), offset, io_bytes,
+                                   &target_offset, &target_bytes);
+            if (target_offset == offset &&
+                target_bytes == io_bytes) {
                 mirror_method = ret & BDRV_BLOCK_ZERO ?
                                 MIRROR_METHOD_ZERO :
                                 MIRROR_METHOD_DISCARD;
@@ -439,8 +430,7 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
         }
 
         while (s->in_flight >= MAX_IN_FLIGHT) {
-            trace_mirror_yield_in_flight(s, sector_num * BDRV_SECTOR_SIZE,
-                                         s->in_flight);
+            trace_mirror_yield_in_flight(s, offset, s->in_flight);
             mirror_wait_for_io(s);
         }
 
@@ -448,30 +438,27 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
             return 0;
         }
 
-        io_sectors = mirror_clip_sectors(s, sector_num, io_sectors);
+        io_bytes = mirror_clip_bytes(s, offset, io_bytes);
         switch (mirror_method) {
         case MIRROR_METHOD_COPY:
-            io_bytes_acct = mirror_do_read(s, sector_num * BDRV_SECTOR_SIZE,
-                                           io_sectors * BDRV_SECTOR_SIZE);
-            io_sectors = io_bytes_acct / BDRV_SECTOR_SIZE;
+            io_bytes = io_bytes_acct = mirror_do_read(s, offset, io_bytes);
             break;
         case MIRROR_METHOD_ZERO:
        case MIRROR_METHOD_DISCARD:
-            mirror_do_zero_or_discard(s, sector_num * BDRV_SECTOR_SIZE,
-                                      io_sectors * BDRV_SECTOR_SIZE,
+            mirror_do_zero_or_discard(s, offset, io_bytes,
                                       mirror_method == MIRROR_METHOD_DISCARD);
             if (write_zeroes_ok) {
                 io_bytes_acct = 0;
             } else {
-                io_bytes_acct = io_sectors * BDRV_SECTOR_SIZE;
+                io_bytes_acct = io_bytes;
             }
             break;
         default:
             abort();
         }
-        assert(io_sectors);
-        sector_num += io_sectors;
-        nb_chunks -= DIV_ROUND_UP(io_sectors, sectors_per_chunk);
+        assert(io_bytes);
+        offset += io_bytes;
+        nb_chunks -= DIV_ROUND_UP(io_bytes, s->granularity);
         if (s->common.speed) {
            delay_ns = ratelimit_calculate_delay(&s->limit, io_bytes_acct);
        }
-- 
1.8.3.1
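
The conversion pattern in this patch is mechanical: the dirty-bitmap iterator
and bdrv_get_block_status_above() still take sectors, so byte offsets are
produced by multiplying with BDRV_SECTOR_SIZE on the way in and shifting by
BDRV_SECTOR_BITS on the way back, while chunk bookkeeping switches from
sectors_per_chunk to s->granularity in bytes. A minimal standalone C sketch of
that arithmetic (the BDRV_SECTOR_* constants match QEMU's block layer; the
main() harness and sample values are invented for illustration and are not
part of the patch):

/* Standalone sketch of the sector<->byte arithmetic used above. */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define BDRV_SECTOR_BITS 9
#define BDRV_SECTOR_SIZE (1LL << BDRV_SECTOR_BITS)   /* 512 bytes */

int main(void)
{
    int64_t granularity = 65536;   /* one dirty-bitmap chunk, in bytes */
    int64_t sector_num = 2048;     /* what a sector-based iterator returns */

    /* Convert the iterator's sector number to a byte offset up front... */
    int64_t offset = sector_num * BDRV_SECTOR_SIZE;

    /* ...and convert back only at the remaining sector-based call sites. */
    assert((offset >> BDRV_SECTOR_BITS) == sector_num);

    /* Chunk bookkeeping is now done directly in bytes. */
    int64_t first_chunk = offset / granularity;

    /* Trim an I/O length down to whole chunks, but never below one chunk,
     * mirroring the io_bytes handling in mirror_iteration(). */
    int64_t io_bytes = 200000;
    io_bytes -= io_bytes % granularity;
    if (io_bytes < granularity) {
        io_bytes = granularity;
    }

    printf("offset=%" PRId64 " first_chunk=%" PRId64 " io_bytes=%" PRId64 "\n",
           offset, first_chunk, io_bytes);
    return 0;
}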