From: Eric Blake <eblake@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 11 Apr 2017 17:29:39 -0500
Message-Id: <20170411222945.11741-12-eblake@redhat.com>
In-Reply-To: <20170411222945.11741-1-eblake@redhat.com>
References: <20170411222945.11741-1-eblake@redhat.com>
Subject: [Qemu-devel] [PATCH 11/17] mirror: Switch mirror_iteration() to byte-based
Cc: kwolf@redhat.com, Jeff Cody, qemu-block@nongnu.org, Max Reitz

We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based.

Change the internal loop iteration of mirroring to track by bytes
instead of sectors (although we are still guaranteed that we iterate
by steps that are both sector-aligned and multiples of the
granularity).  Drop the now-unused mirror_clip_sectors().

Signed-off-by: Eric Blake <eblake@redhat.com>
---
 block/mirror.c | 103 +++++++++++++++++++++++++--------------------------------
 1 file changed, 45 insertions(+), 58 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index da23b70..8de0492 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -184,15 +184,6 @@ static inline void mirror_clip_bytes(MirrorBlockJob *s,
     *bytes = MIN(*bytes, s->bdev_length - offset);
 }
 
-/* Clip nb_sectors relative to sector_num to not exceed end-of-file */
-static inline void mirror_clip_sectors(MirrorBlockJob *s,
-                                       int64_t sector_num,
-                                       int *nb_sectors)
-{
-    *nb_sectors = MIN(*nb_sectors,
-                      s->bdev_length / BDRV_SECTOR_SIZE - sector_num);
-}
-
 /* Round offset and/or bytes to target cluster if COW is needed, and
  * return the offset of the adjusted tail against original. */
 static int mirror_cow_align(MirrorBlockJob *s, int64_t *offset,
@@ -336,28 +327,26 @@ static void mirror_do_zero_or_discard(MirrorBlockJob *s,
 static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
 {
     BlockDriverState *source = s->source;
-    int64_t sector_num, first_chunk;
+    int64_t offset, first_chunk;
     uint64_t delay_ns = 0;
     /* At least the first dirty chunk is mirrored in one iteration. */
     int nb_chunks = 1;
-    int64_t end = s->bdev_length / BDRV_SECTOR_SIZE;
     int sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
     bool write_zeroes_ok = bdrv_can_write_zeroes_with_unmap(blk_bs(s->target));
     int max_io_bytes = MAX(s->buf_size / MAX_IN_FLIGHT, MAX_IO_BYTES);
 
-    sector_num = bdrv_dirty_iter_next(s->dbi);
-    if (sector_num < 0) {
+    offset = bdrv_dirty_iter_next(s->dbi) * BDRV_SECTOR_SIZE;
+    if (offset < 0) {
         bdrv_set_dirty_iter(s->dbi, 0);
-        sector_num = bdrv_dirty_iter_next(s->dbi);
+        offset = bdrv_dirty_iter_next(s->dbi) * BDRV_SECTOR_SIZE;
         trace_mirror_restart_iter(s, bdrv_get_dirty_count(s->dirty_bitmap) *
                                   BDRV_SECTOR_SIZE);
-        assert(sector_num >= 0);
+        assert(offset >= 0);
     }
 
-    first_chunk = sector_num / sectors_per_chunk;
+    first_chunk = offset / s->granularity;
     while (test_bit(first_chunk, s->in_flight_bitmap)) {
-        trace_mirror_yield_in_flight(s, sector_num * BDRV_SECTOR_SIZE,
-                                     s->in_flight);
+        trace_mirror_yield_in_flight(s, offset, s->in_flight);
         mirror_wait_for_io(s);
     }
 
@@ -365,25 +354,26 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
 
     /* Find the number of consective dirty chunks following the first dirty
      * one, and wait for in flight requests in them. */
-    while (nb_chunks * sectors_per_chunk < (s->buf_size >> BDRV_SECTOR_BITS)) {
+    while (nb_chunks * s->granularity < s->buf_size) {
         int64_t next_dirty;
-        int64_t next_sector = sector_num + nb_chunks * sectors_per_chunk;
-        int64_t next_chunk = next_sector / sectors_per_chunk;
-        if (next_sector >= end ||
-            !bdrv_get_dirty(source, s->dirty_bitmap, next_sector)) {
+        int64_t next_offset = offset + nb_chunks * s->granularity;
+        int64_t next_chunk = next_offset / s->granularity;
+        if (next_offset >= s->bdev_length ||
+            !bdrv_get_dirty(source, s->dirty_bitmap,
+                            next_offset >> BDRV_SECTOR_BITS)) {
             break;
         }
         if (test_bit(next_chunk, s->in_flight_bitmap)) {
            break;
        }
 
-        next_dirty = bdrv_dirty_iter_next(s->dbi);
-        if (next_dirty > next_sector || next_dirty < 0) {
+        next_dirty = bdrv_dirty_iter_next(s->dbi) * BDRV_SECTOR_SIZE;
+        if (next_dirty > next_offset || next_dirty < 0) {
            /* The bitmap iterator's cache is stale, refresh it */
-            bdrv_set_dirty_iter(s->dbi, next_sector);
-            next_dirty = bdrv_dirty_iter_next(s->dbi);
+            bdrv_set_dirty_iter(s->dbi, next_offset >> BDRV_SECTOR_BITS);
+            next_dirty = bdrv_dirty_iter_next(s->dbi) * BDRV_SECTOR_SIZE;
        }
-        assert(next_dirty == next_sector);
+        assert(next_dirty == next_offset);
        nb_chunks++;
     }
 
@@ -391,12 +381,13 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
      * calling bdrv_get_block_status_above could yield - if some blocks are
      * marked dirty in this window, we need to know.
      */
-    bdrv_reset_dirty_bitmap(s->dirty_bitmap, sector_num,
+    bdrv_reset_dirty_bitmap(s->dirty_bitmap, offset >> BDRV_SECTOR_BITS,
                             nb_chunks * sectors_per_chunk);
 
-    bitmap_set(s->in_flight_bitmap, sector_num / sectors_per_chunk, nb_chunks);
-    while (nb_chunks > 0 && sector_num < end) {
+    bitmap_set(s->in_flight_bitmap, offset / s->granularity, nb_chunks);
+    while (nb_chunks > 0 && offset < s->bdev_length) {
         int64_t ret;
         int io_sectors;
+        unsigned int io_bytes;
         int64_t io_bytes_acct;
         BlockDriverState *file;
         enum MirrorMethod {
@@ -405,28 +396,28 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
             MIRROR_METHOD_DISCARD
         } mirror_method = MIRROR_METHOD_COPY;
 
-        assert(!(sector_num % sectors_per_chunk));
-        ret = bdrv_get_block_status_above(source, NULL, sector_num,
+        assert(!(offset % s->granularity));
+        ret = bdrv_get_block_status_above(source, NULL,
+                                          offset >> BDRV_SECTOR_BITS,
                                           nb_chunks * sectors_per_chunk,
                                           &io_sectors, &file);
+        io_bytes = io_sectors * BDRV_SECTOR_SIZE;
         if (ret < 0) {
-            io_sectors = MIN(nb_chunks * sectors_per_chunk,
-                             max_io_bytes >> BDRV_SECTOR_BITS);
+            io_bytes = MIN(nb_chunks * s->granularity, max_io_bytes);
         } else if (ret & BDRV_BLOCK_DATA) {
-            io_sectors = MIN(io_sectors, max_io_bytes >> BDRV_SECTOR_BITS);
+            io_bytes = MIN(io_bytes, max_io_bytes);
         }
 
-        io_sectors -= io_sectors % sectors_per_chunk;
-        if (io_sectors < sectors_per_chunk) {
-            io_sectors = sectors_per_chunk;
+        io_bytes -= io_bytes % s->granularity;
+        if (io_bytes < s->granularity) {
+            io_bytes = s->granularity;
         } else if (ret >= 0 && !(ret & BDRV_BLOCK_DATA)) {
-            int64_t target_sector_num;
-            int target_nb_sectors;
-            bdrv_round_sectors_to_clusters(blk_bs(s->target), sector_num,
-                                           io_sectors, &target_sector_num,
-                                           &target_nb_sectors);
-            if (target_sector_num == sector_num &&
-                target_nb_sectors == io_sectors) {
+            int64_t target_offset;
+            unsigned int target_bytes;
+            bdrv_round_to_clusters(blk_bs(s->target), offset, io_bytes,
+                                   &target_offset, &target_bytes);
+            if (target_offset == offset &&
+                target_bytes == io_bytes) {
                 mirror_method = ret & BDRV_BLOCK_ZERO ?
                                     MIRROR_METHOD_ZERO :
                                     MIRROR_METHOD_DISCARD;
@@ -434,8 +425,7 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
         }
 
         while (s->in_flight >= MAX_IN_FLIGHT) {
-            trace_mirror_yield_in_flight(s, sector_num * BDRV_SECTOR_SIZE,
-                                         s->in_flight);
+            trace_mirror_yield_in_flight(s, offset, s->in_flight);
             mirror_wait_for_io(s);
         }
 
@@ -443,30 +433,27 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
             return 0;
         }
 
-        mirror_clip_sectors(s, sector_num, &io_sectors);
+        mirror_clip_bytes(s, offset, &io_bytes);
         switch (mirror_method) {
         case MIRROR_METHOD_COPY:
-            io_bytes_acct = mirror_do_read(s, sector_num * BDRV_SECTOR_SIZE,
-                                           io_sectors * BDRV_SECTOR_SIZE);
-            io_sectors = io_bytes_acct / BDRV_SECTOR_SIZE;
+            io_bytes = io_bytes_acct = mirror_do_read(s, offset, io_bytes);
             break;
         case MIRROR_METHOD_ZERO:
         case MIRROR_METHOD_DISCARD:
-            mirror_do_zero_or_discard(s, sector_num * BDRV_SECTOR_SIZE,
-                                      io_sectors * BDRV_SECTOR_SIZE,
+            mirror_do_zero_or_discard(s, offset, io_bytes,
                                       mirror_method == MIRROR_METHOD_DISCARD);
             if (write_zeroes_ok) {
                 io_bytes_acct = 0;
             } else {
-                io_bytes_acct = io_sectors * BDRV_SECTOR_SIZE;
+                io_bytes_acct = io_bytes;
             }
             break;
         default:
             abort();
         }
-        assert(io_sectors);
-        sector_num += io_sectors;
-        nb_chunks -= DIV_ROUND_UP(io_sectors, sectors_per_chunk);
+        assert(io_bytes);
+        offset += io_bytes;
+        nb_chunks -= DIV_ROUND_UP(io_bytes, s->granularity);
         if (s->common.speed) {
             delay_ns = ratelimit_calculate_delay(&s->limit, io_bytes_acct);
         }
-- 
2.9.3
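
[Editor's illustration, not part of the patch.]  For readers following the
arithmetic outside the QEMU tree, here is a minimal, self-contained sketch of
the byte-based bookkeeping the converted loop relies on: a byte offset maps to
a chunk index by dividing by the granularity, and a request length is clipped
against the device length in bytes rather than sectors.  All names below
(chunk_of, clip_bytes, SECTOR_SIZE) are hypothetical stand-ins for QEMU's
MirrorBlockJob fields and mirror_clip_bytes(); the only assumption is the one
the commit message states, that the loop steps on boundaries that are both
sector-aligned and multiples of the granularity.

    /* Standalone illustration of byte-based chunk bookkeeping.
     * Hypothetical names; QEMU's real code lives in block/mirror.c. */
    #include <assert.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE 512

    static int64_t min64(int64_t a, int64_t b)
    {
        return a < b ? a : b;
    }

    /* Map a byte offset to its chunk index
     * (replaces sector_num / sectors_per_chunk). */
    static int64_t chunk_of(int64_t offset, int64_t granularity)
    {
        assert(offset % granularity == 0);  /* loop steps on chunk boundaries */
        return offset / granularity;
    }

    /* Clip a byte count so [offset, offset + bytes) stays inside the device
     * (byte-based analogue of the dropped mirror_clip_sectors()). */
    static int64_t clip_bytes(int64_t offset, int64_t bytes, int64_t bdev_length)
    {
        return min64(bytes, bdev_length - offset);
    }

    int main(void)
    {
        int64_t granularity = 64 * 1024;                       /* 64 KiB chunks */
        int64_t bdev_length = 10 * granularity + 3 * SECTOR_SIZE; /* unaligned end */
        int64_t offset = 10 * granularity;                     /* last, partial chunk */

        printf("chunk index: %" PRId64 "\n", chunk_of(offset, granularity));
        printf("clipped bytes: %" PRId64 "\n",
               clip_bytes(offset, granularity, bdev_length));
        return 0;
    }

Compiled with any C99 compiler, the sketch prints chunk index 10 and a clipped
length of 1536 bytes for the final, sector-aligned but sub-granularity chunk,
which is exactly the end-of-device case mirror_clip_bytes() handles in the
patch above.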