From: Peter Lieven <pl@kamp.de>
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Cc: kwolf@redhat.com, Peter Lieven <pl@kamp.de>, mreitz@redhat.com
Date: Sat, 7 Jul 2018 13:42:03 +0200
Message-Id: <1530963723-14380-1-git-send-email-pl@kamp.de>
Subject: [Qemu-devel] [PATCH V4] qemu-img: align result of is_allocated_sectors

We currently don't enforce that the sparse segments we detect during convert
are aligned. This leads to unnecessary and costly read-modify-write cycles,
either internally in QEMU or in the background on the storage device, since
nearly all modern filesystems and storage hardware are internally 4k aligned.

This patch modifies is_allocated_sectors so that its *pnum result will always
end at an alignment boundary. This way all requests will end at an alignment
boundary. The start of each request will also be aligned as long as the
results of get_block_status do not lead to an unaligned offset.
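To make the effect concrete, here is a small standalone sketch of the
end-alignment rule (illustration only, not part of the patch): the helper
name align_run_end and the demo values are made up, but the tail handling
mirrors what the patch adds to is_allocated_sectors below. Runs are counted
in 512-byte sectors and alignment is a power of two.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Hypothetical helper mirroring the patch's tail handling: a run of i
 * sectors starting at sector_num was detected as allocated
 * (*is_zero == false) or as sparse (*is_zero == true); clamp its end to
 * an alignment boundary. n is the number of sectors left in the buffer. */
static int align_run_end(int64_t sector_num, int i, int n, bool *is_zero,
                         int alignment)
{
    int tail = (int)((sector_num + i) & (alignment - 1));

    if (tail) {
        if (*is_zero && i == tail) {
            /* A sparse run that only consists of a small tail is
             * treated as allocated and merged with the data before it. */
            *is_zero = false;
        }
        if (!*is_zero) {
            /* Allocated runs: round the end offset up ... */
            i += alignment - tail;
            i = MIN(i, n); /* ... but never beyond the buffer. */
        } else {
            /* Sparse runs: round the end offset down. */
            i -= tail;
        }
    }
    return i;
}

int main(void)
{
    bool zero;

    /* Allocated run of 3 sectors at sector 0, alignment 8:
     * rounded up to end at sector 8. */
    zero = false;
    printf("%d\n", align_run_end(0, 3, 100, &zero, 8));  /* prints 8 */

    /* Sparse run of 11 sectors at sector 0: rounded down to end at
     * sector 8; the 3-sector tail is handled with the next run. */
    zero = true;
    printf("%d\n", align_run_end(0, 11, 100, &zero, 8)); /* prints 8 */

    /* Sparse run that is nothing but a 3-sector tail (sectors 8..10):
     * reclassified as allocated and rounded up to end at sector 16. */
    zero = true;
    int len = align_run_end(8, 3, 100, &zero, 8);
    printf("%d zero=%d\n", len, (int)zero);              /* prints 8 zero=0 */
    return 0;
}

In the last case the small sparse tail is written as data, so the request
still ends on an alignment boundary instead of forcing an RMW cycle.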
Converting an example image [1] to a raw device with a 4k sector size
currently costs about 4600 additional 4k read requests (RMW cycles) for a
total of about 15000 write requests. With this patch the additional 4600
read requests are eliminated while the total number of write requests stays
constant.

[1] https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk

Signed-off-by: Peter Lieven <pl@kamp.de>
---
V3->V4: - only focus on the end offset in is_allocated_sectors [Kevin]

V2->V3: - ensure that s.alignment is a power of 2
        - correctly handle n < alignment in is_allocated_sectors
          if sector_num % alignment > 0.

V1->V2: - take the current sector offset into account [Max]
        - try to figure out the target alignment [Max]

 qemu-img.c | 44 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index e1a506f..20e3236 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1105,11 +1105,15 @@ static int64_t find_nonzero(const uint8_t *buf, int64_t n)
  *
  * 'pnum' is set to the number of sectors (including and immediately following
  * the first one) that are known to be in the same allocated/unallocated state.
+ * The function will try to align the end offset to alignment boundaries so
+ * that the request will at least end aligned and consecutive requests will
+ * also start at an aligned offset.
  */
-static int is_allocated_sectors(const uint8_t *buf, int n, int *pnum)
+static int is_allocated_sectors(const uint8_t *buf, int n, int *pnum,
+                                int64_t sector_num, int alignment)
 {
     bool is_zero;
-    int i;
+    int i, tail;
 
     if (n <= 0) {
         *pnum = 0;
@@ -1122,6 +1126,23 @@ static int is_allocated_sectors(const uint8_t *buf, int n, int *pnum)
             break;
         }
     }
+
+    tail = (sector_num + i) & (alignment - 1);
+    if (tail) {
+        if (is_zero && i == tail) {
+            /* treat unallocated areas which only consist
+             * of a small tail as allocated. */
+            is_zero = 0;
+        }
+        if (!is_zero) {
+            /* align up end offset of allocated areas. */
+            i += alignment - tail;
+            i = MIN(i, n);
+        } else {
+            /* align down end offset of zero areas. */
+            i -= tail;
+        }
+    }
     *pnum = i;
     return !is_zero;
 }
@@ -1132,7 +1153,7 @@ static int is_allocated_sectors(const uint8_t *buf, int n, int *pnum)
  * breaking up write requests for only small sparse areas.
  */
 static int is_allocated_sectors_min(const uint8_t *buf, int n, int *pnum,
-                                    int min)
+                                    int min, int64_t sector_num, int alignment)
 {
     int ret;
     int num_checked, num_used;
@@ -1141,7 +1162,7 @@ static int is_allocated_sectors_min(const uint8_t *buf, int n, int *pnum,
         min = n;
     }
 
-    ret = is_allocated_sectors(buf, n, pnum);
+    ret = is_allocated_sectors(buf, n, pnum, sector_num, alignment);
     if (!ret) {
         return ret;
     }
@@ -1149,13 +1170,15 @@ static int is_allocated_sectors_min(const uint8_t *buf, int n, int *pnum,
     num_used = *pnum;
     buf += BDRV_SECTOR_SIZE * *pnum;
     n -= *pnum;
+    sector_num += *pnum;
     num_checked = num_used;
 
     while (n > 0) {
-        ret = is_allocated_sectors(buf, n, pnum);
+        ret = is_allocated_sectors(buf, n, pnum, sector_num, alignment);
 
         buf += BDRV_SECTOR_SIZE * *pnum;
         n -= *pnum;
+        sector_num += *pnum;
         num_checked += *pnum;
         if (ret) {
             num_used = num_checked;
@@ -1560,6 +1583,7 @@ typedef struct ImgConvertState {
     bool wr_in_order;
     bool copy_range;
     int min_sparse;
+    int alignment;
     size_t cluster_sectors;
     size_t buf_sectors;
     long num_coroutines;
@@ -1724,7 +1748,8 @@ static int coroutine_fn convert_co_write(ImgConvertState *s, int64_t sector_num,
      * zeroed. */
     if (!s->min_sparse ||
         (!s->compressed &&
-         is_allocated_sectors_min(buf, n, &n, s->min_sparse)) ||
+         is_allocated_sectors_min(buf, n, &n, s->min_sparse,
+                                  sector_num, s->alignment)) ||
         (s->compressed &&
          !buffer_is_zero(buf, n * BDRV_SECTOR_SIZE)))
     {
@@ -2373,6 +2398,13 @@ static int img_convert(int argc, char **argv)
                                  out_bs->bl.pdiscard_alignment >>
                                  BDRV_SECTOR_BITS)));
 
+    /* try to align the write requests to the destination to avoid unnecessary
+     * RMW cycles. */
+    s.alignment = MAX(pow2floor(s.min_sparse),
+                      DIV_ROUND_UP(out_bs->bl.request_alignment,
+                                   BDRV_SECTOR_SIZE));
+    assert(is_power_of_2(s.alignment));
+
     if (skip_create) {
         int64_t output_sectors = blk_nb_sectors(s.target);
         if (output_sectors < 0) {
-- 
2.7.4
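For reference, the alignment selection added at the end of img_convert can
be exercised in isolation. The following sketch re-implements
BDRV_SECTOR_SIZE, MAX, DIV_ROUND_UP, pow2floor and is_power_of_2 locally as
stand-ins for QEMU's utility helpers (assumptions for this demo, not copied
from the tree) and feeds in a destination that reports 4k request_alignment
together with the default sparse size of 8 sectors.

#include <assert.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Local stand-ins for the QEMU utility helpers used by the patch. */
#define BDRV_SECTOR_SIZE 512
#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int64_t pow2floor(int64_t value)
{
    /* Largest power of two <= value (for value > 0). */
    while (value & (value - 1)) {
        value &= value - 1;
    }
    return value;
}

static bool is_power_of_2(uint64_t value)
{
    return value && !(value & (value - 1));
}

int main(void)
{
    int min_sparse = 8;                /* default sparse size: 8 sectors (4k) */
    uint32_t request_alignment = 4096; /* destination with 4k sectors */

    int64_t alignment = MAX(pow2floor(min_sparse),
                            DIV_ROUND_UP(request_alignment, BDRV_SECTOR_SIZE));
    assert(is_power_of_2(alignment));
    printf("alignment = %" PRId64 " sectors (%" PRId64 " bytes)\n",
           alignment, alignment * BDRV_SECTOR_SIZE);
    return 0;
}

Both terms resolve to 8 sectors (4k) here, so write requests are sliced on
4k boundaries; a larger -S value raises pow2floor(s.min_sparse) and with it
the write alignment.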