From nobody Fri Nov 14 00:48:49 2025
From: Vladimir Sementsov-Ogievskiy
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, qemu-devel@nongnu.org, mreitz@redhat.com, andrey.shinkevich@virtuozzo.com, jsnow@redhat.com
Subject: [PATCH v3 3/9] block/block-copy: specialcase first copy_range request
Date: Fri, 6 Mar 2020 10:38:25 +0300
Message-Id: <20200306073831.7737-4-vsementsov@virtuozzo.com>
In-Reply-To: <20200306073831.7737-1-vsementsov@virtuozzo.com>
References: <20200306073831.7737-1-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.21.0

In block_copy_do_copy() we fall back to read+write if copy_range fails. In that case copy_size is larger than the limit defined for buffered IO; a previous commit made that choice deliberately. Still, backup copies data cluster by cluster, and most requests are limited to one cluster anyway, so the only source of such a badly-limited request is the copy-before-write operation.
A further patch will move backup to use block_copy directly; then, in cases where copy_range is not supported, the first request of every backup will be oversized. That is not good, so let's change it now. The fix is simple: limit the first copy_range request like a buffer-based request. If it succeeds, raise the copy_range limit.

Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Andrey Shinkevich
Reviewed-by: Max Reitz
---
 block/block-copy.c | 41 +++++++++++++++++++++++++++++++----------
 1 file changed, 31 insertions(+), 10 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index e2d7b3b887..ddd61c1652 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -70,16 +70,19 @@ void block_copy_state_free(BlockCopyState *s)
     g_free(s);
 }
 
+static uint32_t block_copy_max_transfer(BdrvChild *source, BdrvChild *target)
+{
+    return MIN_NON_ZERO(INT_MAX,
+                        MIN_NON_ZERO(source->bs->bl.max_transfer,
+                                     target->bs->bl.max_transfer));
+}
+
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
                                      int64_t cluster_size,
                                      BdrvRequestFlags write_flags, Error **errp)
 {
     BlockCopyState *s;
     BdrvDirtyBitmap *copy_bitmap;
-    uint32_t max_transfer =
-        MIN_NON_ZERO(INT_MAX,
-                     MIN_NON_ZERO(source->bs->bl.max_transfer,
-                                  target->bs->bl.max_transfer));
 
     copy_bitmap = bdrv_create_dirty_bitmap(source->bs, cluster_size, NULL,
                                            errp);
@@ -99,7 +102,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
         .mem = shres_create(BLOCK_COPY_MAX_MEM),
     };
 
-    if (max_transfer < cluster_size) {
+    if (block_copy_max_transfer(source, target) < cluster_size) {
         /*
          * copy_range does not respect max_transfer. We don't want to bother
          * with requests smaller than block-copy cluster size, so fallback to
@@ -114,12 +117,11 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
         s->copy_size = cluster_size;
     } else {
         /*
-         * copy_range does not respect max_transfer (it's a TODO), so we factor
-         * that in here.
+         * We enable copy-range, but keep small copy_size, until first
+         * successful copy_range (look at block_copy_do_copy).
          */
         s->use_copy_range = true;
-        s->copy_size = MIN(MAX(cluster_size, BLOCK_COPY_MAX_COPY_RANGE),
-                           QEMU_ALIGN_DOWN(max_transfer, cluster_size));
+        s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
     }
 
     QLIST_INIT(&s->inflight_reqs);
@@ -172,6 +174,22 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
             s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
             /* Fallback to read+write with allocated buffer */
         } else {
+            if (s->use_copy_range) {
+                /*
+                 * Successful copy-range. Now increase copy_size. copy_range
+                 * does not respect max_transfer (it's a TODO), so we factor
+                 * that in here.
+                 *
+                 * Note: we double-check s->use_copy_range for the case when
+                 * parallel block-copy request unsets it during previous
+                 * bdrv_co_copy_range call.
+                 */
+                s->copy_size =
+                        MIN(MAX(s->cluster_size, BLOCK_COPY_MAX_COPY_RANGE),
+                            QEMU_ALIGN_DOWN(block_copy_max_transfer(s->source,
+                                                                    s->target),
+                                            s->cluster_size));
+            }
             goto out;
         }
     }
@@ -179,7 +197,10 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
     /*
      * In case of failed copy_range request above, we may proceed with buffered
      * request larger than BLOCK_COPY_MAX_BUFFER. Still, further requests will
-     * be properly limited, so don't care too much.
+     * be properly limited, so don't care too much. Moreover the most likely
+     * case (copy_range is unsupported for the configuration, so the very first
+     * copy_range request fails) is handled by setting large copy_size only
+     * after first successful copy_range.
      */
 
     bounce_buffer = qemu_blockalign(s->source->bs, nbytes);
-- 
2.21.0