From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, qemu-devel@nongnu.org,
    mreitz@redhat.com, den@openvz.org, jsnow@redhat.com
Subject: [Qemu-devel] [PATCH 3/4] block/mirror: support unaligned write in active mirror
Date: Thu, 12 Sep 2019 18:13:37 +0300
Message-Id: <20190912151338.21225-4-vsementsov@virtuozzo.com>
In-Reply-To: <20190912151338.21225-1-vsementsov@virtuozzo.com>
References: <20190912151338.21225-1-vsementsov@virtuozzo.com>
Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up region in the dirty bitmap, which means that we may not copy some bytes and assume them copied, which actually leads to producing corrupted target. So 9adc1cb49af8d forced dirty bitmap granularity to be request_alignment for mirror-top filter, so we are not working with unaligned requests. However forcing large alignment obviously decreases performance of unaligned requests. This commit provides another solution for the problem: if unaligned padding is already dirty, we can safely ignore it, as 1. It's dirty, it will be copied by mirror_iteration anyway 2. It's dirty, so skipping it now we don't increase dirtiness of the bitmap and therefore don't damage "synchronicity" of the write-blocking mirror. If unaligned padding is not dirty, we just write it, no reason to touch dirty bitmap if we succeed (on failure we'll set the whole region ofcourse, but we loss "synchronicity" on failure anyway). Note: we need to disable dirty_bitmap, otherwise we will not be able to see in do_sync_target_write bitmap state before current operation. We may of course check dirty bitmap before the operation in bdrv_mirror_top_do_write and remember it, but we don't need active dirty bitmap for write-blocking mirror anyway. New code-path is unused until the following commit reverts 9adc1cb49af8d. Suggested-by: Denis V. Lunev Signed-off-by: Vladimir Sementsov-Ogievskiy --- block/mirror.c | 39 ++++++++++++++++++++++++++++++++++++++- 1 file changed, 38 insertions(+), 1 deletion(-) diff --git a/block/mirror.c b/block/mirror.c index d176bf5920..d192f6a96b 100644 --- a/block/mirror.c +++ b/block/mirror.c @@ -1204,6 +1204,39 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMeth= od method, QEMUIOVector *qiov, int flags) { int ret; + size_t qiov_offset =3D 0; + + if (!QEMU_IS_ALIGNED(offset, job->granularity) && + bdrv_dirty_bitmap_get(job->dirty_bitmap, offset)) { + /* + * Dirty unaligned padding + * 1. It's already dirty, no damage to "actively_synced" if we= just + * skip unaligned part. + * 2. If we copy it, we can't reset corresponding bit in + * dirty_bitmap as there may be some "dirty" bytes still not + * copied. + * So, just ignore it. + */ + qiov_offset =3D QEMU_ALIGN_UP(offset, job->granularity) - offs= et; + if (bytes <=3D qiov_offset) { + /* nothing to do after shrink */ + return; + } + offset +=3D qiov_offset; + bytes -=3D qiov_offset; + } + + if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) && + bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1)) + { + uint64_t tail =3D (offset + bytes) % job->granularity; + + if (bytes <=3D tail) { + /* nothing to do after shrink */ + return; + } + bytes -=3D tail; + } =20 bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes); =20 @@ -1211,7 +1244,8 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMetho= d method, =20 switch (method) { case MIRROR_METHOD_COPY: - ret =3D blk_co_pwritev(job->target, offset, bytes, qiov, flags); + ret =3D blk_co_pwritev_part(job->target, offset, bytes, + qiov, qiov_offset, flags); break; =20 case MIRROR_METHOD_ZERO: @@ -1640,6 +1674,9 @@ static BlockJob *mirror_start_job( if (!s->dirty_bitmap) { goto fail; } + if (s->copy_mode =3D=3D MIRROR_COPY_MODE_WRITE_BLOCKING) { + bdrv_disable_dirty_bitmap(s->dirty_bitmap); + } =20 ret =3D block_job_add_bdrv(&s->common, "source", bs, 0, BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE | --=20 2.21.0