From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Date: Tue, 3 Jul 2018 21:07:51 +0300
Message-Id: <20180703180751.243496-3-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.11.1
In-Reply-To: <20180703180751.243496-1-vsementsov@virtuozzo.com>
References: <20180703180751.243496-1-vsementsov@virtuozzo.com>
Subject: [Qemu-devel] [PATCH 2/2] block/backup: fix fleecing scheme: use serialized writes
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, famz@redhat.com, jsnow@redhat.com, jcody@redhat.com, mreitz@redhat.com, stefanha@redhat.com, den@openvz.org

The fleecing scheme works as follows: we want a kind of temporary snapshot of an active drive A. We create a temporary image B with B->backing = A, then start backup(sync=none) from A to B. From this point, B reads as a point-in-time snapshot of A, while A remains the active drive and keeps accepting guest I/O.

This scheme needs additional synchronization between reads from B and backup COW operations; otherwise the following situation is theoretically possible (assume B is qcow2 and the client is an NBD client reading from B):

1. The client starts reading: it takes the qcow2 mutex in qcow2_co_preadv and proceeds up to L2 table loading (assume a cache miss).
2. A guest write triggers a backup COW, which issues a qcow2 write that tries to take the qcow2 mutex and has to wait.
3. The L2 table is loaded; we see that the cluster is UNALLOCATED, go to "case QCOW2_CLUSTER_UNALLOCATED", and unlock the mutex before calling bdrv_co_preadv(bs->backing, ...).
4. Now that the mutex is unlocked, the backup COW continues, the guest write completes, and the cluster in the active disk A is changed.
5. Only then does bdrv_co_preadv(bs->backing, ...) actually run, and it reads the _new, updated_ data instead of the point-in-time contents.

To avoid this, make all COW writes serializing, so that they cannot intersect with reads from B.
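For context, a fleecing setup of the kind described above can be sketched as a QMP command sequence roughly like the following. This is an illustrative sketch, not part of this patch: the node names ("drive0", "tmp-node"), the temporary file name, and the NBD address are assumptions chosen for the example.

```json
{ "execute": "blockdev-add",
  "arguments": { "driver": "qcow2", "node-name": "tmp-node",
                 "file": { "driver": "file", "filename": "tmp.qcow2" },
                 "backing": "drive0" } }

{ "execute": "blockdev-backup",
  "arguments": { "job-id": "fleecing-job", "device": "drive0",
                 "target": "tmp-node", "sync": "none" } }

{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "inet",
                           "data": { "host": "127.0.0.1", "port": "10809" } } } }

{ "execute": "nbd-server-add",
  "arguments": { "device": "tmp-node", "writable": false } }
```

After this, an NBD client reading from "tmp-node" sees the point-in-time contents of "drive0", with backup COW copying old clusters into tmp.qcow2 just before guest writes overwrite them; those COW writes are exactly the ones this patch makes serializing.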
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 81895ddbe2..ab46a7d43d 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -47,6 +47,8 @@ typedef struct BackupBlockJob {
     HBitmap *copy_bitmap;
     bool use_copy_range;
     int64_t copy_range_size;
+
+    bool fleecing;
 } BackupBlockJob;
 
 static const BlockJobDriver backup_job_driver;
@@ -102,6 +104,7 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
     QEMUIOVector qiov;
     BlockBackend *blk = job->common.blk;
     int nbytes;
+    int sflags = job->fleecing ? BDRV_REQ_SERIALISING : 0;
 
     hbitmap_reset(job->copy_bitmap, start / job->cluster_size, 1);
     nbytes = MIN(job->cluster_size, job->len - start);
@@ -124,11 +127,12 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
 
     if (qemu_iovec_is_zero(&qiov)) {
         ret = blk_co_pwrite_zeroes(job->target, start,
-                                   qiov.size, BDRV_REQ_MAY_UNMAP);
+                                   qiov.size, sflags | BDRV_REQ_MAY_UNMAP);
     } else {
         ret = blk_co_pwritev(job->target, start, qiov.size, &qiov,
-                             job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
+                             job->compress ? BDRV_REQ_WRITE_COMPRESSED :
+                                             sflags);
     }
     if (ret < 0) {
         trace_backup_do_cow_write_fail(job, start, ret);
@@ -614,6 +618,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     BlockDriverInfo bdi;
     BackupBlockJob *job = NULL;
     int ret;
+    bool fleecing;
 
     assert(bs);
     assert(target);
@@ -668,6 +673,15 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
         return NULL;
     }
 
+    /* Detect image fleecing */
+    fleecing = sync_mode == MIRROR_SYNC_MODE_NONE && target->backing->bs == bs;
+    if (fleecing) {
+        if (compress) {
+            error_setg(errp, "Image fleecing doesn't support compressed mode.");
+            return NULL;
+        }
+    }
+
     len = bdrv_getlength(bs);
     if (len < 0) {
         error_setg_errno(errp, -len, "unable to get length for '%s'",
@@ -700,6 +714,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
                        sync_bitmap : NULL;
     job->compress = compress;
+    job->fleecing = fleecing;
 
     /* If there is no backing file on the target, we cannot rely on COW if our
      * backup cluster size is smaller than the target cluster size. Even for
@@ -727,7 +742,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     } else {
         job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
     }
-    job->use_copy_range = true;
+    job->use_copy_range = !fleecing;
     job->copy_range_size = MIN_NON_ZERO(blk_get_max_transfer(job->common.blk),
                                         blk_get_max_transfer(job->target));
     job->copy_range_size = MAX(job->cluster_size,
-- 
2.11.1