From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, famz@redhat.com,
 ronniesahlberg@gmail.com, jcody@redhat.com, pl@kamp.de, mreitz@redhat.com,
 stefanha@redhat.com, den@openvz.org, pbonzini@redhat.com, jsnow@redhat.com
Date: Wed, 4 Jul 2018 20:50:06 +0300
Message-Id: <20180704175006.519184-5-vsementsov@virtuozzo.com>
In-Reply-To: <20180704175006.519184-1-vsementsov@virtuozzo.com>
References: <20180704175006.519184-1-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.11.1
Subject: [Qemu-devel] [PATCH v2 4/4] block/backup: fix fleecing scheme: use serialized writes

The fleecing scheme works as follows: we want a kind of temporary snapshot
of the active drive A. We create a temporary image B with B->backing = A,
then start backup(sync=none) from A to B. From this point, B reads as a
point-in-time snapshot of A (A continues to be the active drive, accepting
guest I/O).

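For reference, such a fleecing setup is typically built over QMP roughly as
follows. This is a minimal sketch only: the node names, the fleece.qcow2
file (assumed to have been created beforehand) and the NBD address are
illustrative, not something this patch introduces or requires.

    { "execute": "blockdev-add",
      "arguments": { "driver": "qcow2", "node-name": "tmp-fleece",
                     "file": { "driver": "file", "filename": "fleece.qcow2" },
                     "backing": "drive-a" } }
    { "execute": "blockdev-backup",
      "arguments": { "job-id": "fleece", "device": "drive-a",
                     "target": "tmp-fleece", "sync": "none" } }
    { "execute": "nbd-server-start",
      "arguments": { "addr": { "type": "inet",
                               "data": { "host": "127.0.0.1", "port": "10809" } } } }
    { "execute": "nbd-server-add", "arguments": { "device": "tmp-fleece" } }

Reads of tmp-fleece over NBD then observe the point-in-time state of A; it
is exactly this read path that races with the backup COW writes described
below.
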
This scheme needs some additional synchronization between reads from B and
backup COW operations; otherwise the following situation is theoretically
possible (assume B is qcow2 and the client is an NBD client reading from B):

1. The client starts reading, takes the qcow2 mutex in qcow2_co_preadv, and
   gets as far as L2 table loading (assume a cache miss).
2. A guest write arrives => backup COW => qcow2 write => tries to take the
   qcow2 mutex => waits.
3. The L2 table is loaded, we see that the cluster is UNALLOCATED, go to
   "case QCOW2_CLUSTER_UNALLOCATED" and unlock the mutex before
   bdrv_co_preadv(bs->backing, ...).
4. With the mutex unlocked, the backup COW continues, the guest write
   finally finishes and the cluster in the active disk A is changed.
5. Only now is bdrv_co_preadv(bs->backing, ...) actually issued, and it
   reads the _new, updated_ data.

To avoid this, make backup writes serializing, so that they cannot
intersect with in-flight reads from B.

Note: we expand the range of handled cases from (sync=none and
B->backing = A) to just (A is in the backing chain of B), to finally allow
safe reading from B during backup in all cases where A is in the backing
chain of B, i.e. B formally looks like a point-in-time snapshot of A.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index f3e4e814b6..319fc922e8 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -47,6 +47,8 @@ typedef struct BackupBlockJob {
     HBitmap *copy_bitmap;
     bool use_copy_range;
     int64_t copy_range_size;
+
+    bool serialize_target_writes;
 } BackupBlockJob;
 
 static const BlockJobDriver backup_job_driver;
@@ -102,6 +104,8 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
     QEMUIOVector qiov;
     BlockBackend *blk = job->common.blk;
     int nbytes;
+    int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
+    int write_flags = job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0;
 
     hbitmap_reset(job->copy_bitmap, start / job->cluster_size, 1);
     nbytes = MIN(job->cluster_size, job->len - start);
@@ -112,8 +116,7 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
     iov.iov_len = nbytes;
     qemu_iovec_init_external(&qiov, &iov, 1);
 
-    ret = blk_co_preadv(blk, start, qiov.size, &qiov,
-                        is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0);
+    ret = blk_co_preadv(blk, start, qiov.size, &qiov, read_flags);
     if (ret < 0) {
         trace_backup_do_cow_read_fail(job, start, ret);
         if (error_is_read) {
@@ -124,11 +127,11 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
 
     if (qemu_iovec_is_zero(&qiov)) {
         ret = blk_co_pwrite_zeroes(job->target, start,
-                                   qiov.size, BDRV_REQ_MAY_UNMAP);
+                                   qiov.size, write_flags | BDRV_REQ_MAY_UNMAP);
     } else {
         ret = blk_co_pwritev(job->target, start,
-                             qiov.size, &qiov,
-                             job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
+                             qiov.size, &qiov, write_flags |
+                             (job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0));
     }
     if (ret < 0) {
         trace_backup_do_cow_write_fail(job, start, ret);
@@ -156,6 +159,8 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
     int nr_clusters;
     BlockBackend *blk = job->common.blk;
     int nbytes;
+    int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
+    int write_flags = job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0;
 
     assert(QEMU_IS_ALIGNED(job->copy_range_size, job->cluster_size));
     nbytes = MIN(job->copy_range_size, end - start);
@@ -163,7 +168,7 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
     hbitmap_reset(job->copy_bitmap, start / job->cluster_size,
                   nr_clusters);
     ret = blk_co_copy_range(blk, start, job->target, start, nbytes,
-                            is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0, 0);
+                            read_flags, write_flags);
     if (ret < 0) {
         trace_backup_do_cow_copy_range_fail(job, start, ret);
         hbitmap_set(job->copy_bitmap, start / job->cluster_size,
@@ -701,6 +706,9 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                        sync_bitmap : NULL;
     job->compress = compress;
 
+    /* Detect image-fleecing (and similar) schemes */
+    job->serialize_target_writes = bdrv_chain_contains(target, bs);
+
     /* If there is no backing file on the target, we cannot rely on COW if our
      * backup cluster size is smaller than the target cluster size. Even for
      * targets with a backing file, try to avoid COW if possible. */
-- 
2.11.1