From: Alberto Garcia
To: qemu-devel@nongnu.org
Subject: [PATCH v5 05/31] qcow2: Process QCOW2_CLUSTER_ZERO_ALLOC clusters in handle_copied()
Date: Tue, 5 May 2020 19:38:05 +0200
Message-Id: <84fbd11fbfc4a2ee65a4cdca36976d4b18f10ef6.1588699789.git.berto@igalia.com>
Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Alberto Garcia,
    qemu-block@nongnu.org, Max Reitz

When writing to a qcow2 file there are two functions that take a
virtual offset and return a host offset, possibly allocating new
clusters if necessary:

   - handle_copied() looks for normal data clusters that are already
     allocated and have a reference count of 1. In those clusters we
     can simply write the data and there is no need to perform any
     copy-on-write.

   - handle_alloc() looks for clusters that do need copy-on-write,
     either because they haven't been allocated yet, because their
     reference count is != 1, or because they are ZERO_ALLOC clusters.

The ZERO_ALLOC case is a bit special because those clusters are
already allocated and could perfectly well be dealt with in
handle_copied() (as long as copy-on-write is performed when required).
In fact, there is extra code specifically for them in handle_alloc()
that tries to reuse the existing allocation if possible and frees them
otherwise.

This patch changes the handling of ZERO_ALLOC clusters so the
semantics of these two functions are now like this:

   - handle_copied() looks for clusters that are already allocated and
     which we can overwrite (NORMAL and ZERO_ALLOC clusters with a
     reference count of 1).

   - handle_alloc() looks for clusters for which we need a new
     allocation (all other cases).

One important difference after this change is that clusters found in
handle_copied() may now require copy-on-write, but this will be
necessary anyway once we add support for subclusters.
Signed-off-by: Alberto Garcia
Reviewed-by: Eric Blake
---
 block/qcow2-cluster.c | 256 +++++++++++++++++++++++-------------------
 1 file changed, 141 insertions(+), 115 deletions(-)

diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 80f9787461..fce0be7a08 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -1039,13 +1039,18 @@ void qcow2_alloc_cluster_abort(BlockDriverState *bs, QCowL2Meta *m)
 
 /*
  * For a given write request, create a new QCowL2Meta structure, add
- * it to @m and the BDRVQcow2State.cluster_allocs list.
+ * it to @m and the BDRVQcow2State.cluster_allocs list. If the write
+ * request does not need copy-on-write or changes to the L2 metadata
+ * then this function does nothing.
  *
  * @host_cluster_offset points to the beginning of the first cluster.
  *
  * @guest_offset and @bytes indicate the offset and length of the
  * request.
  *
+ * @l2_slice contains the L2 entries of all clusters involved in this
+ * write request.
+ *
  * If @keep_old is true it means that the clusters were already
  * allocated and will be overwritten. If false then the clusters are
  * new and we have to decrease the reference count of the old ones.
@@ -1053,15 +1058,53 @@ void qcow2_alloc_cluster_abort(BlockDriverState *bs, QCowL2Meta *m)
 static void calculate_l2_meta(BlockDriverState *bs,
                               uint64_t host_cluster_offset,
                               uint64_t guest_offset, unsigned bytes,
-                              QCowL2Meta **m, bool keep_old)
+                              uint64_t *l2_slice, QCowL2Meta **m, bool keep_old)
 {
     BDRVQcow2State *s = bs->opaque;
-    unsigned cow_start_from = 0;
+    int l2_index = offset_to_l2_slice_index(s, guest_offset);
+    uint64_t l2_entry;
+    unsigned cow_start_from, cow_end_to;
     unsigned cow_start_to = offset_into_cluster(s, guest_offset);
     unsigned cow_end_from = cow_start_to + bytes;
-    unsigned cow_end_to = ROUND_UP(cow_end_from, s->cluster_size);
     unsigned nb_clusters = size_to_clusters(s, cow_end_from);
     QCowL2Meta *old_m = *m;
+    QCow2ClusterType type;
+
+    assert(nb_clusters <= s->l2_slice_size - l2_index);
+
+    /* Return if there's no COW (all clusters are normal and we keep them) */
+    if (keep_old) {
+        int i;
+        for (i = 0; i < nb_clusters; i++) {
+            l2_entry = be64_to_cpu(l2_slice[l2_index + i]);
+            if (qcow2_get_cluster_type(bs, l2_entry) != QCOW2_CLUSTER_NORMAL) {
+                break;
+            }
+        }
+        if (i == nb_clusters) {
+            return;
+        }
+    }
+
+    /* Get the L2 entry of the first cluster */
+    l2_entry = be64_to_cpu(l2_slice[l2_index]);
+    type = qcow2_get_cluster_type(bs, l2_entry);
+
+    if (type == QCOW2_CLUSTER_NORMAL && keep_old) {
+        cow_start_from = cow_start_to;
+    } else {
+        cow_start_from = 0;
+    }
+
+    /* Get the L2 entry of the last cluster */
+    l2_entry = be64_to_cpu(l2_slice[l2_index + nb_clusters - 1]);
+    type = qcow2_get_cluster_type(bs, l2_entry);
+
+    if (type == QCOW2_CLUSTER_NORMAL && keep_old) {
+        cow_end_to = cow_end_from;
+    } else {
+        cow_end_to = ROUND_UP(cow_end_from, s->cluster_size);
+    }
 
     *m = g_malloc0(sizeof(**m));
     **m = (QCowL2Meta) {
@@ -1087,18 +1130,22 @@ static void calculate_l2_meta(BlockDriverState *bs,
         QLIST_INSERT_HEAD(&s->cluster_allocs, *m, next_in_flight);
     }
 }
 
-/* Returns true if writing to a cluster requires COW */
-static bool cluster_needs_cow(BlockDriverState *bs, uint64_t l2_entry)
+/*
+ * Returns true if writing to the cluster pointed to by @l2_entry
+ * requires a new allocation (that is, if the cluster is unallocated
+ * or has refcount > 1 and therefore cannot be written in-place).
+ */
+static bool cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
 {
     switch (qcow2_get_cluster_type(bs, l2_entry)) {
     case QCOW2_CLUSTER_NORMAL:
+    case QCOW2_CLUSTER_ZERO_ALLOC:
         if (l2_entry & QCOW_OFLAG_COPIED) {
             return false;
         }
     case QCOW2_CLUSTER_UNALLOCATED:
     case QCOW2_CLUSTER_COMPRESSED:
     case QCOW2_CLUSTER_ZERO_PLAIN:
-    case QCOW2_CLUSTER_ZERO_ALLOC:
         return true;
     default:
         abort();
@@ -1106,20 +1153,38 @@ static bool cluster_needs_cow(BlockDriverState *bs, uint64_t l2_entry)
 }
 
 /*
- * Returns the number of contiguous clusters that can be used for an allocating
- * write, but require COW to be performed (this includes yet unallocated space,
- * which must copy from the backing file)
+ * Returns the number of contiguous clusters that can be written to
+ * using one single write request, starting from @l2_index.
+ * At most @nb_clusters are checked.
+ *
+ * If @new_alloc is true this counts clusters that are either
+ * unallocated, or allocated but with refcount > 1 (so they need to be
+ * newly allocated and COWed).
+ *
+ * If @new_alloc is false this counts clusters that are already
+ * allocated and can be overwritten in-place (this includes clusters
+ * of type QCOW2_CLUSTER_ZERO_ALLOC).
  */
-static int count_cow_clusters(BlockDriverState *bs, int nb_clusters,
-                              uint64_t *l2_slice, int l2_index)
+static int count_single_write_clusters(BlockDriverState *bs, int nb_clusters,
+                                       uint64_t *l2_slice, int l2_index,
+                                       bool new_alloc)
 {
+    BDRVQcow2State *s = bs->opaque;
+    uint64_t l2_entry = be64_to_cpu(l2_slice[l2_index]);
+    uint64_t expected_offset = l2_entry & L2E_OFFSET_MASK;
     int i;
 
     for (i = 0; i < nb_clusters; i++) {
-        uint64_t l2_entry = be64_to_cpu(l2_slice[l2_index + i]);
-        if (!cluster_needs_cow(bs, l2_entry)) {
+        l2_entry = be64_to_cpu(l2_slice[l2_index + i]);
+        if (cluster_needs_new_alloc(bs, l2_entry) != new_alloc) {
             break;
         }
+        if (!new_alloc) {
+            if (expected_offset != (l2_entry & L2E_OFFSET_MASK)) {
+                break;
+            }
+            expected_offset += s->cluster_size;
+        }
     }
 
     assert(i <= nb_clusters);
@@ -1190,10 +1255,10 @@ static int handle_dependencies(BlockDriverState *bs, uint64_t guest_offset,
 }
 
 /*
- * Checks how many already allocated clusters that don't require a copy on
- * write there are at the given guest_offset (up to *bytes). If *host_offset is
- * not INV_OFFSET, only physically contiguous clusters beginning at this host
- * offset are counted.
+ * Checks how many already allocated clusters that don't require a new
+ * allocation there are at the given guest_offset (up to *bytes).
+ * If *host_offset is not INV_OFFSET, only physically contiguous clusters
+ * beginning at this host offset are counted.
  *
  * Note that guest_offset may not be cluster aligned. In this case, the
  * returned *host_offset points to exact byte referenced by guest_offset and
@@ -1202,12 +1267,12 @@ static int handle_dependencies(BlockDriverState *bs, uint64_t guest_offset,
  * Returns:
  *   0:     if no allocated clusters are available at the given offset.
  *          *bytes is normally unchanged. It is set to 0 if the cluster
- *          is allocated and doesn't need COW, but doesn't have the right
- *          physical offset.
+ *          is allocated and can be overwritten in-place but doesn't have
+ *          the right physical offset.
  *
- *   1:     if allocated clusters that don't require a COW are available at
- *          the requested offset. *bytes may have decreased and describes
- *          the length of the area that can be written to.
+ *   1:     if allocated clusters that can be overwritten in place are
+ *          available at the requested offset. *bytes may have decreased
+ *          and describes the length of the area that can be written to.
  *
  *  -errno: in error cases
  */
@@ -1216,7 +1281,7 @@ static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
 {
     BDRVQcow2State *s = bs->opaque;
     int l2_index;
-    uint64_t cluster_offset;
+    uint64_t l2_entry, cluster_offset;
     uint64_t *l2_slice;
     uint64_t nb_clusters;
     unsigned int keep_clusters;
@@ -1237,7 +1302,8 @@ static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
 
     l2_index = offset_to_l2_slice_index(s, guest_offset);
     nb_clusters = MIN(nb_clusters, s->l2_slice_size - l2_index);
-    assert(nb_clusters <= INT_MAX);
+    /* Limit total byte count to BDRV_REQUEST_MAX_BYTES */
+    nb_clusters = MIN(nb_clusters, BDRV_REQUEST_MAX_BYTES >> s->cluster_bits);
 
     /* Find L2 entry for the first involved cluster */
     ret = get_cluster_table(bs, guest_offset, &l2_slice, &l2_index);
@@ -1245,41 +1311,39 @@ static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
         return ret;
     }
 
-    cluster_offset = be64_to_cpu(l2_slice[l2_index]);
+    l2_entry = be64_to_cpu(l2_slice[l2_index]);
+    cluster_offset = l2_entry & L2E_OFFSET_MASK;
+
+    if (!cluster_needs_new_alloc(bs, l2_entry)) {
+        if (offset_into_cluster(s, cluster_offset)) {
+            qcow2_signal_corruption(bs, true, -1, -1, "%s cluster offset "
+                                    "%#" PRIx64 " unaligned (guest offset: %#"
+                                    PRIx64 ")", l2_entry & QCOW_OFLAG_ZERO ?
+                                    "Preallocated zero" : "Data",
+                                    cluster_offset, guest_offset);
+            ret = -EIO;
+            goto out;
+        }
 
-    /* Check how many clusters are already allocated and don't need COW */
-    if (qcow2_get_cluster_type(bs, cluster_offset) == QCOW2_CLUSTER_NORMAL
-        && (cluster_offset & QCOW_OFLAG_COPIED))
-    {
         /* If a specific host_offset is required, check it */
-        bool offset_matches =
-            (cluster_offset & L2E_OFFSET_MASK) == *host_offset;
-
-        if (offset_into_cluster(s, cluster_offset & L2E_OFFSET_MASK)) {
-            qcow2_signal_corruption(bs, true, -1, -1, "Data cluster offset "
-                                    "%#llx unaligned (guest offset: %#" PRIx64
-                                    ")", cluster_offset & L2E_OFFSET_MASK,
-                                    guest_offset);
-            ret = -EIO;
-            goto out;
-        }
-
-        if (*host_offset != INV_OFFSET && !offset_matches) {
+        if (*host_offset != INV_OFFSET && cluster_offset != *host_offset) {
             *bytes = 0;
             ret = 0;
             goto out;
         }
 
         /* We keep all QCOW_OFLAG_COPIED clusters */
-        keep_clusters =
-            count_contiguous_clusters(bs, nb_clusters, s->cluster_size,
-                                      &l2_slice[l2_index],
-                                      QCOW_OFLAG_COPIED | QCOW_OFLAG_ZERO);
+        keep_clusters = count_single_write_clusters(bs, nb_clusters, l2_slice,
                                                    l2_index, false);
         assert(keep_clusters <= nb_clusters);
 
         *bytes = MIN(*bytes, keep_clusters * s->cluster_size
                              - offset_into_cluster(s, guest_offset));
+        assert(*bytes != 0);
+
+        calculate_l2_meta(bs, cluster_offset, guest_offset,
+                          *bytes, l2_slice, m, true);
 
         ret = 1;
     } else {
@@ -1293,8 +1357,7 @@ out:
     /* Only return a host offset if we actually made progress. Otherwise we
      * would make requirements for handle_alloc() that it can't fulfill */
     if (ret > 0) {
-        *host_offset = (cluster_offset & L2E_OFFSET_MASK)
-                     + offset_into_cluster(s, guest_offset);
+        *host_offset = cluster_offset + offset_into_cluster(s, guest_offset);
     }
 
     return ret;
@@ -1355,9 +1418,10 @@ static int do_alloc_cluster_offset(BlockDriverState *bs, uint64_t guest_offset,
 }
 
 /*
- * Allocates new clusters for an area that either is yet unallocated or needs a
- * copy on write. If *host_offset is not INV_OFFSET, clusters are only
- * allocated if the new allocation can match the specified host offset.
+ * Allocates new clusters for an area that is either still unallocated or
+ * cannot be overwritten in-place. If *host_offset is not INV_OFFSET,
+ * clusters are only allocated if the new allocation can match the specified
+ * host offset.
  *
  * Note that guest_offset may not be cluster aligned. In this case, the
  * returned *host_offset points to exact byte referenced by guest_offset and
@@ -1380,12 +1444,10 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
     BDRVQcow2State *s = bs->opaque;
     int l2_index;
     uint64_t *l2_slice;
-    uint64_t entry;
     uint64_t nb_clusters;
     int ret;
-    bool keep_old_clusters = false;
 
-    uint64_t alloc_cluster_offset = INV_OFFSET;
+    uint64_t alloc_cluster_offset;
 
     trace_qcow2_handle_alloc(qemu_coroutine_self(), guest_offset, *host_offset,
                              *bytes);
@@ -1400,10 +1462,8 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
 
     l2_index = offset_to_l2_slice_index(s, guest_offset);
     nb_clusters = MIN(nb_clusters, s->l2_slice_size - l2_index);
-    assert(nb_clusters <= INT_MAX);
-
-    /* Limit total allocation byte count to INT_MAX */
-    nb_clusters = MIN(nb_clusters, INT_MAX >> s->cluster_bits);
+    /* Limit total allocation byte count to BDRV_REQUEST_MAX_BYTES */
+    nb_clusters = MIN(nb_clusters, BDRV_REQUEST_MAX_BYTES >> s->cluster_bits);
 
     /* Find L2 entry for the first involved cluster */
     ret = get_cluster_table(bs, guest_offset, &l2_slice, &l2_index);
@@ -1411,67 +1471,32 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
         return ret;
     }
 
-    entry = be64_to_cpu(l2_slice[l2_index]);
-    nb_clusters = count_cow_clusters(bs, nb_clusters, l2_slice, l2_index);
+    nb_clusters = count_single_write_clusters(bs, nb_clusters,
+                                              l2_slice, l2_index, true);
 
     /* This function is only called when there were no non-COW clusters, so if
      * we can't find any unallocated or COW clusters either, something is
      * wrong with our code. */
     assert(nb_clusters > 0);
 
-    if (qcow2_get_cluster_type(bs, entry) == QCOW2_CLUSTER_ZERO_ALLOC &&
-        (entry & QCOW_OFLAG_COPIED) &&
-        (*host_offset == INV_OFFSET ||
-         start_of_cluster(s, *host_offset) == (entry & L2E_OFFSET_MASK)))
-    {
-        int preallocated_nb_clusters;
-
-        if (offset_into_cluster(s, entry & L2E_OFFSET_MASK)) {
-            qcow2_signal_corruption(bs, true, -1, -1, "Preallocated zero "
-                                    "cluster offset %#llx unaligned (guest "
-                                    "offset: %#" PRIx64 ")",
-                                    entry & L2E_OFFSET_MASK, guest_offset);
-            ret = -EIO;
-            goto fail;
-        }
-
-        /* Try to reuse preallocated zero clusters; contiguous normal clusters
-         * would be fine, too, but count_cow_clusters() above has limited
-         * nb_clusters already to a range of COW clusters */
-        preallocated_nb_clusters =
-            count_contiguous_clusters(bs, nb_clusters, s->cluster_size,
-                                      &l2_slice[l2_index], QCOW_OFLAG_COPIED);
-        assert(preallocated_nb_clusters > 0);
-
-        nb_clusters = preallocated_nb_clusters;
-        alloc_cluster_offset = entry & L2E_OFFSET_MASK;
-
-        /* We want to reuse these clusters, so qcow2_alloc_cluster_link_l2()
-         * should not free them. */
-        keep_old_clusters = true;
+    /* Allocate at a given offset in the image file */
+    alloc_cluster_offset = *host_offset == INV_OFFSET ? INV_OFFSET :
+        start_of_cluster(s, *host_offset);
+    ret = do_alloc_cluster_offset(bs, guest_offset, &alloc_cluster_offset,
+                                  &nb_clusters);
+    if (ret < 0) {
+        goto out;
     }
 
-    qcow2_cache_put(s->l2_table_cache, (void **) &l2_slice);
-
-    if (alloc_cluster_offset == INV_OFFSET) {
-        /* Allocate, if necessary at a given offset in the image file */
-        alloc_cluster_offset = *host_offset == INV_OFFSET ? INV_OFFSET :
-            start_of_cluster(s, *host_offset);
-        ret = do_alloc_cluster_offset(bs, guest_offset, &alloc_cluster_offset,
-                                      &nb_clusters);
-        if (ret < 0) {
-            goto fail;
-        }
-
-        /* Can't extend contiguous allocation */
-        if (nb_clusters == 0) {
-            *bytes = 0;
-            return 0;
-        }
-
-        assert(alloc_cluster_offset != INV_OFFSET);
+    /* Can't extend contiguous allocation */
+    if (nb_clusters == 0) {
+        *bytes = 0;
+        ret = 0;
+        goto out;
     }
 
+    assert(alloc_cluster_offset != INV_OFFSET);
+
     /*
      * Save info needed for meta data update.
      *
@@ -1494,13 +1519,14 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
     *bytes = MIN(*bytes, nb_bytes - offset_into_cluster(s, guest_offset));
     assert(*bytes != 0);
 
-    calculate_l2_meta(bs, alloc_cluster_offset, guest_offset, *bytes,
-                      m, keep_old_clusters);
+    calculate_l2_meta(bs, alloc_cluster_offset, guest_offset, *bytes, l2_slice,
+                      m, false);
 
-    return 1;
+    ret = 1;
 
-fail:
-    if (*m && (*m)->nb_clusters > 0) {
+out:
+    qcow2_cache_put(s->l2_table_cache, (void **) &l2_slice);
+    if (ret < 0 && *m && (*m)->nb_clusters > 0) {
         QLIST_REMOVE(*m, next_in_flight);
     }
     return ret;
-- 
2.20.1