From: Max Reitz
To: qemu-block@nongnu.org
Cc: Kevin Wolf, qemu-devel@nongnu.org, Max Reitz
Date: Mon, 13 Mar 2017 22:41:14 +0100
Message-Id: <20170313214117.27350-4-mreitz@redhat.com>
In-Reply-To: <20170313214001.26339-1-mreitz@redhat.com>
References: <20170313214001.26339-1-mreitz@redhat.com>
Subject: [Qemu-devel] [PATCH for-2.10 13/16] block/qcow2: qcow2_calc_size_usage() for truncate

This patch extends qcow2_calc_size_usage() so it can calculate the
additional space needed for preallocating image growth.
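
For reference, the closing formula in the refcount-block comment below
follows from the two equations above it by substitution (writing
sizeof(u64) as 8):

    y1 = (y2 + a) / c
    y2 = y1 * rces + y1 * rces * 8 / c + m

    =>  c * y1 = y1 * rces + y1 * rces * 8 / c + m + a
    =>  y1 * (c - rces - rces * 8 / c) = a + m
    =>  y1 = (a + m) / (c - rces - rces * 8 / c)

The code's numerator additionally includes one cluster_size, presumably
as headroom against the truncating integer divisions.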
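
To make the creation path (current_size == 0) concrete, the following
minimal standalone sketch replays the same arithmetic outside QEMU. The
parameters (a 10 GiB image, 64 KiB clusters, refcount_order = 4) are
illustrative assumptions, and DIV_ROUND_UP and align_offset are local
stand-ins for the QEMU helpers, not the originals:

/* Minimal sketch: creation-path arithmetic of qcow2_calc_size_usage(),
 * replayed outside QEMU. All parameters below are assumptions.
 * Build: cc -o usage usage.c && ./usage */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Stand-in for QEMU's align_offset(); n must be a power of two */
static uint64_t align_offset(uint64_t offset, uint64_t n)
{
    return (offset + n - 1) & ~(n - 1);
}

int main(void)
{
    const int cluster_bits = 16;             /* 64 KiB clusters (default) */
    const int refcount_order = 4;            /* 16-bit refcount entries */
    const uint64_t total_size = 10ULL << 30; /* 10 GiB guest disk */

    uint64_t cluster_size = 1ULL << cluster_bits;
    uint64_t aligned_total_size = align_offset(total_size, cluster_size);
    uint64_t rces = (1ULL << refcount_order) / 8; /* refcount entry bytes */
    uint64_t refblock_size = 1ULL << (cluster_bits - (refcount_order - 3));
    uint64_t meta_size = 0, nl1e, nl2e, nrefblocke, nreftablee;

    meta_size += cluster_size;                     /* header: 1 cluster */

    nl2e = aligned_total_size / cluster_size;      /* total size of L2 tables */
    nl2e = align_offset(nl2e, cluster_size / sizeof(uint64_t));
    meta_size += nl2e * sizeof(uint64_t);

    nl1e = nl2e * sizeof(uint64_t) / cluster_size; /* total size of L1 tables */
    nl1e = align_offset(nl1e, cluster_size / sizeof(uint64_t));
    meta_size += nl1e * sizeof(uint64_t);

    /* refcount blocks, per the y1 formula derived above */
    nrefblocke = (aligned_total_size + meta_size + cluster_size)
               / (cluster_size - rces - rces * sizeof(uint64_t)
                                      / cluster_size);
    meta_size += DIV_ROUND_UP(nrefblocke, refblock_size) * cluster_size;

    nreftablee = nrefblocke / refblock_size;       /* refcount table */
    nreftablee = align_offset(nreftablee, cluster_size / sizeof(uint64_t));
    meta_size += nreftablee * sizeof(uint64_t);

    printf("metadata: %" PRIu64 " bytes, preallocated file size: %" PRIu64 "\n",
           meta_size, aligned_total_size + meta_size);
    return 0;
}

For these parameters the metadata overhead works out to roughly 1.8 MiB
on top of the 10 GiB of guest data, so the preallocated file size ends
up slightly over 10 GiB.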
Signed-off-by: Max Reitz
---
 block/qcow2.c | 137 +++++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 98 insertions(+), 39 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index 21b2b3cd53..80fb815b15 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -2101,7 +2101,15 @@ done:
     return ret;
 }
 
-static uint64_t qcow2_calc_size_usage(uint64_t new_size,
+/**
+ * Returns the number of bytes that must be allocated in the underlying file
+ * to accommodate image growth from @current_size to @new_size.
+ *
+ * @current_size must be 0 when creating a new image. In that case, @bs is
+ * ignored; otherwise it must be valid.
+ */
+static uint64_t qcow2_calc_size_usage(BlockDriverState *bs,
+                                      uint64_t current_size, uint64_t new_size,
                                       int cluster_bits, int refcount_order)
 {
     size_t cluster_size = 1u << cluster_bits;
@@ -2122,47 +2130,97 @@ static uint64_t qcow2_calc_size_usage(uint64_t new_size,
     refblock_bits = cluster_bits - (refcount_order - 3);
     refblock_size = 1 << refblock_bits;
 
-    /* header: 1 cluster */
-    meta_size += cluster_size;
-
-    /* total size of L2 tables */
-    nl2e = aligned_total_size / cluster_size;
-    nl2e = align_offset(nl2e, cluster_size / sizeof(uint64_t));
-    meta_size += nl2e * sizeof(uint64_t);
+    if (!current_size) {
+        /* header: 1 cluster */
+        meta_size += cluster_size;
+
+        /* total size of L2 tables */
+        nl2e = aligned_total_size / cluster_size;
+        nl2e = align_offset(nl2e, cluster_size / sizeof(uint64_t));
+        meta_size += nl2e * sizeof(uint64_t);
+
+        /* total size of L1 tables */
+        nl1e = nl2e * sizeof(uint64_t) / cluster_size;
+        nl1e = align_offset(nl1e, cluster_size / sizeof(uint64_t));
+        meta_size += nl1e * sizeof(uint64_t);
+
+        /* total size of refcount blocks
+         *
+         * note: every host cluster is reference-counted, including metadata
+         * (even refcount blocks are recursively included).
+         * Let:
+         *   a = total_size (this is the guest disk size)
+         *   m = meta size not including refcount blocks and refcount tables
+         *   c = cluster size
+         *   y1 = number of refcount blocks entries
+         *   y2 = meta size including everything
+         *   rces = refcount entry size in bytes
+         * then,
+         *   y1 = (y2 + a)/c
+         *   y2 = y1 * rces + y1 * rces * sizeof(u64) / c + m
+         * we can get y1:
+         *   y1 = (a + m) / (c - rces - rces * sizeof(u64) / c)
+         */
+        nrefblocke = (aligned_total_size + meta_size + cluster_size)
+                   / (cluster_size - rces - rces * sizeof(uint64_t)
+                                          / cluster_size);
+        meta_size += DIV_ROUND_UP(nrefblocke, refblock_size) * cluster_size;
 
-    /* total size of L1 tables */
-    nl1e = nl2e * sizeof(uint64_t) / cluster_size;
-    nl1e = align_offset(nl1e, cluster_size / sizeof(uint64_t));
-    meta_size += nl1e * sizeof(uint64_t);
+        /* total size of refcount tables */
+        nreftablee = nrefblocke / refblock_size;
+        nreftablee = align_offset(nreftablee, cluster_size / sizeof(uint64_t));
+        meta_size += nreftablee * sizeof(uint64_t);
 
-    /* total size of refcount blocks
-     *
-     * note: every host cluster is reference-counted, including metadata
-     * (even refcount blocks are recursively included).
-     * Let:
-     *   a = total_size (this is the guest disk size)
-     *   m = meta size not including refcount blocks and refcount tables
-     *   c = cluster size
-     *   y1 = number of refcount blocks entries
-     *   y2 = meta size including everything
-     *   rces = refcount entry size in bytes
-     * then,
-     *   y1 = (y2 + a)/c
-     *   y2 = y1 * rces + y1 * rces * sizeof(u64) / c + m
-     * we can get y1:
-     *   y1 = (a + m) / (c - rces - rces * sizeof(u64) / c)
-     */
-    nrefblocke = (aligned_total_size + meta_size + cluster_size)
-               / (cluster_size - rces - rces * sizeof(uint64_t)
-                                      / cluster_size);
-    meta_size += DIV_ROUND_UP(nrefblocke, refblock_size) * cluster_size;
+        return aligned_total_size + meta_size;
+    } else {
+        BDRVQcow2State *s = bs->opaque;
+        uint64_t aligned_cur_size = align_offset(current_size, cluster_size);
+        uint64_t creftable_length;
+        uint64_t i;
+
+        /* new total size of L2 tables */
+        nl2e = aligned_total_size / cluster_size;
+        nl2e = align_offset(nl2e, cluster_size / sizeof(uint64_t));
+        meta_size += nl2e * sizeof(uint64_t);
+
+        /* Subtract L2 tables which are already present */
+        for (i = 0; i < s->l1_size; i++) {
+            if (s->l1_table[i] & L1E_OFFSET_MASK) {
+                meta_size -= cluster_size;
+            }
+        }
 
-    /* total size of refcount tables */
-    nreftablee = nrefblocke / refblock_size;
-    nreftablee = align_offset(nreftablee, cluster_size / sizeof(uint64_t));
-    meta_size += nreftablee * sizeof(uint64_t);
+        /* Do not add L1 table size because the only caller of this path
+         * (qcow2_truncate) has increased its size already. */
 
-    return aligned_total_size + meta_size;
+        /* Calculate size of the additional refblocks (this assumes that all of
+         * the existing image is covered by refblocks, which is extremely
+         * likely); this may result in overallocation because parts of the newly
+         * added space may be covered by existing refblocks, but that is fine.
+         *
+         * This only considers the newly added space. Since we cannot update the
+         * reftable in place, we will have to be able to store both the old and
+         * the new one at the same time, though. Therefore, we need to add the
+         * size of the old reftable here.
+         */
+        creftable_length = ROUND_UP(s->refcount_table_size * sizeof(uint64_t),
+                                    cluster_size);
+        nrefblocke = ((aligned_total_size - aligned_cur_size) + meta_size +
+                      creftable_length + cluster_size)
+                   / (cluster_size - rces -
+                      rces * sizeof(uint64_t) / cluster_size);
+        meta_size += DIV_ROUND_UP(nrefblocke, refblock_size) * cluster_size;
+
+        /* total size of the new refcount table (again, may be too much because
+         * it assumes that the new area is not covered by any refcount blocks
+         * yet) */
+        nreftablee = s->max_refcount_table_index + 1 +
+                     nrefblocke / refblock_size;
+        nreftablee = align_offset(nreftablee, cluster_size / sizeof(uint64_t));
+        meta_size += nreftablee * sizeof(uint64_t);
+
+        return (aligned_total_size - aligned_cur_size) + meta_size;
+    }
 }
 
 static int qcow2_create2(const char *filename, int64_t total_size,
@@ -2203,7 +2261,8 @@ static int qcow2_create2(const char *filename, int64_t total_size,
     int ret;
 
     if (prealloc == PREALLOC_MODE_FULL || prealloc == PREALLOC_MODE_FALLOC) {
-        uint64_t file_size = qcow2_calc_size_usage(total_size, cluster_bits,
+        uint64_t file_size = qcow2_calc_size_usage(NULL, 0, total_size,
+                                                   cluster_bits,
                                                    refcount_order);
 
         qemu_opt_set_number(opts, BLOCK_OPT_SIZE, file_size, &error_abort);
-- 
2.12.0