From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Johannes Thumshirn,
 Naohiro Aota, David Sterba, Sasha Levin
Subject: [PATCH 5.19 1085/1157] btrfs: zoned: revive max_zone_append_bytes
Date: Mon, 15 Aug 2022 20:07:21 +0200
Message-Id: <20220815180523.577596003@linuxfoundation.org>
In-Reply-To: <20220815180439.416659447@linuxfoundation.org>
References: <20220815180439.416659447@linuxfoundation.org>

From: Naohiro Aota

[ Upstream commit c2ae7b772ef4e86c5ddf3fd47bf59045ae96a414 ]

This patch is basically a revert of commit 5a80d1c6a270 ("btrfs: zoned:
remove max_zone_append_size logic"), but without the unnecessary ASSERT
and check. The max_zone_append_size will be used as a hint to estimate
the number of extents needed to cover a delalloc/writeback region in
later commits.

The size of a ZONE APPEND bio is also limited by queue_max_segments(),
so this commit takes that limit into account when calculating
max_zone_append_size. Technically, a bio can be larger than
queue_max_segments() * PAGE_SIZE if the pages are contiguous. But it is
safe to treat "queue_max_segments() * PAGE_SIZE" as an upper limit on
the extent size when calculating the number of extents needed to write
data.
Reviewed-by: Johannes Thumshirn
Signed-off-by: Naohiro Aota
Signed-off-by: David Sterba
Signed-off-by: Sasha Levin
---
 fs/btrfs/ctree.h |  2 ++
 fs/btrfs/zoned.c | 17 +++++++++++++++++
 fs/btrfs/zoned.h |  1 +
 3 files changed, 20 insertions(+)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 9c21e214d29e..7abfbfd7c94c 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1047,6 +1047,8 @@ struct btrfs_fs_info {
 	 */
 	u64 zone_size;
 
+	/* Max size to emit ZONE_APPEND write command */
+	u64 max_zone_append_size;
 	struct mutex zoned_meta_io_lock;
 	spinlock_t treelog_bg_lock;
 	u64 treelog_bg;
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index d99026df6f67..52607569cf49 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -415,6 +415,16 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache)
 	nr_sectors = bdev_nr_sectors(bdev);
 	zone_info->zone_size_shift = ilog2(zone_info->zone_size);
 	zone_info->nr_zones = nr_sectors >> ilog2(zone_sectors);
+	/*
+	 * We limit max_zone_append_size also by max_segments *
+	 * PAGE_SIZE. Technically, we can have multiple pages per segment. But,
+	 * since btrfs adds the pages one by one to a bio, and btrfs cannot
+	 * increase the metadata reservation even if it increases the number of
+	 * extents, it is safe to stick with the limit.
+	 */
+	zone_info->max_zone_append_size =
+		min_t(u64, (u64)bdev_max_zone_append_sectors(bdev) << SECTOR_SHIFT,
+		      (u64)bdev_max_segments(bdev) << PAGE_SHIFT);
 	if (!IS_ALIGNED(nr_sectors, zone_sectors))
 		zone_info->nr_zones++;
 
@@ -640,6 +650,7 @@ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info)
 	u64 zoned_devices = 0;
 	u64 nr_devices = 0;
 	u64 zone_size = 0;
+	u64 max_zone_append_size = 0;
 	const bool incompat_zoned = btrfs_fs_incompat(fs_info, ZONED);
 	int ret = 0;
 
@@ -674,6 +685,11 @@ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info)
 				ret = -EINVAL;
 				goto out;
 			}
+			if (!max_zone_append_size ||
+			    (zone_info->max_zone_append_size &&
+			     zone_info->max_zone_append_size < max_zone_append_size))
+				max_zone_append_size =
+					zone_info->max_zone_append_size;
 		}
 		nr_devices++;
 	}
@@ -723,6 +739,7 @@ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info)
 	}
 
 	fs_info->zone_size = zone_size;
+	fs_info->max_zone_append_size = max_zone_append_size;
 	fs_info->fs_devices->chunk_alloc_policy = BTRFS_CHUNK_ALLOC_ZONED;
 
 	/*
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 6b2eec99162b..9caeab07fd38 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -19,6 +19,7 @@ struct btrfs_zoned_device_info {
 	 */
 	u64 zone_size;
 	u8  zone_size_shift;
+	u64 max_zone_append_size;
 	u32 nr_zones;
 	unsigned int max_active_zones;
 	atomic_t active_zones_left;
-- 
2.35.1
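
For readers less familiar with the block-layer limits involved, below is a
minimal, self-contained user-space sketch (not part of the patch) of the
clamping arithmetic the btrfs_get_dev_zone_info() hunk performs, and of how
the resulting value can serve as a hint for the number of extents covering a
writeback region, as the commit message describes. The device limits
(max_zone_append_sectors, max_segments) and the region size are hypothetical
example values, and min_u64() stands in for the kernel's min_t(u64, ...).

/*
 * Sketch of the max_zone_append_size clamping from this patch.
 * All values are hypothetical; a real device would report its own
 * queue limits via bdev_max_zone_append_sectors()/bdev_max_segments().
 */
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 9   /* 512-byte sectors, as in the block layer */
#define PAGE_SHIFT   12  /* assuming 4 KiB pages */

static uint64_t min_u64(uint64_t a, uint64_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Hypothetical queue limits reported by a zoned device. */
	uint64_t max_zone_append_sectors = 2048; /* 1 MiB per ZONE APPEND bio */
	uint64_t max_segments = 128;             /* queue_max_segments() */

	/*
	 * Same formula as the patch: the ZONE APPEND size is bounded both by
	 * the device's append limit and by max_segments * PAGE_SIZE, because
	 * btrfs adds pages to a bio one by one.
	 */
	uint64_t max_zone_append_size =
		min_u64(max_zone_append_sectors << SECTOR_SHIFT,
			max_segments << PAGE_SHIFT);

	/*
	 * The value is later used as a hint: a delalloc/writeback region of
	 * region_bytes needs at least this many extents (round up).
	 */
	uint64_t region_bytes = 3u << 20; /* 3 MiB example region */
	uint64_t nr_extents =
		(region_bytes + max_zone_append_size - 1) / max_zone_append_size;

	printf("max_zone_append_size = %llu bytes\n",
	       (unsigned long long)max_zone_append_size);
	printf("extents needed for a %llu-byte region: %llu\n",
	       (unsigned long long)region_bytes,
	       (unsigned long long)nr_extents);
	return 0;
}

With these example limits, max_segments * PAGE_SIZE (512 KiB) is the smaller
bound, so the 3 MiB region is estimated to need 6 extents; in the kernel the
same arithmetic runs per device, and btrfs_check_zoned_mode() keeps the
smallest non-zero value across all zoned devices in fs_info.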