From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Johannes Thumshirn,
    David Sterba, Sasha Levin
Subject: [PATCH 5.19 132/192] btrfs: zoned: fix mounting with conventional zones
Date: Tue, 13 Sep 2022 16:03:58 +0200
Message-Id: <20220913140416.588500662@linuxfoundation.org>
In-Reply-To: <20220913140410.043243217@linuxfoundation.org>
References: <20220913140410.043243217@linuxfoundation.org>
X-Mailer: git-send-email 2.37.3
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Johannes Thumshirn

[ Upstream commit 6ca64ac2763149fb66c0b4bf12f5e0977a88e51d ]

Since commit 6a921de58992 ("btrfs: zoned: introduce
space_info->active_total_bytes"), we're only counting the bytes of a
block group on an active zone as usable for metadata writes. But on a
SMR drive, we don't have active zones and short circuit some of the
logic.

This leads to an error on mount, because we cannot reserve space for
metadata writes.

Fix this by also setting the BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE bit in the
block-group's runtime flag if the zone is a conventional zone.

Fixes: 6a921de58992 ("btrfs: zoned: introduce space_info->active_total_bytes")
Signed-off-by: Johannes Thumshirn
Signed-off-by: David Sterba
Signed-off-by: Sasha Levin
---
 fs/btrfs/zoned.c | 81 ++++++++++++++++++++++++------------------------
 1 file changed, 40 insertions(+), 41 deletions(-)

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 4949e0d82923d..1386362fad3b8 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1187,7 +1187,7 @@ int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
  * offset.
  */
 static int calculate_alloc_pointer(struct btrfs_block_group *cache,
-				   u64 *offset_ret)
+				   u64 *offset_ret, bool new)
 {
 	struct btrfs_fs_info *fs_info = cache->fs_info;
 	struct btrfs_root *root;
@@ -1197,6 +1197,21 @@ static int calculate_alloc_pointer(struct btrfs_block_group *cache,
 	int ret;
 	u64 length;
 
+	/*
+	 * Avoid tree lookups for a new block group, there's no use for it.
+	 * It must always be 0.
+	 *
+	 * Also, we have a lock chain of extent buffer lock -> chunk mutex.
+	 * For new a block group, this function is called from
+	 * btrfs_make_block_group() which is already taking the chunk mutex.
+	 * Thus, we cannot call calculate_alloc_pointer() which takes extent
+	 * buffer locks to avoid deadlock.
+	 */
+	if (new) {
+		*offset_ret = 0;
+		return 0;
+	}
+
 	path = btrfs_alloc_path();
 	if (!path)
 		return -ENOMEM;
@@ -1332,6 +1347,13 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
 		else
 			num_conventional++;
 
+		/*
+		 * Consider a zone as active if we can allow any number of
+		 * active zones.
+		 */
+		if (!device->zone_info->max_active_zones)
+			__set_bit(i, active);
+
 		if (!is_sequential) {
 			alloc_offsets[i] = WP_CONVENTIONAL;
 			continue;
@@ -1398,45 +1420,23 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
 			__set_bit(i, active);
 			break;
 		}
-
-		/*
-		 * Consider a zone as active if we can allow any number of
-		 * active zones.
-		 */
-		if (!device->zone_info->max_active_zones)
-			__set_bit(i, active);
 	}
 
 	if (num_sequential > 0)
 		cache->seq_zone = true;
 
 	if (num_conventional > 0) {
-		/*
-		 * Avoid calling calculate_alloc_pointer() for new BG. It
-		 * is no use for new BG. It must be always 0.
-		 *
-		 * Also, we have a lock chain of extent buffer lock ->
-		 * chunk mutex. For new BG, this function is called from
-		 * btrfs_make_block_group() which is already taking the
-		 * chunk mutex. Thus, we cannot call
-		 * calculate_alloc_pointer() which takes extent buffer
-		 * locks to avoid deadlock.
-		 */
-
 		/* Zone capacity is always zone size in emulation */
 		cache->zone_capacity = cache->length;
-		if (new) {
-			cache->alloc_offset = 0;
-			goto out;
-		}
-		ret = calculate_alloc_pointer(cache, &last_alloc);
-		if (ret || map->num_stripes == num_conventional) {
-			if (!ret)
-				cache->alloc_offset = last_alloc;
-			else
-				btrfs_err(fs_info,
+		ret = calculate_alloc_pointer(cache, &last_alloc, new);
+		if (ret) {
+			btrfs_err(fs_info,
 			"zoned: failed to determine allocation offset of bg %llu",
-					cache->start);
+				  cache->start);
+			goto out;
+		} else if (map->num_stripes == num_conventional) {
+			cache->alloc_offset = last_alloc;
+			cache->zone_is_active = 1;
 			goto out;
 		}
 	}
@@ -1504,13 +1504,6 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
 		goto out;
 	}
 
-	if (cache->zone_is_active) {
-		btrfs_get_block_group(cache);
-		spin_lock(&fs_info->zone_active_bgs_lock);
-		list_add_tail(&cache->active_bg_list, &fs_info->zone_active_bgs);
-		spin_unlock(&fs_info->zone_active_bgs_lock);
-	}
-
 out:
 	if (cache->alloc_offset > fs_info->zone_size) {
 		btrfs_err(fs_info,
@@ -1535,10 +1528,16 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
 		ret = -EIO;
 	}
 
-	if (!ret)
+	if (!ret) {
 		cache->meta_write_pointer = cache->alloc_offset + cache->start;
-
-	if (ret) {
+		if (cache->zone_is_active) {
+			btrfs_get_block_group(cache);
+			spin_lock(&fs_info->zone_active_bgs_lock);
+			list_add_tail(&cache->active_bg_list,
+				      &fs_info->zone_active_bgs);
+			spin_unlock(&fs_info->zone_active_bgs_lock);
+		}
+	} else {
 		kfree(cache->physical_map);
 		cache->physical_map = NULL;
 	}
-- 
2.35.1
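
A quick illustration of what the change buys, for readers who are not following the btrfs zoned code: the sketch below is a standalone userspace model, not kernel code and not part of the patch. struct zone_dev, struct block_group and active_total_bytes() are simplified stand-ins for the structures touched above. It only shows the decision the fix flips: a zone on a device with no max_active_zones limit (the conventional-zone case) is now treated as active, so the block group's bytes count toward the total that the metadata reservation at mount time draws from.

/*
 * Simplified userspace model of the behavioral change; names are
 * illustrative stand-ins, not the real btrfs structures.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone_dev {
	unsigned int max_active_zones;	/* 0 = no activation limit */
};

struct block_group {
	unsigned long long length;	/* bytes in the block group */
	bool zone_is_active;
};

/*
 * With the fix applied, a block group backed by a device that imposes
 * no active-zone limit is marked active, so its bytes are counted as
 * usable for metadata writes.
 */
static unsigned long long usable_for_metadata(const struct zone_dev *dev,
					      struct block_group *bg,
					      bool with_fix)
{
	if (with_fix && dev->max_active_zones == 0)
		bg->zone_is_active = true;

	return bg->zone_is_active ? bg->length : 0;
}

int main(void)
{
	struct zone_dev smr = { .max_active_zones = 0 };
	struct block_group bg = { .length = 256ULL << 20,
				  .zone_is_active = false };

	/* 0 bytes: models the failed metadata reservation on mount */
	printf("before fix: %llu\n", usable_for_metadata(&smr, &bg, false));

	bg.zone_is_active = false;
	/* full block group length: the reservation can succeed */
	printf("after fix:  %llu\n", usable_for_metadata(&smr, &bg, true));
	return 0;
}

In this toy model the pre-fix path reports 0 usable bytes, mirroring the mount failure described in the changelog, while the post-fix path reports the full block-group length.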