From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jan Kara, Theodore Tso,
 Sasha Levin, Ritesh Harjani
Subject: [PATCH 5.19 1129/1157] ext4: fix race when reusing xattr blocks
Date: Mon, 15 Aug 2022 20:08:05 +0200
Message-Id: <20220815180525.547543475@linuxfoundation.org>
In-Reply-To: <20220815180439.416659447@linuxfoundation.org>
References: <20220815180439.416659447@linuxfoundation.org>

From: Jan Kara

[ Upstream commit 65f8b80053a1b2fd602daa6814e62d6fa90e5e9b ]

When ext4_xattr_block_set() decides to remove the xattr block, the
following race can happen:

CPU1                                    CPU2
ext4_xattr_block_set()                  ext4_xattr_release_block()
  new_bh = ext4_xattr_block_cache_find()

                                          lock_buffer(bh);
                                          ref = le32_to_cpu(BHDR(bh)->h_refcount);
                                          if (ref == 1) {
                                            ...
                                            mb_cache_entry_delete();
                                            unlock_buffer(bh);
                                            ext4_free_blocks();
                                              ...
                                              ext4_forget(..., bh, ...);
                                                jbd2_journal_revoke(..., bh);

  ext4_journal_get_write_access(..., new_bh, ...)
    do_get_write_access()
      jbd2_journal_cancel_revoke(..., new_bh);

Later the code in ext4_xattr_block_set() finds out the block got freed
and cancels reuse of the block, but the revoke stays canceled, so in
case of block reuse and journal replay the filesystem can get
corrupted. If the race works out slightly differently, we can also hit
assertions in the jbd2 code.

Fix the problem by making sure that once a matching mbcache entry is
found, the code dropping the last xattr block reference (or trying to
modify the xattr block in place) waits until the mbcache entry
reference is dropped. This way, code trying to reuse the xattr block is
protected from someone trying to drop the last reference to the xattr
block.
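The heart of the fix is the wait-and-retry pattern shown below. This is
a condensed sketch of the ext4_evict_ea_inode() hunk in the diff, not a
drop-in replacement; the function name evict_ea_inode_sketch is
illustrative only, while EA_INODE_CACHE(), ext4_xattr_inode_get_hash()
and the mb_cache_entry_*() helpers are the ones the patch uses in
fs/ext4/xattr.c:

/*
 * Condensed sketch of the new eviction path:
 * mb_cache_entry_delete_or_get() deletes the entry only when nobody
 * else holds a reference to it; otherwise it returns the entry with
 * an extra reference so that we can sleep until it is unused.
 */
static void evict_ea_inode_sketch(struct inode *inode)
{
	struct mb_cache *cache = EA_INODE_CACHE(inode);
	struct mb_cache_entry *oe;

	if (!cache)
		return;
	while ((oe = mb_cache_entry_delete_or_get(cache,
			ext4_xattr_inode_get_hash(inode), inode->i_ino))) {
		/* Entry is still in use: wait, drop our ref, then retry. */
		mb_cache_entry_wait_unused(oe);
		mb_cache_entry_put(cache, oe);
	}
}

ext4_xattr_release_block() applies the same pattern but drops the
buffer lock before sleeping and retakes it afterwards (the retry_ref
loop in the diff), since the holder of the entry reference may itself
need the buffer lock before it lets go.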
Reported-and-tested-by: Ritesh Harjani
CC: stable@vger.kernel.org
Fixes: 82939d7999df ("ext4: convert to mbcache2")
Signed-off-by: Jan Kara
Link: https://lore.kernel.org/r/20220712105436.32204-5-jack@suse.cz
Signed-off-by: Theodore Ts'o
Signed-off-by: Sasha Levin
---
 fs/ext4/xattr.c | 67 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 45 insertions(+), 22 deletions(-)

diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
index a25942a74929..533216e80fa2 100644
--- a/fs/ext4/xattr.c
+++ b/fs/ext4/xattr.c
@@ -439,9 +439,16 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
 /* Remove entry from mbcache when EA inode is getting evicted */
 void ext4_evict_ea_inode(struct inode *inode)
 {
-	if (EA_INODE_CACHE(inode))
-		mb_cache_entry_delete(EA_INODE_CACHE(inode),
-			ext4_xattr_inode_get_hash(inode), inode->i_ino);
+	struct mb_cache_entry *oe;
+
+	if (!EA_INODE_CACHE(inode))
+		return;
+	/* Wait for entry to get unused so that we can remove it */
+	while ((oe = mb_cache_entry_delete_or_get(EA_INODE_CACHE(inode),
+			ext4_xattr_inode_get_hash(inode), inode->i_ino))) {
+		mb_cache_entry_wait_unused(oe);
+		mb_cache_entry_put(EA_INODE_CACHE(inode), oe);
+	}
 }
 
 static int
@@ -1229,6 +1236,7 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
 	if (error)
 		goto out;
 
+retry_ref:
 	lock_buffer(bh);
 	hash = le32_to_cpu(BHDR(bh)->h_hash);
 	ref = le32_to_cpu(BHDR(bh)->h_refcount);
@@ -1238,9 +1246,18 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
 		 * This must happen under buffer lock for
 		 * ext4_xattr_block_set() to reliably detect freed block
 		 */
-		if (ea_block_cache)
-			mb_cache_entry_delete(ea_block_cache, hash,
-					      bh->b_blocknr);
+		if (ea_block_cache) {
+			struct mb_cache_entry *oe;
+
+			oe = mb_cache_entry_delete_or_get(ea_block_cache, hash,
+							  bh->b_blocknr);
+			if (oe) {
+				unlock_buffer(bh);
+				mb_cache_entry_wait_unused(oe);
+				mb_cache_entry_put(ea_block_cache, oe);
+				goto retry_ref;
+			}
+		}
 		get_bh(bh);
 		unlock_buffer(bh);
 
@@ -1867,9 +1884,20 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
 			 * ext4_xattr_block_set() to reliably detect modified
 			 * block
 			 */
-			if (ea_block_cache)
-				mb_cache_entry_delete(ea_block_cache, hash,
-						      bs->bh->b_blocknr);
+			if (ea_block_cache) {
+				struct mb_cache_entry *oe;
+
+				oe = mb_cache_entry_delete_or_get(ea_block_cache,
+					hash, bs->bh->b_blocknr);
+				if (oe) {
+					/*
+					 * Xattr block is getting reused. Leave
+					 * it alone.
+					 */
+					mb_cache_entry_put(ea_block_cache, oe);
+					goto clone_block;
+				}
+			}
 			ea_bdebug(bs->bh, "modifying in-place");
 			error = ext4_xattr_set_entry(i, s, handle, inode,
 						     true /* is_block */);
@@ -1885,6 +1913,7 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
 				goto cleanup;
 			goto inserted;
 		}
+clone_block:
 		unlock_buffer(bs->bh);
 		ea_bdebug(bs->bh, "cloning");
 		s->base = kmemdup(BHDR(bs->bh), bs->bh->b_size, GFP_NOFS);
@@ -1990,18 +2019,13 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
 			lock_buffer(new_bh);
 			/*
 			 * We have to be careful about races with
-			 * freeing, rehashing or adding references to
-			 * xattr block. Once we hold buffer lock xattr
-			 * block's state is stable so we can check
-			 * whether the block got freed / rehashed or
-			 * not. Since we unhash mbcache entry under
-			 * buffer lock when freeing / rehashing xattr
-			 * block, checking whether entry is still
-			 * hashed is reliable. Same rules hold for
-			 * e_reusable handling.
+			 * adding references to xattr block. Once we
+			 * hold buffer lock xattr block's state is
+			 * stable so we can check the additional
+			 * reference fits.
 			 */
-			if (hlist_bl_unhashed(&ce->e_hash_list) ||
-			    !ce->e_reusable) {
+			ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
+			if (ref > EXT4_XATTR_REFCOUNT_MAX) {
 				/*
 				 * Undo everything and check mbcache
 				 * again.
@@ -2016,9 +2040,8 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
 				new_bh = NULL;
 				goto inserted;
 			}
-			ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
 			BHDR(new_bh)->h_refcount = cpu_to_le32(ref);
-			if (ref >= EXT4_XATTR_REFCOUNT_MAX)
+			if (ref == EXT4_XATTR_REFCOUNT_MAX)
 				ce->e_reusable = 0;
 			ea_bdebug(new_bh, "reusing; refcount now=%d",
 				  ref);
-- 
2.35.1
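A note on the final two hunks: with the waiters above in place, a held
mbcache entry reference keeps the xattr block from being freed out from
under the reuser, so the old hlist_bl_unhashed() / e_reusable test
reduces to checking under the buffer lock that one more reference fits.
A condensed sketch of that decision (xattr_block_ref_fits() is an
illustrative name, not a function in the patch; the caller holds the
buffer lock on new_bh and a reference on the mbcache entry ce):

/*
 * Sketch of the post-fix reuse decision. Anyone dropping the last
 * block reference now waits in mb_cache_entry_wait_unused() for our
 * entry reference, so the block cannot disappear here; only refcount
 * saturation needs checking.
 */
static bool xattr_block_ref_fits(struct buffer_head *new_bh,
				 struct mb_cache_entry *ce)
{
	u32 ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;

	if (ref > EXT4_XATTR_REFCOUNT_MAX)
		return false;		/* undo and redo the mbcache lookup */
	BHDR(new_bh)->h_refcount = cpu_to_le32(ref);
	if (ref == EXT4_XATTR_REFCOUNT_MAX)
		ce->e_reusable = 0;	/* no further sharers allowed */
	return true;
}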