From: Gao Xiang <hsiangkao@linux.alibaba.com>
To: linux-erofs@lists.ozlabs.org
Cc: LKML <linux-kernel@vger.kernel.org>, Linus Torvalds, Gao Xiang <hsiangkao@linux.alibaba.com>
Subject: [PATCH] erofs: fix erofs_insert_workgroup() lockref usage
Date: Tue, 31 Oct 2023 14:05:24 +0800
Message-Id: <20231031060524.1103921-1-hsiangkao@linux.alibaba.com>

As Linus pointed out [1], lockref_put_return() is fundamentally designed
to be something that can fail: it is a fastpath-only helper, and the
failure case needs to be handled anyway.

Actually, since the new pcluster has just been allocated and is not yet
populated, it won't be accessed by others until it is inserted into the
XArray, so the lockref helpers are unneeded here.  Let's just set the
proper reference count when initializing.

[1] https://lore.kernel.org/r/CAHk-=whCga8BeQnJ3ZBh_Hfm9ctba_wpF444LpwRybVNMzO6Dw@mail.gmail.com

Fixes: 7674a42f35ea ("erofs: use struct lockref to replace handcrafted approach")
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Reviewed-by: Chao Yu
---
 fs/erofs/utils.c | 8 +-------
 fs/erofs/zdata.c | 1 +
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/fs/erofs/utils.c b/fs/erofs/utils.c
index cc6fb9e98899..4256a85719a1 100644
--- a/fs/erofs/utils.c
+++ b/fs/erofs/utils.c
@@ -77,12 +77,7 @@ struct erofs_workgroup *erofs_insert_workgroup(struct super_block *sb,
 	struct erofs_sb_info *const sbi = EROFS_SB(sb);
 	struct erofs_workgroup *pre;
 
-	/*
-	 * Bump up before making this visible to others for the XArray in order
-	 * to avoid potential UAF without serialized by xa_lock.
-	 */
-	lockref_get(&grp->lockref);
-
+	DBG_BUGON(grp->lockref.count < 1);
 repeat:
 	xa_lock(&sbi->managed_pslots);
 	pre = __xa_cmpxchg(&sbi->managed_pslots, grp->index,
@@ -96,7 +91,6 @@ struct erofs_workgroup *erofs_insert_workgroup(struct super_block *sb,
 			cond_resched();
 			goto repeat;
 		}
-		lockref_put_return(&grp->lockref);
 		grp = pre;
 	}
 	xa_unlock(&sbi->managed_pslots);
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 036f610e044b..a7e6847f6f8f 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -796,6 +796,7 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
 		return PTR_ERR(pcl);
 
 	spin_lock_init(&pcl->obj.lockref.lock);
+	pcl->obj.lockref.count = 1;	/* one ref for this request */
 	pcl->algorithmformat = map->m_algorithmformat;
 	pcl->length = 0;
 	pcl->partial = true;
-- 
2.39.3
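
For readers unfamiliar with the pattern: the point of the change is that a freshly allocated, unpublished object is private to its creator, so its reference count can be written directly, and no lockref_get()/lockref_put_return() round trip is needed around the publish step (the remaining DBG_BUGON only asserts the caller already holds at least one reference). The snippet below is a minimal userspace sketch of that idea, not kernel code: the struct, function names, and the use of a plain pointer store in place of the XArray are all illustrative assumptions.

	/* Minimal userspace sketch (assumed names, not the kernel implementation):
	 * an object is private until published, so its refcount can be set
	 * without locking, mirroring "pcl->obj.lockref.count = 1" above. */
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct obj {
		pthread_mutex_t lock;	/* stand-in for lockref.lock */
		int count;		/* stand-in for lockref.count */
	};

	static struct obj *published;	/* stand-in for the shared index (XArray) */

	static struct obj *obj_create(void)
	{
		struct obj *o = malloc(sizeof(*o));

		if (!o)
			return NULL;
		pthread_mutex_init(&o->lock, NULL);
		/* Not visible to any other thread yet, so no locking needed. */
		o->count = 1;		/* one reference held by this request */
		return o;
	}

	static void obj_publish(struct obj *o)
	{
		/* Only after this store can other threads find (and ref) the object. */
		__atomic_store_n(&published, o, __ATOMIC_RELEASE);
	}

	int main(void)
	{
		struct obj *o = obj_create();

		if (!o)
			return 1;
		obj_publish(o);
		printf("published object with initial refcount %d\n", published->count);
		return 0;
	}

The same reasoning also removes the failure-handling burden of lockref_put_return() that the commit message cites: there is no longer a put on the insertion path at all, only a plain initialization before the object becomes reachable.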