From nobody Tue Feb 10 04:02:40 2026
From: Chen Ridong <chenridong@huaweicloud.com>
To: dev@lankhorst.se, mripard@kernel.org, natalie.vock@gmx.de,
	tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com
Cc: cgroups@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, lujialin4@huawei.com,
	chenridong@huaweicloud.com
Subject: [PATCH -next 3/3] cgroup/dmem: avoid pool UAF
Date: Sat, 31 Jan 2026 09:12:02 +0000
Message-Id: <20260131091202.344788-4-chenridong@huaweicloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260131091202.344788-1-chenridong@huaweicloud.com>
References: <20260131091202.344788-1-chenridong@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Chen Ridong

A UAF issue was observed:

 BUG: KASAN: slab-use-after-free in page_counter_uncharge+0x65/0x150
 Write of size 8 at addr ffff888106715440 by task insmod/527

 CPU: 4 UID: 0 PID: 527 Comm: insmod 6.19.0-rc7-next-20260129+ #11
 Tainted: [O]=OOT_MODULE
 Call Trace:
  dump_stack_lvl+0x82/0xd0
  kasan_report+0xca/0x100
  kasan_check_range+0x39/0x1c0
  page_counter_uncharge+0x65/0x150
  dmem_cgroup_uncharge+0x1f/0x260

 Allocated by task 527:
 Freed by task 0:

 The buggy address belongs to the object at ffff888106715400
  which belongs to the cache kmalloc-512 of size 512
 The buggy address is located 64 bytes inside of
  freed 512-byte region [ffff888106715400, ffff888106715600)
 The buggy address belongs to the physical page:

 Memory state around the buggy address:
  ffff888106715300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
  ffff888106715380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 >ffff888106715400: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                    ^
  ffff888106715480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
  ffff888106715500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb

The issue occurs because a pool can still be held by a caller after its
associated memory region has been unregistered: the current implementation
frees the pool even while users still hold references to it (e.g., before
their uncharge operations complete).

Fix this by adding a reference counter to each pool, so that a pool is
only freed once its reference count drops to zero.

Fixes: b168ed458dde ("kernel/cgroup: Add "dmem" memory accounting cgroup")
Signed-off-by: Chen Ridong
---
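Note for reviewers: the change follows the common tryget/put lifetime
pattern. Below is a minimal, self-contained userspace C sketch of that
pattern, for illustration only: C11 atomics stand in for refcount_t, an
immediate free() stands in for the call_rcu() deferral, and the
pool_tryget()/pool_put() names are hypothetical, not part of this patch.

/* Minimal userspace analogue of the pool refcount lifecycle. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct pool {
	atomic_int ref;		/* stands in for refcount_t ref */
};

/* Like dmemcg_pool_tryget(): never revives a count that already hit 0. */
static bool pool_tryget(struct pool *pool)
{
	int old = atomic_load(&pool->ref);

	while (old != 0)
		if (atomic_compare_exchange_weak(&pool->ref, &old, old + 1))
			return true;
	return false;
}

/*
 * Like dmemcg_pool_put(): the last put frees the object. The kernel
 * defers the kfree() through call_rcu(); a plain free() stands in here.
 */
static void pool_put(struct pool *pool)
{
	if (atomic_fetch_sub(&pool->ref, 1) == 1)
		free(pool);
}

int main(void)
{
	struct pool *pool = malloc(sizeof(*pool));

	atomic_init(&pool->ref, 1);	/* reference held by the region */

	if (pool_tryget(pool))		/* charge path takes its own ref... */
		pool_put(pool);		/* ...dropped again at uncharge */

	pool_put(pool);			/* unregister drops the last ref */
	return 0;
}

The property that matters for this fix is that tryget fails once the
count has already reached zero, so a pool being torn down can never be
handed out again while pending uncharges still hold their references.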
 kernel/cgroup/dmem.c | 60 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 58 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/dmem.c b/kernel/cgroup/dmem.c
index b17f48bdef81..7fc13e4dce72 100644
--- a/kernel/cgroup/dmem.c
+++ b/kernel/cgroup/dmem.c
@@ -14,6 +14,7 @@
 #include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/page_counter.h>
+#include <linux/refcount.h>
 #include <linux/parser.h>
 #include <linux/slab.h>
 
@@ -71,7 +72,9 @@ struct dmem_cgroup_pool_state {
 	struct rcu_head rcu;
 
 	struct page_counter cnt;
+	struct dmem_cgroup_pool_state *parent;
 
+	refcount_t ref;
 	bool inited;
 };
 
@@ -88,6 +91,9 @@ struct dmem_cgroup_pool_state {
 static DEFINE_SPINLOCK(dmemcg_lock);
 static LIST_HEAD(dmem_cgroup_regions);
 
+static void dmemcg_free_region(struct kref *ref);
+static void dmemcg_pool_free_rcu(struct rcu_head *rcu);
+
 static inline struct dmemcg_state *
 css_to_dmemcs(struct cgroup_subsys_state *css)
 {
@@ -104,10 +110,38 @@ static struct dmemcg_state *parent_dmemcs(struct dmemcg_state *cg)
 	return cg->css.parent ? css_to_dmemcs(cg->css.parent) : NULL;
 }
 
+static void dmemcg_pool_get(struct dmem_cgroup_pool_state *pool)
+{
+	refcount_inc(&pool->ref);
+}
+
+static bool dmemcg_pool_tryget(struct dmem_cgroup_pool_state *pool)
+{
+	return refcount_inc_not_zero(&pool->ref);
+}
+
+static void dmemcg_pool_put(struct dmem_cgroup_pool_state *pool)
+{
+	if (!refcount_dec_and_test(&pool->ref))
+		return;
+
+	call_rcu(&pool->rcu, dmemcg_pool_free_rcu);
+}
+
+static void dmemcg_pool_free_rcu(struct rcu_head *rcu)
+{
+	struct dmem_cgroup_pool_state *pool = container_of(rcu, typeof(*pool), rcu);
+
+	if (pool->parent)
+		dmemcg_pool_put(pool->parent);
+	kref_put(&pool->region->ref, dmemcg_free_region);
+	kfree(pool);
+}
+
 static void free_cg_pool(struct dmem_cgroup_pool_state *pool)
 {
 	list_del(&pool->region_node);
-	kfree(pool);
+	dmemcg_pool_put(pool);
 }
 
 static void
@@ -342,6 +376,12 @@ alloc_pool_single(struct dmemcg_state *dmemcs, struct dmem_cgroup_region *region
 	page_counter_init(&pool->cnt,
 			  ppool ? &ppool->cnt : NULL, true);
 	reset_all_resource_limits(pool);
+	refcount_set(&pool->ref, 1);
+	kref_get(&region->ref);
+	if (ppool && !pool->parent) {
+		pool->parent = ppool;
+		dmemcg_pool_get(ppool);
+	}
 
 	list_add_tail_rcu(&pool->css_node, &dmemcs->pools);
 	list_add_tail(&pool->region_node, &region->pools);
@@ -389,6 +429,10 @@ get_cg_pool_locked(struct dmemcg_state *dmemcs, struct dmem_cgroup_region *regio
 
 		/* Fix up parent links, mark as inited. */
 		pool->cnt.parent = &ppool->cnt;
+		if (ppool && !pool->parent) {
+			pool->parent = ppool;
+			dmemcg_pool_get(ppool);
+		}
 		pool->inited = true;
 
 		pool = ppool;
@@ -435,6 +479,8 @@ void dmem_cgroup_unregister_region(struct dmem_cgroup_region *region)
 
 	list_for_each_entry_safe(pool, next, &region->pools, region_node) {
 		list_del_rcu(&pool->css_node);
+		list_del(&pool->region_node);
+		dmemcg_pool_put(pool);
 	}
 
 	/*
@@ -515,8 +561,10 @@ static struct dmem_cgroup_region *dmemcg_get_region_by_name(const char *name)
  */
 void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool)
 {
-	if (pool)
+	if (pool) {
 		css_put(&pool->cs->css);
+		dmemcg_pool_put(pool);
+	}
 }
 EXPORT_SYMBOL_GPL(dmem_cgroup_pool_state_put);
 
@@ -530,6 +578,8 @@ get_cg_pool_unlocked(struct dmemcg_state *cg, struct dmem_cgroup_region *region)
 	pool = find_cg_pool_locked(cg, region);
 	if (pool && !READ_ONCE(pool->inited))
 		pool = NULL;
+	if (pool && !dmemcg_pool_tryget(pool))
+		pool = NULL;
 	rcu_read_unlock();
 
 	while (!pool) {
@@ -538,6 +588,8 @@ get_cg_pool_unlocked(struct dmemcg_state *cg, struct dmem_cgroup_region *region)
 			pool = get_cg_pool_locked(cg, region, &allocpool);
 		else
 			pool = ERR_PTR(-ENODEV);
+		if (!IS_ERR(pool))
+			dmemcg_pool_get(pool);
 		spin_unlock(&dmemcg_lock);
 
 		if (pool == ERR_PTR(-ENOMEM)) {
@@ -573,6 +625,7 @@ void dmem_cgroup_uncharge(struct dmem_cgroup_pool_state *pool, u64 size)
 
 	page_counter_uncharge(&pool->cnt, size);
 	css_put(&pool->cs->css);
+	dmemcg_pool_put(pool);
 }
 EXPORT_SYMBOL_GPL(dmem_cgroup_uncharge);
 
@@ -624,7 +677,9 @@ int dmem_cgroup_try_charge(struct dmem_cgroup_region *region, u64 size,
 		if (ret_limit_pool) {
 			*ret_limit_pool = container_of(fail, struct dmem_cgroup_pool_state, cnt);
 			css_get(&(*ret_limit_pool)->cs->css);
+			dmemcg_pool_get(*ret_limit_pool);
 		}
+		dmemcg_pool_put(pool);
 		ret = -EAGAIN;
 		goto err;
 	}
@@ -721,6 +776,7 @@ static ssize_t dmemcg_limit_write(struct kernfs_open_file *of,
 
 	/* And commit */
 	apply(pool, new_limit);
+	dmemcg_pool_put(pool);
 
 out_put:
 	kref_put(&region->ref, dmemcg_free_region);
-- 
2.34.1