From: Dong Chenchen
Subject: [PATCH net v2] page_pool: Fix use-after-free in page_pool_recycle_in_ring
Date: Tue, 27 May 2025 19:41:52 +0800
Message-ID: <20250527114152.3119109-1-dongchenchen2@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

syzbot reported a uaf in page_pool_recycle_in_ring:

BUG: KASAN: slab-use-after-free in lock_release+0x151/0xa30 kernel/locking/lockdep.c:5862
Read of size 8 at addr ffff8880286045a0 by task syz.0.284/6943

CPU: 0 UID: 0 PID: 6943 Comm: syz.0.284 Not tainted 6.13.0-rc3-syzkaller-gdfa94ce54f41 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:489
 kasan_report+0x143/0x180 mm/kasan/report.c:602
 lock_release+0x151/0xa30 kernel/locking/lockdep.c:5862
 __raw_spin_unlock_bh include/linux/spinlock_api_smp.h:165 [inline]
 _raw_spin_unlock_bh+0x1b/0x40 kernel/locking/spinlock.c:210
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 ptr_ring_produce_bh include/linux/ptr_ring.h:164 [inline]
 page_pool_recycle_in_ring net/core/page_pool.c:707 [inline]
 page_pool_put_unrefed_netmem+0x748/0xb00 net/core/page_pool.c:826
 page_pool_put_netmem include/net/page_pool/helpers.h:323 [inline]
 page_pool_put_full_netmem include/net/page_pool/helpers.h:353 [inline]
 napi_pp_put_page+0x149/0x2b0 net/core/skbuff.c:1036
 skb_pp_recycle net/core/skbuff.c:1047 [inline]
 skb_free_head net/core/skbuff.c:1094 [inline]
 skb_release_data+0x6c4/0x8a0 net/core/skbuff.c:1125
 skb_release_all net/core/skbuff.c:1190 [inline]
 __kfree_skb net/core/skbuff.c:1204 [inline]
 sk_skb_reason_drop+0x1c9/0x380 net/core/skbuff.c:1242
 kfree_skb_reason include/linux/skbuff.h:1263 [inline]
 __skb_queue_purge_reason include/linux/skbuff.h:3343 [inline]

root cause is:

page_pool_recycle_in_ring
  ptr_ring_produce
    spin_lock(&r->producer_lock);
    WRITE_ONCE(r->queue[r->producer++], ptr)
    //recycle last page to pool
                                    page_pool_release
                                      page_pool_scrub
                                        page_pool_empty_ring
                                          ptr_ring_consume
                                            page_pool_return_page
                                            //release all page
                                      __page_pool_destroy
                                        free_percpu(pool->recycle_stats);
                                        free(pool) //free

    spin_unlock(&r->producer_lock); //pool->ring uaf read
    recycle_stat_inc(pool, ring);

The page_pool can be freed while it is still recycling the last page into
the ring. Add a producer-lock barrier to page_pool_release() to prevent
the page pool from being freed before all pages have been recycled.

recycle_stat_inc() is empty when CONFIG_PAGE_POOL_STATS is not enabled,
which triggers a -Wempty-body build warning. Add do/while(0) definitions
for the pool stat macros to fix the warning.
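The lock-as-barrier idea behind the fix can be sketched in plain userspace
pthreads (this is a model, not the kernel API; `fake_pool`,
`recycle_last_page`, and `fake_pool_release` are illustrative names): the
producer does its whole recycle, ring produce plus stat bump, inside the
producer lock, and the release path takes and drops that same lock before
freeing, so the free cannot overtake a producer's critical section.

```c
#include <pthread.h>
#include <stdlib.h>

struct fake_pool {
	pthread_mutex_t producer_lock;
	int ring_stat;		/* stands in for recycle_stat_inc(pool, ring) */
};

static void *recycle_last_page(void *arg)
{
	struct fake_pool *pool = arg;

	pthread_mutex_lock(&pool->producer_lock);
	pool->ring_stat++;	/* produce + stat bump, all under the lock */
	pthread_mutex_unlock(&pool->producer_lock);
	return NULL;
}

/* Release path; returns the stat value observed after the barrier. */
int fake_pool_release(void)
{
	struct fake_pool *pool = calloc(1, sizeof(*pool));
	pthread_t producer;
	int stat;

	pthread_mutex_init(&pool->producer_lock, NULL);
	pthread_create(&producer, NULL, recycle_last_page, pool);
	pthread_join(producer, NULL);

	/*
	 * The lock/unlock pair acts as a barrier: once the release path
	 * owns the lock, no producer can still be inside its critical
	 * section touching the pool.
	 */
	pthread_mutex_lock(&pool->producer_lock);
	stat = pool->ring_stat;
	pthread_mutex_unlock(&pool->producer_lock);

	pthread_mutex_destroy(&pool->producer_lock);
	free(pool);		/* safe only after the barrier above */
	return stat;
}
```

The kernel patch cannot join its producers, of course; the empty
lock/unlock pair in page_pool_release() is precisely what guarantees any
producer that already published the last page has also left the critical
section before free(pool) runs.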
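The -Wempty-body part of the change can be illustrated with a minimal
userspace sketch (the macro name mirrors the patch; `try_recycle` is a
hypothetical caller): with the old empty definition,
`if (ret) recycle_stat_inc(pool, ring);` expands to `if (ret) ;`, which
gcc and clang flag under -Wempty-body, while a `do { } while (0)` body is
a real statement that still compiles to nothing.

```c
#include <stdbool.h>

/* before (warns):  #define recycle_stat_inc(pool, __stat) */
#define recycle_stat_inc(...)	do { } while (0)	/* after: silent */

bool try_recycle(bool ret)
{
	if (ret)
		recycle_stat_inc(pool, ring);	/* args are discarded unevaluated */
	return ret;
}
```

Note the variadic macro swallows its arguments before compilation, so
`pool` and `ring` never need to exist as declared identifiers.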
Suggested-by: Jakub Kicinski
Link: https://lore.kernel.org/netdev/20250513083123.3514193-1-dongchenchen2@huawei.com
Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
Reported-by: syzbot+204a4382fcb3311f3858@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=204a4382fcb3311f3858
Signed-off-by: Dong Chenchen
Reviewed-by: Toke Høiland-Jørgensen
Reviewed-by: Mina Almasry
---
Changes in v2:
 - Change the return type from int to bool
 - Fix Wempty-body warn of recycle_stat_inc
---
 net/core/page_pool.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 7745ad924ae2..862ec1dc48df 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -153,9 +153,9 @@ u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
 EXPORT_SYMBOL(page_pool_ethtool_stats_get);
 
 #else
-#define alloc_stat_inc(pool, __stat)
-#define recycle_stat_inc(pool, __stat)
-#define recycle_stat_add(pool, __stat, val)
+#define alloc_stat_inc(...) do { } while (0)
+#define recycle_stat_inc(...) do { } while (0)
+#define recycle_stat_add(...) do { } while (0)
 #endif
 
 static bool page_pool_producer_lock(struct page_pool *pool)
@@ -707,19 +707,16 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
 
 static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
 {
-	int ret;
-	/* BH protection not needed if current is softirq */
-	if (in_softirq())
-		ret = ptr_ring_produce(&pool->ring, (__force void *)netmem);
-	else
-		ret = ptr_ring_produce_bh(&pool->ring, (__force void *)netmem);
+	bool in_softirq, ret;
 
-	if (!ret) {
+	/* BH protection not needed if current is softirq */
+	in_softirq = page_pool_producer_lock(pool);
+	ret = !__ptr_ring_produce(&pool->ring, (__force void *)netmem);
+	if (ret)
 		recycle_stat_inc(pool, ring);
-		return true;
-	}
+	page_pool_producer_unlock(pool, in_softirq);
 
-	return false;
+	return ret;
 }
 
 /* Only allow direct recycling in special circumstances, into the
@@ -1091,10 +1088,14 @@ static void page_pool_scrub(struct page_pool *pool)
 
 static int page_pool_release(struct page_pool *pool)
 {
+	bool in_softirq;
 	int inflight;
 
 	page_pool_scrub(pool);
 	inflight = page_pool_inflight(pool, true);
+	/* Acquire producer lock to make sure producers have exited. */
+	in_softirq = page_pool_producer_lock(pool);
+	page_pool_producer_unlock(pool, in_softirq);
 	if (!inflight)
 		__page_pool_destroy(pool);
 
-- 
2.25.1