Date: Wed, 11 Mar 2026 11:27:16 +0800 (CST)
From: "Jianzhou Zhao"
To: linux-kernel@vger.kernel.org, aliceryhl@google.com, Liam.Howlett@oracle.com, andrewjballance@gmail.com, akpm@linux-foundation.org, maple-tree@lists.infradead.org, linux-mm@kvack.org
Subject: [BUG] maple_tree: KCSAN: data-race in mas_wr_node_store / mtree_range_walk
Message-ID: <480ffa8f.3729.19cdaef38bd.Coremail.luckd0g@163.com>

Dear Maintainers,

We are writing to report a KCSAN-detected data race in the Linux kernel, found by our custom fuzzing tool, RacePilot. The race is in the maple tree code and occurs when a node store (tree modification) runs concurrently with a lockless tree traversal under RCU. We observed it on kernel version 6.18.0-08691-g2061f18ad76e-dirty.
Call Trace & Context
=====================================================================

BUG: KCSAN: data-race in mas_wr_node_store / mtree_range_walk

write to 0xffff888023e00900 of 8 bytes by task 62996 on cpu 0:
 mte_set_node_dead home/kfuzz/linux/lib/maple_tree.c:335 [inline]
 mas_put_in_tree home/kfuzz/linux/lib/maple_tree.c:1571 [inline]
 mas_replace_node home/kfuzz/linux/lib/maple_tree.c:1587 [inline]
 mas_wr_node_store+0xa5c/0xc10 home/kfuzz/linux/lib/maple_tree.c:3568
 mas_wr_store_entry+0xabd/0x1120 home/kfuzz/linux/lib/maple_tree.c:3780
 mas_store_prealloc+0x47c/0xa60 home/kfuzz/linux/lib/maple_tree.c:5191
 vma_iter_store_overwrite home/kfuzz/linux/mm/vma.h:481 [inline]
 vma_iter_store_new home/kfuzz/linux/mm/vma.h:488 [inline]
 __mmap_new_vma home/kfuzz/linux/mm/vma.c:2508 [inline]
 __mmap_region+0x12d5/0x1ef0 home/kfuzz/linux/mm/vma.c:2681
 mmap_region+0x15f/0x260 home/kfuzz/linux/mm/vma.c:2751
 do_mmap+0x754/0xcd0 home/kfuzz/linux/mm/mmap.c:558
 vm_mmap_pgoff+0x15d/0x2e0 home/kfuzz/linux/mm/util.c:587
 ksys_mmap_pgoff+0x7d/0x380 home/kfuzz/linux/mm/mmap.c:604
 __do_sys_mmap home/kfuzz/linux/arch/x86/kernel/sys_x86_64.c:89 [inline]
 __se_sys_mmap home/kfuzz/linux/arch/x86/kernel/sys_x86_64.c:82 [inline]
 __x64_sys_mmap+0x71/0xa0 home/kfuzz/linux/arch/x86/kernel/sys_x86_64.c:82
 x64_sys_call+0x1b42/0x2030 home/kfuzz/linux/arch/x86/include/generated/asm/syscalls_64.h:10
 do_syscall_x64 home/kfuzz/linux/arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xae/0x2c0 home/kfuzz/linux/arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff888023e00900 of 8 bytes by task 62997 on cpu 1:
 ma_dead_node home/kfuzz/linux/lib/maple_tree.c:576 [inline]
 mtree_range_walk+0x11e/0x630 home/kfuzz/linux/lib/maple_tree.c:2594
 mas_state_walk home/kfuzz/linux/lib/maple_tree.c:3313 [inline]
 mas_walk+0x2a4/0x400 home/kfuzz/linux/lib/maple_tree.c:4617
 lock_vma_under_rcu+0xd3/0x710 home/kfuzz/linux/mm/mmap_lock.c:238
 do_user_addr_fault home/kfuzz/linux/arch/x86/mm/fault.c:1327 [inline]
 handle_page_fault home/kfuzz/linux/arch/x86/mm/fault.c:1476 [inline]
 exc_page_fault+0x294/0x10d0 home/kfuzz/linux/arch/x86/mm/fault.c:1532
 asm_exc_page_fault+0x26/0x30 home/kfuzz/linux/arch/x86/include/asm/idtentry.h:618

value changed: 0xffff88800bf0d706 -> 0xffff888023e00900

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 62997 Comm: syz.8.4355 Not tainted 6.18.0-08691-g2061f18ad76e-dirty #42 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
=====================================================================

Execution Flow & Code Context

On CPU 0, a task is modifying the maple tree that maps memory ranges, via `__mmap_region`. The update path runs through `mas_wr_node_store()`, which calls `mas_replace_node()` to swap the old node for the new one. As part of the replacement, `mte_set_node_dead()` marks the old node dead with a plain C store that points `node->parent` back at the node itself:

```c
// lib/maple_tree.c
static inline void mte_set_node_dead(struct maple_enode *mn)
{
	mte_to_node(mn)->parent = ma_parent_ptr(mte_to_node(mn)); // <-- Write
	smp_wmb(); /* Needed for RCU */
}
```

Meanwhile, CPU 1 handles a page fault via the lockless RCU lookup in `lock_vma_under_rcu`. The traversal routine `mtree_range_walk()` calls `ma_dead_node()` on each node it fetches to make sure it has not stepped into a dead node.
`ma_dead_node()` fetches `node->parent` with a plain, unannotated C load:

```c
// lib/maple_tree.c
static __always_inline bool ma_dead_node(const struct maple_node *node)
{
	struct maple_node *parent;

	/* Do not reorder reads from the node prior to the parent check */
	smp_rmb();
	parent = (void *)((unsigned long)node->parent & ~MAPLE_NODE_MASK); // <-- Lockless Read
	return (parent == node);
}
```

Root Cause Analysis

The race is on `node->parent`: the writer marks the node dead with a plain store (`mte_set_node_dead()`), while the page-fault fast path concurrently performs a plain load of the same field (`ma_dead_node()`) to decide whether the node is still live. Neither access is marked, so as far as the compiler is concerned the accesses are unsynchronized.

Unfortunately, we were unable to generate a reproducer for this bug.

Potential Impact

Because the accesses lack compiler annotations, `ma_dead_node()` may observe a torn or stale pointer (load/store tearing on some architectures, or aggressive optimizations such as value caching and hoisting). A dead node could then be misjudged as alive, or vice versa, which could lead to use-after-free, memory corruption, infinite loops inside the maple tree navigation routines, or a local denial of service under heavy concurrent page-fault load.

Proposed Fix

To resolve the data race without hurting the performance of the RCU walk path, we suggest marking the `node->parent` accesses with the kernel's standard annotations: `WRITE_ONCE()` on the writer side and `READ_ONCE()` on the reader side.
```diff
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -332,7 +332,7 @@ static inline struct maple_node *mas_mn(const struct ma_state *mas)
 
 static inline void mte_set_node_dead(struct maple_enode *mn)
 {
-	mte_to_node(mn)->parent = ma_parent_ptr(mte_to_node(mn));
+	WRITE_ONCE(mte_to_node(mn)->parent, ma_parent_ptr(mte_to_node(mn)));
 	smp_wmb(); /* Needed for RCU */
 }
 
@@ -576,7 +576,8 @@ static __always_inline bool ma_dead_node(const struct maple_node *node)
 
 	/* Do not reorder reads from the node prior to the parent check */
 	smp_rmb();
-	parent = (void *)((unsigned long)node->parent & ~MAPLE_NODE_MASK);
+	parent = (void *)((unsigned long)READ_ONCE(node->parent) &
+			  ~MAPLE_NODE_MASK);
 	return (parent == node);
 }
```

We hope this report is helpful.

Best regards,
RacePilot Team