Date: Wed, 11 Mar 2026 15:44:35 +0800 (CST)
From: "Jianzhou Zhao"
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, aliceryhl@google.com, Liam.Howlett@oracle.com, andrewjballance@gmail.com, maple-tree@lists.infradead.org
Message-ID: <55532e16.66b6.19cdbdacd7d.Coremail.luckd0g@163.com>
Subject: [BUG] maple_tree: KCSAN: data-race in mas_wr_store_entry / mtree_range_walk

Dear Maintainers,

We are writing to report a KCSAN-detected data race in the Linux kernel's maple tree subsystem, found by our custom fuzzing tool, RacePilot. The bug occurs because `mtree_range_walk()` reads tree node pivots locklessly while a writer concurrently updates those bounds through `mas_wr_slot_store()` without explicit memory-access annotations. We observed this on Linux kernel version 6.18.0-08691-g2061f18ad76e-dirty.
Call Trace & Context
==================================================================
BUG: KCSAN: data-race in mas_wr_store_entry / mtree_range_walk

write to 0xffff88800c50aa30 of 8 bytes by task 214464 on cpu 1:
 mas_wr_slot_store lib/maple_tree.c:3601 [inline]
 mas_wr_store_entry+0xf54/0x1120 lib/maple_tree.c:3777
 mas_store_prealloc+0x47c/0xa60 lib/maple_tree.c:5191
 vma_iter_store_overwrite mm/vma.h:481 [inline]
 commit_merge+0x3ea/0x740 mm/vma.c:766
 vma_merge_existing_range mm/vma.c:980 [inline]
 vma_modify+0x5ff/0xdd0 mm/vma.c:1620
 vma_modify_flags+0x16c/0x1a0 mm/vma.c:1662
 mprotect_fixup+0x170/0x660 mm/mprotect.c:816
 do_mprotect_pkey+0x5fe/0x930 mm/mprotect.c:990
 __do_sys_mprotect mm/mprotect.c:1011 [inline]
 __se_sys_mprotect mm/mprotect.c:1008 [inline]
 __x64_sys_mprotect+0x47/0x60 mm/mprotect.c:1008
 x64_sys_call+0xc6c/0x2030 arch/x86/include/generated/asm/syscalls_64.h:11
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xae/0x2c0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff88800c50aa30 of 8 bytes by task 214467 on cpu 0:
 mtree_range_walk+0x368/0x630 lib/maple_tree.c:2581
 mas_state_walk lib/maple_tree.c:3313 [inline]
 mas_walk+0x2a4/0x400 lib/maple_tree.c:4617
 lock_vma_under_rcu+0xd3/0x710 mm/mmap_lock.c:238
 do_user_addr_fault arch/x86/mm/fault.c:1327 [inline]
 handle_page_fault arch/x86/mm/fault.c:1476 [inline]
 exc_page_fault+0x294/0x10d0 arch/x86/mm/fault.c:1532
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618

value changed: 0x00007f571d344fff -> 0x00007f571d324fff

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 UID: 0 PID: 214467 Comm: syz.8.13076 Not tainted 6.18.0-08691-g2061f18ad76e-dirty #44 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
==================================================================

Execution Flow & Code Context

During a virtual memory modification (via `mprotect`), `vma_modify()` invokes `mas_store_prealloc()`, which calls `mas_wr_store_entry()` -> `mas_wr_slot_store()` to store the new entry and overwrite the node's boundaries. This updates the `wr_mas->pivots` array with plain assignments:

```c
// lib/maple_tree.c
static inline void mas_wr_slot_store(struct ma_wr_state *wr_mas)
{
	...
	if (mas->index == wr_mas->r_min) {
		/* Overwriting the range and a part of the next one */
		rcu_assign_pointer(slots[offset], wr_mas->entry);
		wr_mas->pivots[offset] = mas->last;	/* <-- plain write */
	} else {
		...
	}
```

Concurrently, a page fault handler (`do_user_addr_fault()`) tries to lock the VMA under RCU via `mas_walk()`, which descends into `mtree_range_walk()`. The reader accesses `pivots[offset]` locklessly, comparing each pivot against `mas->index` to find the slot whose range covers the lookup index:

```c
// lib/maple_tree.c
static inline void *mtree_range_walk(struct ma_state *mas)
{
	...
	if (pivots[0] >= mas->index) {			/* <-- plain read */
		offset = 0;
		max = pivots[0];			/* <-- potential second, inconsistent read */
		goto next;
	}

	offset = 1;
	while (offset < end) {
		if (pivots[offset] >= mas->index) {	/* <-- plain read */
			max = pivots[offset];
			break;
		}
		offset++;
	}
	...
}
```

Root Cause Analysis

The reader in `mtree_range_walk()` walks the `pivots[]` boundaries locklessly under RCU, while `mas_wr_slot_store()` can concurrently update the very same array. Both accesses are compiled as plain C loads and stores.
Because `pivots[]` is updated with plain stores, a read could tear, letting `mtree_range_walk()` observe an intermediate bounds state and assign a broken limit to `max`.

Furthermore, `pivots[offset]` is fetched twice in close succession (once in the comparison and again in `max = pivots[offset]`), so the two loads may return inconsistent values.

Unfortunately, we were unable to generate a reproducer for this bug.

Potential Impact

A torn pivot read, or two inconsistent reads of the same pivot, can misdirect the range walk entirely, returning the wrong VMA or breaking the tree-navigation logic. Under high memory stress this could cause local denial of service (DoS) or kernel panics on unmapped memory due to mismatched virtual mappings.

Proposed Fix

To inform the compiler that these slots are concurrently accessed by RCU readers and to avoid torn accesses, `WRITE_ONCE()` should be used when updating pivots in `mas_wr_slot_store()`. Correspondingly, the walker should fetch each pivot exactly once via `READ_ONCE()` into a local variable before using it:

```diff
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2577,15 +2577,18 @@ static inline void *mtree_range_walk(struct ma_state *mas)
 		end = ma_data_end(node, type, pivots, max);
 		prev_min = min;
 		prev_max = max;
-		if (pivots[0] >= mas->index) {
+
+		unsigned long pivot = READ_ONCE(pivots[0]);
+		if (pivot >= mas->index) {
 			offset = 0;
-			max = pivots[0];
+			max = pivot;
 			goto next;
 		}
 
 		offset = 1;
 		while (offset < end) {
-			if (pivots[offset] >= mas->index) {
-				max = pivots[offset];
+			pivot = READ_ONCE(pivots[offset]);
+			if (pivot >= mas->index) {
+				max = pivot;
 				break;
 			}
@@ -3605,10 +3613,10 @@ static inline void mas_wr_slot_store(struct ma_wr_state *wr_mas)
 		if (mas->index == wr_mas->r_min) {
 			/* Overwriting the range and a part of the next one */
 			rcu_assign_pointer(slots[offset], wr_mas->entry);
-			wr_mas->pivots[offset] = mas->last;
+			WRITE_ONCE(wr_mas->pivots[offset], mas->last);
 		} else {
 			/* Overwriting a part of the range and the next one */
 			rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
-			wr_mas->pivots[offset] = mas->index - 1;
+			WRITE_ONCE(wr_mas->pivots[offset], mas->index - 1);
 			mas->offset++; /* Keep mas accurate. */
 		}
 	} else {
@@ -3621,8 +3629,8 @@ static inline void mas_wr_slot_store(struct ma_wr_state *wr_mas)
 		 */
 		gap |= !mt_slot_locked(mas->tree, slots, offset + 2);
 		rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
-		wr_mas->pivots[offset] = mas->index - 1;
-		wr_mas->pivots[offset + 1] = mas->last;
+		WRITE_ONCE(wr_mas->pivots[offset], mas->index - 1);
+		WRITE_ONCE(wr_mas->pivots[offset + 1], mas->last);
 		mas->offset++; /* Keep mas accurate. */
 	}
```

We hope this report is of help.

Best regards,
RacePilot Team