From nobody Thu Apr 9 04:04:24 2026
Date: Wed, 11 Mar 2026 15:32:51 +0800 (CST)
From: "Jianzhou Zhao"
To: shikemeng@huaweicloud.com, chrisl@kernel.org, akpm@linux-foundation.org, kasong@tencent.com, nphamcs@gmail.com, bhe@redhat.com, baohua@kernel.org, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [BUG] mm, swap: KCSAN: data-race in __folio_batch_add_and_move / __lru_add_drain_all
Message-ID: <39f32cfd.6357.19cdbd00f48.Coremail.luckd0g@163.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Dear Maintainers,

We are writing to report a KCSAN-detected data race in the Linux kernel. The bug was found by our custom fuzzing tool, RacePilot. It occurs when folio batch addition races with a global LRU drain operation in the memory management subsystem. We observed this on Linux kernel version 6.18.0-08691-g2061f18ad76e-dirty.
Call Trace & Context
==================================================================
BUG: KCSAN: data-race in __folio_batch_add_and_move / __lru_add_drain_all

write to 0xffff88807dd267e0 of 1 bytes by task 11894 on cpu 1:
 folio_batch_add include/linux/pagevec.h:80 [inline]
 __folio_batch_add_and_move+0x7f/0x1b0 mm/swap.c:196
 folio_add_lru+0xbe/0xd0 mm/swap.c:511
 folio_add_lru_vma+0x47/0x70 mm/swap.c:530
 wp_page_copy mm/memory.c:3784 [inline]
 do_wp_page+0xda9/0x2000 mm/memory.c:4180
 handle_pte_fault mm/memory.c:6303 [inline]
 __handle_mm_fault+0xb6c/0x21f0 mm/memory.c:6421
 handle_mm_fault+0x2ee/0x820 mm/memory.c:6590
 do_user_addr_fault arch/x86/mm/fault.c:1336 [inline]
 handle_page_fault arch/x86/mm/fault.c:1476 [inline]
 exc_page_fault+0x398/0x10d0 arch/x86/mm/fault.c:1532
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618

read to 0xffff88807dd267e0 of 1 bytes by task 5526 on cpu 0:
 folio_batch_count include/linux/pagevec.h:58 [inline]
 cpu_needs_drain mm/swap.c:784 [inline]
 __lru_add_drain_all+0x2e3/0x5a0 mm/swap.c:881
 lru_add_drain_all+0x10/0x20 mm/swap.c:903
 invalidate_bdev+0x7a/0xb0 block/bdev.c:106
 ext4_put_super+0x5dd/0x8f0 fs/ext4/super.c:1348
 generic_shutdown_super+0xec/0x200 fs/super.c:643
 kill_block_super+0x29/0x60 fs/super.c:1730
 ext4_kill_sb+0x48/0x90 fs/ext4/super.c:7444
 deactivate_locked_super+0x72/0x210 fs/super.c:474
 deactivate_super fs/super.c:507 [inline]
 deactivate_super+0x8b/0xa0 fs/super.c:503
 cleanup_mnt+0x22f/0x2c0 fs/namespace.c:1318
 __cleanup_mnt+0x16/0x20 fs/namespace.c:1325
 task_work_run+0x105/0x190 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
 exit_to_user_mode_loop+0x129/0x7d0 kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
 do_syscall_64+0x27f/0x2c0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x0f -> 0x10

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 UID: 0 PID: 5526 Comm: syz-executor Not tainted 6.18.0-08691-g2061f18ad76e-dirty #44 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
==================================================================

Execution Flow & Code Context

On CPU 1, a page fault handler allocates a new folio and inserts it into the LRU list, reaching `__folio_batch_add_and_move()`. There, the local CPU's batch is updated via `folio_batch_add()`, which performs a plain write to `fbatch->nr`:

```c
// include/linux/pagevec.h
static inline unsigned folio_batch_add(struct folio_batch *fbatch,
		struct folio *folio)
{
	fbatch->folios[fbatch->nr++] = folio; /* <-- plain write */
	return folio_batch_space(fbatch);
}
```

Meanwhile, on CPU 0, `lru_add_drain_all()` iterates over all online CPUs and checks whether their local folio batches need flushing. The check relies on `cpu_needs_drain()`, which calls `folio_batch_count()` to inspect the remote CPUs' `fbatch` fields.
This translates to an unannotated read of `fbatch->nr`:

```c
// mm/swap.c
static bool cpu_needs_drain(unsigned int cpu)
{
	struct cpu_fbatches *fbatches = &per_cpu(cpu_fbatches, cpu);

	/* Check these in order of likelihood that they're not zero */
	return folio_batch_count(&fbatches->lru_add) ||        /* <-- lockless read */
	       folio_batch_count(&fbatches->lru_move_tail) ||
	       ...
}

// include/linux/pagevec.h
static inline unsigned int folio_batch_count(const struct folio_batch *fbatch)
{
	return fbatch->nr; /* <-- lockless read */
}
```

Root Cause Analysis

The data race arises because the writer, `folio_batch_add()`, is strictly local to its CPU and updates `fbatch->nr` with a plain store, taking no cross-CPU locks. Meanwhile, `cpu_needs_drain()` iterates over all CPUs and reads each remote `fbatch->nr` to conservatively decide whether LRU drain work needs to be scheduled globally. Since it reads the counters of other CPUs without synchronization, this is an intentional, benign data race used as an optimization heuristic.

Unfortunately, we were unable to generate a reproducer for this bug.

Potential Impact

Because the access is a benign cross-CPU inspection used to decide whether drain work should be offloaded, reading a slightly stale value merely leads to skipping a drain (if 0 is read) or scheduling a redundant drain (if a non-zero value is read for an already-empty batch). It is not functionally critical. However, the KCSAN warning produces noise that can hide more serious data races.

Proposed Fix

Since `cpu_needs_drain()` is designed to operate heuristically and tolerates the race safely, the unannotated `folio_batch_count()` reads can be wrapped in the `data_race()` macro to mark them as intentional lockless reads, properly silencing the sanitizer.
```diff
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -781,12 +781,12 @@ static bool cpu_needs_drain(unsigned int cpu)
 	struct cpu_fbatches *fbatches = &per_cpu(cpu_fbatches, cpu);
 
 	/* Check these in order of likelihood that they're not zero */
-	return folio_batch_count(&fbatches->lru_add) ||
-	       folio_batch_count(&fbatches->lru_move_tail) ||
-	       folio_batch_count(&fbatches->lru_deactivate_file) ||
-	       folio_batch_count(&fbatches->lru_deactivate) ||
-	       folio_batch_count(&fbatches->lru_lazyfree) ||
-	       folio_batch_count(&fbatches->lru_activate) ||
+	return data_race(folio_batch_count(&fbatches->lru_add)) ||
+	       data_race(folio_batch_count(&fbatches->lru_move_tail)) ||
+	       data_race(folio_batch_count(&fbatches->lru_deactivate_file)) ||
+	       data_race(folio_batch_count(&fbatches->lru_deactivate)) ||
+	       data_race(folio_batch_count(&fbatches->lru_lazyfree)) ||
+	       data_race(folio_batch_count(&fbatches->lru_activate)) ||
 	       need_mlock_drain(cpu) || has_bh_in_lru(cpu, NULL);
 }
```

We hope this report is helpful.

Best regards,
RacePilot Team