From nobody Wed Apr 8 02:48:22 2026
Date: Wed, 11 Mar 2026 11:31:08 +0800 (CST)
From: "Jianzhou Zhao" <luckd0g@163.com>
To: linux-kernel@vger.kernel.org, senozhatsky@chromium.org,
 pmladek@suse.com, rostedt@goodmis.org, john.ogness@linutronix.de
Subject: [BUG] printk: KCSAN: data-race in desc_read / prb_reserve_in_last
Message-ID: <1a32d127.3865.19cdaf2c166.Coremail.luckd0g@163.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Dear Maintainers,

We are writing to report a KCSAN-detected data race in the Linux kernel. This bug was found by our custom fuzzing tool, RacePilot. The race occurs in the printk ringbuffer, between an unsynchronized update of a data block's logical position limits and a concurrent unannotated read of those limits. We observed this on Linux kernel version 6.18.0-08691-g2061f18ad76e-dirty.
Call Trace & Context
====================================================================

BUG: KCSAN: data-race in desc_read / prb_reserve_in_last

write to 0xffffffff869276a8 of 8 bytes by task 14248 on cpu 0:
 data_realloc kernel/printk/printk_ringbuffer.c:1252 [inline]
 prb_reserve_in_last+0x831/0xb20 kernel/printk/printk_ringbuffer.c:1529
 vprintk_store+0x603/0x980 kernel/printk/printk.c:2283
 vprintk_emit+0xfd/0x540 kernel/printk/printk.c:2412
 vprintk_default+0x26/0x30 kernel/printk/printk.c:2451
 vprintk+0x1d/0x30 kernel/printk/printk_safe.c:82
 _printk+0x63/0x90 kernel/printk/printk.c:2461
 disk_unlock_native_capacity block/partitions/core.c:520 [inline]
 blk_add_partition block/partitions/core.c:543 [inline]
 blk_add_partitions block/partitions/core.c:633 [inline]
 bdev_disk_changed block/partitions/core.c:693 [inline]
 bdev_disk_changed+0xae3/0xeb0 block/partitions/core.c:642
 loop_reread_partitions+0x44/0xc0 drivers/block/loop.c:449
 loop_set_status+0x41c/0x580 drivers/block/loop.c:1278
 loop_set_status64 drivers/block/loop.c:1374 [inline]
 lo_ioctl+0xf0/0x1170 drivers/block/loop.c:1560
 blkdev_ioctl+0x377/0x420 block/ioctl.c:707
 vfs_ioctl fs/ioctl.c:52 [inline]
 __do_sys_ioctl fs/ioctl.c:605 [inline]
 __se_sys_ioctl fs/ioctl.c:591 [inline]
 __x64_sys_ioctl+0x121/0x170 fs/ioctl.c:591
 x64_sys_call+0xc3a/0x2030 arch/x86/include/generated/asm/syscalls_64.h:17
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xae/0x2c0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffffffff869276a0 of 16 bytes by task 3004 on cpu 1:
 desc_read+0x115/0x250 kernel/printk/printk_ringbuffer.c:499
 desc_read_finalized_seq+0x40/0x140 kernel/printk/printk_ringbuffer.c:1972
 prb_read kernel/printk/printk_ringbuffer.c:2018 [inline]
 _prb_read_valid+0xc1/0x550 kernel/printk/printk_ringbuffer.c:2213
 prb_read_valid_info+0x74/0xa0 kernel/printk/printk_ringbuffer.c:2321
 devkmsg_poll+0xa1/0x120 kernel/printk/printk.c:906
 vfs_poll include/linux/poll.h:82 [inline]
 ep_item_poll.isra.0+0xb0/0x110 fs/eventpoll.c:1059
 ep_send_events+0x231/0x670 fs/eventpoll.c:1818
 ep_try_send_events fs/eventpoll.c:1905 [inline]
 ep_poll fs/eventpoll.c:1970 [inline]
 do_epoll_wait+0x2a8/0x9c0 fs/eventpoll.c:2461
 __do_sys_epoll_wait fs/eventpoll.c:2469 [inline]
 __se_sys_epoll_wait fs/eventpoll.c:2464 [inline]
 __x64_sys_epoll_wait+0xcb/0x190 fs/eventpoll.c:2464
 x64_sys_call+0x194e/0x2030 arch/x86/include/generated/asm/syscalls_64.h:233
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xae/0x2c0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 3004 Comm: systemd-journal Not tainted 6.18.0-08691-g2061f18ad76e-dirty #50 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
====================================================================

Execution Flow & Code Context

The writer task executes `prb_reserve_in_last()`, which reallocates and extends the space for the newest record's data by calling `data_realloc()`. Inside `data_realloc()`, it updates `blk_lpos->next` (which points to `desc->text_blk_lpos.next`) with a plain write:

```c
// kernel/printk/printk_ringbuffer.c
static char *data_realloc(struct printk_ringbuffer *rb, unsigned int size,
			  struct prb_data_blk_lpos *blk_lpos, unsigned long id)
{
	...
	blk_lpos->next = next_lpos; // <-- plain write
	return &blk->data[0];
}
```

Concurrently, another task acting as a reader accesses the ringbuffer via `desc_read()` to copy the descriptor structure into a local copy, `desc_out`.
It uses an unsynchronized `memcpy()` to read the `text_blk_lpos` structure, which includes the 16 bytes representing both the `begin` and `next` positions:

```c
// kernel/printk/printk_ringbuffer.c
static enum desc_state desc_read(struct prb_desc_ring *desc_ring,
				 unsigned long id, struct prb_desc *desc_out,
				 u64 *seq_out, u32 *caller_id_out)
{
	...
	if (desc_out) {
		memcpy(&desc_out->text_blk_lpos, &desc->text_blk_lpos,
		       sizeof(desc_out->text_blk_lpos)); /* LMM(desc_read:C) */ // <-- lockless read
	}
	...
}
```

Root Cause Analysis

A data race occurs because the writer modifies `desc->text_blk_lpos.next` (via `blk_lpos->next = next_lpos` in `data_realloc()`) without synchronization, while a reader concurrently copies the entire `text_blk_lpos` structure via `memcpy()`. Since plain writes and `memcpy()` carry no atomic annotations, the compiler is free to tear the loads and stores, or to optimize them under the assumption of exclusive access. As a result, the reader can copy a torn or partially updated `text_blk_lpos` record.

Unfortunately, we were unable to generate a reproducer for this bug.

Potential Impact

If the `memcpy()` in `desc_read()` tears the read, `desc_out->text_blk_lpos` may end up containing a logically inconsistent `begin`/`next` pair (e.g., an obsolete `begin` combined with an updated `next`, or simply a mangled 64-bit value). This could cause the printk readers to miscalculate the bounds of the data block, leading to reads of corrupted text data, out-of-bounds traversal of the text data ring, or infinite loops within the reader utilities, thereby causing a local DoS or syslog corruption.

Proposed Fix

To avoid tearing and resolve the sanitiser warnings while adhering to the memory model, we should replace the plain `memcpy()` in `desc_read()` with individual `READ_ONCE()` reads of the `begin` and `next` fields.
In addition, we should proactively wrap the concurrent assignments to `blk_lpos` in `data_alloc()` and `data_realloc()` with `WRITE_ONCE()`.

```diff
--- a/kernel/printk/printk_ringbuffer.c
+++ b/kernel/printk/printk_ringbuffer.c
@@ -496,8 +496,8 @@ static enum desc_state desc_read(struct prb_desc_ring *desc_ring,
 	 * cannot be used because of the atomic_t @state_var field.
 	 */
 	if (desc_out) {
-		memcpy(&desc_out->text_blk_lpos, &desc->text_blk_lpos,
-		       sizeof(desc_out->text_blk_lpos)); /* LMM(desc_read:C) */
+		desc_out->text_blk_lpos.begin = READ_ONCE(desc->text_blk_lpos.begin);
+		desc_out->text_blk_lpos.next = READ_ONCE(desc->text_blk_lpos.next);
 	}
 	if (seq_out) {
 		*seq_out = info->seq; /* also part of desc_read:C */
@@ -1083,8 +1083,8 @@ static char *data_alloc(struct printk_ringbuffer *rb, unsigned int size,
 		 * reader will recognize these special lpos values and handle
 		 * it appropriately.
 		 */
-		blk_lpos->begin = EMPTY_LINE_LPOS;
-		blk_lpos->next = EMPTY_LINE_LPOS;
+		WRITE_ONCE(blk_lpos->begin, EMPTY_LINE_LPOS);
+		WRITE_ONCE(blk_lpos->next, EMPTY_LINE_LPOS);
 		return NULL;
 	}
 
@@ -1107,8 +1107,8 @@ static char *data_alloc(struct printk_ringbuffer *rb, unsigned int size,
 	if (WARN_ON_ONCE(next_lpos - begin_lpos > DATA_SIZE(data_ring)) ||
 	    !data_push_tail(rb, next_lpos - DATA_SIZE(data_ring))) {
-		blk_lpos->begin = FAILED_LPOS;
-		blk_lpos->next = FAILED_LPOS;
+		WRITE_ONCE(blk_lpos->begin, FAILED_LPOS);
+		WRITE_ONCE(blk_lpos->next, FAILED_LPOS);
 		return NULL;
 	}
 
@@ -1148,8 +1148,8 @@ static char *data_alloc(struct printk_ringbuffer *rb, unsigned int size,
 		blk->id = id;
 	}
 
-	blk_lpos->begin = begin_lpos;
-	blk_lpos->next = next_lpos;
+	WRITE_ONCE(blk_lpos->begin, begin_lpos);
+	WRITE_ONCE(blk_lpos->next, next_lpos);
 
 	return &blk->data[0];
 }
```

We hope this report is helpful.

Best regards,
RacePilot Team