From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Shuah Khan, linux-kselftest@vger.kernel.org, Beau Belgrave
Subject: [for-linus][PATCH 1/5] selftests/user_events: Test struct size match cases
Date: Wed, 12 Jul 2023 17:50:45 -0400
Message-ID: <20230712220056.899064942@goodmis.org>

From: Beau Belgrave

The self-tests for user_events currently do not ensure that the edge
cases for struct types work properly when sizes differ.

Add cases for mismatched struct names and sizes to ensure they work
properly.
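For context, the behavior these new cases pin down is that two registrations only describe the same event when the struct type name, the field name, and the declared byte size all agree. A rough standalone sketch of that comparison rule (plain C; the helper and its parsing are illustrative assumptions, not the kernel's parser):

```c
#include <stdio.h>
#include <string.h>

/*
 * Illustrative only: compares two "struct <name> <field> <size>" field
 * descriptions the way the selftest cases below expect them to behave --
 * they match only when the struct name, the field name, and the size
 * are all identical.  This is not the kernel's parser.
 */
static int struct_fields_match(const char *a, const char *b)
{
	char name_a[64], field_a[64], name_b[64], field_b[64];
	unsigned int size_a, size_b;

	if (sscanf(a, "struct %63s %63s %u", name_a, field_a, &size_a) != 3)
		return 0;
	if (sscanf(b, "struct %63s %63s %u", name_b, field_b, &size_b) != 3)
		return 0;

	return strcmp(name_a, name_b) == 0 &&
	       strcmp(field_a, field_b) == 0 &&
	       size_a == size_b;
}

int main(void)
{
	/* Mirrors the three cases added by the patch below. */
	printf("%d\n", struct_fields_match("struct my_struct a 20",
					   "struct my_struct a 20")); /* 1 */
	printf("%d\n", struct_fields_match("struct my_struct a 20",
					   "struct my_struct b 20")); /* 0 */
	printf("%d\n", struct_fields_match("struct my_struct a 20",
					   "struct my_struct a 21")); /* 0 */
	return 0;
}
```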
Link: https://lkml.kernel.org/r/20230629235049.581-3-beaub@linux.microsoft.com

Cc: Shuah Khan
Cc: linux-kselftest@vger.kernel.org
Signed-off-by: Beau Belgrave
Signed-off-by: Steven Rostedt (Google)
---
 tools/testing/selftests/user_events/dyn_test.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/testing/selftests/user_events/dyn_test.c b/tools/testing/selftests/user_events/dyn_test.c
index d6979a48478f..91a4444ad42b 100644
--- a/tools/testing/selftests/user_events/dyn_test.c
+++ b/tools/testing/selftests/user_events/dyn_test.c
@@ -217,6 +217,18 @@ TEST_F(user, matching) {
 	/* Types don't match */
 	TEST_NMATCH("__test_event u64 a; u64 b",
 		    "__test_event u32 a; u32 b");
+
+	/* Struct name and size matches */
+	TEST_MATCH("__test_event struct my_struct a 20",
+		   "__test_event struct my_struct a 20");
+
+	/* Struct name don't match */
+	TEST_NMATCH("__test_event struct my_struct a 20",
+		    "__test_event struct my_struct b 20");
+
+	/* Struct size don't match */
+	TEST_NMATCH("__test_event struct my_struct a 20",
+		    "__test_event struct my_struct a 21");
 }
 
 int main(int argc, char **argv)
-- 
2.40.1


From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Will Deacon, Kees Cook, Florent Revest, Arnd Bergmann, Catalin Marinas
Subject: [for-linus][PATCH 2/5] tracing: arm64: Avoid missing-prototype warnings
Date: Wed, 12 Jul 2023 17:50:46 -0400
Message-ID: <20230712220057.107996460@goodmis.org>

From: Arnd Bergmann

These are all tracing W=1 warnings in arm64 allmodconfig about missing
prototypes:

kernel/trace/trace_kprobe_selftest.c:7:5: error: no previous prototype for 'kprobe_trace_selftest_target' [-Werror=missing-prototypes]
kernel/trace/ftrace.c:329:5: error: no previous prototype for '__register_ftrace_function' [-Werror=missing-prototypes]
kernel/trace/ftrace.c:372:5: error: no previous prototype for '__unregister_ftrace_function' [-Werror=missing-prototypes]
kernel/trace/ftrace.c:4130:15: error: no previous prototype for 'arch_ftrace_match_adjust' [-Werror=missing-prototypes]
kernel/trace/fgraph.c:243:15: error: no previous prototype for 'ftrace_return_to_handler' [-Werror=missing-prototypes]
kernel/trace/fgraph.c:358:6: error: no previous prototype for 'ftrace_graph_sleep_time_control' [-Werror=missing-prototypes]
arch/arm64/kernel/ftrace.c:460:6: error: no previous prototype for 'prepare_ftrace_return' [-Werror=missing-prototypes]
arch/arm64/kernel/ptrace.c:2172:5: error: no previous prototype for 'syscall_trace_enter' [-Werror=missing-prototypes]
arch/arm64/kernel/ptrace.c:2195:6: error: no previous prototype for 'syscall_trace_exit' [-Werror=missing-prototypes]

Move the declarations to an appropriate header where they can be seen
by the caller and callee, and make sure the headers are included where
needed.

Link: https://lore.kernel.org/linux-trace-kernel/20230517125215.930689-1-arnd@kernel.org

Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Will Deacon
Cc: Kees Cook
Cc: Florent Revest
Signed-off-by: Arnd Bergmann
Acked-by: Catalin Marinas
[ Fixed ftrace_return_to_handler() to handle CONFIG_HAVE_FUNCTION_GRAPH_RETVAL case ]
Signed-off-by: Steven Rostedt (Google)
---
 arch/arm64/include/asm/ftrace.h      | 4 ++++
 arch/arm64/include/asm/syscall.h     | 3 +++
 arch/arm64/kernel/syscall.c          | 3 ---
 include/linux/ftrace.h               | 9 +++++++++
 kernel/trace/fgraph.c                | 1 +
 kernel/trace/ftrace_internal.h       | 5 +++--
 kernel/trace/trace_kprobe_selftest.c | 3 +++
 7 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
index 21ac1c5c71d3..ab158196480c 100644
--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -211,6 +211,10 @@ static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs
 {
 	return ret_regs->fp;
 }
+
+void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
+			   unsigned long frame_pointer);
+
 #endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
 #endif
 
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index 4cfe9b49709b..ab8e14b96f68 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -85,4 +85,7 @@ static inline int syscall_get_arch(struct task_struct *task)
 	return AUDIT_ARCH_AARCH64;
 }
 
+int syscall_trace_enter(struct pt_regs *regs);
+void syscall_trace_exit(struct pt_regs *regs);
+
 #endif /* __ASM_SYSCALL_H */
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 5a668d7f3c1f..b1ae2f2eaf77 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -75,9 +75,6 @@ static inline bool has_syscall_work(unsigned long flags)
 	return unlikely(flags & _TIF_SYSCALL_WORK);
 }
 
-int syscall_trace_enter(struct pt_regs *regs);
-void syscall_trace_exit(struct pt_regs *regs);
-
 static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 			   const syscall_fn_t syscall_table[])
 {
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 8e59bd954153..ce156c7704ee 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -41,6 +41,15 @@ struct ftrace_ops;
 struct ftrace_regs;
 struct dyn_ftrace;
 
+char *arch_ftrace_match_adjust(char *str, const char *search);
+
+#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL
+struct fgraph_ret_regs;
+unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs);
+#else
+unsigned long ftrace_return_to_handler(unsigned long frame_pointer);
+#endif
+
 #ifdef CONFIG_FUNCTION_TRACER
 /*
  * If the arch's mcount caller does not support all of ftrace's
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index cd2c35b1dd8f..c83c005e654e 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -15,6 +15,7 @@
 #include
 
 #include "ftrace_internal.h"
+#include "trace.h"
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 #define ASSIGN_OPS_HASH(opsname, val) \
diff --git a/kernel/trace/ftrace_internal.h b/kernel/trace/ftrace_internal.h
index 382775edf690..5012c04f92c0 100644
--- a/kernel/trace/ftrace_internal.h
+++ b/kernel/trace/ftrace_internal.h
@@ -2,6 +2,9 @@
 #ifndef _LINUX_KERNEL_FTRACE_INTERNAL_H
 #define _LINUX_KERNEL_FTRACE_INTERNAL_H
 
+int __register_ftrace_function(struct ftrace_ops *ops);
+int __unregister_ftrace_function(struct ftrace_ops *ops);
+
 #ifdef CONFIG_FUNCTION_TRACER
 
 extern struct mutex ftrace_lock;
@@ -15,8 +18,6 @@ int ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs);
 
 #else /* !CONFIG_DYNAMIC_FTRACE */
 
-int __register_ftrace_function(struct ftrace_ops *ops);
-int __unregister_ftrace_function(struct ftrace_ops *ops);
 /* Keep as macros so we do not need to define the commands */
 # define ftrace_startup(ops, command)				\
 	({							\
diff --git a/kernel/trace/trace_kprobe_selftest.c b/kernel/trace/trace_kprobe_selftest.c
index 16548ee4c8c6..3851cd1e6a62 100644
--- a/kernel/trace/trace_kprobe_selftest.c
+++ b/kernel/trace/trace_kprobe_selftest.c
@@ -1,4 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
+
+#include "trace_kprobe_selftest.h"
+
 /*
  * Function used during the kprobe self test. This function is in a separate
  * compile unit so it can be compile with CC_FLAGS_FTRACE to ensure that it
-- 
2.40.1

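The pattern the patch above applies is plain C hygiene rather than anything tracing-specific: with -Wmissing-prototypes (part of W=1 builds), a function with external linkage defined without a previously visible prototype is flagged, and the usual fix is to declare it in a header that both the defining file and its callers include. A minimal sketch with made-up file and function names:

```c
/* Before the fix, widget.c defined widget_init() with no prototype in
 * scope, so building with -Wmissing-prototypes warned:
 *   warning: no previous prototype for 'widget_init'
 */

/* widget.h -- shared header, included by callers and by widget.c */
#ifndef WIDGET_H
#define WIDGET_H

int widget_init(int id);	/* prototype seen by caller and callee */

#endif /* WIDGET_H */

/* widget.c */
#include "widget.h"		/* definition now matches a prior prototype */

int widget_init(int id)
{
	return id > 0 ? 0 : -1;
}

/* main.c */
#include <stdio.h>
#include "widget.h"		/* caller sees the same prototype */

int main(void)
{
	printf("%d\n", widget_init(1));
	return 0;
}
```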

From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, stable@vger.kernel.org, Zheng Yejian
Subject: [for-linus][PATCH 3/5] ring-buffer: Fix deadloop issue on reading trace_pipe
Date: Wed, 12 Jul 2023 17:50:47 -0400
Message-ID: <20230712220057.311944701@goodmis.org>

From: Zheng Yejian

Soft lockup occurs when reading file 'trace_pipe':

watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [cat:4488]
[...]
RIP: 0010:ring_buffer_empty_cpu+0xed/0x170
RSP: 0018:ffff88810dd6fc48 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000246 RCX: ffffffff93d1aaeb
RDX: ffff88810a280040 RSI: 0000000000000008 RDI: ffff88811164b218
RBP: ffff88811164b218 R08: 0000000000000000 R09: ffff88815156600f
R10: ffffed102a2acc01 R11: 0000000000000001 R12: 0000000051651901
R13: 0000000000000000 R14: ffff888115e49500 R15: 0000000000000000
[...]
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f8d853c2000 CR3: 000000010dcd8000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 __find_next_entry+0x1a8/0x4b0
 ? peek_next_entry+0x250/0x250
 ? down_write+0xa5/0x120
 ? down_write_killable+0x130/0x130
 trace_find_next_entry_inc+0x3b/0x1d0
 tracing_read_pipe+0x423/0xae0
 ? tracing_splice_read_pipe+0xcb0/0xcb0
 vfs_read+0x16b/0x490
 ksys_read+0x105/0x210
 ? __ia32_sys_pwrite64+0x200/0x200
 ? switch_fpu_return+0x108/0x220
 do_syscall_64+0x33/0x40
 entry_SYSCALL_64_after_hwframe+0x61/0xc6

Looking at the vmcore, it turned out that in tracing_read_pipe(),
ring_buffer_empty_cpu() finds a buffer that is not empty, yet nothing
can be read from it because "rb_num_of_entries() == 0" is always true.
The procedure then loops forever because the user buffer is never
filled; see the following code path:

  tracing_read_pipe() {
    ... ...
  waitagain:
    tracing_wait_pipe()           // 1. find a non-empty buffer here
    trace_find_next_entry_inc()   // 2. loop here trying to find an entry
      __find_next_entry()
        ring_buffer_empty_cpu();  // 3. find a non-empty buffer
        peek_next_entry()         // 4. but peek always returns NULL
          ring_buffer_peek()
            rb_buffer_peek()
              rb_get_reader_page()
                // 5. because rb_num_of_entries() == 0 is always true
                //    here, return NULL
    // 6. the user buffer is not filled, so goto 'waitagain', which
    //    eventually leads to a deadloop in the kernel!
  }

Further analysis showed that when the ring buffer is reset, the
'entries' counters of its pages are not all cleared (see
rb_reset_cpu()). When the ring buffer is later reduced, removed pages
that still carry stale 'entries' data are added into
'cpu_buffer->overrun' (see rb_remove_pages()), which produces a wrong
'overrun' count and eventually causes the deadloop.

To fix it, clear every page in rb_reset_cpu().
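To make the accounting failure concrete, here is a small self-contained toy model (invented structures and numbers, not the real ring buffer code): if reset clears only the head page, pages that still carry stale per-page entry counts inflate the overrun counter when they are later removed, so a reader that derives "entries left to read" from the global counters finds nothing even though the buffer looks non-empty.

```c
#include <stdio.h>

#define NR_PAGES 4

/* Toy stand-ins for the real buffer_page / ring_buffer_per_cpu. */
struct toy_page { long entries; };

struct toy_buffer {
	struct toy_page pages[NR_PAGES];
	long entries;	/* events written since the last reset */
	long overrun;	/* events assumed lost when pages are removed */
};

/* Models the buggy reset: global counters and the head page are cleared,
 * but the other pages keep their stale per-page counts. */
static void buggy_reset(struct toy_buffer *b)
{
	b->pages[0].entries = 0;
	b->entries = 0;
	b->overrun = 0;
}

/* Models shrinking: a removed page's count is folded into 'overrun'. */
static void remove_page(struct toy_buffer *b, int idx)
{
	b->overrun += b->pages[idx].entries;
	b->pages[idx].entries = 0;
}

int main(void)
{
	struct toy_buffer b = { .pages = { {2}, {3}, {3}, {2} } };

	buggy_reset(&b);	/* pages 1..3 still claim old entries */

	/* Two new events are written after the reset... */
	b.pages[0].entries = 2;
	b.entries = 2;

	/* ...then the buffer is shrunk by one page. */
	remove_page(&b, 3);

	/* A reader derives "entries left to read" from the counters: */
	printf("readable = %ld\n", b.entries - b.overrun);
	/* Prints 0: the stale count swallowed the two real events, so the
	 * reader finds nothing even though the buffer is not empty. */
	return 0;
}
```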
Link: https://lore.kernel.org/linux-trace-kernel/20230708225144.3785600-1-zhengyejian1@huawei.com

Cc: stable@vger.kernel.org
Fixes: a5fb833172eca ("ring-buffer: Fix uninitialized read_stamp")
Signed-off-by: Zheng Yejian
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 834b361a4a66..14d8001140c8 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -5242,28 +5242,34 @@ unsigned long ring_buffer_size(struct trace_buffer *buffer, int cpu)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_size);
 
+static void rb_clear_buffer_page(struct buffer_page *page)
+{
+	local_set(&page->write, 0);
+	local_set(&page->entries, 0);
+	rb_init_page(page->page);
+	page->read = 0;
+}
+
 static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 {
+	struct buffer_page *page;
+
 	rb_head_page_deactivate(cpu_buffer);
 
 	cpu_buffer->head_page
 		= list_entry(cpu_buffer->pages, struct buffer_page, list);
-	local_set(&cpu_buffer->head_page->write, 0);
-	local_set(&cpu_buffer->head_page->entries, 0);
-	local_set(&cpu_buffer->head_page->page->commit, 0);
-
-	cpu_buffer->head_page->read = 0;
+	rb_clear_buffer_page(cpu_buffer->head_page);
+	list_for_each_entry(page, cpu_buffer->pages, list) {
+		rb_clear_buffer_page(page);
+	}
 
 	cpu_buffer->tail_page = cpu_buffer->head_page;
 	cpu_buffer->commit_page = cpu_buffer->head_page;
 
 	INIT_LIST_HEAD(&cpu_buffer->reader_page->list);
 	INIT_LIST_HEAD(&cpu_buffer->new_pages);
-	local_set(&cpu_buffer->reader_page->write, 0);
-	local_set(&cpu_buffer->reader_page->entries, 0);
-	local_set(&cpu_buffer->reader_page->page->commit, 0);
-	cpu_buffer->reader_page->read = 0;
+	rb_clear_buffer_page(cpu_buffer->reader_page);
 
 	local_set(&cpu_buffer->entries_bytes, 0);
 	local_set(&cpu_buffer->overrun, 0);
-- 
2.40.1

From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, stable@vger.kernel.org, Zheng Yejian
Subject: [for-linus][PATCH 4/5] ftrace: Fix possible warning on checking all pages used in ftrace_process_locs()
Date: Wed, 12 Jul 2023 17:50:48 -0400
Message-ID: <20230712220057.519974644@goodmis.org>

From: Zheng Yejian

As the comments in ftrace_process_locs() explain, there may be NULL
pointers in the mcount_loc section:

 > Some architecture linkers will pad between
 > the different mcount_loc sections of different
 > object files to satisfy alignments.
 > Skip any NULL pointers.

After commit 20e5227e9f55 ("ftrace: allow NULL pointers in mcount_loc"),
NULL pointers are accounted for when allocating ftrace pages but are
skipped before being added to them, so some pages may end up unused.
Then, after commit 706c81f87f84 ("ftrace: Remove extra helper
functions"), a warning may trigger at:

	WARN_ON(pg->next);

To fix it, only warn in the case where no pointers were skipped but
pages were still left unused, and free those unused pages after
releasing ftrace_lock.

Link: https://lore.kernel.org/linux-trace-kernel/20230712060452.3175675-1-zhengyejian1@huawei.com

Cc: stable@vger.kernel.org
Fixes: 706c81f87f84 ("ftrace: Remove extra helper functions")
Suggested-by: Steven Rostedt
Signed-off-by: Zheng Yejian
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ftrace.c | 45 +++++++++++++++++++++++++++++--------------
 1 file changed, 31 insertions(+), 14 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 3740aca79fe7..05c0024815bf 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3305,6 +3305,22 @@ static int ftrace_allocate_records(struct ftrace_page *pg, int count)
 	return cnt;
 }
 
+static void ftrace_free_pages(struct ftrace_page *pages)
+{
+	struct ftrace_page *pg = pages;
+
+	while (pg) {
+		if (pg->records) {
+			free_pages((unsigned long)pg->records, pg->order);
+			ftrace_number_of_pages -= 1 << pg->order;
+		}
+		pages = pg->next;
+		kfree(pg);
+		pg = pages;
+		ftrace_number_of_groups--;
+	}
+}
+
 static struct ftrace_page *
 ftrace_allocate_pages(unsigned long num_to_init)
 {
@@ -3343,17 +3359,7 @@ ftrace_allocate_pages(unsigned long num_to_init)
 	return start_pg;
 
  free_pages:
-	pg = start_pg;
-	while (pg) {
-		if (pg->records) {
-			free_pages((unsigned long)pg->records, pg->order);
-			ftrace_number_of_pages -= 1 << pg->order;
-		}
-		start_pg = pg->next;
-		kfree(pg);
-		pg = start_pg;
-		ftrace_number_of_groups--;
-	}
+	ftrace_free_pages(start_pg);
 	pr_info("ftrace: FAILED to allocate memory for functions\n");
 	return NULL;
 }
@@ -6471,9 +6477,11 @@ static int ftrace_process_locs(struct module *mod,
 			       unsigned long *start,
 			       unsigned long *end)
 {
+	struct ftrace_page *pg_unuse = NULL;
 	struct ftrace_page *start_pg;
 	struct ftrace_page *pg;
 	struct dyn_ftrace *rec;
+	unsigned long skipped = 0;
 	unsigned long count;
 	unsigned long *p;
 	unsigned long addr;
@@ -6536,8 +6544,10 @@ static int ftrace_process_locs(struct module *mod,
 		 * object files to satisfy alignments.
 		 * Skip any NULL pointers.
		 */
-		if (!addr)
+		if (!addr) {
+			skipped++;
 			continue;
+		}
 
 		end_offset = (pg->index+1) * sizeof(pg->records[0]);
 		if (end_offset > PAGE_SIZE << pg->order) {
@@ -6551,8 +6561,10 @@ static int ftrace_process_locs(struct module *mod,
 		rec->ip = addr;
 	}
 
-	/* We should have used all pages */
-	WARN_ON(pg->next);
+	if (pg->next) {
+		pg_unuse = pg->next;
+		pg->next = NULL;
+	}
 
 	/* Assign the last page to ftrace_pages */
 	ftrace_pages = pg;
@@ -6574,6 +6586,11 @@ static int ftrace_process_locs(struct module *mod,
  out:
 	mutex_unlock(&ftrace_lock);
 
+	/* We should have used all pages unless we skipped some */
+	if (pg_unuse) {
+		WARN_ON(!skipped);
+		ftrace_free_pages(pg_unuse);
+	}
 	return ret;
 }
 
-- 
2.40.1

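The shape of the fix above is a common allocate-by-estimate pattern: capacity is sized from a raw count that may include placeholder entries, the fill loop skips the placeholders, and any wholly unused tail of the allocation is detected and released afterwards, complaining only if nothing was actually skipped. A small self-contained sketch of that pattern (invented types and sizes, not the ftrace code):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define SLOTS_PER_CHUNK 4	/* tiny "page" size, for illustration */

struct chunk {
	long slots[SLOTS_PER_CHUNK];
	int used;
	struct chunk *next;
};

static struct chunk *alloc_chunks(size_t count)
{
	/* Capacity is sized from the raw count, placeholders included. */
	size_t nr = (count + SLOTS_PER_CHUNK - 1) / SLOTS_PER_CHUNK;
	struct chunk *head = NULL, **link = &head;

	while (nr--) {
		*link = calloc(1, sizeof(struct chunk));
		link = &(*link)->next;
	}
	return head;
}

static void free_chunks(struct chunk *c)
{
	while (c) {
		struct chunk *next = c->next;
		free(c);
		c = next;
	}
}

int main(void)
{
	/* Raw input: zeros play the role of the NULL mcount_loc entries. */
	long input[] = { 1, 2, 0, 0, 3, 0, 0, 0 };
	size_t count = sizeof(input) / sizeof(input[0]), skipped = 0;
	struct chunk *head = alloc_chunks(count), *cur = head, *unused;

	for (size_t i = 0; i < count; i++) {
		if (!input[i]) {	/* skip placeholders */
			skipped++;
			continue;
		}
		if (cur->used == SLOTS_PER_CHUNK)
			cur = cur->next;
		cur->slots[cur->used++] = input[i];
	}

	/* Anything past the last chunk we touched was never needed. */
	unused = cur->next;
	cur->next = NULL;
	if (unused) {
		assert(skipped);	/* only complain if nothing was skipped */
		free_chunks(unused);
	}
	free_chunks(head);
	printf("stored %zu, skipped %zu\n", count - skipped, skipped);
	return 0;
}
```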

From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Sven Schnelle
Subject: [for-linus][PATCH 5/5] tracing: Stop FORTIFY_SOURCE complaining about stack trace caller
Date: Wed, 12 Jul 2023 17:50:49 -0400
Message-ID: <20230712220057.716543026@goodmis.org>

From: "Steven Rostedt (Google)"

The stack_trace event is an event created by the tracing subsystem to
store stack traces. It originally just contained a hard-coded array of
8 words to hold the stack, and a "size" to know how many entries are
there.

This is exported to user space as:

name: kernel_stack
ID: 4
format:
	field:unsigned short common_type;	offset:0;	size:2;	signed:0;
	field:unsigned char common_flags;	offset:2;	size:1;	signed:0;
	field:unsigned char common_preempt_count;	offset:3;	size:1;	signed:0;
	field:int common_pid;	offset:4;	size:4;	signed:1;

	field:int size;	offset:8;	size:4;	signed:1;
	field:unsigned long caller[8];	offset:16;	size:64;	signed:0;

print fmt: "\t=> %ps\n\t=> %ps\n\t=> %ps\n" "\t=> %ps\n\t=> %ps\n\t=> %ps\n" "\t=> %ps\n\t=> %ps\n", (void *)REC->caller[0], (void *)REC->caller[1], (void *)REC->caller[2], (void *)REC->caller[3], (void *)REC->caller[4], (void *)REC->caller[5], (void *)REC->caller[6], (void *)REC->caller[7]

This is where the user space tracers could parse the stack. The library
was updated for this specific event to only look at the size, and not
the array. But some older users still look at the array (note, the
older code still checks to make sure the array fits inside the event
that it read. That is, if only 4 words were saved, the parser would not
read the fifth word, because it would see that it was outside of the
event size).

This event was changed a while ago to be more dynamic, and will save a
full stack even if it is greater than 8 words. It does this by simply
allocating more ring buffer to hold the extra words. Then it copies in
the stack via:

	memcpy(&entry->caller, fstack->calls, size);

As the entry is struct stack_entry, which is created by a macro to both
create the structure and export it to user space, it still has the
caller field of entry defined as: unsigned long caller[8].

When the stack is greater than 8, the FORTIFY_SOURCE code notices that
the amount being copied is greater than the destination array and
complains about it. It has no idea that the destination is pointing to
the ring buffer with the required allocation.

To hide this from the FORTIFY_SOURCE logic, pointer arithmetic is used:

	ptr = ring_buffer_event_data(event);
	entry = ptr;
	ptr += offsetof(typeof(*entry), caller);
	memcpy(ptr, fstack->calls, size);

Link: https://lore.kernel.org/all/20230612160748.4082850-1-svens@linux.ibm.com/
Link: https://lore.kernel.org/linux-trace-kernel/20230712105235.5fc441aa@gandalf.local.home

Cc: Masami Hiramatsu
Cc: Mark Rutland
Reported-by: Sven Schnelle
Tested-by: Sven Schnelle
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4529e264cb86..20122eeccf97 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3118,6 +3118,7 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	int stackidx;
+	void *ptr;
 
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
@@ -3161,9 +3162,25 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 					    trace_ctx);
 	if (!event)
 		goto out;
-	entry = ring_buffer_event_data(event);
+	ptr = ring_buffer_event_data(event);
+	entry = ptr;
+
+	/*
+	 * For backward compatibility reasons, the entry->caller is an
+	 * array of 8 slots to store the stack. This is also exported
+	 * to user space. The amount allocated on the ring buffer actually
+	 * holds enough for the stack specified by nr_entries. This will
+	 * go into the location of entry->caller. Due to string fortifiers
+	 * checking the size of the destination of memcpy() it triggers
+	 * when it detects that size is greater than 8. To hide this from
+	 * the fortifiers, we use "ptr" and pointer arithmetic to assign caller.
+	 *
+	 * The below is really just:
+	 *	memcpy(&entry->caller, fstack->calls, size);
+	 */
+	ptr += offsetof(typeof(*entry), caller);
+	memcpy(ptr, fstack->calls, size);
 
-	memcpy(&entry->caller, fstack->calls, size);
 	entry->size = nr_entries;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
-- 
2.40.1
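As a standalone illustration of the trick the patch uses, the sketch below (a userspace toy with an invented struct; not the kernel code) has the same shape: a record type whose declared array is fixed at 8 entries for compatibility, backed by an allocation that is actually large enough for more. Copying through &rec->vals names the 8-element member as the memcpy() destination, which is what fortified builds (e.g. -O2 with _FORTIFY_SOURCE) typically object to when the length exceeds it; copying through a plain pointer offset by offsetof() avoids naming the member, which is the same dodge the patch applies with "ptr".

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Toy analogue of struct stack_entry: the array is declared with 8
 * slots for compatibility, but the allocation behind a given record
 * may be larger.  (Invented type -- for illustration only.)
 */
struct record {
	int size;
	unsigned long vals[8];
};

int main(void)
{
	size_t nr = 16;			/* more entries than the declared 8 */
	size_t bytes = nr * sizeof(unsigned long);
	unsigned long src[16];
	struct record *rec;
	char *ptr;

	for (size_t i = 0; i < nr; i++)
		src[i] = i;

	/* The allocation really is big enough for all 16 values. */
	ptr = malloc(offsetof(struct record, vals) + bytes);
	if (!ptr)
		return 1;
	rec = (struct record *)ptr;

	/*
	 * memcpy(&rec->vals, src, bytes) would name the 8-element member
	 * as the destination, which fortified builds typically flag since
	 * bytes exceeds sizeof(rec->vals).  Offsetting a plain pointer
	 * instead never names the member, so only the real allocation
	 * matters -- the same pattern as the kernel patch.
	 */
	memcpy(ptr + offsetof(struct record, vals), src, bytes);
	rec->size = (int)nr;

	printf("last value: %lu\n",
	       *((unsigned long *)(ptr + offsetof(struct record, vals)) + nr - 1));
	free(ptr);
	return 0;
}
```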