Message-ID: <20260130154639.550162886@kernel.org>
Date: Fri, 30 Jan 2026 10:46:08 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
 "Paul E. McKenney", Sebastian Andrzej Siewior, Alexei Starovoitov
Subject: [for-next][PATCH 1/4] tracing: perf: Have perf tracepoint callbacks always disable preemption
References: <20260130154607.755725833@kernel.org>

From: Steven Rostedt

In preparation for converting tracepoints from being protected by a
preempt-disabled section to being protected by SRCU, have all the perf
callbacks disable preemption, as perf expects preemption to be disabled
when processing tracepoints.

While at it, convert the perf system call callback's preempt_disable()
to a guard(preempt).

Link: https://lore.kernel.org/all/20250613152218.1924093-1-bigeasy@linutronix.de/
Link: https://patch.msgid.link/20260108220550.2f6638f3@fedora
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu Desnoyers
Cc: Andrew Morton
Cc: "Paul E. McKenney"
Cc: Sebastian Andrzej Siewior
Cc: Alexei Starovoitov
Link: https://patch.msgid.link/20260126231256.174621257@kernel.org
Signed-off-by: Steven Rostedt (Google)
---
 include/trace/perf.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/trace/perf.h b/include/trace/perf.h
index a1754b73a8f5..348ad1d9b556 100644
--- a/include/trace/perf.h
+++ b/include/trace/perf.h
@@ -71,6 +71,7 @@ perf_trace_##call(void *__data, proto)                  \
         u64 __count __attribute__((unused));                    \
         struct task_struct *__task __attribute__((unused));     \
                                                                 \
+        guard(preempt_notrace)();                               \
         do_perf_trace_##call(__data, args);                     \
 }
 
@@ -85,9 +86,8 @@ perf_trace_##call(void *__data, proto)                  \
         struct task_struct *__task __attribute__((unused));     \
                                                                 \
         might_fault();                                          \
-        preempt_disable_notrace();                              \
+        guard(preempt_notrace)();                               \
         do_perf_trace_##call(__data, args);                     \
-        preempt_enable_notrace();                               \
 }
 
 /*
-- 
2.51.0
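The guard(preempt_notrace)() used above relies on the scope-based
cleanup infrastructure in <linux/cleanup.h> (the preempt_notrace guard
class is declared with DEFINE_LOCK_GUARD_0() in <linux/preempt.h>): the
matching preempt_enable_notrace() runs automatically when the scope is
left, so every return path is covered. A minimal before/after sketch of
the pattern; the callback name and its body are hypothetical and not
taken from the patch:

        /* Open-coded pairing: each exit path must re-enable preemption. */
        static void example_callback(void *data)
        {
                preempt_disable_notrace();
                do_something(data);             /* hypothetical work */
                preempt_enable_notrace();
        }

        /* Guard form: preemption is re-enabled automatically at scope exit. */
        static void example_callback(void *data)
        {
                guard(preempt_notrace)();       /* disable now, re-enable at '}' */
                do_something(data);
        }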
McKenney" Cc: Sebastian Andrzej Siewior Cc: Alexei Starovoitov Link: https://patch.msgid.link/20260126231256.174621257@kernel.org Signed-off-by: Steven Rostedt (Google) --- include/trace/perf.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/trace/perf.h b/include/trace/perf.h index a1754b73a8f5..348ad1d9b556 100644 --- a/include/trace/perf.h +++ b/include/trace/perf.h @@ -71,6 +71,7 @@ perf_trace_##call(void *__data, proto) \ u64 __count __attribute__((unused)); \ struct task_struct *__task __attribute__((unused)); \ \ + guard(preempt_notrace)(); \ do_perf_trace_##call(__data, args); \ } =20 @@ -85,9 +86,8 @@ perf_trace_##call(void *__data, proto) \ struct task_struct *__task __attribute__((unused)); \ \ might_fault(); \ - preempt_disable_notrace(); \ + guard(preempt_notrace)(); \ do_perf_trace_##call(__data, args); \ - preempt_enable_notrace(); \ } =20 /* --=20 2.51.0 From nobody Sat Feb 7 18:52:39 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7028533A9D3 for ; Fri, 30 Jan 2026 15:46:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1769787984; cv=none; b=JjCIlHQ/fjkFDOtseDHLRTVOUnjjCE922GnsfSqN0ls1zemzCX2JNer9GAXEmFaA/kmUvntQllqQBQ/H0uQJ5UcYvjn05i5qIlcnM8JpguxiO6QDvHpNcOFbrhAYj70AxxRzLpYIXNCVoHlvvpWEdhlcM2Tb6MLylUldsBujJFE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1769787984; c=relaxed/simple; bh=Xlz3vuMrnXgkGggrgNPyi5AUxFfxQY0cSm/GjnIcZac=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=t+jI+6ECNeJTtWzBHF+vI41hUAP/KnNVM1wfqczkTerdZ9wZ1q90CzDSUB7tKYwH269n3qt6bKgE/oXPICyzPa1XTD8jMi1/R6jBMIcQ31Vio9u58lPRU5IxaR3pwT1NHWAw1O40CCOnvbtTY1VhO/DlN8y0UuJNdXFzZWvam24= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=OIexYOEh; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="OIexYOEh" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3D5ABC16AAE; Fri, 30 Jan 2026 15:46:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1769787984; bh=Xlz3vuMrnXgkGggrgNPyi5AUxFfxQY0cSm/GjnIcZac=; h=Date:From:To:Cc:Subject:References:From; b=OIexYOEhu8FU88RudCVNzEztzfUZG4muWi0LC2Sqa1zMx2mSNJPwwkSvRTfNApFzD 2B8wHRzomXm7eqkI8Vvi2MtF9iHHPet0FbEqIbQ0moLlUV5ZRWfR2Y1pqM45u8+4IY HSguPNFMsfFMuq16+UFLFD2AI6ZfQn9AV2IyN7Fd4guKFK7WgjJ4W5UZOyufdSn6lZ XVJXbO2n6L+8gGTDYU6pRK1d4d7j/2OUKrHb5QD5vZo0edefuS11uHAjw2omMkBvwN cSgdp9REDk15PBwQlMkuJwQ4zHeY/923NpDlgDY20U1tcZ+7AZkAKSUtDTJGSnmK1J LMdwMU2oXaNFQ== Received: from rostedt by gandalf with local (Exim 4.99.1) (envelope-from ) id 1vlqhj-00000001gVT-3d4a; Fri, 30 Jan 2026 10:46:39 -0500 Message-ID: <20260130154639.720309344@kernel.org> User-Agent: quilt/0.68 Date: Fri, 30 Jan 2026 10:46:09 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , "Paul E. 
McKenney" , Sebastian Andrzej Siewior , Alexei Starovoitov , Alexei Starovoitov Subject: [for-next][PATCH 2/4] bpf: Have __bpf_trace_run() use rcu_read_lock_dont_migrate() References: <20260130154607.755725833@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Steven Rostedt In order to switch the protection of tracepoint callbacks from preempt_disable() to srcu_read_lock_fast() the BPF callback from tracepoints needs to have migration prevention as the BPF programs expect to stay on the same CPU as they execute. Put together the RCU protection with migration prevention and use rcu_read_lock_dont_migrate() in __bpf_trace_run(). This will allow tracepoints callbacks to be preemptible. Link: https://lore.kernel.org/all/CAADnVQKvY026HSFGOsavJppm3-Ajm-VsLzY-OeFU= e+BaKMRnDg@mail.gmail.com/ Cc: Masami Hiramatsu Cc: Mark Rutland Cc: Mathieu Desnoyers Cc: Andrew Morton Cc: "Paul E. McKenney" Cc: Sebastian Andrzej Siewior Cc: Alexei Starovoitov Link: https://patch.msgid.link/20260126231256.335034877@kernel.org Suggested-by: Alexei Starovoitov Signed-off-by: Steven Rostedt (Google) --- kernel/trace/bpf_trace.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index fe28d86f7c35..abbf0177ad20 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2062,7 +2062,7 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u6= 4 *args) struct bpf_run_ctx *old_run_ctx; struct bpf_trace_run_ctx run_ctx; =20 - cant_sleep(); + rcu_read_lock_dont_migrate(); if (unlikely(this_cpu_inc_return(*(prog->active)) !=3D 1)) { bpf_prog_inc_misses_counter(prog); goto out; @@ -2071,13 +2071,12 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, = u64 *args) run_ctx.bpf_cookie =3D link->cookie; old_run_ctx =3D bpf_set_run_ctx(&run_ctx.run_ctx); =20 - rcu_read_lock(); (void) bpf_prog_run(prog, args); - rcu_read_unlock(); =20 bpf_reset_run_ctx(old_run_ctx); out: this_cpu_dec(*(prog->active)); + rcu_read_unlock_migrate(); } =20 #define UNPACK(...) 
Message-ID: <20260130154639.883617005@kernel.org>
Date: Fri, 30 Jan 2026 10:46:10 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
 "Paul E. McKenney", Andrii Nakryiko, Boqun Feng, Alexei Starovoitov,
 Peter Zijlstra, bpf@vger.kernel.org
Subject: [for-next][PATCH 3/4] srcu: Fix warning to permit SRCU-fast readers in NMI handlers
References: <20260130154607.755725833@kernel.org>

From: "Paul E. McKenney"

SRCU-fast is designed to be used in NMI handlers, even going so far as
to use atomic operations for architectures supporting NMIs but not
providing NMI-safe per-CPU atomic operations. However, the
WARN_ON_ONCE() in __srcu_check_read_flavor() complains if SRCU-fast is
used in an NMI handler. This commit therefore modifies that
WARN_ON_ONCE() to avoid such complaints.

Reported-by: Steven Rostedt
Signed-off-by: Paul E. McKenney
Tested-by: Steven Rostedt
Cc: Andrii Nakryiko
Cc: Boqun Feng
Cc: Alexei Starovoitov
Cc: Peter Zijlstra
Cc: bpf@vger.kernel.org
Link: https://patch.msgid.link/8232efe8-a7a3-446c-af0b-19f9b523b4f7@paulmck-laptop
Signed-off-by: Steven Rostedt (Google)
---
 kernel/rcu/srcutree.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index ea3f128de06f..c4a0a93e8da4 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -789,7 +789,8 @@ void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor)
         struct srcu_data *sdp;
 
         /* NMI-unsafe use in NMI is a bad sign, as is multi-bit read_flavor values. */
-        WARN_ON_ONCE((read_flavor != SRCU_READ_FLAVOR_NMI) && in_nmi());
+        WARN_ON_ONCE(read_flavor != SRCU_READ_FLAVOR_NMI &&
+                     read_flavor != SRCU_READ_FLAVOR_FAST && in_nmi());
         WARN_ON_ONCE(read_flavor & (read_flavor - 1));
 
         sdp = raw_cpu_ptr(ssp->sda);
-- 
2.51.0
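The second WARN_ON_ONCE() retained above is the usual single-bit test:
clearing the lowest set bit with x & (x - 1) yields zero only when at
most one bit is set, so any multi-bit read_flavor value trips the
warning. As a stand-alone illustration (a hypothetical helper, not part
of the patch):

        /* True iff exactly one flavor bit is set in the mask. */
        static inline bool sketch_single_flavor(unsigned int flavor)
        {
                return flavor != 0 && (flavor & (flavor - 1)) == 0;
        }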
Message-ID: <20260130154640.049870047@kernel.org>
Date: Fri, 30 Jan 2026 10:46:11 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
 "Paul E. McKenney", Sebastian Andrzej Siewior, Alexei Starovoitov
Subject: [for-next][PATCH 4/4] tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast
References: <20260130154607.755725833@kernel.org>
McKenney" , Sebastian Andrzej Siewior , Alexei Starovoitov Subject: [for-next][PATCH 4/4] tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast References: <20260130154607.755725833@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Steven Rostedt The current use of guard(preempt_notrace)() within __DECLARE_TRACE() to protect invocation of __DO_TRACE_CALL() means that BPF programs attached to tracepoints are non-preemptible. This is unhelpful in real-time systems, whose users apparently wish to use BPF while also achieving low latencies. (Who knew?) One option would be to use preemptible RCU, but this introduces many opportunities for infinite recursion, which many consider to be counterproductive, especially given the relatively small stacks provided by the Linux kernel. These opportunities could be shut down by sufficiently energetic duplication of code, but this sort of thing is considered impolite in some circles. Therefore, use the shiny new SRCU-fast API, which provides somewhat faster readers than those of preemptible RCU, at least on Paul E. McKenney's laptop, where task_struct access is more expensive than access to per-CPU variables. And SRCU-fast provides way faster readers than does SRCU, courtesy of being able to avoid the read-side use of smp_mb(). Also, it is quite straightforward to create srcu_read_{,un}lock_fast_notrace() functions. Link: https://lore.kernel.org/all/20250613152218.1924093-1-bigeasy@linutron= ix.de/ Cc: Masami Hiramatsu Cc: Mark Rutland Cc: Mathieu Desnoyers Cc: Andrew Morton Cc: Sebastian Andrzej Siewior Cc: Alexei Starovoitov Link: https://patch.msgid.link/20260126231256.499701982@kernel.org Co-developed-by: Paul E. McKenney Signed-off-by: Paul E. 
Signed-off-by: Steven Rostedt (Google)
---
 include/linux/tracepoint.h   |  9 +++++----
 include/trace/trace_events.h |  4 ++--
 kernel/tracepoint.c          | 18 ++++++++++++++----
 3 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 8a56f3278b1b..22ca1c8b54f3 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -108,14 +108,15 @@ void for_each_tracepoint_in_module(struct module *mod,
  * An alternative is to use the following for batch reclaim associated
  * with a given tracepoint:
  *
- *   - tracepoint_is_faultable() == false: call_rcu()
+ *   - tracepoint_is_faultable() == false: call_srcu()
  *   - tracepoint_is_faultable() == true: call_rcu_tasks_trace()
  */
 #ifdef CONFIG_TRACEPOINTS
+extern struct srcu_struct tracepoint_srcu;
 static inline void tracepoint_synchronize_unregister(void)
 {
         synchronize_rcu_tasks_trace();
-        synchronize_rcu();
+        synchronize_srcu(&tracepoint_srcu);
 }
 static inline bool tracepoint_is_faultable(struct tracepoint *tp)
 {
@@ -275,13 +276,13 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
         return static_branch_unlikely(&__tracepoint_##name.key);\
 }
 
-#define __DECLARE_TRACE(name, proto, args, cond, data_proto)            \
+#define __DECLARE_TRACE(name, proto, args, cond, data_proto)            \
         __DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
         static inline void __do_trace_##name(proto)                     \
         {                                                               \
                 TRACEPOINT_CHECK(name)                                  \
                 if (cond) {                                             \
-                        guard(preempt_notrace)();                       \
+                        guard(srcu_fast_notrace)(&tracepoint_srcu);     \
                         __DO_TRACE_CALL(name, TP_ARGS(args));           \
                 }                                                       \
         }                                                               \
diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
index 4f22136fd465..fbc07d353be6 100644
--- a/include/trace/trace_events.h
+++ b/include/trace/trace_events.h
@@ -436,6 +436,7 @@ __DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct), \
 static notrace void                                                     \
 trace_event_raw_event_##call(void *__data, proto)                       \
 {                                                                       \
+        guard(preempt_notrace)();                                       \
         do_trace_event_raw_event_##call(__data, args);                  \
 }
 
@@ -447,9 +448,8 @@ static notrace void                                    \
 trace_event_raw_event_##call(void *__data, proto)                       \
 {                                                                       \
         might_fault();                                                  \
-        preempt_disable_notrace();                                      \
+        guard(preempt_notrace)();                                       \
         do_trace_event_raw_event_##call(__data, args);                  \
-        preempt_enable_notrace();                                       \
 }
 
 /*
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 62719d2941c9..fd2ee879815c 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -34,9 +34,13 @@ enum tp_transition_sync {
 
 struct tp_transition_snapshot {
         unsigned long rcu;
+        unsigned long srcu_gp;
         bool ongoing;
 };
 
+DEFINE_SRCU_FAST(tracepoint_srcu);
+EXPORT_SYMBOL_GPL(tracepoint_srcu);
+
 /* Protected by tracepoints_mutex */
 static struct tp_transition_snapshot tp_transition_snapshot[_NR_TP_TRANSITION_SYNC];
 
@@ -46,6 +50,7 @@ static void tp_rcu_get_state(enum tp_transition_sync sync)
 
         /* Keep the latest get_state snapshot. */
         snapshot->rcu = get_state_synchronize_rcu();
+        snapshot->srcu_gp = start_poll_synchronize_srcu(&tracepoint_srcu);
         snapshot->ongoing = true;
 }
 
@@ -56,6 +61,8 @@ static void tp_rcu_cond_sync(enum tp_transition_sync sync)
 
         if (!snapshot->ongoing)
                 return;
         cond_synchronize_rcu(snapshot->rcu);
+        if (!poll_state_synchronize_srcu(&tracepoint_srcu, snapshot->srcu_gp))
+                synchronize_srcu(&tracepoint_srcu);
         snapshot->ongoing = false;
 }
 
@@ -112,10 +119,13 @@ static inline void release_probes(struct tracepoint *tp, struct tracepoint_func
         struct tp_probes *tp_probes = container_of(old, struct tp_probes,
                                                    probes[0]);
 
-        if (tracepoint_is_faultable(tp))
-                call_rcu_tasks_trace(&tp_probes->rcu, rcu_free_old_probes);
-        else
-                call_rcu(&tp_probes->rcu, rcu_free_old_probes);
+        if (tracepoint_is_faultable(tp)) {
+                call_rcu_tasks_trace(&tp_probes->rcu,
+                                     rcu_free_old_probes);
+        } else {
+                call_srcu(&tracepoint_srcu, &tp_probes->rcu,
+                          rcu_free_old_probes);
+        }
 }
 
-- 
2.51.0
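The tp_rcu_get_state()/tp_rcu_cond_sync() changes above use SRCU's
polled grace-period interface: start_poll_synchronize_srcu() kicks off
a grace period and returns a cookie, and poll_state_synchronize_srcu()
later reports whether that grace period has already elapsed, letting
the blocking synchronize_srcu() call be skipped when the transition
window was long enough. A condensed sketch of the pattern; the sketch_*
names and the cookie variable are illustrative, while tracepoint_srcu
is the srcu_struct introduced by this patch:

        static unsigned long tp_srcu_cookie;    /* hypothetical storage */

        static void sketch_record_state(void)
        {
                /* Begin a grace period and remember where it started. */
                tp_srcu_cookie = start_poll_synchronize_srcu(&tracepoint_srcu);
        }

        static void sketch_cond_sync(void)
        {
                /* Block only if that grace period has not yet completed. */
                if (!poll_state_synchronize_srcu(&tracepoint_srcu, tp_srcu_cookie))
                        synchronize_srcu(&tracepoint_srcu);
        }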