Date: Tue, 04 Nov 2025 08:16:47 -0000
From: "tip-bot2 for Thomas Gleixner"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: core/rseq] rseq: Switch to fast path processing on exit to user
Cc: Thomas Gleixner, "Peter Zijlstra (Intel)", Ingo Molnar, Mathieu Desnoyers,
 x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20251027084307.701201365@linutronix.de>
References: <20251027084307.701201365@linutronix.de>
Message-ID: <176224420767.2601451.13855218627698899917.tip-bot2@tip-bot2>

The following commit has been merged into the core/rseq branch of tip:

Commit-ID:     3db6b38dfe640207da706b286d4181237391f5bd
Gitweb:        https://git.kernel.org/tip/3db6b38dfe640207da706b286d4181237391f5bd
Author:        Thomas Gleixner
AuthorDate:    Mon, 27 Oct 2025 09:45:19 +01:00
Committer:     Ingo Molnar
CommitterDate: Tue, 04 Nov 2025 08:34:39 +01:00

rseq: Switch to fast path processing on exit to user

Now that all bits and pieces are in place, hook the RSEQ handling fast
path function into exit_to_user_mode_prepare() after the TIF work bits
have been handled. In case of a fast path failure, TIF_NOTIFY_RESUME has
been raised and the caller needs to take another turn through the TIF
handling slow path.

This only works for architectures which use the generic entry code.
Architectures which still have their own incomplete hacks are not
supported and won't be.

This results in the following improvements:

 Kernel build            Before              After        Reduction
 exit to user          80692981           80514451
 signal checks:           32581                121              99%
 slowpath runs:         1201408   1.49%        198   0.00%     100%
 fastpath runs:                            675941   0.84%       N/A
 id updates:            1233989   1.53%      50541   0.06%      96%
 cs checks:             1125366   1.39%          0   0.00%     100%
 cs cleared:            1125366    100%          0             100%
 cs fixup:                    0      0%          0

 RSEQ selftests          Before              After        Reduction
 exit to user:        386281778          387373750
 signal checks:        35661203                  0             100%
 slowpath runs:       140542396  36.38%        100   0.00%     100%
 fastpath runs:                           9509789   2.51%       N/A
 id updates:          176203599  45.62%   9087994   2.35%       95%
 cs checks:           175587856  45.46%   4728394   1.22%       98%
 cs cleared:          172359544  98.16%   1319307  27.90%       99%
 cs fixup:              3228312   1.84%   3409087  72.10%

The 'cs cleared' and 'cs fixup' percentages are not relative to the exit
to user invocations, they are relative to the actual 'cs check'
invocations.

While some of this could have been avoided in the original code, like the
obvious clearing of CS when it is already clear, the main problem of going
through TIF_NOTIFY_RESUME cannot be solved that way. In some workloads the
RSEQ notify handler is invoked more than once before going out to user
space. Doing this once, when everything has stabilized, is the only way to
avoid that.

The initial attempt to completely decouple it from the TIF work turned out
to be suboptimal for workloads which do a lot of quick and short system
calls. Even if the fast path decision is only 4 instructions (including a
conditional branch), this adds up quickly and becomes measurable when the
rate of actually having to handle rseq is in the low single digit
percentage range of user/kernel transitions.
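For illustration only, here is a minimal standalone C sketch of the control
flow described above. It is not the kernel code: handle_tif_work(),
rseq_fastpath_failed() and read_pending_work_flags() are hypothetical
stand-ins for the TIF work loop, the rseq fast path and the re-read of the
work flags, which the patch below implements via exit_to_user_mode_loop()
and rseq_exit_to_user_mode_restart().

/*
 * Simplified, hypothetical sketch (not kernel code): run the TIF work
 * handling first, then try the rseq fast path once everything has
 * stabilized. If the fast path fails it raises TIF_NOTIFY_RESUME, so
 * another turn through the TIF handling ends up in the rseq slow path.
 */
#include <stdbool.h>
#include <stdio.h>

struct pt_regs { int dummy; };	/* stand-in type */

/* Stub: pretend the pending TIF work (signals, preemption, ...) got done */
static unsigned long handle_tif_work(struct pt_regs *regs, unsigned long ti_work)
{
	(void)regs;
	return ti_work & ~1UL;
}

/* Stub: pretend the rseq fast path fixup succeeded */
static bool rseq_fastpath_failed(struct pt_regs *regs)
{
	(void)regs;
	return false;
}

/* Stub: re-read the work flags after the fast path raised NOTIFY_RESUME */
static unsigned long read_pending_work_flags(void)
{
	return 0;
}

static unsigned long exit_to_user_sketch(struct pt_regs *regs, unsigned long ti_work)
{
	for (;;) {
		/* Handle the TIF work bits first */
		ti_work = handle_tif_work(regs, ti_work);

		/* Fast path succeeded: done, return to user space */
		if (!rseq_fastpath_failed(regs))
			return ti_work;

		/* Fast path failure: loop back through the TIF slow path */
		ti_work = read_pending_work_flags();
	}
}

int main(void)
{
	struct pt_regs regs = { 0 };

	printf("remaining work: 0x%lx\n", exit_to_user_sketch(&regs, 1UL));
	return 0;
}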
Signed-off-by: Thomas Gleixner
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Reviewed-by: Mathieu Desnoyers
Link: https://patch.msgid.link/20251027084307.701201365@linutronix.de
---
 include/linux/irq-entry-common.h |  7 ++-----
 include/linux/resume_user_mode.h |  2 +-
 include/linux/rseq.h             | 18 ++++++++++++------
 init/Kconfig                     |  2 +-
 kernel/entry/common.c            | 26 +++++++++++++++++++-------
 kernel/rseq.c                    |  8 ++++++--
 6 files changed, 41 insertions(+), 22 deletions(-)

diff --git a/include/linux/irq-entry-common.h b/include/linux/irq-entry-common.h
index cb31fb8..8f5ceea 100644
--- a/include/linux/irq-entry-common.h
+++ b/include/linux/irq-entry-common.h
@@ -197,11 +197,8 @@ static __always_inline void arch_exit_to_user_mode(void) { }
  */
 void arch_do_signal_or_restart(struct pt_regs *regs);
 
-/**
- * exit_to_user_mode_loop - do any pending work before leaving to user space
- */
-unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
-				     unsigned long ti_work);
+/* Handle pending TIF work */
+unsigned long exit_to_user_mode_loop(struct pt_regs *regs, unsigned long ti_work);
 
 /**
  * exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
diff --git a/include/linux/resume_user_mode.h b/include/linux/resume_user_mode.h
index dd3bf7d..bf92227 100644
--- a/include/linux/resume_user_mode.h
+++ b/include/linux/resume_user_mode.h
@@ -59,7 +59,7 @@ static inline void resume_user_mode_work(struct pt_regs *regs)
 	mem_cgroup_handle_over_high(GFP_KERNEL);
 	blkcg_maybe_throttle_current();
 
-	rseq_handle_notify_resume(regs);
+	rseq_handle_slowpath(regs);
 }
 
 #endif /* LINUX_RESUME_USER_MODE_H */
diff --git a/include/linux/rseq.h b/include/linux/rseq.h
index abfbeb4..ded4baa 100644
--- a/include/linux/rseq.h
+++ b/include/linux/rseq.h
@@ -7,13 +7,19 @@
 
 #include
 
-void __rseq_handle_notify_resume(struct pt_regs *regs);
+void __rseq_handle_slowpath(struct pt_regs *regs);
 
-static inline void rseq_handle_notify_resume(struct pt_regs *regs)
+/* Invoked from resume_user_mode_work() */
+static inline void rseq_handle_slowpath(struct pt_regs *regs)
 {
-	/* '&' is intentional to spare one conditional branch */
-	if (current->rseq.event.sched_switch & current->rseq.event.has_rseq)
-		__rseq_handle_notify_resume(regs);
+	if (IS_ENABLED(CONFIG_GENERIC_ENTRY)) {
+		if (current->rseq.event.slowpath)
+			__rseq_handle_slowpath(regs);
+	} else {
+		/* '&' is intentional to spare one conditional branch */
+		if (current->rseq.event.sched_switch & current->rseq.event.has_rseq)
+			__rseq_handle_slowpath(regs);
+	}
 }
 
 void __rseq_signal_deliver(int sig, struct pt_regs *regs);
@@ -152,7 +158,7 @@ static inline void rseq_fork(struct task_struct *t, u64 clone_flags)
 }
 
 #else /* CONFIG_RSEQ */
-static inline void rseq_handle_notify_resume(struct pt_regs *regs) { }
+static inline void rseq_handle_slowpath(struct pt_regs *regs) { }
 static inline void rseq_signal_deliver(struct ksignal *ksig, struct pt_regs *regs) { }
 static inline void rseq_sched_switch_event(struct task_struct *t) { }
 static inline void rseq_sched_set_task_cpu(struct task_struct *t, unsigned int cpu) { }
diff --git a/init/Kconfig b/init/Kconfig
index bde40ab..d1c606e 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1941,7 +1941,7 @@ config RSEQ_DEBUG_DEFAULT_ENABLE
 config DEBUG_RSEQ
 	default n
 	bool "Enable debugging of rseq() system call" if EXPERT
-	depends on RSEQ && DEBUG_KERNEL
+	depends on RSEQ && DEBUG_KERNEL && !GENERIC_ENTRY
 	select RSEQ_DEBUG_DEFAULT_ENABLE
 	help
 	  Enable extra debugging checks for the rseq system call.
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 70a16db..523a3e7 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -11,13 +11,8 @@
 /* Workaround to allow gradual conversion of architecture code */
 void __weak arch_do_signal_or_restart(struct pt_regs *regs) { }
 
-/**
- * exit_to_user_mode_loop - do any pending work before leaving to user space
- * @regs: Pointer to pt_regs on entry stack
- * @ti_work: TIF work flags as read by the caller
- */
-__always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
-						     unsigned long ti_work)
+static __always_inline unsigned long __exit_to_user_mode_loop(struct pt_regs *regs,
+							       unsigned long ti_work)
 {
 	/*
 	 * Before returning to user space ensure that all pending work
@@ -62,6 +57,23 @@ __always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 	return ti_work;
 }
 
+/**
+ * exit_to_user_mode_loop - do any pending work before leaving to user space
+ * @regs: Pointer to pt_regs on entry stack
+ * @ti_work: TIF work flags as read by the caller
+ */
+__always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
+						     unsigned long ti_work)
+{
+	for (;;) {
+		ti_work = __exit_to_user_mode_loop(regs, ti_work);
+
+		if (likely(!rseq_exit_to_user_mode_restart(regs)))
+			return ti_work;
+		ti_work = read_thread_flags();
+	}
+}
+
 noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 {
 	irqentry_state_t ret = {
diff --git a/kernel/rseq.c b/kernel/rseq.c
index c5d6336..395d8b0 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -237,7 +237,11 @@ efault:
 
 static void rseq_slowpath_update_usr(struct pt_regs *regs)
 {
-	/* Preserve rseq state and user_irq state for exit to user */
+	/*
+	 * Preserve rseq state and user_irq state. The generic entry code
+	 * clears user_irq on the way out, the non-generic entry
+	 * architectures are not having user_irq.
+	 */
 	const struct rseq_event evt_mask = { .has_rseq = true, .user_irq = true, };
 	struct task_struct *t = current;
 	struct rseq_ids ids;
@@ -289,7 +293,7 @@ static void rseq_slowpath_update_usr(struct pt_regs *regs)
 	}
 }
 
-void __rseq_handle_notify_resume(struct pt_regs *regs)
+void __rseq_handle_slowpath(struct pt_regs *regs)
 {
 	/*
	 * If invoked from hypervisors before entering the guest via