From nobody Mon Feb 9 13:00:41 2026
Message-ID: <20251029130403.988550967@linutronix.de>
From: Thomas Gleixner
To: LKML
McKenney" , Boqun Feng , Jonathan Corbet , Prakash Sangappa , Madadi Vineeth Reddy , K Prateek Nayak , Steven Rostedt , Sebastian Andrzej Siewior , Arnd Bergmann , linux-arch@vger.kernel.org Subject: [patch V3 09/12] rseq: Reset slice extension when scheduled References: <20251029125514.496134233@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Date: Wed, 29 Oct 2025 14:22:28 +0100 (CET) Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" When a time slice extension was granted in the need_resched() check on exit to user space, the task can still be scheduled out in one of the other pending work items. When it gets scheduled back in, and need_resched() is not set, then the stale grant would be preserved, which is just wrong. RSEQ already keeps track of that and sets TIF_RSEQ, which invokes the critical section and ID update mechanisms. Utilize them and clear the user space slice control member of struct rseq unconditionally within the existing user access sections. That's just an unconditional store more in that path. Signed-off-by: Thomas Gleixner Cc: Mathieu Desnoyers Cc: Peter Zijlstra Cc: "Paul E. McKenney" Cc: Boqun Feng Reviewed-by: Mathieu Desnoyers --- include/linux/rseq_entry.h | 30 ++++++++++++++++++++++++++++-- 1 file changed, 28 insertions(+), 2 deletions(-) --- a/include/linux/rseq_entry.h +++ b/include/linux/rseq_entry.h @@ -101,9 +101,17 @@ static __always_inline bool rseq_arm_sli return __rseq_arm_slice_extension_timer(); } =20 +static __always_inline void rseq_slice_clear_grant(struct task_struct *t) +{ + if (IS_ENABLED(CONFIG_RSEQ_STATS) && t->rseq.slice.state.granted) + rseq_stat_inc(rseq_stats.s_revoked); + t->rseq.slice.state.granted =3D false; +} + #else /* CONFIG_RSEQ_SLICE_EXTENSION */ static inline bool rseq_slice_extension_enabled(void) { return false; } static inline bool rseq_arm_slice_extension_timer(void) { return false; } +static inline void rseq_slice_clear_grant(struct task_struct *t) { } #endif /* !CONFIG_RSEQ_SLICE_EXTENSION */ =20 bool rseq_debug_update_user_cs(struct task_struct *t, struct pt_regs *regs= , unsigned long csaddr); @@ -390,8 +398,15 @@ bool rseq_set_ids_get_csaddr(struct task unsafe_put_user(ids->mm_cid, &rseq->mm_cid, efault); if (csaddr) unsafe_get_user(*csaddr, &rseq->rseq_cs, efault); + + /* Open coded, so it's in the same user access region */ + if (rseq_slice_extension_enabled()) { + /* Unconditionally clear it, no point in conditionals */ + unsafe_put_user(0U, &rseq->slice_ctrl.all, efault); + } } =20 + rseq_slice_clear_grant(t); /* Cache the new values */ t->rseq.ids.cpu_cid =3D ids->cpu_cid; rseq_stat_inc(rseq_stats.ids); @@ -487,8 +502,17 @@ static __always_inline bool rseq_exit_us */ u64 csaddr; =20 - if (unlikely(!get_user_inline(csaddr, &rseq->rseq_cs))) - return false; + scoped_user_rw_access(rseq, efault) { + unsafe_get_user(csaddr, &rseq->rseq_cs, efault); + + /* Open coded, so it's in the same user access region */ + if (rseq_slice_extension_enabled()) { + /* Unconditionally clear it, no point in conditionals */ + unsafe_put_user(0U, &rseq->slice_ctrl.all, efault); + } + } + + rseq_slice_clear_grant(t); =20 if (static_branch_unlikely(&rseq_debug_enabled) || unlikely(csaddr)) { if (unlikely(!rseq_update_user_cs(t, regs, csaddr))) @@ -504,6 +528,8 @@ static __always_inline bool rseq_exit_us u32 node_id =3D cpu_to_node(ids.cpu_id); =20 return rseq_update_usr(t, regs, &ids, node_id); +efault: + return 
false; } =20 static __always_inline bool __rseq_exit_to_user_mode_restart(struct pt_reg= s *regs)
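[ Editor's sketch, not part of the patch: a simplified, compilable
  outline of the exit to user mode work loop the changelog describes.
  All identifiers here are illustrative stand-ins; the real loop lives
  in the generic entry code and the RSEQ helpers in
  include/linux/rseq_entry.h. It only shows why a grant handed out in
  the need_resched() check can go stale before the task actually
  returns to user space. ]

#include <stdbool.h>

#define TIF_NEED_RESCHED	(1UL << 0)
#define TIF_SIGPENDING		(1UL << 1)
#define EXIT_WORK		(TIF_NEED_RESCHED | TIF_SIGPENDING)

extern unsigned long read_thread_flags(void);
extern bool rseq_grant_slice_extension(void);	/* may hand out a grant */
extern void schedule(void);			/* raises TIF_RSEQ on switch-in */
extern void handle_signal_work(void);		/* may also call schedule() */
extern void rseq_exit_to_user_mode_work(void);	/* CS and ID update */

static void exit_to_user_mode_loop(void)
{
	unsigned long work = read_thread_flags();

	while (work & EXIT_WORK) {
		if (work & TIF_NEED_RESCHED) {
			/*
			 * Granting clears TIF_NEED_RESCHED and arms the
			 * slice expiry timer instead of scheduling out.
			 */
			if (!rseq_grant_slice_extension())
				schedule();
		}
		if (work & TIF_SIGPENDING) {
			/*
			 * Can schedule out as well. Without this patch a
			 * grant handed out above would survive that
			 * context switch and be stale on resume.
			 */
			handle_signal_work();
		}
		work = read_thread_flags();
	}

	/*
	 * Scheduling out raised TIF_RSEQ, so the critical section / ID
	 * update runs before returning to user space. With this patch
	 * it also zeroes rseq->slice_ctrl and the kernel internal
	 * granted state.
	 */
	rseq_exit_to_user_mode_work();
}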
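[ Editor's sketch of the user space side of the guarantee. The field
  name slice_ctrl matches the kernel side of this patch; the bit layout
  below is a made-up placeholder for illustration, not the ABI defined
  earlier in this series. ]

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical layout; the real one is defined by the series */
union slice_ctrl {
	uint32_t all;
	struct {
		uint32_t request : 1;	/* written by user space */
		uint32_t granted : 1;	/* written by the kernel */
	};
};

/* Points into the registered struct rseq of the current thread */
extern volatile union slice_ctrl *slice_ctrl;

static bool slice_grant_active(void)
{
	/*
	 * If the thread was scheduled out and back in since the grant,
	 * the kernel has zeroed the whole control word. User space can
	 * therefore never act on a stale grant; it has to request a
	 * fresh one instead.
	 */
	return slice_ctrl->granted;
}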