Message-ID: <20251215155709.131081527@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Mathieu Desnoyers, "Paul E. McKenney", Boqun Feng, Jonathan Corbet,
 Prakash Sangappa, Madadi Vineeth Reddy, K Prateek Nayak, Steven Rostedt,
 Sebastian Andrzej Siewior, Arnd Bergmann, linux-arch@vger.kernel.org,
 Randy Dunlap, Peter Zijlstra, Ron Geva, Waiman Long
Subject: [patch V6 08/11] rseq: Reset slice extension when scheduled
References: <20251215155615.870031952@linutronix.de>
Date: Mon, 15 Dec 2025 17:52:26 +0100 (CET)

When a time slice extension was granted in the need_resched() check on
exit to user space, the task can still be scheduled out in one of the
other pending work items. When it gets scheduled back in and
need_resched() is not set, the stale grant would be preserved, which is
just wrong.
RSEQ already keeps track of that and sets TIF_RSEQ, which invokes the
critical section and ID update mechanisms. Utilize them and clear the
user space slice control member of struct rseq unconditionally within
the existing user access sections. That's just one more unconditional
store in that path.

Signed-off-by: Thomas Gleixner
Cc: Mathieu Desnoyers
Cc: Peter Zijlstra
Cc: "Paul E. McKenney"
Cc: Boqun Feng
Reviewed-by: Mathieu Desnoyers
---
 include/linux/rseq_entry.h |   30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

--- a/include/linux/rseq_entry.h
+++ b/include/linux/rseq_entry.h
@@ -102,9 +102,17 @@ static __always_inline bool rseq_arm_sli
 	return __rseq_arm_slice_extension_timer();
 }
 
+static __always_inline void rseq_slice_clear_grant(struct task_struct *t)
+{
+	if (IS_ENABLED(CONFIG_RSEQ_STATS) && t->rseq.slice.state.granted)
+		rseq_stat_inc(rseq_stats.s_revoked);
+	t->rseq.slice.state.granted = false;
+}
+
 #else /* CONFIG_RSEQ_SLICE_EXTENSION */
 static inline bool rseq_slice_extension_enabled(void) { return false; }
 static inline bool rseq_arm_slice_extension_timer(void) { return false; }
+static inline void rseq_slice_clear_grant(struct task_struct *t) { }
 #endif /* !CONFIG_RSEQ_SLICE_EXTENSION */
 
 bool rseq_debug_update_user_cs(struct task_struct *t, struct pt_regs *regs, unsigned long csaddr);
@@ -391,8 +399,15 @@ bool rseq_set_ids_get_csaddr(struct task
 		unsafe_put_user(ids->mm_cid, &rseq->mm_cid, efault);
 		if (csaddr)
 			unsafe_get_user(*csaddr, &rseq->rseq_cs, efault);
+
+		/* Open coded, so it's in the same user access region */
+		if (rseq_slice_extension_enabled()) {
+			/* Unconditionally clear it, no point in conditionals */
+			unsafe_put_user(0U, &rseq->slice_ctrl.all, efault);
+		}
 	}
 
+	rseq_slice_clear_grant(t);
 	/* Cache the new values */
 	t->rseq.ids.cpu_cid = ids->cpu_cid;
 	rseq_stat_inc(rseq_stats.ids);
@@ -488,8 +503,17 @@ static __always_inline bool rseq_exit_us
 	 */
 	u64 csaddr;
 
-	if (unlikely(get_user_inline(csaddr, &rseq->rseq_cs)))
-		return false;
+	scoped_user_rw_access(rseq, efault) {
+		unsafe_get_user(csaddr, &rseq->rseq_cs, efault);
+
+		/* Open coded, so it's in the same user access region */
+		if (rseq_slice_extension_enabled()) {
+			/* Unconditionally clear it, no point in conditionals */
+			unsafe_put_user(0U, &rseq->slice_ctrl.all, efault);
+		}
+	}
+
+	rseq_slice_clear_grant(t);
 
 	if (static_branch_unlikely(&rseq_debug_enabled) || unlikely(csaddr)) {
 		if (unlikely(!rseq_update_user_cs(t, regs, csaddr)))
@@ -505,6 +529,8 @@ static __always_inline bool rseq_exit_us
 		u32 node_id = cpu_to_node(ids.cpu_id);
 
 		return rseq_update_usr(t, regs, &ids, node_id);
+efault:
+	return false;
 }
 
 static __always_inline bool __rseq_exit_to_user_mode_restart(struct pt_regs *regs)