From nobody Thu Apr 9 09:48:28 2026
Date: Mon, 09 Mar 2026 19:48:23 -0000
From: "tip-bot2 for Boqun Feng" <tip-bot2@linutronix.de>
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: Boqun Feng, "Peter Zijlstra (Intel)", Alice Ryhl, Gary Guo, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: locking/core] rust: sync: atomic: Add atomic operation helpers over raw pointers
In-Reply-To: <20260303201701.12204-11-boqun@kernel.org>
References: <20260303201701.12204-11-boqun@kernel.org>
Message-ID: <177308570364.1647592.7902021372596822770.tip-bot2@tip-bot2>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
The following commit has been merged into the locking/core branch of tip:

Commit-ID:     e2f9c86f33abb89d3e52436018f58e5fb951cc04
Gitweb:        https://git.kernel.org/tip/e2f9c86f33abb89d3e52436018f58e5fb951cc04
Author:        Boqun Feng
AuthorDate:    Tue, 03 Mar 2026 12:16:58 -08:00
Committer:     Peter Zijlstra
CommitterDate: Sun, 08 Mar 2026 11:06:50 +01:00

rust: sync: atomic: Add atomic operation helpers over raw pointers

In order to synchronize with C or external memory, atomic operations over
raw pointers are needed. Although there is already an `Atomic::from_ptr()`
to provide a `&Atomic<T>`, it is more convenient to have helpers that
directly perform atomic operations on raw pointers. Hence a few are added;
each is basically an `Atomic::from_ptr().op()` wrapper.

Note on naming: since `atomic_xchg()` and `atomic_cmpxchg()` would collide
with the names of the 32-bit C atomic xchg/cmpxchg, those helpers are named
just `xchg()` and `cmpxchg()`. For `atomic_load()` and `atomic_store()`,
the 32-bit C counterparts are `atomic_read()` and `atomic_set()`, so the
`atomic_` prefix is kept.

[boqun: Fix typo spotted by Alice and fix broken sentence spotted by Gary]

Signed-off-by: Boqun Feng
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Alice Ryhl
Reviewed-by: Gary Guo
Link: https://patch.msgid.link/20260120115207.55318-3-boqun.feng@gmail.com
Link: https://patch.msgid.link/20260303201701.12204-11-boqun@kernel.org
---
 rust/kernel/sync/atomic.rs           | 104 ++++++++++++++++++++++++++-
 rust/kernel/sync/atomic/predefine.rs |  46 ++++++++++++-
 2 files changed, 150 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index f80cebc..1bb1fc2 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -703,3 +703,107 @@ impl AtomicFlag {
         }
     }
 }
+
+/// Atomic load over raw pointers.
+///
+/// This function provides a short-cut of `Atomic::from_ptr().load(..)`, and can be used to work
+/// with the C side on synchronization:
+///
+/// - `atomic_load(.., Relaxed)` maps to `READ_ONCE()` when used for inter-thread communication.
+/// - `atomic_load(.., Acquire)` maps to `smp_load_acquire()`.
+///
+/// # Safety
+///
+/// - `ptr` is a valid pointer to `T` and aligned to `align_of::<T>()`.
+/// - If there is a concurrent store from the kernel (C or Rust), it has to be atomic.
+#[doc(alias("READ_ONCE", "smp_load_acquire"))]
+#[inline(always)]
+pub unsafe fn atomic_load<T: AtomicType, Ordering: AcquireOrRelaxed>(
+    ptr: *mut T,
+    o: Ordering,
+) -> T
+where
+    T::Repr: AtomicBasicOps,
+{
+    // SAFETY: Per the function safety requirement, `ptr` is valid and aligned to
+    // `align_of::<T>()`, and all concurrent stores from the kernel are atomic, hence no data
+    // race per LKMM.
+    unsafe { Atomic::from_ptr(ptr) }.load(o)
+}
+
+/// Atomic store over raw pointers.
+///
+/// This function provides a short-cut of `Atomic::from_ptr().store(..)`, and can be used to work
+/// with the C side on synchronization:
+///
+/// - `atomic_store(.., Relaxed)` maps to `WRITE_ONCE()` when used for inter-thread communication.
+/// - `atomic_store(.., Release)` maps to `smp_store_release()`.
+///
+/// # Safety
+///
+/// - `ptr` is a valid pointer to `T` and aligned to `align_of::<T>()`.
+/// - If there is a concurrent access from the kernel (C or Rust), it has to be atomic.
+#[doc(alias("WRITE_ONCE", "smp_store_release"))]
+#[inline(always)]
+pub unsafe fn atomic_store<T: AtomicType, Ordering: ReleaseOrRelaxed>(
+    ptr: *mut T,
+    v: T,
+    o: Ordering,
+) where
+    T::Repr: AtomicBasicOps,
+{
+    // SAFETY: Per the function safety requirement, `ptr` is valid and aligned to
+    // `align_of::<T>()`, and all concurrent accesses from the kernel are atomic, hence no data
+    // race per LKMM.
+    unsafe { Atomic::from_ptr(ptr) }.store(v, o);
+}
+
+/// Atomic exchange over raw pointers.
+///
+/// This function provides a short-cut of `Atomic::from_ptr().xchg(..)`, and can be used to work
+/// with the C side on synchronization.
+///
+/// # Safety
+///
+/// - `ptr` is a valid pointer to `T` and aligned to `align_of::<T>()`.
+/// - If there is a concurrent access from the kernel (C or Rust), it has to be atomic.
+#[inline(always)]
+pub unsafe fn xchg<T: AtomicType, Ordering: Any>(
+    ptr: *mut T,
+    new: T,
+    o: Ordering,
+) -> T
+where
+    T::Repr: AtomicExchangeOps,
+{
+    // SAFETY: Per the function safety requirement, `ptr` is valid and aligned to
+    // `align_of::<T>()`, and all concurrent accesses from the kernel are atomic, hence no data
+    // race per LKMM.
+    unsafe { Atomic::from_ptr(ptr) }.xchg(new, o)
+}
+
+/// Atomic compare and exchange over raw pointers.
+///
+/// This function provides a short-cut of `Atomic::from_ptr().cmpxchg(..)`, and can be used to
+/// work with the C side on synchronization.
+///
+/// # Safety
+///
+/// - `ptr` is a valid pointer to `T` and aligned to `align_of::<T>()`.
+/// - If there is a concurrent access from the kernel (C or Rust), it has to be atomic.
+#[doc(alias("try_cmpxchg"))]
+#[inline(always)]
+pub unsafe fn cmpxchg<T: AtomicType, Ordering: Any>(
+    ptr: *mut T,
+    old: T,
+    new: T,
+    o: Ordering,
+) -> Result<T, T>
+where
+    T::Repr: AtomicExchangeOps,
+{
+    // SAFETY: Per the function safety requirement, `ptr` is valid and aligned to
+    // `align_of::<T>()`, and all concurrent accesses from the kernel are atomic, hence no data
+    // race per LKMM.
+    unsafe { Atomic::from_ptr(ptr) }.cmpxchg(old, new, o)
+}
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index ceb3cae..1d53834 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -178,6 +178,14 @@ mod tests {
 
         assert_eq!(v, x.load(Relaxed));
     });
+
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(v);
+            let ptr = x.as_ptr();
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(v, unsafe { atomic_load(ptr, Relaxed) });
+        });
     }
 
     #[test]
@@ -188,6 +196,17 @@ mod tests {
             x.store(v, Release);
             assert_eq!(v, x.load(Acquire));
         });
+
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(0);
+            let ptr = x.as_ptr();
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            unsafe { atomic_store(ptr, v, Release) };
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(v, unsafe { atomic_load(ptr, Acquire) });
+        });
     }
 
     #[test]
@@ -201,6 +220,18 @@ mod tests {
 
             assert_eq!(old, x.xchg(new, Full));
             assert_eq!(new, x.load(Relaxed));
         });
+
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(v);
+            let ptr = x.as_ptr();
+
+            let old = v;
+            let new = v + 1;
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(old, unsafe { xchg(ptr, new, Full) });
+            assert_eq!(new, x.load(Relaxed));
+        });
     }
 
     #[test]
@@ -216,6 +247,21 @@ mod tests {
 
             assert_eq!(Ok(old), x.cmpxchg(old, new, Relaxed));
             assert_eq!(new, x.load(Relaxed));
         });
+
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(v);
+            let ptr = x.as_ptr();
+
+            let old = v;
+            let new = v + 1;
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(Err(old), unsafe { cmpxchg(ptr, new, new, Full) });
+            assert_eq!(old, x.load(Relaxed));
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(Ok(old), unsafe { cmpxchg(ptr, old, new, Relaxed) });
+            assert_eq!(new, x.load(Relaxed));
+        });
     }
 
     #[test]