On Fri, 1 Nov 2024 at 14:04, Boqun Feng <boqun.feng@gmail.com> wrote:
>
> Hi,
>
> This is another RFC version of LKMM atomics in Rust, you can find the
> previous versions:
>
> v1: https://lore.kernel.org/rust-for-linux/20240612223025.1158537-1-boqun.feng@gmail.com/
> wip: https://lore.kernel.org/rust-for-linux/20240322233838.868874-1-boqun.feng@gmail.com/
>
> I add a few more people Cced this time, so if you're curious about why
> we choose to implement LKMM atomics instead of using the Rust ones, you
> can find some explanation:
>
> * In my Kangrejos talk: https://lwn.net/Articles/993785/
> * In Linus' email: https://lore.kernel.org/rust-for-linux/CAHk-=whkQk=zq5XiMcaU3xj4v69+jyoP-y6Sywhq-TvxSSvfEA@mail.gmail.com/
>
> This time, I try implementing a generic atomic type `Atomic<T>`, since
> Benno and Gary suggested last time, and also Rust standard library is
> also going to that direction [1].
>
> Honestly, a generic atomic type is still not quite necessary for myself,
> but here are a few reasons that it's useful:
>
> * It's useful for type alias, for example, if you have:
>
> type c_long = isize;
>
> Then `Atomic<c_clong>` and `Atomic<isize>` is the same type,
> this may make FFI code (mapping a C type to a Rust type or vice
> versa) more readable.
>
> * In kernel, we sometimes switch atomic to percpu for better
> scalabity, percpu is naturally a generic type, because it can
> have data that is larger than machine word size. Making atomic
> generic ease the potential switching/refactoring.
>
> * Generic atomics provide all the functionalities that non-generic
> atomics could have.
>
> That said, currently "generic" is limited to a few types: the type must
> be the same size as atomic_t or atomic64_t, other than basic types, only
> #[repr(<basic types>)] struct can be used in a `Atomic<T>`.
>
> Also this is not a full feature set patchset, things like different
> arithmetic operations and bit operations are still missing, they can be
> either future work or for future versions.
>
> I included an RCU pointer implementation as one example of the usage, so
> a patch from Danilo is added, but I will drop it once his patch merged.
>
> This is based on today's rust-next, and I've run all kunit tests from
> the doc test on x86, arm64 and riscv.
>
> Feedbacks and comments are welcome! Thanks.
>
> Regards,
> Boqun
>
> [1]: https://github.com/rust-lang/rust/issues/130539
>
Thanks, Boqun.
I played around a bit with porting the blk-mq atomic code to this. As
neither an expert in Rust nor an expert in atomics, this is probably
both non-idiomatic and wrong, but unlike the `core` atomics, it
provides an Atomic::<u64> on 32-bit systems, which gets UML's 32-bit
build working again.
Diff below -- I'm not likely to have much time to work on this again
soon, so feel free to pick it up/fix it if it suits.
Thanks,
-- David
---
diff --git a/rust/kernel/block/mq/operations.rs
b/rust/kernel/block/mq/operations.rs
index 9ba7fdfeb4b2..822d64230e11 100644
--- a/rust/kernel/block/mq/operations.rs
+++ b/rust/kernel/block/mq/operations.rs
@@ -11,7 +11,8 @@
error::{from_result, Result},
types::ARef,
};
-use core::{marker::PhantomData, sync::atomic::AtomicU64,
sync::atomic::Ordering};
+use core::marker::PhantomData;
+use kernel::sync::atomic::{Atomic, Relaxed};
/// Implement this trait to interface blk-mq as block devices.
///
@@ -77,7 +78,7 @@ impl<T: Operations> OperationsVTable<T> {
let request = unsafe { &*(*bd).rq.cast::<Request<T>>() };
// One refcount for the ARef, one for being in flight
- request.wrapper_ref().refcount().store(2, Ordering::Relaxed);
+ request.wrapper_ref().refcount().store(2, Relaxed);
// SAFETY:
// - We own a refcount that we took above. We pass that to `ARef`.
@@ -186,7 +187,7 @@ impl<T: Operations> OperationsVTable<T> {
// SAFETY: The refcount field is allocated but not initialized, so
// it is valid for writes.
- unsafe {
RequestDataWrapper::refcount_ptr(pdu.as_ptr()).write(AtomicU64::new(0))
};
+ unsafe {
RequestDataWrapper::refcount_ptr(pdu.as_ptr()).write(Atomic::<u64>::new(0))
};
Ok(0)
})
diff --git a/rust/kernel/block/mq/request.rs b/rust/kernel/block/mq/request.rs
index 7943f43b9575..8d4060d65159 100644
--- a/rust/kernel/block/mq/request.rs
+++ b/rust/kernel/block/mq/request.rs
@@ -13,8 +13,8 @@
use core::{
marker::PhantomData,
ptr::{addr_of_mut, NonNull},
- sync::atomic::{AtomicU64, Ordering},
};
+use kernel::sync::atomic::{Atomic, Relaxed};
/// A wrapper around a blk-mq [`struct request`]. This represents an
IO request.
///
@@ -102,8 +102,7 @@ fn try_set_end(this: ARef<Self>) -> Result<*mut
bindings::request, ARef<Self>> {
if let Err(_old) = this.wrapper_ref().refcount().compare_exchange(
2,
0,
- Ordering::Relaxed,
- Ordering::Relaxed,
+ Relaxed
) {
return Err(this);
}
@@ -168,13 +167,13 @@ pub(crate) struct RequestDataWrapper {
/// - 0: The request is owned by C block layer.
/// - 1: The request is owned by Rust abstractions but there are
no [`ARef`] references to it.
/// - 2+: There are [`ARef`] references to the request.
- refcount: AtomicU64,
+ refcount: Atomic::<u64>,
}
impl RequestDataWrapper {
/// Return a reference to the refcount of the request that is embedding
/// `self`.
- pub(crate) fn refcount(&self) -> &AtomicU64 {
+ pub(crate) fn refcount(&self) -> &Atomic::<u64> {
&self.refcount
}
@@ -184,7 +183,7 @@ pub(crate) fn refcount(&self) -> &AtomicU64 {
/// # Safety
///
/// - `this` must point to a live allocation of at least the size
of `Self`.
- pub(crate) unsafe fn refcount_ptr(this: *mut Self) -> *mut AtomicU64 {
+ pub(crate) unsafe fn refcount_ptr(this: *mut Self) -> *mut Atomic::<u64> {
// SAFETY: Because of the safety requirements of this function, the
// field projection is safe.
unsafe { addr_of_mut!((*this).refcount) }
@@ -202,28 +201,22 @@ unsafe impl<T: Operations> Sync for Request<T> {}
/// Store the result of `op(target.load())` in target, returning new value of
/// target.
-fn atomic_relaxed_op_return(target: &AtomicU64, op: impl Fn(u64) ->
u64) -> u64 {
- let old = target.fetch_update(Ordering::Relaxed,
Ordering::Relaxed, |x| Some(op(x)));
-
- // SAFETY: Because the operation passed to `fetch_update` above always
- // return `Some`, `old` will always be `Ok`.
- let old = unsafe { old.unwrap_unchecked() };
-
- op(old)
+fn atomic_relaxed_op_return(target: &Atomic::<u64>, op: impl Fn(u64)
-> u64) -> u64 {
+ let old = target.load(Relaxed);
+ let new_val = op(old);
+ target.compare_exchange(old, new_val, Relaxed);
+ old
}
/// Store the result of `op(target.load)` in `target` if `target.load() !=
/// pred`, returning [`true`] if the target was updated.
-fn atomic_relaxed_op_unless(target: &AtomicU64, op: impl Fn(u64) ->
u64, pred: u64) -> bool {
- target
- .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |x| {
- if x == pred {
- None
- } else {
- Some(op(x))
- }
- })
- .is_ok()
+fn atomic_relaxed_op_unless(target: &Atomic::<u64>, op: impl Fn(u64)
-> u64, pred: u64) -> bool {
+ let old = target.load(Relaxed);
+ if old == pred {
+ false
+ } else {
+ target.compare_exchange(old, op(old), Relaxed).is_ok()
+ }
}
// SAFETY: All instances of `Request<T>` are reference counted. This