[PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree

Posted by Onur Özkan 3 months, 2 weeks ago
Adds Rust bindings for the kernel's `ww_mutex` infrastructure to enable
deadlock-free acquisition of multiple related locks.

The patch abstracts the `ww_mutex.h` header and wraps the existing
C `ww_mutex` in three main types:
    - `WwClass` for grouping related mutexes
    - `WwAcquireCtx` for tracking lock acquisition context
    - `WwMutex<T>` for the actual lock

Signed-off-by: Onur Özkan <work@onurozkan.dev>
---
 rust/kernel/error.rs              |   1 +
 rust/kernel/sync/lock.rs          |   1 +
 rust/kernel/sync/lock/ww_mutex.rs | 421 ++++++++++++++++++++++++++++++
 3 files changed, 423 insertions(+)
 create mode 100644 rust/kernel/sync/lock/ww_mutex.rs

diff --git a/rust/kernel/error.rs b/rust/kernel/error.rs
index 3dee3139fcd4..28157541e12c 100644
--- a/rust/kernel/error.rs
+++ b/rust/kernel/error.rs
@@ -64,6 +64,7 @@ macro_rules! declare_err {
     declare_err!(EPIPE, "Broken pipe.");
     declare_err!(EDOM, "Math argument out of domain of func.");
     declare_err!(ERANGE, "Math result not representable.");
+    declare_err!(EDEADLK, "Resource deadlock avoided.");
     declare_err!(EOVERFLOW, "Value too large for defined data type.");
     declare_err!(ERESTARTSYS, "Restart the system call.");
     declare_err!(ERESTARTNOINTR, "System call was interrupted by a signal and will be restarted.");
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index e82fa5be289c..8824ebc81084 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -15,6 +15,7 @@
 
 pub mod mutex;
 pub mod spinlock;
+pub mod ww_mutex;
 
 pub(super) mod global;
 pub use global::{GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy};
diff --git a/rust/kernel/sync/lock/ww_mutex.rs b/rust/kernel/sync/lock/ww_mutex.rs
new file mode 100644
index 000000000000..dcb23941813c
--- /dev/null
+++ b/rust/kernel/sync/lock/ww_mutex.rs
@@ -0,0 +1,421 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A kernel Wound/Wait Mutex.
+//!
+//! This module provides Rust abstractions for the Linux kernel's `ww_mutex` implementation,
+//! which provides deadlock avoidance through the wound-wait or wait-die algorithm.
+//!
+//! C header: [`include/linux/ww_mutex.h`](srctree/include/linux/ww_mutex.h)
+//!
+//! For more information: <https://docs.kernel.org/locking/ww-mutex-design.html>
+
+use crate::bindings;
+use crate::error::to_result;
+use crate::prelude::*;
+use crate::types::{NotThreadSafe, Opaque};
+use core::cell::UnsafeCell;
+use core::marker::PhantomData;
+
+/// Create static [`WwClass`] instances.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::{c_str, define_ww_class};
+///
+/// define_ww_class!(WOUND_WAIT_GLOBAL_CLASS, wound_wait, c_str!("wound_wait_global_class"));
+/// define_ww_class!(WAIT_DIE_GLOBAL_CLASS, wait_die, c_str!("wait_die_global_class"));
+/// ```
+#[macro_export]
+macro_rules! define_ww_class {
+    ($name:ident, wound_wait, $class_name:expr) => {
+        static $name: $crate::sync::lock::ww_mutex::WwClass =
+            { $crate::sync::lock::ww_mutex::WwClass::new($class_name, false) };
+    };
+    ($name:ident, wait_die, $class_name:expr) => {
+        static $name: $crate::sync::lock::ww_mutex::WwClass =
+            { $crate::sync::lock::ww_mutex::WwClass::new($class_name, true) };
+    };
+}
+
+/// A group of mutexes that can participate in deadlock avoidance together.
+///
+/// All mutexes that might be acquired together should use the same class.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::sync::lock::ww_mutex::WwClass;
+/// use kernel::c_str;
+/// use pin_init::stack_pin_init;
+///
+/// stack_pin_init!(let _wait_die_class = WwClass::new_wait_die(c_str!("graphics_buffers")));
+/// stack_pin_init!(let _wound_wait_class = WwClass::new_wound_wait(c_str!("memory_pools")));
+///
+/// # Ok::<(), Error>(())
+/// ```
+#[pin_data]
+pub struct WwClass {
+    #[pin]
+    inner: Opaque<bindings::ww_class>,
+}
+
+// SAFETY: [`WwClass`] is set up once and never modified. It's fine to share it across threads.
+unsafe impl Sync for WwClass {}
+// SAFETY: Doesn't hold anything thread-specific. It's safe to send to other threads.
+unsafe impl Send for WwClass {}
+
+macro_rules! ww_class_init_helper {
+    ($name:expr, $is_wait_die:expr) => {
+        Opaque::new(bindings::ww_class {
+            stamp: bindings::atomic_long_t { counter: 0 },
+            acquire_name: $name.as_char_ptr(),
+            mutex_name: $name.as_char_ptr(),
+            is_wait_die: $is_wait_die as u32,
+            // TODO: Replace with `bindings::lock_class_key::default()` once stabilized for `const`.
+            //
+            // SAFETY: This is always zero-initialized when defined with `DEFINE_WD_CLASS`
+            // globally on C side.
+            //
+            // Ref: <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/ww_mutex.h?h=v6.16-rc2#n85>
+            acquire_key: unsafe { core::mem::zeroed() },
+            // TODO: Replace with `bindings::lock_class_key::default()` once stabilized for `const`.
+            //
+            // SAFETY: This is always zero-initialized when defined with `DEFINE_WD_CLASS`
+            // globally on C side.
+            //
+            // Ref: <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/ww_mutex.h?h=v6.16-rc2#n85>
+            mutex_key: unsafe { core::mem::zeroed() },
+        })
+    };
+}
+
+impl WwClass {
+    /// Creates a [`WwClass`].
+    ///
+    /// It's `pub` only so it can be used by the `define_ww_class!` macro.
+    ///
+    /// You should not use this function directly. Use the `define_ww_class!`
+    /// macro or call [`WwClass::new_wait_die`] or [`WwClass::new_wound_wait`] instead.
+    pub const fn new(name: &'static CStr, is_wait_die: bool) -> Self {
+        WwClass {
+            inner: ww_class_init_helper!(name, is_wait_die),
+        }
+    }
+
+    /// Creates a wait-die [`WwClass`].
+    pub fn new_wait_die(name: &'static CStr) -> impl PinInit<Self> {
+        pin_init!(WwClass {
+            inner: ww_class_init_helper!(name, true),
+        })
+    }
+
+    /// Creates a wound-wait [`WwClass`].
+    pub fn new_wound_wait(name: &'static CStr) -> impl PinInit<Self> {
+        pin_init!(WwClass {
+            inner: ww_class_init_helper!(name, false),
+        })
+    }
+}
+
+/// An acquire context is used to group multiple mutex acquisitions together
+/// for deadlock avoidance. It must be used when acquiring multiple mutexes
+/// of the same class.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::sync::lock::ww_mutex::{WwClass, WwAcquireCtx, WwMutex};
+/// use kernel::c_str;
+/// use kernel::alloc::KBox;
+/// use pin_init::stack_pin_init;
+///
+/// stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("my_class")));
+///
+/// // Create mutexes.
+/// stack_pin_init!(let mutex1 = WwMutex::new(1, &class));
+/// stack_pin_init!(let mutex2 = WwMutex::new(2, &class));
+///
+/// // Create acquire context for deadlock avoidance.
+/// let mut ctx = KBox::pin_init(WwAcquireCtx::new(&class), GFP_KERNEL)?;
+///
+/// // Acquire multiple locks safely.
+/// let guard1 = mutex1.lock(Some(&ctx))?;
+/// let guard2 = mutex2.lock(Some(&ctx))?;
+///
+/// // Mark acquisition phase as complete.
+/// ctx.as_mut().done();
+///
+/// # Ok::<(), Error>(())
+/// ```
+#[pin_data(PinnedDrop)]
+pub struct WwAcquireCtx<'a> {
+    #[pin]
+    inner: Opaque<bindings::ww_acquire_ctx>,
+    _p: PhantomData<&'a WwClass>,
+}
+
+// SAFETY: Used in controlled ways during lock acquisition. No race risk.
+unsafe impl Sync for WwAcquireCtx<'_> {}
+// SAFETY: Doesn't rely on thread-local state. Safe to move between threads.
+unsafe impl Send for WwAcquireCtx<'_> {}
+
+impl<'ctx> WwAcquireCtx<'ctx> {
+    /// Initializes `Self` by calling the C side's `ww_acquire_init`.
+    pub fn new<'class: 'ctx>(ww_class: &'class WwClass) -> impl PinInit<Self> {
+        let raw_ptr = ww_class.inner.get();
+        pin_init!(WwAcquireCtx {
+            inner <- Opaque::ffi_init(|slot: *mut bindings::ww_acquire_ctx| {
+                // SAFETY: The caller guarantees that `ww_class` remains valid.
+                unsafe { bindings::ww_acquire_init(slot, raw_ptr) }
+            }),
+            _p: PhantomData
+        })
+    }
+
+    /// Marks the end of the acquire phase with C side `ww_acquire_done`.
+    ///
+    /// After calling this function, no more mutexes can be acquired with this context.
+    pub fn done(self: Pin<&mut Self>) {
+        // SAFETY: The context is pinned and valid.
+        unsafe { bindings::ww_acquire_done(self.inner.get()) };
+    }
+
+    /// Returns a raw pointer to the inner `ww_acquire_ctx`.
+    fn as_ptr(&self) -> *mut bindings::ww_acquire_ctx {
+        self.inner.get()
+    }
+}
+
+#[pinned_drop]
+impl PinnedDrop for WwAcquireCtx<'_> {
+    fn drop(self: Pin<&mut Self>) {
+        // SAFETY: The context is being dropped and is pinned.
+        unsafe { bindings::ww_acquire_fini(self.inner.get()) };
+    }
+}
+
+/// A wound/wait mutex backed with C side `ww_mutex`.
+///
+/// This is a mutual exclusion primitive that provides deadlock avoidance when
+/// acquiring multiple locks of the same class.
+///
+/// # Examples
+///
+/// ## Basic Usage
+///
+/// ```
+/// use kernel::sync::lock::ww_mutex::{WwClass, WwMutex};
+/// use kernel::c_str;
+/// use pin_init::stack_pin_init;
+///
+/// stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("buffer_class")));
+/// stack_pin_init!(let mutex = WwMutex::new(42, &class));
+///
+/// // Simple lock without context.
+/// let guard = mutex.lock(None)?;
+/// assert_eq!(*guard, 42);
+///
+/// # Ok::<(), Error>(())
+/// ```
+///
+/// ## Multiple Locks
+///
+/// ```
+/// use kernel::c_str;
+/// use kernel::prelude::*;
+/// use kernel::sync::lock::ww_mutex::{WwClass, WwAcquireCtx, WwMutex};
+/// use pin_init::stack_pin_init;
+///
+/// stack_pin_init!(let class = WwClass::new_wait_die(c_str!("resource_class")));
+/// stack_pin_init!(let mutex_a = WwMutex::new("Resource A", &class));
+/// stack_pin_init!(let mutex_b = WwMutex::new("Resource B", &class));
+///
+/// let mut ctx = KBox::pin_init(WwAcquireCtx::new(&class), GFP_KERNEL)?;
+///
+/// // Try to acquire both locks.
+/// let guard_a = match mutex_a.lock(Some(&ctx)) {
+///     Ok(guard) => guard,
+///     Err(e) if e == EDEADLK => {
+///         // Deadlock detected, use slow path.
+///         mutex_a.lock_slow(&ctx)?
+///     }
+///     Err(e) => return Err(e),
+/// };
+///
+/// let guard_b = mutex_b.lock(Some(&ctx))?;
+/// ctx.as_mut().done();
+///
+/// # Ok::<(), Error>(())
+/// ```
+#[pin_data]
+pub struct WwMutex<'a, T: ?Sized> {
+    _p: PhantomData<&'a WwClass>,
+    #[pin]
+    mutex: Opaque<bindings::ww_mutex>,
+    data: UnsafeCell<T>,
+}
+
+// SAFETY: [`WwMutex`] can be shared between threads.
+unsafe impl<T: ?Sized + Send> Send for WwMutex<'_, T> {}
+// SAFETY: [`WwMutex`] can be safely accessed from multiple threads concurrently.
+unsafe impl<T: ?Sized + Sync> Sync for WwMutex<'_, T> {}
+
+impl<'ww_class, T> WwMutex<'ww_class, T> {
+    /// Creates `Self` by calling `ww_mutex_init` on the C side.
+    pub fn new(t: T, ww_class: &'ww_class WwClass) -> impl PinInit<Self> {
+        let raw_ptr = ww_class.inner.get();
+        pin_init!(WwMutex {
+            mutex <- Opaque::ffi_init(|slot: *mut bindings::ww_mutex| {
+                // SAFETY: The caller guarantees that `ww_class` remains valid.
+                unsafe { bindings::ww_mutex_init(slot, raw_ptr) }
+            }),
+            data: UnsafeCell::new(t),
+            _p: PhantomData,
+        })
+    }
+}
+
+impl<T: ?Sized> WwMutex<'_, T> {
+    /// Locks the mutex with the given acquire context.
+    pub fn lock<'a>(&'a self, ctx: Option<&WwAcquireCtx<'_>>) -> Result<WwMutexGuard<'a, T>> {
+        // SAFETY: The mutex is pinned and valid.
+        let ret = unsafe {
+            bindings::ww_mutex_lock(
+                self.mutex.get(),
+                ctx.map_or(core::ptr::null_mut(), |c| c.as_ptr()),
+            )
+        };
+
+        to_result(ret)?;
+
+        Ok(WwMutexGuard::new(self))
+    }
+
+    /// Locks the mutex with the given acquire context, interruptible.
+    ///
+    /// Similar to `lock`, but can be interrupted by signals.
+    pub fn lock_interruptible<'a>(
+        &'a self,
+        ctx: Option<&WwAcquireCtx<'_>>,
+    ) -> Result<WwMutexGuard<'a, T>> {
+        // SAFETY: The mutex is pinned and valid.
+        let ret = unsafe {
+            bindings::ww_mutex_lock_interruptible(
+                self.mutex.get(),
+                ctx.map_or(core::ptr::null_mut(), |c| c.as_ptr()),
+            )
+        };
+
+        to_result(ret)?;
+
+        Ok(WwMutexGuard::new(self))
+    }
+
+    /// Locks the mutex in the slow path after a die case.
+    ///
+    /// This should be called after releasing all held mutexes when `lock` returns [`EDEADLK`].
+    pub fn lock_slow<'a>(&'a self, ctx: &WwAcquireCtx<'_>) -> Result<WwMutexGuard<'a, T>> {
+        // SAFETY: The mutex is pinned and valid, and we're in the slow path.
+        unsafe { bindings::ww_mutex_lock_slow(self.mutex.get(), ctx.as_ptr()) };
+
+        Ok(WwMutexGuard::new(self))
+    }
+
+    /// Locks the mutex in the slow path after a die case, interruptible.
+    pub fn lock_slow_interruptible<'a>(
+        &'a self,
+        ctx: &WwAcquireCtx<'_>,
+    ) -> Result<WwMutexGuard<'a, T>> {
+        // SAFETY: The mutex is pinned and valid, and we are in the slow path.
+        let ret =
+            unsafe { bindings::ww_mutex_lock_slow_interruptible(self.mutex.get(), ctx.as_ptr()) };
+
+        to_result(ret)?;
+
+        Ok(WwMutexGuard::new(self))
+    }
+
+    /// Tries to lock the mutex without blocking.
+    pub fn try_lock<'a>(&'a self, ctx: Option<&WwAcquireCtx<'_>>) -> Result<WwMutexGuard<'a, T>> {
+        // SAFETY: The mutex is pinned and valid.
+        let ret = unsafe {
+            bindings::ww_mutex_trylock(
+                self.mutex.get(),
+                ctx.map_or(core::ptr::null_mut(), |c| c.as_ptr()),
+            )
+        };
+
+        if ret == 0 {
+            return Err(EBUSY);
+        }
+
+        to_result(if ret < 0 { ret } else { 0 })?;
+
+        Ok(WwMutexGuard::new(self))
+    }
+
+    /// Checks if the mutex is currently locked.
+    pub fn is_locked(&self) -> bool {
+        // SAFETY: The mutex is pinned and valid.
+        unsafe { bindings::ww_mutex_is_locked(self.mutex.get()) }
+    }
+
+    /// Returns a raw pointer to the inner mutex.
+    fn as_ptr(&self) -> *mut bindings::ww_mutex {
+        self.mutex.get()
+    }
+}
+
+/// A guard that provides exclusive access to the data protected
+/// by a [`WwMutex`].
+///
+/// # Invariants
+///
+/// The guard holds an exclusive lock on the associated [`WwMutex`]. The lock is held
+/// for the entire lifetime of this guard and is automatically released when the
+/// guard is dropped.
+#[must_use = "the lock unlocks immediately when the guard is unused"]
+pub struct WwMutexGuard<'a, T: ?Sized> {
+    mutex: &'a WwMutex<'a, T>,
+    _not_send: NotThreadSafe,
+}
+
+// SAFETY: [`WwMutexGuard`] can be transferred across thread boundaries if the data can.
+unsafe impl<T: ?Sized + Send> Send for WwMutexGuard<'_, T> {}
+
+// SAFETY: [`WwMutexGuard`] can be shared between threads if the data can.
+unsafe impl<T: ?Sized + Send + Sync> Sync for WwMutexGuard<'_, T> {}
+
+impl<'a, T: ?Sized> WwMutexGuard<'a, T> {
+    /// Creates a new guard for a locked mutex.
+    fn new(mutex: &'a WwMutex<'a, T>) -> Self {
+        Self {
+            mutex,
+            _not_send: NotThreadSafe,
+        }
+    }
+}
+
+impl<T: ?Sized> core::ops::Deref for WwMutexGuard<'_, T> {
+    type Target = T;
+
+    fn deref(&self) -> &Self::Target {
+        // SAFETY: We hold the lock, so we have exclusive access.
+        unsafe { &*self.mutex.data.get() }
+    }
+}
+
+impl<T: ?Sized> core::ops::DerefMut for WwMutexGuard<'_, T> {
+    fn deref_mut(&mut self) -> &mut Self::Target {
+        // SAFETY: We hold the lock, so we have exclusive access.
+        unsafe { &mut *self.mutex.data.get() }
+    }
+}
+
+impl<T: ?Sized> Drop for WwMutexGuard<'_, T> {
+    fn drop(&mut self) {
+        // SAFETY: We hold the lock and are about to release it.
+        unsafe { bindings::ww_mutex_unlock(self.mutex.as_ptr()) };
+    }
+}
-- 
2.49.0

Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Boqun Feng 3 months, 2 weeks ago
On Sat, Jun 21, 2025 at 09:44:53PM +0300, Onur Özkan wrote:
[...]
> +#[pin_data(PinnedDrop)]
> +pub struct WwAcquireCtx<'a> {
> +    #[pin]
> +    inner: Opaque<bindings::ww_acquire_ctx>,
> +    _p: PhantomData<&'a WwClass>,
> +}
> +
> +// SAFETY: Used in controlled ways during lock acquisition. No race risk.
> +unsafe impl Sync for WwAcquireCtx<'_> {}
> +// SAFETY: Doesn't rely on thread-local state. Safe to move between threads.
> +unsafe impl Send for WwAcquireCtx<'_> {}
> +

I don't think `WwAcquireCtx` is `Send`: if you look at the C code when
LOCKDEP is enabled, `ww_acquire_init()` calls a few `mutex_acquire()`
and expects `ww_acquire_fini()` to call the corresponding
`mutex_release()`, and these two have to be on the same task. Also, I
don't think there is a need for sending `WwAcquireCtx` to another
thread.

Besides, the `Sync` of `WwAcquireCtx` also doesn't make sense, I would
drop it if there is no real usage for now.
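To visualize the direction (just a sketch on my side, not compile-tested): both
unsafe impls would be dropped and the context made explicitly thread-bound, for
example by embedding the existing `NotThreadSafe` marker the way the guard
already does:

```rust
// Hypothetical follow-up shape, not part of this patch: no unsafe
// Send/Sync impls; the NotThreadSafe field pins the context to the
// task that created it, matching the lockdep pairing expected between
// ww_acquire_init() and ww_acquire_fini().
#[pin_data(PinnedDrop)]
pub struct WwAcquireCtx<'a> {
    #[pin]
    inner: Opaque<bindings::ww_acquire_ctx>,
    _p: PhantomData<&'a WwClass>,
    _not_send: NotThreadSafe,
}
```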

> +impl<'ctx> WwAcquireCtx<'ctx> {
> +    /// Initializes `Self` with calling C side `ww_acquire_init` inside.
> +    pub fn new<'class: 'ctx>(ww_class: &'class WwClass) -> impl PinInit<Self> {
> +        let raw_ptr = ww_class.inner.get();
> +        pin_init!(WwAcquireCtx {
> +            inner <- Opaque::ffi_init(|slot: *mut bindings::ww_acquire_ctx| {
> +                // SAFETY: The caller guarantees that `ww_class` remains valid.
> +                unsafe { bindings::ww_acquire_init(slot, raw_ptr) }
> +            }),
> +            _p: PhantomData
> +        })
> +    }
> +
> +    /// Marks the end of the acquire phase with C side `ww_acquire_done`.
> +    ///
> +    /// After calling this function, no more mutexes can be acquired with this context.
> +    pub fn done(self: Pin<&mut Self>) {
> +        // SAFETY: The context is pinned and valid.
> +        unsafe { bindings::ww_acquire_done(self.inner.get()) };
> +    }
> +
> +    /// Returns a raw pointer to the inner `ww_acquire_ctx`.
> +    fn as_ptr(&self) -> *mut bindings::ww_acquire_ctx {
> +        self.inner.get()
> +    }
> +}
> +
> +#[pinned_drop]
> +impl PinnedDrop for WwAcquireCtx<'_> {
> +    fn drop(self: Pin<&mut Self>) {
> +        // SAFETY: The context is being dropped and is pinned.
> +        unsafe { bindings::ww_acquire_fini(self.inner.get()) };
> +    }
> +}
> +
[...]
> +#[pin_data]
> +pub struct WwMutex<'a, T: ?Sized> {
> +    _p: PhantomData<&'a WwClass>,
> +    #[pin]
> +    mutex: Opaque<bindings::ww_mutex>,
> +    data: UnsafeCell<T>,
> +}
> +
> +// SAFETY: [`WwMutex`] can be shared between threads.
> +unsafe impl<T: ?Sized + Send> Send for WwMutex<'_, T> {}
> +// SAFETY: [`WwMutex`] can be safely accessed from multiple threads concurrently.
> +unsafe impl<T: ?Sized + Sync> Sync for WwMutex<'_, T> {}

I believe this requires `T` being `Send` as well, because if `&WwMutex`
is shared between threads, that means any thread can access `&mut T`
when the lock acquired.

> +
> +impl<'ww_class, T> WwMutex<'ww_class, T> {
> +    /// Creates `Self` with calling `ww_mutex_init` inside.
> +    pub fn new(t: T, ww_class: &'ww_class WwClass) -> impl PinInit<Self> {
> +        let raw_ptr = ww_class.inner.get();
> +        pin_init!(WwMutex {
> +            mutex <- Opaque::ffi_init(|slot: *mut bindings::ww_mutex| {
> +                // SAFETY: The caller guarantees that `ww_class` remains valid.
> +                unsafe { bindings::ww_mutex_init(slot, raw_ptr) }
> +            }),
> +            data: UnsafeCell::new(t),
> +            _p: PhantomData,
> +        })
> +    }
> +}
> +
[...]
> +    /// Checks if the mutex is currently locked.
> +    pub fn is_locked(&self) -> bool {

Did I miss a reply from you regarding:

	https://lore.kernel.org/rust-for-linux/aFReIdlPPg4MmaYX@tardis.local/

no public is_locked() please. Do an assert_is_locked() instead. We need
to keep users from abusing this.
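Roughly this shape (a sketch of the suggestion, name and visibility assumed,
not compile-tested):

```rust
/// Asserts that the mutex is currently held.
///
/// Meant for debug assertions and tests, not for flow control:
/// the result of a lockedness query is stale the moment it returns.
pub fn assert_is_locked(&self) {
    // SAFETY: The mutex is pinned and valid.
    assert!(unsafe { bindings::ww_mutex_is_locked(self.mutex.get()) });
}
```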

> +        // SAFETY: The mutex is pinned and valid.
> +        unsafe { bindings::ww_mutex_is_locked(self.mutex.get()) }
> +    }
> +
> +    /// Returns a raw pointer to the inner mutex.
> +    fn as_ptr(&self) -> *mut bindings::ww_mutex {
> +        self.mutex.get()
> +    }
> +}
> +
> +/// A guard that provides exclusive access to the data protected
> +/// by a [`WwMutex`].
> +///
> +/// # Invariants
> +///
> +/// The guard holds an exclusive lock on the associated [`WwMutex`]. The lock is held
> +/// for the entire lifetime of this guard and is automatically released when the
> +/// guard is dropped.
> +#[must_use = "the lock unlocks immediately when the guard is unused"]
> +pub struct WwMutexGuard<'a, T: ?Sized> {
> +    mutex: &'a WwMutex<'a, T>,
> +    _not_send: NotThreadSafe,
> +}
> +
> +// SAFETY: [`WwMutexGuard`] can be transferred across thread boundaries if the data can.
> +unsafe impl<T: ?Sized + Send> Send for WwMutexGuard<'_, T> {}

Nope, ww_mutex is still a mutex, you cannot acquire the lock in one task
and release the lock on another task.

> +
> +// SAFETY: [`WwMutexGuard`] can be shared between threads if the data can.
> +unsafe impl<T: ?Sized + Send + Sync> Sync for WwMutexGuard<'_, T> {}

You don't need the `Send` here? A `&WwMutexGuard` doesn't provide the
access to `&mut T`, so being `Sync` suffices.
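So for the guard the sketch would be (my assumption of the end state, not
compile-tested):

```rust
// No Send impl: the task that acquired the ww_mutex must be the one
// that releases it (the NotThreadSafe field already encodes this).
// Sharing &WwMutexGuard only hands out &T, so T: Sync is enough.
unsafe impl<T: ?Sized + Sync> Sync for WwMutexGuard<'_, T> {}
```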

Regards,
Boqun

> +
> +impl<'a, T: ?Sized> WwMutexGuard<'a, T> {
> +    /// Creates a new guard for a locked mutex.
> +    fn new(mutex: &'a WwMutex<'a, T>) -> Self {
> +        Self {
> +            mutex,
> +            _not_send: NotThreadSafe,
> +        }
> +    }
> +}
> +
> +impl<T: ?Sized> core::ops::Deref for WwMutexGuard<'_, T> {
> +    type Target = T;
> +
> +    fn deref(&self) -> &Self::Target {
> +        // SAFETY: We hold the lock, so we have exclusive access.
> +        unsafe { &*self.mutex.data.get() }
> +    }
> +}
> +
> +impl<T: ?Sized> core::ops::DerefMut for WwMutexGuard<'_, T> {
> +    fn deref_mut(&mut self) -> &mut Self::Target {
> +        // SAFETY: We hold the lock, so we have exclusive access.
> +        unsafe { &mut *self.mutex.data.get() }
> +    }
> +}
> +
> +impl<T: ?Sized> Drop for WwMutexGuard<'_, T> {
> +    fn drop(&mut self) {
> +        // SAFETY: We hold the lock and are about to release it.
> +        unsafe { bindings::ww_mutex_unlock(self.mutex.as_ptr()) };
> +    }
> +}
> -- 
> 2.49.0
> 
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Onur 3 months, 2 weeks ago
On Mon, 23 Jun 2025 06:26:14 -0700
Boqun Feng <boqun.feng@gmail.com> wrote:

> On Sat, Jun 21, 2025 at 09:44:53PM +0300, Onur Özkan wrote:
> [...]
> > +#[pin_data(PinnedDrop)]
> > +pub struct WwAcquireCtx<'a> {
> > +    #[pin]
> > +    inner: Opaque<bindings::ww_acquire_ctx>,
> > +    _p: PhantomData<&'a WwClass>,
> > +}
> > +
> > +// SAFETY: Used in controlled ways during lock acquisition. No
> > race risk. +unsafe impl Sync for WwAcquireCtx<'_> {}
> > +// SAFETY: Doesn't rely on thread-local state. Safe to move
> > between threads. +unsafe impl Send for WwAcquireCtx<'_> {}
> > +
> 
> I don't think `WwAcquireCtx` is `Send`, if you look at C code when
> LOCKDEP is enabled, `ww_acquire_init()` calls a few `mutex_acquire()`
> and expects `ww_acquire_fini()` to call the corresponding
> `mutex_release()`, and these two have to be on the same task. Also I
> don't think there is a need for sending `WwAcquireCtx` to another
> thread.

I wasn't very confident in my own analysis of the ww_mutex C API, thank
you. I will address this along with the other notes you pointed out.

> Besides, the `Sync` of `WwAcquireCtx` also doesn't make sense, I would
> drop it if there is no real usage for now.
> 
> > +impl<'ctx> WwAcquireCtx<'ctx> {
> > +    /// Initializes `Self` with calling C side `ww_acquire_init`
> > inside.
> > +    pub fn new<'class: 'ctx>(ww_class: &'class WwClass) -> impl
> > PinInit<Self> {
> > +        let raw_ptr = ww_class.inner.get();
> > +        pin_init!(WwAcquireCtx {
> > +            inner <- Opaque::ffi_init(|slot: *mut
> > bindings::ww_acquire_ctx| {
> > +                // SAFETY: The caller guarantees that `ww_class`
> > remains valid.
> > +                unsafe { bindings::ww_acquire_init(slot, raw_ptr) }
> > +            }),
> > +            _p: PhantomData
> > +        })
> > +    }
> > +
> > +    /// Marks the end of the acquire phase with C side
> > `ww_acquire_done`.
> > +    ///
> > +    /// After calling this function, no more mutexes can be
> > acquired with this context.
> > +    pub fn done(self: Pin<&mut Self>) {
> > +        // SAFETY: The context is pinned and valid.
> > +        unsafe { bindings::ww_acquire_done(self.inner.get()) };
> > +    }
> > +
> > +    /// Returns a raw pointer to the inner `ww_acquire_ctx`.
> > +    fn as_ptr(&self) -> *mut bindings::ww_acquire_ctx {
> > +        self.inner.get()
> > +    }
> > +}
> > +
> > +#[pinned_drop]
> > +impl PinnedDrop for WwAcquireCtx<'_> {
> > +    fn drop(self: Pin<&mut Self>) {
> > +        // SAFETY: The context is being dropped and is pinned.
> > +        unsafe { bindings::ww_acquire_fini(self.inner.get()) };
> > +    }
> > +}
> > +
> [...]
> > +#[pin_data]
> > +pub struct WwMutex<'a, T: ?Sized> {
> > +    _p: PhantomData<&'a WwClass>,
> > +    #[pin]
> > +    mutex: Opaque<bindings::ww_mutex>,
> > +    data: UnsafeCell<T>,
> > +}
> > +
> > +// SAFETY: [`WwMutex`] can be shared between threads.
> > +unsafe impl<T: ?Sized + Send> Send for WwMutex<'_, T> {}
> > +// SAFETY: [`WwMutex`] can be safely accessed from multiple
> > threads concurrently. +unsafe impl<T: ?Sized + Sync> Sync for
> > WwMutex<'_, T> {}
> 
> I believe this requires `T` being `Send` as well, because if
> `&WwMutex` is shared between threads, that means any thread can
> access `&mut T` when the lock acquired.
> 
> > +
> > +impl<'ww_class, T> WwMutex<'ww_class, T> {
> > +    /// Creates `Self` with calling `ww_mutex_init` inside.
> > +    pub fn new(t: T, ww_class: &'ww_class WwClass) -> impl
> > PinInit<Self> {
> > +        let raw_ptr = ww_class.inner.get();
> > +        pin_init!(WwMutex {
> > +            mutex <- Opaque::ffi_init(|slot: *mut
> > bindings::ww_mutex| {
> > +                // SAFETY: The caller guarantees that `ww_class`
> > remains valid.
> > +                unsafe { bindings::ww_mutex_init(slot, raw_ptr) }
> > +            }),
> > +            data: UnsafeCell::new(t),
> > +            _p: PhantomData,
> > +        })
> > +    }
> > +}
> > +
> [...]
> > +    /// Checks if the mutex is currently locked.
> > +    pub fn is_locked(&self) -> bool {
> 
> Did I miss a reply from you regarding:
> 
> 	https://lore.kernel.org/rust-for-linux/aFReIdlPPg4MmaYX@tardis.local/
> 
> no public is_lock() please. Do an assert_is_locked() instead. We need
> to avoid users from abusing this.

Sorry, I missed that. Perhaps using `#[cfg(CONFIG_KUNIT)]`, e.g.:

    /// Checks if the mutex is currently locked.
    #[cfg(CONFIG_KUNIT)]
    fn is_locked(&self) -> bool {
        // SAFETY: The mutex is pinned and valid.
        unsafe { bindings::ww_mutex_is_locked(self.mutex.get()) }
    }

would be better? What do you think?

---

On Mon, 23 Jun 2025 11:51:56 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> On Sat, Jun 21, 2025 at 09:44:53PM +0300, Onur Özkan wrote:
> > Adds Rust bindings for the kernel's `ww_mutex` infrastructure to
> > enable deadlock-free acquisition of multiple related locks.
> > 
> > The patch abstracts `ww_mutex.h` header and wraps the existing
> > C `ww_mutex` with three main types:
> >     - `WwClass` for grouping related mutexes
> >     - `WwAcquireCtx` for tracking lock acquisition context
> >     - `WwMutex<T>` for the actual lock
> > 
> > Signed-off-by: Onur Özkan <work@onurozkan.dev>
> > ---
> >  rust/kernel/error.rs              |   1 +
> >  rust/kernel/sync/lock.rs          |   1 +
> >  rust/kernel/sync/lock/ww_mutex.rs | 421
> > ++++++++++++++++++++++++++++++ 3 files changed, 423 insertions(+)
> >  create mode 100644 rust/kernel/sync/lock/ww_mutex.rs
> > 
> > diff --git a/rust/kernel/error.rs b/rust/kernel/error.rs
> > index 3dee3139fcd4..28157541e12c 100644
> > --- a/rust/kernel/error.rs
> > +++ b/rust/kernel/error.rs
> > @@ -64,6 +64,7 @@ macro_rules! declare_err {
> >      declare_err!(EPIPE, "Broken pipe.");
> >      declare_err!(EDOM, "Math argument out of domain of func.");
> >      declare_err!(ERANGE, "Math result not representable.");
> > +    declare_err!(EDEADLK, "Resource deadlock avoided.");
> >      declare_err!(EOVERFLOW, "Value too large for defined data
> > type."); declare_err!(ERESTARTSYS, "Restart the system call.");
> >      declare_err!(ERESTARTNOINTR, "System call was interrupted by a
> > signal and will be restarted."); diff --git
> > a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs index
> > e82fa5be289c..8824ebc81084 100644 --- a/rust/kernel/sync/lock.rs
> > +++ b/rust/kernel/sync/lock.rs
> > @@ -15,6 +15,7 @@
> >  
> >  pub mod mutex;
> >  pub mod spinlock;
> > +pub mod ww_mutex;
> >  
> >  pub(super) mod global;
> >  pub use global::{GlobalGuard, GlobalLock, GlobalLockBackend,
> > GlobalLockedBy}; diff --git a/rust/kernel/sync/lock/ww_mutex.rs
> > b/rust/kernel/sync/lock/ww_mutex.rs new file mode 100644
> > index 000000000000..dcb23941813c
> > --- /dev/null
> > +++ b/rust/kernel/sync/lock/ww_mutex.rs
> > @@ -0,0 +1,421 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +//! A kernel Wound/Wait Mutex.
> > +//!
> > +//! This module provides Rust abstractions for the Linux kernel's
> > `ww_mutex` implementation, +//! which provides deadlock avoidance
> > through a wait-wound or wait-die algorithm. +//!
> > +//! C header:
> > [`include/linux/ww_mutex.h`](srctree/include/linux/ww_mutex.h) +//!
> > +//! For more information:
> > <https://docs.kernel.org/locking/ww-mutex-design.html> +
> > +use crate::bindings;
> > +use crate::error::to_result;
> > +use crate::prelude::*;
> > +use crate::types::{NotThreadSafe, Opaque};
> > +use core::cell::UnsafeCell;
> > +use core::marker::PhantomData;
> > +
> > +/// Create static [`WwClass`] instances.
> > +///
> > +/// # Examples
> > +///
> > +/// ```
> > +/// use kernel::{c_str, define_ww_class};
> > +///
> > +/// define_ww_class!(WOUND_WAIT_GLOBAL_CLASS, wound_wait,
> > c_str!("wound_wait_global_class")); +///
> > define_ww_class!(WAIT_DIE_GLOBAL_CLASS, wait_die,
> > c_str!("wait_die_global_class")); +/// ```
> 
> should we split this into two macros define_ww_class! and
> define_wd_class! to match the C macros?

I don't have a strong opinion on that. I like having small helper
macros but Benno doesn't seem to like having any macro at all for
creating global `WwClass`.

---

On Sun, 22 Jun 2025 11:18:24 +0200
"Benno Lossin" <lossin@kernel.org> wrote:

> > Signed-off-by: Onur Özkan <work@onurozkan.dev>
> > ---
> >  rust/kernel/error.rs              |   1 +
> >  rust/kernel/sync/lock.rs          |   1 +
> >  rust/kernel/sync/lock/ww_mutex.rs | 421
> > ++++++++++++++++++++++++++++++ 3 files changed, 423 insertions(+)
> >  create mode 100644 rust/kernel/sync/lock/ww_mutex.rs
> 
> Thanks for splitting the tests into its own patch, but I still think
> it's useful to do the abstractions for `ww_class`, `ww_acquire_ctx`
> and `ww_mutex` separately.
> 
> > +/// Create static [`WwClass`] instances.
> > +///
> > +/// # Examples
> > +///
> > +/// ```
> > +/// use kernel::{c_str, define_ww_class};
> > +///
> > +/// define_ww_class!(WOUND_WAIT_GLOBAL_CLASS, wound_wait, c_str!("wound_wait_global_class"));
> > +/// define_ww_class!(WAIT_DIE_GLOBAL_CLASS, wait_die, c_str!("wait_die_global_class"));
> > +/// ```
> > +#[macro_export]
> > +macro_rules! define_ww_class {
> > +    ($name:ident, wound_wait, $class_name:expr) => {
> > +        static $name: $crate::sync::lock::ww_mutex::WwClass =
> > +            { $crate::sync::lock::ww_mutex::WwClass::new($class_name, false) };
> > +    };
> > +    ($name:ident, wait_die, $class_name:expr) => {
> > +        static $name: $crate::sync::lock::ww_mutex::WwClass =
> > +            { $crate::sync::lock::ww_mutex::WwClass::new($class_name, true) };
> > +    };
> > +}
> 
> I really don't see the value in having a macro here. The user can just
> declare the static themselves.

Yes, they can. This is just a helper to make things a bit simpler.
I am fine with either keeping or removing it; I don't have a strong
opinion on this macro.

Alice also made a suggestion about it (it should be visible in this
e-mail, as I replied to it as well): splitting `define_ww_class!` into
two macros, `define_wd_class!` and `define_ww_class!`.

In my opinion, having this macro doesn't add much value but it also
doesn't introduce any complexity, so it leaves us with a small gain, I
guess.
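
Without any macro, the user-side declaration Benno has in mind is just
(a sketch; this assumes `WwClass::new` stays a safe `pub const fn`,
which is still being discussed elsewhere in this thread):

    use kernel::c_str;
    use kernel::sync::lock::ww_mutex::WwClass;

    static BUFFER_CLASS: WwClass = WwClass::new(c_str!("buffer_class"), false);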

> > +
> > +impl WwClass {
> > +    /// Creates a [`WwClass`].
> > +    ///
> > +    /// It's `pub` only so it can be used by the `define_ww_class!` macro.
> > +    ///
> > +    /// You should not use this function directly. Use the `define_ww_class!`
> > +    /// macro or call [`WwClass::new_wait_die`] or [`WwClass::new_wound_wait`] instead.
> > +    pub const fn new(name: &'static CStr, is_wait_die: bool) -> Self {
> 
> This doesn't work together with `#[pin_data]`, you shouldn't return it
> by-value if it is supposed to be pin-initialized.
> 
> Is this type address sensitive? If yes, this function needs to be
> `unsafe`, if no, then we can remove `#[pin_data]`.

It should be an `unsafe` function, thanks.
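
If the type turns out to be address sensitive, the constructor would
then carry a safety contract, roughly like this (a sketch only; the
exact wording, and whether `#[pin_data]` stays, are still open):

    /// Creates a [`WwClass`].
    ///
    /// # Safety
    ///
    /// The returned value must be placed at a stable address before first
    /// use and never moved afterwards, since the inner `ww_class` may be
    /// address sensitive.
    pub const unsafe fn new(name: &'static CStr, is_wait_die: bool) -> Self {
        WwClass {
            inner: ww_class_init_helper!(name, is_wait_die),
        }
    }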


---

Regards,
Onur
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Boqun Feng 3 months, 2 weeks ago
On Mon, Jun 23, 2025 at 09:17:40PM +0300, Onur wrote:
> > [...]
> > > +    /// Checks if the mutex is currently locked.
> > > +    pub fn is_locked(&self) -> bool {
> > 
> > Did I miss a reply from you regarding:
> > 
> > 	https://lore.kernel.org/rust-for-linux/aFReIdlPPg4MmaYX@tardis.local/
> > 
> > no public is_lock() please. Do an assert_is_locked() instead. We need
> > to avoid users from abusing this.
> 
> Sorry, I missed that. Perhaps using `#[cfg(CONFIG_KUNIT)]`, e.g.:
> 

As long as it's not `pub`, it's fine. `#[cfg(CONFIG_KUNIT)]` is not
needed.

>     /// Checks if the mutex is currently locked.

Please mention that the function is only for internal testing, and
cannot be used to check whether another task has acquired a lock or
not.

Thanks!

Regards,
Boqun

>     #[cfg(CONFIG_KUNIT)]
>     fn is_locked(&self) -> bool {
>         // SAFETY: The mutex is pinned and valid.
>         unsafe { bindings::ww_mutex_is_locked(self.mutex.get()) }
>     }
> 
> would be better? What do you think?
[..]
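
Folding both of Boqun's points together, the accessor might end up as
(a sketch, not the final form):

    /// Checks if the mutex is currently locked.
    ///
    /// This is only meant for internal testing; it cannot be used to check
    /// whether another task has acquired the lock, since the answer may be
    /// stale as soon as it is returned.
    fn is_locked(&self) -> bool {
        // SAFETY: The mutex is pinned and valid.
        unsafe { bindings::ww_mutex_is_locked(self.mutex.get()) }
    }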
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Alice Ryhl 3 months, 2 weeks ago
On Sat, Jun 21, 2025 at 09:44:53PM +0300, Onur Özkan wrote:
> Adds Rust bindings for the kernel's `ww_mutex` infrastructure to enable
> deadlock-free acquisition of multiple related locks.
> 
> The patch abstracts `ww_mutex.h` header and wraps the existing
> C `ww_mutex` with three main types:
>     - `WwClass` for grouping related mutexes
>     - `WwAcquireCtx` for tracking lock acquisition context
>     - `WwMutex<T>` for the actual lock
> 
> Signed-off-by: Onur Özkan <work@onurozkan.dev>
> ---
>  rust/kernel/error.rs              |   1 +
>  rust/kernel/sync/lock.rs          |   1 +
>  rust/kernel/sync/lock/ww_mutex.rs | 421 ++++++++++++++++++++++++++++++
>  3 files changed, 423 insertions(+)
>  create mode 100644 rust/kernel/sync/lock/ww_mutex.rs
> 
> diff --git a/rust/kernel/error.rs b/rust/kernel/error.rs
> index 3dee3139fcd4..28157541e12c 100644
> --- a/rust/kernel/error.rs
> +++ b/rust/kernel/error.rs
> @@ -64,6 +64,7 @@ macro_rules! declare_err {
>      declare_err!(EPIPE, "Broken pipe.");
>      declare_err!(EDOM, "Math argument out of domain of func.");
>      declare_err!(ERANGE, "Math result not representable.");
> +    declare_err!(EDEADLK, "Resource deadlock avoided.");
>      declare_err!(EOVERFLOW, "Value too large for defined data type.");
>      declare_err!(ERESTARTSYS, "Restart the system call.");
>      declare_err!(ERESTARTNOINTR, "System call was interrupted by a signal and will be restarted.");
> diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
> index e82fa5be289c..8824ebc81084 100644
> --- a/rust/kernel/sync/lock.rs
> +++ b/rust/kernel/sync/lock.rs
> @@ -15,6 +15,7 @@
>  
>  pub mod mutex;
>  pub mod spinlock;
> +pub mod ww_mutex;
>  
>  pub(super) mod global;
>  pub use global::{GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy};
> diff --git a/rust/kernel/sync/lock/ww_mutex.rs b/rust/kernel/sync/lock/ww_mutex.rs
> new file mode 100644
> index 000000000000..dcb23941813c
> --- /dev/null
> +++ b/rust/kernel/sync/lock/ww_mutex.rs
> @@ -0,0 +1,421 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! A kernel Wound/Wait Mutex.
> +//!
> +//! This module provides Rust abstractions for the Linux kernel's `ww_mutex` implementation,
> +//! which provides deadlock avoidance through a wait-wound or wait-die algorithm.
> +//!
> +//! C header: [`include/linux/ww_mutex.h`](srctree/include/linux/ww_mutex.h)
> +//!
> +//! For more information: <https://docs.kernel.org/locking/ww-mutex-design.html>
> +
> +use crate::bindings;
> +use crate::error::to_result;
> +use crate::prelude::*;
> +use crate::types::{NotThreadSafe, Opaque};
> +use core::cell::UnsafeCell;
> +use core::marker::PhantomData;
> +
> +/// Create static [`WwClass`] instances.
> +///
> +/// # Examples
> +///
> +/// ```
> +/// use kernel::{c_str, define_ww_class};
> +///
> +/// define_ww_class!(WOUND_WAIT_GLOBAL_CLASS, wound_wait, c_str!("wound_wait_global_class"));
> +/// define_ww_class!(WAIT_DIE_GLOBAL_CLASS, wait_die, c_str!("wait_die_global_class"));
> +/// ```

should we split this into two macros define_ww_class! and
define_wd_class! to match the C macros?

> +#[macro_export]
> +macro_rules! define_ww_class {
> +    ($name:ident, wound_wait, $class_name:expr) => {
> +        static $name: $crate::sync::lock::ww_mutex::WwClass =
> +            { $crate::sync::lock::ww_mutex::WwClass::new($class_name, false) };
> +    };
> +    ($name:ident, wait_die, $class_name:expr) => {
> +        static $name: $crate::sync::lock::ww_mutex::WwClass =
> +            { $crate::sync::lock::ww_mutex::WwClass::new($class_name, true) };
> +    };
> +}
> +
> +/// A group of mutexes that can participate in deadlock avoidance together.
> +///
> +/// All mutexes that might be acquired together should use the same class.
> +///
> +/// # Examples
> +///
> +/// ```
> +/// use kernel::sync::lock::ww_mutex::WwClass;
> +/// use kernel::c_str;
> +/// use pin_init::stack_pin_init;
> +///
> +/// stack_pin_init!(let _wait_die_class = WwClass::new_wait_die(c_str!("graphics_buffers")));
> +/// stack_pin_init!(let _wound_wait_class = WwClass::new_wound_wait(c_str!("memory_pools")));
> +///
> +/// # Ok::<(), Error>(())
> +/// ```
> +#[pin_data]
> +pub struct WwClass {
> +    #[pin]
> +    inner: Opaque<bindings::ww_class>,
> +}
> +
> +// SAFETY: [`WwClass`] is set up once and never modified. It's fine to share it across threads.
> +unsafe impl Sync for WwClass {}
> +// SAFETY: Doesn't hold anything thread-specific. It's safe to send to other threads.
> +unsafe impl Send for WwClass {}
> +
> +macro_rules! ww_class_init_helper {
> +    ($name:expr, $is_wait_die:expr) => {
> +        Opaque::new(bindings::ww_class {
> +            stamp: bindings::atomic_long_t { counter: 0 },
> +            acquire_name: $name.as_char_ptr(),
> +            mutex_name: $name.as_char_ptr(),
> +            is_wait_die: $is_wait_die as u32,
> +            // TODO: Replace with `bindings::lock_class_key::default()` once stabilized for `const`.
> +            //
> +            // SAFETY: This is always zero-initialized when defined with `DEFINE_WD_CLASS`
> +            // globally on C side.
> +            //
> +            // Ref: <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/ww_mutex.h?h=v6.16-rc2#n85>
> +            acquire_key: unsafe { core::mem::zeroed() },
> +            // TODO: Replace with `bindings::lock_class_key::default()` once stabilized for `const`.
> +            //
> +            // SAFETY: This is always zero-initialized when defined with `DEFINE_WD_CLASS`
> +            // globally on C side.
> +            //
> +            // Ref: <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/ww_mutex.h?h=v6.16-rc2#n85>
> +            mutex_key: unsafe { core::mem::zeroed() },
> +        })
> +    };
> +}

You don't need this macro. You can have `new_wait_die` or
`new_wound_wait` call `new` directly and just move the macro into `new`.
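
Concretely, the field initialization would move into `new` and the
pin-initializing constructors would delegate to it, something like this
(a sketch; it assumes `new` keeps returning `Self` by value, which is
questioned elsewhere in this thread):

    impl WwClass {
        pub const fn new(name: &'static CStr, is_wait_die: bool) -> Self {
            // Field initialization moved here from `ww_class_init_helper!`.
            WwClass { inner: /* ... as in the former helper macro body ... */ }
        }

        /// Creates wait-die [`WwClass`].
        pub fn new_wait_die(name: &'static CStr) -> impl PinInit<Self> {
            pin_init!(WwClass { inner: Self::new(name, true).inner })
        }

        /// Creates wound-wait [`WwClass`].
        pub fn new_wound_wait(name: &'static CStr) -> impl PinInit<Self> {
            pin_init!(WwClass { inner: Self::new(name, false).inner })
        }
    }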

> +impl WwClass {
> +    /// Creates a [`WwClass`].
> +    ///
> +    /// It's `pub` only so it can be used by the `define_ww_class!` macro.
> +    ///
> +    /// You should not use this function directly. Use the `define_ww_class!`
> +    /// macro or call [`WwClass::new_wait_die`] or [`WwClass::new_wound_wait`] instead.
> +    pub const fn new(name: &'static CStr, is_wait_die: bool) -> Self {
> +        WwClass {
> +            inner: ww_class_init_helper!(name, is_wait_die),
> +        }
> +    }
> +
> +    /// Creates wait-die [`WwClass`].
> +    pub fn new_wait_die(name: &'static CStr) -> impl PinInit<Self> {
> +        pin_init!(WwClass {
> +            inner: ww_class_init_helper!(name, true),
> +        })
> +    }
> +
> +    /// Creates wound-wait [`WwClass`].
> +    pub fn new_wound_wait(name: &'static CStr) -> impl PinInit<Self> {
> +        pin_init!(WwClass {
> +            inner: ww_class_init_helper!(name, false),
> +        })
> +    }
> +}
> +
> +/// An acquire context is used to group multiple mutex acquisitions together
> +/// for deadlock avoidance. It must be used when acquiring multiple mutexes
> +/// of the same class.
> +///
> +/// # Examples
> +///
> +/// ```
> +/// use kernel::sync::lock::ww_mutex::{WwClass, WwAcquireCtx, WwMutex};
> +/// use kernel::c_str;
> +/// use kernel::alloc::KBox;
> +/// use pin_init::stack_pin_init;
> +///
> +/// stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("my_class")));
> +///
> +/// // Create mutexes.
> +/// stack_pin_init!(let mutex1 = WwMutex::new(1, &class));
> +/// stack_pin_init!(let mutex2 = WwMutex::new(2, &class));
> +///
> +/// // Create acquire context for deadlock avoidance.
> +/// let mut ctx = KBox::pin_init(WwAcquireCtx::new(&class), GFP_KERNEL)?;
> +///
> +/// // Acquire multiple locks safely.
> +/// let guard1 = mutex1.lock(Some(&ctx))?;
> +/// let guard2 = mutex2.lock(Some(&ctx))?;
> +///
> +/// // Mark acquisition phase as complete.
> +/// ctx.as_mut().done();
> +///
> +/// # Ok::<(), Error>(())
> +/// ```
> +#[pin_data(PinnedDrop)]
> +pub struct WwAcquireCtx<'a> {
> +    #[pin]
> +    inner: Opaque<bindings::ww_acquire_ctx>,
> +    _p: PhantomData<&'a WwClass>,
> +}
> +
> +// SAFETY: Used in controlled ways during lock acquisition. No race risk.
> +unsafe impl Sync for WwAcquireCtx<'_> {}
> +// SAFETY: Doesn't rely on thread-local state. Safe to move between threads.
> +unsafe impl Send for WwAcquireCtx<'_> {}
> +
> +impl<'ctx> WwAcquireCtx<'ctx> {
> +    /// Initializes `Self` with calling C side `ww_acquire_init` inside.
> +    pub fn new<'class: 'ctx>(ww_class: &'class WwClass) -> impl PinInit<Self> {

I would rename 'ctx to 'class and get rid of this super-lifetime.

impl<'class> WwAcquireCtx<'class> {
    /// Initializes `Self` with calling C side `ww_acquire_init` inside.
    pub fn new(ww_class: &'class WwClass) -> impl PinInit<Self> {

> +        let raw_ptr = ww_class.inner.get();
> +        pin_init!(WwAcquireCtx {
> +            inner <- Opaque::ffi_init(|slot: *mut bindings::ww_acquire_ctx| {
> +                // SAFETY: The caller guarantees that `ww_class` remains valid.
> +                unsafe { bindings::ww_acquire_init(slot, raw_ptr) }
> +            }),
> +            _p: PhantomData
> +        })
> +    }
> +
> +    /// Marks the end of the acquire phase with C side `ww_acquire_done`.
> +    ///
> +    /// After calling this function, no more mutexes can be acquired with this context.
> +    pub fn done(self: Pin<&mut Self>) {
> +        // SAFETY: The context is pinned and valid.
> +        unsafe { bindings::ww_acquire_done(self.inner.get()) };
> +    }
> +
> +    /// Returns a raw pointer to the inner `ww_acquire_ctx`.
> +    fn as_ptr(&self) -> *mut bindings::ww_acquire_ctx {
> +        self.inner.get()
> +    }
> +}
> +
> +#[pinned_drop]
> +impl PinnedDrop for WwAcquireCtx<'_> {
> +    fn drop(self: Pin<&mut Self>) {
> +        // SAFETY: The context is being dropped and is pinned.
> +        unsafe { bindings::ww_acquire_fini(self.inner.get()) };
> +    }
> +}
> +
> +/// A wound/wait mutex backed with C side `ww_mutex`.
> +///
> +/// This is a mutual exclusion primitive that provides deadlock avoidance when
> +/// acquiring multiple locks of the same class.
> +///
> +/// # Examples
> +///
> +/// ## Basic Usage
> +///
> +/// ```
> +/// use kernel::sync::lock::ww_mutex::{WwClass, WwMutex};
> +/// use kernel::c_str;
> +/// use pin_init::stack_pin_init;
> +///
> +/// stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("buffer_class")));
> +/// stack_pin_init!(let mutex = WwMutex::new(42, &class));
> +///
> +/// // Simple lock without context.
> +/// let guard = mutex.lock(None)?;
> +/// assert_eq!(*guard, 42);
> +///
> +/// # Ok::<(), Error>(())
> +/// ```
> +///
> +/// ## Multiple Locks
> +///
> +/// ```
> +/// use kernel::c_str;
> +/// use kernel::prelude::*;
> +/// use kernel::sync::lock::ww_mutex::{WwClass, WwAcquireCtx, WwMutex};
> +/// use pin_init::stack_pin_init;
> +///
> +/// stack_pin_init!(let class = WwClass::new_wait_die(c_str!("resource_class")));
> +/// stack_pin_init!(let mutex_a = WwMutex::new("Resource A", &class));
> +/// stack_pin_init!(let mutex_b = WwMutex::new("Resource B", &class));
> +///
> +/// let mut ctx = KBox::pin_init(WwAcquireCtx::new(&class), GFP_KERNEL)?;
> +///
> +/// // Try to acquire both locks.
> +/// let guard_a = match mutex_a.lock(Some(&ctx)) {
> +///     Ok(guard) => guard,
> +///     Err(e) if e == EDEADLK => {
> +///         // Deadlock detected, use slow path.
> +///         mutex_a.lock_slow(&ctx)?
> +///     }
> +///     Err(e) => return Err(e),
> +/// };
> +///
> +/// let guard_b = mutex_b.lock(Some(&ctx))?;
> +/// ctx.as_mut().done();
> +///
> +/// # Ok::<(), Error>(())
> +/// ```
> +#[pin_data]
> +pub struct WwMutex<'a, T: ?Sized> {
> +    _p: PhantomData<&'a WwClass>,
> +    #[pin]
> +    mutex: Opaque<bindings::ww_mutex>,
> +    data: UnsafeCell<T>,
> +}
> +
> +// SAFETY: [`WwMutex`] can be shared between threads.
> +unsafe impl<T: ?Sized + Send> Send for WwMutex<'_, T> {}
> +// SAFETY: [`WwMutex`] can be safely accessed from multiple threads concurrently.
> +unsafe impl<T: ?Sized + Sync> Sync for WwMutex<'_, T> {}
> +
> +impl<'ww_class, T> WwMutex<'ww_class, T> {
> +    /// Creates `Self` with calling `ww_mutex_init` inside.
> +    pub fn new(t: T, ww_class: &'ww_class WwClass) -> impl PinInit<Self> {
> +        let raw_ptr = ww_class.inner.get();
> +        pin_init!(WwMutex {
> +            mutex <- Opaque::ffi_init(|slot: *mut bindings::ww_mutex| {
> +                // SAFETY: The caller guarantees that `ww_class` remains valid.
> +                unsafe { bindings::ww_mutex_init(slot, raw_ptr) }
> +            }),
> +            data: UnsafeCell::new(t),
> +            _p: PhantomData,
> +        })
> +    }
> +}
> +
> +impl<T: ?Sized> WwMutex<'_, T> {
> +    /// Locks the mutex with the given acquire context.
> +    pub fn lock<'a>(&'a self, ctx: Option<&WwAcquireCtx<'_>>) -> Result<WwMutexGuard<'a, T>> {
> +        // SAFETY: The mutex is pinned and valid.
> +        let ret = unsafe {
> +            bindings::ww_mutex_lock(
> +                self.mutex.get(),
> +                ctx.map_or(core::ptr::null_mut(), |c| c.as_ptr()),
> +            )
> +        };
> +
> +        to_result(ret)?;
> +
> +        Ok(WwMutexGuard::new(self))
> +    }
> +
> +    /// Locks the mutex with the given acquire context, interruptible.
> +    ///
> +    /// Similar to `lock`, but can be interrupted by signals.
> +    pub fn lock_interruptible<'a>(
> +        &'a self,
> +        ctx: Option<&WwAcquireCtx<'_>>,
> +    ) -> Result<WwMutexGuard<'a, T>> {
> +        // SAFETY: The mutex is pinned and valid.
> +        let ret = unsafe {
> +            bindings::ww_mutex_lock_interruptible(
> +                self.mutex.get(),
> +                ctx.map_or(core::ptr::null_mut(), |c| c.as_ptr()),
> +            )
> +        };
> +
> +        to_result(ret)?;
> +
> +        Ok(WwMutexGuard::new(self))
> +    }
> +
> +    /// Locks the mutex in the slow path after a die case.
> +    ///
> +    /// This should be called after releasing all held mutexes when `lock` returns [`EDEADLK`].
> +    pub fn lock_slow<'a>(&'a self, ctx: &WwAcquireCtx<'_>) -> Result<WwMutexGuard<'a, T>> {
> +        // SAFETY: The mutex is pinned and valid, and we're in the slow path.
> +        unsafe { bindings::ww_mutex_lock_slow(self.mutex.get(), ctx.as_ptr()) };
> +
> +        Ok(WwMutexGuard::new(self))
> +    }
> +
> +    /// Locks the mutex in the slow path after a die case, interruptible.
> +    pub fn lock_slow_interruptible<'a>(
> +        &'a self,
> +        ctx: &WwAcquireCtx<'_>,
> +    ) -> Result<WwMutexGuard<'a, T>> {
> +        // SAFETY: The mutex is pinned and valid, and we are in the slow path.
> +        let ret =
> +            unsafe { bindings::ww_mutex_lock_slow_interruptible(self.mutex.get(), ctx.as_ptr()) };
> +
> +        to_result(ret)?;
> +
> +        Ok(WwMutexGuard::new(self))
> +    }
> +
> +    /// Tries to lock the mutex without blocking.
> +    pub fn try_lock<'a>(&'a self, ctx: Option<&WwAcquireCtx<'_>>) -> Result<WwMutexGuard<'a, T>> {
> +        // SAFETY: The mutex is pinned and valid.
> +        let ret = unsafe {
> +            bindings::ww_mutex_trylock(
> +                self.mutex.get(),
> +                ctx.map_or(core::ptr::null_mut(), |c| c.as_ptr()),
> +            )
> +        };
> +
> +        if ret == 0 {
> +            return Err(EBUSY);
> +        }
> +
> +        to_result(if ret < 0 { ret } else { 0 })?;
> +
> +        Ok(WwMutexGuard::new(self))
> +    }
> +
> +    /// Checks if the mutex is currently locked.
> +    pub fn is_locked(&self) -> bool {
> +        // SAFETY: The mutex is pinned and valid.
> +        unsafe { bindings::ww_mutex_is_locked(self.mutex.get()) }
> +    }
> +
> +    /// Returns a raw pointer to the inner mutex.
> +    fn as_ptr(&self) -> *mut bindings::ww_mutex {
> +        self.mutex.get()
> +    }
> +}
> +
> +/// A guard that provides exclusive access to the data protected
> +/// by a [`WwMutex`].
> +///
> +/// # Invariants
> +///
> +/// The guard holds an exclusive lock on the associated [`WwMutex`]. The lock is held
> +/// for the entire lifetime of this guard and is automatically released when the
> +/// guard is dropped.
> +#[must_use = "the lock unlocks immediately when the guard is unused"]
> +pub struct WwMutexGuard<'a, T: ?Sized> {
> +    mutex: &'a WwMutex<'a, T>,
> +    _not_send: NotThreadSafe,
> +}
> +
> +// SAFETY: [`WwMutexGuard`] can be transferred across thread boundaries if the data can.
> +unsafe impl<T: ?Sized + Send> Send for WwMutexGuard<'_, T> {}
> +
> +// SAFETY: [`WwMutexGuard`] can be shared between threads if the data can.
> +unsafe impl<T: ?Sized + Send + Sync> Sync for WwMutexGuard<'_, T> {}
> +
> +impl<'a, T: ?Sized> WwMutexGuard<'a, T> {
> +    /// Creates a new guard for a locked mutex.
> +    fn new(mutex: &'a WwMutex<'a, T>) -> Self {
> +        Self {
> +            mutex,
> +            _not_send: NotThreadSafe,
> +        }
> +    }
> +}
> +
> +impl<T: ?Sized> core::ops::Deref for WwMutexGuard<'_, T> {
> +    type Target = T;
> +
> +    fn deref(&self) -> &Self::Target {
> +        // SAFETY: We hold the lock, so we have exclusive access.
> +        unsafe { &*self.mutex.data.get() }
> +    }
> +}
> +
> +impl<T: ?Sized> core::ops::DerefMut for WwMutexGuard<'_, T> {
> +    fn deref_mut(&mut self) -> &mut Self::Target {
> +        // SAFETY: We hold the lock, so we have exclusive access.
> +        unsafe { &mut *self.mutex.data.get() }
> +    }
> +}
> +
> +impl<T: ?Sized> Drop for WwMutexGuard<'_, T> {
> +    fn drop(&mut self) {
> +        // SAFETY: We hold the lock and are about to release it.
> +        unsafe { bindings::ww_mutex_unlock(self.mutex.as_ptr()) };
> +    }
> +}
> -- 
> 2.49.0
> 
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 3 months, 2 weeks ago
On Sat Jun 21, 2025 at 8:44 PM CEST, Onur Özkan wrote:
> Adds Rust bindings for the kernel's `ww_mutex` infrastructure to enable
> deadlock-free acquisition of multiple related locks.
>
> The patch abstracts `ww_mutex.h` header and wraps the existing
> C `ww_mutex` with three main types:
>     - `WwClass` for grouping related mutexes
>     - `WwAcquireCtx` for tracking lock acquisition context
>     - `WwMutex<T>` for the actual lock

Going to repeat my question from the previous version:

    I don't know the design of `struct ww_mutex`, but from the code below I
    gathered that it has some special error return values that signify that
    one should release other locks.
    
    Did anyone think about making a more Rusty API that would allow one to
    try to lock multiple mutexes at the same time (in a specified order) and
    if it fails, it would do the resetting automatically?

I'm not familiar with ww_mutex, so I can't tell if there is something
good that we could do.

> Signed-off-by: Onur Özkan <work@onurozkan.dev>
> ---
>  rust/kernel/error.rs              |   1 +
>  rust/kernel/sync/lock.rs          |   1 +
>  rust/kernel/sync/lock/ww_mutex.rs | 421 ++++++++++++++++++++++++++++++
>  3 files changed, 423 insertions(+)
>  create mode 100644 rust/kernel/sync/lock/ww_mutex.rs

Thanks for splitting the tests into its own patch, but I still think
it's useful to do the abstractions for `ww_class`, `ww_acquire_ctx` and
`ww_mutex` separately.

> +/// Create static [`WwClass`] instances.
> +///
> +/// # Examples
> +///
> +/// ```
> +/// use kernel::{c_str, define_ww_class};
> +///
> +/// define_ww_class!(WOUND_WAIT_GLOBAL_CLASS, wound_wait, c_str!("wound_wait_global_class"));
> +/// define_ww_class!(WAIT_DIE_GLOBAL_CLASS, wait_die, c_str!("wait_die_global_class"));
> +/// ```
> +#[macro_export]
> +macro_rules! define_ww_class {
> +    ($name:ident, wound_wait, $class_name:expr) => {
> +        static $name: $crate::sync::lock::ww_mutex::WwClass =
> +            { $crate::sync::lock::ww_mutex::WwClass::new($class_name, false) };
> +    };
> +    ($name:ident, wait_die, $class_name:expr) => {
> +        static $name: $crate::sync::lock::ww_mutex::WwClass =
> +            { $crate::sync::lock::ww_mutex::WwClass::new($class_name, true) };
> +    };
> +}

I really don't see the value in having a macro here. The user can just
declare the static themselves.

> +
> +/// A group of mutexes that can participate in deadlock avoidance together.

This isn't a group of mutexes, but rather a class that can be used to
group mutexes together.

> +///
> +/// All mutexes that might be acquired together should use the same class.
> +///
> +/// # Examples
> +///
> +/// ```
> +/// use kernel::sync::lock::ww_mutex::WwClass;
> +/// use kernel::c_str;
> +/// use pin_init::stack_pin_init;
> +///
> +/// stack_pin_init!(let _wait_die_class = WwClass::new_wait_die(c_str!("graphics_buffers")));
> +/// stack_pin_init!(let _wound_wait_class = WwClass::new_wound_wait(c_str!("memory_pools")));
> +///
> +/// # Ok::<(), Error>(())
> +/// ```
> +#[pin_data]
> +pub struct WwClass {
> +    #[pin]
> +    inner: Opaque<bindings::ww_class>,
> +}
> +
> +// SAFETY: [`WwClass`] is set up once and never modified. It's fine to share it across threads.
> +unsafe impl Sync for WwClass {}
> +// SAFETY: Doesn't hold anything thread-specific. It's safe to send to other threads.
> +unsafe impl Send for WwClass {}
> +
> +macro_rules! ww_class_init_helper {
> +    ($name:expr, $is_wait_die:expr) => {
> +        Opaque::new(bindings::ww_class {
> +            stamp: bindings::atomic_long_t { counter: 0 },
> +            acquire_name: $name.as_char_ptr(),
> +            mutex_name: $name.as_char_ptr(),
> +            is_wait_die: $is_wait_die as u32,
> +            // TODO: Replace with `bindings::lock_class_key::default()` once stabilized for `const`.
> +            //
> +            // SAFETY: This is always zero-initialized when defined with `DEFINE_WD_CLASS`
> +            // globally on C side.
> +            //
> +            // Ref: <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/ww_mutex.h?h=v6.16-rc2#n85>
> +            acquire_key: unsafe { core::mem::zeroed() },
> +            // TODO: Replace with `bindings::lock_class_key::default()` once stabilized for `const`.
> +            //
> +            // SAFETY: This is always zero-initialized when defined with `DEFINE_WD_CLASS`
> +            // globally on C side.
> +            //
> +            // Ref: <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/ww_mutex.h?h=v6.16-rc2#n85>
> +            mutex_key: unsafe { core::mem::zeroed() },
> +        })
> +    };
> +}

Is this really needed? Can't we do without this macro?

> +
> +impl WwClass {
> +    /// Creates a [`WwClass`].
> +    ///
> +    /// It's `pub` only so it can be used by the `define_ww_class!` macro.
> +    ///
> +    /// You should not use this function directly. Use the `define_ww_class!`
> +    /// macro or call [`WwClass::new_wait_die`] or [`WwClass::new_wound_wait`] instead.
> +    pub const fn new(name: &'static CStr, is_wait_die: bool) -> Self {

This doesn't work together with `#[pin_data]`, you shouldn't return it
by-value if it is supposed to be pin-initialized.

Is this type address sensitive? If yes, this function needs to be
`unsafe`, if no, then we can remove `#[pin_data]`.

> +        WwClass {
> +            inner: ww_class_init_helper!(name, is_wait_die),
> +        }
> +    }
> +
> +    /// Creates wait-die [`WwClass`].
> +    pub fn new_wait_die(name: &'static CStr) -> impl PinInit<Self> {
> +        pin_init!(WwClass {
> +            inner: ww_class_init_helper!(name, true),
> +        })

This can just be `new(name, true)` instead.

> +    }
> +
> +    /// Creates wound-wait [`WwClass`].
> +    pub fn new_wound_wait(name: &'static CStr) -> impl PinInit<Self> {
> +        pin_init!(WwClass {
> +            inner: ww_class_init_helper!(name, false),
> +        })

Ditto with `false`.

> +    }
> +}
> +
> +/// An acquire context is used to group multiple mutex acquisitions together
> +/// for deadlock avoidance. It must be used when acquiring multiple mutexes
> +/// of the same class.

The first paragraph will be displayed by rustdoc in several summary
views, so it should be a single short sentence.

> +///
> +/// # Examples
> +///
> +/// ```
> +/// use kernel::sync::lock::ww_mutex::{WwClass, WwAcquireCtx, WwMutex};
> +/// use kernel::c_str;
> +/// use kernel::alloc::KBox;
> +/// use pin_init::stack_pin_init;
> +///
> +/// stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("my_class")));
> +///
> +/// // Create mutexes.
> +/// stack_pin_init!(let mutex1 = WwMutex::new(1, &class));
> +/// stack_pin_init!(let mutex2 = WwMutex::new(2, &class));

I don't think it makes sense to have examples using mutexes that are on
the stack. Please use `Arc` instead.
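
An `Arc`-backed variant of that example might read as follows (a
sketch; whether the class itself may stay on the stack while the
mutexes live in `Arc`s depends on the final lifetime design):

    use kernel::c_str;
    use kernel::prelude::*;
    use kernel::sync::Arc;
    use kernel::sync::lock::ww_mutex::{WwClass, WwMutex};
    use pin_init::stack_pin_init;

    stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("my_class")));

    // Mutexes shared through `Arc` instead of living on the stack.
    let mutex1 = Arc::pin_init(WwMutex::new(1, &class), GFP_KERNEL)?;
    let mutex2 = Arc::pin_init(WwMutex::new(2, &class), GFP_KERNEL)?;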

> +///
> +/// // Create acquire context for deadlock avoidance.
> +/// let mut ctx = KBox::pin_init(WwAcquireCtx::new(&class), GFP_KERNEL)?;
> +///
> +/// // Acquire multiple locks safely.
> +/// let guard1 = mutex1.lock(Some(&ctx))?;
> +/// let guard2 = mutex2.lock(Some(&ctx))?;
> +///
> +/// // Mark acquisition phase as complete.
> +/// ctx.as_mut().done();
> +///
> +/// # Ok::<(), Error>(())
> +/// ```
> +#[pin_data(PinnedDrop)]
> +pub struct WwAcquireCtx<'a> {
> +    #[pin]
> +    inner: Opaque<bindings::ww_acquire_ctx>,
> +    _p: PhantomData<&'a WwClass>,
> +}

> +// SAFETY: [`WwMutex`] can be shared between threads.
> +unsafe impl<T: ?Sized + Send> Send for WwMutex<'_, T> {}
> +// SAFETY: [`WwMutex`] can be safely accessed from multiple threads concurrently.
> +unsafe impl<T: ?Sized + Sync> Sync for WwMutex<'_, T> {}
> +
> +impl<'ww_class, T> WwMutex<'ww_class, T> {
> +    /// Creates `Self` with calling `ww_mutex_init` inside.
> +    pub fn new(t: T, ww_class: &'ww_class WwClass) -> impl PinInit<Self> {
> +        let raw_ptr = ww_class.inner.get();

s/raw_ptr/class/

> +        pin_init!(WwMutex {
> +            mutex <- Opaque::ffi_init(|slot: *mut bindings::ww_mutex| {
> +                // SAFETY: The caller guarantees that `ww_class` remains valid.

The caller doesn't need to guarantee that (and in fact they don't),
because it's a reference that lives until `'ww_class`, which is captured
by `Self`. So the borrow checker guarantees that the class remains
valid.

---
Cheers,
Benno

> +                unsafe { bindings::ww_mutex_init(slot, raw_ptr) }
> +            }),
> +            data: UnsafeCell::new(t),
> +            _p: PhantomData,
> +        })
> +    }
> +}
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Boqun Feng 3 months, 2 weeks ago
On Sun, Jun 22, 2025 at 11:18:24AM +0200, Benno Lossin wrote:
> On Sat Jun 21, 2025 at 8:44 PM CEST, Onur Özkan wrote:
> > Adds Rust bindings for the kernel's `ww_mutex` infrastructure to enable
> > deadlock-free acquisition of multiple related locks.
> >
> > The patch abstracts `ww_mutex.h` header and wraps the existing
> > C `ww_mutex` with three main types:
> >     - `WwClass` for grouping related mutexes
> >     - `WwAcquireCtx` for tracking lock acquisition context
> >     - `WwMutex<T>` for the actual lock
> 
> Going to repeat my question from the previous version:
> 
>     I don't know the design of `struct ww_mutex`, but from the code below I
>     gathered that it has some special error return values that signify that
>     one should release other locks.
>     
>     Did anyone think about making a more Rusty API that would allow one to
>     try to lock multiple mutexes at the same time (in a specified order) and
>     if it fails, it would do the resetting automatically?

But the order may not be known ahead of time, for example say you have
a few:

    pub struct Foo {
        other: Arc<WwMutex<Foo>>,
        data: i32,
    }

you need to get the lock of the current object in order to know what's
the next object to lock.

> 
> I'm not familiar with ww_mutex, so I can't tell if there is something
> good that we could do.
> 

It's not a bad idea when it can apply, but we still need to support the
case where the order is unknown.

Regards,
Boqun
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 3 months, 2 weeks ago
On Mon Jun 23, 2025 at 3:04 PM CEST, Boqun Feng wrote:
> On Sun, Jun 22, 2025 at 11:18:24AM +0200, Benno Lossin wrote:
>> On Sat Jun 21, 2025 at 8:44 PM CEST, Onur Özkan wrote:
>> > Adds Rust bindings for the kernel's `ww_mutex` infrastructure to enable
>> > deadlock-free acquisition of multiple related locks.
>> >
>> > The patch abstracts `ww_mutex.h` header and wraps the existing
>> > C `ww_mutex` with three main types:
>> >     - `WwClass` for grouping related mutexes
>> >     - `WwAcquireCtx` for tracking lock acquisition context
>> >     - `WwMutex<T>` for the actual lock
>> 
>> Going to repeat my question from the previous version:
>> 
>>     I don't know the design of `struct ww_mutex`, but from the code below I
>>     gathered that it has some special error return values that signify that
>>     one should release other locks.
>>     
>>     Did anyone think about making a more Rusty API that would allow one to
>>     try to lock multiple mutexes at the same time (in a specified order) and
>>     if it fails, it would do the resetting automatically?
>
> But the order may not be known ahead of time, for example say you have
> a few:
>
>     pub struct Foo {
>         other: Arc<WwMutex<Foo>>,
> 	data: i32,
>     }
>
> you need to get the lock of the current object in order to know what's
> the next object to lock.
>
>> 
>> I'm not familiar with ww_mutex, so I can't tell if there is something
>> good that we could do.
>> 
>
> It's not a bad idea when it can apply, but we still need to support the
> case where the order is unknown.

I didn't have a concrete API in mind, but after having read the
abstractions more, would this make sense?

    let ctx: &WwAcquireCtx = ...;
    let m1: &WwMutex<T> = ...;
    let m2: &WwMutex<Foo> = ...;

    let (t, foo, foo2) = ctx
        .begin()
        .lock(m1)
        .lock(m2)
        .lock_with(|(t, foo)| &*foo.other)
        .finish();

    let _: &mut T = t;
    let _: &mut Foo = foo;
    let _: &mut Foo = foo2;

---
Cheers,
Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Boqun Feng 3 months, 2 weeks ago
On Mon, Jun 23, 2025 at 03:44:58PM +0200, Benno Lossin wrote:
> On Mon Jun 23, 2025 at 3:04 PM CEST, Boqun Feng wrote:
> > On Sun, Jun 22, 2025 at 11:18:24AM +0200, Benno Lossin wrote:
> >> On Sat Jun 21, 2025 at 8:44 PM CEST, Onur Özkan wrote:
> >> > Adds Rust bindings for the kernel's `ww_mutex` infrastructure to enable
> >> > deadlock-free acquisition of multiple related locks.
> >> >
> >> > The patch abstracts `ww_mutex.h` header and wraps the existing
> >> > C `ww_mutex` with three main types:
> >> >     - `WwClass` for grouping related mutexes
> >> >     - `WwAcquireCtx` for tracking lock acquisition context
> >> >     - `WwMutex<T>` for the actual lock
> >> 
> >> Going to repeat my question from the previous version:
> >> 
> >>     I don't know the design of `struct ww_mutex`, but from the code below I
> >>     gathered that it has some special error return values that signify that
> >>     one should release other locks.
> >>     
> >>     Did anyone think about making a more Rusty API that would allow one to
> >>     try to lock multiple mutexes at the same time (in a specified order) and
> >>     if it fails, it would do the resetting automatically?
> >
> > But the order may not be known ahead of time, for example say you have
> > a few:
> >
> >     pub struct Foo {
> >         other: Arc<WwMutex<Foo>>,
> > 	data: i32,
> >     }
> >
> > you need to get the lock of the current object in order to know what's
> > the next object to lock.
> >
> >> 
> >> I'm not familiar with ww_mutex, so I can't tell if there is something
> >> good that we could do.
> >> 
> >
> > It's not a bad idea when it can apply, but we still need to support the
> > case where the order is unknown.
> 
> I didn't have a concrete API in mind, but after having read the
> abstractions more, would this make sense?
> 
>     let ctx: &WwAcquireCtx = ...;
>     let m1: &WwMutex<T> = ...;
>     let m2: &WwMutex<Foo> = ...;
> 
>     let (t, foo, foo2) = ctx
>         .begin()
>         .lock(m1)
>         .lock(m2)
>         .lock_with(|(t, foo)| &*foo.other)
>         .finish();
> 

Cute!

However, each `.lock()` will need to be polymorphic over a tuple of
locks that are already held, right? Otherwise I don't see how
`.lock_with()` knows it's already holding two locks. That sounds like a
challenge for the implementation. We also need to take into account
that the user may want to drop any lock in the sequence, e.g. the user
acquires a, b and c, then drops b, and then acquires d, which I think
is possible for ww_mutex.

Regards,
Boqun

>     let _: &mut T = t;
>     let _: &mut Foo = foo;
>     let _: &mut Foo = foo2;
> 
> ---
> Cheers,
> Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 3 months, 2 weeks ago
On Mon Jun 23, 2025 at 4:47 PM CEST, Boqun Feng wrote:
> On Mon, Jun 23, 2025 at 03:44:58PM +0200, Benno Lossin wrote:
>> I didn't have a concrete API in mind, but after having read the
>> abstractions more, would this make sense?
>> 
>>     let ctx: &WwAcquireCtx = ...;
>>     let m1: &WwMutex<T> = ...;
>>     let m2: &WwMutex<Foo> = ...;
>> 
>>     let (t, foo, foo2) = ctx
>>         .begin()
>>         .lock(m1)
>>         .lock(m2)
>>         .lock_with(|(t, foo)| &*foo.other)
>>         .finish();
>> 
>
> Cute!
>
> However, each `.lock()` will need to be polymorphic over a tuple of
> locks that are already held, right? Otherwise I don't see how
> `.lock_with()` knows it's already held two locks. That sounds like a
> challenge for implementation.

I think it's doable if we have 

    impl WwAcquireCtx {
        fn begin(&self) -> WwActiveCtx<'_, ()>;
    }

    struct WwActiveCtx<'a, Locks> {
        locks: Locks,
        _ctx: PhantomData<&'a WwAcquireCtx>,
    }

    impl<'a, Locks> WwActiveCtx<'a, Locks>
    where
        Locks: Tuple
    {
        fn lock<'b, T>(
            self,
            lock: &'b WwMutex<T>,
        ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;

        fn lock_with<'b, T>(
            self,
            get_lock: impl FnOnce(&Locks) -> &'b WwMutex<T>,
        ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
        // I'm not 100% sure that the lifetimes will work out...

        fn finish(self) -> Locks;
    }

    trait Tuple {
        type Append<T>;

        fn append<T>(self, value: T) -> Self::Append<T>;
    }

    impl Tuple for () {
        type Append<T> = (T,);

        fn append<T>(self, value: T) -> Self::Append<T> {
            (value,)
        }
    }
    
    impl<T1> Tuple for (T1,) {
        type Append<T> = (T1, T);

        fn append<T>(self, value: T) -> Self::Append<T> {
            (self.0, value,)
        }
    }

    impl<T1, T2> Tuple for (T1, T2) {
        type Append<T> = (T1, T2, T);

        fn append<T>(self, value: T) -> Self::Append<T> {
            (self.0, self.1, value,)
        }
    }

    /* these can easily be generated by a macro */
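
As a sketch of that macro in plain (userspace-compilable) Rust; the
`impl_tuple!` name and the `U` type parameter are made up here:

```rust
trait Tuple {
    type Append<T>;

    fn append<T>(self, value: T) -> Self::Append<T>;
}

// Hypothetical macro generating the hand-written impls above.
macro_rules! impl_tuple {
    ($($t:ident),*) => {
        impl<$($t),*> Tuple for ($($t,)*) {
            type Append<U> = ($($t,)* U,);

            fn append<U>(self, value: U) -> Self::Append<U> {
                #[allow(non_snake_case)]
                let ($($t,)*) = self;
                ($($t,)* value,)
            }
        }
    };
}

impl_tuple!();
impl_tuple!(T1);
impl_tuple!(T1, T2);
impl_tuple!(T1, T2, T3);

fn main() {
    let t = ().append(1).append("two").append(3.0);
    assert_eq!(t, (1, "two", 3.0));
}
```

(The `type Append<T>` generic associated type needs Rust >= 1.65, which
the kernel's minimum rustc already satisfies.)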

> We also need to take into consideration that the user want to drop any
> lock in the sequence? E.g. the user acquires a, b and c, and then drop
> b, and then acquires d. Which I think is possible for ww_mutex.

Hmm what about adding this to the above idea?:

    impl<'a, Locks> WwActiveCtx<'a, Locks>
    where
        Locks: Tuple
    {
        fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> WwActiveCtx<'a, L2>;
    }

Then you can do:

    let (a, c, d) = ctx.begin()
        .lock(a)
        .lock(b)
        .lock(c)
        .custom(|(a, _, c)| (a, c))
        .lock(d)
        .finish();
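
A userspace model of the whole chain (with `std::sync::Mutex` standing
in for `WwMutex`, a free `begin()` instead of `ctx.begin()`, and no
wound/wait semantics) type-checks as expected, including `custom`
dropping a guard mid-chain:

```rust
use std::sync::{Mutex, MutexGuard};

// Minimal Tuple trait, as in the sketch earlier in the thread.
trait Tuple {
    type Append<T>;
    fn append<T>(self, v: T) -> Self::Append<T>;
}
impl Tuple for () {
    type Append<T> = (T,);
    fn append<T>(self, v: T) -> (T,) { (v,) }
}
impl<A> Tuple for (A,) {
    type Append<T> = (A, T);
    fn append<T>(self, v: T) -> (A, T) { (self.0, v) }
}
impl<A, B> Tuple for (A, B) {
    type Append<T> = (A, B, T);
    fn append<T>(self, v: T) -> (A, B, T) { (self.0, self.1, v) }
}
impl<A, B, C> Tuple for (A, B, C) {
    type Append<T> = (A, B, C, T);
    fn append<T>(self, v: T) -> (A, B, C, T) { (self.0, self.1, self.2, v) }
}

struct Chain<L>(L);

fn begin() -> Chain<()> { Chain(()) }

impl<L: Tuple> Chain<L> {
    fn lock<'b, T>(self, m: &'b Mutex<T>) -> Chain<L::Append<MutexGuard<'b, T>>> {
        Chain(self.0.append(m.lock().unwrap()))
    }
    fn custom<L2>(self, action: impl FnOnce(L) -> L2) -> Chain<L2> {
        Chain(action(self.0))
    }
    fn finish(self) -> L { self.0 }
}

fn main() {
    let a = Mutex::new(1);
    let b = Mutex::new(2);
    let c = Mutex::new(3);
    let (ga, gc) = begin()
        .lock(&a)
        .lock(&b)
        .lock(&c)
        .custom(|(ga, _gb, gc)| (ga, gc)) // dropping `_gb` unlocks `b`
        .finish();
    assert_eq!(*ga + *gc, 4);
}
```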

>>     let _: &mut T = t;
>>     let _: &mut Foo = foo;
>>     let _: &mut Foo = foo2;

Ah these will actually be `WwMutexGuard<'_, ...>`, but that should be
expected.

---
Cheers,
Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Onur 3 months ago
On Mon, 23 Jun 2025 17:14:37 +0200
"Benno Lossin" <lossin@kernel.org> wrote:

> > We also need to take into consideration that the user want to drop
> > any lock in the sequence? E.g. the user acquires a, b and c, and
> > then drop b, and then acquires d. Which I think is possible for
> > ww_mutex.
> 
> Hmm what about adding this to the above idea?:
> 
>     impl<'a, Locks> WwActiveCtx<'a, Locks>
>     where
>         Locks: Tuple
>     {
>         fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) ->
> WwActiveCtx<'a, L2>; }
> 
> Then you can do:
> 
>     let (a, c, d) = ctx.begin()
>         .lock(a)
>         .lock(b)
>         .lock(c)
>         .custom(|(a, _, c)| (a, c))
>         .lock(d)
>         .finish();


Instead of `begin` and `custom`, why not something like this:

	let (a, c, d) = ctx.init()
	    .lock(a)
            .lock(b)
            .lock(c)
            .unlock(b)
            .lock(d)
            .finish();

Instead of `begin`, `init` would be better naming to mirror `fini` on
the C side, and `unlock` instead of `custom` would make the intent
clearer when dropping locks mid-chain.

I guess `lock()` is going to use the slow path since it's infallible? It
might be good to provide a `try_lock` that returns `-EDEADLK`
immediately without blocking when it can't acquire the lock.

Regards,
Onur
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 3 months ago
On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
> On Mon, 23 Jun 2025 17:14:37 +0200
> "Benno Lossin" <lossin@kernel.org> wrote:
>
>> > We also need to take into consideration that the user want to drop
>> > any lock in the sequence? E.g. the user acquires a, b and c, and
>> > then drop b, and then acquires d. Which I think is possible for
>> > ww_mutex.
>> 
>> Hmm what about adding this to the above idea?:
>> 
>>     impl<'a, Locks> WwActiveCtx<'a, Locks>
>>     where
>>         Locks: Tuple
>>     {
>>         fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) ->
>> WwActiveCtx<'a, L2>; }
>> 
>> Then you can do:
>> 
>>     let (a, c, d) = ctx.begin()
>>         .lock(a)
>>         .lock(b)
>>         .lock(c)
>>         .custom(|(a, _, c)| (a, c))
>>         .lock(d)
>>         .finish();
>
>
> Instead of `begin` and `custom`, why not something like this:
>
> 	let (a, c, d) = ctx.init()
> 	    .lock(a)
>             .lock(b)
>             .lock(c)
>             .unlock(b)
>             .lock(d)
>             .finish();
>
> Instead of `begin`, `init` would be better naming to imply `fini` on the
> C side, and `unlock` instead of `custom` would make the intent clearer
> when dropping locks mid chain.

I don't think that this `unlock` operation will work. How would you
implement it?

> I guess `lock()` is going to use the slow path since it's infallible? It
> might be good to provide a `try_lock` that returns -DEADLOCK
> immediately without blocking when it can't acquire the lock.

I think `lock` would first try the fast path and, if that fails, unlock
all locks taken before and then retry via the slow path (if that is how
ww_mutex is usually used; if not, I'd need to see the most common
use-case).
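
That back-off shape can be modelled in userspace; here `try_lock` on
`std::sync::Mutex` stands in for the fast path, and on contention we
drop everything and retry from the start (the real ww_mutex slow path
would instead sleep on the contended lock with the acquire context,
which this sketch cannot express):

```rust
use std::sync::{Mutex, MutexGuard, TryLockError};

// Take every lock in `locks`; on contention, release all guards
// acquired so far and start over.
fn lock_all<'a, T>(locks: &'a [Mutex<T>]) -> Vec<MutexGuard<'a, T>> {
    'retry: loop {
        let mut guards = Vec::with_capacity(locks.len());
        for m in locks {
            match m.try_lock() {
                Ok(g) => guards.push(g),
                Err(TryLockError::WouldBlock) => {
                    drop(guards); // back off: release everything held
                    std::thread::yield_now();
                    continue 'retry;
                }
                Err(e) => panic!("poisoned: {e}"),
            }
        }
        return guards;
    }
}

fn main() {
    let locks = [Mutex::new(1), Mutex::new(2), Mutex::new(3)];
    let guards = lock_all(&locks);
    let sum: i32 = guards.iter().map(|g| **g).sum();
    assert_eq!(sum, 6);
}
```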

---
Cheers,
Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Onur 3 months ago
On Mon, 07 Jul 2025 17:31:10 +0200
"Benno Lossin" <lossin@kernel.org> wrote:

> On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
> > On Mon, 23 Jun 2025 17:14:37 +0200
> > "Benno Lossin" <lossin@kernel.org> wrote:
> >
> >> > We also need to take into consideration that the user want to
> >> > drop any lock in the sequence? E.g. the user acquires a, b and
> >> > c, and then drop b, and then acquires d. Which I think is
> >> > possible for ww_mutex.
> >> 
> >> Hmm what about adding this to the above idea?:
> >> 
> >>     impl<'a, Locks> WwActiveCtx<'a, Locks>
> >>     where
> >>         Locks: Tuple
> >>     {
> >>         fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) ->
> >> WwActiveCtx<'a, L2>; }
> >> 
> >> Then you can do:
> >> 
> >>     let (a, c, d) = ctx.begin()
> >>         .lock(a)
> >>         .lock(b)
> >>         .lock(c)
> >>         .custom(|(a, _, c)| (a, c))
> >>         .lock(d)
> >>         .finish();
> >
> >
> > Instead of `begin` and `custom`, why not something like this:
> >
> > 	let (a, c, d) = ctx.init()
> > 	    .lock(a)
> >             .lock(b)
> >             .lock(c)
> >             .unlock(b)
> >             .lock(d)
> >             .finish();
> >
> > Instead of `begin`, `init` would be better naming to imply `fini`
> > on the C side, and `unlock` instead of `custom` would make the
> > intent clearer when dropping locks mid chain.
> 
> I don't think that this `unlock` operation will work. How would you
> implement it?


We could link mutexes to their guards using some unique value, so that
we can look up a held lock by passing its mutex (though that sounds a
bit odd).

Another option would be to unlock by the index, e.g.,:

	let (a, c) = ctx.init()
	    .lock(a)
            .lock(b)
            .unlock::<1>()
            .lock(c)
            .finish();

The index approach would require us to write something very similar to
the `Tuple` trait (with a macro, obviously) that you proposed some time
ago.

We could also just go back to your `custom` but find a better name
for it (such as `retain`).

Regards,
Onur
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 3 months ago
On Mon Jul 7, 2025 at 8:06 PM CEST, Onur wrote:
> On Mon, 07 Jul 2025 17:31:10 +0200
> "Benno Lossin" <lossin@kernel.org> wrote:
>
>> On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
>> > On Mon, 23 Jun 2025 17:14:37 +0200
>> > "Benno Lossin" <lossin@kernel.org> wrote:
>> >
>> >> > We also need to take into consideration that the user want to
>> >> > drop any lock in the sequence? E.g. the user acquires a, b and
>> >> > c, and then drop b, and then acquires d. Which I think is
>> >> > possible for ww_mutex.
>> >> 
>> >> Hmm what about adding this to the above idea?:
>> >> 
>> >>     impl<'a, Locks> WwActiveCtx<'a, Locks>
>> >>     where
>> >>         Locks: Tuple
>> >>     {
>> >>         fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) ->
>> >> WwActiveCtx<'a, L2>; }
>> >> 
>> >> Then you can do:
>> >> 
>> >>     let (a, c, d) = ctx.begin()
>> >>         .lock(a)
>> >>         .lock(b)
>> >>         .lock(c)
>> >>         .custom(|(a, _, c)| (a, c))
>> >>         .lock(d)
>> >>         .finish();
>> >
>> >
>> > Instead of `begin` and `custom`, why not something like this:
>> >
>> > 	let (a, c, d) = ctx.init()
>> > 	    .lock(a)
>> >             .lock(b)
>> >             .lock(c)
>> >             .unlock(b)
>> >             .lock(d)
>> >             .finish();
>> >
>> > Instead of `begin`, `init` would be better naming to imply `fini`
>> > on the C side, and `unlock` instead of `custom` would make the
>> > intent clearer when dropping locks mid chain.

Also, I'm not really fond of the name `init`, how about `enter`?

>> 
>> I don't think that this `unlock` operation will work. How would you
>> implement it?
>
>
> We could link mutexes to locks using some unique value, so that we can
> access locks by passing mutexes (though that sounds a bit odd).
>
> Another option would be to unlock by the index, e.g.,:
>
> 	let (a, c) = ctx.init()
> 	    .lock(a)
>             .lock(b)
>             .unlock::<1>()
>             .lock(c)
>             .finish();

Hmm yeah that's interesting, but probably not very readable...

    let (a, c, e) = ctx
        .enter()
        .lock(a)
        .lock(b)
        .lock_with(|(a, b)| b.foo())
        .unlock::<1>()
        .lock(c)
        .lock(d)
        .lock_with(|(.., d)| d.bar())
        .unlock::<2>()
        .finish();

> The index approach would require us to write something very similar
> to `Tuple` (with macro obviously) you proposed sometime ago.
>
> We could also just go back to your `custom` but find a better name
> for it (such as `retain`).

Oh yeah the name was just a placeholder.

The advantage of custom is that the user can do anything in the closure.

---
Cheers,
Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Daniel Almeida 2 months, 1 week ago
Hi Benno,

> On 7 Jul 2025, at 16:48, Benno Lossin <lossin@kernel.org> wrote:
> 
> On Mon Jul 7, 2025 at 8:06 PM CEST, Onur wrote:
>> On Mon, 07 Jul 2025 17:31:10 +0200
>> "Benno Lossin" <lossin@kernel.org> wrote:
>> 
>>> On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
>>>> On Mon, 23 Jun 2025 17:14:37 +0200
>>>> "Benno Lossin" <lossin@kernel.org> wrote:
>>>> 
>>>>>> We also need to take into consideration that the user want to
>>>>>> drop any lock in the sequence? E.g. the user acquires a, b and
>>>>>> c, and then drop b, and then acquires d. Which I think is
>>>>>> possible for ww_mutex.
>>>>> 
>>>>> Hmm what about adding this to the above idea?:
>>>>> 
>>>>>    impl<'a, Locks> WwActiveCtx<'a, Locks>
>>>>>    where
>>>>>        Locks: Tuple
>>>>>    {
>>>>>        fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) ->
>>>>> WwActiveCtx<'a, L2>; }
>>>>> 
>>>>> Then you can do:
>>>>> 
>>>>>    let (a, c, d) = ctx.begin()
>>>>>        .lock(a)
>>>>>        .lock(b)
>>>>>        .lock(c)
>>>>>        .custom(|(a, _, c)| (a, c))
>>>>>        .lock(d)
>>>>>        .finish();
>>>> 
>>>> 
>>>> Instead of `begin` and `custom`, why not something like this:
>>>> 
>>>> let (a, c, d) = ctx.init()
>>>>     .lock(a)
>>>>            .lock(b)
>>>>            .lock(c)
>>>>            .unlock(b)
>>>>            .lock(d)
>>>>            .finish();
>>>> 
>>>> Instead of `begin`, `init` would be better naming to imply `fini`
>>>> on the C side, and `unlock` instead of `custom` would make the
>>>> intent clearer when dropping locks mid chain.
> 
> Also, I'm not really fond of the name `init`, how about `enter`?
> 
>>> 
>>> I don't think that this `unlock` operation will work. How would you
>>> implement it?
>> 
>> 
>> We could link mutexes to locks using some unique value, so that we can
>> access locks by passing mutexes (though that sounds a bit odd).
>> 
>> Another option would be to unlock by the index, e.g.,:
>> 
>> let (a, c) = ctx.init()
>>     .lock(a)
>>            .lock(b)
>>            .unlock::<1>()

Why do we need this random unlock() here? We usually want to lock
everything and proceed, or otherwise back off completely so that someone
else can proceed.

One thing I didn’t understand with your approach: is it amenable to loops?
i.e.: are things like drm_exec() implementable?

/**
 * drm_exec_until_all_locked - loop until all GEM objects are locked
 * @exec: drm_exec object
 *
 * Core functionality of the drm_exec object. Loops until all GEM objects are
 * locked and no more contention exists. At the beginning of the loop it is
 * guaranteed that no GEM object is locked.
 *
 * Since labels can't be defined local to the loops body we use a jump pointer
 * to make sure that the retry is only used from within the loops body.
 */
#define drm_exec_until_all_locked(exec)					\
__PASTE(__drm_exec_, __LINE__):						\
	for (void *__drm_exec_retry_ptr; ({				\
		__drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
		(void)__drm_exec_retry_ptr;				\
		drm_exec_cleanup(exec);					\
	});)

In fact, perhaps we can copy drm_exec, basically? i.e.:

/**
 * struct drm_exec - Execution context
 */
struct drm_exec {
	/**
	 * @flags: Flags to control locking behavior
	 */
	u32                     flags;

	/**
	 * @ticket: WW ticket used for acquiring locks
	 */
	struct ww_acquire_ctx	ticket;

	/**
	 * @num_objects: number of objects locked
	 */
	unsigned int		num_objects;

	/**
	 * @max_objects: maximum objects in array
	 */
	unsigned int		max_objects;

	/**
	 * @objects: array of the locked objects
	 */
	struct drm_gem_object	**objects;

	/**
	 * @contended: contended GEM object we backed off for
	 */
	struct drm_gem_object	*contended;

	/**
	 * @prelocked: already locked GEM object due to contention
	 */
	struct drm_gem_object *prelocked;
};

This is GEM-specific, but we could perhaps implement the same idea by
tracking ww_mutexes instead of GEM objects.
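
A userspace model of that drm_exec-style retry loop (with
`std::sync::Mutex` in place of ww_mutex, so no wound/wait, just
back-off and retry; the names `Exec` and `until_all_locked` are made up
here) shows how a closure-based API could handle a dynamic number of
locks:

```rust
use std::sync::{Mutex, MutexGuard, TryLockError};

struct Contended;

// Context tracking the locks acquired so far, like drm_exec's
// `objects` array but holding plain guards.
struct Exec<'a, T> {
    guards: Vec<MutexGuard<'a, T>>,
}

impl<'a, T> Exec<'a, T> {
    fn lock(&mut self, m: &'a Mutex<T>) -> Result<(), Contended> {
        match m.try_lock() {
            Ok(g) => { self.guards.push(g); Ok(()) }
            Err(TryLockError::WouldBlock) => Err(Contended),
            Err(_) => panic!("poisoned"),
        }
    }
}

// Run `body` until it completes without contention; on contention,
// drop every guard (the "back off completely" step) and retry.
fn until_all_locked<'a, T>(
    mut body: impl FnMut(&mut Exec<'a, T>) -> Result<(), Contended>,
) -> Exec<'a, T> {
    loop {
        let mut exec = Exec { guards: Vec::new() };
        match body(&mut exec) {
            Ok(()) => return exec,
            Err(Contended) => {
                drop(exec);
                std::thread::yield_now();
            }
        }
    }
}

fn main() {
    let a = Mutex::new(1);
    let b = Mutex::new(2);
    let exec = until_all_locked(|e| {
        e.lock(&a)?;
        e.lock(&b)?;
        Ok(())
    });
    let sum: i32 = exec.guards.iter().map(|g| **g).sum();
    assert_eq!(sum, 3);
}
```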

Also, I’d appreciate it if the rollback logic could be automated, which
is what you’re trying to do, so +1 to your idea.

>>            .lock(c)
>>            .finish();
> 
> Hmm yeah that's interesting, but probably not very readable...
> 
>    let (a, c, e) = ctx
>        .enter()
>        .lock(a)
>        .lock(b)
>        .lock_with(|(a, b)| b.foo())
>        .unlock::<1>()
>        .lock(c)
>        .lock(d)
>        .lock_with(|(.., d)| d.bar())
>        .unlock::<2>();
> 
>> The index approach would require us to write something very similar
>> to `Tuple` (with macro obviously) you proposed sometime ago.
>> 
>> We could also just go back to your `custom` but find a better name
>> for it (such as `retain`).
> 
> Oh yeah the name was just a placeholder.
> 
> The advantage of custom is that the user can do anything in the closure.
> 
> ---
> Cheers,
> Benno

— Daniel
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 2 months, 1 week ago
On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
> Hi Benno,
>
>> On 7 Jul 2025, at 16:48, Benno Lossin <lossin@kernel.org> wrote:
>> 
>> On Mon Jul 7, 2025 at 8:06 PM CEST, Onur wrote:
>>> On Mon, 07 Jul 2025 17:31:10 +0200
>>> "Benno Lossin" <lossin@kernel.org> wrote:
>>> 
>>>> On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
>>>>> On Mon, 23 Jun 2025 17:14:37 +0200
>>>>> "Benno Lossin" <lossin@kernel.org> wrote:
>>>>> 
>>>>>>> We also need to take into consideration that the user want to
>>>>>>> drop any lock in the sequence? E.g. the user acquires a, b and
>>>>>>> c, and then drop b, and then acquires d. Which I think is
>>>>>>> possible for ww_mutex.
>>>>>> 
>>>>>> Hmm what about adding this to the above idea?:
>>>>>> 
>>>>>>    impl<'a, Locks> WwActiveCtx<'a, Locks>
>>>>>>    where
>>>>>>        Locks: Tuple
>>>>>>    {
>>>>>>        fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) ->
>>>>>> WwActiveCtx<'a, L2>; }
>>>>>> 
>>>>>> Then you can do:
>>>>>> 
>>>>>>    let (a, c, d) = ctx.begin()
>>>>>>        .lock(a)
>>>>>>        .lock(b)
>>>>>>        .lock(c)
>>>>>>        .custom(|(a, _, c)| (a, c))
>>>>>>        .lock(d)
>>>>>>        .finish();
>>>>> 
>>>>> 
>>>>> Instead of `begin` and `custom`, why not something like this:
>>>>> 
>>>>> let (a, c, d) = ctx.init()
>>>>>     .lock(a)
>>>>>            .lock(b)
>>>>>            .lock(c)
>>>>>            .unlock(b)
>>>>>            .lock(d)
>>>>>            .finish();
>>>>> 
>>>>> Instead of `begin`, `init` would be better naming to imply `fini`
>>>>> on the C side, and `unlock` instead of `custom` would make the
>>>>> intent clearer when dropping locks mid chain.
>> 
>> Also, I'm not really fond of the name `init`, how about `enter`?
>> 
>>>> 
>>>> I don't think that this `unlock` operation will work. How would you
>>>> implement it?
>>> 
>>> 
>>> We could link mutexes to locks using some unique value, so that we can
>>> access locks by passing mutexes (though that sounds a bit odd).
>>> 
>>> Another option would be to unlock by the index, e.g.,:
>>> 
>>> let (a, c) = ctx.init()
>>>     .lock(a)
>>>            .lock(b)
>>>            .unlock::<1>()
>
> Why do we need this random unlock() here? We usually want to lock everything
> and proceed, or otherwise backoff completely so that someone else can proceed.

No, the `unlock` was just to show that we could interleave locking and
unlocking.

> One thing I didn’t understand with your approach: is it amenable to loops?
> i.e.: are things like drm_exec() implementable?

I don't think so, see also my reply here:

    https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org

The type-based approach with tuples doesn't handle a dynamic number of
locks.

> /**
>  * drm_exec_until_all_locked - loop until all GEM objects are locked
>  * @exec: drm_exec object
>  *
>  * Core functionality of the drm_exec object. Loops until all GEM objects are
>  * locked and no more contention exists. At the beginning of the loop it is
>  * guaranteed that no GEM object is locked.
>  *
>  * Since labels can't be defined local to the loops body we use a jump pointer
>  * to make sure that the retry is only used from within the loops body.
>  */
> #define drm_exec_until_all_locked(exec)					\
> __PASTE(__drm_exec_, __LINE__):						\
> 	for (void *__drm_exec_retry_ptr; ({				\
> 		__drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
> 		(void)__drm_exec_retry_ptr;				\
> 		drm_exec_cleanup(exec);					\
> 	});)

My understanding of C preprocessor macros is not good enough to parse or
understand this :( What is that `__PASTE` thing?

> In fact, perhaps we can copy drm_exec, basically? i.e.:
>
> /**
>  * struct drm_exec - Execution context
>  */
> struct drm_exec {
> 	/**
> 	 * @flags: Flags to control locking behavior
> 	 */
> 	u32                     flags;
>
> 	/**
> 	 * @ticket: WW ticket used for acquiring locks
> 	 */
> 	struct ww_acquire_ctx	ticket;
>
> 	/**
> 	 * @num_objects: number of objects locked
> 	 */
> 	unsigned int		num_objects;
>
> 	/**
> 	 * @max_objects: maximum objects in array
> 	 */
> 	unsigned int		max_objects;
>
> 	/**
> 	 * @objects: array of the locked objects
> 	 */
> 	struct drm_gem_object	**objects;
>
> 	/**
> 	 * @contended: contended GEM object we backed off for
> 	 */
> 	struct drm_gem_object	*contended;
>
> 	/**
> 	 * @prelocked: already locked GEM object due to contention
> 	 */
> 	struct drm_gem_object *prelocked;
> };
>
> This is GEM-specific, but we could perhaps implement the same idea by
> tracking ww_mutexes instead of GEM objects.

But this would only work for `Vec<WwMutex<T>>`, right?

> Also, I’d appreciate if the rollback logic could be automated, which is
> what you’re trying to do, so +1 to your idea.

Good to see that it seems useful to you :)

---
Cheers,
Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Miguel Ojeda 2 months, 1 week ago
On Sat, Aug 2, 2025 at 12:42 PM Benno Lossin <lossin@kernel.org> wrote:
>
> My understanding of C preprocessor macros is not good enough to parse or
> understand this :( What is that `__PASTE` thing?

It allows you to paste the expansion of other macros, e.g. compare `A` and `B`:

    #define A(a,b) a##b
    #define B(a,b) __PASTE(a,b)

    #define C 43

    A(42, C)
    B(42, C)

would give:

    42C
    4243

https://godbolt.org/z/Ms18oed1b

Cheers,
Miguel
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Daniel Almeida 2 months, 1 week ago

> On 2 Aug 2025, at 07:42, Benno Lossin <lossin@kernel.org> wrote:
> 
> On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
>> Hi Benno,
>> 
>>> On 7 Jul 2025, at 16:48, Benno Lossin <lossin@kernel.org> wrote:
>>> 
>>> On Mon Jul 7, 2025 at 8:06 PM CEST, Onur wrote:
>>>> On Mon, 07 Jul 2025 17:31:10 +0200
>>>> "Benno Lossin" <lossin@kernel.org> wrote:
>>>> 
>>>>> On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
>>>>>> On Mon, 23 Jun 2025 17:14:37 +0200
>>>>>> "Benno Lossin" <lossin@kernel.org> wrote:
>>>>>> 
>>>>>>>> We also need to take into consideration that the user want to
>>>>>>>> drop any lock in the sequence? E.g. the user acquires a, b and
>>>>>>>> c, and then drop b, and then acquires d. Which I think is
>>>>>>>> possible for ww_mutex.
>>>>>>> 
>>>>>>> Hmm what about adding this to the above idea?:
>>>>>>> 
>>>>>>>   impl<'a, Locks> WwActiveCtx<'a, Locks>
>>>>>>>   where
>>>>>>>       Locks: Tuple
>>>>>>>   {
>>>>>>>       fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) ->
>>>>>>> WwActiveCtx<'a, L2>; }
>>>>>>> 
>>>>>>> Then you can do:
>>>>>>> 
>>>>>>>   let (a, c, d) = ctx.begin()
>>>>>>>       .lock(a)
>>>>>>>       .lock(b)
>>>>>>>       .lock(c)
>>>>>>>       .custom(|(a, _, c)| (a, c))
>>>>>>>       .lock(d)
>>>>>>>       .finish();
>>>>>> 
>>>>>> 
>>>>>> Instead of `begin` and `custom`, why not something like this:
>>>>>> 
>>>>>> let (a, c, d) = ctx.init()
>>>>>>    .lock(a)
>>>>>>           .lock(b)
>>>>>>           .lock(c)
>>>>>>           .unlock(b)
>>>>>>           .lock(d)
>>>>>>           .finish();
>>>>>> 
>>>>>> Instead of `begin`, `init` would be better naming to imply `fini`
>>>>>> on the C side, and `unlock` instead of `custom` would make the
>>>>>> intent clearer when dropping locks mid chain.
>>> 
>>> Also, I'm not really fond of the name `init`, how about `enter`?
>>> 
>>>>> 
>>>>> I don't think that this `unlock` operation will work. How would you
>>>>> implement it?
>>>> 
>>>> 
>>>> We could link mutexes to locks using some unique value, so that we can
>>>> access locks by passing mutexes (though that sounds a bit odd).
>>>> 
>>>> Another option would be to unlock by the index, e.g.,:
>>>> 
>>>> let (a, c) = ctx.init()
>>>>    .lock(a)
>>>>           .lock(b)
>>>>           .unlock::<1>()
>> 
>> Why do we need this random unlock() here? We usually want to lock everything
>> and proceed, or otherwise backoff completely so that someone else can proceed.
> 
> No the `unlock` was just to show that we could interleave locking and
> unlocking.
> 
>> One thing I didn’t understand with your approach: is it amenable to loops?
>> i.e.: are things like drm_exec() implementable?
> 
> I don't think so, see also my reply here:
> 
>    https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org
> 
> The type-based approach with tuples doesn't handle dynamic number of
> locks.
> 

This is probably the default use-case by the way.

>> /**
>> * drm_exec_until_all_locked - loop until all GEM objects are locked
>> * @exec: drm_exec object
>> *
>> * Core functionality of the drm_exec object. Loops until all GEM objects are
>> * locked and no more contention exists. At the beginning of the loop it is
>> * guaranteed that no GEM object is locked.
>> *
>> * Since labels can't be defined local to the loops body we use a jump pointer
>> * to make sure that the retry is only used from within the loops body.
>> */
>> #define drm_exec_until_all_locked(exec) \
>> __PASTE(__drm_exec_, __LINE__): \
>> for (void *__drm_exec_retry_ptr; ({ \
>> __drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
>> (void)__drm_exec_retry_ptr; \
>> drm_exec_cleanup(exec); \
>> });)
> 
> My understanding of C preprocessor macros is not good enough to parse or
> understand this :( What is that `__PASTE` thing?

This macro is very useful, but also cursed :)

This declares a unique label before the loop, so you can jump back to it on
contention. It is usually used in conjunction with:

/**
 * drm_exec_retry_on_contention - restart the loop to grap all locks
 * @exec: drm_exec object
 *
 * Control flow helper to continue when a contention was detected and we need to
 * clean up and re-start the loop to prepare all GEM objects.
 */
#define drm_exec_retry_on_contention(exec)			\
	do {							\
		if (unlikely(drm_exec_is_contended(exec)))	\
			goto *__drm_exec_retry_ptr;		\
	} while (0)


The termination is handled by:

/**
 * drm_exec_cleanup - cleanup when contention is detected
 * @exec: the drm_exec object to cleanup
 *
 * Cleanup the current state and return true if we should stay inside the retry
 * loop, false if there wasn't any contention detected and we can keep the
 * objects locked.
 */
bool drm_exec_cleanup(struct drm_exec *exec)
{
	if (likely(!exec->contended)) {
		ww_acquire_done(&exec->ticket);
		return false;
	}

	if (likely(exec->contended == DRM_EXEC_DUMMY)) {
		exec->contended = NULL;
		ww_acquire_init(&exec->ticket, &reservation_ww_class);
		return true;
	}

	drm_exec_unlock_all(exec);
	exec->num_objects = 0;
	return true;
}
EXPORT_SYMBOL(drm_exec_cleanup);

The third clause in the loop is empty.

For example, in amdgpu:

/**
 * reserve_bo_and_vm - reserve a BO and a VM unconditionally.
 * @mem: KFD BO structure.
 * @vm: the VM to reserve.
 * @ctx: the struct that will be used in unreserve_bo_and_vms().
 */
static int reserve_bo_and_vm(struct kgd_mem *mem,
			      struct amdgpu_vm *vm,
			      struct bo_vm_reservation_context *ctx)
{
	struct amdgpu_bo *bo = mem->bo;
	int ret;

	WARN_ON(!vm);

	ctx->n_vms = 1;
	ctx->sync = &mem->sync;
	drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
	drm_exec_until_all_locked(&ctx->exec) {
		ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
		drm_exec_retry_on_contention(&ctx->exec);
		if (unlikely(ret))
			goto error;

		ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1);
		drm_exec_retry_on_contention(&ctx->exec);
		if (unlikely(ret))
			goto error;
	}
        // <-- everything is locked at this point.
	return 0;


So, something like:

some_unique_label:
for (void *retry_ptr;
     ({ retry_ptr = &&some_unique_label; drm_exec_cleanup(exec); });
     /* empty */) {
     /* user code here, which potentially jumps back to some_unique_label */
}
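For the Rust side, the same control flow maps onto a labeled loop without any
computed goto. A rough self-contained sketch; the contention state is mocked
and all names are hypothetical:

```rust
// Mock of the drm_exec retry shape: a labeled loop replaces the
// unique label + computed goto. `passes` counts loop iterations.
fn lock_all_retry() -> u32 {
    let mut contended_once = false; // simulate a single contention event
    let mut passes = 0;

    'retry: loop {
        // The drm_exec_cleanup() equivalent would run here: drop
        // everything taken so far and decide whether to loop again.
        passes += 1;

        // "User code": try to take the locks.
        if !contended_once {
            contended_once = true;
            // drm_exec_retry_on_contention() equivalent.
            continue 'retry;
        }

        // No contention: everything is held, leave the loop.
        break 'retry passes;
    }
}

fn main() {
    println!("locked after {} passes", lock_all_retry());
}
```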

> 
>> In fact, perhaps we can copy drm_exec, basically? i.e.:
>> 
>> /**
>> * struct drm_exec - Execution context
>> */
>> struct drm_exec {
>> /**
>>  * @flags: Flags to control locking behavior
>>  */
>> u32                     flags;
>> 
>> /**
>>  * @ticket: WW ticket used for acquiring locks
>>  */
>> struct ww_acquire_ctx ticket;
>> 
>> /**
>>  * @num_objects: number of objects locked
>>  */
>> unsigned int num_objects;
>> 
>> /**
>>  * @max_objects: maximum objects in array
>>  */
>> unsigned int max_objects;
>> 
>> /**
>>  * @objects: array of the locked objects
>>  */
>> struct drm_gem_object **objects;
>> 
>> /**
>>  * @contended: contended GEM object we backed off for
>>  */
>> struct drm_gem_object *contended;
>> 
>> /**
>>  * @prelocked: already locked GEM object due to contention
>>  */
>> struct drm_gem_object *prelocked;
>> };
>> 
>> This is GEM-specific, but we could perhaps implement the same idea by
>> tracking ww_mutexes instead of GEM objects.
> 
> But this would only work for `Vec<WwMutex<T>>`, right?

I’m not sure if I understand your point here.

The list of ww_mutexes that we've managed to currently lock would be something
that we'd keep track internally in our context. In what way is a KVec an issue?

> 
>> Also, I’d appreciate if the rollback logic could be automated, which is
>> what you’re trying to do, so +1 to your idea.
> 
> Good to see that it seems useful to you :)
> 
> ---
> Cheers,
> Benno

Btw, I can also try to implement a proof of concept, so long as people agree that
this approach makes sense.

By the way, dri-devel seems to not be on cc? Added them now.

Full context at [0].

— Daniel

[0]: https://lore.kernel.org/rust-for-linux/DBPC27REX4N1.3JA4SSHDYXAHJ@kernel.org/T/#m17a1e3a913ead2648abdff0fc2d927c8323cb8c3
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 2 months, 1 week ago
On Sat Aug 2, 2025 at 4:15 PM CEST, Daniel Almeida wrote:
> On 2 Aug 2025, at 07:42, Benno Lossin <lossin@kernel.org> wrote:
>> On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
>>> One thing I didn’t understand with your approach: is it amenable to loops?
>>> i.e.: are things like drm_exec() implementable?
>> 
>> I don't think so, see also my reply here:
>> 
>>    https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org
>> 
>> The type-based approach with tuples doesn't handle dynamic number of
>> locks.
>> 
>
> This is probably the default use-case by the way.

That's an important detail. In that case, a type state won't be a good
idea. Unless it's also common to have a finite number of them, in which
case we should have two APIs.

>>> /**
>>> * drm_exec_until_all_locked - loop until all GEM objects are locked
>>> * @exec: drm_exec object
>>> *
>>> * Core functionality of the drm_exec object. Loops until all GEM objects are
>>> * locked and no more contention exists. At the beginning of the loop it is
>>> * guaranteed that no GEM object is locked.
>>> *
>>> * Since labels can't be defined local to the loops body we use a jump pointer
>>> * to make sure that the retry is only used from within the loops body.
>>> */
>>> #define drm_exec_until_all_locked(exec) \
>>> __PASTE(__drm_exec_, __LINE__): \
>>> for (void *__drm_exec_retry_ptr; ({ \
>>> __drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
>>> (void)__drm_exec_retry_ptr; \
>>> drm_exec_cleanup(exec); \
>>> });)
>> 
>> My understanding of C preprocessor macros is not good enough to parse or
>> understand this :( What is that `__PASTE` thing?
>
> This macro is very useful, but also cursed :)
>
> This declares a unique label before the loop, so you can jump back to it on
> contention. It is usually used in conjunction with:

Ahh, I missed the `:` at the end of the line. Thanks for explaining!
(also Miguel in the other reply!) If you don't mind I'll ask some more
basic C questions :)

And yeah it's pretty cursed...

> /**
>  * drm_exec_retry_on_contention - restart the loop to grab all locks
>  * @exec: drm_exec object
>  *
>  * Control flow helper to continue when a contention was detected and we need to
>  * clean up and re-start the loop to prepare all GEM objects.
>  */
> #define drm_exec_retry_on_contention(exec)			\
> 	do {							\
> 		if (unlikely(drm_exec_is_contended(exec)))	\
> 			goto *__drm_exec_retry_ptr;		\
> 	} while (0)

The `do { ... } while(0)` is used because C doesn't have `{ ... }`
scopes? (& because you want to be able to have this be called from an if
without braces?)

> The termination is handled by:
>
> /**
>  * drm_exec_cleanup - cleanup when contention is detected
>  * @exec: the drm_exec object to cleanup
>  *
>  * Cleanup the current state and return true if we should stay inside the retry
>  * loop, false if there wasn't any contention detected and we can keep the
>  * objects locked.
>  */
> bool drm_exec_cleanup(struct drm_exec *exec)
> {
> 	if (likely(!exec->contended)) {
> 		ww_acquire_done(&exec->ticket);
> 		return false;
> 	}
>
> 	if (likely(exec->contended == DRM_EXEC_DUMMY)) {
> 		exec->contended = NULL;
> 		ww_acquire_init(&exec->ticket, &reservation_ww_class);
> 		return true;
> 	}
>
> 	drm_exec_unlock_all(exec);
> 	exec->num_objects = 0;
> 	return true;
> }
> EXPORT_SYMBOL(drm_exec_cleanup);
>
> The third clause in the loop is empty.
>
> For example, in amdgpu:
>
> /**
>  * reserve_bo_and_vm - reserve a BO and a VM unconditionally.
>  * @mem: KFD BO structure.
>  * @vm: the VM to reserve.
>  * @ctx: the struct that will be used in unreserve_bo_and_vms().
>  */
> static int reserve_bo_and_vm(struct kgd_mem *mem,
> 			      struct amdgpu_vm *vm,
> 			      struct bo_vm_reservation_context *ctx)
> {
> 	struct amdgpu_bo *bo = mem->bo;
> 	int ret;
>
> 	WARN_ON(!vm);
>
> 	ctx->n_vms = 1;
> 	ctx->sync = &mem->sync;
> 	drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
> 	drm_exec_until_all_locked(&ctx->exec) {
> 		ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
> 		drm_exec_retry_on_contention(&ctx->exec);
> 		if (unlikely(ret))
> 			goto error;
>
> 		ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1);
> 		drm_exec_retry_on_contention(&ctx->exec);
> 		if (unlikely(ret))
> 			goto error;
> 	}
>         // <-- everything is locked at this point.

Which function call locks the mutexes?

> 	return 0;
>
>
> So, something like:
>
> some_unique_label:
> for(void *retry_ptr;
>     ({ retry_ptr = &some_unique_label; drm_exec_cleanup(); });

Normally this should be a condition, or rather an expression evaluating
to bool, why is this okay? Or does C just take the value of the last
function call due to the `({})`?

Why isn't `({})` used instead of `do { ... } while(0)` above?

>     /* empty */) {
>      /* user code here, which potentially jumps back to some_unique_label  */
> }

Thanks for the example & the macro expansion. What I gather from this is
that we'd probably want a closure that executes the code & reruns it
when contention is detected.

>>> In fact, perhaps we can copy drm_exec, basically? i.e.:
>>> 
>>> /**
>>> * struct drm_exec - Execution context
>>> */
>>> struct drm_exec {
>>> /**
>>>  * @flags: Flags to control locking behavior
>>>  */
>>> u32                     flags;
>>> 
>>> /**
>>>  * @ticket: WW ticket used for acquiring locks
>>>  */
>>> struct ww_acquire_ctx ticket;
>>> 
>>> /**
>>>  * @num_objects: number of objects locked
>>>  */
>>> unsigned int num_objects;
>>> 
>>> /**
>>>  * @max_objects: maximum objects in array
>>>  */
>>> unsigned int max_objects;
>>> 
>>> /**
>>>  * @objects: array of the locked objects
>>>  */
>>> struct drm_gem_object **objects;
>>> 
>>> /**
>>>  * @contended: contended GEM object we backed off for
>>>  */
>>> struct drm_gem_object *contended;
>>> 
>>> /**
>>>  * @prelocked: already locked GEM object due to contention
>>>  */
>>> struct drm_gem_object *prelocked;
>>> };
>>> 
>>> This is GEM-specific, but we could perhaps implement the same idea by
>>> tracking ww_mutexes instead of GEM objects.
>> 
>> But this would only work for `Vec<WwMutex<T>>`, right?
>
> I’m not sure if I understand your point here.
>
> The list of ww_mutexes that we've managed to currently lock would be something
> that we'd keep track internally in our context. In what way is a KVec an issue?

I saw "array of the locked objects" and thus thought this must only
work for an array of locks. Looking at the type a bit closer, it
actually is an array of pointers, so it does work for arbitrary data
structures storing the locks.

So essentially it would amount to storing `Vec<WwMutexGuard<'_, T>>` in
Rust IIUC. I was under the impression that we wanted to avoid that,
because it's an extra allocation.

But maybe that's actually what's done on the C side.

> Btw, I can also try to implement a proof of concept, so long as people agree that
> this approach makes sense.

I'm not sure I understand what you are proposing, so I can't give a
recommendation yet.

---
Cheers,
Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Daniel Almeida 2 months ago
Hi Benno,

> On 2 Aug 2025, at 17:58, Benno Lossin <lossin@kernel.org> wrote:
> 
> On Sat Aug 2, 2025 at 4:15 PM CEST, Daniel Almeida wrote:
>> On 2 Aug 2025, at 07:42, Benno Lossin <lossin@kernel.org> wrote:
>>> On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
>>>> One thing I didn’t understand with your approach: is it amenable to loops?
>>>> i.e.: are things like drm_exec() implementable?
>>> 
>>> I don't think so, see also my reply here:
>>> 
>>>   https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org
>>> 
>>> The type-based approach with tuples doesn't handle dynamic number of
>>> locks.
>>> 
>> 
>> This is probably the default use-case by the way.
> 
> That's an important detail. In that case, a type state won't be a good
> idea. Unless it's also common to have a finite number of them, in which
> case we should have two APIs.
> 
>>>> /**
>>>> * drm_exec_until_all_locked - loop until all GEM objects are locked
>>>> * @exec: drm_exec object
>>>> *
>>>> * Core functionality of the drm_exec object. Loops until all GEM objects are
>>>> * locked and no more contention exists. At the beginning of the loop it is
>>>> * guaranteed that no GEM object is locked.
>>>> *
>>>> * Since labels can't be defined local to the loops body we use a jump pointer
>>>> * to make sure that the retry is only used from within the loops body.
>>>> */
>>>> #define drm_exec_until_all_locked(exec) \
>>>> __PASTE(__drm_exec_, __LINE__): \
>>>> for (void *__drm_exec_retry_ptr; ({ \
>>>> __drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
>>>> (void)__drm_exec_retry_ptr; \
>>>> drm_exec_cleanup(exec); \
>>>> });)
>>> 
>>> My understanding of C preprocessor macros is not good enough to parse or
>>> understand this :( What is that `__PASTE` thing?
>> 
>> This macro is very useful, but also cursed :)
>> 
>> This declares a unique label before the loop, so you can jump back to it on
>> contention. It is usually used in conjunction with:
> 
> Ahh, I missed the `:` at the end of the line. Thanks for explaining!
> (also Miguel in the other reply!) If you don't mind I'll ask some more
> basic C questions :)
> 
> And yeah it's pretty cursed...
> 
>> /**
>> * drm_exec_retry_on_contention - restart the loop to grab all locks
>> * @exec: drm_exec object
>> *
>> * Control flow helper to continue when a contention was detected and we need to
>> * clean up and re-start the loop to prepare all GEM objects.
>> */
>> #define drm_exec_retry_on_contention(exec) \
>> do { \
>> if (unlikely(drm_exec_is_contended(exec))) \
>> goto *__drm_exec_retry_ptr; \
>> } while (0)
> 
> The `do { ... } while(0)` is used because C doesn't have `{ ... }`
> scopes? (& because you want to be able to have this be called from an if
> without braces?)

do { } while (0) makes this behave as a single statement. It is usually
used in macros to ensure that they can be correctly called from control
statements even when no braces are used, like you said. It also enforces
that a semicolon has to be placed at the end when the macro is called,
which makes it behave a bit more like a function call.

There may be other uses that I am not aware of, but it's not something
specific to "drm_exec_retry_on_contention".

> 
>> The termination is handled by:
>> 
>> /**
>> * drm_exec_cleanup - cleanup when contention is detected
>> * @exec: the drm_exec object to cleanup
>> *
>> * Cleanup the current state and return true if we should stay inside the retry
>> * loop, false if there wasn't any contention detected and we can keep the
>> * objects locked.
>> */
>> bool drm_exec_cleanup(struct drm_exec *exec)
>> {
>> if (likely(!exec->contended)) {
>> ww_acquire_done(&exec->ticket);
>> return false;
>> }
>> 
>> if (likely(exec->contended == DRM_EXEC_DUMMY)) {
>> exec->contended = NULL;
>> ww_acquire_init(&exec->ticket, &reservation_ww_class);
>> return true;
>> }
>> 
>> drm_exec_unlock_all(exec);
>> exec->num_objects = 0;
>> return true;
>> }
>> EXPORT_SYMBOL(drm_exec_cleanup);
>> 
>> The third clause in the loop is empty.
>> 
>> For example, in amdgpu:
>> 
>> /**
>> * reserve_bo_and_vm - reserve a BO and a VM unconditionally.
>> * @mem: KFD BO structure.
>> * @vm: the VM to reserve.
>> * @ctx: the struct that will be used in unreserve_bo_and_vms().
>> */
>> static int reserve_bo_and_vm(struct kgd_mem *mem,
>>      struct amdgpu_vm *vm,
>>      struct bo_vm_reservation_context *ctx)
>> {
>> struct amdgpu_bo *bo = mem->bo;
>> int ret;
>> 
>> WARN_ON(!vm);
>> 
>> ctx->n_vms = 1;
>> ctx->sync = &mem->sync;
>> drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
>> drm_exec_until_all_locked(&ctx->exec) {
>> ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
>> drm_exec_retry_on_contention(&ctx->exec);
>> if (unlikely(ret))
>> goto error;
>> 
>> ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1);
>> drm_exec_retry_on_contention(&ctx->exec);
>> if (unlikely(ret))
>> goto error;
>> }
>>        // <-- everything is locked at this point.
> 
> Which function call locks the mutexes?

The function below, which is indirectly called from amdgpu_vm_lock_pd() in
this particular example:

```
/**
 * drm_exec_lock_obj - lock a GEM object for use
 * @exec: the drm_exec object with the state
 * @obj: the GEM object to lock
 *
 * Lock a GEM object for use and grab a reference to it.
 *
 * Returns: -EDEADLK if a contention is detected, -EALREADY when object is
 * already locked (can be suppressed by setting the DRM_EXEC_IGNORE_DUPLICATES
 * flag), -ENOMEM when memory allocation failed and zero for success.
 */
int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj)
{
	int ret;

	ret = drm_exec_lock_contended(exec);
	if (unlikely(ret))
		return ret;

	if (exec->prelocked == obj) {
		drm_gem_object_put(exec->prelocked);
		exec->prelocked = NULL;
		return 0;
	}

	if (exec->flags & DRM_EXEC_INTERRUPTIBLE_WAIT)
		ret = dma_resv_lock_interruptible(obj->resv, &exec->ticket);
	else
		ret = dma_resv_lock(obj->resv, &exec->ticket);

	if (unlikely(ret == -EDEADLK)) {
		drm_gem_object_get(obj);
		exec->contended = obj;
		return -EDEADLK;
	}

	if (unlikely(ret == -EALREADY) &&
	    exec->flags & DRM_EXEC_IGNORE_DUPLICATES)
		return 0;

	if (unlikely(ret))
		return ret;

	ret = drm_exec_obj_locked(exec, obj);
	if (ret)
		goto error_unlock;

	return 0;

error_unlock:
	dma_resv_unlock(obj->resv);
	return ret;
}
EXPORT_SYMBOL(drm_exec_lock_obj);
```

And the tracking of locked objects is done at:

```
/* Track the locked object in the array */
static int drm_exec_obj_locked(struct drm_exec *exec,
			       struct drm_gem_object *obj)
{
	if (unlikely(exec->num_objects == exec->max_objects)) {
		size_t size = exec->max_objects * sizeof(void *);
		void *tmp;

		tmp = kvrealloc(exec->objects, size + PAGE_SIZE, GFP_KERNEL);
		if (!tmp)
			return -ENOMEM;

		exec->objects = tmp;
		exec->max_objects += PAGE_SIZE / sizeof(void *);
	}
	drm_gem_object_get(obj);
	exec->objects[exec->num_objects++] = obj;

	return 0;
}
```

Note that dma_resv_lock() is:

```
static inline int dma_resv_lock(struct dma_resv *obj,
				struct ww_acquire_ctx *ctx)
{
	return ww_mutex_lock(&obj->lock, ctx);
}
```


Again, this is GEM-specific, but the idea is to generalize it.


> 
>> return 0;
>> 
>> 
>> So, something like:
>> 
>> some_unique_label:
>> for(void *retry_ptr;
>>    ({ retry_ptr = &some_unique_label; drm_exec_cleanup(); });
> 
> Normally this should be a condition, or rather an expression evaluating
> to bool, why is this okay? Or does C just take the value of the last
> function call due to the `({})`?

This is described here [0]. As per the docs, the `({ ... })` construct
evaluates to the value of its last statement; since drm_exec_cleanup()
comes last and returns bool, the whole expression evaluates to bool.

> 
> Why isn't `({})` used instead of `do { ... } while(0)` above?

I’m not sure I understand what you’re trying to ask.

If you’re asking why ({}) is being used here, then it’s because we need
to return (i.e. evaluate to) a value, and a do {…} while(0) does not do
that.
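On the Rust side this distinction simply doesn't exist, since every block
is already an expression. A tiny self-contained illustration (names
hypothetical):

```rust
// In Rust every block evaluates to its final (semicolon-less)
// expression, so the C split between statement expressions `({ ... })`
// and `do { ... } while (0)` has no analogue.
fn cleanup() -> bool {
    false // stand-in for drm_exec_cleanup(): "no contention, stop looping"
}

fn main() {
    let keep_looping = {
        let _retry_ptr = (); // side effect first, as in the C macro
        cleanup() // the block's value
    };
    println!("keep_looping = {keep_looping}");
}
```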

> 
>>    /* empty */) {
>>     /* user code here, which potentially jumps back to some_unique_label  */
>> }
> 
> Thanks for the example & the macro expansion. What I gather from this is
> that we'd probably want a closure that executes the code & reruns it
> when contention is detected.

Yep, I think so, too.

> 
>>>> In fact, perhaps we can copy drm_exec, basically? i.e.:
>>>> 
>>>> /**
>>>> * struct drm_exec - Execution context
>>>> */
>>>> struct drm_exec {
>>>> /**
>>>> * @flags: Flags to control locking behavior
>>>> */
>>>> u32                     flags;
>>>> 
>>>> /**
>>>> * @ticket: WW ticket used for acquiring locks
>>>> */
>>>> struct ww_acquire_ctx ticket;
>>>> 
>>>> /**
>>>> * @num_objects: number of objects locked
>>>> */
>>>> unsigned int num_objects;
>>>> 
>>>> /**
>>>> * @max_objects: maximum objects in array
>>>> */
>>>> unsigned int max_objects;
>>>> 
>>>> /**
>>>> * @objects: array of the locked objects
>>>> */
>>>> struct drm_gem_object **objects;
>>>> 
>>>> /**
>>>> * @contended: contended GEM object we backed off for
>>>> */
>>>> struct drm_gem_object *contended;
>>>> 
>>>> /**
>>>> * @prelocked: already locked GEM object due to contention
>>>> */
>>>> struct drm_gem_object *prelocked;
>>>> };
>>>> 
>>>> This is GEM-specific, but we could perhaps implement the same idea by
>>>> tracking ww_mutexes instead of GEM objects.
>>> 
>>> But this would only work for `Vec<WwMutex<T>>`, right?
>> 
>> I’m not sure if I understand your point here.
>> 
>> The list of ww_mutexes that we've managed to currently lock would be something
>> that we'd keep track internally in our context. In what way is a KVec an issue?
> 
> I saw "array of the locked objects" and thus thought this must only
> work for an array of locks. Looking at the type a bit closer, it
> actually is an array of pointers, so it does work for arbitrary data
> structures storing the locks.
> 
> So essentially it would amount to storing `Vec<WwMutexGuard<'_, T>>` in
> Rust IIUC. I was under the impression that we wanted to avoid that,
> because it's an extra allocation.

It’s the price to pay for correctness IMHO.

The “exec” abstraction also allocates:

```
/* Track the locked object in the array */
static int drm_exec_obj_locked(struct drm_exec *exec,
			       struct drm_gem_object *obj)
{
	if (unlikely(exec->num_objects == exec->max_objects)) {
		size_t size = exec->max_objects * sizeof(void *);
		void *tmp;

		tmp = kvrealloc(exec->objects, size + PAGE_SIZE, GFP_KERNEL);
		if (!tmp)
			return -ENOMEM;

		exec->objects = tmp;
		exec->max_objects += PAGE_SIZE / sizeof(void *);
	}
	drm_gem_object_get(obj);
	exec->objects[exec->num_objects++] = obj;

	return 0;
}
```



> 
> But maybe that's actually what's done on the C side.

See above.

> 
>> Btw, I can also try to implement a proof of concept, so long as people agree that
>> this approach makes sense.
> 
> I'm not sure I understand what you are proposing, so I can't give a
> recommendation yet.
> 

I am suggesting what you said above and more:

a) run a user closure where the user can indicate which ww_mutexes they want to lock
b) keep track of the objects above
c) keep track of whether a contention happened
d) rollback if a contention happened, releasing all locks
e) rerun the user closure from a clean slate after rolling back
f) run a separate user closure once we know that all objects have been locked.

That’s a very broad description, but I think it can work.

Note that the operations above would be implemented by a separate type, not by
the ww_mutex abstraction itself. But users should probably be using the API
above unless there’s a strong reason not to.
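To make (a)-(f) concrete, here is a rough, self-contained sketch with std's
`Mutex` standing in for ww_mutex: no wound/wait semantics, just try-lock plus
full rollback. `WwExec` and `until_all_locked` are hypothetical names:

```rust
use std::sync::{Mutex, MutexGuard, TryLockError};

// Toy execution context: collects guards, rolls everything back on
// contention and reruns the acquisition from a clean slate.
struct WwExec;

impl WwExec {
    fn until_all_locked<'a, T, R>(
        &self,
        mutexes: &'a [Mutex<T>], // (a) what the user wants to lock
        locked: impl FnOnce(Vec<MutexGuard<'a, T>>) -> R, // (f)
    ) -> R {
        loop {
            let mut guards = Vec::new(); // (b) track what we hold
            let mut contended = false;
            for m in mutexes {
                match m.try_lock() {
                    Ok(g) => guards.push(g),
                    Err(TryLockError::WouldBlock) => {
                        contended = true; // (c) contention detected
                        break;
                    }
                    Err(_) => panic!("poisoned lock"),
                }
            }
            if contended {
                drop(guards); // (d) rollback: release all locks
                continue; // (e) rerun from a clean slate
            }
            return locked(guards); // (f) everything is locked here
        }
    }
}

fn main() {
    let exec = WwExec;
    let ms = [Mutex::new(1), Mutex::new(2)];
    let total =
        exec.until_all_locked(&ms, |gs| gs.iter().map(|g| **g).sum::<i32>());
    println!("sum under all locks: {total}");
}
```

A real abstraction would of course call ww_mutex_lock() with the acquire
context and only back off on -EDEADLK, but the control flow is the same.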

> ---
> Cheers,
> Benno
> 

— Daniel


[0]: https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Onur Özkan 2 months ago
On Sat, 2 Aug 2025 11:15:07 -0300
Daniel Almeida <daniel.almeida@collabora.com> wrote:
 
> Btw, I can also try to implement a proof of concept, so long as
> people agree that this approach makes sense.

It's not necessary to provide a full PoC, but a small demonstration of
the kind of ww_mutex API you would prefer would be helpful. Seeing a few
sample Rust use-cases (especially in comparison to existing C
implementations) would give me a clearer picture.

At the moment, the implementation is just a wrapper ([1]) around the C
ww_mutex with no additional functionality, mostly because we don't have
a solid consensus on the API design yet (we had some ideas about a
tuple-based approach, but it seems that isn't going to be useful for
most ww_mutex users).

[1]: https://github.com/onur-ozkan/linux/commits/673e01a9c309c

> By the way, dri-devel seems to not be on cc? Added them now.

Thanks!

--

Regards,
Onur
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Daniel Almeida 2 months ago
Hi Onur,

> On 5 Aug 2025, at 06:08, Onur Özkan <work@onurozkan.dev> wrote:
> 
> On Sat, 2 Aug 2025 11:15:07 -0300
> Daniel Almeida <daniel.almeida@collabora.com> wrote:
> 
>> Btw, I can also try to implement a proof of concept, so long as
>> people agree that this approach makes sense.
> 
> It's not necessary to provide a full P.o.C but a small demonstration of
> the kind of ww_mutex API you would prefer would be helpful. Seeing a few
> sample Rust use-cases (especially in comparison to existing C
> implementations) would give a clearer picture for me.
> 
> At the moment, the implementation is just a wrapper ([1]) around the C
> ww_mutex with no additional functionality, mostly because we don't have
> a solid consensus on the API design yet (we had some ideas about Tuple
> based approach, but seems like that isn't going to be useful for most
> of the ww_mutex users).
> 
> [1]: https://github.com/onur-ozkan/linux/commits/673e01a9c309c
> 
>> By the way, dri-devel seems to not be on cc? Added them now.
> 
> Thanks!
> 
> --
> 
> Regards,
> Onur
> 

This topic is on my TODO for this week.

— Daniel
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Onur Özkan 2 months ago
On Tue, 5 Aug 2025 09:41:43 -0300
Daniel Almeida <daniel.almeida@collabora.com> wrote:

> Hi Onur,
> 
> > On 5 Aug 2025, at 06:08, Onur Özkan <work@onurozkan.dev> wrote:
> > 
> > On Sat, 2 Aug 2025 11:15:07 -0300
> > Daniel Almeida <daniel.almeida@collabora.com> wrote:
> > 
> >> Btw, I can also try to implement a proof of concept, so long as
> >> people agree that this approach makes sense.
> > 
> > It's not necessary to provide a full P.o.C but a small
> > demonstration of the kind of ww_mutex API you would prefer would be
> > helpful. Seeing a few sample Rust use-cases (especially in
> > comparison to existing C implementations) would give a clearer
> > picture for me.
> > 
> > At the moment, the implementation is just a wrapper ([1]) around
> > the C ww_mutex with no additional functionality, mostly because we
> > don't have a solid consensus on the API design yet (we had some
> > ideas about Tuple based approach, but seems like that isn't going
> > to be useful for most of the ww_mutex users).
> > 
> > [1]: https://github.com/onur-ozkan/linux/commits/673e01a9c309c
> > 
> >> By the way, dri-devel seems to not be on cc? Added them now.
> > 
> > Thanks!
> > 
> > --
> > 
> > Regards,
> > Onur
> > 
> 
> This topic is on my TODO for this week.
> 
> — Daniel

Awesome, thank you for doing it. :)

Regards,
Onur
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Onur 3 months ago
On Mon, 07 Jul 2025 21:48:07 +0200
"Benno Lossin" <lossin@kernel.org> wrote:

> >> > Instead of `begin` and `custom`, why not something like this:
> >> >
> >> > 	let (a, c, d) = ctx.init()
> >> > 	    .lock(a)
> >> >             .lock(b)
> >> >             .lock(c)
> >> >             .unlock(b)
> >> >             .lock(d)
> >> >             .finish();
> >> >
> >> > Instead of `begin`, `init` would be better naming to imply `fini`
> >> > on the C side, and `unlock` instead of `custom` would make the
> >> > intent clearer when dropping locks mid chain.
> 
> Also, I'm not really fond of the name `init`, how about `enter`?

I don't have a strong feeling about any of them; they are all much the
same in the end. The reason I suggested `init` was to keep it as close
as possible to the C implementation, so that people with C knowledge
would adapt to the Rust implementation more easily and quickly.

> Oh yeah the name was just a placeholder.
> 
> The advantage of custom is that the user can do anything in the
> closure.

Right, that is a good plus compared to alternatives. :)

Regards,
Onur
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Boqun Feng 3 months, 2 weeks ago
On Mon, Jun 23, 2025 at 05:14:37PM +0200, Benno Lossin wrote:
> On Mon Jun 23, 2025 at 4:47 PM CEST, Boqun Feng wrote:
> > On Mon, Jun 23, 2025 at 03:44:58PM +0200, Benno Lossin wrote:
> >> I didn't have a concrete API in mind, but after having read the
> >> abstractions more, would this make sense?
> >> 
> >>     let ctx: &WwAcquireCtx = ...;
> >>     let m1: &WwMutex<T> = ...;
> >>     let m2: &WwMutex<Foo> = ...;
> >> 
> >>     let (t, foo, foo2) = ctx
> >>         .begin()
> >>         .lock(m1)
> >>         .lock(m2)
> >>         .lock_with(|(t, foo)| &*foo.other)
> >>         .finish();
> >> 
> >
> > Cute!
> >
> > However, each `.lock()` will need to be polymorphic over a tuple of
> > locks that are already held, right? Otherwise I don't see how
> > `.lock_with()` knows it's already held two locks. That sounds like a
> > challenge for implementation.
> 
> I think it's doable if we have 
> 
>     impl WwActiveCtx {

I think you mean *WwAcquireCtx*

>         fn begin(&self) -> WwActiveCtx<'_, ()>;
>     }
> 
>     struct WwActiveCtx<'a, Locks> {
>         locks: Locks,

This probably needs to be Result<Locks>, because we may detect
-EDEADLK in the middle.

    let (a, c, d) = ctx.begin()
        .lock(a)
        .lock(b) // <- `b` may be locked by someone else. So we should
                 // drop `a` and switch `locks` to an `Err(_)`.
        .lock(c) // <- this should be a no-op if `locks` is an `Err(_)`.
        .finish();

>         _ctx: PhantomData<&'a WwAcquireCtx>,

We can still take a reference to WwAcquireCtx here I think.

>     }
> 
>     impl<'a, Locks> WwActiveCtx<'a, Locks>
>     where
>         Locks: Tuple
>     {
>         fn lock<'b, T>(
>             self,
>             lock: &'b WwMutex<T>,
>         ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
> 
>         fn lock_with<'b, T>(
>             self,
>             get_lock: impl FnOnce(&Locks) -> &'b WwMutex<T>,
>         ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
>         // I'm not 100% sure that the lifetimes will work out...

I think we can make the following work?

    impl<'a, Locks> WwActiveCtx<'a, Locks>
    where
        Locks: Tuple
    {
        fn lock_with<T>(
	    self,
	    get_lock: impl FnOnce(&Locks) -> &WwMutex<T>,
	) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'a, T>>>
    }

because with a `WwActiveCtx<'a, Locks>`, we can get a `&'a Locks`, which
will give us a `&'a WwMutex<T>`, and should be able to give us a
`WwMutexGuard<'a, T>`.

> 
>         fn finish(self) -> Locks;
>     }
> 
>     trait Tuple {
>         type Append<T>;
> 
>         fn append<T>(self, value: T) -> Self::Append<T>;
>     }
> 

`Tuple` is useful in its own right. If you recall, we had some ideas
about using something like this to consolidate multiple `RcuOld`s so
that we can do one `synchronize_rcu()` for all of them.

>     impl Tuple for () {
>         type Append<T> = (T,);
> 
>         fn append<T>(self, value: T) -> Self::Append<T> {
>             (value,)
>         }
>     }
>     
>     impl<T1> Tuple for (T1,) {
>         type Append<T> = (T1, T);
> 
>         fn append<T>(self, value: T) -> Self::Append<T> {
>             (self.0, value,)
>         }
>     }
> 
>     impl<T1, T2> Tuple for (T1, T2) {
>         type Append<T> = (T1, T2, T);
> 
>         fn append<T>(self, value: T) -> Self::Append<T> {
>             (self.0, self.1, value,)
>         }
>     }
> 
>     /* these can easily be generated by a macro */
> 
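As a sanity check, the `Tuple`/`Append` machinery sketched above is plain
stable Rust (it needs GATs) and can be exercised standalone, with toy values
in place of mutex guards. All names below are illustrative:

```rust
// Standalone check that Append-based tuple building compiles and works.
trait Tuple {
    type Append<T>;
    fn append<T>(self, value: T) -> Self::Append<T>;
}

impl Tuple for () {
    type Append<T> = (T,);
    fn append<T>(self, value: T) -> Self::Append<T> { (value,) }
}

impl<T1> Tuple for (T1,) {
    type Append<T> = (T1, T);
    fn append<T>(self, value: T) -> Self::Append<T> { (self.0, value) }
}

impl<T1, T2> Tuple for (T1, T2) {
    type Append<T> = (T1, T2, T);
    fn append<T>(self, value: T) -> Self::Append<T> { (self.0, self.1, value) }
}

// A toy ActiveCtx over plain values instead of mutex guards.
struct ActiveCtx<L>(L);

impl<L> ActiveCtx<L> {
    fn finish(self) -> L { self.0 }
}

impl<L: Tuple> ActiveCtx<L> {
    fn lock<T>(self, v: T) -> ActiveCtx<L::Append<T>> {
        ActiveCtx(self.0.append(v))
    }
}

fn build() -> (u32, &'static str, f64) {
    ActiveCtx(()).lock(1u32).lock("guard-b").lock(3.0f64).finish()
}

fn main() {
    let (a, b, c) = build();
    println!("{a} {b} {c}");
}
```

Note that `finish` lives in its own impl block without the `Tuple` bound,
so it is also callable on the final tuple size that has no further
`Append` impl.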
> > We also need to take into consideration that the user want to drop any
> > lock in the sequence? E.g. the user acquires a, b and c, and then drop
> > b, and then acquires d. Which I think is possible for ww_mutex.
> 
> Hmm what about adding this to the above idea?:
> 
>     impl<'a, Locks> WwActiveCtx<'a, Locks>
>     where
>         Locks: Tuple
>     {
>         fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> WwActiveCtx<'a, L2>;
>     }
> 
> Then you can do:
> 
>     let (a, c, d) = ctx.begin()
>         .lock(a)
>         .lock(b)
>         .lock(c)
>         .custom(|(a, _, c)| (a, c))
>         .lock(d)
>         .finish();
> 

Seems reasonable. But we still need to present this to the end users to
see how much they like it. For ww_mutex I think the major user is DRM,
so let's add them to the Cc list.

Regards,
Boqun

> >>     let _: &mut T = t;
> >>     let _: &mut Foo = foo;
> >>     let _: &mut Foo = foo2;
> 
> Ah these will actually be `WwMutexGuard<'_, ...>`, but that should be
> expected.
> 
> ---
> Cheers,
> Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 3 months, 2 weeks ago
On Mon Jun 23, 2025 at 7:11 PM CEST, Boqun Feng wrote:
> On Mon, Jun 23, 2025 at 05:14:37PM +0200, Benno Lossin wrote:
>> On Mon Jun 23, 2025 at 4:47 PM CEST, Boqun Feng wrote:
>> > On Mon, Jun 23, 2025 at 03:44:58PM +0200, Benno Lossin wrote:
>> >> I didn't have a concrete API in mind, but after having read the
>> >> abstractions more, would this make sense?
>> >> 
>> >>     let ctx: &WwAcquireCtx = ...;
>> >>     let m1: &WwMutex<T> = ...;
>> >>     let m2: &WwMutex<Foo> = ...;
>> >> 
>> >>     let (t, foo, foo2) = ctx
>> >>         .begin()
>> >>         .lock(m1)
>> >>         .lock(m2)
>> >>         .lock_with(|(t, foo)| &*foo.other)
>> >>         .finish();
>> >> 
>> >
>> > Cute!
>> >
>> > However, each `.lock()` will need to be polymorphic over a tuple of
>> > locks that are already held, right? Otherwise I don't see how
>> > `.lock_with()` knows it's already held two locks. That sounds like a
>> > challenge for implementation.
>> 
>> I think it's doable if we have 
>> 
>>     impl WwActiveCtx {
>
> I think you mean *WwAcquireCtx*

Oh yeah.

>>         fn begin(&self) -> WwActiveCtx<'_, ()>;
>>     }
>> 
>>     struct WwActiveCtx<'a, Locks> {
>>         locks: Locks,
>
> This probably needs to be Result<Locks>, because we may detect
> -EDEADLK in the middle.
>
>     let (a, c, d) = ctx.begin()
>         .lock(a)
>         .lock(b) // <- `b` may be locked by someone else. So we should
>                  // drop `a` and switch `locks` to an `Err(_)`.
>         .lock(c) // <- this should be a no-op if `locks` is an `Err(_)`.
>         .finish();

Hmm, I thought that we would go for the `lock_slow_path` thing, but
maybe that's the wrong thing to do? Maybe `lock` should return a result?
I'd have to see the use-cases...
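
Just to make the trade-off concrete, something like the following works as a
plain userspace toy (fake guards, an i32 in place of the kernel error code, no
real locking), showing how an Err(_) in `locks` makes the rest of the chain a
no-op:

```rust
// Toy model of the proposed builder: `locks` is a Result, so a failed
// lock "drops" everything acquired so far and poisons the chain.
// `i32` stands in for a kernel error code; no real locking happens here.
struct ActiveCtx<Locks> {
    locks: Result<Locks, i32>,
}

fn begin() -> ActiveCtx<()> {
    ActiveCtx { locks: Ok(()) }
}

impl<Locks> ActiveCtx<Locks> {
    // `res` fakes the outcome of ww_mutex_lock(): Ok(guard) or Err(code).
    fn lock<G>(self, res: Result<G, i32>) -> ActiveCtx<(Locks, G)> {
        ActiveCtx {
            locks: match (self.locks, res) {
                (Ok(held), Ok(guard)) => Ok((held, guard)),
                // Already poisoned: later .lock() calls are no-ops.
                (Err(e), _) => Err(e),
                // New failure: the guards held so far are dropped here.
                (Ok(_), Err(e)) => Err(e),
            },
        }
    }

    fn finish(self) -> Result<Locks, i32> {
        self.locks
    }
}
```

(Nested pairs are used instead of the flat `Tuple::Append` just to keep the
sketch short.)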

>>         _ctx: PhantomData<&'a WwAcquireCtx>,
>
> We can still take a reference to WwAcquireCtx here I think.

Yeah we have to do that in order to call lock on the mutexes.

>>     }
>> 
>>     impl<'a, Locks> WwActiveCtx<'a, Locks>
>>     where
>>         Locks: Tuple
>>     {
>>         fn lock<'b, T>(
>>             self,
>>             lock: &'b WwMutex<T>,
>>         ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
>> 
>>         fn lock_with<'b, T>(
>>             self,
>>             get_lock: impl FnOnce(&Locks) -> &'b WwMutex<T>,
>>         ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
>>         // I'm not 100% sure that the lifetimes will work out...
>
> I think we can make the following work?
>
>     impl<'a, Locks> WwActiveCtx<'a, Locks>
>     where
>         Locks: Tuple
>     {
>         fn lock_with<T>(
> 	    self,
> 	    get_lock: impl FnOnce(&Locks) -> &WwMutex<T>,
> 	) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'a, T>>>
>     }
>
> because with a `WwActiveCtx<'a, Locks>`, we can get a `&'a Locks`, which
> will give us a `&'a WwMutex<T>`, and should be able to give us a
> `WwMutexGuard<'a, T>`.

I think this is more restrictive, since this will require that the mutex
is (potentially) locked for `'a` (you can drop the guard before, but you
can't drop the mutex itself). So again concrete use-cases should inform
our choice here.
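
(Purely to visualise the `lock_with` shape in the meantime, a userspace toy
where strings stand in for mutexes and guards, with the lifetime question
above set aside:)

```rust
// Toy `lock_with`: the closure sees the guards already held and derives
// the next thing to "lock" from them, mirroring `&*foo.other` above.
struct ActiveCtx<Locks> {
    locks: Locks,
}

fn begin() -> ActiveCtx<()> {
    ActiveCtx { locks: () }
}

impl<Locks> ActiveCtx<Locks> {
    // Nested pairs instead of the flat `Tuple::Append`, for brevity.
    fn lock<G>(self, guard: G) -> ActiveCtx<(Locks, G)> {
        ActiveCtx { locks: (self.locks, guard) }
    }

    fn lock_with<G>(self, get: impl FnOnce(&Locks) -> G) -> ActiveCtx<(Locks, G)> {
        let next = get(&self.locks);
        ActiveCtx { locks: (self.locks, next) }
    }

    fn finish(self) -> Locks {
        self.locks
    }
}
```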

>>         fn finish(self) -> Locks;
>>     }
>> 
>>     trait Tuple {
>>         type Append<T>;
>> 
>>         fn append<T>(self, value: T) -> Self::Append<T>;
>>     }
>> 
>
> `Tuple` is useful on its own; if you recall, we had some ideas about
> using things like this to consolidate multiple `RcuOld`s so that we
> can do one `synchronize_rcu()` for all of them.

Yeah that's true, feel free to make a patch or file a good-first-issue;
I won't have time to create a series.

>>     impl Tuple for () {
>>         type Append<T> = (T,);
>> 
>>         fn append<T>(self, value: T) -> Self::Append<T> {
>>             (value,)
>>         }
>>     }
>>     
>>     impl<T1> Tuple for (T1,) {
>>         type Append<T> = (T1, T);
>> 
>>         fn append<T>(self, value: T) -> Self::Append<T> {
>>             (self.0, value,)
>>         }
>>     }
>> 
>>     impl<T1, T2> Tuple for (T1, T2) {
>>         type Append<T> = (T1, T2, T);
>> 
>>         fn append<T>(self, value: T) -> Self::Append<T> {
>>             (self.0, self.1, value,)
>>         }
>>     }
>> 
>>     /* these can easily be generated by a macro */
>> 
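
For reference, that macro is indeed mechanical; a userspace sketch (each
invocation lists the existing element types with their tuple indices, and
`$i:tt` is the usual trick for tuple indexing in macro_rules):

```rust
// Sketch of the macro mentioned above, generating the Tuple impls.
trait Tuple {
    type Append<T>;
    fn append<T>(self, value: T) -> Self::Append<T>;
}

macro_rules! impl_tuple {
    ($($t:ident: $i:tt),*) => {
        impl<$($t),*> Tuple for ($($t,)*) {
            type Append<T> = ($($t,)* T,);
            fn append<T>(self, value: T) -> Self::Append<T> {
                ($(self.$i,)* value,)
            }
        }
    };
}

impl_tuple!();
impl_tuple!(T1: 0);
impl_tuple!(T1: 0, T2: 1);
impl_tuple!(T1: 0, T2: 1, T3: 2);
```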
>> > We also need to take into consideration that the user may want to
>> > drop any lock in the sequence. E.g. the user acquires a, b and c,
>> > then drops b, and then acquires d. Which I think is possible for
>> > ww_mutex.
>> 
>> Hmm what about adding this to the above idea?:
>> 
>>     impl<'a, Locks> WwActiveCtx<'a, Locks>
>>     where
>>         Locks: Tuple
>>     {
>>         fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> WwActiveCtx<'a, L2>;
>>     }
>> 
>> Then you can do:
>> 
>>     let (a, c, d) = ctx.begin()
>>         .lock(a)
>>         .lock(b)
>>         .lock(c)
>>         .custom(|(a, _, c)| (a, c))
>>         .lock(d)
>>         .finish();
>> 
>
> Seems reasonable. But we still need to present this to the end user to
> see how much they like it. For ww_mutex I think the major user is DRM,
> so let's add them to the Cc list.

Yeah let's see some use-cases :)
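
Meanwhile, to visualise the `custom` step, a userspace toy (strings standing
in for guards, nested pairs instead of `Tuple::Append`, no real locking):

```rust
// Toy chain where `custom` rearranges the held "guards" mid-sequence,
// e.g. dropping `b` before taking `d`.
struct ActiveCtx<Locks> {
    locks: Locks,
}

fn begin() -> ActiveCtx<()> {
    ActiveCtx { locks: () }
}

impl<Locks> ActiveCtx<Locks> {
    fn lock<G>(self, guard: G) -> ActiveCtx<(Locks, G)> {
        ActiveCtx { locks: (self.locks, guard) }
    }

    // Hand the whole tuple to the caller; whatever it returns becomes
    // the new set of held locks (anything not returned is dropped,
    // i.e. unlocked, right here).
    fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> ActiveCtx<L2> {
        ActiveCtx { locks: action(self.locks) }
    }

    fn finish(self) -> Locks {
        self.locks
    }
}
```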

---
Cheers,
Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Onur 3 months, 2 weeks ago
On Tue, 24 Jun 2025 01:22:05 +0200
"Benno Lossin" <lossin@kernel.org> wrote:

> On Mon Jun 23, 2025 at 7:11 PM CEST, Boqun Feng wrote:
> > On Mon, Jun 23, 2025 at 05:14:37PM +0200, Benno Lossin wrote:
> >> On Mon Jun 23, 2025 at 4:47 PM CEST, Boqun Feng wrote:
> >> > On Mon, Jun 23, 2025 at 03:44:58PM +0200, Benno Lossin wrote:
> >> >> I didn't have a concrete API in mind, but after having read the
> >> >> abstractions more, would this make sense?
> >> >> 
> >> >>     let ctx: &WwAcquireCtx = ...;
> >> >>     let m1: &WwMutex<T> = ...;
> >> >>     let m2: &WwMutex<Foo> = ...;
> >> >> 
> >> >>     let (t, foo, foo2) = ctx
> >> >>         .begin()
> >> >>         .lock(m1)
> >> >>         .lock(m2)
> >> >>         .lock_with(|(t, foo)| &*foo.other)
> >> >>         .finish();
> >> >> 
> >> >
> >> > Cute!
> >> >
> >> > However, each `.lock()` will need to be polymorphic over a tuple
> >> > of locks that are already held, right? Otherwise I don't see how
> >> > `.lock_with()` knows it's already held two locks. That sounds
> >> > like a challenge for implementation.
> >> 
> >> I think it's doable if we have 
> >> 
> >>     impl WwActiveCtx {
> >
> > I think you mean *WwAcquireCtx*
> 
> Oh yeah.
> 
> >>         fn begin(&self) -> WwActiveCtx<'_, ()>;
> >>     }
> >> 
> >>     struct WwActiveCtx<'a, Locks> {
> >>         locks: Locks,
> >
> > This probably needs to be Result<Locks>, because we may detect
> > -EDEADLK in the middle.
> >
> >     let (a, c, d) = ctx.begin()
> >         .lock(a)
> >         .lock(b) // <- `b` may be locked by someone else. So we should
> >                  // drop `a` and switch `locks` to an `Err(_)`.
> >         .lock(c) // <- this should be a no-op if `locks` is an `Err(_)`.
> >         .finish();
> 
> Hmm, I thought that we would go for the `lock_slow_path` thing, but
> maybe that's the wrong thing to do? Maybe `lock` should return a
> result? I'd have to see the use-cases...
> 
> >>         _ctx: PhantomData<&'a WwAcquireCtx>,
> >
> > We can still take a reference to WwAcquireCtx here I think.
> 
> Yeah we have to do that in order to call lock on the mutexes.
> 
> >>     }
> >> 
> >>     impl<'a, Locks> WwActiveCtx<'a, Locks>
> >>     where
> >>         Locks: Tuple
> >>     {
> >>         fn lock<'b, T>(
> >>             self,
> >>             lock: &'b WwMutex<T>,
> >>         ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
> >> 
> >>         fn lock_with<'b, T>(
> >>             self,
> >>             get_lock: impl FnOnce(&Locks) -> &'b WwMutex<T>,
> >>         ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
> >>         // I'm not 100% sure that the lifetimes will work out...
> >
> > I think we can make the following work?
> >
> >     impl<'a, Locks> WwActiveCtx<'a, Locks>
> >     where
> >         Locks: Tuple
> >     {
> >         fn lock_with<T>(
> > 	    self,
> > 	    get_lock: impl FnOnce(&Locks) -> &WwMutex<T>,
> > 	) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'a, T>>>
> >     }
> >
> > because with a `WwActiveCtx<'a, Locks>`, we can get a `&'a Locks`,
> > which will give us a `&'a WwMutex<T>`, and should be able to give
> > us a `WwMutexGuard<'a, T>`.
> 
> I think this is more restrictive, since this will require that the
> mutex is (potentially) locked for `'a` (you can drop the guard
> before, but you can't drop the mutex itself). So again concrete
> use-cases should inform our choice here.
> 
> >>         fn finish(self) -> Locks;
> >>     }
> >> 
> >>     trait Tuple {
> >>         type Append<T>;
> >> 
> >>         fn append<T>(self, value: T) -> Self::Append<T>;
> >>     }
> >> 
> >
> > `Tuple` is useful on its own; if you recall, we had some ideas
> > about using things like this to consolidate multiple `RcuOld`s so
> > that we can do one `synchronize_rcu()` for all of them.
> 
> Yeah that's true, feel free to make a patch or file a
> good-first-issue; I won't have time to create a series.
> 
> >>     impl Tuple for () {
> >>         type Append<T> = (T,);
> >> 
> >>         fn append<T>(self, value: T) -> Self::Append<T> {
> >>             (value,)
> >>         }
> >>     }
> >>     
> >>     impl<T1> Tuple for (T1,) {
> >>         type Append<T> = (T1, T);
> >> 
> >>         fn append<T>(self, value: T) -> Self::Append<T> {
> >>             (self.0, value,)
> >>         }
> >>     }
> >> 
> >>     impl<T1, T2> Tuple for (T1, T2) {
> >>         type Append<T> = (T1, T2, T);
> >> 
> >>         fn append<T>(self, value: T) -> Self::Append<T> {
> >>             (self.0, self.1, value,)
> >>         }
> >>     }
> >> 
> >>     /* these can easily be generated by a macro */
> >> 
> >> > We also need to take into consideration that the user may want
> >> > to drop any lock in the sequence. E.g. the user acquires a, b
> >> > and c, then drops b, and then acquires d. Which I think is
> >> > possible for ww_mutex.
> >> 
> >> Hmm what about adding this to the above idea?:
> >> 
> >>     impl<'a, Locks> WwActiveCtx<'a, Locks>
> >>     where
> >>         Locks: Tuple
> >>     {
> >>         fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> WwActiveCtx<'a, L2>;
> >>     }
> >> 
> >> Then you can do:
> >> 
> >>     let (a, c, d) = ctx.begin()
> >>         .lock(a)
> >>         .lock(b)
> >>         .lock(c)
> >>         .custom(|(a, _, c)| (a, c))
> >>         .lock(d)
> >>         .finish();
> >> 
> >
> > Seems reasonable. But we still need to present this to the end user
> > to see how much they like it. For ww_mutex I think the major user
> > is DRM, so let's add them to the Cc list.
> 
> Yeah let's see some use-cases :)


Should we handle this in the initial implementation or leave it for
follow-up patches after the core abstraction of ww_mutex has landed?

---

Regards,
Onur
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 3 months, 2 weeks ago
On Tue Jun 24, 2025 at 7:34 AM CEST, Onur wrote:
> Should we handle this in the initial implementation or leave it for
> follow-up patches after the core abstraction of ww_mutex has landed?

Since you're writing these abstractions specifically for usage in drm, I
think we should look at the intended use-cases there and then decide on
an API.

So maybe Lyude or Dave can chime in :)

If you (or someone else) have another user for this API that needs it
ASAP, then we can think about merging this and improve it later. But if
we don't have a user, then we shouldn't merge it anyways.

---
Cheers,
Benno
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Onur 3 months, 2 weeks ago
On Tue, 24 Jun 2025 10:20:48 +0200
"Benno Lossin" <lossin@kernel.org> wrote:

> On Tue Jun 24, 2025 at 7:34 AM CEST, Onur wrote:
> > Should we handle this in the initial implementation or leave it for
> > follow-up patches after the core abstraction of ww_mutex has landed?
> 
> Since you're writing these abstractions specifically for usage in
> drm, I think we should look at the intended use-cases there and then
> decide on an API.
> 
> So maybe Lyude or Dave can chime in :)
> 
> If you (or someone else) have another user for this API that needs it
> ASAP, then we can think about merging this and improve it later. But
> if we don't have a user, then we shouldn't merge it anyways.

I don't think this is urgent, but wouldn't it be better to land the
basic structure first and improve it gradually? I would be happy to
keep working on improvements, as I don't plan to leave it as just the
initial version.

I worked on the v5 review notes, but if we are going to consider
designing a different API, then it doesn't make much sense to send a
v6 patch before the design is settled, and that requires input from
more people. It would also mean some of the ongoing review discussion
would be wasted.

---

Regards,
Onur
Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Posted by Benno Lossin 3 months, 2 weeks ago
On Tue Jun 24, 2025 at 2:31 PM CEST, Onur wrote:
> On Tue, 24 Jun 2025 10:20:48 +0200
> "Benno Lossin" <lossin@kernel.org> wrote:
>
>> On Tue Jun 24, 2025 at 7:34 AM CEST, Onur wrote:
>> > Should we handle this in the initial implementation or leave it for
>> > follow-up patches after the core abstraction of ww_mutex has landed?
>> 
>> Since you're writing these abstractions specifically for usage in
>> drm, I think we should look at the intended use-cases there and then
>> decide on an API.
>> 
>> So maybe Lyude or Dave can chime in :)
>> 
>> If you (or someone else) have another user for this API that needs it
>> ASAP, then we can think about merging this and improve it later. But
>> if we don't have a user, then we shouldn't merge it anyways.
>
> I don't think this is urgent, but wouldn't it be better to land the
> basic structure first and improve it gradually? I would be happy to
> keep working on improvements, as I don't plan to leave it as just
> the initial version.

I don't think we should land the basic API when we don't have a user
in-tree or blessed by the maintainers.

> I worked on the v5 review notes, but if we are going to consider
> designing a different API, then it doesn't make much sense to send a
> v6 patch before the design is settled, and that requires input from
> more people. It would also mean some of the ongoing review
> discussion would be wasted.

I would just wait for DRM to say something :)

---
Cheers,
Benno