To load a value, one must be careful to hold the lock while accessing
it. To enable this, add a lock() method that returns a guard, so that
operations on the value can be performed before the spinlock is
released.
Co-developed-by: Andrew Ballance <andrewjballance@gmail.com>
Signed-off-by: Andrew Ballance <andrewjballance@gmail.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
rust/kernel/maple_tree.rs | 94 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 94 insertions(+)
diff --git a/rust/kernel/maple_tree.rs b/rust/kernel/maple_tree.rs
index 0f26c173eedc7c79bb8e2b56fe85e8a266b3ae0c..c7ef504a9c78065b3d5752b4f5337fb6277182d1 100644
--- a/rust/kernel/maple_tree.rs
+++ b/rust/kernel/maple_tree.rs
@@ -206,6 +206,23 @@ pub fn erase(&self, index: usize) -> Option<T> {
unsafe { T::try_from_foreign(ret) }
}
+ /// Lock the internal spinlock.
+ #[inline]
+ pub fn lock(&self) -> MapleLock<'_, T> {
+ // SAFETY: It's safe to lock the spinlock in a maple tree.
+ unsafe { bindings::spin_lock(self.ma_lock()) };
+
+ // INVARIANT: We just took the spinlock.
+ MapleLock(self)
+ }
+
+ #[inline]
+ fn ma_lock(&self) -> *mut bindings::spinlock_t {
+ // SAFETY: This pointer offset operation stays in-bounds.
+ let lock = unsafe { &raw mut (*self.tree.get()).__bindgen_anon_1.ma_lock };
+ lock.cast()
+ }
+
/// Free all `T` instances in this tree.
///
/// # Safety
@@ -248,6 +265,83 @@ fn drop(mut self: Pin<&mut Self>) {
}
}
+/// A reference to a [`MapleTree`] that owns the inner lock.
+///
+/// # Invariants
+///
+/// This guard owns the inner spinlock.
+pub struct MapleLock<'tree, T: ForeignOwnable>(&'tree MapleTree<T>);
+
+impl<'tree, T: ForeignOwnable> Drop for MapleLock<'tree, T> {
+ #[inline]
+ fn drop(&mut self) {
+ // SAFETY: By the type invariants, we hold this spinlock.
+ unsafe { bindings::spin_unlock(self.0.ma_lock()) };
+ }
+}
+
+impl<'tree, T: ForeignOwnable> MapleLock<'tree, T> {
+ /// Load the value at the given index.
+ ///
+ /// # Examples
+ ///
+ /// Read the value while holding the spinlock.
+ ///
+ /// ```
+ /// use kernel::maple_tree::{MapleTree, InsertErrorKind};
+ ///
+ /// let tree = KBox::pin_init(MapleTree::<KBox<i32>>::new(), GFP_KERNEL)?;
+ ///
+ /// let ten = KBox::new(10, GFP_KERNEL)?;
+ /// let twenty = KBox::new(20, GFP_KERNEL)?;
+ /// tree.insert(100, ten, GFP_KERNEL)?;
+ /// tree.insert(200, twenty, GFP_KERNEL)?;
+ ///
+ /// let mut lock = tree.lock();
+ /// assert_eq!(lock.load(100), Some(&mut 10));
+ /// assert_eq!(lock.load(200), Some(&mut 20));
+ /// assert_eq!(lock.load(300), None);
+ /// # Ok::<_, Error>(())
+ /// ```
+ ///
+ /// Increment refcount while holding spinlock and read afterwards.
+ ///
+ /// ```
+ /// use kernel::maple_tree::{MapleTree, InsertErrorKind};
+ /// use kernel::sync::Arc;
+ ///
+ /// let tree = KBox::pin_init(MapleTree::<Arc<i32>>::new(), GFP_KERNEL)?;
+ ///
+ /// let ten = Arc::new(10, GFP_KERNEL)?;
+ /// let twenty = Arc::new(20, GFP_KERNEL)?;
+ /// tree.insert(100, ten, GFP_KERNEL)?;
+ /// tree.insert(200, twenty, GFP_KERNEL)?;
+ ///
+ /// // Briefly take the lock to increment the refcount.
+ /// let value = Arc::from(tree.lock().load(100).unwrap());
+ ///
+ /// // At this point, another thread might remove the value.
+ /// tree.erase(100);
+ ///
+ /// // But we can still access it because we took a refcount.
+ /// assert_eq!(*value, 10);
+ /// # Ok::<_, Error>(())
+ /// ```
+ #[inline]
+ pub fn load(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
+ // SAFETY: `self.tree` contains a valid maple tree.
+ let ret = unsafe { bindings::mtree_load(self.0.tree.get(), index) };
+ if ret.is_null() {
+ return None;
+ }
+
+ // SAFETY: If the pointer is not null, then it references a valid instance of `T`. It is
+ // safe to borrow the instance mutably because the signature of this function enforces that
+ // the mutable borrow is not used after the spinlock is dropped.
+ Some(unsafe { T::borrow_mut(ret) })
+ }
+}
+
/// Error type for failure to insert a new value.
pub struct InsertError<T> {
/// The value that could not be inserted.
--
2.50.1.470.g6ba607880d-goog
On Sat, Jul 26, 2025 at 01:23:23PM +0000, Alice Ryhl wrote:
[...]
> +/// A reference to a [`MapleTree`] that owns the inner lock.
> +///
> +/// # Invariants
> +///
> +/// This guard owns the inner spinlock.
> +pub struct MapleLock<'tree, T: ForeignOwnable>(&'tree MapleTree<T>);

So it's a guard, how about we name it `MapleGuard`, or `MapleLockGuard`,
or just `Guard`?

Regards,
Boqun

[...]
On Sat Jul 26, 2025 at 3:23 PM CEST, Alice Ryhl wrote:
[...]
> +impl<'tree, T: ForeignOwnable> Drop for MapleLock<'tree, T> {
> +    #[inline]
> +    fn drop(&mut self) {
> +        // SAFETY: By the type invariants, we hold this spinlock.
> +        unsafe { bindings::spin_unlock(self.0.ma_lock()) };
> +    }
> +}

I think in the future we also want to give users access to the mas_*()
function family. I assume MaState would represent a struct ma_state, but
also carry a MapleLock guard. And then the mas_*() functions would be
methods of MaState?

In case we want to allow releasing and re-acquiring the lock for the same
MaState, we could probably use type states.

I wonder if this (at least partially) makes sense to have from the get-go,
since it could already be used to implement things like
MapleTree::free_all_entries() based on it.
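For illustration, the type-state shape being proposed might look like the
sketch below. This is standalone Rust using toy stand-ins (`RawSpin`,
`Tree`, `MaState`, `next` are all hypothetical names, not the real
bindings); it only shows how `Locked`/`Unlocked` states could gate
mas_*()-style operations while keeping the cursor position across lock
transitions.

```rust
use std::cell::UnsafeCell;
use std::marker::PhantomData;
use std::sync::atomic::{AtomicBool, Ordering};

// Toy spinlock standing in for the maple tree's internal ma_lock.
struct RawSpin(AtomicBool);
impl RawSpin {
    const fn new() -> Self { RawSpin(AtomicBool::new(false)) }
    fn lock(&self) { while self.0.swap(true, Ordering::Acquire) { std::hint::spin_loop() } }
    fn unlock(&self) { self.0.store(false, Ordering::Release) }
}

// Hypothetical tree: a lock plus storage behind an UnsafeCell.
struct Tree { lock: RawSpin, data: UnsafeCell<Vec<u32>> }

// Type states: whether the tree lock is currently held by this cursor.
struct Unlocked;
struct Locked;

// Stand-in for `struct ma_state`: a cursor whose lock state is in the type.
struct MaState<'t, S> { tree: &'t Tree, index: usize, _s: PhantomData<S> }

impl<'t> MaState<'t, Unlocked> {
    fn new(tree: &'t Tree) -> Self { MaState { tree, index: 0, _s: PhantomData } }
    // Acquiring the lock consumes the unlocked cursor and yields a locked
    // one, so mas_*()-style methods only exist while the lock is held.
    fn lock(self) -> MaState<'t, Locked> {
        self.tree.lock.lock();
        MaState { tree: self.tree, index: self.index, _s: PhantomData }
    }
}

impl<'t> MaState<'t, Locked> {
    // Stand-in for a mas_*() operation: advance and read under the lock.
    fn next(&mut self) -> Option<u32> {
        // The Locked type state witnesses that we hold the lock.
        let data = unsafe { &*self.tree.data.get() };
        let v = data.get(self.index).copied();
        self.index += 1;
        v
    }
    // Releasing the lock preserves the cursor position.
    fn unlock(self) -> MaState<'t, Unlocked> {
        self.tree.lock.unlock();
        MaState { tree: self.tree, index: self.index, _s: PhantomData }
    }
}

fn main() {
    let tree = Tree { lock: RawSpin::new(), data: UnsafeCell::new(vec![10, 20, 30]) };
    let mut mas = MaState::new(&tree).lock();
    assert_eq!(mas.next(), Some(10));
    assert_eq!(mas.next(), Some(20));
    let mas = mas.unlock();
    // Re-acquire: the position survives the unlocked window.
    let mut mas = mas.lock();
    assert_eq!(mas.next(), Some(30));
    assert_eq!(mas.next(), None);
}
```

One gap a real version would have to close: dropping the `Locked` cursor
here would leak the lock, so the kernel version would presumably pair the
locked state with a guard or a Drop impl.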
On 7/26/25 8:23 AM, Alice Ryhl wrote:
> To load a value, one must be careful to hold the lock while accessing
> it. To enable this, we add a lock() method so that you can perform
> operations on the value before the spinlock is released.

I have a couple of nits, but overall looks good to me.

[...]
> +    /// Lock the internal spinlock.

probably should add #[must_use] here.

[...]
> +    #[inline]
> +    fn ma_lock(&self) -> *mut bindings::spinlock_t {
> +        // SAFETY: This pointer offset operation stays in-bounds.
> +        let lock = unsafe { &raw mut (*self.tree.get()).__bindgen_anon_1.ma_lock };
> +        lock.cast()

This cast seems unneeded. lock should already be a *mut spinlock_t.

[...]

with or without those fixes, for the entire series,

Reviewed-by: Andrew Ballance <andrewjballance@gmail.com>

Also, if you need one, I would be happy to be a co-maintainer of the rust
maple tree bindings.

Best Regards,
Andrew Ballance
On Mon, Jul 28, 2025 at 1:11 PM Andrew Ballance <andrewjballance@gmail.com> wrote:
[...]
> > +    #[inline]
> > +    fn ma_lock(&self) -> *mut bindings::spinlock_t {
> > +        // SAFETY: This pointer offset operation stays in-bounds.
> > +        let lock = unsafe { &raw mut (*self.tree.get()).__bindgen_anon_1.ma_lock };
> > +        lock.cast()
>
> This cast seems unneeded. lock should already be a *mut spinlock_t.

In some configurations, the type is BindgenUnionField<spinlock_t>.

Alice
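For readers unfamiliar with this bindgen quirk, the situation can be
mimicked outside the kernel. In the sketch below, `BindgenUnionField` is a
simplified zero-sized stand-in for bindgen's wrapper (the real generated
type differs), showing why the raw-pointer borrow needs a cast even though
the field's address is exactly the spinlock's address:

```rust
use std::marker::PhantomData;

// Simplified stand-in for bindgen's BindgenUnionField<T>: a zero-sized
// marker placed at the offset of the union field it describes.
#[repr(C)]
struct BindgenUnionField<T>(PhantomData<T>);

// Hypothetical stand-in for bindings::spinlock_t.
#[repr(C)]
struct SpinlockT { raw: u32 }

// Stand-in for the __bindgen_anon_1 union: the marker field shares its
// address with the union's actual storage.
#[repr(C)]
struct MapleUnion {
    ma_lock: BindgenUnionField<SpinlockT>,
    storage: u32,
}

fn ma_lock_ptr(u: *mut MapleUnion) -> *mut SpinlockT {
    // `&raw mut (*u).ma_lock` has type *mut BindgenUnionField<SpinlockT>,
    // so a cast is required to obtain *mut SpinlockT; both point at the
    // same address.
    let field = unsafe { &raw mut (*u).ma_lock };
    field.cast()
}

fn main() {
    let mut u = MapleUnion { ma_lock: BindgenUnionField(PhantomData), storage: 0 };
    let p = ma_lock_ptr(&mut u);
    assert_eq!(p as usize, &mut u as *mut MapleUnion as usize);
}
```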
On Sat, 26 Jul 2025 13:23:23 +0000 Alice Ryhl <aliceryhl@google.com> wrote:
[...]
> +    #[inline]
> +    fn ma_lock(&self) -> *mut bindings::spinlock_t {
> +        // SAFETY: This pointer offset operation stays in-bounds.
> +        let lock = unsafe { &raw mut (*self.tree.get()).__bindgen_anon_1.ma_lock };
> +        lock.cast()
> +    }

Could this return `&SpinLock<()>` using `Lock::from_raw`?

I guess it has the drawback of having `MapleLock` needing to store the
`ma_lock` pointer, but the guard is usually just all on stack and inlined,
so it probably won't make a difference?

This way you remove `unsafe` from `lock` and get a free `drop`.
On Sat, Jul 26, 2025 at 5:50 PM Gary Guo <gary@garyguo.net> wrote:
[...]
> Could this return `&SpinLock<()>` using `Lock::from_raw`?
>
> I guess it has the drawback of having `MapleLock` needing to store
> `ma_lock` pointer but the guard is usually just all on stack and
> inlined so it probably won't make a difference?
>
> This way you remove `unsafe` from `lock` and gets a free `drop`.

I ended up going this way to avoid the extra field in MapleLock, like
you mention.

Alice
On Sat, Jul 26, 2025 at 6:15 PM Alice Ryhl <aliceryhl@google.com> wrote:
[...]
> I ended up going this way to avoid the extra field in MapleLock, like
> you mention.

Oh, and it also avoids assuming anything about the layout of SpinLock<()>.

Alice
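The trade-off under discussion can be sketched with toy types (`RawSpin`,
`Tree`, and both guard shapes are illustrative stand-ins, not the kernel's
`SpinLock`): variant 1 stores only the tree reference and re-derives the
lock in drop(), keeping the guard one pointer wide; variant 2 stores the
lock reference directly, as the `&SpinLock<()>` approach would.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Toy lock standing in for spinlock_t.
struct RawSpin(AtomicBool);
impl RawSpin {
    const fn new() -> Self { RawSpin(AtomicBool::new(false)) }
    fn lock(&self) { while self.0.swap(true, Ordering::Acquire) { std::hint::spin_loop() } }
    fn unlock(&self) { self.0.store(false, Ordering::Release) }
}

struct Tree { lock: RawSpin }

// Variant 1 (the shape the patch uses): store only &Tree and re-derive
// the lock on drop. The guard stays one pointer wide.
struct TreeGuard<'t>(&'t Tree);
impl<'t> Drop for TreeGuard<'t> {
    fn drop(&mut self) { self.0.lock.unlock() }
}

// Variant 2 (the suggested alternative): store the lock reference itself.
// Unlocking on drop needs no pointer math, but reaching the tree again
// requires the extra field.
#[allow(dead_code)]
struct LockGuard<'t> { lock: &'t RawSpin, tree: &'t Tree }
impl<'t> Drop for LockGuard<'t> {
    fn drop(&mut self) { self.lock.unlock() }
}

impl Tree {
    fn lock_v1(&self) -> TreeGuard<'_> { self.lock.lock(); TreeGuard(self) }
    fn lock_v2(&self) -> LockGuard<'_> { self.lock.lock(); LockGuard { lock: &self.lock, tree: self } }
}

fn main() {
    let tree = Tree { lock: RawSpin::new() };
    { let _g = tree.lock_v1(); }
    { let _g = tree.lock_v2(); }
    // Re-lockable: each drop released the lock.
    { let _g = tree.lock_v1(); }
    // Variant 1's guard is a single reference.
    assert_eq!(std::mem::size_of::<TreeGuard<'_>>(), std::mem::size_of::<usize>());
}
```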
On Sat, 26 Jul 2025 18:18:02 +0200 Alice Ryhl <aliceryhl@google.com> wrote:
[...]
> Oh, and it also avoids assuming anything about the layout of SpinLock<()>

Well, `Lock::from_raw` is designed to interact with a C-side lock:

> Construct a Lock from a raw pointer
>
> This can be useful for interacting with a lock which was initialised
> outside of Rust.

- Gary
* Gary Guo <gary@garyguo.net> [250727 08:03]:
[...]
> Well, `Lock::from_raw` is designed to interact with a C-side lock:
>
> > Construct a Lock from a raw pointer
> >
> > This can be useful for interacting with a lock which was initialised
> > outside of Rust.

If it matters for future build out, the tree supports an external lock
that may not be a spinlock. This is currently used by the mm for the vma
management, and others (although willy wants it to go away eventually).

Thanks,
Liam
On Thu Aug 7, 2025 at 6:15 PM CEST, Liam R. Howlett wrote:
> If it matters for future build out, the tree supports an external lock
> that may not be a spinlock. This is currently used by the mm for the
> vma management, and others (although willy wants it to go away
> eventually).

When I was considering maple tree for GPUVM, i.e. vma management for GPUs,
I would have needed the external lock as well. For GPU VMA management we
have sections for which we have to ensure that the tree won't be altered
across a sequence of sleeping operations.

In the worst case we could have used the internal spinlock and yet had an
external mutex for this purpose; an uncontended spinlock shouldn't be that
big a deal to take. So, long story short, I think there may be a few cases
where an external lock can make sense.

Just to recap why GPUVM couldn't leverage maple tree: we have cases where
we have to pre-allocate multiple entries for the tree, whose ranges are
yet unknown at the time we need to pre-allocate them. Unfortunately, I
can't think of a solution for this, given that in this situation we can't
predict the number of new nodes this requires. If in the meantime there
have been ideas to tackle this, please let me know; I'd still love to have
maple tree for GPUVM.

- Danilo