The __mt_dup() function requires callers to hold the appropriate write
lock when duplicating a maple tree. Without proper locking, concurrent
modifications during duplication could access invalid node slots.

Add a lockdep assertion to catch such API misuse during development.
This is API hardening rather than a bug fix - all in-tree callers
already follow the proper locking rules as documented above __mt_dup().

Signed-off-by: Boudewijn van der Heide <boudewijn@delta-utec.com>
---
Changes in v2:
- Replaced runtime deadnode check with a lockdep assertion

v1:
https://lore.kernel.org/lkml/20260103165758.74094-1-boudewijn@delta-utec.com/

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 5aa4c9500018..3b4357f16352 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6248,6 +6248,8 @@ static inline void mas_dup_alloc(struct ma_state *mas, struct ma_state *new_mas,
 	void __rcu **new_slots;
 	unsigned long val;
 
+	lockdep_assert(mt_write_locked(mas->tree));
+
 	/* Allocate memory for child nodes. */
 	type = mte_node_type(mas->node);
 	new_slots = ma_slots(new_node, type);
--
2.47.3
* Boudewijn van der Heide <boudewijn@delta-utec.com> [260106 16:08]:
> The __mt_dup() function requires callers to hold the appropriate write
> lock when duplicating a maple tree. Without proper locking, concurrent
> modifications during duplication could access invalid node slots.
>
> Add a lockdep assertion to catch such API misuse during development.
> This is API hardening rather than a bug fix - all in-tree callers
> already follow the proper locking rules as documented above __mt_dup().
>
> Signed-off-by: Boudewijn van der Heide <boudewijn@delta-utec.com>
> ---
> Changes in v2:
> - Replaced runtime deadnode check with a lockdep assertion
> v1:
> https://lore.kernel.org/lkml/20260103165758.74094-1-boudewijn@delta-utec.com/
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 5aa4c9500018..3b4357f16352 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -6248,6 +6248,8 @@ static inline void mas_dup_alloc(struct ma_state *mas, struct ma_state *new_mas,
>  	void __rcu **new_slots;
>  	unsigned long val;
>
> +	lockdep_assert(mt_write_locked(mas->tree));
> +

This is still the wrong place. You are validating the lock is held in a
function that is called in a loop without any unlocking.

mas_dup_build() is the only caller of mas_dup_alloc(), and that is only
called from two functions: __mt_dup() and mtree_dup().

This would be better served in __mt_dup() since the other caller,
mtree_dup(), already does the locking so there's no way this will
trigger from that call path. That way, we don't spend a lot of cycles
checking lockdep when forking for no reason.

>  	/* Allocate memory for child nodes. */
>  	type = mte_node_type(mas->node);
>  	new_slots = ma_slots(new_node, type);
> --
> 2.47.3
>
>
Thanks for the review! That makes sense - I overlooked this detail in v2.
Checking in __mt_dup() is clearly the better place, since mtree_dup()
already takes the actual locks itself.

Taking a closer look at __mt_dup() and mtree_dup(), I noticed that
mtree_dup() locks both the new and the old maple_tree. So I think it makes
sense to assert on both trees in __mt_dup() as well, like this:
@@ -6379,6 +6379,9 @@ int __mt_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp)
 	MA_STATE(mas, mt, 0, 0);
 	MA_STATE(new_mas, new, 0, 0);
 
+	lockdep_assert(mt_write_locked(new));
+	lockdep_assert(mt_write_locked(mt));
+
 	mas_dup_build(&mas, &new_mas, gfp);
 	if (unlikely(mas_is_err(&mas))) {
 		ret = xa_err(mas.node);
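
For completeness, here is a rough caller-side sketch of the contract the
asserts would encode, for a pair of trees using the internal spinlock.
This is illustrative only - dup_example() is a made-up caller, I'm assuming
mas_lock_nested() is available for the second lock (both ma_locks share a
lockdep class), and the allocation mask has to be atomic because spinlocks
are held:

#include <linux/maple_tree.h>

/* Hypothetical caller - not part of the patch, just the expected pattern. */
static int dup_example(struct maple_tree *old, struct maple_tree *new)
{
	int ret;
	MA_STATE(old_mas, old, 0, 0);

	mtree_lock(new);
	mas_lock_nested(&old_mas, SINGLE_DEPTH_NESTING);

	/* Both trees are write locked here, so mt_write_locked() is true. */
	ret = __mt_dup(old, new, GFP_ATOMIC);

	mas_unlock(&old_mas);
	mtree_unlock(new);
	return ret;
}

With the proposed asserts, dropping either of the two locks above would
trigger the lockdep_assert() warning in __mt_dup() on a CONFIG_LOCKDEP
build.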
Does that look good to you for v3?
Thanks,
Boudewijn