Currently, when a lock class is allocated, nr_unused_locks will be
increased by 1, until it gets used: nr_unused_locks will be decreased by
1 in mark_lock(). However, one scenario is missed: a lock class may be
zapped without ever being used. This can result in a situation where
nr_unused_locks != 0 even though no unused lock class is active in the
system, and then reading `/proc/lockdep_stats` will trigger a WARN_ON()
in a CONFIG_DEBUG_LOCKDEP=y kernel:
[...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
[...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0
And as a result, lockdep will be disabled after this.
Therefore, nr_unused_locks needs to be accounted correctly at
zap_class() time.
Cc: stable@vger.kernel.org
Signee-off-by: Boqun Feng <boqun.feng@gmail.com>
---
kernel/locking/lockdep.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b15757e63626..686546d52337 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -6264,6 +6264,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 	hlist_del_rcu(&class->hash_entry);
 	WRITE_ONCE(class->key, NULL);
 	WRITE_ONCE(class->name, NULL);
+	/* class allocated but not used, -1 in nr_unused_locks */
+	if (class->usage_mask == 0)
+		debug_atomic_dec(nr_unused_locks);
 	nr_lock_classes--;
 	__clear_bit(class - lock_classes, lock_classes_in_use);
 	if (class - lock_classes == max_lock_class_idx)
--
2.47.1
* Boqun Feng <boqun.feng@gmail.com> wrote:

> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index b15757e63626..686546d52337 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -6264,6 +6264,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
>  	hlist_del_rcu(&class->hash_entry);
>  	WRITE_ONCE(class->key, NULL);
>  	WRITE_ONCE(class->name, NULL);
> +	/* class allocated but not used, -1 in nr_unused_locks */
> +	if (class->usage_mask == 0)
> +		debug_atomic_dec(nr_unused_locks);

Nit: capitalization in comments should follow the style of the
surrounding code - ie. I did the change below.

Thanks,

	Ingo

======================>

 kernel/locking/lockdep.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 686546d52337..58d78a33ac65 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -6264,7 +6264,7 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 	hlist_del_rcu(&class->hash_entry);
 	WRITE_ONCE(class->key, NULL);
 	WRITE_ONCE(class->name, NULL);
-	/* class allocated but not used, -1 in nr_unused_locks */
+	/* Class allocated but not used, -1 in nr_unused_locks */
 	if (class->usage_mask == 0)
 		debug_atomic_dec(nr_unused_locks);
 	nr_lock_classes--;
On 3/26/25 2:08 PM, Boqun Feng wrote:
> Currently, when a lock class is allocated, nr_unused_locks will be
> increased by 1, until it gets used: nr_unused_locks will be decreased by
> 1 in mark_lock(). However, one scenario is missed: a lock class may be
> zapped without even being used once. This could result into a situation
> that nr_unused_locks != 0 but no unused lock class is active in the
> system, and when `cat /proc/lockdep_stats`, a WARN_ON() will
> be triggered in a CONFIG_DEBUG_LOCKDEP=y kernel:
>
> [...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
> [...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0
>
> And as a result, lockdep will be disabled after this.
>
> Therefore, nr_unused_locks needs to be accounted correctly at
> zap_class() time.
>
> Cc: stable@vger.kernel.org
> Signee-off-by: Boqun Feng <boqun.feng@gmail.com>

Typo: "Signee-off-by"?

Other than that, LGTM

Reviewed-by: Waiman Long <longman@redhat.com>

> ---
> kernel/locking/lockdep.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index b15757e63626..686546d52337 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -6264,6 +6264,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
>  	hlist_del_rcu(&class->hash_entry);
>  	WRITE_ONCE(class->key, NULL);
>  	WRITE_ONCE(class->name, NULL);
> +	/* class allocated but not used, -1 in nr_unused_locks */
> +	if (class->usage_mask == 0)
> +		debug_atomic_dec(nr_unused_locks);
>  	nr_lock_classes--;
>  	__clear_bit(class - lock_classes, lock_classes_in_use);
>  	if (class - lock_classes == max_lock_class_idx)
On Wed, Mar 26, 2025 at 02:26:53PM -0400, Waiman Long wrote:
>
> On 3/26/25 2:08 PM, Boqun Feng wrote:
> > Currently, when a lock class is allocated, nr_unused_locks will be
> > increased by 1, until it gets used: nr_unused_locks will be decreased by
> > 1 in mark_lock(). However, one scenario is missed: a lock class may be
> > zapped without even being used once. This could result into a situation
> > that nr_unused_locks != 0 but no unused lock class is active in the
> > system, and when `cat /proc/lockdep_stats`, a WARN_ON() will
> > be triggered in a CONFIG_DEBUG_LOCKDEP=y kernel:
> >
> > [...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
> > [...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0
> >
> > And as a result, lockdep will be disabled after this.
> >
> > Therefore, nr_unused_locks needs to be accounted correctly at
> > zap_class() time.
> >
> > Cc: stable@vger.kernel.org
> > Signee-off-by: Boqun Feng <boqun.feng@gmail.com>
>
> Typo: "Signee-off-by"?
>

Oops, yeah.

> Other than that, LGTM
>
> Reviewed-by: Waiman Long <longman@redhat.com>
>

Thanks!

Regards,
Boqun

> > ---
> > kernel/locking/lockdep.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index b15757e63626..686546d52337 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -6264,6 +6264,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
> >  	hlist_del_rcu(&class->hash_entry);
> >  	WRITE_ONCE(class->key, NULL);
> >  	WRITE_ONCE(class->name, NULL);
> > +	/* class allocated but not used, -1 in nr_unused_locks */
> > +	if (class->usage_mask == 0)
> > +		debug_atomic_dec(nr_unused_locks);
> >  	nr_lock_classes--;
> >  	__clear_bit(class - lock_classes, lock_classes_in_use);
> >  	if (class - lock_classes == max_lock_class_idx)
>
The following commit has been merged into the locking/urgent branch of tip:
Commit-ID: 495f53d5cca0f939eaed9dca90b67e7e6fb0e30c
Gitweb: https://git.kernel.org/tip/495f53d5cca0f939eaed9dca90b67e7e6fb0e30c
Author: Boqun Feng <boqun.feng@gmail.com>
AuthorDate: Wed, 26 Mar 2025 11:08:30 -07:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 27 Mar 2025 08:23:17 +01:00
locking/lockdep: Decrease nr_unused_locks if lock unused in zap_class()
Currently, when a lock class is allocated, nr_unused_locks will be
increased by 1, until it gets used: nr_unused_locks will be decreased by
1 in mark_lock(). However, one scenario is missed: a lock class may be
zapped without ever being used. This can result in a situation where
nr_unused_locks != 0 even though no unused lock class is active in the
system, and then reading `/proc/lockdep_stats` will trigger a WARN_ON()
in a CONFIG_DEBUG_LOCKDEP=y kernel:
[...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
[...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0
And as a result, lockdep will be disabled after this.
Therefore, nr_unused_locks needs to be accounted correctly at
zap_class() time.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Waiman Long <longman@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20250326180831.510348-1-boqun.feng@gmail.com
---
kernel/locking/lockdep.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b15757e..58d78a3 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -6264,6 +6264,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 	hlist_del_rcu(&class->hash_entry);
 	WRITE_ONCE(class->key, NULL);
 	WRITE_ONCE(class->name, NULL);
+	/* Class allocated but not used, -1 in nr_unused_locks */
+	if (class->usage_mask == 0)
+		debug_atomic_dec(nr_unused_locks);
 	nr_lock_classes--;
 	__clear_bit(class - lock_classes, lock_classes_in_use);
 	if (class - lock_classes == max_lock_class_idx)