From: Shaurya Rane <ssrane_b23@ee.vjti.ac.in>
The kernel's lockdep validator detected a circular locking dependency
in ring_buffer_map(). The function was acquiring the per-CPU
'cpu_buffer->mapping_lock' before the global 'buffer->mutex'.
This violates the established locking hierarchy where 'buffer->mutex'
should be acquired first, leading to a potential deadlock.
Fix this by reordering the mutex acquisition to lock 'buffer->mutex'
before 'cpu_buffer->mapping_lock', satisfying the lockdep requirements
and preventing the deadlock.
Reported-by: syzbot+c530b4d95ec5cd4f33a7@syzkaller.appspotmail.com
Signed-off-by: Shaurya Rane <ssrane_b23@ee.vjti.ac.in>
---
kernel/trace/ring_buffer.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 43460949ad3f..82c3d5d2dcf6 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -7222,9 +7222,10 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
if (!cpumask_test_cpu(cpu, buffer->cpumask))
return -EINVAL;
-
+
cpu_buffer = buffer->buffers[cpu];
-
+
+ guard(mutex)(&buffer->mutex);
guard(mutex)(&cpu_buffer->mapping_lock);
if (cpu_buffer->user_mapped) {
--
2.34.1
On Sun, 5 Oct 2025 19:46:36 +0530
ssrane_b23@ee.vjti.ac.in wrote:
> From: Shaurya Rane <ssrane_b23@ee.vjti.ac.in>
>
> The kernel's lockdep validator detected a circular locking dependency
> in ring_buffer_map(). The function was acquiring the per-CPU
> 'cpu_buffer->mapping_lock' before the global 'buffer->mutex'.
>
You should either have a link to the email reporting the lockdep splat, or
post it in the change log. I'd like to know exactly what the race was.
> This violates the established locking hierarchy where 'buffer->mutex'
> should be acquired first, leading to a potential deadlock.
>
> Fix this by reordering the mutex acquisition to lock 'buffer->mutex'
> before 'cpu_buffer->mapping_lock', satisfying the lockdep requirements
> and preventing the deadlock.
>
> Reported-by: syzbot+c530b4d95ec5cd4f33a7@syzkaller.appspotmail.com
>
> Signed-off-by: Shaurya Rane <ssrane_b23@ee.vjti.ac.in>
> ---
> kernel/trace/ring_buffer.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 43460949ad3f..82c3d5d2dcf6 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -7222,9 +7222,10 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
>
> if (!cpumask_test_cpu(cpu, buffer->cpumask))
> return -EINVAL;
> -
> +
Added white space?
> cpu_buffer = buffer->buffers[cpu];
> -
> +
More added white space?
> + guard(mutex)(&buffer->mutex);
> guard(mutex)(&cpu_buffer->mapping_lock);
You state that you are reversing the order here, but all I see is that you
added taking the buffer->mutex lock. If the order really were reversed, then
I'm assuming that later in this function the buffer->mutex is taken again.
That would cause a deadlock.
What exactly are you reversing?
-- Steve
>
> if (cpu_buffer->user_mapped) {