Mark data races on work->wait_index as benign using READ_ONCE() and WRITE_ONCE(). These accesses are expected to be racy.
Signed-off-by: linke li <lilinke99@qq.com>
---
kernel/trace/ring_buffer.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 0699027b4f4c..a47e9e9750cc 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -798,7 +798,7 @@ void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
 		rbwork = &cpu_buffer->irq_work;
 	}
 
-	rbwork->wait_index++;
+	WRITE_ONCE(rbwork->wait_index, READ_ONCE(rbwork->wait_index) + 1);
 	/* make sure the waiters see the new index */
 	smp_wmb();
 
@@ -906,7 +906,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
 
 		/* Make sure to see the new wait index */
 		smp_rmb();
-		if (wait_index != work->wait_index)
+		if (wait_index != READ_ONCE(work->wait_index))
 			break;
 	}
--
2.39.3 (Apple Git-145)
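
To illustrate what the annotation buys, here is a minimal userspace sketch
of the pattern the patch applies. The two macros are simplified stand-ins
for the kernel's READ_ONCE()/WRITE_ONCE() (the real ones handle more cases,
such as differently sized accesses); the file name and variable names are
illustrative, not taken from the patch.

	/* Build with: gcc -Wall sketch.c (GNU C, for __typeof__) */
	#include <stdio.h>

	/* Simplified stand-ins: force a single volatile access so the
	 * compiler cannot tear, fuse, or re-load the value. */
	#define READ_ONCE(x)		(*(const volatile __typeof__(x) *)&(x))
	#define WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))

	static long wait_index;

	int main(void)
	{
		/* Writer side (cf. ring_buffer_wake_waiters): the increment
		 * becomes one marked load plus one marked store. The race with
		 * concurrent readers remains; it is merely marked intentional,
		 * which also keeps KCSAN from reporting it. */
		WRITE_ONCE(wait_index, READ_ONCE(wait_index) + 1);

		/* Reader side (cf. ring_buffer_wait): take a marked snapshot;
		 * a stale value is tolerated by the caller's retry loop. */
		long snapshot = READ_ONCE(wait_index);

		printf("wait_index snapshot: %ld\n", snapshot);
		return 0;
	}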
On Wed, 6 Mar 2024 10:55:34 +0800
linke li <lilinke99@qq.com> wrote:

> Mark data races on work->wait_index as benign using READ_ONCE() and
> WRITE_ONCE(). These accesses are expected to be racy.

Are we now to the point that every single access of a variable (long size
or less) needs a READ_ONCE/WRITE_ONCE even with all the necessary
smp_r/wmb()s?

> 
> Signed-off-by: linke li <lilinke99@qq.com>
> ---
>  kernel/trace/ring_buffer.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 0699027b4f4c..a47e9e9750cc 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -798,7 +798,7 @@ void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
> 		rbwork = &cpu_buffer->irq_work;
> 	}
> 
> -	rbwork->wait_index++;
> +	WRITE_ONCE(rbwork->wait_index, READ_ONCE(rbwork->wait_index) + 1);

I mean the above is really ugly. If this is the new thing to do, we need
better macros. If anything, just convert it to an atomic_t.

-- Steve

> 	/* make sure the waiters see the new index */
> 	smp_wmb();
> 
> @@ -906,7 +906,7 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
> 
> 		/* Make sure to see the new wait index */
> 		smp_rmb();
> -		if (wait_index != work->wait_index)
> +		if (wait_index != READ_ONCE(work->wait_index))
> 			break;
> 	}
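
For reference, the atomic_t conversion suggested above would look roughly
like the sketch below. This is a hypothetical illustration, not a patch
posted in this thread, and it assumes wait_index is currently a plain long
in struct rb_irq_work. Note that atomic_inc()/atomic_read() impose no
ordering by themselves, so the existing smp_wmb()/smp_rmb() pairing would
still be required.

/* struct rb_irq_work: the index becomes an atomic_t */
-	long		wait_index;
+	atomic_t	wait_index;

/* ring_buffer_wake_waiters(): increment without an open-coded
 * read-modify-write */
-	rbwork->wait_index++;
+	atomic_inc(&rbwork->wait_index);

/* ring_buffer_wait(): the recheck, and likewise the earlier snapshot
 * of the index, would use atomic_read() */
-	if (wait_index != work->wait_index)
+	if (wait_index != atomic_read(&work->wait_index))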