Write to buffer_size_kb can permanently fail because cpu_online_mask may
change between the two for_each_online_buffer_cpu loops.
The number of increments and decrements of cpu_buffer->resize_disabled
may become inconsistent, leaving resize_disabled non-zero on some CPUs
after ring_buffer_reset_online_cpus() returns.

This issue can be reproduced by running "echo 0 > trace" and hotplugging
a CPU at the same time. Once reproduced, buffer_size_kb is no longer
functional.
This patch uses cpus_read_lock() to prevent cpu_online_mask from being
changed between the two "for_each_online_buffer_cpu" loops.
Changes in v2:
Use cpus_read_lock() instead of copying cpu_online_mask at the entry of
the function
Link: https://lore.kernel.org/lkml/20230408052226.25268-1-Tze-nan.Wu@mediatek.com/
Cc: stable@vger.kernel.org
Signed-off-by: Tze-nan Wu <Tze-nan.Wu@mediatek.com>
---
kernel/trace/ring_buffer.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 76a2d91eecad..44d833252fb0 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -5357,6 +5357,7 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
 
 	/* prevent another thread from changing buffer sizes */
 	mutex_lock(&buffer->mutex);
+	cpus_read_lock();
 
 	for_each_online_buffer_cpu(buffer, cpu) {
 		cpu_buffer = buffer->buffers[cpu];
@@ -5377,6 +5378,7 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
 		atomic_dec(&cpu_buffer->resize_disabled);
 	}
 
+	cpus_read_unlock();
 	mutex_unlock(&buffer->mutex);
 }
--
2.18.0
On 4/9/23 09:46, Tze-nan Wu wrote:
> This issue can be reproduced by "echo 0 > trace" and hotplug cpu at the
> same time. After reproducing success, we can find out buffer_size_kb
> will not be functional anymore.
>

Do you mean disabling tracing while hotplugging CPU? Or disabling both
tracing and hotplug CPU?

> This patch uses cpus_read_lock() to prevent cpu_online_mask being changed
> between two different "for_each_online_buffer_cpu".
>

"Use cpus_read_lock() to prevent ..."

> Changes in v2:
> Use cpus_read_lock() instead of copying cpu_online_mask at the entry of
> function
>

To resolve kernel test robot warnings ([1] and [2])? Or have they been
fixed?

[1]: https://lore.kernel.org/stable/202304081615.eiaqpbV8-lkp@intel.com/
[2]: https://lore.kernel.org/stable/202304082051.Dp50upfS-lkp@intel.com/

Thanks.

-- 
An old man doll... just what I always wanted! - Clara
Hi Bagas,

On Sun, 2023-04-09 at 19:30 +0700, Bagas Sanjaya wrote:
> On 4/9/23 09:46, Tze-nan Wu wrote:
> > This issue can be reproduced by "echo 0 > trace" and hotplug cpu at
> > the same time. After reproducing success, we can find out
> > buffer_size_kb will not be functional anymore.
>
> Do you mean disabling tracing while hotplugging CPU? Or disabling
> both tracing and hotplug CPU?
>
I mean disabling tracing while hotplugging CPU, sorry for the confusion
here.

> > This patch uses cpus_read_lock() to prevent cpu_online_mask being
> > changed between two different "for_each_online_buffer_cpu".
>
> "Use cpus_read_lock() to prevent ..."
>
Thanks for the suggestion.

> > Changes in v2:
> > Use cpus_read_lock() instead of copying cpu_online_mask at the
> > entry of function
>
Since I changed to fixing the issue with cpus_read_lock(), we don't
need a copy of cpu_online_mask anymore, hence the two warnings will not
be hit by the v2 patch.

> To resolve kernel test robot warnings ([1] and [2])? Or have they
> been fixed?
>
> [1]: https://lore.kernel.org/stable/202304081615.eiaqpbV8-lkp@intel.com/
> [2]: https://lore.kernel.org/stable/202304082051.Dp50upfS-lkp@intel.com/
>
> Thanks.
Write to buffer_size_kb can permanently fail because cpu_online_mask may
change between the two for_each_online_buffer_cpu loops.
The number of increments and decrements of cpu_buffer->resize_disabled
may become inconsistent, leaving resize_disabled non-zero on some CPUs
after ring_buffer_reset_online_cpus() returns.

This issue can be reproduced by running "echo 0 > trace" while
hotplugging a CPU. Once reproduced, buffer_size_kb is no longer
functional.
Prevent the two "loops" in this function from iterating through different
online cpus by copying cpu_online_mask at the entry of the function.
Changes from v1 to v3:
Declare the cpumask variable statically rather than dynamically.
Changes from v2 to v3:
Since holding cpu_hotplug_lock across the synchronize_rcu() could take
too long, it's better to prevent the issue by copying cpu_online_mask
at the entry of the function, as v1 does, instead of using
cpus_read_lock().
Link: https://lore.kernel.org/lkml/20230408052226.25268-1-Tze-nan.Wu@mediatek.com/
Link: https://lore.kernel.org/oe-kbuild-all/202304082051.Dp50upfS-lkp@intel.com/
Link: https://lore.kernel.org/oe-kbuild-all/202304081615.eiaqpbV8-lkp@intel.com/
Cc: stable@vger.kernel.org
Cc: npiggin@gmail.com
Fixes: b23d7a5f4a07 ("ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU")
Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Cheng-Jui Wang <cheng-jui.wang@mediatek.com>
Signed-off-by: Tze-nan Wu <Tze-nan.Wu@mediatek.com>
---
kernel/trace/ring_buffer.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 76a2d91eecad..dc758930dacb 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -288,9 +288,6 @@ EXPORT_SYMBOL_GPL(ring_buffer_event_data);
 #define for_each_buffer_cpu(buffer, cpu)		\
 	for_each_cpu(cpu, buffer->cpumask)
 
-#define for_each_online_buffer_cpu(buffer, cpu)		\
-	for_each_cpu_and(cpu, buffer->cpumask, cpu_online_mask)
-
 #define TS_SHIFT	27
 #define TS_MASK		((1ULL << TS_SHIFT) - 1)
 #define TS_DELTA_TEST	(~TS_MASK)
@@ -5353,12 +5350,19 @@ EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
 void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
+	cpumask_t reset_online_cpumask;
 	int cpu;
 
+	/*
+	 * Record cpu_online_mask here to make sure we iterate through the same
+	 * online CPUs in the following two loops.
+	 */
+	cpumask_copy(&reset_online_cpumask, cpu_online_mask);
+
 	/* prevent another thread from changing buffer sizes */
 	mutex_lock(&buffer->mutex);
 
-	for_each_online_buffer_cpu(buffer, cpu) {
+	for_each_cpu_and(cpu, buffer->cpumask, &reset_online_cpumask) {
 		cpu_buffer = buffer->buffers[cpu];
 
 		atomic_inc(&cpu_buffer->resize_disabled);
@@ -5368,7 +5372,7 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
 	/* Make sure all commits have finished */
 	synchronize_rcu();
 
-	for_each_online_buffer_cpu(buffer, cpu) {
+	for_each_cpu_and(cpu, buffer->cpumask, &reset_online_cpumask) {
 		cpu_buffer = buffer->buffers[cpu];
 
 		reset_disabled_cpu_buffer(cpu_buffer);
--
2.18.0
For those who are following this thread but are not on the Cc list: the
v4 patch cannot easily be found because the title of the newer patch has
changed. Let me put a link here:

https://lore.kernel.org/lkml/20230412112401.25081-1-Tze-nan.Wu@mediatek.com/
Please have each new patch be a new thread, and not a Cc to the previous
version of the patch. As it makes it hard to find in INBOXs.

On Mon, 10 Apr 2023 15:35:08 +0800
Tze-nan Wu <Tze-nan.Wu@mediatek.com> wrote:

> Write to buffer_size_kb can permanently fail, due to cpu_online_mask may
> changed between two for_each_online_buffer_cpu loops.
> The number of increasing and decreasing on cpu_buffer->resize_disable
> may be inconsistent, leading that the resize_disabled in some CPUs
> becoming none zero after ring_buffer_reset_online_cpus return.
>
> This issue can be reproduced by "echo 0 > trace" while hotplugging cpu.
> After reproducing success, we can find out buffer_size_kb will not be
> functional anymore.
>
> Prevent the two "loops" in this function from iterating through different
> online cpus by copying cpu_online_mask at the entry of the function.
>

The "Changes from" need to go below the '---', otherwise they are added to
the git commit (we don't want it there).

> Changes from v1 to v3:
> Declare the cpumask variable statically rather than dynamically.
>
> Changes from v2 to v3:
> Considering holding cpu_hotplug_lock too long because of the
> synchronize_rcu(), maybe it's better to prevent the issue by copying
> cpu_online_mask at the entry of the function as V1 does, instead of
> using cpus_read_lock().
>
> Link: https://lore.kernel.org/lkml/20230408052226.25268-1-Tze-nan.Wu@mediatek.com/
> Link: https://lore.kernel.org/oe-kbuild-all/202304082051.Dp50upfS-lkp@intel.com/
> Link: https://lore.kernel.org/oe-kbuild-all/202304081615.eiaqpbV8-lkp@intel.com/
>
> Cc: stable@vger.kernel.org
> Cc: npiggin@gmail.com
> Fixes: b23d7a5f4a07 ("ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU")
> Reported-by: kernel test robot <lkp@intel.com>
> Reviewed-by: Cheng-Jui Wang <cheng-jui.wang@mediatek.com>
> Signed-off-by: Tze-nan Wu <Tze-nan.Wu@mediatek.com>
> ---

This is where the "Changes from" go.
And since this patch is not supposed to be a Cc, but it's still good to
have a link to it, you could do:

  Changes from v2 to v3: https://lore.kernel.org/linux-trace-kernel/20230409024616.31099-1-Tze-nan.Wu@mediatek.com/
    Considering holding cpu_hotplug_lock too long because of the
    synchronize_rcu(), maybe it's better to prevent the issue by copying
    cpu_online_mask at the entry of the function as V1 does, instead of
    using cpus_read_lock().

Where the previous version changes has the lore link to the previous patch,
in case someone wants to look at it.

>  kernel/trace/ring_buffer.c | 14 +++++++++-----
>  1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 76a2d91eecad..dc758930dacb 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -288,9 +288,6 @@ EXPORT_SYMBOL_GPL(ring_buffer_event_data);
>  #define for_each_buffer_cpu(buffer, cpu)		\
>  	for_each_cpu(cpu, buffer->cpumask)
>
> -#define for_each_online_buffer_cpu(buffer, cpu)		\
> -	for_each_cpu_and(cpu, buffer->cpumask, cpu_online_mask)
> -
>  #define TS_SHIFT	27
>  #define TS_MASK		((1ULL << TS_SHIFT) - 1)
>  #define TS_DELTA_TEST	(~TS_MASK)
> @@ -5353,12 +5350,19 @@ EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
>  void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
>  {
>  	struct ring_buffer_per_cpu *cpu_buffer;
> +	cpumask_t reset_online_cpumask;

It's usually considered bad form to put a cpumask on the stack. As it can
be 128 bytes for a machine with 1024 CPUs (and yes they do exist). Also,
the mask size is set to NR_CPUS not the actual size, so you do not even
need to have it that big.

>  	int cpu;
>
> +	/*
> +	 * Record cpu_online_mask here to make sure we iterate through the same
> +	 * online CPUs in the following two loops.
> +	 */
> +	cpumask_copy(&reset_online_cpumask, cpu_online_mask);
> +
>  	/* prevent another thread from changing buffer sizes */
>  	mutex_lock(&buffer->mutex);
>
> -	for_each_online_buffer_cpu(buffer, cpu) {
> +	for_each_cpu_and(cpu, buffer->cpumask, &reset_online_cpumask) {
>  		cpu_buffer = buffer->buffers[cpu];
>
>  		atomic_inc(&cpu_buffer->resize_disabled);

Anyway, we don't need to modify any of the above, and just do the following
instead of atomic_inc():

#define RESET_BIT	(1 << 30)

	atomic_add(RESET_BIT, &cpu_buffer->resize_disabled);

> @@ -5368,7 +5372,7 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
>  	/* Make sure all commits have finished */
>  	synchronize_rcu();
>
> -	for_each_online_buffer_cpu(buffer, cpu) {
> +	for_each_cpu_and(cpu, buffer->cpumask, &reset_online_cpumask) {
>  		cpu_buffer = buffer->buffers[cpu];

Then here we can do:

	/*
	 * If a CPU came online during the synchronize_rcu(), then
	 * ignore it.
	 */
	if (!(atomic_read(&cpu_buffer->resize_disabled) & RESET_BIT))
		continue;

	atomic_sub(RESET_BIT, &cpu_buffer->resize_disabled);

As the resize_disabled only needs to be set to something to make it
disabled.

-- Steve

>
>  		reset_disabled_cpu_buffer(cpu_buffer);
Hi Steve,

On Tue, 2023-04-11 at 12:44 -0400, Steven Rostedt wrote:
>
> Please have each new patch be a new thread, and not a Cc to the previous
> version of the patch. As it makes it hard to find in INBOXs.
>
No problem, got it.

> The "Changes from" need to go below the '---', otherwise they are added
> to the git commit (we don't want it there).
>
Will remember this, it won't happen next time :)

> This is where the "Changes from" go. And since this patch is not
> supposed to be a Cc, but it's still good to have a link to it, you
> could do:
>
>  Changes from v2 to v3: https://lore.kernel.org/linux-trace-kernel/20230409024616.31099-1-Tze-nan.Wu@mediatek.com/
>
> Where the previous version changes has the lore link to the previous
> patch, in case someone wants to look at it.
>
Sure, a link here is really helpful. Will follow this format in the
future.

> > +	cpumask_t reset_online_cpumask;
>
> It's usually considered bad form to put a cpumask on the stack. As it
> can be 128 bytes for a machine with 1024 CPUs (and yes they do exist).
> Also, the mask size is set to NR_CPUS not the actual size, so you do
> not even need to have it that big.
>
Never thought about that until you told me, I will keep it in mind
before declaring a cpumask next time.

> > -	for_each_online_buffer_cpu(buffer, cpu) {
> > +	for_each_cpu_and(cpu, buffer->cpumask, &reset_online_cpumask) {

Maybe we should use for_each_buffer_cpu(buffer, cpu) here?
Since a CPU may also come offline during synchronize_rcu().

> Then here we can do:
>
> 	/*
> 	 * If a CPU came online during the synchronize_rcu(), then
> 	 * ignore it.
> 	 */
> 	if (!(atomic_read(&cpu_buffer->resize_disabled) & RESET_BIT))
> 		continue;
>
> 	atomic_sub(RESET_BIT, &cpu_buffer->resize_disabled);
>
> As the resize_disabled only needs to be set to something to make it
> disabled.
>
> -- Steve
>
Thanks for all your suggestions, I learned a lot here, really
appreciate it :). I will upload a v4 patch in a new thread as soon as
the new patch passes my test.

-- Tzenan
On Wed, 12 Apr 2023 02:27:53 +0000
Tze-nan Wu (吳澤南) <Tze-nan.Wu@mediatek.com> wrote:

> > > -	for_each_online_buffer_cpu(buffer, cpu) {
> > > +	for_each_cpu_and(cpu, buffer->cpumask, &reset_online_cpumask) {
>
> Maybe we should use for_each_buffer_cpu(buffer, cpu) here?
> Since a CPU may also come offline during synchronize_rcu().

Yeah, I guess that works too (not looking at the code at the moment
though).

-- Steve
On Tue, 11 Apr 2023 12:44:03 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> Please have each new patch be a new thread, and not a Cc to the previous

Should have been "reply to the previous" :-p

-- Steve

> version of the patch. As it makes it hard to find in INBOXs.