Taking "max_samples_per_tick" literally, an event whose hwc->interrupts
equals max_samples_per_tick should not be throttled yet; throttling should
only kick in once the count exceeds the limit. Change the condition to
"hwc->interrupts > max_samples_per_tick" accordingly.

More importantly, the current check may break hardlockup detection:
max_samples_per_tick can be as small as 1, in which case a single interrupt
within a tick is enough to make __perf_event_account_interrupt() return 1.
The nmi_watchdog event then gets throttled, which stops the PMU counter
(on x86, for example, see x86_pmu_handle_irq()).
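For illustration only (this sketch is not part of the patch, and the helper
below is hypothetical, written just to show the comparison): a stand-alone
user-space model of the check, assuming max_samples_per_tick == 1, showing
that the old ">=" comparison throttles the event on its very first interrupt
in a tick, while ">" still allows one sample per tick.

/*
 * Stand-alone illustration (not kernel code) of the throttle check.
 * With max_samples_per_tick == 1, the old ">=" condition fires on the
 * first interrupt of the tick; the patched ">" condition does not.
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helper mirroring only the comparison in question. */
static bool would_throttle(unsigned int interrupts_this_tick,
			   unsigned int max_samples_per_tick,
			   bool old_condition)
{
	if (old_condition)
		return interrupts_this_tick >= max_samples_per_tick;
	return interrupts_this_tick > max_samples_per_tick;
}

int main(void)
{
	unsigned int max_samples_per_tick = 1;	/* minimum possible limit */
	unsigned int interrupts = 0;

	interrupts++;	/* hwc->interrupts++ for the only interrupt this tick */

	printf("old (>=): %s\n",
	       would_throttle(interrupts, max_samples_per_tick, true) ?
	       "throttled" : "not throttled");
	printf("new (>) : %s\n",
	       would_throttle(interrupts, max_samples_per_tick, false) ?
	       "throttled" : "not throttled");
	return 0;
}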
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
---
kernel/events/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index d56328e5080e..ced98e028d86 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9414,7 +9414,7 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
 	} else {
 		hwc->interrupts++;
 		if (unlikely(throttle
-			     && hwc->interrupts >= max_samples_per_tick)) {
+			     && hwc->interrupts > max_samples_per_tick)) {
 			__this_cpu_inc(perf_throttled_count);
 			tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
 			hwc->interrupts = MAX_INTERRUPTS;
--
2.30.GIT
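Side note, not part of this patch and stated as my understanding rather than
something from this thread: max_samples_per_tick appears to be derived from
the kernel.perf_event_max_sample_rate sysctl roughly as
DIV_ROUND_UP(sample_rate, HZ), so any configured rate at or below HZ already
yields a limit of 1. A small sketch of that arithmetic:

/*
 * Sketch of the assumed relation between the sample-rate sysctl and
 * max_samples_per_tick: limit ~= DIV_ROUND_UP(sample_rate, HZ).
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	const unsigned int hz = 250;	/* example CONFIG_HZ value */
	const unsigned int rates[] = { 100000, 1000, 250, 100 };

	for (unsigned int i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
		printf("sample_rate=%6u HZ=%u -> max_samples_per_tick=%u\n",
		       rates[i], hz, DIV_ROUND_UP(rates[i], hz));
	return 0;
}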
Hello,

PING

Thanks,
Yang
Hello,

Ping again, please take time to review, thanks.

Thanks,
Yang