[PATCH v2] perf: Avoid undefined behavior from stopping/starting inactive events

Posted by Yunseong Kim 1 month, 3 weeks ago
Calling pmu->start()/stop() on events in PERF_EVENT_STATE_OFF can leave
event->hw.idx at -1, which may lead to UBSAN shift-out-of-bounds reports
when the PMU code later shifts by a negative exponent.

Move the state check into perf_event_throttle()/perf_event_unthrottle() so
that inactive events are skipped entirely. This ensures only active events
with a valid hw.idx are processed, preventing undefined behavior and
silencing UBSAN warnings.

The problem can be reproduced with the syzkaller reproducer:
Link: https://lore.kernel.org/lkml/714b7ba2-693e-42e4-bce4-feef2a5e7613@kzalloc.com/

Fixes: 9734e25fbf5a ("perf: Fix the throttle logic for a group")
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yunseong Kim <ysk@kzalloc.com>
---
 kernel/events/core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 8060c2857bb2..c9322029a8ae 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2665,6 +2665,9 @@ static void perf_log_itrace_start(struct perf_event *event);
 
 static void perf_event_unthrottle(struct perf_event *event, bool start)
 {
+	if (event->state <= PERF_EVENT_STATE_OFF)
+		return;
+
 	event->hw.interrupts = 0;
 	if (start)
 		event->pmu->start(event, 0);
@@ -2674,6 +2677,9 @@ static void perf_event_unthrottle(struct perf_event *event, bool start)
 
 static void perf_event_throttle(struct perf_event *event)
 {
+	if (event->state <= PERF_EVENT_STATE_OFF)
+		return;
+
 	event->hw.interrupts = MAX_INTERRUPTS;
 	event->pmu->stop(event, 0);
 	if (event == event->group_leader)
-- 
2.50.0
Re: [PATCH v2] perf: Avoid undefined behavior from stopping/starting inactive events
Posted by Peter Zijlstra 1 month, 3 weeks ago
On Tue, Aug 12, 2025 at 01:27:22AM +0000, Yunseong Kim wrote:
> Calling pmu->start()/stop() on events in PERF_EVENT_STATE_OFF can leave
> event->hw.idx at -1, which may lead to UBSAN shift-out-of-bounds reports
> when the PMU code later shifts by a negative exponent.

Yeah, but how do we get there? I suppose there is a race somewhere?
Please describe.

> Move the state check into perf_event_throttle()/perf_event_unthrottle() so
> that inactive events are skipped entirely. This ensures only active events
> with a valid hw.idx are processed, preventing undefined behavior and
> silencing UBSAN warnings.
> The problem can be reproduced with the syzkaller reproducer:
> Link: https://lore.kernel.org/lkml/714b7ba2-693e-42e4-bce4-feef2a5e7613@kzalloc.com/
> 
> Fixes: 9734e25fbf5a ("perf: Fix the throttle logic for a group")
> Cc: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Yunseong Kim <ysk@kzalloc.com>
> ---
>  kernel/events/core.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 8060c2857bb2..c9322029a8ae 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2665,6 +2665,9 @@ static void perf_log_itrace_start(struct perf_event *event);
>  
>  static void perf_event_unthrottle(struct perf_event *event, bool start)
>  {
> +	if (event->state <= PERF_EVENT_STATE_OFF)
> +		return;

This seems wrong. We should only {,un}throttle ACTIVE events, no?

>  	event->hw.interrupts = 0;
>  	if (start)
>  		event->pmu->start(event, 0);
> @@ -2674,6 +2677,9 @@ static void perf_event_unthrottle(struct perf_event *event, bool start)
>  
>  static void perf_event_throttle(struct perf_event *event)
>  {
> +	if (event->state <= PERF_EVENT_STATE_OFF)
> +		return;
> +
>  	event->hw.interrupts = MAX_INTERRUPTS;
>  	event->pmu->stop(event, 0);
>  	if (event == event->group_leader)
> -- 
> 2.50.0
>
Re: [PATCH v2] perf: Avoid undefined behavior from stopping/starting inactive events
Posted by Yunseong Kim 1 month, 3 weeks ago
Hi Peter,

Thank you so much for the code review!

On 8/12/25 4:56 PM, Peter Zijlstra wrote:
> On Tue, Aug 12, 2025 at 01:27:22AM +0000, Yunseong Kim wrote:
>> Calling pmu->start()/stop() on events in PERF_EVENT_STATE_OFF can leave
>> event->hw.idx at -1, which may lead to UBSAN shift-out-of-bounds reports
>> when the PMU code later shifts by a negative exponent.
> 
> Yeah, but how do we get there? I suppose there is a race somewhere?
> Please describe.

It appears to be an issue in how event groups handle throttling when some
members are intentionally disabled, rather than a race condition.

Based on the analysis and the reproducer provided by Mark Rutland (the
issue reproduces on both arm64 and x86-64), the scenario unfolds as
follows:

 1. A group leader event is configured with a very aggressive sampling
    period (e.g., sample_period = 1; see the sketch after this list).
    This causes frequent interrupts and triggers the throttling
    mechanism.

 2. A child event in the same group is created in a disabled state
    (.disabled = 1). This event remains in PERF_EVENT_STATE_OFF. Since it
    hasn't been scheduled onto the PMU, its event->hw.idx remains
    initialized at -1.

 3. When throttling occurs, perf_event_throttle_group() and later
    perf_event_unthrottle_group() iterate over all siblings, including
    the disabled child event.

 4. perf_event_throttle()/unthrottle() are called on this inactive child
    event, which in turn call event->pmu->stop()/start().

 5. The PMU driver receives the event with hw.idx == -1 and attempts to use
    it as a shift exponent, leading to the UBSAN report.
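
For reference, the shape of the trigger can be sketched in plain
perf_event_open() terms. This is only an illustrative sketch, not the
exact syzkaller reproducer linked above; the event type/config and the
busy loop are arbitrary choices:

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct perf_event_attr leader = {
			.type          = PERF_TYPE_HARDWARE,
			.size          = sizeof(leader),
			.config        = PERF_COUNT_HW_CPU_CYCLES,
			/* overflow on every event -> throttle quickly */
			.sample_period = 1,
		};
		struct perf_event_attr sibling = leader;
		int lfd, sfd;

		sibling.sample_period = 0;
		/* never enabled: stays in PERF_EVENT_STATE_OFF,
		 * so hw.idx keeps its initial value of -1 */
		sibling.disabled = 1;

		lfd = syscall(__NR_perf_event_open, &leader, 0, -1, -1, 0);
		sfd = syscall(__NR_perf_event_open, &sibling, 0, -1, lfd, 0);
		if (lfd < 0 || sfd < 0)
			return 1;

		for (;;)	/* spin until the leader gets throttled */
			;
	}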

The core issue is that the throttling mechanism attempts to start/stop
events that are not actively scheduled on the hardware.
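
Concretely, PMU drivers commonly turn hw.idx into a counter bit mask
with a shift. The snippet below is a hypothetical illustration of that
pattern (not taken from any particular driver; write_pmu_enable_reg()
is made up):

	/* Illustrative only -- not from a specific driver. */
	static void pmu_enable_counter(struct hw_perf_event *hwc)
	{
		u64 mask = 1ULL << hwc->idx;	/* idx == -1: shift by a
						 * negative exponent, UB */

		write_pmu_enable_reg(mask);	/* hypothetical MMIO write */
	}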

>> Move the state check into perf_event_throttle()/perf_event_unthrottle() so
>> that inactive events are skipped entirely. This ensures only active events
>> with a valid hw.idx are processed, preventing undefined behavior and
>> silencing UBSAN warnings.
>> The problem can be reproduced with the syzkaller reproducer:
>> Link: https://lore.kernel.org/lkml/714b7ba2-693e-42e4-bce4-feef2a5e7613@kzalloc.com/
>>
>> Fixes: 9734e25fbf5a ("perf: Fix the throttle logic for a group")
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Yunseong Kim <ysk@kzalloc.com>
>> ---
>>  kernel/events/core.c | 6 ++++++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 8060c2857bb2..c9322029a8ae 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -2665,6 +2665,9 @@ static void perf_log_itrace_start(struct perf_event *event);
>>  
>>  static void perf_event_unthrottle(struct perf_event *event, bool start)
>>  {
>> +	if (event->state <= PERF_EVENT_STATE_OFF)
>> +		return;
> 
> This seems wrong. We should only {,un}throttle ACTIVE events, no?

You are absolutely correct. Throttling should only manage events that are
actively running on the hardware. My proposed check <= PERF_EVENT_STATE_OFF
was too permissive: it still let PERF_EVENT_STATE_INACTIVE events through.
As you pointed out, we must ensure we only attempt to throttle or
unthrottle events that are currently active.

I'm going to make this change and test it before sending the next patch:

+	if (event->state != PERF_EVENT_STATE_ACTIVE)
+		return;
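
With that check in place, the two helpers would look roughly as below.
This assumes the parts of the function bodies not visible in the hunks
above (the perf_log_throttle() calls on the group leader) are unchanged:

	static void perf_event_unthrottle(struct perf_event *event, bool start)
	{
		if (event->state != PERF_EVENT_STATE_ACTIVE)
			return;

		event->hw.interrupts = 0;
		if (start)
			event->pmu->start(event, 0);
		if (event == event->group_leader)
			perf_log_throttle(event, 1);
	}

	static void perf_event_throttle(struct perf_event *event)
	{
		if (event->state != PERF_EVENT_STATE_ACTIVE)
			return;

		event->hw.interrupts = MAX_INTERRUPTS;
		event->pmu->stop(event, 0);
		if (event == event->group_leader)
			perf_log_throttle(event, 0);
	}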

>>  	event->hw.interrupts = 0;
>>  	if (start)
>>  		event->pmu->start(event, 0);
>> @@ -2674,6 +2677,9 @@ static void perf_event_unthrottle(struct perf_event *event, bool start)
>>  
>>  static void perf_event_throttle(struct perf_event *event)
>>  {
>> +	if (event->state <= PERF_EVENT_STATE_OFF)
>> +		return;
>> +
>>  	event->hw.interrupts = MAX_INTERRUPTS;
>>  	event->pmu->stop(event, 0);
>>  	if (event == event->group_leader)
>> -- 
>> 2.50.0
>>

I will send v3 shortly with this correction and an updated commit message
detailing the root cause analysis above.

Thanks,
Yunseong Kim