[PATCH v3 3/7] perf: Simplify perf_event_free_task() wait

Peter Zijlstra posted 7 patches 9 months, 1 week ago
Posted by Peter Zijlstra 9 months, 1 week ago
Simplify the code by moving the duplicated wakeup condition into
put_ctx().

Notably, wait_var_event() is in perf_event_free_task() and will have
set ctx->task = TASK_TOMBSTONE.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/events/core.c |   32 ++++++++------------------------
 1 file changed, 8 insertions(+), 24 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1223,8 +1223,14 @@ static void put_ctx(struct perf_event_context *ctx)
 	if (refcount_dec_and_test(&ctx->refcount)) {
 		if (ctx->parent_ctx)
 			put_ctx(ctx->parent_ctx);
-		if (ctx->task && ctx->task != TASK_TOMBSTONE)
-			put_task_struct(ctx->task);
+		if (ctx->task) {
+			if (ctx->task == TASK_TOMBSTONE) {
+				smp_mb(); /* pairs with wait_var_event() */
+				wake_up_var(&ctx->refcount);
+			} else {
+				put_task_struct(ctx->task);
+			}
+		}
 		call_rcu(&ctx->rcu_head, free_ctx);
 	}
 }
@@ -5492,8 +5498,6 @@ int perf_event_release_kernel(struct perf_event *event)
 again:
 	mutex_lock(&event->child_mutex);
 	list_for_each_entry(child, &event->child_list, child_list) {
-		void *var = NULL;
-
 		/*
 		 * Cannot change, child events are not migrated, see the
 		 * comment with perf_event_ctx_lock_nested().
@@ -5533,39 +5537,19 @@ int perf_event_release_kernel(struct perf_event *event)
 			 * this can't be the last reference.
 			 */
 			put_event(event);
-		} else {
-			var = &ctx->refcount;
 		}
 
 		mutex_unlock(&event->child_mutex);
 		mutex_unlock(&ctx->mutex);
 		put_ctx(ctx);
 
-		if (var) {
-			/*
-			 * If perf_event_free_task() has deleted all events from the
-			 * ctx while the child_mutex got released above, make sure to
-			 * notify about the preceding put_ctx().
-			 */
-			smp_mb(); /* pairs with wait_var_event() */
-			wake_up_var(var);
-		}
 		goto again;
 	}
 	mutex_unlock(&event->child_mutex);
 
 	list_for_each_entry_safe(child, tmp, &free_list, child_list) {
-		void *var = &child->ctx->refcount;
-
 		list_del(&child->child_list);
 		free_event(child);
-
-		/*
-		 * Wake any perf_event_free_task() waiting for this event to be
-		 * freed.
-		 */
-		smp_mb(); /* pairs with wait_var_event() */
-		wake_up_var(var);
 	}
 
 no_ctx:
Re: [PATCH v3 3/7] perf: Simplify perf_event_free_task() wait
Posted by Ravi Bangoria 9 months ago
Hi Peter,

> @@ -1223,8 +1223,14 @@ static void put_ctx(struct perf_event_context *ctx)
>  	if (refcount_dec_and_test(&ctx->refcount)) {
>  		if (ctx->parent_ctx)
>  			put_ctx(ctx->parent_ctx);
> -		if (ctx->task && ctx->task != TASK_TOMBSTONE)
> -			put_task_struct(ctx->task);
> +		if (ctx->task) {
> +			if (ctx->task == TASK_TOMBSTONE) {
> +				smp_mb(); /* pairs with wait_var_event() */
> +				wake_up_var(&ctx->refcount);

perf_event_free_task() waits for "ctx->refcount == 1". But moving
wake_up_var() under refcount_dec_and_test() means the wakeup only fires
once the refcount hits zero, so perf_event_free_task() will wait
indefinitely, right? So shouldn't wake_up_var() be outside? Something like:

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1281,15 +1281,14 @@ static void put_ctx(struct perf_event_context *ctx)
 	if (refcount_dec_and_test(&ctx->refcount)) {
 		if (ctx->parent_ctx)
 			put_ctx(ctx->parent_ctx);
-		if (ctx->task) {
-			if (ctx->task == TASK_TOMBSTONE) {
-				smp_mb(); /* pairs with wait_var_event() */
-				wake_up_var(&ctx->refcount);
-			} else {
-				put_task_struct(ctx->task);
-			}
-		}
+		if (ctx->task && ctx->task != TASK_TOMBSTONE)
+			put_task_struct(ctx->task);
 		call_rcu(&ctx->rcu_head, free_ctx);
+	} else {
+		if (ctx->task == TASK_TOMBSTONE) {
+			smp_mb(); /* pairs with wait_var_event() */
+			wake_up_var(&ctx->refcount);
+		}
 	}
 }

Thanks,
Ravi
Re: [PATCH v3 3/7] perf: Simplify perf_event_free_task() wait
Posted by Peter Zijlstra 8 months, 2 weeks ago
On Mon, Mar 17, 2025 at 12:19:07PM +0530, Ravi Bangoria wrote:
> Hi Peter,
> 
> [...]
> 
> perf_event_free_task() waits for "ctx->refcount == 1". But moving
> wake_up_var() under refcount_dec_and_test() means the wakeup only fires
> once the refcount hits zero, so perf_event_free_task() will wait
> indefinitely, right? So shouldn't wake_up_var() be outside? Something like:
> 
> [...]

Yes, you're quite right indeed. Thanks!
[tip: perf/core] perf: Simplify perf_event_free_task() wait
Posted by tip-bot2 for Peter Zijlstra 8 months, 1 week ago
The following commit has been merged into the perf/core branch of tip:

Commit-ID:     59f3aa4a3ee27e96132e16d2d2bdc3acadb4bf79
Gitweb:        https://git.kernel.org/tip/59f3aa4a3ee27e96132e16d2d2bdc3acadb4bf79
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Fri, 17 Jan 2025 15:27:07 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 08 Apr 2025 20:55:46 +02:00

perf: Simplify perf_event_free_task() wait

Simplify the code by moving the duplicated wakeup condition into
put_ctx().

Notably, wait_var_event() is in perf_event_free_task() and will have
set ctx->task = TASK_TOMBSTONE.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ravi Bangoria <ravi.bangoria@amd.com>
Link: https://lkml.kernel.org/r/20250307193723.044499344@infradead.org
---
 kernel/events/core.c | 25 +++----------------------
 1 file changed, 3 insertions(+), 22 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 3c92b75..fa6dab0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1270,6 +1270,9 @@ static void put_ctx(struct perf_event_context *ctx)
 		if (ctx->task && ctx->task != TASK_TOMBSTONE)
 			put_task_struct(ctx->task);
 		call_rcu(&ctx->rcu_head, free_ctx);
+	} else if (ctx->task == TASK_TOMBSTONE) {
+		smp_mb(); /* pairs with wait_var_event() */
+		wake_up_var(&ctx->refcount);
 	}
 }
 
@@ -5729,8 +5732,6 @@ int perf_event_release_kernel(struct perf_event *event)
 again:
 	mutex_lock(&event->child_mutex);
 	list_for_each_entry(child, &event->child_list, child_list) {
-		void *var = NULL;
-
 		/*
 		 * Cannot change, child events are not migrated, see the
 		 * comment with perf_event_ctx_lock_nested().
@@ -5765,40 +5766,20 @@ again:
 		if (tmp == child) {
 			perf_remove_from_context(child, DETACH_GROUP | DETACH_CHILD);
 			list_add(&child->child_list, &free_list);
-		} else {
-			var = &ctx->refcount;
 		}
 
 		mutex_unlock(&event->child_mutex);
 		mutex_unlock(&ctx->mutex);
 		put_ctx(ctx);
 
-		if (var) {
-			/*
-			 * If perf_event_free_task() has deleted all events from the
-			 * ctx while the child_mutex got released above, make sure to
-			 * notify about the preceding put_ctx().
-			 */
-			smp_mb(); /* pairs with wait_var_event() */
-			wake_up_var(var);
-		}
 		goto again;
 	}
 	mutex_unlock(&event->child_mutex);
 
 	list_for_each_entry_safe(child, tmp, &free_list, child_list) {
-		void *var = &child->ctx->refcount;
-
 		list_del(&child->child_list);
 		/* Last reference unless ->pending_task work is pending */
 		put_event(child);
-
-		/*
-		 * Wake any perf_event_free_task() waiting for this event to be
-		 * freed.
-		 */
-		smp_mb(); /* pairs with wait_var_event() */
-		wake_up_var(var);
 	}
 
 no_ctx:
Re: [tip: perf/core] perf: Simplify perf_event_free_task() wait
Posted by Frederic Weisbecker 8 months, 1 week ago
On Tue, Apr 08, 2025 at 07:05:04PM -0000, tip-bot2 for Peter Zijlstra wrote:
> The following commit has been merged into the perf/core branch of tip:
> 
> [...]
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 3c92b75..fa6dab0 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -1270,6 +1270,9 @@ static void put_ctx(struct perf_event_context *ctx)
>  		if (ctx->task && ctx->task != TASK_TOMBSTONE)
>  			put_task_struct(ctx->task);
>  		call_rcu(&ctx->rcu_head, free_ctx);
> +	} else if (ctx->task == TASK_TOMBSTONE) {
> +		smp_mb(); /* pairs with wait_var_event() */
> +		wake_up_var(&ctx->refcount);

So there are three situations:

* If perf_event_free_task() has removed all the children from the parent list
  before perf_event_release_kernel() got a chance to even iterate them, then
  it's all good as there is no get_ctx() pending.

* If perf_event_release_kernel() iterates a child event, but it gets freed
  meanwhile by perf_event_free_task() while the mutexes are temporarily
  unlocked, it's all good because while locking again the ctx mutex,
  perf_event_release_kernel() observes TASK_TOMBSTONE.

* But if perf_event_release_kernel() frees the child event before
  perf_event_free_task() got a chance, we may face this scenario:

    perf_event_release_kernel()                                  perf_event_free_task()
    --------------------------                                   ------------------------
    mutex_lock(&event->child_mutex)
    get_ctx(child->ctx)
    mutex_unlock(&event->child_mutex)

    mutex_lock(ctx->mutex)
    mutex_lock(&event->child_mutex)
    perf_remove_from_context(child)
    mutex_unlock(&event->child_mutex)
    mutex_unlock(ctx->mutex)

                                                                 // This lock acquires ctx->refcount == 2
                                                                 // visibility
                                                                 mutex_lock(ctx->mutex)
                                                                 ctx->task = TASK_TOMBSTONE
                                                                 mutex_unlock(ctx->mutex)
                                                                 
                                                                 wait_var_event()
                                                                     // enters prepare_to_wait() since
                                                                     // ctx->refcount == 2
                                                                     // is guaranteed to be seen
                                                                     set_current_state(TASK_INTERRUPTIBLE)
                                                                     smp_mb()
                                                                     if (ctx->refcount != 1)
                                                                         schedule()
    put_ctx()
       // NOT fully ordered! Only RELEASE semantics
       refcount_dec_and_test()
           atomic_fetch_sub_release()
       // So TASK_TOMBSTONE is not guaranteed to be seen
       if (ctx->task == TASK_TOMBSTONE)
           wake_up_var()

Basically it's a broken store buffer:

    perf_event_release_kernel()                                  perf_event_free_task()
    --------------------------                                   ------------------------
    ctx->task = TASK_TOMBSTONE                                   smp_store_release(&ctx->refcount, ctx->refcount - 1)
    smp_mb()
    READ_ONCE(ctx->refcount)                                     READ_ONCE(ctx->task)


So you need this:

diff --git a/kernel/events/core.c b/kernel/events/core.c
index fa6dab08be47..c4fbbe25361a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1270,9 +1270,10 @@ static void put_ctx(struct perf_event_context *ctx)
 		if (ctx->task && ctx->task != TASK_TOMBSTONE)
 			put_task_struct(ctx->task);
 		call_rcu(&ctx->rcu_head, free_ctx);
-	} else if (ctx->task == TASK_TOMBSTONE) {
+	} else {
 		smp_mb(); /* pairs with wait_var_event() */
-		wake_up_var(&ctx->refcount);
+		if (ctx->task == TASK_TOMBSTONE)
+			wake_up_var(&ctx->refcount);
 	}
 }
 


-- 
Frederic Weisbecker
SUSE Labs
Re: [tip: perf/core] perf: Simplify perf_event_free_task() wait
Posted by Ingo Molnar 8 months ago
* Frederic Weisbecker <frederic@kernel.org> wrote:

> [...]

JFYI, I've added your SOB:

    Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Thanks,

	Ingo
Re: [tip: perf/core] perf: Simplify perf_event_free_task() wait
Posted by Peter Zijlstra 8 months, 1 week ago
On Wed, Apr 09, 2025 at 03:01:12PM +0200, Frederic Weisbecker wrote:

> [...]
> So you need this:
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index fa6dab08be47..c4fbbe25361a 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -1270,9 +1270,10 @@ static void put_ctx(struct perf_event_context *ctx)
>  		if (ctx->task && ctx->task != TASK_TOMBSTONE)
>  			put_task_struct(ctx->task);
>  		call_rcu(&ctx->rcu_head, free_ctx);
> -	} else if (ctx->task == TASK_TOMBSTONE) {
> +	} else {
>  		smp_mb(); /* pairs with wait_var_event() */
> -		wake_up_var(&ctx->refcount);
> +		if (ctx->task == TASK_TOMBSTONE)
> +			wake_up_var(&ctx->refcount);
>  	}
>  }

Very good, thanks!

I'll make that smp_mb__after_atomic() instead, but yes, this barrier
needs to move before the loading of ctx->task.

I'll transform this into a patch and stuff on top.
Re: [tip: perf/core] perf: Simplify perf_event_free_task() wait
Posted by Frederic Weisbecker 8 months, 1 week ago
On Thu, Apr 10, 2025 at 11:34:56AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 09, 2025 at 03:01:12PM +0200, Frederic Weisbecker wrote:
> 
> > [...]
> 
> Very good, thanks!
> 
> I'll make that smp_mb__after_atomic() instead, but yes, this barrier
> needs to move before the loading of ctx->task.
> 
> I'll transform this into a patch and stuff on top.

Sure! Or feel free to fold it, though that would imply a rebase...

Thanks.

-- 
Frederic Weisbecker
SUSE Labs
[tip: perf/core] perf/core: Fix put_ctx() ordering
Posted by tip-bot2 for Frederic Weisbecker 8 months ago
The following commit has been merged into the perf/core branch of tip:

Commit-ID:     2839f393c69456bc356738e521b2e70b82977f46
Gitweb:        https://git.kernel.org/tip/2839f393c69456bc356738e521b2e70b82977f46
Author:        Frederic Weisbecker <frederic@kernel.org>
AuthorDate:    Wed, 09 Apr 2025 15:01:12 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 17 Apr 2025 14:21:15 +02:00

perf/core: Fix put_ctx() ordering

So there are three situations:

* If perf_event_free_task() has removed all the children from the parent list
  before perf_event_release_kernel() got a chance to even iterate them, then
  it's all good as there is no get_ctx() pending.

* If perf_event_release_kernel() iterates a child event, but it gets freed
  meanwhile by perf_event_free_task() while the mutexes are temporarily
  unlocked, it's all good because while locking again the ctx mutex,
  perf_event_release_kernel() observes TASK_TOMBSTONE.

* But if perf_event_release_kernel() frees the child event before
  perf_event_free_task() got a chance, we may face this scenario:

    perf_event_release_kernel()                                  perf_event_free_task()
    --------------------------                                   ------------------------
    mutex_lock(&event->child_mutex)
    get_ctx(child->ctx)
    mutex_unlock(&event->child_mutex)

    mutex_lock(ctx->mutex)
    mutex_lock(&event->child_mutex)
    perf_remove_from_context(child)
    mutex_unlock(&event->child_mutex)
    mutex_unlock(ctx->mutex)

                                                                 // This lock acquires ctx->refcount == 2
                                                                 // visibility
                                                                 mutex_lock(ctx->mutex)
                                                                 ctx->task = TASK_TOMBSTONE
                                                                 mutex_unlock(ctx->mutex)

                                                                 wait_var_event()
                                                                     // enters prepare_to_wait() since
                                                                     // ctx->refcount == 2
                                                                     // is guaranteed to be seen
                                                                     set_current_state(TASK_INTERRUPTIBLE)
                                                                     smp_mb()
                                                                     if (ctx->refcount != 1)
                                                                         schedule()
    put_ctx()
       // NOT fully ordered! Only RELEASE semantics
       refcount_dec_and_test()
           atomic_fetch_sub_release()
       // So TASK_TOMBSTONE is not guaranteed to be seen
       if (ctx->task == TASK_TOMBSTONE)
           wake_up_var()

Basically it's a broken store buffer:

    perf_event_release_kernel()                                  perf_event_free_task()
    --------------------------                                   ------------------------
    ctx->task = TASK_TOMBSTONE                                   smp_store_release(&ctx->refcount, ctx->refcount - 1)
    smp_mb()
    READ_ONCE(ctx->refcount)                                     READ_ONCE(ctx->task)

So we need a smp_mb__after_atomic() before looking at ctx->task.

Fixes: 59f3aa4a3ee2 ("perf: Simplify perf_event_free_task() wait")
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/Z_ZvmEhjkAhplCBE@localhost.localdomain
---
 kernel/events/core.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index e4d7a0c..1a19df9 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1271,9 +1271,10 @@ static void put_ctx(struct perf_event_context *ctx)
 		if (ctx->task && ctx->task != TASK_TOMBSTONE)
 			put_task_struct(ctx->task);
 		call_rcu(&ctx->rcu_head, free_ctx);
-	} else if (ctx->task == TASK_TOMBSTONE) {
-		smp_mb(); /* pairs with wait_var_event() */
-		wake_up_var(&ctx->refcount);
+	} else {
+		smp_mb__after_atomic(); /* pairs with wait_var_event() */
+		if (ctx->task == TASK_TOMBSTONE)
+			wake_up_var(&ctx->refcount);
 	}
 }