[PATCH v2] sched: add READ_ONCE to task_on_rq_queued
Posted by Jon Kohler 1 week, 1 day ago
From: Harshit Agarwal <harshit@nutanix.com>

task_on_rq_queued() reads p->on_rq without READ_ONCE(), even though
p->on_rq is set with WRITE_ONCE() in {activate|deactivate}_task() and
with smp_store_release() in __block_task(), and is also read with
READ_ONCE() in task_on_rq_migrating().

Make all of these accesses pair up by adding READ_ONCE() in
task_on_rq_queued().

Signed-off-by: Harshit Agarwal <harshit@nutanix.com>
Reviewed-by: Phil Auld <pauld@redhat.com>
Cc: Jon Kohler <jon@nutanix.com>
---
 kernel/sched/sched.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c03b3d7b320e..dbbe5ce0dd96 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2277,7 +2277,7 @@ static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
 
 static inline int task_on_rq_queued(struct task_struct *p)
 {
-	return p->on_rq == TASK_ON_RQ_QUEUED;
+	return READ_ONCE(p->on_rq) == TASK_ON_RQ_QUEUED;
 }
 
 static inline int task_on_rq_migrating(struct task_struct *p)
-- 
2.43.0
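
The pairing the changelog describes looks roughly like the sketch below
(simplified and untested, reconstructed from the accessors named above
rather than copied verbatim from the kernel sources): the writers store to
p->on_rq with marked accesses, so every lockless reader should load it with
a marked access too, which keeps the compiler from tearing, fusing or
re-reading the load.

	/*
	 * Sketch of the ->on_rq access pairing described in the changelog
	 * (illustrative only, not the verbatim kernel code).
	 *
	 * Writers use marked stores:
	 *   {activate|deactivate}_task():  WRITE_ONCE(p->on_rq, ...);
	 *   __block_task():                smp_store_release(&p->on_rq, 0);
	 *
	 * Lockless readers pair with them via marked loads:
	 */
	static inline int task_on_rq_migrating(struct task_struct *p)
	{
		return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;	/* already marked */
	}

	static inline int task_on_rq_queued(struct task_struct *p)
	{
		return READ_ONCE(p->on_rq) == TASK_ON_RQ_QUEUED;	/* marked by this patch */
	}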
Re: [PATCH v2] sched: add READ_ONCE to task_on_rq_queued
Posted by Peter Zijlstra 1 week ago
On Thu, Nov 14, 2024 at 02:08:11PM -0700, Jon Kohler wrote:
> From: Harshit Agarwal <harshit@nutanix.com>
> 
> task_on_rq_queued() reads p->on_rq without READ_ONCE(), even though
> p->on_rq is set with WRITE_ONCE() in {activate|deactivate}_task() and
> with smp_store_release() in __block_task(), and is also read with
> READ_ONCE() in task_on_rq_migrating().
> 
> Make all of these accesses pair up by adding READ_ONCE() in
> task_on_rq_queued().
> 
> Signed-off-by: Harshit Agarwal <harshit@nutanix.com>
> Reviewed-by: Phil Auld <pauld@redhat.com>
> Cc: Jon Kohler <jon@nutanix.com>
> ---
>  kernel/sched/sched.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index c03b3d7b320e..dbbe5ce0dd96 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2277,7 +2277,7 @@ static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
>  
>  static inline int task_on_rq_queued(struct task_struct *p)
>  {
> -	return p->on_rq == TASK_ON_RQ_QUEUED;
> +	return READ_ONCE(p->on_rq) == TASK_ON_RQ_QUEUED;
>  }

I think that, strictly speaking, we don't need it here: *IF* we see the
ON_RQ_QUEUED value, it must be stable.

But yeah, this is probably easier to reason about.

If you've got time, it might be worth trying something like:

	if (READ_ONCE(p->on_rq) == TASK_ON_RQ_QUEUED) {
		/*
		 * If we observe ON_RQ_QUEUED, it should be stable. IOW
		 * there should be no concurrent writes at this point.
		 */
		ASSERT_EXCLUSIVE_WRITER(p->on_rq);
		return true;
	}
	return false;
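
Folded into the existing helper, that suggestion would look roughly like the
sketch below (untested, assembled from the snippet above and the function as
shown in the diff). ASSERT_EXCLUSIVE_WRITER() is the KCSAN assertion that
reports any concurrent writer to p->on_rq while the access is being watched,
and it compiles to nothing when CONFIG_KCSAN is off:

	static inline int task_on_rq_queued(struct task_struct *p)
	{
		if (READ_ONCE(p->on_rq) == TASK_ON_RQ_QUEUED) {
			/*
			 * If we observe TASK_ON_RQ_QUEUED, it should be
			 * stable, i.e. there should be no concurrent writes
			 * at this point; let KCSAN check that assumption.
			 */
			ASSERT_EXCLUSIVE_WRITER(p->on_rq);
			return true;
		}
		return false;
	}

That keeps the plain READ_ONCE() fix while turning the "must be stable"
reasoning into an assertion that KCSAN-enabled debug builds can actually
check at runtime.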