From: Waiman Long
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Lai Jiangshan,
	Waiman Long
Subject: [PATCH v8 4/7] sched: Introduce affinity_context structure
Date: Thu, 8 Sep 2022 15:41:18 -0400
Message-Id: <20220908194121.858462-5-longman@redhat.com>
In-Reply-To: <20220908194121.858462-1-longman@redhat.com>
References: <20220908194121.858462-1-longman@redhat.com>

Introduce a new affinity_context structure for passing cpu affinity
information around in core scheduler code. The relevant functions are
modified to use the new structure. There is no functional change.
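The conversion is mechanical: every (mask, flags) argument pair collapses
into a single struct affinity_context passed by pointer. As a rough
standalone sketch of the pattern (illustrative only, not kernel code:
the plain-bitmask stand-in for struct cpumask, the flag values, and the
set_cpus_allowed_sketch() helper are all invented here):

/*
 * Userspace sketch of the affinity_context pattern. A plain unsigned
 * long stands in for struct cpumask; see kernel/sched/sched.h for the
 * real types and SCA_* definitions.
 */
#include <stdio.h>

#define SCA_CHECK		0x01
#define SCA_MIGRATE_DISABLE	0x02
#define SCA_MIGRATE_ENABLE	0x04
#define SCA_USER		0x08

struct affinity_context {
	const unsigned long *new_mask;	/* requested cpu mask */
	unsigned int flags;		/* SCA_* modifiers */
};

/* One context pointer replaces the old (mask, flags) argument pair. */
static void set_cpus_allowed_sketch(struct affinity_context *ctx)
{
	if (ctx->flags & (SCA_MIGRATE_ENABLE | SCA_MIGRATE_DISABLE))
		printf("migrate path, mask %#lx\n", *ctx->new_mask);
	else
		printf("common path, mask %#lx\n", *ctx->new_mask);
}

int main(void)
{
	unsigned long mask = 0x3;	/* cpus 0-1 */
	struct affinity_context ac = {
		.new_mask = &mask,
		.flags	  = SCA_USER,
	};

	set_cpus_allowed_sketch(&ac);
	return 0;
}

Callers build the context on the stack, exactly as the hunks below do.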
Suggested-by: Peter Zijlstra
Signed-off-by: Waiman Long
Reported-by: kernel test robot
---
 kernel/sched/core.c     | 114 ++++++++++++++++++++++++++--------------
 kernel/sched/deadline.c |   7 ++-
 kernel/sched/sched.h    |  11 ++--
 3 files changed, 86 insertions(+), 46 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 84544daf3839..b43b851c0399 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2195,14 +2195,18 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
 #ifdef CONFIG_SMP
 
 static void
-__do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32 flags);
+__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx);
 
 static int __set_cpus_allowed_ptr(struct task_struct *p,
-				  const struct cpumask *new_mask,
-				  u32 flags);
+				  struct affinity_context *ctx);
 
 static void migrate_disable_switch(struct rq *rq, struct task_struct *p)
 {
+	struct affinity_context ac = {
+		.new_mask  = cpumask_of(rq->cpu),
+		.flags     = SCA_MIGRATE_DISABLE,
+	};
+
 	if (likely(!p->migration_disabled))
 		return;
 
@@ -2212,7 +2216,7 @@ static void migrate_disable_switch(struct rq *rq, struct task_struct *p)
 	/*
 	 * Violates locking rules! see comment in __do_set_cpus_allowed().
 	 */
-	__do_set_cpus_allowed(p, cpumask_of(rq->cpu), SCA_MIGRATE_DISABLE);
+	__do_set_cpus_allowed(p, &ac);
 }
 
 void migrate_disable(void)
@@ -2234,6 +2238,10 @@ EXPORT_SYMBOL_GPL(migrate_disable);
 void migrate_enable(void)
 {
 	struct task_struct *p = current;
+	struct affinity_context ac = {
+		.new_mask  = &p->cpus_mask,
+		.flags     = SCA_MIGRATE_ENABLE,
+	};
 
 	if (p->migration_disabled > 1) {
 		p->migration_disabled--;
@@ -2249,7 +2257,7 @@ void migrate_enable(void)
 	 */
 	preempt_disable();
 	if (p->cpus_ptr != &p->cpus_mask)
-		__set_cpus_allowed_ptr(p, &p->cpus_mask, SCA_MIGRATE_ENABLE);
+		__set_cpus_allowed_ptr(p, &ac);
 	/*
 	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
 	 * regular cpus_mask, otherwise things that race (eg.
@@ -2529,19 +2537,19 @@ int push_cpu_stop(void *arg)
  * sched_class::set_cpus_allowed must do the below, but is not required to
  * actually call this function.
  */
-void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask, u32 flags)
+void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
 {
-	if (flags & (SCA_MIGRATE_ENABLE | SCA_MIGRATE_DISABLE)) {
-		p->cpus_ptr = new_mask;
+	if (ctx->flags & (SCA_MIGRATE_ENABLE | SCA_MIGRATE_DISABLE)) {
+		p->cpus_ptr = ctx->new_mask;
 		return;
 	}
 
-	cpumask_copy(&p->cpus_mask, new_mask);
-	p->nr_cpus_allowed = cpumask_weight(new_mask);
+	cpumask_copy(&p->cpus_mask, ctx->new_mask);
+	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
 }
 
 static void
-__do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32 flags)
+__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 {
 	struct rq *rq = task_rq(p);
 	bool queued, running;
@@ -2558,7 +2566,7 @@ __do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32
 	 *
 	 * XXX do further audits, this smells like something putrid.
 	 */
-	if (flags & SCA_MIGRATE_DISABLE)
+	if (ctx->flags & SCA_MIGRATE_DISABLE)
 		SCHED_WARN_ON(!p->on_cpu);
 	else
 		lockdep_assert_held(&p->pi_lock);
@@ -2577,7 +2585,7 @@ __do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32
 	if (running)
 		put_prev_task(rq, p);
 
-	p->sched_class->set_cpus_allowed(p, new_mask, flags);
+	p->sched_class->set_cpus_allowed(p, ctx);
 
 	if (queued)
 		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);
@@ -2587,7 +2595,12 @@ __do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32
 
 void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 {
-	__do_set_cpus_allowed(p, new_mask, 0);
+	struct affinity_context ac = {
+		.new_mask  = new_mask,
+		.flags     = 0,
+	};
+
+	__do_set_cpus_allowed(p, &ac);
 }
 
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
@@ -2840,8 +2853,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
  * Called with both p->pi_lock and rq->lock held; drops both before returning.
  */
 static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
-					 const struct cpumask *new_mask,
-					 u32 flags,
+					 struct affinity_context *ctx,
 					 struct rq *rq,
 					 struct rq_flags *rf)
 	__releases(rq->lock)
@@ -2869,7 +2881,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 		cpu_valid_mask = cpu_online_mask;
 	}
 
-	if (!kthread && !cpumask_subset(new_mask, cpu_allowed_mask)) {
+	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -2878,18 +2890,18 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	 * Must re-check here, to close a race against __kthread_bind(),
 	 * sched_setaffinity() is not guaranteed to observe the flag.
 	 */
-	if ((flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
+	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
 		ret = -EINVAL;
 		goto out;
 	}
 
-	if (!(flags & SCA_MIGRATE_ENABLE)) {
-		if (cpumask_equal(&p->cpus_mask, new_mask))
+	if (!(ctx->flags & SCA_MIGRATE_ENABLE)) {
+		if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
 			goto out;
 
 		if (WARN_ON_ONCE(p == current &&
 				 is_migration_disabled(p) &&
-				 !cpumask_test_cpu(task_cpu(p), new_mask))) {
+				 !cpumask_test_cpu(task_cpu(p), ctx->new_mask))) {
 			ret = -EBUSY;
 			goto out;
 		}
@@ -2900,15 +2912,15 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	 * for groups of tasks (ie. cpuset), so that load balancing is not
 	 * immediately required to distribute the tasks within their new mask.
 	 */
-	dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, new_mask);
+	dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, ctx->new_mask);
 	if (dest_cpu >= nr_cpu_ids) {
 		ret = -EINVAL;
 		goto out;
 	}
 
-	__do_set_cpus_allowed(p, new_mask, flags);
+	__do_set_cpus_allowed(p, ctx);
 
-	return affine_move_task(rq, p, rf, dest_cpu, flags);
+	return affine_move_task(rq, p, rf, dest_cpu, ctx->flags);
 
 out:
 	task_rq_unlock(rq, p, rf);
@@ -2926,22 +2938,27 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
  * call is not atomic; no spinlocks may be held.
  */
 static int __set_cpus_allowed_ptr(struct task_struct *p,
-				  const struct cpumask *new_mask, u32 flags)
+				  struct affinity_context *ctx)
 {
 	struct rq_flags rf;
 	struct rq *rq;
 
 	rq = task_rq_lock(p, &rf);
-	if (p->user_cpus_ptr && !(flags & SCA_USER) &&
-	    cpumask_and(rq->scratch_mask, new_mask, p->user_cpus_ptr))
-		new_mask = rq->scratch_mask;
+	if (p->user_cpus_ptr && !(ctx->flags & SCA_USER) &&
+	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
+		ctx->new_mask = rq->scratch_mask;
 
-	return __set_cpus_allowed_ptr_locked(p, new_mask, flags, rq, &rf);
+	return __set_cpus_allowed_ptr_locked(p, ctx, rq, &rf);
 }
 
 int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
 {
-	return __set_cpus_allowed_ptr(p, new_mask, 0);
+	struct affinity_context ac = {
+		.new_mask  = new_mask,
+		.flags     = 0,
+	};
+
+	return __set_cpus_allowed_ptr(p, &ac);
 }
 EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
 
@@ -2958,6 +2975,10 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 				     struct cpumask *new_mask,
 				     const struct cpumask *subset_mask)
 {
+	struct affinity_context ac = {
+		.new_mask  = new_mask,
+		.flags     = 0,
+	};
 	struct rq_flags rf;
 	struct rq *rq;
 	int err;
@@ -2979,7 +3000,7 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 		goto err_unlock;
 	}
 
-	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
+	return __set_cpus_allowed_ptr_locked(p, &ac, rq, &rf);
 
 err_unlock:
 	task_rq_unlock(rq, p, &rf);
@@ -3032,7 +3053,7 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 }
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags);
+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
@@ -3043,13 +3064,17 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags
  */
 void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 {
+	struct affinity_context ac = {
+		.new_mask  = task_user_cpus(p),
+		.flags     = 0,
+	};
 	int ret;
 
 	/*
 	 * Try to restore the old affinity mask with __sched_setaffinity().
 	 * Cpuset masking will be done there too.
 	 */
-	ret = __sched_setaffinity(p, task_user_cpus(p), 0);
+	ret = __sched_setaffinity(p, &ac);
 	WARN_ON_ONCE(ret);
 }
 
@@ -8053,7 +8078,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 #endif
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags)
+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
 {
 	int retval;
 	cpumask_var_t cpus_allowed, new_mask;
@@ -8067,13 +8092,16 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags
 	}
 
 	cpuset_cpus_allowed(p, cpus_allowed);
-	cpumask_and(new_mask, mask, cpus_allowed);
+	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
+
+	ctx->new_mask = new_mask;
+	ctx->flags |= SCA_CHECK;
 
 	retval = dl_task_check_affinity(p, new_mask);
 	if (retval)
 		goto out_free_new_mask;
 again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | flags);
+	retval = __set_cpus_allowed_ptr(p, ctx);
 	if (retval)
 		goto out_free_new_mask;
 
@@ -8096,6 +8124,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags
 
 long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 {
+	struct affinity_context ac;
 	struct cpumask *user_mask;
 	struct task_struct *p;
 	int retval;
@@ -8137,8 +8166,12 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 		goto out_put_task;
 	}
 	cpumask_copy(user_mask, in_mask);
+	ac = (struct affinity_context){
+		.new_mask  = in_mask,
+		.flags     = SCA_USER,
+	};
 
-	retval = __sched_setaffinity(p, in_mask, SCA_USER);
+	retval = __sched_setaffinity(p, &ac);
 
 	/*
 	 * Save in_mask into user_cpus_ptr after a successful
@@ -8936,6 +8969,7 @@ void show_state_filter(unsigned int state_filter)
 void __init init_idle(struct task_struct *idle, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
+	struct affinity_context ac;
 	unsigned long flags;
 
 	__sched_fork(0, idle);
@@ -8953,13 +8987,17 @@ void __init init_idle(struct task_struct *idle, int cpu)
 	kthread_set_per_cpu(idle, cpu);
 
 #ifdef CONFIG_SMP
+	ac = (struct affinity_context) {
+		.new_mask  = cpumask_of(cpu),
+		.flags     = 0,
+	};
 	/*
 	 * It's possible that init_idle() gets called multiple times on a task,
 	 * in that case do_set_cpus_allowed() will not do the right thing.
 	 *
 	 * And since this is boot we can forgo the serialization.
 	 */
-	set_cpus_allowed_common(idle, cpumask_of(cpu), 0);
+	set_cpus_allowed_common(idle, &ac);
 #endif
 	/*
 	 * We're having a chicken and egg problem, even though we are
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 0ab79d819a0d..38fa2c3ef7db 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2486,8 +2486,7 @@ static void task_woken_dl(struct rq *rq, struct task_struct *p)
 }
 
 static void set_cpus_allowed_dl(struct task_struct *p,
-				const struct cpumask *new_mask,
-				u32 flags)
+				struct affinity_context *ctx)
 {
 	struct root_domain *src_rd;
 	struct rq *rq;
@@ -2502,7 +2501,7 @@ static void set_cpus_allowed_dl(struct task_struct *p,
 	 * update. We already made space for us in the destination
 	 * domain (see cpuset_can_attach()).
 	 */
-	if (!cpumask_intersects(src_rd->span, new_mask)) {
+	if (!cpumask_intersects(src_rd->span, ctx->new_mask)) {
 		struct dl_bw *src_dl_b;
 
 		src_dl_b = dl_bw_of(cpu_of(rq));
@@ -2516,7 +2515,7 @@ static void set_cpus_allowed_dl(struct task_struct *p,
 		raw_spin_unlock(&src_dl_b->lock);
 	}
 
-	set_cpus_allowed_common(p, new_mask, flags);
+	set_cpus_allowed_common(p, ctx);
 }
 
 /* Assumes rq->lock is held */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 482b702d65ea..1927c02f68fa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2157,6 +2157,11 @@ extern const u32 sched_prio_to_wmult[40];
 
 #define RETRY_TASK		((void *)-1UL)
 
+struct affinity_context {
+	const struct cpumask	*new_mask;
+	unsigned int		flags;
+};
+
 struct sched_class {
 
 #ifdef CONFIG_UCLAMP_TASK
@@ -2185,9 +2190,7 @@ struct sched_class {
 
 	void (*task_woken)(struct rq *this_rq, struct task_struct *task);
 
-	void (*set_cpus_allowed)(struct task_struct *p,
-				 const struct cpumask *newmask,
-				 u32 flags);
+	void (*set_cpus_allowed)(struct task_struct *p, struct affinity_context *ctx);
 
 	void (*rq_online)(struct rq *rq);
 	void (*rq_offline)(struct rq *rq);
@@ -2301,7 +2304,7 @@ extern void update_group_capacity(struct sched_domain *sd, int cpu);
 
 extern void trigger_load_balance(struct rq *rq);
 
-extern void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask, u32 flags);
+extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);
 
 static inline struct task_struct *get_push_task(struct rq *rq)
 {
-- 
2.31.1
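A note on the design payoff, with a minimal sketch (illustrative
stand-ins only, not the kernel definitions): once every consumer takes a
single struct affinity_context pointer, growing the affinity state later
means adding one member to the structure, with no prototype churn across
core.c, deadline.c and sched.h. The user-mask member shown in the
comment below is hypothetical at this point in the series.

/* Illustrative stand-ins; the real types live in kernel/sched/sched.h. */
struct cpumask;
struct task_struct;

struct affinity_context {
	const struct cpumask	*new_mask;
	unsigned int		flags;
	/*
	 * A future field, e.g. a user-requested mask, slots in here
	 * without changing any consumer's signature.
	 */
};

/* Consumers keep this shape no matter what the context grows to hold. */
extern void set_cpus_allowed_common(struct task_struct *p,
				    struct affinity_context *ctx);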