From nobody Mon Apr 6 08:06:21 2026
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Lai Jiangshan, Waiman Long
Subject: [PATCH v8 1/7] sched: Add __releases annotations to affine_move_task()
Date: Thu, 8 Sep 2022 15:41:15 -0400
Message-Id: <20220908194121.858462-2-longman@redhat.com>

affine_move_task() assumes that task_rq_lock() has been called and it does an implicit task_rq_unlock() before returning. Add the appropriate __releases annotations to make this clear.

A typo in a comment is also fixed.
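The __releases() annotation being added is a sparse checker hint: the kernel defines it to a sparse context attribute under __CHECKER__ and to nothing otherwise. As a rough userspace sketch (not kernel code; the mutex and function names here are illustrative), the same "caller locks, callee unlocks" hand-off can be documented like this:

```c
/* Userspace sketch of a sparse-style lock hand-off annotation.
 * Under __CHECKER__ (sparse) the attribute tells the checker that the
 * function exits with the lock released; otherwise it expands to nothing.
 * This mirrors how affine_move_task() drops locks its caller acquired. */
#include <assert.h>
#include <pthread.h>

#ifdef __CHECKER__
#define __releases(x)	__attribute__((context(x, 1, 0)))
#else
#define __releases(x)
#endif

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

/* Like affine_move_task(): entered with the lock held, returns without it. */
int finish_and_unlock(int delta)
	__releases(lock)
{
	counter += delta;
	pthread_mutex_unlock(&lock);
	return counter;
}

int do_locked_update(int delta)
{
	pthread_mutex_lock(&lock);
	return finish_and_unlock(delta);	/* drops the lock before returning */
}
```

Run through sparse, a mismatched lock/unlock pairing at such a function would be flagged; with a regular compiler the macro costs nothing.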
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/core.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ee28253c9ac0..b351e6d173b7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2696,6 +2696,8 @@ void release_user_cpus_ptr(struct task_struct *p)
  */
 static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flags *rf,
 			    int dest_cpu, unsigned int flags)
+	__releases(rq->lock)
+	__releases(p->pi_lock)
 {
 	struct set_affinity_pending my_pending = { }, *pending = NULL;
 	bool stop_pending, complete = false;
@@ -3005,7 +3007,7 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 
 /*
  * Restrict the CPU affinity of task @p so that it is a subset of
- * task_cpu_possible_mask() and point @p->user_cpu_ptr to a copy of the
+ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
  * old affinity mask. If the resulting mask is empty, we warn and walk
  * up the cpuset hierarchy until we find a suitable mask.
 */
-- 
2.31.1

From nobody Mon Apr 6 08:06:21 2026
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Lai Jiangshan, Waiman Long
Subject: [PATCH v8 2/7] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
Date: Thu, 8 Sep 2022 15:41:16 -0400
Message-Id: <20220908194121.858462-3-longman@redhat.com>

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched: Introduce task_struct::user_cpus_ptr to track requested affinity"). It is currently used only by the arm64 arch because of its possible asymmetric CPU setup. This patch extends its usage to save the user-provided cpumask when sched_setaffinity() is called, on all arches. With this patch applied, user_cpus_ptr, once allocated after a successful call to sched_setaffinity(), will only be freed when the task exits.

Since user_cpus_ptr is supposed to track the "requested affinity", there is no point in saving the current cpu affinity in restrict_cpus_allowed_ptr() if sched_setaffinity() has never been called. Modify the logic to set user_cpus_ptr only in sched_setaffinity(), and have restrict_cpus_allowed_ptr() and relax_compatible_cpus_allowed_ptr() use it, if set, without changing it.
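The key semantic of this patch is that user_cpus_ptr is NULL until a task calls sched_setaffinity(), and readers fall back to the full possible mask in that case (the task_user_cpus() helper added in sched.h below). A minimal userspace model of that fallback, with cpumasks simulated as 64-bit words and all struct/field names illustrative, not the kernel's:

```c
/* Userspace model of the patch's task_user_cpus() fallback:
 * user_cpus_ptr stays NULL until an explicit setaffinity call, and
 * readers see the full possible mask until then. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define CPU_POSSIBLE_MASK 0xffULL	/* pretend the machine has 8 CPUs */

struct task {
	uint64_t *user_cpus_ptr;	/* NULL until "sched_setaffinity()" */
};

uint64_t task_user_cpus(const struct task *p)
{
	return p->user_cpus_ptr ? *p->user_cpus_ptr : CPU_POSSIBLE_MASK;
}

/* Mimic sched_setaffinity() saving the requested mask on success;
 * once allocated, the mask lives until the task is destroyed. */
int save_user_mask(struct task *p, uint64_t mask)
{
	if (!p->user_cpus_ptr) {
		p->user_cpus_ptr = malloc(sizeof(*p->user_cpus_ptr));
		if (!p->user_cpus_ptr)
			return -1;	/* -ENOMEM in the kernel */
	}
	*p->user_cpus_ptr = mask;
	return 0;
}
```

The point of the fallback is that callers such as restrict_cpus_allowed_ptr() never need to special-case "no affinity ever requested".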
This results in some behavior changes on arm64 systems with asymmetric CPUs in some corner cases. For instance, if sched_setaffinity() has never been called and there is a cpuset change before relax_compatible_cpus_allowed_ptr() is called, the latter will now follow what the cpuset allows rather than what the previous cpu affinity setting allowed.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/core.c  | 82 ++++++++++++++++++++------------------
 kernel/sched/sched.h |  7 ++++
 2 files changed, 44 insertions(+), 45 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b351e6d173b7..c7c0425974c2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2850,7 +2850,6 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
 	bool kthread = p->flags & PF_KTHREAD;
-	struct cpumask *user_mask = NULL;
 	unsigned int dest_cpu;
 	int ret = 0;
 
@@ -2909,14 +2908,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 
 	__do_set_cpus_allowed(p, new_mask, flags);
 
-	if (flags & SCA_USER)
-		user_mask = clear_user_cpus_ptr(p);
-
-	ret = affine_move_task(rq, p, rf, dest_cpu, flags);
-
-	kfree(user_mask);
-
-	return ret;
+	return affine_move_task(rq, p, rf, dest_cpu, flags);
 
 out:
 	task_rq_unlock(rq, p, rf);
@@ -2951,8 +2943,10 @@ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
 
 /*
  * Change a given task's CPU affinity to the intersection of its current
- * affinity mask and @subset_mask, writing the resulting mask to @new_mask
- * and pointing @p->user_cpus_ptr to a copy of the old mask.
+ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
+ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
+ * affinity or use cpu_online_mask instead.
+ *
 * If the resulting mask is empty, leave the affinity unchanged and return
 * -EINVAL.
 */
@@ -2960,17 +2954,10 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 					 struct cpumask *new_mask,
 					 const struct cpumask *subset_mask)
 {
-	struct cpumask *user_mask = NULL;
 	struct rq_flags rf;
 	struct rq *rq;
 	int err;
 
-	if (!p->user_cpus_ptr) {
-		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
-		if (!user_mask)
-			return -ENOMEM;
-	}
-
 	rq = task_rq_lock(p, &rf);
 
 	/*
@@ -2983,25 +2970,15 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 		goto err_unlock;
 	}
 
-	if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
+	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
 		err = -EINVAL;
 		goto err_unlock;
 	}
 
-	/*
-	 * We're about to butcher the task affinity, so keep track of what
-	 * the user asked for in case we're able to restore it later on.
-	 */
-	if (user_mask) {
-		cpumask_copy(user_mask, p->cpus_ptr);
-		p->user_cpus_ptr = user_mask;
-	}
-
 	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
 
 err_unlock:
 	task_rq_unlock(rq, p, &rf);
-	kfree(user_mask);
 	return err;
 }
 
@@ -3055,30 +3032,21 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
- * call to force_compatible_cpus_allowed_ptr(). This will clear (and free)
- * @p->user_cpus_ptr.
+ * call to force_compatible_cpus_allowed_ptr().
  *
  * It is the caller's responsibility to serialise this with any calls to
  * force_compatible_cpus_allowed_ptr(@p).
  */
 void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 {
-	struct cpumask *user_mask = p->user_cpus_ptr;
-	unsigned long flags;
+	int ret;
 
 	/*
-	 * Try to restore the old affinity mask. If this fails, then
-	 * we free the mask explicitly to avoid it being inherited across
-	 * a subsequent fork().
+	 * Try to restore the old affinity mask with __sched_setaffinity().
+	 * Cpuset masking will be done there too.
 	 */
-	if (!user_mask || !__sched_setaffinity(p, user_mask))
-		return;
-
-	raw_spin_lock_irqsave(&p->pi_lock, flags);
-	user_mask = clear_user_cpus_ptr(p);
-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-
-	kfree(user_mask);
+	ret = __sched_setaffinity(p, task_user_cpus(p));
+	WARN_ON_ONCE(ret);
 }
 
 void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
@@ -8101,7 +8069,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 	if (retval)
 		goto out_free_new_mask;
 again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER);
+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
 	if (retval)
 		goto out_free_new_mask;
 
@@ -8124,6 +8092,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 
 long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 {
+	struct cpumask *user_mask;
 	struct task_struct *p;
 	int retval;
 
@@ -8158,7 +8127,30 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	if (retval)
 		goto out_put_task;
 
+	user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
+	if (!user_mask) {
+		retval = -ENOMEM;
+		goto out_put_task;
+	}
+	cpumask_copy(user_mask, in_mask);
+
 	retval = __sched_setaffinity(p, in_mask);
+
+	/*
+	 * Save in_mask into user_cpus_ptr after a successful
+	 * __sched_setaffinity() call. pi_lock is used to synchronize
+	 * changes to user_cpus_ptr.
+	 */
+	if (!retval) {
+		unsigned long flags;
+
+		/* Use pi_lock to synchronize changes to user_cpus_ptr */
+		raw_spin_lock_irqsave(&p->pi_lock, flags);
+		swap(p->user_cpus_ptr, user_mask);
+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+	}
+	kfree(user_mask);
+
 out_put_task:
 	put_task_struct(p);
 	return retval;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e26688d387ae..ac235bc8ef08 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1881,6 +1881,13 @@ static inline void dirty_sched_domain_sysctl(int cpu)
 #endif
 
 extern int sched_update_scaling(void);
+
+static inline const struct cpumask *task_user_cpus(struct task_struct *p)
+{
+	if (!p->user_cpus_ptr)
+		return cpu_possible_mask; /* &init_task.cpus_mask */
+	return p->user_cpus_ptr;
+}
 #endif /* CONFIG_SMP */
 
 #include "stats.h"
-- 
2.31.1

From nobody Mon Apr 6 08:06:21 2026
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Lai Jiangshan, Waiman Long
Subject: [PATCH v8 3/7] sched: Enforce user requested affinity
Date: Thu, 8 Sep 2022 15:41:17 -0400
Message-Id: <20220908194121.858462-4-longman@redhat.com>

It was found that the user requested affinity via sched_setaffinity() can be easily overwritten by other kernel
subsystems without an easy way to reset it back to what the user requested. For example, any change to the current cpuset hierarchy may reset the cpumask of the tasks in the affected cpusets to the default cpuset value, even if those tasks have pre-existing user requested affinity. That is especially easy to trigger under a cgroup v2 environment, where writing "+cpuset" to the root cgroup's cgroup.subtree_control file will reset the cpu affinity of all the processes in the system. That is problematic in a nohz_full environment, where the tasks running on the nohz_full CPUs usually have their cpu affinity explicitly set and will behave incorrectly if it changes.

Fix this problem by looking at user_cpus_ptr in __set_cpus_allowed_ptr() and using it to restrict the given cpumask, unless there is no overlap. In that case, it will fall back to the given one. The SCA_USER flag is reused to indicate intent to set user_cpus_ptr, in which case user_cpus_ptr masking should be skipped. All callers of set_cpus_allowed_ptr() will be affected by this change.

A scratch cpumask is added to the percpu runqueues structure for doing additional masking when user_cpus_ptr is set.
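The enforcement rule described above is a small pure function: intersect the requested mask with the saved user mask, but only when a user mask exists, the caller is not itself setting it (SCA_USER), and the intersection is non-empty. A userspace sketch of just that decision, with cpumasks modeled as 64-bit words (the function name and flag parameter are illustrative, not the kernel's):

```c
/* Model of the masking added to __set_cpus_allowed_ptr():
 * restrict the requested mask by the user-requested mask, falling back
 * to the requested mask when there is no overlap, and skipping the
 * restriction entirely when the caller is setting the user mask. */
#include <assert.h>
#include <stdint.h>

uint64_t apply_user_mask(uint64_t new_mask, uint64_t user_mask, int sca_user)
{
	uint64_t inter = new_mask & user_mask;

	/* sca_user models the SCA_USER flag: the caller *is* the user. */
	if (user_mask && !sca_user && inter)
		return inter;	/* the rq->scratch_mask result in the patch */
	return new_mask;	/* no user mask, SCA_USER, or no overlap */
}
```

For example, a cpuset update requesting CPUs 4-7 against a user mask of CPUs 4-5 is narrowed to CPUs 4-5, while a fully disjoint request is honored unchanged rather than leaving the task with no CPU.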
Signed-off-by: Waiman Long <longman@redhat.com>
Reported-by: kernel test robot
---
 kernel/sched/core.c  | 17 ++++++++++++-----
 kernel/sched/sched.h |  3 +++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c7c0425974c2..84544daf3839 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2932,6 +2932,10 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 	struct rq *rq;
 
 	rq = task_rq_lock(p, &rf);
+	if (p->user_cpus_ptr && !(flags & SCA_USER) &&
+	    cpumask_and(rq->scratch_mask, new_mask, p->user_cpus_ptr))
+		new_mask = rq->scratch_mask;
+
 	return __set_cpus_allowed_ptr_locked(p, new_mask, flags, rq, &rf);
 }
 
@@ -3028,7 +3032,7 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 }
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
@@ -3045,7 +3049,7 @@ void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 	 * Try to restore the old affinity mask with __sched_setaffinity().
 	 * Cpuset masking will be done there too.
 	 */
-	ret = __sched_setaffinity(p, task_user_cpus(p));
+	ret = __sched_setaffinity(p, task_user_cpus(p), 0);
 	WARN_ON_ONCE(ret);
 }
 
@@ -8049,7 +8053,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 #endif
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags)
 {
 	int retval;
 	cpumask_var_t cpus_allowed, new_mask;
@@ -8069,7 +8073,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 	if (retval)
 		goto out_free_new_mask;
 again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | flags);
 	if (retval)
 		goto out_free_new_mask;
 
@@ -8134,7 +8138,7 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	}
 	cpumask_copy(user_mask, in_mask);
 
-	retval = __sched_setaffinity(p, in_mask);
+	retval = __sched_setaffinity(p, in_mask, SCA_USER);
 
 	/*
 	 * Save in_mask into user_cpus_ptr after a successful
@@ -9647,6 +9651,9 @@ void __init sched_init(void)
 			cpumask_size(), GFP_KERNEL, cpu_to_node(i));
 		per_cpu(select_rq_mask, i) = (cpumask_var_t)kzalloc_node(
 			cpumask_size(), GFP_KERNEL, cpu_to_node(i));
+		per_cpu(runqueues.scratch_mask, i) =
+			(cpumask_var_t)kzalloc_node(cpumask_size(),
+						    GFP_KERNEL, cpu_to_node(i));
 	}
 #endif /* CONFIG_CPUMASK_OFFSTACK */
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ac235bc8ef08..482b702d65ea 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1159,6 +1159,9 @@ struct rq {
 	unsigned int		core_forceidle_occupation;
 	u64			core_forceidle_start;
 #endif
+
+	/* Scratch cpumask to be temporarily used under rq_lock */
+	cpumask_var_t		scratch_mask;
};
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-- 
2.31.1

From nobody Mon Apr 6 08:06:21 2026
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Lai Jiangshan, Waiman Long
Subject: [PATCH v8 4/7] sched: Introduce affinity_context structure
Date: Thu, 8 Sep 2022 15:41:18 -0400
Message-Id: <20220908194121.858462-5-longman@redhat.com>

Introduce a new affinity_context structure for passing cpu affinity information around in the core scheduler code. The relevant functions are modified to use the new structure. There is no functional change.
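The refactoring here is the classic parameter-object pattern: bundling (new_mask, flags) into one struct so that adding a field later does not churn every call-site signature. A compact userspace illustration of the same shape (the struct mirrors the patch; the `set_allowed`/`set_allowed_user` wrappers and the mask representation are illustrative only, and the SCA_* values shown may not match the kernel's):

```c
/* Sketch of the affinity_context idea: call sites build a context on
 * the stack with designated initializers and pass a single pointer. */
#include <assert.h>
#include <stdint.h>

#define SCA_CHECK	0x01	/* illustrative flag values */
#define SCA_USER	0x08

struct affinity_context {
	uint64_t	new_mask;	/* cpumask modeled as a 64-bit word */
	uint32_t	flags;
};

static uint64_t applied_mask;

/* Core worker: one pointer argument instead of a growing list. */
int set_allowed(struct affinity_context *ctx)
{
	if (!ctx->new_mask)
		return -1;	/* empty mask is invalid */
	applied_mask = ctx->new_mask;
	return 0;
}

/* A wrapper only has to fill in the fields it cares about. */
int set_allowed_user(uint64_t mask)
{
	struct affinity_context ac = {
		.new_mask	= mask,
		.flags		= SCA_CHECK | SCA_USER,
	};

	return set_allowed(&ac);
}
```

This is why the patch can later grow the context (v8 adds a `user_mask` field in a follow-up style) without touching sched_class hooks again.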
Suggested-by: Peter Zijlstra
Signed-off-by: Waiman Long <longman@redhat.com>
Reported-by: kernel test robot
---
 kernel/sched/core.c     | 114 ++++++++++++++++++++++++++--------------
 kernel/sched/deadline.c |   7 ++-
 kernel/sched/sched.h    |  11 ++--
 3 files changed, 86 insertions(+), 46 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 84544daf3839..b43b851c0399 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2195,14 +2195,18 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
 #ifdef CONFIG_SMP
 
 static void
-__do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32 flags);
+__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx);
 
 static int __set_cpus_allowed_ptr(struct task_struct *p,
-				  const struct cpumask *new_mask,
-				  u32 flags);
+				  struct affinity_context *ctx);
 
 static void migrate_disable_switch(struct rq *rq, struct task_struct *p)
 {
+	struct affinity_context ac = {
+		.new_mask  = cpumask_of(rq->cpu),
+		.flags     = SCA_MIGRATE_DISABLE,
+	};
+
 	if (likely(!p->migration_disabled))
 		return;
 
@@ -2212,7 +2216,7 @@ static void migrate_disable_switch(struct rq *rq, struct task_struct *p)
 	/*
 	 * Violates locking rules! see comment in __do_set_cpus_allowed().
 	 */
-	__do_set_cpus_allowed(p, cpumask_of(rq->cpu), SCA_MIGRATE_DISABLE);
+	__do_set_cpus_allowed(p, &ac);
 }
 
 void migrate_disable(void)
@@ -2234,6 +2238,10 @@ EXPORT_SYMBOL_GPL(migrate_disable);
 void migrate_enable(void)
 {
 	struct task_struct *p = current;
+	struct affinity_context ac = {
+		.new_mask  = &p->cpus_mask,
+		.flags     = SCA_MIGRATE_ENABLE,
+	};
 
 	if (p->migration_disabled > 1) {
 		p->migration_disabled--;
@@ -2249,7 +2257,7 @@ void migrate_enable(void)
 	 */
 	preempt_disable();
 	if (p->cpus_ptr != &p->cpus_mask)
-		__set_cpus_allowed_ptr(p, &p->cpus_mask, SCA_MIGRATE_ENABLE);
+		__set_cpus_allowed_ptr(p, &ac);
 	/*
 	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
 	 * regular cpus_mask, otherwise things that race (eg.
@@ -2529,19 +2537,19 @@ int push_cpu_stop(void *arg)
  * sched_class::set_cpus_allowed must do the below, but is not required to
  * actually call this function.
  */
-void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask, u32 flags)
+void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
 {
-	if (flags & (SCA_MIGRATE_ENABLE | SCA_MIGRATE_DISABLE)) {
-		p->cpus_ptr = new_mask;
+	if (ctx->flags & (SCA_MIGRATE_ENABLE | SCA_MIGRATE_DISABLE)) {
+		p->cpus_ptr = ctx->new_mask;
 		return;
 	}
 
-	cpumask_copy(&p->cpus_mask, new_mask);
-	p->nr_cpus_allowed = cpumask_weight(new_mask);
+	cpumask_copy(&p->cpus_mask, ctx->new_mask);
+	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
 }
 
 static void
-__do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32 flags)
+__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 {
 	struct rq *rq = task_rq(p);
 	bool queued, running;
@@ -2558,7 +2566,7 @@ __do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32
 	 *
 	 * XXX do further audits, this smells like something putrid.
 	 */
-	if (flags & SCA_MIGRATE_DISABLE)
+	if (ctx->flags & SCA_MIGRATE_DISABLE)
 		SCHED_WARN_ON(!p->on_cpu);
 	else
 		lockdep_assert_held(&p->pi_lock);
@@ -2577,7 +2585,7 @@ __do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32
 	if (running)
 		put_prev_task(rq, p);
 
-	p->sched_class->set_cpus_allowed(p, new_mask, flags);
+	p->sched_class->set_cpus_allowed(p, ctx);
 
 	if (queued)
 		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);
@@ -2587,7 +2595,12 @@ __do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32
 
 void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 {
-	__do_set_cpus_allowed(p, new_mask, 0);
+	struct affinity_context ac = {
+		.new_mask  = new_mask,
+		.flags     = 0,
+	};
+
+	__do_set_cpus_allowed(p, &ac);
 }
 
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
@@ -2840,8 +2853,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
  * Called with both p->pi_lock and rq->lock held; drops both before returning.
 */
 static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
-					 const struct cpumask *new_mask,
-					 u32 flags,
+					 struct affinity_context *ctx,
 					 struct rq *rq,
 					 struct rq_flags *rf)
 	__releases(rq->lock)
@@ -2869,7 +2881,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 		cpu_valid_mask = cpu_online_mask;
 	}
 
-	if (!kthread && !cpumask_subset(new_mask, cpu_allowed_mask)) {
+	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -2878,18 +2890,18 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	 * Must re-check here, to close a race against __kthread_bind(),
 	 * sched_setaffinity() is not guaranteed to observe the flag.
 	 */
-	if ((flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
+	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
 		ret = -EINVAL;
 		goto out;
 	}
 
-	if (!(flags & SCA_MIGRATE_ENABLE)) {
-		if (cpumask_equal(&p->cpus_mask, new_mask))
+	if (!(ctx->flags & SCA_MIGRATE_ENABLE)) {
+		if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
 			goto out;
 
 		if (WARN_ON_ONCE(p == current &&
 				 is_migration_disabled(p) &&
-				 !cpumask_test_cpu(task_cpu(p), new_mask))) {
+				 !cpumask_test_cpu(task_cpu(p), ctx->new_mask))) {
 			ret = -EBUSY;
 			goto out;
 		}
@@ -2900,15 +2912,15 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	 * for groups of tasks (ie. cpuset), so that load balancing is not
 	 * immediately required to distribute the tasks within their new mask.
 	 */
-	dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, new_mask);
+	dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, ctx->new_mask);
 	if (dest_cpu >= nr_cpu_ids) {
 		ret = -EINVAL;
 		goto out;
 	}
 
-	__do_set_cpus_allowed(p, new_mask, flags);
+	__do_set_cpus_allowed(p, ctx);
 
-	return affine_move_task(rq, p, rf, dest_cpu, flags);
+	return affine_move_task(rq, p, rf, dest_cpu, ctx->flags);
 
 out:
 	task_rq_unlock(rq, p, rf);
@@ -2926,22 +2938,27 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 * call is not atomic; no spinlocks may be held.
 */
static int __set_cpus_allowed_ptr(struct task_struct *p,
-				  const struct cpumask *new_mask, u32 flags)
+				  struct affinity_context *ctx)
 {
 	struct rq_flags rf;
 	struct rq *rq;
 
 	rq = task_rq_lock(p, &rf);
-	if (p->user_cpus_ptr && !(flags & SCA_USER) &&
-	    cpumask_and(rq->scratch_mask, new_mask, p->user_cpus_ptr))
-		new_mask = rq->scratch_mask;
+	if (p->user_cpus_ptr && !(ctx->flags & SCA_USER) &&
+	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
+		ctx->new_mask = rq->scratch_mask;
 
-	return __set_cpus_allowed_ptr_locked(p, new_mask, flags, rq, &rf);
+	return __set_cpus_allowed_ptr_locked(p, ctx, rq, &rf);
 }
 
 int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
 {
-	return __set_cpus_allowed_ptr(p, new_mask, 0);
+	struct affinity_context ac = {
+		.new_mask  = new_mask,
+		.flags     = 0,
+	};
+
+	return __set_cpus_allowed_ptr(p, &ac);
 }
 EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
 
@@ -2958,6 +2975,10 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 					 struct cpumask *new_mask,
 					 const struct cpumask *subset_mask)
 {
+	struct affinity_context ac = {
+		.new_mask  = new_mask,
+		.flags     = 0,
+	};
 	struct rq_flags rf;
 	struct rq *rq;
 	int err;
@@ -2979,7 +3000,7 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 		goto err_unlock;
 	}
 
-	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
+	return __set_cpus_allowed_ptr_locked(p, &ac, rq, &rf);
 
 err_unlock:
 	task_rq_unlock(rq, p, &rf);
@@ -3032,7 +3053,7 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 }
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags);
+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
@@ -3043,13 +3064,17 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int flags)
 */
void relax_compatible_cpus_allowed_ptr(struct
task_struct *p) { + struct affinity_context ac =3D { + .new_mask =3D task_user_cpus(p), + .flags =3D 0, + }; int ret; =20 /* * Try to restore the old affinity mask with __sched_setaffinity(). * Cpuset masking will be done there too. */ - ret =3D __sched_setaffinity(p, task_user_cpus(p), 0); + ret =3D __sched_setaffinity(p, &ac); WARN_ON_ONCE(ret); } =20 @@ -8053,7 +8078,7 @@ int dl_task_check_affinity(struct task_struct *p, con= st struct cpumask *mask) #endif =20 static int -__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, int= flags) +__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx) { int retval; cpumask_var_t cpus_allowed, new_mask; @@ -8067,13 +8092,16 @@ __sched_setaffinity(struct task_struct *p, const st= ruct cpumask *mask, int flags } =20 cpuset_cpus_allowed(p, cpus_allowed); - cpumask_and(new_mask, mask, cpus_allowed); + cpumask_and(new_mask, ctx->new_mask, cpus_allowed); + + ctx->new_mask =3D new_mask; + ctx->flags |=3D SCA_CHECK; =20 retval =3D dl_task_check_affinity(p, new_mask); if (retval) goto out_free_new_mask; again: - retval =3D __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | flags); + retval =3D __set_cpus_allowed_ptr(p, ctx); if (retval) goto out_free_new_mask; =20 @@ -8096,6 +8124,7 @@ __sched_setaffinity(struct task_struct *p, const stru= ct cpumask *mask, int flags =20 long sched_setaffinity(pid_t pid, const struct cpumask *in_mask) { + struct affinity_context ac; struct cpumask *user_mask; struct task_struct *p; int retval; @@ -8137,8 +8166,12 @@ long sched_setaffinity(pid_t pid, const struct cpuma= sk *in_mask) goto out_put_task; } cpumask_copy(user_mask, in_mask); + ac =3D (struct affinity_context){ + .new_mask =3D in_mask, + .flags =3D SCA_USER, + }; =20 - retval =3D __sched_setaffinity(p, in_mask, SCA_USER); + retval =3D __sched_setaffinity(p, &ac); =20 /* * Save in_mask into user_cpus_ptr after a successful @@ -8936,6 +8969,7 @@ void show_state_filter(unsigned int state_filter) void __init 
init_idle(struct task_struct *idle, int cpu) { struct rq *rq =3D cpu_rq(cpu); + struct affinity_context ac; unsigned long flags; =20 __sched_fork(0, idle); @@ -8953,13 +8987,17 @@ void __init init_idle(struct task_struct *idle, int= cpu) kthread_set_per_cpu(idle, cpu); =20 #ifdef CONFIG_SMP + ac =3D (struct affinity_context) { + .new_mask =3D cpumask_of(cpu), + .flags =3D 0, + }; /* * It's possible that init_idle() gets called multiple times on a task, * in that case do_set_cpus_allowed() will not do the right thing. * * And since this is boot we can forgo the serialization. */ - set_cpus_allowed_common(idle, cpumask_of(cpu), 0); + set_cpus_allowed_common(idle, &ac); #endif /* * We're having a chicken and egg problem, even though we are diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index 0ab79d819a0d..38fa2c3ef7db 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -2486,8 +2486,7 @@ static void task_woken_dl(struct rq *rq, struct task_= struct *p) } =20 static void set_cpus_allowed_dl(struct task_struct *p, - const struct cpumask *new_mask, - u32 flags) + struct affinity_context *ctx) { struct root_domain *src_rd; struct rq *rq; @@ -2502,7 +2501,7 @@ static void set_cpus_allowed_dl(struct task_struct *p, * update. We already made space for us in the destination * domain (see cpuset_can_attach()). 
	 */
-	if (!cpumask_intersects(src_rd->span, new_mask)) {
+	if (!cpumask_intersects(src_rd->span, ctx->new_mask)) {
 		struct dl_bw *src_dl_b;

 		src_dl_b = dl_bw_of(cpu_of(rq));
@@ -2516,7 +2515,7 @@ static void set_cpus_allowed_dl(struct task_struct *p,
 		raw_spin_unlock(&src_dl_b->lock);
 	}

-	set_cpus_allowed_common(p, new_mask, flags);
+	set_cpus_allowed_common(p, ctx);
 }

 /* Assumes rq->lock is held */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 482b702d65ea..1927c02f68fa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2157,6 +2157,11 @@ extern const u32 sched_prio_to_wmult[40];

 #define RETRY_TASK		((void *)-1UL)

+struct affinity_context {
+	const struct cpumask *new_mask;
+	unsigned int flags;
+};
+
 struct sched_class {

 #ifdef CONFIG_UCLAMP_TASK
@@ -2185,9 +2190,7 @@ struct sched_class {

 	void (*task_woken)(struct rq *this_rq, struct task_struct *task);

-	void (*set_cpus_allowed)(struct task_struct *p,
-				 const struct cpumask *newmask,
-				 u32 flags);
+	void (*set_cpus_allowed)(struct task_struct *p, struct affinity_context *ctx);

 	void (*rq_online)(struct rq *rq);
 	void (*rq_offline)(struct rq *rq);
@@ -2301,7 +2304,7 @@ extern void update_group_capacity(struct sched_domain *sd, int cpu);

 extern void trigger_load_balance(struct rq *rq);

-extern void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask, u32 flags);
+extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);

 static inline struct task_struct *get_push_task(struct rq *rq)
 {
-- 
2.31.1
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
    Zefan Li, Johannes Weiner, Will Deacon
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Lai Jiangshan,
    Waiman Long
Subject: [PATCH v8 5/7] sched: Handle set_cpus_allowed_ptr() & sched_setaffinity() race
Date: Thu, 8 Sep 2022 15:41:19 -0400
Message-Id: <20220908194121.858462-6-longman@redhat.com>
In-Reply-To: <20220908194121.858462-1-longman@redhat.com>
References: <20220908194121.858462-1-longman@redhat.com>

Racing is possible between set_cpus_allowed_ptr() and sched_setaffinity(),
or between multiple sched_setaffinity() calls from different CPUs. To
resolve these races, update both user_cpus_ptr and cpus_mask in a single
lock critical section instead of in two separate ones. This requires
moving the user_cpus_ptr update into set_cpus_allowed_common() by putting
the user_mask into the affinity_context structure.

This patch also changes the handling of the race between a
sched_setaffinity() call and a concurrent change to the cpumask of the
current cpuset. If the new mask conflicts with the newly updated cpuset,
cpus_mask will be reset to the cpuset cpumask and an error value of
-EINVAL will be returned. If a previous user_cpus_ptr value exists, it
will be swapped back in, and new_mask will be further restricted to what
is allowed by the cpumask pointed to by the old user_cpus_ptr.
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/core.c  | 44 +++++++++++++++++++++++++++-----------------
 kernel/sched/sched.h |  1 +
 2 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b43b851c0399..1ba5d82d4f3c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2546,6 +2546,12 @@ void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)

 	cpumask_copy(&p->cpus_mask, ctx->new_mask);
 	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
+
+	/*
+	 * Swap in a new user_cpus_ptr if SCA_USER flag set
+	 */
+	if (ctx->flags & SCA_USER)
+		swap(p->user_cpus_ptr, ctx->user_mask);
 }

 static void
@@ -8100,7 +8106,7 @@ __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
 	retval = dl_task_check_affinity(p, new_mask);
 	if (retval)
 		goto out_free_new_mask;
-again:
+
 	retval = __set_cpus_allowed_ptr(p, ctx);
 	if (retval)
 		goto out_free_new_mask;
@@ -8112,7 +8118,24 @@ __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
 	 * Just reset the cpumask to the cpuset's cpus_allowed.
 	 */
 	cpumask_copy(new_mask, cpus_allowed);
-	goto again;
+
+	/*
+	 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
+	 * will restore the previous user_cpus_ptr value.
+	 *
+	 * In the unlikely event a previous user_cpus_ptr exists,
+	 * we need to further restrict the mask to what is allowed
+	 * by that old user_cpus_ptr.
+	 */
+	if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
+		bool empty = !cpumask_and(new_mask, new_mask,
+					  ctx->user_mask);
+
+		if (WARN_ON_ONCE(empty))
+			cpumask_copy(new_mask, cpus_allowed);
+	}
+	__set_cpus_allowed_ptr(p, ctx);
+	retval = -EINVAL;
 }

 out_free_new_mask:
@@ -8168,25 +8191,12 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	cpumask_copy(user_mask, in_mask);
 	ac = (struct affinity_context){
 		.new_mask  = in_mask,
+		.user_mask = user_mask,
 		.flags     = SCA_USER,
 	};

 	retval = __sched_setaffinity(p, &ac);
-
-	/*
-	 * Save in_mask into user_cpus_ptr after a successful
-	 * __sched_setaffinity() call. pi_lock is used to synchronize
-	 * changes to user_cpus_ptr.
-	 */
-	if (!retval) {
-		unsigned long flags;
-
-		/* Use pi_lock to synchronize changes to user_cpus_ptr */
-		raw_spin_lock_irqsave(&p->pi_lock, flags);
-		swap(p->user_cpus_ptr, user_mask);
-		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-	}
-	kfree(user_mask);
+	kfree(ac.user_mask);

 out_put_task:
 	put_task_struct(p);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1927c02f68fa..110e13b7d78b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2159,6 +2159,7 @@ extern const u32 sched_prio_to_wmult[40];

 struct affinity_context {
 	const struct cpumask *new_mask;
+	struct cpumask *user_mask;
 	unsigned int flags;
 };

-- 
2.31.1
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
    Zefan Li, Johannes Weiner, Will Deacon
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Lai Jiangshan,
    Waiman Long
Subject: [PATCH v8 6/7] sched: Fix sched_setaffinity() and fork/clone() race
Date: Thu, 8 Sep 2022 15:41:20 -0400
Message-Id: <20220908194121.858462-7-longman@redhat.com>
In-Reply-To: <20220908194121.858462-1-longman@redhat.com>
References: <20220908194121.858462-1-longman@redhat.com>

sched_setaffinity() can also race with a concurrent fork/clone() syscall
calling dup_user_cpus_ptr(), which may lead to a use-after-free problem.
Fix that by protecting the cpumask copy with the pi_lock of the source
task.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/core.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1ba5d82d4f3c..89c393df5ebc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2612,6 +2612,8 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 		      int node)
 {
+	unsigned long flags;
+
 	if (!src->user_cpus_ptr)
 		return 0;

@@ -2619,7 +2621,10 @@ int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 	if (!dst->user_cpus_ptr)
 		return -ENOMEM;

+	/* Use pi_lock to protect content of user_cpus_ptr */
+	raw_spin_lock_irqsave(&src->pi_lock, flags);
 	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
 	return 0;
 }

-- 
2.31.1
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
    Zefan Li, Johannes Weiner, Will Deacon
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Lai Jiangshan,
    Waiman Long
Subject: [PATCH v8 7/7] sched: Always clear user_cpus_ptr in do_set_cpus_allowed()
Date: Thu, 8 Sep 2022 15:41:21 -0400
Message-Id: <20220908194121.858462-8-longman@redhat.com>
In-Reply-To: <20220908194121.858462-1-longman@redhat.com>
References: <20220908194121.858462-1-longman@redhat.com>

The do_set_cpus_allowed() function is used by either kthread_bind() or
select_fallback_rq(). In both cases, the user affinity (if any) should
be destroyed too.

Suggested-by: Peter Zijlstra
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/core.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 89c393df5ebc..ba500b1aad3c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2599,14 +2599,20 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 	set_next_task(rq, p);
 }

+/*
+ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
+ * affinity (if any) should be destroyed too.
+ */
 void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 {
 	struct affinity_context ac = {
 		.new_mask  = new_mask,
-		.flags     = 0,
+		.user_mask = NULL,
+		.flags     = SCA_USER,	/* clear the user requested mask */
 	};

 	__do_set_cpus_allowed(p, &ac);
+	kfree(ac.user_mask);
 }

 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
-- 
2.31.1