From nobody Sun Apr 12 02:46:58 2026
From: Waiman Long
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Waiman Long
Subject: [PATCH v2 1/2] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
Date: Mon, 1 Aug 2022 11:41:23 -0400
Message-Id: <20220801154124.2011987-2-longman@redhat.com>
In-Reply-To: <20220801154124.2011987-1-longman@redhat.com>
References: <20220801154124.2011987-1-longman@redhat.com>

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
Introduce task_struct::user_cpus_ptr to track requested affinity"). It
is currently used only by the arm64 arch because of its possible
asymmetric CPU setups. This patch extends its usage to save the
user-provided cpumask when sched_setaffinity() is called, on all arches.
To preserve the existing arm64 use case, a new cpus_affinity_set flag is
added to distinguish whether user_cpus_ptr was set up by
sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
user_cpus_ptr set by sched_setaffinity() takes priority and won't be
overwritten by force_compatible_cpus_allowed_ptr() or
relax_compatible_cpus_allowed_ptr().

Since a call to sched_setaffinity() will now set user_cpus_ptr instead
of clearing it, the SCA_USER flag is no longer necessary and can be
removed.
Signed-off-by: Waiman Long
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 71 +++++++++++++++++++++++++++++++------------
 kernel/sched/sched.h  |  1 -
 3 files changed, 52 insertions(+), 21 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index c46f3a63b758..60ae022fa842 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -815,6 +815,7 @@ struct task_struct {
 
 	unsigned int			policy;
 	int				nr_cpus_allowed;
+	int				cpus_affinity_set;
 	const cpumask_t			*cpus_ptr;
 	cpumask_t			*user_cpus_ptr;
 	cpumask_t			cpus_mask;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index da0bf6fe9ecd..7757828c7422 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2607,6 +2607,7 @@ int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 		return -ENOMEM;
 
 	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	dst->cpus_affinity_set = src->cpus_affinity_set;
 	return 0;
 }
 
@@ -2854,7 +2855,6 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
 	bool kthread = p->flags & PF_KTHREAD;
-	struct cpumask *user_mask = NULL;
 	unsigned int dest_cpu;
 	int ret = 0;
 
@@ -2913,14 +2913,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 
 	__do_set_cpus_allowed(p, new_mask, flags);
 
-	if (flags & SCA_USER)
-		user_mask = clear_user_cpus_ptr(p);
-
-	ret = affine_move_task(rq, p, rf, dest_cpu, flags);
-
-	kfree(user_mask);
-
-	return ret;
+	return affine_move_task(rq, p, rf, dest_cpu, flags);
 
 out:
 	task_rq_unlock(rq, p, rf);
@@ -2994,19 +2987,24 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 
 	/*
 	 * We're about to butcher the task affinity, so keep track of what
-	 * the user asked for in case we're able to restore it later on.
+	 * the user asked for in case we're able to restore it later on
+	 * unless it has been set before by sched_setaffinity().
 	 */
-	if (user_mask) {
+	if (user_mask && !p->cpus_affinity_set) {
 		cpumask_copy(user_mask, p->cpus_ptr);
 		p->user_cpus_ptr = user_mask;
+		user_mask = NULL;
 	}
 
-	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
+	err = __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
 
-err_unlock:
-	task_rq_unlock(rq, p, &rf);
+free_user_mask:
 	kfree(user_mask);
 	return err;
+
+err_unlock:
+	task_rq_unlock(rq, p, &rf);
+	goto free_user_mask;
 }
 
 /*
@@ -3055,7 +3053,7 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 }
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
@@ -3073,9 +3071,10 @@ void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 	/*
 	 * Try to restore the old affinity mask. If this fails, then
 	 * we free the mask explicitly to avoid it being inherited across
-	 * a subsequent fork().
+	 * a subsequent fork() unless it is set by sched_setaffinity().
 	 */
-	if (!user_mask || !__sched_setaffinity(p, user_mask))
+	if (!user_mask || !__sched_setaffinity(p, user_mask, false) ||
+	    p->cpus_affinity_set)
 		return;
 
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
@@ -8010,10 +8009,11 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 #endif
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask)
 {
 	int retval;
 	cpumask_var_t cpus_allowed, new_mask;
+	struct cpumask *user_mask = NULL;
 
 	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
 		return -ENOMEM;
@@ -8029,8 +8029,38 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 	retval = dl_task_check_affinity(p, new_mask);
 	if (retval)
 		goto out_free_new_mask;
+
+	/*
+	 * Save the user requested mask into user_cpus_ptr
+	 */
+	if (save_mask && !p->user_cpus_ptr) {
+alloc_again:
+		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
+
+		if (!user_mask) {
+			retval = -ENOMEM;
+			goto out_free_new_mask;
+		}
+	}
+	if (save_mask) {
+		struct rq_flags rf;
+		struct rq *rq = task_rq_lock(p, &rf);
+
+		if (unlikely(!p->user_cpus_ptr && !user_mask)) {
+			task_rq_unlock(rq, p, &rf);
+			goto alloc_again;
+		}
+		if (!p->user_cpus_ptr) {
+			p->user_cpus_ptr = user_mask;
+			user_mask = NULL;
+		}
+
+		cpumask_copy(p->user_cpus_ptr, mask);
+		p->cpus_affinity_set = 1;
+		task_rq_unlock(rq, p, &rf);
+	}
 again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER);
+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
 	if (retval)
 		goto out_free_new_mask;
 
@@ -8044,6 +8074,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 		goto again;
 	}
 
+	kfree(user_mask);
 out_free_new_mask:
 	free_cpumask_var(new_mask);
 out_free_cpus_allowed:
@@ -8087,7 +8118,7 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	if (retval)
 		goto out_put_task;
 
-	retval = __sched_setaffinity(p, in_mask);
+	retval = __sched_setaffinity(p, in_mask, true);
 out_put_task:
 	put_task_struct(p);
 	return retval;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 47b89a0fc6e5..c9e9731a1a17 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2242,7 +2242,6 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 #define SCA_CHECK		0x01
 #define SCA_MIGRATE_DISABLE	0x02
 #define SCA_MIGRATE_ENABLE	0x04
-#define SCA_USER		0x08
 
 #ifdef CONFIG_SMP
 
-- 
2.31.1

From nobody Sun Apr 12 02:46:58 2026
From: Waiman Long
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Waiman Long
Subject: [PATCH v2 2/2] cgroup/cpuset: Keep user set cpus affinity
Date: Mon, 1 Aug 2022 11:41:24 -0400
Message-Id: <20220801154124.2011987-3-longman@redhat.com>
In-Reply-To: <20220801154124.2011987-1-longman@redhat.com>
References: <20220801154124.2011987-1-longman@redhat.com>

It was found that any change to the current cpuset hierarchy may reset
the cpumask of the tasks in the affected cpusets to the default cpuset
value even if the user had explicitly set their CPU affinity beforehand.
This is especially easy to trigger in a cgroup v2 environment, where
writing "+cpuset" to the root cgroup's cgroup.subtree_control file
resets the CPU affinity of every process in the system.
That is problematic in a nohz_full environment, where the tasks running
on the nohz_full CPUs usually have their CPU affinity explicitly set and
will misbehave if that affinity changes.

Fix this problem by looking at user_cpus_ptr, which is set if the CPU
affinity has been explicitly set before, and using it to restrict the
given cpumask unless there is no overlap, in which case we fall back to
the given one.

With that change in place, it was verified that tasks whose CPU affinity
was explicitly set are no longer affected by changes made to the v2
cgroup.subtree_control files.

Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 71a418858a5e..2e3af93bed03 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -704,6 +704,26 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
 	return ret;
 }
 
+/*
+ * Preserve user provided cpumask if set unless there is no overlap.
+ */
+static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
+				       const struct cpumask *mask)
+{
+	if (p->user_cpus_ptr && cpumask_intersects(p->user_cpus_ptr, mask)) {
+		cpumask_var_t new_mask;
+		int ret;
+
+		alloc_cpumask_var(&new_mask, GFP_KERNEL);
+		cpumask_and(new_mask, p->user_cpus_ptr, mask);
+		ret = set_cpus_allowed_ptr(p, new_mask);
+		free_cpumask_var(new_mask);
+		return ret;
+	}
+
+	return set_cpus_allowed_ptr(p, mask);
+}
+
 #ifdef CONFIG_SMP
 /*
  * Helper routine for generate_sched_domains().
@@ -1130,7 +1150,7 @@ static void update_tasks_cpumask(struct cpuset *cs)
 
 	css_task_iter_start(&cs->css, 0, &it);
 	while ((task = css_task_iter_next(&it)))
-		set_cpus_allowed_ptr(task, cs->effective_cpus);
+		cpuset_set_cpus_allowed_ptr(task, cs->effective_cpus);
 	css_task_iter_end(&it);
 }
 
@@ -2303,7 +2323,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 	 * can_attach beforehand should guarantee that this doesn't
 	 * fail. TODO: have a better way to handle failure here
 	 */
-	WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+	WARN_ON_ONCE(cpuset_set_cpus_allowed_ptr(task, cpus_attach));
 
 	cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
 	cpuset_update_task_spread_flag(cs, task);
-- 
2.31.1