From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Linus Torvalds, Waiman Long
Subject: [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
Date: Tue, 16 Aug 2022 15:27:32 -0400
Message-Id: <20220816192734.67115-2-longman@redhat.com>
In-Reply-To: <20220816192734.67115-1-longman@redhat.com>
References: <20220816192734.67115-1-longman@redhat.com>

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
Introduce task_struct::user_cpus_ptr to track requested affinity"). It
is currently used only by the arm64 arch due to its possible asymmetric
CPU setup. This patch extends its usage to save the user-provided
cpumask when sched_setaffinity() is called, on all arches. With this
patch applied, user_cpus_ptr, once allocated after a call to
sched_setaffinity(), will only be freed when the task exits.
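For reference, the "user provided cpumask" here is the mask a task
passes to the sched_setaffinity(2) syscall. A minimal userspace sketch
of such a call (illustrative only, not part of this patch):

/*
 * Hypothetical userspace illustration: the mask handed to
 * sched_setaffinity(2) is what the kernel will now remember in
 * task_struct::user_cpus_ptr until the task exits.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* request CPU 0 ... */
	CPU_SET(1, &set);	/* ... and CPU 1 */

	/* pid 0 means the calling thread */
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}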
Since user_cpus_ptr is supposed to be used for "requested affinity",
there is no point in saving the current cpu affinity in
restrict_cpus_allowed_ptr() if sched_setaffinity() has never been
called. Modify the logic to set user_cpus_ptr only in
sched_setaffinity(), and to use it in restrict_cpus_allowed_ptr() and
relax_compatible_cpus_allowed_ptr() if set, without changing it.

This results in some changes in behavior for arm64 systems with
asymmetric CPUs in some corner cases. For instance, if
sched_setaffinity() has never been called and there is a cpuset change
before relax_compatible_cpus_allowed_ptr() is called, the subsequent
call will follow what the cpuset allows rather than what the previous
cpu affinity setting allowed.

As a call to sched_setaffinity() will no longer clear user_cpus_ptr but
set it instead, the SCA_USER flag is no longer necessary and can be
removed.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/core.c  | 100 ++++++++++++++++++++++---------------------
 kernel/sched/sched.h |   1 -
 2 files changed, 52 insertions(+), 49 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ee28253c9ac0..03053eebb22e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2848,7 +2848,6 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
 	bool kthread = p->flags & PF_KTHREAD;
-	struct cpumask *user_mask = NULL;
 	unsigned int dest_cpu;
 	int ret = 0;
 
@@ -2907,14 +2906,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 
 	__do_set_cpus_allowed(p, new_mask, flags);
 
-	if (flags & SCA_USER)
-		user_mask = clear_user_cpus_ptr(p);
-
-	ret = affine_move_task(rq, p, rf, dest_cpu, flags);
-
-	kfree(user_mask);
-
-	return ret;
+	return affine_move_task(rq, p, rf, dest_cpu, flags);
 
 out:
 	task_rq_unlock(rq, p, rf);
@@ -2949,8 +2941,10 @@ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
 
 /*
  * Change a given task's CPU affinity to the intersection of its current
- * affinity mask and @subset_mask, writing the resulting mask to @new_mask
- * and pointing @p->user_cpus_ptr to a copy of the old mask.
+ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
+ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
+ * affinity or use cpu_online_mask instead.
+ *
  * If the resulting mask is empty, leave the affinity unchanged and return
  * -EINVAL.
  */
@@ -2958,16 +2952,10 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 					  struct cpumask *new_mask,
 					  const struct cpumask *subset_mask)
 {
-	struct cpumask *user_mask = NULL;
 	struct rq_flags rf;
 	struct rq *rq;
 	int err;
-
-	if (!p->user_cpus_ptr) {
-		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
-		if (!user_mask)
-			return -ENOMEM;
-	}
+	bool not_empty;
 
 	rq = task_rq_lock(p, &rf);
 
@@ -2981,25 +2969,21 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 		goto err_unlock;
 	}
 
-	if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
+
+	if (p->user_cpus_ptr)
+		not_empty = cpumask_and(new_mask, p->user_cpus_ptr, subset_mask);
+	else
+		not_empty = cpumask_and(new_mask, cpu_online_mask, subset_mask);
+
+	if (!not_empty) {
 		err = -EINVAL;
 		goto err_unlock;
 	}
 
-	/*
-	 * We're about to butcher the task affinity, so keep track of what
-	 * the user asked for in case we're able to restore it later on.
-	 */
-	if (user_mask) {
-		cpumask_copy(user_mask, p->cpus_ptr);
-		p->user_cpus_ptr = user_mask;
-	}
-
 	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
 
 err_unlock:
 	task_rq_unlock(rq, p, &rf);
-	kfree(user_mask);
 	return err;
 }
 
@@ -3049,34 +3033,27 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 }
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
- * call to force_compatible_cpus_allowed_ptr(). This will clear (and free)
- * @p->user_cpus_ptr.
+ * call to force_compatible_cpus_allowed_ptr().
  *
  * It is the caller's responsibility to serialise this with any calls to
  * force_compatible_cpus_allowed_ptr(@p).
 */
 void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 {
-	struct cpumask *user_mask = p->user_cpus_ptr;
-	unsigned long flags;
+	const struct cpumask *user_mask = p->user_cpus_ptr;
+
+	if (!user_mask)
+		user_mask = cpu_online_mask;
 
 	/*
-	 * Try to restore the old affinity mask. If this fails, then
-	 * we free the mask explicitly to avoid it being inherited across
-	 * a subsequent fork().
+	 * Try to restore the old affinity mask with __sched_setaffinity().
+	 * Cpuset masking will be done there too.
 	 */
-	if (!user_mask || !__sched_setaffinity(p, user_mask))
-		return;
-
-	raw_spin_lock_irqsave(&p->pi_lock, flags);
-	user_mask = clear_user_cpus_ptr(p);
-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-
-	kfree(user_mask);
+	__sched_setaffinity(p, user_mask, false);
 }
 
 void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
@@ -8079,10 +8056,11 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 #endif
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask)
 {
 	int retval;
 	cpumask_var_t cpus_allowed, new_mask;
+	struct cpumask *user_mask = NULL;
 
 	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
 		return -ENOMEM;
@@ -8098,8 +8076,33 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 	retval = dl_task_check_affinity(p, new_mask);
 	if (retval)
 		goto out_free_new_mask;
+
+	/*
+	 * Save the user requested mask into user_cpus_ptr if save_mask set.
+	 * pi_lock is used for protecting user_cpus_ptr.
+	 */
+	if (save_mask && !p->user_cpus_ptr) {
+		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
+
+		if (!user_mask) {
+			retval = -ENOMEM;
+			goto out_free_new_mask;
+		}
+	}
+	if (save_mask) {
+		unsigned long flags;
+
+		raw_spin_lock_irqsave(&p->pi_lock, flags);
+		if (!p->user_cpus_ptr) {
+			p->user_cpus_ptr = user_mask;
+			user_mask = NULL;
+		}
+
+		cpumask_copy(p->user_cpus_ptr, mask);
+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+	}
 again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER);
+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
 	if (retval)
 		goto out_free_new_mask;
 
@@ -8113,6 +8116,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 		goto again;
 	}
 
+	kfree(user_mask);
 out_free_new_mask:
 	free_cpumask_var(new_mask);
 out_free_cpus_allowed:
@@ -8156,7 +8160,7 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	if (retval)
 		goto out_put_task;
 
-	retval = __sched_setaffinity(p, in_mask);
+	retval = __sched_setaffinity(p, in_mask, true);
 out_put_task:
 	put_task_struct(p);
 	return retval;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e26688d387ae..15eefcd65faa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2283,7 +2283,6 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 #define SCA_CHECK		0x01
 #define SCA_MIGRATE_DISABLE	0x02
 #define SCA_MIGRATE_ENABLE	0x04
-#define SCA_USER		0x08
 
 #ifdef CONFIG_SMP
 
-- 
2.31.1
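A note on the __sched_setaffinity() hunk above: pi_lock is a raw
spinlock, so the GFP_KERNEL allocation must happen before the lock is
taken, and the buffer is only published under the lock (and discarded
if another path got there first). A distilled userspace analogue of
that allocate-then-publish pattern (an illustrative sketch only; the
names are placeholders, not scheduler symbols, and shared_buf is never
cleared once set, mirroring how user_cpus_ptr persists until exit):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define BUF_LEN 64

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static char *shared_buf;		/* plays the role of p->user_cpus_ptr */

static int save_buf(const char src[BUF_LEN])
{
	char *new_buf = NULL;

	if (!shared_buf) {		/* racy hint; re-checked under the lock */
		new_buf = malloc(BUF_LEN);
		if (!new_buf)
			return -1;	/* mirrors the -ENOMEM path */
	}

	pthread_mutex_lock(&lock);
	if (!shared_buf) {
		shared_buf = new_buf;	/* publish our allocation */
		new_buf = NULL;
	}
	if (shared_buf)
		memcpy(shared_buf, src, BUF_LEN);
	pthread_mutex_unlock(&lock);

	free(new_buf);			/* NULL (no-op) unless another thread won */
	return 0;
}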
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Linus Torvalds, Waiman Long
Subject: [PATCH v5 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask
Date: Tue, 16 Aug 2022 15:27:33 -0400
Message-Id: <20220816192734.67115-3-longman@redhat.com>
In-Reply-To: <20220816192734.67115-1-longman@redhat.com>
References: <20220816192734.67115-1-longman@redhat.com>

Since accessing the content of user_cpus_ptr requires lock protection
to ensure its validity, provide a helper function, copy_user_cpus_mask(),
to facilitate reading it.

Suggested-by: Peter Zijlstra
Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e7b2f8a5c711..f2b0340c094e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1830,6 +1830,7 @@ extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effec
 extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
 extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask);
 extern int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, int node);
+extern struct cpumask *copy_user_cpus_mask(struct task_struct *p, struct cpumask *user_mask);
 extern void release_user_cpus_ptr(struct task_struct *p);
 extern int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask);
 extern void force_compatible_cpus_allowed_ptr(struct task_struct *p);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 03053eebb22e..a0987784913e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2618,6 +2618,25 @@ void release_user_cpus_ptr(struct task_struct *p)
 	kfree(clear_user_cpus_ptr(p));
 }
 
+/*
+ * Return the copied mask pointer or NULL if user mask not available.
+ * Acquiring pi_lock for read access protection.
+ */
+struct cpumask *copy_user_cpus_mask(struct task_struct *p,
+				    struct cpumask *user_mask)
+{
+	struct cpumask *mask = NULL;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&p->pi_lock, flags);
+	if (p->user_cpus_ptr) {
+		cpumask_copy(user_mask, p->user_cpus_ptr);
+		mask = user_mask;
+	}
+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+	return mask;
+}
+
 /*
  * This function is wildly self concurrent; here be dragons.
  *
-- 
2.31.1
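The expected calling pattern for the new helper looks roughly like the
sketch below (an assumption-laden fragment, not code from this series;
patch 3/3 contains the real caller, and do_something_with() is a
hypothetical consumer). The caller supplies the destination storage:

/*
 * Sketch of the intended calling pattern. A NULL return from
 * copy_user_cpus_mask() means no user mask was set at copy time.
 */
static int use_user_mask(struct task_struct *p)
{
	cpumask_var_t buf;

	if (!alloc_cpumask_var(&buf, GFP_KERNEL))
		return -ENOMEM;

	if (copy_user_cpus_mask(p, buf)) {
		/* buf now holds a stable snapshot of p->user_cpus_ptr */
		do_something_with(buf);		/* hypothetical consumer */
	}

	free_cpumask_var(buf);
	return 0;
}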
From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo, Zefan Li, Johannes Weiner, Will Deacon
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Linus Torvalds, Waiman Long
Subject: [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity
Date: Tue, 16 Aug 2022 15:27:34 -0400
Message-Id: <20220816192734.67115-4-longman@redhat.com>
In-Reply-To: <20220816192734.67115-1-longman@redhat.com>
References: <20220816192734.67115-1-longman@redhat.com>

It was found that any change to the current cpuset hierarchy may reset
the cpumask of the tasks in the affected cpusets to the default cpuset
value even if those tasks had their cpu affinity explicitly set by the
user beforehand. This is especially easy to trigger in a cgroup v2
environment, where writing "+cpuset" to the root cgroup's
cgroup.subtree_control file will reset the cpu affinity of all the
processes in the system.
That is problematic in a nohz_full environment, where the tasks running
on the nohz_full CPUs usually have their cpu affinity explicitly set
and will behave incorrectly if that affinity changes.

Fix this problem by looking at user_cpus_ptr, which will be set if cpu
affinity has been explicitly set before, and using it to restrict the
given cpumask unless there is no overlap. In that case, it falls back
to the given one.

To handle possible racing with a concurrent sched_setaffinity() call,
user_cpus_ptr is rechecked after a successful set_cpus_allowed_ptr()
call. If its status has changed, the operation is retried with the
newly assigned user_cpus_ptr.

With this change in place, it was verified that tasks that have their
cpu affinity explicitly set are not affected by changes made to the v2
cgroup.subtree_control files.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset.c | 42 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 58aadfda9b8b..a663848d0459 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -704,6 +704,44 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
 	return ret;
 }
 
+/*
+ * Preserve user provided cpumask (if set) as much as possible unless there
+ * is no overlap with the given mask.
+ */
+static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
+				       const struct cpumask *mask)
+{
+	cpumask_var_t new_mask;
+	int ret;
+
+	if (!READ_ONCE(p->user_cpus_ptr)) {
+		ret = set_cpus_allowed_ptr(p, mask);
+		/*
+		 * If user_cpus_ptr becomes set now, we are racing with
+		 * a concurrent sched_setaffinity(). So use the newly
+		 * set user_cpus_ptr and retry again.
+		 *
+		 * TODO: We cannot detect change in the cpumask pointed to
+		 * by user_cpus_ptr. We will have to add a sequence number
+		 * if such a race needs to be addressed.
+		 */
+		if (ret || !READ_ONCE(p->user_cpus_ptr))
+			return ret;
+	}
+
+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	if (copy_user_cpus_mask(p, new_mask) &&
+	    cpumask_and(new_mask, new_mask, mask))
+		ret = set_cpus_allowed_ptr(p, new_mask);
+	else
+		ret = set_cpus_allowed_ptr(p, mask);
+
+	free_cpumask_var(new_mask);
+	return ret;
+}
+
 #ifdef CONFIG_SMP
 /*
  * Helper routine for generate_sched_domains().
@@ -1130,7 +1168,7 @@ static void update_tasks_cpumask(struct cpuset *cs)
 
 	css_task_iter_start(&cs->css, 0, &it);
 	while ((task = css_task_iter_next(&it)))
-		set_cpus_allowed_ptr(task, cs->effective_cpus);
+		cpuset_set_cpus_allowed_ptr(task, cs->effective_cpus);
 	css_task_iter_end(&it);
 }
 
@@ -2303,7 +2341,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 	 * can_attach beforehand should guarantee that this doesn't
 	 * fail. TODO: have a better way to handle failure here
 	 */
-	WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+	WARN_ON_ONCE(cpuset_set_cpus_allowed_ptr(task, cpus_attach));
 
 	cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
 	cpuset_update_task_spread_flag(cs, task);
-- 
2.31.1
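As a rough illustration of the verification described above, a
userspace check might look like the following. This is a hypothetical
sketch, not a test from the series: the cgroup v2 mount point and the
CPU number are assumptions, and it must run as root on a system with at
least two CPUs.

/*
 * Pin the calling thread, toggle "+cpuset" in the (assumed) cgroup v2
 * root, then check that sched_getaffinity() reports the same mask.
 * Before this series, the write used to reset the affinity.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void write_subtree_control(const char *val)
{
	/* Path assumes a standard cgroup v2 mount. */
	FILE *f = fopen("/sys/fs/cgroup/cgroup.subtree_control", "w");

	if (!f || fprintf(f, "%s\n", val) < 0)
		perror("cgroup.subtree_control");
	if (f)
		fclose(f);
}

int main(void)
{
	cpu_set_t want, got;

	CPU_ZERO(&want);
	CPU_SET(1, &want);	/* assumed present; pick any online CPU */
	if (sched_setaffinity(0, sizeof(want), &want))
		return EXIT_FAILURE;

	write_subtree_control("+cpuset");

	if (sched_getaffinity(0, sizeof(got), &got))
		return EXIT_FAILURE;
	printf("affinity %s\n", CPU_EQUAL(&want, &got) ? "preserved" : "lost");
	return EXIT_SUCCESS;
}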