From: Waiman Long <longman@redhat.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider
Cc: Phil Auld, Wenjie Li, David Wang 王标, Quentin Perret, Will Deacon,
	linux-kernel@vger.kernel.org, Waiman Long
Subject: [PATCH v6 2/2] sched: Use kfree_rcu() in do_set_cpus_allowed()
Date: Fri, 30 Dec 2022 23:11:20 -0500
Message-Id: <20221231041120.440785-3-longman@redhat.com>
In-Reply-To: <20221231041120.440785-1-longman@redhat.com>
References: <20221231041120.440785-1-longman@redhat.com>

Commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in
do_set_cpus_allowed()") may call kfree() if user_cpus_ptr was previously
set. Unfortunately, some of the callers of do_set_cpus_allowed() may
hold pi_lock when calling it. As a result, the following splats may be
printed, especially when running with a PREEMPT_RT kernel:

   WARNING: possible circular locking dependency detected
   BUG: sleeping function called from invalid context

To avoid these problems, kfree_rcu() is used instead.
An internal cpumask_rcuhead union is created for the sole purpose of
facilitating the use of kfree_rcu() to free the cpumask. Since
user_cpus_ptr is not used in non-SMP configs, the newly introduced
alloc_user_cpus_ptr() helper returns NULL in that case, and
sched_setaffinity() is modified to handle this special case.

Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
Suggested-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
 kernel/sched/core.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b93d030b9fd5..dc68c9a54a71 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2604,9 +2604,29 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 		.user_mask = NULL,
 		.flags     = SCA_USER,	/* clear the user requested mask */
 	};
+	union cpumask_rcuhead {
+		cpumask_t cpumask;
+		struct rcu_head rcu;
+	};
 
 	__do_set_cpus_allowed(p, &ac);
-	kfree(ac.user_mask);
+
+	/*
+	 * Because this is called with p->pi_lock held, it is not possible
+	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
+	 * kfree_rcu().
+	 */
+	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
+}
+
+static cpumask_t *alloc_user_cpus_ptr(int node)
+{
+	/*
+	 * See do_set_cpus_allowed() above for the rcu_head usage.
+	 */
+	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
+
+	return kmalloc_node(size, GFP_KERNEL, node);
 }
 
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
@@ -2629,7 +2649,7 @@ int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 	if (data_race(!src->user_cpus_ptr))
 		return 0;
 
-	user_mask = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
+	user_mask = alloc_user_cpus_ptr(node);
 	if (!user_mask)
 		return -ENOMEM;
 
@@ -3605,6 +3625,11 @@ static inline bool rq_has_pinned_tasks(struct rq *rq)
 	return false;
 }
 
+static inline cpumask_t *alloc_user_cpus_ptr(int node)
+{
+	return NULL;
+}
+
 #endif /* !CONFIG_SMP */
 
 static void
@@ -8263,8 +8288,8 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	if (retval)
 		goto out_put_task;
 
-	user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
-	if (!user_mask) {
+	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
+	if (IS_ENABLED(CONFIG_SMP) && !user_mask) {
 		retval = -ENOMEM;
 		goto out_put_task;
 	}
-- 
2.31.1
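
For reference only (not part of the patch): a minimal, self-contained
sketch of the union-overlay idiom used above, with invented demo_* names.
The allocation is sized so that a struct rcu_head always fits, which lets
kfree_rcu() reuse the buffer as its rcu_head and defer the actual kfree()
to RCU callback context, so the release is safe even with a raw spinlock
such as pi_lock held on a PREEMPT_RT kernel.

	#include <linux/cpumask.h>
	#include <linux/minmax.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	/* Overlay an rcu_head on the cpumask storage, as in the patch above. */
	union demo_cpumask_rcuhead {
		cpumask_t cpumask;
		struct rcu_head rcu;
	};

	static cpumask_t *demo_alloc_mask(int node)
	{
		/* Never allocate less than sizeof(struct rcu_head). */
		int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));

		return kmalloc_node(size, GFP_KERNEL, node);
	}

	static void demo_free_mask(cpumask_t *mask)
	{
		/* Deferred free; the kfree() runs later from RCU callback context. */
		kfree_rcu((union demo_cpumask_rcuhead *)mask, rcu);
	}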