From nobody Tue Sep 16 18:12:02 2025
From: Waiman Long
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Valentin Schneider
Cc: Phil Auld, Wenjie Li, David Wang 王标, Quentin Perret, Will Deacon,
    linux-kernel@vger.kernel.org, Waiman Long, stable@vger.kernel.org
Subject: [PATCH v5 1/2] sched: Fix use-after-free bug in dup_user_cpus_ptr()
Date: Fri, 30 Dec 2022 10:32:17 -0500
Message-Id: <20221230153218.354214-2-longman@redhat.com>
In-Reply-To: <20221230153218.354214-1-longman@redhat.com>
References: <20221230153218.354214-1-longman@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Since commit 07ec77a1d4e8 ("sched: Allow task CPU affinity to be
restricted on asymmetric systems"), the setting and clearing of
user_cpus_ptr are done under pi_lock for the arm64 architecture.
However, dup_user_cpus_ptr() accesses user_cpus_ptr without any lock
protection. Since sched_setaffinity() can be invoked from another
process, the process being modified may be undergoing fork() at the
same time.
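
For illustration, below is a simplified sketch of one losing
interleaving (a sketch only, not part of the patch; the exact call path
on the sched_setaffinity() side varies). The key point is that the
child's task_struct starts out as a byte copy of the parent's, so
dst->user_cpus_ptr initially aliases the parent's mask:

  fork() side                              sched_setaffinity() side
  -----------                              ------------------------
  dup_task_struct()
    /* child->user_cpus_ptr aliases
       the parent's user mask */
                                           parent's user_cpus_ptr is set
                                           to NULL and the old mask is
                                           kfree()'d
  dup_user_cpus_ptr(child, parent, ..)
    if (!src->user_cpus_ptr)  /* now NULL */
            return 0;
    /* child->user_cpus_ptr still points at
       the freed mask; any later use or
       kfree() of it is a use-after-free or
       double-free */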
When racing with the clearing of user_cpus_ptr in
__set_cpus_allowed_ptr_locked(), it can lead to a use-after-free and
possibly a double-free in the arm64 kernel.

Commit 8f9ea86fdf99 ("sched: Always preserve the user requested
cpumask") fixed this problem by ensuring that user_cpus_ptr, once set,
is never cleared for the rest of a task's lifetime. However, the bug
was re-introduced by commit 851a723e45d1 ("sched: Always clear
user_cpus_ptr in do_set_cpus_allowed()"), which allows user_cpus_ptr to
be cleared in do_set_cpus_allowed(). This time it affects all arches.

Fix this bug by always clearing the user_cpus_ptr of the newly
cloned/forked task before the copy starts, and by checking the
user_cpus_ptr state of the source task under pi_lock.

Note to stable: this patch won't apply cleanly to stable releases; just
copy the new dup_user_cpus_ptr() function over.

Fixes: 07ec77a1d4e8 ("sched: Allow task CPU affinity to be restricted on asymmetric systems")
Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
CC: stable@vger.kernel.org
Reported-by: David Wang 王标
Signed-off-by: Waiman Long
---
 kernel/sched/core.c | 34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 25b582b6ee5f..b93d030b9fd5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2612,19 +2612,43 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 		      int node)
 {
+	cpumask_t *user_mask;
 	unsigned long flags;
 
-	if (!src->user_cpus_ptr)
+	/*
+	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
+	 * may differ by now due to racing.
+	 */
+	dst->user_cpus_ptr = NULL;
+
+	/*
+	 * This check is racy and losing the race is a valid situation.
+	 * It is not worth the extra overhead of taking the pi_lock on
+	 * every fork/clone.
+	 */
+	if (data_race(!src->user_cpus_ptr))
 		return 0;
 
-	dst->user_cpus_ptr = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
-	if (!dst->user_cpus_ptr)
+	user_mask = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
+	if (!user_mask)
 		return -ENOMEM;
 
-	/* Use pi_lock to protect content of user_cpus_ptr */
+	/*
+	 * Use pi_lock to protect content of user_cpus_ptr
+	 *
+	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
+	 * do_set_cpus_allowed().
+	 */
 	raw_spin_lock_irqsave(&src->pi_lock, flags);
-	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	if (src->user_cpus_ptr) {
+		swap(dst->user_cpus_ptr, user_mask);
+		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	}
 	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
+
+	if (unlikely(user_mask))
+		kfree(user_mask);
+
 	return 0;
 }
 
-- 
2.31.1

From nobody Tue Sep 16 18:12:02 2025
From: Waiman Long
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Valentin Schneider
Cc: Phil Auld, Wenjie Li, David Wang 王标, Quentin Perret, Will Deacon,
    linux-kernel@vger.kernel.org, Waiman Long
Subject: [PATCH v5 2/2] sched: Use kfree_rcu() in do_set_cpus_allowed()
Date: Fri, 30 Dec 2022 10:32:18 -0500
Message-Id: <20221230153218.354214-3-longman@redhat.com>
In-Reply-To: <20221230153218.354214-1-longman@redhat.com>
References: <20221230153218.354214-1-longman@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Since commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in
do_set_cpus_allowed()"), do_set_cpus_allowed() may call kfree() if
user_cpus_ptr was previously set.
Unfortunately, some of the callers of do_set_cpus_allowed() may have
pi_lock held when calling it. So the following splats may be printed,
especially when running with a PREEMPT_RT kernel:

   WARNING: possible circular locking dependency detected
   BUG: sleeping function called from invalid context

To avoid these problems, kfree_rcu() is used instead. An internal
cpumask_rcuhead union is created for the sole purpose of facilitating
the use of kfree_rcu() to free the cpumask.

Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
Suggested-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
 kernel/sched/core.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b93d030b9fd5..31a14650bd7e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2604,9 +2604,29 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 		.user_mask = NULL,
 		.flags     = SCA_USER,	/* clear the user requested mask */
 	};
+	union cpumask_rcuhead {
+		cpumask_t cpumask;
+		struct rcu_head rcu;
+	};
 
 	__do_set_cpus_allowed(p, &ac);
-	kfree(ac.user_mask);
+
+	/*
+	 * Because this is called with p->pi_lock held, it is not possible
+	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
+	 * kfree_rcu().
+	 */
+	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
+}
+
+static cpumask_t *alloc_user_cpus_ptr(int node)
+{
+	/*
+	 * See do_set_cpus_allowed() above for the rcu_head usage.
+	 */
+	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
+
+	return kmalloc_node(size, GFP_KERNEL, node);
 }
 
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
@@ -2629,7 +2649,7 @@ int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 	if (data_race(!src->user_cpus_ptr))
 		return 0;
 
-	user_mask = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
+	user_mask = alloc_user_cpus_ptr(node);
 	if (!user_mask)
 		return -ENOMEM;
 
@@ -8263,7 +8283,7 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	if (retval)
 		goto out_put_task;
 
-	user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
+	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
 	if (!user_mask) {
 		retval = -ENOMEM;
 		goto out_put_task;
-- 
2.31.1
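
As a footnote to patch 2, the cpumask_rcuhead trick can be distilled
into the following self-contained sketch. It is illustrative only and
not part of either patch; example_alloc_mask() and example_free_mask()
are made-up names, while the union, the max_t() sizing and the
kfree_rcu() call mirror alloc_user_cpus_ptr() and do_set_cpus_allowed()
above.

#include <linux/cpumask.h>
#include <linux/minmax.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Overlay an rcu_head on the cpumask buffer so it can be RCU-freed. */
union cpumask_rcuhead {
	cpumask_t cpumask;
	struct rcu_head rcu;
};

static cpumask_t *example_alloc_mask(int node)
{
	/* Big enough for whichever member of the union is larger. */
	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));

	return kmalloc_node(size, GFP_KERNEL, node);
}

static void example_free_mask(cpumask_t *mask)
{
	/*
	 * Defer the actual kfree() to RCU callback context, after a grace
	 * period, instead of calling kfree() directly in a context where
	 * that is problematic (e.g. with a raw spinlock such as pi_lock
	 * held on PREEMPT_RT).
	 */
	kfree_rcu((union cpumask_rcuhead *)mask, rcu);
}

Because the rcu_head simply overlays the cpumask storage, which is no
longer needed once the mask is being freed, no separate allocation is
required to defer the kfree() past the locked region.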