From: David Wang 王标
To: Waiman Long, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira
Cc: Phil Auld, Wenjie Li, linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [External Mail][PATCH-tip] sched: Fix use-after-free bug in dup_user_cpus_ptr()
Date: Mon, 28 Nov 2022 13:34:48 +0000
Message-ID: <63373bf9adfc4e0abd9480d40afa2c5a@xiaomi.com>
In-Reply-To: <20221128014441.1264867-1-longman@redhat.com>
References: <20221128014441.1264867-1-longman@redhat.com>

Hi Waiman,

We tested this patch on 140 devices for 72 hours and the issue could not be reproduced. Without this patch, the issue is still reproducible.

Could you help merge this patch to mainline?
https://lore.kernel.org/all/20221125023943.1118603-1-longman@redhat.com/

Once the patch is in the maintainer's tree, we can ask Google to help cherry-pick it into ACK to fix the issue.
Thanks

Reported-by: David Wang 王标

-----Original Message-----
From: Waiman Long
Sent: 28 November 2022 9:45
To: Ingo Molnar; Peter Zijlstra; Juri Lelli; Vincent Guittot; Dietmar Eggemann; Steven Rostedt; Ben Segall; Mel Gorman; Daniel Bristot de Oliveira
Cc: Phil Auld; Wenjie Li; David Wang 王标; linux-kernel@vger.kernel.org; Waiman Long; stable@vger.kernel.org
Subject: [External Mail][PATCH-tip] sched: Fix use-after-free bug in dup_user_cpus_ptr()

Since commit 07ec77a1d4e8 ("sched: Allow task CPU affinity to be restricted on asymmetric systems"), the setting and clearing of user_cpus_ptr are done under pi_lock for the arm64 architecture. However, dup_user_cpus_ptr() accesses user_cpus_ptr without any lock protection. When racing with the clearing of user_cpus_ptr in __set_cpus_allowed_ptr_locked(), it can lead to use-after-free and double-free in the arm64 kernel.

Commit 8f9ea86fdf99 ("sched: Always preserve the user requested cpumask") fixes this problem as user_cpus_ptr, once set, will never be cleared in a task's lifetime. However, this bug was re-introduced in commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()"), which allows the clearing of user_cpus_ptr in do_set_cpus_allowed(). This time, it will affect all arches.

Fix this bug by always clearing the user_cpus_ptr of the newly cloned/forked task before the copying process starts and checking the user_cpus_ptr state of the source task under pi_lock.

Note to stable, this patch won't be applicable to stable releases. Just copy the new dup_user_cpus_ptr() function over.

Fixes: 07ec77a1d4e8 ("sched: Allow task CPU affinity to be restricted on asymmetric systems")
Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
CC: stable@vger.kernel.org
Reported-by: David Wang 王标
Signed-off-by: Waiman Long
---
 kernel/sched/core.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8df51b08bb38..f2b75faaf71a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2624,19 +2624,43 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 		      int node)
 {
+	cpumask_t *user_mask;
 	unsigned long flags;
 
+	/*
+	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
+	 * may differ by now due to racing.
+	 */
+	dst->user_cpus_ptr = NULL;
+
+	/*
+	 * This check is racy and losing the race is a valid situation.
+	 * It is not worth the extra overhead of taking the pi_lock on
+	 * every fork/clone.
+	 */
 	if (!src->user_cpus_ptr)
 		return 0;
 
-	dst->user_cpus_ptr = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
-	if (!dst->user_cpus_ptr)
+	user_mask = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
+	if (!user_mask)
 		return -ENOMEM;
 
-	/* Use pi_lock to protect content of user_cpus_ptr */
+	/*
+	 * Use pi_lock to protect content of user_cpus_ptr
+	 *
+	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
+	 * do_set_cpus_allowed().
+	 */
 	raw_spin_lock_irqsave(&src->pi_lock, flags);
-	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	if (src->user_cpus_ptr) {
+		swap(dst->user_cpus_ptr, user_mask);
+		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	}
 	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
+
+	if (unlikely(user_mask))
+		kfree(user_mask);
+
 	return 0;
 }
--
2.31.1
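
For stable backports, per the note above, the whole new dup_user_cpus_ptr() can simply be copied over. Reassembled from the hunk above (a sketch, so please double-check it against the tip tree before copying), the function in kernel/sched/core.c ends up looking like this:

int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
		      int node)
{
	cpumask_t *user_mask;
	unsigned long flags;

	/*
	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
	 * may differ by now due to racing.
	 */
	dst->user_cpus_ptr = NULL;

	/*
	 * This check is racy and losing the race is a valid situation.
	 * It is not worth the extra overhead of taking the pi_lock on
	 * every fork/clone.
	 */
	if (!src->user_cpus_ptr)
		return 0;

	user_mask = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
	if (!user_mask)
		return -ENOMEM;

	/*
	 * Use pi_lock to protect content of user_cpus_ptr
	 *
	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
	 * do_set_cpus_allowed().
	 */
	raw_spin_lock_irqsave(&src->pi_lock, flags);
	if (src->user_cpus_ptr) {
		/* src still has a user mask: hand the fresh buffer to dst and copy. */
		swap(dst->user_cpus_ptr, user_mask);
		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
	}
	raw_spin_unlock_irqrestore(&src->pi_lock, flags);

	/* If the temporary buffer was not consumed above, free it. */
	if (unlikely(user_mask))
		kfree(user_mask);

	return 0;
}

The key point is that dst->user_cpus_ptr is only populated while src->pi_lock is held and src->user_cpus_ptr has been re-checked under that lock, so a concurrent do_set_cpus_allowed() clearing it can no longer leave dst pointing at freed memory.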