From: K Prateek Nayak
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Anna-Maria Behnsen, Frederic Weisbecker, Thomas Gleixner
CC: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, K Prateek Nayak, "Gautham R. Shenoy", Swapnil Sapkal, Shrikanth Hegde, Chen Yu
Subject: [RESEND RFC PATCH v2 03/29] sched/topology: Optimize sd->shared allocation and assignment
Date: Mon, 8 Dec 2025 09:26:49 +0000
Message-ID: <20251208092744.32737-3-kprateek.nayak@amd.com>
In-Reply-To: <20251208083602.31898-1-kprateek.nayak@amd.com>
References: <20251208083602.31898-1-kprateek.nayak@amd.com>
X-Mailer: git-send-email 2.43.0
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0

sd->shared is only ever used via sd_llc_shared; the sd->shared of lower
domains is never referenced by the scheduler. A subsequent optimization
will bloat the size of sd->shared, and allocating a redundant
"sd->shared" for every topology level can be expensive.

sd_llc is always a !SD_OVERLAP domain and its children are always
subsets of its span, so the degeneration of an SD_SHARE_LLC domain
implies that it either contains a single CPU, or that its span matches
that of its child domain.

Initialize a single level of per-CPU sched_domain_shared objects
instead of doing so for each topology level. Assign sd->shared to the
highest SD_SHARE_LLC domain and rely on the degeneration path to pass
it down to the correct topology level.

Reviewed-by: Gautham R. Shenoy
Signed-off-by: K Prateek Nayak
---
 include/linux/sched/topology.h |   1 -
 kernel/sched/topology.c        | 101 +++++++++++++++++++++++----------
 2 files changed, 71 insertions(+), 31 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 45c0022b91ce..fc3d89160513 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -171,7 +171,6 @@ typedef int (*sched_domain_flags_f)(void);
 
 struct sd_data {
 	struct sched_domain *__percpu *sd;
-	struct sched_domain_shared *__percpu *sds;
 	struct sched_group *__percpu *sg;
 	struct sched_group_capacity *__percpu *sgc;
 };
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index cf643a5ddedd..14be90af9761 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -679,6 +679,9 @@ static void update_top_cache_domain(int cpu)
 	if (sd) {
 		id = cpumask_first(sched_domain_span(sd));
 		size = cpumask_weight(sched_domain_span(sd));
+
+		/* If sd_llc exists, sd_llc_shared should exist too. */
+		WARN_ON_ONCE(!sd->shared);
 		sds = sd->shared;
 	}
 
@@ -727,6 +730,13 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 		if (sd_parent_degenerate(tmp, parent)) {
 			tmp->parent = parent->parent;
 
+			/* Pick reference to parent->shared. */
+			if (parent->shared) {
+				WARN_ON_ONCE(tmp->shared);
+				tmp->shared = parent->shared;
+				parent->shared = NULL;
+			}
+
 			if (parent->parent) {
 				parent->parent->child = tmp;
 				parent->parent->groups->flags = tmp->flags;
@@ -776,6 +786,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 }
 
 struct s_data {
+	struct sched_domain_shared * __percpu *sds;
 	struct sched_domain * __percpu *sd;
 	struct root_domain	*rd;
 };
@@ -783,6 +794,7 @@ struct s_data {
 enum s_alloc {
 	sa_rootdomain,
 	sa_sd,
+	sa_sd_shared,
 	sa_sd_storage,
 	sa_none,
 };
@@ -1529,6 +1541,9 @@ static void set_domain_attribute(struct sched_domain *sd,
 static void __sdt_free(const struct cpumask *cpu_map);
 static int __sdt_alloc(const struct cpumask *cpu_map);
 
+static void __sds_free(struct s_data *d, const struct cpumask *cpu_map);
+static int __sds_alloc(struct s_data *d, const struct cpumask *cpu_map);
+
 static void __free_domain_allocs(struct s_data *d, enum s_alloc what,
 				 const struct cpumask *cpu_map)
 {
@@ -1540,6 +1555,9 @@ static void __free_domain_allocs(struct s_data *d, enum s_alloc what,
 	case sa_sd:
 		free_percpu(d->sd);
 		fallthrough;
+	case sa_sd_shared:
+		__sds_free(d, cpu_map);
+		fallthrough;
 	case sa_sd_storage:
 		__sdt_free(cpu_map);
 		fallthrough;
@@ -1555,9 +1573,11 @@ __visit_domain_allocation_hell(struct s_data *d, const struct cpumask *cpu_map)
 
 	if (__sdt_alloc(cpu_map))
 		return sa_sd_storage;
+	if (__sds_alloc(d, cpu_map))
+		return sa_sd_shared;
 	d->sd = alloc_percpu(struct sched_domain *);
 	if (!d->sd)
-		return sa_sd_storage;
+		return sa_sd_shared;
 	d->rd = alloc_rootdomain();
 	if (!d->rd)
 		return sa_sd;
@@ -1577,9 +1597,6 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
 	WARN_ON_ONCE(*per_cpu_ptr(sdd->sd, cpu) != sd);
 	*per_cpu_ptr(sdd->sd, cpu) = NULL;
 
-	if (atomic_read(&(*per_cpu_ptr(sdd->sds, cpu))->ref))
-		*per_cpu_ptr(sdd->sds, cpu) = NULL;
-
 	if (atomic_read(&(*per_cpu_ptr(sdd->sg, cpu))->ref))
 		*per_cpu_ptr(sdd->sg, cpu) = NULL;
 
@@ -1722,16 +1739,6 @@ sd_init(struct sched_domain_topology_level *tl,
 		sd->cache_nice_tries = 1;
 	}
 
-	/*
-	 * For all levels sharing cache; connect a sched_domain_shared
-	 * instance.
-	 */
-	if (sd->flags & SD_SHARE_LLC) {
-		sd->shared = *per_cpu_ptr(sdd->sds, sd_id);
-		atomic_inc(&sd->shared->ref);
-		atomic_set(&sd->shared->nr_busy_cpus, sd_weight);
-	}
-
 	sd->private = sdd;
 
 	return sd;
@@ -2367,10 +2374,6 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 		if (!sdd->sd)
 			return -ENOMEM;
 
-		sdd->sds = alloc_percpu(struct sched_domain_shared *);
-		if (!sdd->sds)
-			return -ENOMEM;
-
 		sdd->sg = alloc_percpu(struct sched_group *);
 		if (!sdd->sg)
 			return -ENOMEM;
@@ -2381,7 +2384,6 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 
 		for_each_cpu(j, cpu_map) {
 			struct sched_domain *sd;
-			struct sched_domain_shared *sds;
 			struct sched_group *sg;
 			struct sched_group_capacity *sgc;
 
@@ -2392,13 +2394,6 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 
 			*per_cpu_ptr(sdd->sd, j) = sd;
 
-			sds = kzalloc_node(sizeof(struct sched_domain_shared),
-					GFP_KERNEL, cpu_to_node(j));
-			if (!sds)
-				return -ENOMEM;
-
-			*per_cpu_ptr(sdd->sds, j) = sds;
-
 			sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size(),
 					GFP_KERNEL, cpu_to_node(j));
 			if (!sg)
@@ -2440,8 +2435,6 @@ static void __sdt_free(const struct cpumask *cpu_map)
 				kfree(*per_cpu_ptr(sdd->sd, j));
 			}
 
-			if (sdd->sds)
-				kfree(*per_cpu_ptr(sdd->sds, j));
 			if (sdd->sg)
 				kfree(*per_cpu_ptr(sdd->sg, j));
 			if (sdd->sgc)
@@ -2449,8 +2442,6 @@ static void __sdt_free(const struct cpumask *cpu_map)
 	}
 	free_percpu(sdd->sd);
 	sdd->sd = NULL;
-	free_percpu(sdd->sds);
-	sdd->sds = NULL;
 	free_percpu(sdd->sg);
 	sdd->sg = NULL;
 	free_percpu(sdd->sgc);
@@ -2458,6 +2449,42 @@ static void __sdt_free(const struct cpumask *cpu_map)
 	}
 }
 
+static int __sds_alloc(struct s_data *d, const struct cpumask *cpu_map)
+{
+	int j;
+
+	d->sds = alloc_percpu(struct sched_domain_shared *);
+	if (!d->sds)
+		return -ENOMEM;
+
+	for_each_cpu(j, cpu_map) {
+		struct sched_domain_shared *sds;
+
+		sds = kzalloc_node(sizeof(struct sched_domain_shared),
+				   GFP_KERNEL, cpu_to_node(j));
+		if (!sds)
+			return -ENOMEM;
+
+		*per_cpu_ptr(d->sds, j) = sds;
+	}
+
+	return 0;
+}
+
+static void __sds_free(struct s_data *d, const struct cpumask *cpu_map)
+{
+	int j;
+
+	if (!d->sds)
+		return;
+
+	for_each_cpu(j, cpu_map)
+		kfree(*per_cpu_ptr(d->sds, j));
+
+	free_percpu(d->sds);
+	d->sds = NULL;
+}
+
 static struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
 		const struct cpumask *cpu_map, struct sched_domain_attr *attr,
 		struct sched_domain *child, int cpu)
@@ -2609,8 +2636,19 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 		unsigned int imb_span = 1;
 
 		for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
+			struct sched_domain *parent = sd->parent;
 			struct sched_domain *child = sd->child;
 
+			/* Attach sd->shared to the topmost SD_SHARE_LLC domain. */
+			if ((sd->flags & SD_SHARE_LLC) &&
+			    (!parent || !(parent->flags & SD_SHARE_LLC))) {
+				int llc_id = cpumask_first(sched_domain_span(sd));
+
+				sd->shared = *per_cpu_ptr(d.sds, llc_id);
+				atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
+				atomic_inc(&sd->shared->ref);
+			}
+
 			if (!(sd->flags & SD_SHARE_LLC) && child &&
 			    (child->flags & SD_SHARE_LLC)) {
 				struct sched_domain __rcu *top_p;
@@ -2663,6 +2701,9 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 		if (!cpumask_test_cpu(i, cpu_map))
 			continue;
 
+		if (atomic_read(&(*per_cpu_ptr(d.sds, i))->ref))
+			*per_cpu_ptr(d.sds, i) = NULL;
+
 		for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
 			claim_allocations(i, sd);
 			init_sched_groups_capacity(i, sd);
-- 
2.43.0