From nobody Mon Apr 6 20:02:27 2026
Date: Wed, 18 Mar 2026 08:08:40 -0000
From: "tip-bot2 for K Prateek Nayak"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: sched/core] sched/topology: Switch to assigning "sd->shared" from s_data
Cc: K Prateek Nayak, "Peter Zijlstra (Intel)", Dietmar Eggemann, x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20260312044434.1974-5-kprateek.nayak@amd.com>
References: <20260312044434.1974-5-kprateek.nayak@amd.com>
MIME-Version: 1.0
Message-ID: <177382132059.1647592.14815519891010921302.tip-bot2@tip-bot2>
Content-Type: text/plain; charset="utf-8"

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     bb7a5e44fc6f3d5a252d95c48d057d5beccb8b35
Gitweb:        https://git.kernel.org/tip/bb7a5e44fc6f3d5a252d95c48d057d5beccb8b35
Author:        K Prateek Nayak
AuthorDate:    Thu, 12 Mar 2026 04:44:29
Committer:     Peter Zijlstra
CommitterDate: Wed, 18 Mar 2026 09:06:48 +01:00

sched/topology: Switch to assigning "sd->shared" from s_data

Use the "sched_domain_shared" object allocated in s_data for "sd->shared"
assignments. Assign "sd->shared" for the topmost SD_SHARE_LLC domain
before degeneration and rely on the degeneration path to correctly pass
down the shared object to "sd_llc".

sd_degenerate_parent() ensures that degenerating domains have the same
sched_domain_span(), which guarantees a 1:1 hand-down of the shared
object. If the topmost SD_SHARE_LLC domain degenerates, the shared object
is freed from destroy_sched_domain() when the last reference is dropped.

claim_allocations() NULLs out the objects that have been assigned as
"sd->shared"; the unassigned ones are freed from the __sds_free() path.
To keep all the claim_allocations() bits in one place, it has been
extended to accept "s_data" and iterate the domains internally, freeing
both "sched_domain_shared" and the per-topology-level data for the
particular CPU in one place.

Post cpu_attach_domain(), all reclaims of "sd->shared" are handled via
call_rcu() on the sched_domain object via destroy_sched_domains_rcu().
Signed-off-by: K Prateek Nayak
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Dietmar Eggemann
Tested-by: Dietmar Eggemann
Link: https://patch.msgid.link/20260312044434.1974-5-kprateek.nayak@amd.com
---
 kernel/sched/topology.c | 73 ++++++++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 29 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9006586..b19d84f 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -685,6 +685,9 @@ static void update_top_cache_domain(int cpu)
 	if (sd) {
 		id = cpumask_first(sched_domain_span(sd));
 		size = cpumask_weight(sched_domain_span(sd));
+
+		/* If sd_llc exists, sd_llc_shared should exist too. */
+		WARN_ON_ONCE(!sd->shared);
 		sds = sd->shared;
 	}
 
@@ -733,6 +736,13 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 		if (sd_parent_degenerate(tmp, parent)) {
 			tmp->parent = parent->parent;
 
+			/* Pick reference to parent->shared. */
+			if (parent->shared) {
+				WARN_ON_ONCE(tmp->shared);
+				tmp->shared = parent->shared;
+				parent->shared = NULL;
+			}
+
 			if (parent->parent) {
 				parent->parent->child = tmp;
 				parent->parent->groups->flags = tmp->flags;
@@ -1586,21 +1596,28 @@ __visit_domain_allocation_hell(struct s_data *d, const struct cpumask *cpu_map)
  * sched_group structure so that the subsequent __free_domain_allocs()
  * will not free the data we're using.
  */
-static void claim_allocations(int cpu, struct sched_domain *sd)
+static void claim_allocations(int cpu, struct s_data *d)
 {
-	struct sd_data *sdd = sd->private;
+	struct sched_domain *sd;
+
+	if (atomic_read(&(*per_cpu_ptr(d->sds, cpu))->ref))
+		*per_cpu_ptr(d->sds, cpu) = NULL;
 
-	WARN_ON_ONCE(*per_cpu_ptr(sdd->sd, cpu) != sd);
-	*per_cpu_ptr(sdd->sd, cpu) = NULL;
+	for (sd = *per_cpu_ptr(d->sd, cpu); sd; sd = sd->parent) {
+		struct sd_data *sdd = sd->private;
 
-	if (atomic_read(&(*per_cpu_ptr(sdd->sds, cpu))->ref))
-		*per_cpu_ptr(sdd->sds, cpu) = NULL;
+		WARN_ON_ONCE(*per_cpu_ptr(sdd->sd, cpu) != sd);
+		*per_cpu_ptr(sdd->sd, cpu) = NULL;
 
-	if (atomic_read(&(*per_cpu_ptr(sdd->sg, cpu))->ref))
-		*per_cpu_ptr(sdd->sg, cpu) = NULL;
+		if (atomic_read(&(*per_cpu_ptr(sdd->sds, cpu))->ref))
+			*per_cpu_ptr(sdd->sds, cpu) = NULL;
 
-	if (atomic_read(&(*per_cpu_ptr(sdd->sgc, cpu))->ref))
-		*per_cpu_ptr(sdd->sgc, cpu) = NULL;
+		if (atomic_read(&(*per_cpu_ptr(sdd->sg, cpu))->ref))
+			*per_cpu_ptr(sdd->sg, cpu) = NULL;
+
+		if (atomic_read(&(*per_cpu_ptr(sdd->sgc, cpu))->ref))
+			*per_cpu_ptr(sdd->sgc, cpu) = NULL;
+	}
 }
 
 #ifdef CONFIG_NUMA
@@ -1738,16 +1755,6 @@ sd_init(struct sched_domain_topology_level *tl,
 		sd->cache_nice_tries = 1;
 	}
 
-	/*
-	 * For all levels sharing cache; connect a sched_domain_shared
-	 * instance.
-	 */
-	if (sd->flags & SD_SHARE_LLC) {
-		sd->shared = *per_cpu_ptr(sdd->sds, sd_id);
-		atomic_inc(&sd->shared->ref);
-		atomic_set(&sd->shared->nr_busy_cpus, sd_weight);
-	}
-
 	sd->private = sdd;
 
 	return sd;
@@ -2729,12 +2736,20 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 		while (sd->parent && (sd->parent->flags & SD_SHARE_LLC))
 			sd = sd->parent;
 
-		/*
-		 * In presence of higher domains, adjust the
-		 * NUMA imbalance stats for the hierarchy.
-		 */
-		if (IS_ENABLED(CONFIG_NUMA) && (sd->flags & SD_SHARE_LLC) && sd->parent)
-			adjust_numa_imbalance(sd);
+		if (sd->flags & SD_SHARE_LLC) {
+			int sd_id = cpumask_first(sched_domain_span(sd));
+
+			sd->shared = *per_cpu_ptr(d.sds, sd_id);
+			atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
+			atomic_inc(&sd->shared->ref);
+
+			/*
+			 * In presence of higher domains, adjust the
+			 * NUMA imbalance stats for the hierarchy.
+			 */
+			if (IS_ENABLED(CONFIG_NUMA) && sd->parent)
+				adjust_numa_imbalance(sd);
+		}
 	}
 
 	/* Calculate CPU capacity for physical packages and nodes */
@@ -2742,10 +2757,10 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 		if (!cpumask_test_cpu(i, cpu_map))
 			continue;
 
-		for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
-			claim_allocations(i, sd);
+		claim_allocations(i, &d);
+
+		for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent)
 			init_sched_groups_capacity(i, sd);
-		}
 	}
 
 	/* Attach the domains */