From nobody Mon Apr 6 10:45:04 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 2/9] sched/topology: Provide hooks to allocate data shared per LLC
Date: Fri, 20 Mar 2026 05:59:13 +0000
Message-ID: <20260320055920.2518389-3-chenjinghuang2@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Steve Sistare

Add functions sd_llc_alloc_all() and sd_llc_free_all() to allocate and
free data pointed to by struct sched_domain_shared at the
last-level-cache domain. sd_llc_alloc_all() is called after the SD
hierarchy is known, to eliminate the unnecessary allocations that would
occur if we instead allocated in __sdt_alloc() and then figured out
which shared nodes are redundant.
Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
 kernel/sched/topology.c | 75 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 74 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 32dcddaead82..fac1b9155b6e 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -21,6 +21,12 @@ void sched_domains_mutex_unlock(void)
 static cpumask_var_t sched_domains_tmpmask;
 static cpumask_var_t sched_domains_tmpmask2;
 
+struct s_data;
+static int sd_llc_alloc(struct sched_domain *sd);
+static void sd_llc_free(struct sched_domain *sd);
+static int sd_llc_alloc_all(const struct cpumask *cpu_map, struct s_data *d);
+static void sd_llc_free_all(const struct cpumask *cpu_map);
+
 static int __init sched_debug_setup(char *str)
 {
 	sched_debug_verbose = true;
@@ -630,8 +636,10 @@ static void destroy_sched_domain(struct sched_domain *sd)
 	 */
 	free_sched_groups(sd->groups, 1);
 
-	if (sd->shared && atomic_dec_and_test(&sd->shared->ref))
+	if (sd->shared && atomic_dec_and_test(&sd->shared->ref)) {
+		sd_llc_free(sd);
 		kfree(sd->shared);
+	}
 	kfree(sd);
 }
 
@@ -1546,6 +1554,7 @@ static void __free_domain_allocs(struct s_data *d, enum s_alloc what,
 		free_percpu(d->sd);
 		fallthrough;
 	case sa_sd_storage:
+		sd_llc_free_all(cpu_map);
 		__sdt_free(cpu_map);
 		fallthrough;
 	case sa_none:
@@ -2463,6 +2472,62 @@ static void __sdt_free(const struct cpumask *cpu_map)
 	}
 }
 
+static int sd_llc_alloc(struct sched_domain *sd)
+{
+	/* Allocate sd->shared data here. Empty for now. */
+
+	return 0;
+}
+
+static void sd_llc_free(struct sched_domain *sd)
+{
+	struct sched_domain_shared *sds = sd->shared;
+
+	if (!sds)
+		return;
+
+	/* Free data here. Empty for now. */
+}
+
+static int sd_llc_alloc_all(const struct cpumask *cpu_map, struct s_data *d)
+{
+	struct sched_domain *sd, *hsd;
+	int i;
+
+	for_each_cpu(i, cpu_map) {
+		/* Find highest domain that shares resources */
+		hsd = NULL;
+		for (sd = *per_cpu_ptr(d->sd, i); sd; sd = sd->parent) {
+			if (!(sd->flags & SD_SHARE_LLC))
+				break;
+			hsd = sd;
+		}
+		if (hsd && sd_llc_alloc(hsd))
+			return 1;
+	}
+
+	return 0;
+}
+
+static void sd_llc_free_all(const struct cpumask *cpu_map)
+{
+	struct sched_domain_topology_level *tl;
+	struct sched_domain *sd;
+	struct sd_data *sdd;
+	int j;
+
+	for_each_sd_topology(tl) {
+		sdd = &tl->data;
+		if (!sdd || !sdd->sd)
+			continue;
+		for_each_cpu(j, cpu_map) {
+			sd = *per_cpu_ptr(sdd->sd, j);
+			if (sd)
+				sd_llc_free(sd);
+		}
+	}
+}
+
 static struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
 		const struct cpumask *cpu_map, struct sched_domain_attr *attr,
 		struct sched_domain *child, int cpu)
@@ -2674,6 +2739,14 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 		}
 	}
 
+	/*
+	 * Allocate shared sd data at last level cache. Must be done after
+	 * domains are built above, but before the data is used in
+	 * cpu_attach_domain and descendants below.
+	 */
+	if (sd_llc_alloc_all(cpu_map, &d))
+		goto error;
+
 	/* Attach the domains */
 	rcu_read_lock();
 	for_each_cpu(i, cpu_map) {
-- 
2.34.1