From nobody Mon Apr 6 09:13:47 2026 Received: from canpmsgout05.his.huawei.com (canpmsgout05.his.huawei.com [113.46.200.220]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7303730E836 for ; Fri, 20 Mar 2026 06:20:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=113.46.200.220 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773987617; cv=none; b=SHrQAxFunMeZdyYTFZg453zx2Q62TkQ6YlYWAuVtK5ZrfxQtZl04GUeGGLS7CWtNAo804NOzgxUPveF7vJnntNha/hDZWTrKa7GqrWG8fkf69SGPU61OiGdffm4ixFRLYgMFy+R2c+Cq4f1dXIMcO5AkmnmO/G+Xcl25oWIML7w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773987617; c=relaxed/simple; bh=+lVXWM24+8C66fkGwZxGQ1rwb4cnsAGZBXsXoBuodNM=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=RBZK9t/WPZwQCYRCtNMet/dHht/1/5CFiP1wNJp4KIxGU+0KxSp64gXPovrYEUMzucepYqSjHZ1uF+d3K5BmxboC0ZrHQMMi1wXAiQ3mRNjvgLnHTDAuz8LeO29qMV/rY5DahefzrOtStI68QyRLUjeW/5YvKmAOIoLs8k01wCU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b=JQWNhFaM; arc=none smtp.client-ip=113.46.200.220 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b="JQWNhFaM" dkim-signature: v=1; a=rsa-sha256; d=huawei.com; s=dkim; c=relaxed/relaxed; q=dns/txt; h=From; bh=pCXPt3s6THniOnQ7Gczb4jK4tbuJO22ZZFl93CbHycg=; b=JQWNhFaM1vmRrNS3iMbvJ9M004eph6BfQlUm9ly1BvRmn6Jk7UIdwnSCZCF/2cic/XVE1assn 
D07uOZPGtGhRJOeAmUNlOgbATSc7ychf33LHeqUPBbrZVS7vkHirzLLxO/eSs/iwr9iDS5Ww4Y1 TSK5VKH1wVvdMVhtsnqESK4= Received: from mail.maildlp.com (unknown [172.19.162.140]) by canpmsgout05.his.huawei.com (SkyGuard) with ESMTPS id 4fcXMz4qdvz12LDG; Fri, 20 Mar 2026 14:14:39 +0800 (CST) Received: from kwepemr500016.china.huawei.com (unknown [7.202.195.68]) by mail.maildlp.com (Postfix) with ESMTPS id 01ABF2012A; Fri, 20 Mar 2026 14:20:10 +0800 (CST) Received: from huawei.com (10.67.174.242) by kwepemr500016.china.huawei.com (7.202.195.68) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Fri, 20 Mar 2026 14:20:09 +0800 From: Chen Jinghuang To: , , , , CC: , , , , , , , Subject: [RFC PATCH v5 1/9] sched: Provide sparsemask, a reduced contention bitmap Date: Fri, 20 Mar 2026 05:59:12 +0000 Message-ID: <20260320055920.2518389-2-chenjinghuang2@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com> References: <20260320055920.2518389-1-chenjinghuang2@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: kwepems500001.china.huawei.com (7.221.188.70) To kwepemr500016.china.huawei.com (7.202.195.68) Content-Type: text/plain; charset="utf-8" From: Steve Sistare Provide struct sparsemask and functions to manipulate it. A sparsemask is a sparse bitmap. It reduces cache contention vs the usual bitmap when many threads concurrently set, clear, and visit elements, by reducing the number of significant bits per cacheline. For each cacheline chunk of the mask, only the first K bits of the first word are used, and the remaining bits are ignored, where K is a creation time parameter. Thus a sparsemask that can represent a set of N elements is approximately (N/K * CACHELINE) bytes in size. 
This type is simpler and more efficient than the struct sbitmap used by
block drivers.

Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
 kernel/sched/sparsemask.h | 210 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 210 insertions(+)
 create mode 100644 kernel/sched/sparsemask.h

diff --git a/kernel/sched/sparsemask.h b/kernel/sched/sparsemask.h
new file mode 100644
index 000000000000..11948620a1a2
--- /dev/null
+++ b/kernel/sched/sparsemask.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * sparsemask.h - sparse bitmap operations
+ *
+ * Copyright (c) 2018 Oracle Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __LINUX_SPARSEMASK_H
+#define __LINUX_SPARSEMASK_H
+
+#include
+#include
+#include
+
+/*
+ * A sparsemask is a sparse bitmap. It reduces cache contention vs the usual
+ * bitmap when many threads concurrently set, clear, and visit elements. For
+ * each cacheline chunk of the mask, only the first K bits of the first word are
+ * used, and the remaining bits are ignored, where K is a creation time
+ * parameter. Thus a sparsemask that can represent a set of N elements is
+ * approximately (N/K * CACHELINE) bytes in size.
+ *
+ * Clients pass and receive element numbers in the public API, and the
+ * implementation translates them to bit numbers to perform the bitmap
+ * operations.
+ */
+
+struct sparsemask_chunk {
+	unsigned long word;	/* the significant bits */
+} ____cacheline_aligned_in_smp;
+
+struct sparsemask {
+	short nelems;		/* current number of elements */
+	short density;		/* store 2^density elements per chunk */
+	struct sparsemask_chunk chunks[0];  /* embedded array of chunks */
+};
+
+#define _SMASK_INDEX(density, elem)	((elem) >> (density))
+#define _SMASK_BIT(density, elem)	((elem) & ((1U << (density)) - 1U))
+#define SMASK_INDEX(mask, elem)		_SMASK_INDEX((mask)->density, elem)
+#define SMASK_BIT(mask, elem)		_SMASK_BIT((mask)->density, elem)
+#define SMASK_WORD(mask, elem)		\
+	(&(mask)->chunks[SMASK_INDEX((mask), (elem))].word)
+
+/*
+ * sparsemask_next() - Return the next one bit in a bitmap, starting at a
+ * specified position and wrapping from the last bit to the first, up to but
+ * not including a specified origin.  This is a helper, so do not call it
+ * directly.
+ *
+ * @mask: Bitmap to search.
+ * @origin: Origin.
+ * @prev: Previous bit. Start search after this bit number.
+ *	  If -1, start search at @origin.
+ *
+ * Return: the bit number, else mask->nelems if no bits are set in the range.
+ */
+static inline int
+sparsemask_next(const struct sparsemask *mask, int origin, int prev)
+{
+	int density = mask->density;
+	int bits_per_word = 1U << density;
+	const struct sparsemask_chunk *chunk;
+	int nelems = mask->nelems;
+	int next, bit, nbits;
+	unsigned long word;
+
+	/* Calculate number of bits to be searched. */
+	if (prev == -1) {
+		nbits = nelems;
+		next = origin;
+	} else if (prev < origin) {
+		nbits = origin - prev;
+		next = prev + 1;
+	} else {
+		nbits = nelems - prev + origin - 1;
+		next = prev + 1;
+	}
+
+	if (unlikely(next >= nelems))
+		return nelems;
+
+	/*
+	 * Fetch and adjust first word.  Clear word bits below @next, and round
+	 * @next down to @bits_per_word boundary because later ffs will add
+	 * those bits back.
+	 */
+	chunk = &mask->chunks[_SMASK_INDEX(density, next)];
+	bit = _SMASK_BIT(density, next);
+	word = chunk->word & (~0UL << bit);
+	next -= bit;
+	nbits += bit;
+
+	while (!word) {
+		next += bits_per_word;
+		nbits -= bits_per_word;
+		if (nbits <= 0)
+			return nelems;
+
+		if (next >= nelems) {
+			chunk = mask->chunks;
+			nbits -= (next - nelems);
+			next = 0;
+		} else {
+			chunk++;
+		}
+		word = chunk->word;
+	}
+
+	next += __ffs(word);
+	if (next >= origin && prev != -1)
+		return nelems;
+	return next;
+}
+
+/****************** The public API ********************/
+
+/*
+ * Max value for the density parameter, limited by 64 bits in the chunk word.
+ */
+#define SMASK_DENSITY_MAX	6
+
+/*
+ * Return bytes to allocate for a sparsemask, for custom allocators.
+ */
+static inline size_t sparsemask_size(int nelems, int density)
+{
+	int index = _SMASK_INDEX(density, nelems) + 1;
+
+	return offsetof(struct sparsemask, chunks[index]);
+}
+
+/*
+ * Initialize an allocated sparsemask, for custom allocators.
+ */
+static inline void
+sparsemask_init(struct sparsemask *mask, int nelems, int density)
+{
+	WARN_ON(density < 0 || density > SMASK_DENSITY_MAX || nelems < 0);
+	mask->nelems = nelems;
+	mask->density = density;
+}
+
+/*
+ * sparsemask_alloc_node() - Allocate, initialize, and return a sparsemask.
+ *
+ * @nelems - maximum number of elements.
+ * @density - store 2^density elements per cacheline chunk.
+ *	      values from 0 to SMASK_DENSITY_MAX inclusive.
+ * @flags - kmalloc allocation flags
+ * @node - numa node
+ */
+static inline struct sparsemask *
+sparsemask_alloc_node(int nelems, int density, gfp_t flags, int node)
+{
+	int nbytes = sparsemask_size(nelems, density);
+	struct sparsemask *mask = kmalloc_node(nbytes, flags, node);
+
+	if (mask)
+		sparsemask_init(mask, nelems, density);
+	return mask;
+}
+
+static inline void sparsemask_free(struct sparsemask *mask)
+{
+	kfree(mask);
+}
+
+static inline void sparsemask_set_elem(struct sparsemask *dst, int elem)
+{
+	set_bit(SMASK_BIT(dst, elem), SMASK_WORD(dst, elem));
+}
+
+static inline void sparsemask_clear_elem(struct sparsemask *dst, int elem)
+{
+	clear_bit(SMASK_BIT(dst, elem), SMASK_WORD(dst, elem));
+}
+
+static inline int sparsemask_test_elem(const struct sparsemask *mask, int elem)
+{
+	return test_bit(SMASK_BIT(mask, elem), SMASK_WORD(mask, elem));
+}
+
+/*
+ * sparsemask_for_each() - iterate over each set bit in a bitmap, starting at a
+ * specified position, and wrapping from the last bit to the first.
+ *
+ * @mask: Bitmap to iterate over.
+ * @origin: Bit number at which to start searching.
+ * @elem: Iterator.  Can be signed or unsigned integer.
+ *
+ * The implementation does not assume any bit in @mask is set, including
+ * @origin.  After the loop, @elem = @mask->nelems.
+ */
+#define sparsemask_for_each(mask, origin, elem)				\
+	for ((elem) = -1;						\
+	     (elem) = sparsemask_next((mask), (origin), (elem)),	\
+	     (elem) < (mask)->nelems;)
+
+#endif /* __LINUX_SPARSEMASK_H */
-- 
2.34.1

From nobody Mon Apr 6 09:13:47 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 2/9] sched/topology: Provide hooks to allocate data shared per LLC
Date: Fri, 20 Mar 2026 05:59:13 +0000
Message-ID: <20260320055920.2518389-3-chenjinghuang2@huawei.com>
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>

From: Steve Sistare

Add functions sd_llc_alloc_all() and sd_llc_free_all() to allocate and
free data pointed to by struct sched_domain_shared at the
last-level-cache domain.

sd_llc_alloc_all() is called after the SD hierarchy is known, to
eliminate the unnecessary allocations that would occur if we instead
allocated in __sdt_alloc() and then figured out which shared nodes are
redundant.
Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
 kernel/sched/topology.c | 75 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 74 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 32dcddaead82..fac1b9155b6e 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -21,6 +21,12 @@ void sched_domains_mutex_unlock(void)
 static cpumask_var_t sched_domains_tmpmask;
 static cpumask_var_t sched_domains_tmpmask2;
 
+struct s_data;
+static int sd_llc_alloc(struct sched_domain *sd);
+static void sd_llc_free(struct sched_domain *sd);
+static int sd_llc_alloc_all(const struct cpumask *cpu_map, struct s_data *d);
+static void sd_llc_free_all(const struct cpumask *cpu_map);
+
 static int __init sched_debug_setup(char *str)
 {
 	sched_debug_verbose = true;
@@ -630,8 +636,10 @@ static void destroy_sched_domain(struct sched_domain *sd)
 	 */
 	free_sched_groups(sd->groups, 1);
 
-	if (sd->shared && atomic_dec_and_test(&sd->shared->ref))
+	if (sd->shared && atomic_dec_and_test(&sd->shared->ref)) {
+		sd_llc_free(sd);
 		kfree(sd->shared);
+	}
 	kfree(sd);
 }
 
@@ -1546,6 +1554,7 @@ static void __free_domain_allocs(struct s_data *d, enum s_alloc what,
 		free_percpu(d->sd);
 		fallthrough;
 	case sa_sd_storage:
+		sd_llc_free_all(cpu_map);
 		__sdt_free(cpu_map);
 		fallthrough;
 	case sa_none:
@@ -2463,6 +2472,62 @@ static void __sdt_free(const struct cpumask *cpu_map)
 	}
 }
 
+static int sd_llc_alloc(struct sched_domain *sd)
+{
+	/* Allocate sd->shared data here. Empty for now. */
+
+	return 0;
+}
+
+static void sd_llc_free(struct sched_domain *sd)
+{
+	struct sched_domain_shared *sds = sd->shared;
+
+	if (!sds)
+		return;
+
+	/* Free data here. Empty for now. */
+}
+
+static int sd_llc_alloc_all(const struct cpumask *cpu_map, struct s_data *d)
+{
+	struct sched_domain *sd, *hsd;
+	int i;
+
+	for_each_cpu(i, cpu_map) {
+		/* Find highest domain that shares resources */
+		hsd = NULL;
+		for (sd = *per_cpu_ptr(d->sd, i); sd; sd = sd->parent) {
+			if (!(sd->flags & SD_SHARE_LLC))
+				break;
+			hsd = sd;
+		}
+		if (hsd && sd_llc_alloc(hsd))
+			return 1;
+	}
+
+	return 0;
+}
+
+static void sd_llc_free_all(const struct cpumask *cpu_map)
+{
+	struct sched_domain_topology_level *tl;
+	struct sched_domain *sd;
+	struct sd_data *sdd;
+	int j;
+
+	for_each_sd_topology(tl) {
+		sdd = &tl->data;
+		if (!sdd || !sdd->sd)
+			continue;
+		for_each_cpu(j, cpu_map) {
+			sd = *per_cpu_ptr(sdd->sd, j);
+			if (sd)
+				sd_llc_free(sd);
+		}
+	}
+}
+
 static struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
 	const struct cpumask *cpu_map, struct sched_domain_attr *attr,
 	struct sched_domain *child, int cpu)
@@ -2674,6 +2739,14 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 		}
 	}
 
+	/*
+	 * Allocate shared sd data at last level cache.  Must be done after
+	 * domains are built above, but before the data is used in
+	 * cpu_attach_domain and descendants below.
+	 */
+	if (sd_llc_alloc_all(cpu_map, &d))
+		goto error;
+
 	/* Attach the domains */
 	rcu_read_lock();
 	for_each_cpu(i, cpu_map) {
-- 
2.34.1

From nobody Mon Apr 6 09:13:47 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 3/9] sched/topology: Provide cfs_overload_cpus bitmap
Date: Fri, 20 Mar 2026 05:59:14 +0000
Message-ID: <20260320055920.2518389-4-chenjinghuang2@huawei.com>
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
From: Steve Sistare

Define and initialize a sparse bitmap of overloaded CPUs, per
last-level-cache scheduling domain, for use by the CFS scheduling class.
Save a pointer to cfs_overload_cpus in the rq for efficient access.

Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
 include/linux/sched/topology.h |  1 +
 kernel/sched/sched.h           |  2 ++
 kernel/sched/topology.c        | 25 +++++++++++++++++++++++--
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 45c0022b91ce..472c3dcf5a34 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -67,6 +67,7 @@ struct sched_domain_shared {
 	atomic_t	ref;
 	atomic_t	nr_busy_cpus;
 	int		has_idle_cores;
+	struct sparsemask *cfs_overload_cpus;
 	int		nr_idle_scan;
 };
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b82fb70a9d54..4989a92eeb9b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -85,6 +85,7 @@ struct cfs_rq;
 struct rt_rq;
 struct sched_group;
 struct cpuidle_state;
+struct sparsemask;
 
 #if defined(CONFIG_PARAVIRT) && !defined(CONFIG_HAVE_PV_STEAL_CLOCK_GEN)
 # include
@@ -1173,6 +1174,7 @@ struct rq {
 	struct cfs_rq		cfs;
 	struct rt_rq		rt;
 	struct dl_rq		dl;
+	struct sparsemask	*cfs_overload_cpus;
 #ifdef CONFIG_SCHED_CLASS_EXT
 	struct scx_rq		scx;
 	struct sched_dl_entity	ext_server;
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index fac1b9155b6e..7bf1f68dac32 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include "sched.h"
+#include "sparsemask.h"
 
 DEFINE_MUTEX(sched_domains_mutex);
 void sched_domains_mutex_lock(void)
@@ -683,7 +684,9 @@ DEFINE_STATIC_KEY_FALSE(sched_cluster_active);
 
 static void update_top_cache_domain(int cpu)
 {
+	struct sparsemask *cfs_overload_cpus = NULL;
 	struct sched_domain_shared *sds = NULL;
+	struct rq *rq = cpu_rq(cpu);
 	struct sched_domain *sd;
 	int id = cpu;
 	int size = 1;
@@ -693,8 +696,10 @@ static void update_top_cache_domain(int cpu)
 		id = cpumask_first(sched_domain_span(sd));
 		size = cpumask_weight(sched_domain_span(sd));
 		sds = sd->shared;
+		cfs_overload_cpus = sds->cfs_overload_cpus;
 	}
 
+	rcu_assign_pointer(rq->cfs_overload_cpus, cfs_overload_cpus);
 	rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
 	per_cpu(sd_llc_size, cpu) = size;
 	per_cpu(sd_llc_id, cpu) = id;
@@ -2474,7 +2479,22 @@ static void __sdt_free(const struct cpumask *cpu_map)
 
 static int sd_llc_alloc(struct sched_domain *sd)
 {
-	/* Allocate sd->shared data here. Empty for now. */
+	struct sched_domain_shared *sds = sd->shared;
+	struct cpumask *span = sched_domain_span(sd);
+	int nid = cpu_to_node(cpumask_first(span));
+	int flags = __GFP_ZERO | GFP_KERNEL;
+	struct sparsemask *mask;
+
+	/*
+	 * Allocate the bitmap if not already allocated.  This is called for
+	 * every CPU in the LLC but only allocates once per sd_llc_shared.
+	 */
+	if (!sds->cfs_overload_cpus) {
+		mask = sparsemask_alloc_node(nr_cpu_ids, 3, flags, nid);
+		if (!mask)
+			return 1;
+		sds->cfs_overload_cpus = mask;
+	}
 
 	return 0;
 }
@@ -2486,7 +2506,8 @@ static void sd_llc_free(struct sched_domain *sd)
 
 	if (!sds)
 		return;
 
-	/* Free data here. Empty for now. */
+	sparsemask_free(sds->cfs_overload_cpus);
+	sds->cfs_overload_cpus = NULL;
+}
 
 static int sd_llc_alloc_all(const struct cpumask *cpu_map, struct s_data *d)
-- 
2.34.1

From nobody Mon Apr 6 09:13:47 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 4/9] sched/fair: Dynamically update cfs_overload_cpus
Date: Fri, 20 Mar 2026 05:59:15 +0000
Message-ID: <20260320055920.2518389-5-chenjinghuang2@huawei.com>
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>

From: Steve Sistare

An overloaded CPU has more than 1 runnable task.  When a CFS task wakes
on a CPU, if h_nr_runnable transitions from 1 to more, then set the CPU
in the cfs_overload_cpus bitmap.  When a CFS task sleeps, if
h_nr_runnable transitions from 2 to less, then clear the CPU in
cfs_overload_cpus.
Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
v5: Rename h_nr_running to h_nr_runnable and reposition
    overload_set/overload_clear to fix overload detection for delay
    dequeue.
v4: Detect CPU overload via changes in h_nr_running.
---
 kernel/sched/fair.c | 45 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 44 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eea99ec01a3f..92c3bcff5b6b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -55,6 +55,7 @@
 #include
 
 #include "sched.h"
+#include "sparsemask.h"
 #include "stats.h"
 #include "autogroup.h"
 
@@ -5076,6 +5077,33 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
 	rq->misfit_task_load = max_t(unsigned long, task_h_load(p), 1);
 }
 
+#ifdef CONFIG_SMP
+static void overload_clear(struct rq *rq)
+{
+	struct sparsemask *overload_cpus;
+
+	rcu_read_lock();
+	overload_cpus = rcu_dereference(rq->cfs_overload_cpus);
+	if (overload_cpus)
+		sparsemask_clear_elem(overload_cpus, rq->cpu);
+	rcu_read_unlock();
+}
+
+static void overload_set(struct rq *rq)
+{
+	struct sparsemask *overload_cpus;
+
+	rcu_read_lock();
+	overload_cpus = rcu_dereference(rq->cfs_overload_cpus);
+	if (overload_cpus)
+		sparsemask_set_elem(overload_cpus, rq->cpu);
+	rcu_read_unlock();
+}
+#else /* CONFIG_SMP */
+static inline void overload_clear(struct rq *rq) {}
+static inline void overload_set(struct rq *rq) {}
+#endif
+
 void __setparam_fair(struct task_struct *p, const struct sched_attr *attr)
 {
 	struct sched_entity *se = &p->se;
@@ -5955,6 +5983,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	if (!dequeue)
 		return false; /* Throttle no longer required. */
 
+	/* freeze hierarchy runnable averages while throttled */
 	rcu_read_lock();
 	walk_tg_tree_from(cfs_rq->tg, tg_throttle_down, tg_nop, (void *)rq);
@@ -6875,6 +6904,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	int h_nr_idle = task_has_idle_policy(p);
 	int h_nr_runnable = 1;
 	int task_new = !(flags & ENQUEUE_WAKEUP);
+	unsigned int prev_nr = rq->cfs.h_nr_runnable;
 	int rq_h_nr_queued = rq->cfs.h_nr_queued;
 	u64 slice = 0;
 
@@ -6892,6 +6922,10 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 	if (flags & ENQUEUE_DELAYED) {
 		requeue_delayed_entity(se);
+
+		if (prev_nr <= 1 && rq->cfs.h_nr_runnable >= 2)
+			overload_set(rq);
+
 		return;
 	}
 
@@ -6961,6 +6995,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 	/* At this point se is NULL and we are at root level*/
 	add_nr_running(rq, 1);
+	if (prev_nr <= 1 && rq->cfs.h_nr_runnable >= 2)
+		overload_set(rq);
 
 	/*
 	 * Since new tasks are assigned an initial util_avg equal to
@@ -7003,6 +7039,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 	int h_nr_idle = 0;
 	int h_nr_queued = 0;
 	int h_nr_runnable = 0;
+	unsigned int prev_nr = rq->cfs.h_nr_runnable;
 	struct cfs_rq *cfs_rq;
 	u64 slice = 0;
 
@@ -7018,8 +7055,12 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 		cfs_rq = cfs_rq_of(se);
 
 		if (!dequeue_entity(cfs_rq, se, flags)) {
-			if (p && &p->se == se)
+			if (p && &p->se == se) {
+				if (prev_nr >= 2 && rq->cfs.h_nr_runnable <= 1)
+					overload_clear(rq);
+				return -1;
+			}
 
 			slice = cfs_rq_min_slice(cfs_rq);
 			break;
@@ -7077,6 +7118,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 	}
 
 	sub_nr_running(rq, h_nr_queued);
+	if (prev_nr >= 2 && rq->cfs.h_nr_runnable <= 1)
+		overload_clear(rq);
 
 	/* balance early to pull high priority tasks */
 	if (unlikely(!was_sched_idle && sched_idle_rq(rq)))
-- 
2.34.1

From nobody Mon Apr 6 09:13:47 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 5/9] sched/fair: Hoist idle_stamp up from idle_balance
Date: Fri, 20 Mar 2026 05:59:16 +0000
Message-ID: <20260320055920.2518389-6-chenjinghuang2@huawei.com>
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>

From: Steve Sistare

Move the update of idle_stamp from idle_balance to the call site in
pick_next_task_fair, to prepare for a future patch that adds work to
pick_next_task_fair which must be included in the idle_stamp interval.
No functional change.
Signed-off-by: Steve Sistare Signed-off-by: Chen Jinghuang --- kernel/sched/fair.c | 32 ++++++++++++++++++++++---------- 1 file changed, 22 insertions(+), 10 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 92c3bcff5b6b..742462d41118 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5078,6 +5078,16 @@ static inline void update_misfit_status(struct task_= struct *p, struct rq *rq) } =20 #ifdef CONFIG_SMP +static inline void rq_idle_stamp_update(struct rq *rq) +{ + rq->idle_stamp =3D rq_clock(rq); +} + +static inline void rq_idle_stamp_clear(struct rq *rq) +{ + rq->idle_stamp =3D 0; +} + static void overload_clear(struct rq *rq) { struct sparsemask *overload_cpus; @@ -5100,6 +5110,8 @@ static void overload_set(struct rq *rq) rcu_read_unlock(); } #else /* CONFIG_SMP */ +static inline void rq_idle_stamp_update(struct rq *rq) {} +static inline void rq_idle_stamp_clear(struct rq *rq) {} static inline void overload_clear(struct rq *rq) {} static inline void overload_set(struct rq *rq) {} #endif @@ -9011,8 +9023,17 @@ pick_next_task_fair(struct rq *rq, struct task_struc= t *prev, struct rq_flags *rf =20 idle: if (rf) { + /* + * We must set idle_stamp _before_ calling idle_balance(), such that we + * measure the duration of idle_balance() as idle time. + */ + rq_idle_stamp_update(rq); + new_tasks =3D sched_balance_newidle(rq, rf); =20 + if (new_tasks) + rq_idle_stamp_clear(rq); + /* * Because sched_balance_newidle() releases (and re-acquires) * rq->lock, it is possible for any higher priority task to @@ -12911,13 +12932,6 @@ static int sched_balance_newidle(struct rq *this_r= q, struct rq_flags *rf) if (this_rq->ttwu_pending) return 0; =20 - /* - * We must set idle_stamp _before_ calling sched_balance_rq() - * for CPU_NEWLY_IDLE, such that we measure the this duration - * as idle time. - */ - this_rq->idle_stamp =3D rq_clock(this_rq); - /* * Do not pull tasks towards !active CPUs... 
 */
@@ -13026,9 +13040,7 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
 	if (time_after(this_rq->next_balance, next_balance))
 		this_rq->next_balance = next_balance;
 
-	if (pulled_task)
-		this_rq->idle_stamp = 0;
-	else
+	if (!pulled_task)
 		nohz_newidle_balance(this_rq);
 
 	rq_repin_lock(this_rq, rf);
-- 
2.34.1

From nobody Mon Apr 6 09:13:47 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 6/9] sched/fair: Generalize the detach_task interface
Date: Fri, 20 Mar 2026 05:59:17 +0000
Message-ID: <20260320055920.2518389-7-chenjinghuang2@huawei.com>
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>

From: Steve Sistare

The detach_task function takes a struct lb_env argument, but only needs
a few of its members. Pass the rq and cpu arguments explicitly so the
function may be called from code that is not based on lb_env.
No functional change.
Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
 kernel/sched/fair.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 742462d41118..ebb13108dabe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9602,6 +9602,17 @@ static void detach_task(struct task_struct *p, struct lb_env *env)
 	set_task_cpu(p, env->dst_cpu);
 }
 
+/*
+ * detach_task_steal() -- detach the task for the migration from @src_rq to @dst_cpu.
+ */
+static void detach_task_steal(struct task_struct *p, struct rq *src_rq, int dst_cpu)
+{
+	lockdep_assert_rq_held(src_rq);
+
+	deactivate_task(src_rq, p, DEQUEUE_NOCLOCK);
+	set_task_cpu(p, dst_cpu);
+}
+
 /*
  * detach_one_task() -- tries to dequeue exactly one task from env->src_rq, as
  * part of active balancing operations within "domain".
-- 
2.34.1

From nobody Mon Apr 6 09:13:47 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 7/9] sched/fair: Provide can_migrate_task_llc
Date: Fri, 20 Mar 2026 05:59:18 +0000
Message-ID: <20260320055920.2518389-8-chenjinghuang2@huawei.com>
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>

From: Steve Sistare

Define a simpler version of can_migrate_task called can_migrate_task_llc
which does not require a struct lb_env argument, and judges whether a
migration from one CPU to another within the same LLC should be allowed.

Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
changes from v4 to v5:
Skip tasks with the sched_delayed flag during overload detection.
---
 kernel/sched/fair.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ebb13108dabe..0bf6d18dac05 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9582,6 +9582,34 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 	return 0;
 }
 
+/*
+ * Return true if task @p can migrate from @rq to @dst_rq in the same LLC.
+ * No need to test for co-locality, and no need to test task_hot(), as sharing
+ * LLC provides cache warmth at that level.
+ */
+static bool
+can_migrate_task_llc(struct task_struct *p, struct rq *rq, struct rq *dst_rq)
+{
+	int dst_cpu = dst_rq->cpu;
+
+	lockdep_assert_rq_held(rq);
+
+	if (!cpumask_test_cpu(dst_cpu, p->cpus_ptr)) {
+		schedstat_inc(p->stats.nr_failed_migrations_affine);
+		return false;
+	}
+
+	if (task_on_cpu(rq, p)) {
+		schedstat_inc(p->stats.nr_failed_migrations_running);
+		return false;
+	}
+
+	if (p->se.sched_delayed)
+		return false;
+
+	return true;
+}
+
 /*
  * detach_task() -- detach the task for the migration specified in env
  */
-- 
2.34.1

From nobody Mon Apr 6 09:13:47 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 8/9] sched/fair: Steal work from an overloaded CPU when CPU goes idle
Date: Fri, 20 Mar 2026 05:59:19 +0000
Message-ID: <20260320055920.2518389-9-chenjinghuang2@huawei.com>
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>

From: Steve Sistare

When a CPU has no more CFS tasks to run, and idle_balance() fails to find a
task, then attempt to steal a task from an overloaded CPU in the same LLC,
using the cfs_overload_cpus bitmap to efficiently identify candidates. To
minimize search time, steal the first migratable task that is found when
the bitmap is traversed. For fairness, search for migratable tasks on an
overloaded CPU in order of next to run.

This simple stealing yields a higher CPU utilization than idle_balance()
alone, because the search is cheap, so it may be called every time the CPU
is about to go idle. idle_balance() does more work because it searches
widely for the busiest queue, so to limit its CPU consumption, it declines
to search if the system is too busy. Simple stealing does not offload the
globally busiest queue, but it is much better than running nothing at all.

Stealing is controlled by the sched feature STEAL, which is enabled by
default. Note that all test results presented below are based on the
NO_DELAY_DEQUEUE implementation.

Stealing improves utilization with only a modest CPU overhead in scheduler
code. In the following experiment, hackbench is run with varying numbers
of groups (40 tasks per group), and the delta in /proc/schedstat is shown
for each run, averaged per CPU, augmented with these non-standard stats:

  steal - number of times a task is stolen from another CPU.
X6-2: 2 socket * 40 cores * 2 hyperthreads = 160 CPUs
      Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
      hackbench process 100000

  baseline
  grps  time   %busy  sched   idle   wake    steal
     1  2.182  20.00   35876  17905   17958      0
     2  2.391  39.00   67753  33808   33921      0
     3  2.871  47.00  100944  48966   51538      0
     4  2.928  62.00  114489  55171   59059      0
     8  4.852  83.00  219907  92961  121703      0

  new
  grps  time   %busy  sched   idle   wake    steal  %speedup
     1  2.229  18.00   45450  22691   22751     52      -2.1
     2  2.123  40.00   49975  24977   24990      6      12.6
     3  2.690  61.00   56118  22641   32780   9073       6.7
     4  2.828  80.00   37927  12828   24165   8442       3.5
     8  4.120  95.00   85929   8613   57858  11098      17.8

Elapsed time improves by up to 17.8%, and CPU busy utilization is up by
1 to 18%, hitting 95% at peak load.

Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
 kernel/sched/fair.c     | 174 ++++++++++++++++++++++++++++++++++++++--
 kernel/sched/features.h |   6 ++
 2 files changed, 174 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0bf6d18dac05..500215a57392 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5092,6 +5092,9 @@ static void overload_clear(struct rq *rq)
 {
 	struct sparsemask *overload_cpus;
 
+	if (!sched_feat(STEAL))
+		return;
+
 	rcu_read_lock();
 	overload_cpus = rcu_dereference(rq->cfs_overload_cpus);
 	if (overload_cpus)
@@ -5103,17 +5106,29 @@ static void overload_set(struct rq *rq)
 {
 	struct sparsemask *overload_cpus;
 
+	if (!sched_feat(STEAL))
+		return;
+
 	rcu_read_lock();
 	overload_cpus = rcu_dereference(rq->cfs_overload_cpus);
 	if (overload_cpus)
 		sparsemask_set_elem(overload_cpus, rq->cpu);
 	rcu_read_unlock();
 }
+
+static int try_steal(struct rq *this_rq, struct rq_flags *rf);
+
 #else /* CONFIG_SMP */
 static inline void rq_idle_stamp_update(struct rq *rq) {}
 static inline void rq_idle_stamp_clear(struct rq *rq) {}
 static inline void overload_clear(struct rq *rq) {}
 static inline void overload_set(struct rq *rq) {}
+
+static inline int try_steal(struct rq *this_rq, struct rq_flags *rf)
+{
+	return 0;
+}
+
#endif =20 void __setparam_fair(struct task_struct *p, const struct sched_attr *attr) @@ -9024,21 +9039,24 @@ pick_next_task_fair(struct rq *rq, struct task_stru= ct *prev, struct rq_flags *rf idle: if (rf) { /* - * We must set idle_stamp _before_ calling idle_balance(), such that we - * measure the duration of idle_balance() as idle time. + * We must set idle_stamp _before_ calling try_steal() or + * sched_balance_newidle(), such that we measure the duration + * as idle time. */ rq_idle_stamp_update(rq); =20 new_tasks =3D sched_balance_newidle(rq, rf); + if (new_tasks =3D=3D 0) + new_tasks =3D try_steal(rq, rf); =20 if (new_tasks) rq_idle_stamp_clear(rq); =20 /* - * Because sched_balance_newidle() releases (and re-acquires) - * rq->lock, it is possible for any higher priority task to - * appear. In that case we must re-start the pick_next_entity() - * loop. + * Because try_steal() and sched_balance_newidle() release + * (and re-acquire) rq->lock, it is possible for any higher priority + * task to appear. In that case we must re-start the + * pick_next_entity() loop. */ if (new_tasks < 0) return RETRY_TASK; @@ -13133,6 +13151,150 @@ void sched_balance_trigger(struct rq *rq) nohz_balancer_kick(rq); } =20 +/* + * Search the runnable tasks in @cfs_rq in order of next to run, and find + * the first one that can be migrated to @dst_rq. @cfs_rq is locked on en= try. + * On success, dequeue the task from @cfs_rq and return it, else return NU= LL. + */ +static struct task_struct * +detach_next_task(struct cfs_rq *cfs_rq, struct rq *dst_rq) +{ + int dst_cpu =3D dst_rq->cpu; + struct task_struct *p; + struct rq *rq =3D rq_of(cfs_rq); + + lockdep_assert_rq_held(rq); + + list_for_each_entry_reverse(p, &rq->cfs_tasks, se.group_node) { + if (can_migrate_task_llc(p, rq, dst_rq)) { + detach_task_steal(p, rq, dst_cpu); + return p; + } + } + return NULL; +} + +/* + * Attempt to migrate a CFS task from @src_cpu to @dst_rq. 
@locked indica= tes + * whether @dst_rq is already locked on entry. This function may lock or + * unlock @dst_rq, and updates @locked to indicate the locked state on ret= urn. + * The locking protocol is based on idle_balance(). + * Returns 1 on success and 0 on failure. + */ +static int steal_from(struct rq *dst_rq, struct rq_flags *dst_rf, bool *lo= cked, + int src_cpu) +{ + struct task_struct *p; + struct rq_flags rf; + int stolen =3D 0; + int dst_cpu =3D dst_rq->cpu; + struct rq *src_rq =3D cpu_rq(src_cpu); + + if (dst_cpu =3D=3D src_cpu || src_rq->cfs.h_nr_runnable < 2) + return 0; + + if (*locked) { + rq_unpin_lock(dst_rq, dst_rf); + raw_spin_rq_unlock(dst_rq); + *locked =3D false; + } + rq_lock_irqsave(src_rq, &rf); + update_rq_clock(src_rq); + + if (src_rq->cfs.h_nr_runnable < 2 || !cpu_active(src_cpu)) + p =3D NULL; + else + p =3D detach_next_task(&src_rq->cfs, dst_rq); + + rq_unlock(src_rq, &rf); + + if (p) { + raw_spin_rq_lock(dst_rq); + rq_repin_lock(dst_rq, dst_rf); + *locked =3D true; + update_rq_clock(dst_rq); + attach_task(dst_rq, p); + stolen =3D 1; + } + local_irq_restore(rf.flags); + + return stolen; +} + +/* + * Conservative upper bound on the max cost of a steal, in nsecs (the typi= cal + * cost is 1-2 microsec). Do not steal if average idle time is less. + */ +#define SCHED_STEAL_COST 10000 + +/* + * Try to steal a runnable CFS task from a CPU in the same LLC as @dst_rq, + * and migrate it to @dst_rq. rq_lock is held on entry and return, but + * may be dropped in between. Return 1 on success, 0 on failure, and -1 + * if a task in a different scheduling class has become runnable on @dst_r= q. 
+ */ +static int try_steal(struct rq *dst_rq, struct rq_flags *dst_rf) +{ + int src_cpu; + int dst_cpu =3D dst_rq->cpu; + bool locked =3D true; + int stolen =3D 0; + struct sparsemask *overload_cpus; + + if (!sched_feat(STEAL)) + return 0; + + if (!cpu_active(dst_cpu)) + return 0; + + if (dst_rq->avg_idle < SCHED_STEAL_COST) + return 0; + + /* Get bitmap of overloaded CPUs in the same LLC as @dst_rq */ + + rcu_read_lock(); + overload_cpus =3D rcu_dereference(dst_rq->cfs_overload_cpus); + if (!overload_cpus) { + rcu_read_unlock(); + return 0; + } + +#ifdef CONFIG_SCHED_SMT + /* + * First try overloaded CPUs on the same core to preserve cache warmth. + */ + if (static_branch_likely(&sched_smt_present)) { + for_each_cpu(src_cpu, cpu_smt_mask(dst_cpu)) { + if (sparsemask_test_elem(overload_cpus, src_cpu) && + steal_from(dst_rq, dst_rf, &locked, src_cpu)) { + stolen =3D 1; + goto out; + } + } + } +#endif /* CONFIG_SCHED_SMT */ + + /* Accept any suitable task in the LLC */ + + sparsemask_for_each(overload_cpus, dst_cpu, src_cpu) { + if (steal_from(dst_rq, dst_rf, &locked, src_cpu)) { + stolen =3D 1; + goto out; + } + } + +out: + rcu_read_unlock(); + if (!locked) { + raw_spin_rq_lock(dst_rq); + rq_repin_lock(dst_rq, dst_rf); + } + stolen |=3D (dst_rq->cfs.h_nr_runnable > 0); + if (dst_rq->nr_running !=3D dst_rq->cfs.h_nr_runnable) + stolen =3D -1; + return stolen; +} + static void rq_online_fair(struct rq *rq) { update_sysctl(); diff --git a/kernel/sched/features.h b/kernel/sched/features.h index 136a6584be79..e8c3e19bf585 100644 --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -87,6 +87,12 @@ SCHED_FEAT(TTWU_QUEUE, true) */ SCHED_FEAT(SIS_UTIL, true) =20 +/* + * Steal a CFS task from another CPU when going idle. + * Improves CPU utilization. + */ +SCHED_FEAT(STEAL, true) + /* * Issue a WARN when we do multiple update_rq_clock() calls * in a single rq->lock section. 
Default disabled because the
-- 
2.34.1

From nobody Mon Apr 6 09:13:47 2026
From: Chen Jinghuang
Subject: [RFC PATCH v5 9/9] sched/fair: Provide idle search schedstats
Date: Fri, 20 Mar 2026 05:59:20 +0000
Message-ID: <20260320055920.2518389-10-chenjinghuang2@huawei.com>
In-Reply-To: <20260320055920.2518389-1-chenjinghuang2@huawei.com>
References: <20260320055920.2518389-1-chenjinghuang2@huawei.com>

From: Steve Sistare

Add schedstats to measure the effectiveness of searching for idle CPUs
and stealing tasks. This is a temporary patch intended for use during
development only.

SCHEDSTAT_VERSION is bumped to 16, and the following fields are added to
the per-CPU statistics of /proc/schedstat:

field 10: # of times select_idle_sibling "easily" found an idle CPU --
          prev or target is idle.
field 11: # of times select_idle_sibling searched and found an idle cpu.
field 12: # of times select_idle_sibling searched and found an idle core.
field 13: # of times select_idle_sibling failed to find anything idle.
field 14: time in nanoseconds spent in functions that search for idle
          CPUs and search for tasks to steal.
field 15: # of times an idle CPU steals a task from another CPU.
field 16: # of times try_steal finds overloaded CPUs but no task is
          migratable.

Signed-off-by: Steve Sistare
Signed-off-by: Chen Jinghuang
---
 kernel/sched/core.c  | 31 +++++++++++++++++++++++--
 kernel/sched/fair.c  | 54 ++++++++++++++++++++++++++++++++++++++++----
 kernel/sched/sched.h |  9 ++++++++
 kernel/sched/stats.c |  9 ++++++++
 kernel/sched/stats.h | 13 +++++++++++
 5 files changed, 109 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 759777694c78..841a4ca7e173 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4505,17 +4505,44 @@ static int sysctl_numa_balancing(const struct ctl_table *table, int write,
 
 DEFINE_STATIC_KEY_FALSE(sched_schedstats);
 
+unsigned long schedstat_skid;
+
+static void compute_skid(void)
+{
+	int i, n = 0;
+	s64 t;
+	int skid = 0;
+
+	for (i = 0; i < 100; i++) {
+		t = local_clock();
+		t = local_clock() - t;
+		if (t > 0 && t < 1000) {	/* only use sane samples */
+			skid += (int) t;
+			n++;
+		}
+	}
+
+	if (n > 0)
+		schedstat_skid = skid / n;
+	else
+		schedstat_skid = 0;
+	pr_info("schedstat_skid = %lu\n", schedstat_skid);
+}
+
 static void
set_schedstats(bool enabled) { - if (enabled) + if (enabled) { + compute_skid(); static_branch_enable(&sched_schedstats); - else + } else { static_branch_disable(&sched_schedstats); + } } =20 void force_schedstat_enabled(void) { if (!schedstat_enabled()) { + compute_skid(); pr_info("kernel profiling enabled schedstats, disable via kernel.sched_s= chedstats.\n"); static_branch_enable(&sched_schedstats); } diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 500215a57392..ba2b9f811135 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5091,29 +5091,35 @@ static inline void rq_idle_stamp_clear(struct rq *r= q) static void overload_clear(struct rq *rq) { struct sparsemask *overload_cpus; + unsigned long time; =20 if (!sched_feat(STEAL)) return; =20 + time =3D schedstat_start_time(); rcu_read_lock(); overload_cpus =3D rcu_dereference(rq->cfs_overload_cpus); if (overload_cpus) sparsemask_clear_elem(overload_cpus, rq->cpu); rcu_read_unlock(); + schedstat_end_time(rq->find_time, time); } =20 static void overload_set(struct rq *rq) { struct sparsemask *overload_cpus; + unsigned long time; =20 if (!sched_feat(STEAL)) return; =20 + time =3D schedstat_start_time(); rcu_read_lock(); overload_cpus =3D rcu_dereference(rq->cfs_overload_cpus); if (overload_cpus) sparsemask_set_elem(overload_cpus, rq->cpu); rcu_read_unlock(); + schedstat_end_time(rq->find_time, time); } =20 static int try_steal(struct rq *this_rq, struct rq_flags *rf); @@ -7830,6 +7836,16 @@ static inline bool asym_fits_cpu(unsigned long util, return true; } =20 +#define SET_STAT(STAT) \ + do { \ + if (schedstat_enabled()) { \ + struct rq *rq =3D this_rq(); \ + \ + if (rq) \ + __schedstat_inc(rq->STAT); \ + } \ + } while (0) + /* * Try and locate an idle core/thread in the LLC cache domain. 
  */
@@ -7857,8 +7873,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	lockdep_assert_irqs_disabled();
 
 	if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
-	    asym_fits_cpu(task_util, util_min, util_max, target))
+	    asym_fits_cpu(task_util, util_min, util_max, target)) {
+		SET_STAT(found_idle_cpu_easy);
 		return target;
+	}
 
 	/*
 	 * If the previous CPU is cache affine and idle, don't be stupid:
@@ -7868,8 +7886,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
 
 		if (!static_branch_unlikely(&sched_cluster_active) ||
-		    cpus_share_resources(prev, target))
+		    cpus_share_resources(prev, target)) {
+			SET_STAT(found_idle_cpu_easy);
 			return prev;
+		}
 
 		prev_aff = prev;
 	}
@@ -7887,6 +7907,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	    prev == smp_processor_id() &&
 	    this_rq()->nr_running <= 1 &&
 	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
+		SET_STAT(found_idle_cpu_easy);
 		return prev;
 	}
 
@@ -7901,8 +7922,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	    asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) {
 
 		if (!static_branch_unlikely(&sched_cluster_active) ||
-		    cpus_share_resources(recent_used_cpu, target))
+		    cpus_share_resources(recent_used_cpu, target)) {
+			SET_STAT(found_idle_cpu_easy);
 			return recent_used_cpu;
+		}
 
 	} else {
 		recent_used_cpu = -1;
@@ -7924,13 +7947,16 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	 */
 	if (sd) {
 		i = select_idle_capacity(p, sd, target);
+		SET_STAT(found_idle_cpu_capacity);
 		return ((unsigned)i < nr_cpumask_bits) ? i : target;
 	}
 	}
 
 	sd = rcu_dereference_all(per_cpu(sd_llc, target));
-	if (!sd)
+	if (!sd) {
+		SET_STAT(nofound_idle_cpu);
 		return target;
+	}
 
 	if (sched_smt_active()) {
 		has_idle_core = test_idle_cores(target);
@@ -7943,8 +7969,12 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	}
 
 	i = select_idle_cpu(p, sd, has_idle_core, target);
-	if ((unsigned)i < nr_cpumask_bits)
+	if ((unsigned)i < nr_cpumask_bits) {
+		SET_STAT(found_idle_cpu);
 		return i;
+	}
+
+	SET_STAT(nofound_idle_cpu);
 
 	/*
	 * For cluster machines which have lower sharing cache like L2 or
@@ -8580,6 +8610,7 @@ static int
 select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 {
 	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
+	unsigned long time;
 	struct sched_domain *tmp, *sd = NULL;
 	int cpu = smp_processor_id();
 	int new_cpu = prev_cpu;
@@ -8587,6 +8618,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 	/* SD_flags and WF_flags share the first nibble */
 	int sd_flag = wake_flags & 0xF;
 
+	time = schedstat_start_time();
+
 	/*
 	 * required for stable ->cpus_allowed
 	 */
@@ -8643,6 +8676,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 	}
 	rcu_read_unlock();
 
+	schedstat_end_time(cpu_rq(cpu)->find_time, time);
+
 	return new_cpu;
 }
 
@@ -8981,6 +9016,7 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 	struct sched_entity *se;
 	struct task_struct *p;
 	int new_tasks;
+	unsigned long time;
 
 again:
 	p = pick_task_fair(rq, rf);
@@ -9038,6 +9074,7 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 
 idle:
 	if (rf) {
+		time = schedstat_start_time();
 		/*
 		 * We must set idle_stamp _before_ calling try_steal() or
 		 * sched_balance_newidle(), such that we measure the duration
@@ -9052,6 +9089,8 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 		if (new_tasks)
 			rq_idle_stamp_clear(rq);
 
+		schedstat_end_time(rq->find_time, time);
+
 		/*
 		 * Because try_steal() and sched_balance_newidle() release
 		 * (and re-acquire) rq->lock, it is possible for any higher priority
@@ -13215,6 +13254,7 @@ static int steal_from(struct rq *dst_rq, struct rq_flags *dst_rf, bool *locked,
 		update_rq_clock(dst_rq);
 		attach_task(dst_rq, p);
 		stolen = 1;
+		schedstat_inc(dst_rq->steal);
 	}
 	local_irq_restore(rf.flags);
 
@@ -13239,6 +13279,7 @@ static int try_steal(struct rq *dst_rq, struct rq_flags *dst_rf)
 	int dst_cpu = dst_rq->cpu;
 	bool locked = true;
 	int stolen = 0;
+	bool any_overload = false;
 	struct sparsemask *overload_cpus;
 
 	if (!sched_feat(STEAL))
@@ -13281,6 +13322,7 @@ static int try_steal(struct rq *dst_rq, struct rq_flags *dst_rf)
 			stolen = 1;
 			goto out;
 		}
+		any_overload = true;
 	}
 
 out:
@@ -13292,6 +13334,8 @@ static int try_steal(struct rq *dst_rq, struct rq_flags *dst_rf)
 	stolen |= (dst_rq->cfs.h_nr_runnable > 0);
 	if (dst_rq->nr_running != dst_rq->cfs.h_nr_runnable)
 		stolen = -1;
+	if (!stolen && any_overload)
+		schedstat_inc(dst_rq->steal_fail);
 	return stolen;
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4989a92eeb9b..530b80fbf897 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1304,6 +1304,15 @@ struct rq {
 	/* try_to_wake_up() stats */
 	unsigned int		ttwu_count;
 	unsigned int		ttwu_local;
+
+	/* Idle search stats */
+	unsigned int		found_idle_cpu_capacity;
+	unsigned int		found_idle_cpu;
+	unsigned int		found_idle_cpu_easy;
+	unsigned int		nofound_idle_cpu;
+	unsigned long		find_time;
+	unsigned int		steal;
+	unsigned int		steal_fail;
 #endif
 
 #ifdef CONFIG_CPU_IDLE
diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
index d1c9429a4ac5..7063c9712f68 100644
--- a/kernel/sched/stats.c
+++ b/kernel/sched/stats.c
@@ -129,6 +129,15 @@ static int show_schedstat(struct seq_file *seq, void *v)
 		    rq->rq_cpu_time,
 		    rq->rq_sched_info.run_delay, rq->rq_sched_info.pcount);
 
+		seq_printf(seq, " %u %u %u %u %lu %u %u",
+			   rq->found_idle_cpu_easy,
+			   rq->found_idle_cpu_capacity,
+			   rq->found_idle_cpu,
+			   rq->nofound_idle_cpu,
+			   rq->find_time,
+			   rq->steal,
+			   rq->steal_fail);
+
 		seq_printf(seq, "\n");
 
 		/* domain-specific stats */
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index a612cf253c87..55f31a4df8fa 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -43,6 +43,17 @@ rq_sched_info_dequeue(struct rq *rq, unsigned long long delta)
 #define   schedstat_set(var, val)	do { if (schedstat_enabled()) { var = (val); } } while (0)
 #define   schedstat_val(var)		(var)
 #define   schedstat_val_or_zero(var)	((schedstat_enabled()) ? (var) : 0)
+#define   schedstat_start_time()	schedstat_val_or_zero(local_clock())
+#define   schedstat_end_time(stat, time)			\
+	do {							\
+		unsigned long endtime;				\
+							\
+		if (schedstat_enabled() && (time)) {		\
+			endtime = local_clock() - (time) - schedstat_skid; \
+			schedstat_add((stat), endtime);		\
+		}						\
+	} while (0)
+extern unsigned long schedstat_skid;
 
 void __update_stats_wait_start(struct rq *rq, struct task_struct *p,
 			       struct sched_statistics *stats);
@@ -81,6 +92,8 @@ static inline void rq_sched_info_depart  (struct rq *rq, unsigned long long delta)
 # define   schedstat_set(var, val)	do { } while (0)
 # define   schedstat_val(var)		0
 # define   schedstat_val_or_zero(var)	0
+# define   schedstat_start_time()	0
+# define   schedstat_end_time(stat, t)	do { } while (0)
 
 # define   __update_stats_wait_start(rq, p, stats)       do { } while (0)
 # define   __update_stats_wait_end(rq, p, stats)         do { } while (0)
-- 
2.34.1