From: Zecheng Li
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider, Rik van Riel, Chris Mason, Madadi Vineeth Reddy,
 Xu Liu, Blake Jones, Josh Don, Nilay Vaish, linux-kernel@vger.kernel.org,
 Zecheng Li, Zecheng Li
Subject: [PATCH v6 2/3] sched/fair: Remove task_group->se pointer array
Date: Mon, 12 Jan 2026 13:51:01 -0500
Message-ID: <20260112185121.3327881-3-zli94@ncsu.edu>
In-Reply-To: <20260112185121.3327881-1-zli94@ncsu.edu>
References: <20260112185121.3327881-1-zli94@ncsu.edu>

From: Zecheng Li

Now that struct sched_entity is co-located with struct cfs_rq for
non-root task groups, the task_group->se pointer array is redundant:
the associated sched_entity can be loaded directly from the cfs_rq.

This patch performs the access conversion with the following helpers:

- is_root_task_group(tg): checks whether a task group is the root task
  group by comparing its address against the global root_task_group
  variable.

- tg_se(tg, cpu): looks up the cfs_rq for the given CPU and returns the
  address of the co-located sched_entity. It returns NULL for the root
  task group, matching the previous behavior of tg->se[cpu]. All
  accesses through the tg->se[cpu] pointer array are replaced with
  calls to tg_se(tg, cpu).

- cfs_rq_se(cfs_rq): simplifies access paths such as
  cfs_rq->tg->se[cpu] by returning the co-located sched_entity
  directly. It likewise returns NULL for the root task group's cfs_rq,
  preserving the previous behavior.

Since tg_se() is not on very hot code paths, and the added branch is a
register comparison against an immediate address (&root_task_group),
the performance impact is expected to be negligible.
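As an editorial illustration only (not part of the patch): a minimal,
self-contained user-space sketch of the container_of() lookup that
tg_se()/cfs_rq_se() perform. The struct bodies, the simplified
container_of() macro, and the helper name se_of() are stand-ins for
illustration, not the real kernel definitions.

/*
 * Editorial illustration only -- not part of the patch. Shows how a
 * pointer to an embedded cfs_rq can be mapped back to the co-located
 * sched_entity in the same allocation.
 */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct cfs_rq { int nr_queued; };            /* stand-in */
struct sched_entity { int depth; };          /* stand-in */
struct sched_entity_stats { struct sched_entity se; };

/* One allocation holds the cfs_rq and its owning sched_entity. */
struct cfs_rq_with_se {
	struct cfs_rq cfs_rq;
	struct sched_entity_stats ses;
};

/* Recover the co-located se from a pointer to the embedded cfs_rq. */
static struct sched_entity *se_of(struct cfs_rq *cfs_rq)
{
	struct cfs_rq_with_se *combined =
		container_of(cfs_rq, struct cfs_rq_with_se, cfs_rq);

	return &combined->ses.se;
}

int main(void)
{
	struct cfs_rq_with_se combined;

	printf("recovered the right se: %s\n",
	       se_of(&combined.cfs_rq) == &combined.ses.se ? "yes" : "no");
	return 0;
}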
Signed-off-by: Zecheng Li
Signed-off-by: Zecheng Li
---
 kernel/sched/core.c  |  7 ++-----
 kernel/sched/debug.c |  2 +-
 kernel/sched/fair.c  | 25 +++++++++----------------
 kernel/sched/sched.h | 29 ++++++++++++++++++++++++-----
 4 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5b17d8e3cb55..2db052414794 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8561,7 +8561,7 @@ void __init sched_init(void)
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr += nr_cpu_ids * sizeof(void **);
 #endif
 #ifdef CONFIG_RT_GROUP_SCHED
 	ptr += 2 * nr_cpu_ids * sizeof(void **);
@@ -8570,9 +8570,6 @@ void __init sched_init(void)
 	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	root_task_group.se = (struct sched_entity **)ptr;
-	ptr += nr_cpu_ids * sizeof(void **);
-
 	root_task_group.cfs_rq = (struct cfs_rq **)ptr;
 	ptr += nr_cpu_ids * sizeof(void **);
 
@@ -9640,7 +9637,7 @@ static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
 		int i;
 
 		for_each_possible_cpu(i) {
-			stats = __schedstats_from_se(tg->se[i]);
+			stats = __schedstats_from_se(tg_se(tg, i));
 			ws += schedstat_val(stats->wait_sum);
 		}
 
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 41caa22e0680..a18c1be40578 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -644,7 +644,7 @@ void dirty_sched_domain_sysctl(int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group *tg)
 {
-	struct sched_entity *se = tg->se[cpu];
+	struct sched_entity *se = tg_se(tg, cpu);
 
 #define P(F)		SEQ_printf(m, "  .%-30s: %lld\n", #F, (long long)F)
 #define P_SCHEDSTAT(F)	SEQ_printf(m, "  .%-30s: %lld\n", \
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eef10f2ef2a9..db10e617a638 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5974,7 +5974,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
-	struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)];
+	struct sched_entity *se = cfs_rq_se(cfs_rq);
 
 	/*
 	 * It's possible we are called with runtime_remaining < 0 due to things
@@ -9845,7 +9845,6 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 {
 	struct cfs_rq *cfs_rq, *pos;
 	bool decayed = false;
-	int cpu = cpu_of(rq);
 
 	/*
 	 * Iterates the task_group tree in a bottom up fashion, see
@@ -9865,7 +9864,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 		}
 
 		/* Propagate pending load changes to the parent, if any: */
-		se = cfs_rq->tg->se[cpu];
+		se = cfs_rq_se(cfs_rq);
 		if (se && !skip_blocked_update(se))
 			update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
 
@@ -9891,8 +9890,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
  */
 static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 {
-	struct rq *rq = rq_of(cfs_rq);
-	struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)];
+	struct sched_entity *se = cfs_rq_se(cfs_rq);
 	unsigned long now = jiffies;
 	unsigned long load;
 
@@ -13625,7 +13623,6 @@ void free_fair_sched_group(struct task_group *tg)
 	}
 
 	kfree(tg->cfs_rq);
-	kfree(tg->se);
 }
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
@@ -13638,9 +13635,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
 	if (!tg->cfs_rq)
 		goto err;
-	tg->se = kcalloc(nr_cpu_ids, sizeof(se), GFP_KERNEL);
-	if (!tg->se)
-		goto err;
 
 	tg->shares = NICE_0_LOAD;
 
@@ -13655,7 +13649,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		cfs_rq = &combined->cfs_rq;
 		se = &combined->ses.se;
 		init_cfs_rq(cfs_rq);
-		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
+		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
 	}
 
@@ -13674,7 +13668,7 @@ void online_fair_sched_group(struct task_group *tg)
 
 	for_each_possible_cpu(i) {
 		rq = cpu_rq(i);
-		se = tg->se[i];
+		se = tg_se(tg, i);
 		rq_lock_irq(rq, &rf);
 		update_rq_clock(rq);
 		attach_entity_cfs_rq(se);
@@ -13691,7 +13685,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 
 	for_each_possible_cpu(cpu) {
 		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
-		struct sched_entity *se = tg->se[cpu];
+		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);
 
 		if (se) {
@@ -13728,7 +13722,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	init_cfs_rq_runtime(cfs_rq);
 
 	tg->cfs_rq[cpu] = cfs_rq;
-	tg->se[cpu] = se;
 
 	/* se could be NULL for root_task_group */
 	if (!se)
@@ -13759,7 +13752,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	/*
 	 * We can't change the weight of the root cgroup.
 	 */
-	if (!tg->se[0])
+	if (is_root_task_group(tg))
 		return -EINVAL;
 
 	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
@@ -13770,7 +13763,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	tg->shares = shares;
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct rq_flags rf;
 
 		/* Propagate contribution to hierarchy */
@@ -13821,7 +13814,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index be32810f7475..97c27ac0ae18 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -476,8 +476,6 @@ struct task_group {
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	/* schedulable entities of this group on each CPU */
-	struct sched_entity	**se;
 	/* runqueue "owned" by this group on each CPU */
 	struct cfs_rq		**cfs_rq;
 	unsigned long		shares;
@@ -915,7 +913,8 @@ struct dl_rq {
 };
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-
+/* Check whether a task group is root tg */
+#define is_root_task_group(tg) ((tg) == &root_task_group)
 /* An entity is a task if it doesn't "own" a runqueue */
 #define entity_is_task(se)	(!se->my_q)
 
@@ -2187,6 +2186,26 @@ struct cfs_rq_with_se {
 	struct cfs_rq		cfs_rq;
 	struct sched_entity_stats ses;
 };
+
+static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
+{
+	if (is_root_task_group(tg))
+		return NULL;
+
+	struct cfs_rq_with_se *combined =
+		container_of(tg->cfs_rq[cpu], struct cfs_rq_with_se, cfs_rq);
+	return &combined->ses.se;
+}
+
+static inline struct sched_entity *cfs_rq_se(struct cfs_rq *cfs_rq)
+{
+	if (is_root_task_group(cfs_rq->tg))
+		return NULL;
+
+	struct cfs_rq_with_se *combined =
+		container_of(cfs_rq, struct cfs_rq_with_se, cfs_rq);
+	return &combined->ses.se;
+}
 #endif
 
 /* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */
@@ -2199,8 +2218,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
-	p->se.parent = tg->se[cpu];
-	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
+	p->se.parent = tg_se(tg, cpu);
+	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
-- 
2.52.0