From: Zecheng Li
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Rik van Riel, Chris Mason, Madadi Vineeth Reddy,
	Xu Liu, Blake Jones, Josh Don, Nilay Vaish,
	linux-kernel@vger.kernel.org, Zecheng Li, Zecheng Li
Subject: [PATCH v6 3/3] sched/fair: Allocate both cfs_rq and sched_entity with per-cpu
Date: Mon, 12 Jan 2026 13:51:02 -0500
Message-ID: <20260112185121.3327881-4-zli94@ncsu.edu>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260112185121.3327881-1-zli94@ncsu.edu>
References: <20260112185121.3327881-1-zli94@ncsu.edu>

From: Zecheng Li

To remove the cfs_rq pointer array in task_group, allocate the combined
cfs_rq and sched_entity using the per-cpu allocator. This patch
implements the following:

- Changes task_group->cfs_rq from struct cfs_rq ** to
  struct cfs_rq __percpu *.
- Updates memory allocation in alloc_fair_sched_group() and
  free_fair_sched_group() to use alloc_percpu() and free_percpu()
  respectively.
- Adds the inline accessor tg_cfs_rq(tg, cpu), which uses per_cpu_ptr()
  to retrieve the cfs_rq pointer for the given task group and CPU.
- Replaces direct accesses of the form tg->cfs_rq[cpu] with calls to
  the new tg_cfs_rq(tg, cpu) helper.
- Handles the root_task_group: since struct rq is already a per-cpu
  variable (runqueues), its embedded cfs_rq (rq->cfs) is also per-cpu.
  Therefore, we assign root_task_group.cfs_rq = &runqueues.cfs.
- Cleans up the code initializing the root task group.

This change places each CPU's cfs_rq and sched_entity in its local
per-cpu memory area, removing the per-task_group pointer arrays.
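An illustrative aside for reviewers, not part of the patch: the
combined allocation relies on container_of() to recover the enclosing
struct from a cfs_rq pointer, which is how tg_se() works below. A
minimal userspace C sketch of that pattern, with invented fake_*
stand-ins for the kernel types:

	#include <stddef.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-ins for the kernel's cfs_rq and sched_entity. */
	struct fake_cfs_rq { long load; };
	struct fake_se { int depth; };

	/* Analogue of struct cfs_rq_with_se: one allocation, two objects. */
	struct combined {
		struct fake_cfs_rq cfs_rq;
		struct fake_se se;
	};

	/* container_of(): map a member pointer back to its container. */
	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	/* Analogue of tg_se(): from a cfs_rq pointer, find its sibling se. */
	static struct fake_se *se_of(struct fake_cfs_rq *cfs_rq)
	{
		return &container_of(cfs_rq, struct combined, cfs_rq)->se;
	}

	int main(void)
	{
		struct combined *c = calloc(1, sizeof(*c));

		c->se.depth = 3;
		/* Only the cfs_rq pointer is kept; the se stays reachable. */
		printf("depth = %d\n", se_of(&c->cfs_rq)->depth); /* prints 3 */
		free(c);
		return 0;
	}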
Signed-off-by: Zecheng Li
Signed-off-by: Zecheng Li
---
 kernel/sched/core.c  | 35 ++++++++++-----------------
 kernel/sched/fair.c  | 57 +++++++++++++++++---------------------
 kernel/sched/sched.h | 14 +++++++----
 3 files changed, 45 insertions(+), 61 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2db052414794..cf63bf089fa0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8545,7 +8545,7 @@ static struct kmem_cache *task_group_cache __ro_after_init;
 
 void __init sched_init(void)
 {
-	unsigned long ptr = 0;
+	unsigned long __maybe_unused ptr = 0;
 	int i;
 
 	/* Make sure the linker didn't screw up */
@@ -8561,33 +8561,24 @@ void __init sched_init(void)
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += nr_cpu_ids * sizeof(void **);
-#endif
-#ifdef CONFIG_RT_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
-#endif
-	if (ptr) {
-		ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.cfs_rq = &runqueues.cfs;
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-		root_task_group.cfs_rq = (struct cfs_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
-
-		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
-		init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
+	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+	init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 #ifdef CONFIG_EXT_GROUP_SCHED
-		scx_tg_init(&root_task_group);
+	scx_tg_init(&root_task_group);
 #endif /* CONFIG_EXT_GROUP_SCHED */
 #ifdef CONFIG_RT_GROUP_SCHED
-		root_task_group.rt_se = (struct sched_rt_entity **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.rt_se = (struct sched_rt_entity **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);
 
-		root_task_group.rt_rq = (struct rt_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	root_task_group.rt_rq = (struct rt_rq **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);
 
 #endif /* CONFIG_RT_GROUP_SCHED */
-	}
 
 	init_defrootdomain();
 
@@ -9488,7 +9479,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg,
 	}
 
 	for_each_online_cpu(i) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, i);
 		struct rq *rq = cfs_rq->rq;
 
 		guard(rq_lock_irq)(rq);
@@ -9656,7 +9647,7 @@ static u64 throttled_time_self(struct task_group *tg)
 	u64 total = 0;
 
 	for_each_possible_cpu(i) {
-		total += READ_ONCE(tg->cfs_rq[i]->throttled_clock_self_time);
+		total += READ_ONCE(tg_cfs_rq(tg, i)->throttled_clock_self_time);
 	}
 
 	return total;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index db10e617a638..359c1c1edee5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -327,7 +327,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	 * to a tree or when we reach the top of the tree
 	 */
 	if (cfs_rq->tg->parent &&
-	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
+	    tg_cfs_rq(cfs_rq->tg->parent, cpu)->on_list) {
 		/*
 		 * If parent is already on the list, we add the child
 		 * just before. Thanks to circular linked property of
@@ -335,7 +335,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 		 * of the list that starts by parent.
		 */
		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
+			&(tg_cfs_rq(cfs_rq->tg->parent, cpu)->leaf_cfs_rq_list));
 		/*
 		 * The branch is now connected to its tree so we can
 		 * reset tmp_alone_branch to the beginning of the
@@ -4156,7 +4156,7 @@ static void __maybe_unused clear_tg_offline_cfs_rqs(struct rq *rq)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		clear_tg_load_avg(cfs_rq);
 	}
@@ -5692,7 +5692,7 @@ static inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
 
 static inline int lb_throttled_hierarchy(struct task_struct *p, int dst_cpu)
 {
-	return throttled_hierarchy(task_group(p)->cfs_rq[dst_cpu]);
+	return throttled_hierarchy(tg_cfs_rq(task_group(p), dst_cpu));
 }
 
 static inline bool task_is_throttled(struct task_struct *p)
@@ -5838,7 +5838,7 @@ static void enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags);
 static int tg_unthrottle_up(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 	struct task_struct *p, *tmp;
 
 	if (--cfs_rq->throttle_count)
@@ -5909,7 +5909,7 @@ static void record_throttle_clock(struct cfs_rq *cfs_rq)
 static int tg_throttle_down(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 	if (cfs_rq->throttle_count++)
 		return 0;
@@ -6382,8 +6382,8 @@ static void sync_throttle(struct task_group *tg, int cpu)
 	if (!tg->parent)
 		return;
 
-	cfs_rq = tg->cfs_rq[cpu];
-	pcfs_rq = tg->parent->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(tg, cpu);
+	pcfs_rq = tg_cfs_rq(tg->parent, cpu);
 
 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
 	cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
@@ -6575,7 +6575,7 @@ static void __maybe_unused update_runtime_enabled(struct rq *rq)
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
 		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		raw_spin_lock(&cfs_b->lock);
 		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
@@ -6604,7 +6604,7 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		if (!cfs_rq->runtime_enabled)
 			continue;
@@ -9414,7 +9414,7 @@ static inline int task_is_ineligible_on_dst_cpu(struct task_struct *p, int dest_cpu)
 	struct cfs_rq *dst_cfs_rq;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	dst_cfs_rq = task_group(p)->cfs_rq[dest_cpu];
+	dst_cfs_rq = tg_cfs_rq(task_group(p), dest_cpu);
 #else
 	dst_cfs_rq = &cpu_rq(dest_cpu)->cfs;
 #endif
@@ -13346,7 +13346,7 @@ static int task_is_throttled_fair(struct task_struct *p, int cpu)
 	struct cfs_rq *cfs_rq;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	cfs_rq = task_group(p)->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(task_group(p), cpu);
 #else
 	cfs_rq = &cpu_rq(cpu)->cfs;
 #endif
@@ -13612,42 +13612,31 @@ static void task_change_group_fair(struct task_struct *p)
 
 void free_fair_sched_group(struct task_group *tg)
 {
-	int i;
-
-	for_each_possible_cpu(i) {
-		if (tg->cfs_rq && tg->cfs_rq[i]) {
-			struct cfs_rq_with_se *combined =
-				container_of(tg->cfs_rq[i], struct cfs_rq_with_se, cfs_rq);
-			kfree(combined);
-		}
-	}
-
-	kfree(tg->cfs_rq);
+	free_percpu(tg->cfs_rq);
 }
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 {
-	struct cfs_rq_with_se *combined;
+	struct cfs_rq_with_se __percpu *combined;
 	struct sched_entity *se;
 	struct cfs_rq *cfs_rq;
 	int i;
 
-	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
-	if (!tg->cfs_rq)
+	combined = alloc_percpu_gfp(struct cfs_rq_with_se, GFP_KERNEL);
+	if (!combined)
 		goto err;
 
+	tg->cfs_rq = &combined->cfs_rq;
 	tg->shares = NICE_0_LOAD;
 
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));
 
 	for_each_possible_cpu(i) {
-		combined = kzalloc_node(sizeof(*combined),
-					GFP_KERNEL, cpu_to_node(i));
-		if (!combined)
+		cfs_rq = tg_cfs_rq(tg, i);
+		if (!cfs_rq)
 			goto err;
 
-		cfs_rq = &combined->cfs_rq;
-		se = &combined->ses.se;
+		se = tg_se(tg, i);
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
@@ -13684,7 +13673,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 	destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));
 
 	for_each_possible_cpu(cpu) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu);
 		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);
 
@@ -13721,8 +13710,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	cfs_rq->rq = rq;
 	init_cfs_rq_runtime(cfs_rq);
 
-	tg->cfs_rq[cpu] = cfs_rq;
-
 	/* se could be NULL for root_task_group */
 	if (!se)
 		return;
@@ -13815,7 +13802,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
 		struct sched_entity *se = tg_se(tg, i);
-		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *grp_cfs_rq = tg_cfs_rq(tg, i);
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
 		struct rq_flags rf;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 97c27ac0ae18..b42ae324bab8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -477,7 +477,7 @@ struct task_group {
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* runqueue "owned" by this group on each CPU */
-	struct cfs_rq **cfs_rq;
+	struct cfs_rq __percpu *cfs_rq;
 	unsigned long shares;
 	/*
 	 * load_avg can be heavily contended at clock tick time, so put
@@ -2187,13 +2187,19 @@ struct cfs_rq_with_se {
 	struct sched_entity_stats ses;
 };
 
+/* Access a specific CPU's cfs_rq from a task group */
+static inline struct cfs_rq *tg_cfs_rq(struct task_group *tg, int cpu)
+{
+	return per_cpu_ptr(tg->cfs_rq, cpu);
+}
+
 static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
 {
 	if (is_root_task_group(tg))
 		return NULL;
 
 	struct cfs_rq_with_se *combined =
-		container_of(tg->cfs_rq[cpu], struct cfs_rq_with_se, cfs_rq);
+		container_of(tg_cfs_rq(tg, cpu), struct cfs_rq_with_se, cfs_rq);
 	return &combined->ses.se;
 }
 
@@ -2216,8 +2222,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
-	p->se.cfs_rq = tg->cfs_rq[cpu];
+	set_task_rq_fair(&p->se, p->se.cfs_rq, tg_cfs_rq(tg, cpu));
+	p->se.cfs_rq = tg_cfs_rq(tg, cpu);
 	p->se.parent = tg_se(tg, cpu);
 	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif
-- 
2.52.0
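An illustrative aside (not part of the patch): the subtle point in the
sched.h hunk is that tg->cfs_rq stores &combined->cfs_rq, the per-cpu
address of a member inside struct cfs_rq_with_se; per_cpu_ptr() then
shifts that address by each CPU's per-cpu offset, so the same member is
reached in every CPU's copy. A rough userspace emulation, under the
simplifying assumption that per-CPU areas can be modeled as a flat
array (fake_per_cpu_ptr() is an invented stand-in for per_cpu_ptr()):

	#include <stdio.h>

	#define NR_CPUS 4

	struct fake_cfs_rq { long load; };
	struct fake_se { int depth; };
	struct combined {
		struct fake_cfs_rq cfs_rq;
		struct fake_se se;
	};

	/*
	 * Emulated per-cpu area: one struct combined per "CPU", laid
	 * out contiguously. The kernel uses dedicated per-cpu chunks,
	 * but the arithmetic of per_cpu_ptr() is the same idea: the
	 * base address of the member plus this CPU's offset.
	 */
	static struct combined pcpu_area[NR_CPUS];

	static void *fake_per_cpu_ptr(void *pcpu_addr, int cpu)
	{
		return (char *)pcpu_addr + cpu * sizeof(struct combined);
	}

	int main(void)
	{
		/* Like tg->cfs_rq = &combined->cfs_rq: keep a member address. */
		struct fake_cfs_rq *cfs_rq_pcpu = &pcpu_area[0].cfs_rq;

		for (int cpu = 0; cpu < NR_CPUS; cpu++) {
			struct fake_cfs_rq *rq = fake_per_cpu_ptr(cfs_rq_pcpu, cpu);
			rq->load = cpu * 100; /* each CPU writes only its own copy */
		}
		printf("cpu2 load = %ld\n", pcpu_area[2].cfs_rq.load); /* 200 */
		return 0;
	}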