From: Zecheng Li <zli94@ncsu.edu>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Rik van Riel, Chris Mason, Madadi Vineeth Reddy,
    Xu Liu, Blake Jones, Josh Don, Nilay Vaish, K Prateek Nayak,
    linux-kernel@vger.kernel.org, Zecheng Li
Subject: [PATCH v8 1/3] sched/fair: Co-locate cfs_rq and sched_entity in cfs_tg_state
Date: Wed, 21 Jan 2026 15:33:34 -0500
Message-ID: <20260121203402.1008441-2-zli94@ncsu.edu>
In-Reply-To: <20260121203402.1008441-1-zli94@ncsu.edu>
References: <20260121203402.1008441-1-zli94@ncsu.edu>

Improve data locality and reduce pointer chasing by allocating struct
cfs_rq and struct sched_entity together for non-root task groups. This
is achieved by introducing a new combined struct cfs_tg_state that
holds both objects in a single allocation.

This patch:

- Introduces struct cfs_tg_state, which embeds cfs_rq, sched_entity,
  and sched_statistics together in a single structure.

- Updates __schedstats_from_se() in stats.h to use cfs_tg_state for
  accessing sched_statistics from a group sched_entity.

- Modifies alloc_fair_sched_group() and free_fair_sched_group() to
  allocate and free the new struct as a single unit.

- Modifies the per-CPU pointers in task_group->se and task_group->cfs_rq
  to point to the members of the new combined structure.

A standalone sketch of the layout follows below.
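For illustration only (not part of the patch): a minimal userspace
sketch of the co-location pattern. The struct members are placeholders,
not the kernel's, and container_of() is reduced to its offsetof() form.
It shows how one allocation backs all three objects and how a pointer
to the embedded se recovers its siblings, which is what the updated
__schedstats_from_se() relies on.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Placeholder stand-ins for the kernel structures. */
struct cfs_rq { long load; };
struct sched_entity { long weight; };
struct sched_statistics { long wait_sum; };

/* offsetof() form of the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* One allocation holds all three objects, as in cfs_tg_state. */
struct cfs_tg_state {
	struct cfs_rq cfs_rq;
	struct sched_entity se;
	struct sched_statistics stats;
};

int main(void)
{
	struct cfs_tg_state *state = calloc(1, sizeof(*state));

	/* Hand out a member pointer, as tg->se[i] used to. */
	struct sched_entity *se = &state->se;

	/* From the se alone, recover the co-located statistics: this is
	 * the container_of() step in the new __schedstats_from_se(). */
	struct sched_statistics *stats =
		&container_of(se, struct cfs_tg_state, se)->stats;

	printf("state=%p se=%p stats=%p (same allocation)\n",
	       (void *)state, (void *)se, (void *)stats);
	free(state);
	return 0;
}

Because the three objects are adjacent, walking from the se to its
stats touches neighboring cache lines instead of chasing a second heap
pointer.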
Signed-off-by: Zecheng Li
Reviewed-by: K Prateek Nayak
Tested-by: K Prateek Nayak
---
 kernel/sched/fair.c  | 18 ++++++------------
 kernel/sched/sched.h | 12 ++++++++++++
 kernel/sched/stats.h |  9 +--------
 3 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04993c763a06..eadf72b3835e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13619,8 +13619,6 @@ void free_fair_sched_group(struct task_group *tg)
 	for_each_possible_cpu(i) {
 		if (tg->cfs_rq)
 			kfree(tg->cfs_rq[i]);
-		if (tg->se)
-			kfree(tg->se[i]);
 	}
 
 	kfree(tg->cfs_rq);
@@ -13629,6 +13627,7 @@ void free_fair_sched_group(struct task_group *tg)
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 {
+	struct cfs_tg_state *state;
 	struct sched_entity *se;
 	struct cfs_rq *cfs_rq;
 	int i;
@@ -13645,16 +13644,13 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));
 
 	for_each_possible_cpu(i) {
-		cfs_rq = kzalloc_node(sizeof(struct cfs_rq),
-				      GFP_KERNEL, cpu_to_node(i));
-		if (!cfs_rq)
+		state = kzalloc_node(sizeof(*state),
+				     GFP_KERNEL, cpu_to_node(i));
+		if (!state)
 			goto err;
 
-		se = kzalloc_node(sizeof(struct sched_entity_stats),
-				  GFP_KERNEL, cpu_to_node(i));
-		if (!se)
-			goto err_free_rq;
-
+		cfs_rq = &state->cfs_rq;
+		se = &state->se;
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
 		init_entity_runnable_average(se);
@@ -13662,8 +13658,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 
 	return 1;
 
-err_free_rq:
-	kfree(cfs_rq);
 err:
 	return 0;
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 58c9d244f12b..50b37ed2f7d6 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2191,6 +2191,18 @@ static inline struct task_group *task_group(struct task_struct *p)
 	return p->sched_task_group;
 }
 
+#ifdef CONFIG_FAIR_GROUP_SCHED
+/*
+ * Defined here to be available before stats.h is included, since
+ * stats.h has dependencies on things defined later in this file.
+ */
+struct cfs_tg_state {
+	struct cfs_rq		cfs_rq;
+	struct sched_entity	se;
+	struct sched_statistics	stats;
+} __no_randomize_layout;
+#endif
+
 /* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */
 static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 {
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index c903f1a42891..63b9a800a354 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -89,19 +89,12 @@ static inline void rq_sched_info_depart (struct rq *rq, unsigned long long delt
 
 #endif /* CONFIG_SCHEDSTATS */
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-struct sched_entity_stats {
-	struct sched_entity	se;
-	struct sched_statistics	stats;
-} __no_randomize_layout;
-#endif
-
 static inline struct sched_statistics *
 __schedstats_from_se(struct sched_entity *se)
 {
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (!entity_is_task(se))
-		return &container_of(se, struct sched_entity_stats, se)->stats;
+		return &container_of(se, struct cfs_tg_state, se)->stats;
 #endif
 	return &task_of(se)->stats;
 }
-- 
2.52.0

From: Zecheng Li <zli94@ncsu.edu>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Rik van Riel, Chris Mason, Madadi Vineeth Reddy,
    Xu Liu, Blake Jones, Josh Don, Nilay Vaish, K Prateek Nayak,
    linux-kernel@vger.kernel.org, Zecheng Li
Subject: [PATCH v8 2/3] sched/fair: Remove task_group->se pointer array
Date: Wed, 21 Jan 2026 15:33:35 -0500
Message-ID: <20260121203402.1008441-3-zli94@ncsu.edu>
In-Reply-To: <20260121203402.1008441-1-zli94@ncsu.edu>
References: <20260121203402.1008441-1-zli94@ncsu.edu>

Now that struct sched_entity is co-located with struct cfs_rq for
non-root task groups, the task_group->se pointer array is redundant:
the associated sched_entity can be loaded directly from the cfs_rq.

This patch performs the access conversion with the following helpers:

- is_root_task_group(tg): checks whether a task group is the root task
  group by comparing its address with the global root_task_group
  variable.

- tg_se(tg, cpu): retrieves the cfs_rq and returns the address of the
  co-located se. It checks whether tg is the root task group so that it
  behaves exactly like the previous tg->se[cpu] lookup, which yielded
  NULL for the root group. All accesses through the tg->se[cpu] pointer
  array are replaced with calls to this new accessor.

- cfs_rq_se(cfs_rq): simplifies access paths such as
  cfs_rq->tg->se[...] to use the co-located sched_entity. It performs
  the same root-task-group check to preserve the previous behavior.

Since tg_se() is not on very hot code paths, and the branch is a
register comparison against an immediate value (&root_task_group), the
performance impact is expected to be negligible. A sketch of the
accessor pattern follows below.
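For illustration only (not part of the patch): a userspace sketch of
the accessor with placeholder types; the real helpers are added to
kernel/sched/sched.h in the diff below. It shows the sentinel
comparison against the root group followed by the container_of() step.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct cfs_rq { long load; };
struct sched_entity { int depth; };

struct cfs_tg_state {
	struct cfs_rq cfs_rq;
	struct sched_entity se;
};

struct task_group {
	struct cfs_rq **cfs_rq;	/* per-CPU cfs_rq pointers, as after patch 1 */
};

static struct task_group root_task_group;

#define is_root_task_group(tg)	((tg) == &root_task_group)

/* The root group owns no group entities, so return NULL for it,
 * matching what the old tg->se[cpu] array held. Otherwise recover
 * the co-located se from the stored cfs_rq pointer. */
static struct sched_entity *tg_se(struct task_group *tg, int cpu)
{
	struct cfs_tg_state *state;

	if (is_root_task_group(tg))
		return NULL;

	state = container_of(tg->cfs_rq[cpu], struct cfs_tg_state, cfs_rq);
	return &state->se;
}

int main(void)
{
	struct cfs_tg_state *state = calloc(1, sizeof(*state));
	struct cfs_rq *per_cpu[1] = { &state->cfs_rq };
	struct task_group tg = { .cfs_rq = per_cpu };

	printf("root se: %p, group se: %p\n",
	       (void *)tg_se(&root_task_group, 0), (void *)tg_se(&tg, 0));
	free(state);
	return 0;
}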
Signed-off-by: Zecheng Li
Reviewed-by: K Prateek Nayak
Tested-by: K Prateek Nayak
---
 kernel/sched/core.c  |  7 ++-----
 kernel/sched/debug.c |  2 +-
 kernel/sched/fair.c  | 25 +++++++++----------------
 kernel/sched/sched.h | 31 ++++++++++++++++++++++++++-----
 4 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3cca012d1259..8e2a67cecee9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8565,7 +8565,7 @@ void __init sched_init(void)
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr += nr_cpu_ids * sizeof(void **);
 #endif
 #ifdef CONFIG_RT_GROUP_SCHED
 	ptr += 2 * nr_cpu_ids * sizeof(void **);
@@ -8574,9 +8574,6 @@ void __init sched_init(void)
 		ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-		root_task_group.se = (struct sched_entity **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
-
 		root_task_group.cfs_rq = (struct cfs_rq **)ptr;
 		ptr += nr_cpu_ids * sizeof(void **);
 
@@ -9644,7 +9641,7 @@ static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
 		int i;
 
 		for_each_possible_cpu(i) {
-			stats = __schedstats_from_se(tg->se[i]);
+			stats = __schedstats_from_se(tg_se(tg, i));
 			ws += schedstat_val(stats->wait_sum);
 		}
 
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 5f9b77195159..544d9ae4e0df 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -644,7 +644,7 @@ void dirty_sched_domain_sysctl(int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group *tg)
 {
-	struct sched_entity *se = tg->se[cpu];
+	struct sched_entity *se = tg_se(tg, cpu);
 
 #define P(F) SEQ_printf(m, "  .%-30s: %lld\n", #F, (long long)F)
 #define P_SCHEDSTAT(F) SEQ_printf(m, "  .%-30s: %lld\n", \
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eadf72b3835e..8872d003af98 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5971,7 +5971,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
-	struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)];
+	struct sched_entity *se = cfs_rq_se(cfs_rq);
 
 	/*
 	 * It's possible we are called with runtime_remaining < 0 due to things
@@ -9839,7 +9839,6 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 {
 	struct cfs_rq *cfs_rq, *pos;
 	bool decayed = false;
-	int cpu = cpu_of(rq);
 
 	/*
 	 * Iterates the task_group tree in a bottom up fashion, see
@@ -9859,7 +9858,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 		}
 
 		/* Propagate pending load changes to the parent, if any: */
-		se = cfs_rq->tg->se[cpu];
+		se = cfs_rq_se(cfs_rq);
 		if (se && !skip_blocked_update(se))
 			update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
 
@@ -9885,8 +9884,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
  */
 static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 {
-	struct rq *rq = rq_of(cfs_rq);
-	struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)];
+	struct sched_entity *se = cfs_rq_se(cfs_rq);
 	unsigned long now = jiffies;
 	unsigned long load;
 
@@ -13622,7 +13620,6 @@ void free_fair_sched_group(struct task_group *tg)
 	}
 
 	kfree(tg->cfs_rq);
-	kfree(tg->se);
 }
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
@@ -13635,9 +13632,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
 	if (!tg->cfs_rq)
 		goto err;
-	tg->se = kcalloc(nr_cpu_ids, sizeof(se), GFP_KERNEL);
-	if (!tg->se)
-		goto err;
 
 	tg->shares = NICE_0_LOAD;
 
@@ -13652,7 +13646,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		cfs_rq = &state->cfs_rq;
 		se = &state->se;
 		init_cfs_rq(cfs_rq);
-		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
+		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
 	}
 
@@ -13671,7 +13665,7 @@ void online_fair_sched_group(struct task_group *tg)
 
 	for_each_possible_cpu(i) {
 		rq = cpu_rq(i);
-		se = tg->se[i];
+		se = tg_se(tg, i);
 		rq_lock_irq(rq, &rf);
 		update_rq_clock(rq);
 		attach_entity_cfs_rq(se);
@@ -13688,7 +13682,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 
 	for_each_possible_cpu(cpu) {
 		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
-		struct sched_entity *se = tg->se[cpu];
+		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);
 
 		if (se) {
@@ -13725,7 +13719,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	init_cfs_rq_runtime(cfs_rq);
 
 	tg->cfs_rq[cpu] = cfs_rq;
-	tg->se[cpu] = se;
 
 	/* se could be NULL for root_task_group */
 	if (!se)
@@ -13756,7 +13749,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	/*
 	 * We can't change the weight of the root cgroup.
 	 */
-	if (!tg->se[0])
+	if (is_root_task_group(tg))
 		return -EINVAL;
 
 	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
@@ -13767,7 +13760,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	tg->shares = shares;
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct rq_flags rf;
 
 		/* Propagate contribution to hierarchy */
@@ -13818,7 +13811,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 50b37ed2f7d6..530b1d06e2d5 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -476,8 +476,6 @@ struct task_group {
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	/* schedulable entities of this group on each CPU */
-	struct sched_entity **se;
 	/* runqueue "owned" by this group on each CPU */
 	struct cfs_rq **cfs_rq;
 	unsigned long shares;
@@ -915,7 +913,8 @@ struct dl_rq {
 };
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-
+/* Check whether a task group is root tg */
+#define is_root_task_group(tg)	((tg) == &root_task_group)
 /* An entity is a task if it doesn't "own" a runqueue */
 #define entity_is_task(se)	(!se->my_q)
 
@@ -2201,6 +2200,28 @@ struct cfs_tg_state {
 	struct sched_entity	se;
 	struct sched_statistics	stats;
 } __no_randomize_layout;
+
+static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
+{
+	struct cfs_tg_state *state;
+
+	if (is_root_task_group(tg))
+		return NULL;
+
+	state = container_of(tg->cfs_rq[cpu], struct cfs_tg_state, cfs_rq);
+	return &state->se;
+}
+
+static inline struct sched_entity *cfs_rq_se(struct cfs_rq *cfs_rq)
+{
+	struct cfs_tg_state *state;
+
+	if (is_root_task_group(cfs_rq->tg))
+		return NULL;
+
+	state = container_of(cfs_rq, struct cfs_tg_state, cfs_rq);
+	return &state->se;
+}
 #endif
 
 /* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */
@@ -2213,8 +2234,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
-	p->se.parent = tg->se[cpu];
-	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
+	p->se.parent = tg_se(tg, cpu);
+	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
-- 
2.52.0

From: Zecheng Li <zli94@ncsu.edu>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Rik van Riel, Chris Mason, Madadi Vineeth Reddy,
    Xu Liu, Blake Jones, Josh Don, Nilay Vaish, K Prateek Nayak,
    linux-kernel@vger.kernel.org, Zecheng Li
Subject: [PATCH v8 3/3] sched/fair: Allocate cfs_tg_state with percpu allocator
Date: Wed, 21 Jan 2026 15:33:36 -0500
Message-ID: <20260121203402.1008441-4-zli94@ncsu.edu>
In-Reply-To: <20260121203402.1008441-1-zli94@ncsu.edu>
References: <20260121203402.1008441-1-zli94@ncsu.edu>

To remove the cfs_rq pointer array in task_group, allocate the combined
cfs_rq and sched_entity with the per-CPU allocator.

This patch implements the following:

- Changes task_group->cfs_rq from struct cfs_rq ** to
  struct cfs_rq __percpu *.

- Updates the memory allocation in alloc_fair_sched_group() and
  free_fair_sched_group() to use alloc_percpu() and free_percpu()
  respectively.

- Adds the inline accessor tg_cfs_rq(tg, cpu), based on per_cpu_ptr(),
  to retrieve the pointer to the cfs_rq for a given task group and CPU.

- Replaces direct tg->cfs_rq[cpu] accesses with calls to the new
  tg_cfs_rq(tg, cpu) helper.

- Handles the root_task_group: since struct rq is already a per-CPU
  variable (runqueues), its embedded cfs_rq (rq->cfs) is also per-CPU.
  Therefore, we simply assign root_task_group.cfs_rq = &runqueues.cfs.

- Cleans up the root task group initialization code.

This change places each CPU's cfs_rq and sched_entity in that CPU's
local per-CPU memory area and removes the per-task_group pointer
arrays. A sketch of the per-CPU addressing pattern follows below.
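For illustration only (not part of the patch): the userspace sketch
below emulates alloc_percpu()/per_cpu_ptr() with one contiguous chunk
and a fixed stride. The real allocator instead hands out offset-mapped,
node-local per-CPU areas; the emulation only shows how tg_cfs_rq()
turns a single base pointer plus a CPU index into that CPU's cfs_rq.

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

struct cfs_rq { long load; };
struct sched_entity { long weight; };

struct cfs_tg_state {
	struct cfs_rq cfs_rq;
	struct sched_entity se;
};

/* Userspace stand-ins for alloc_percpu()/per_cpu_ptr(): NR_CPUS
 * copies in one chunk, addressed by base pointer plus stride. */
static struct cfs_tg_state *fake_alloc_percpu(void)
{
	return calloc(NR_CPUS, sizeof(struct cfs_tg_state));
}

static struct cfs_rq *fake_per_cpu_ptr(struct cfs_rq *base, int cpu)
{
	return (struct cfs_rq *)((char *)base +
				 cpu * sizeof(struct cfs_tg_state));
}

struct task_group {
	struct cfs_rq *cfs_rq;	/* plays the role of the __percpu pointer */
};

/* Mirrors the shape of the patch's tg_cfs_rq(tg, cpu) accessor. */
static struct cfs_rq *tg_cfs_rq(struct task_group *tg, int cpu)
{
	return fake_per_cpu_ptr(tg->cfs_rq, cpu);
}

int main(void)
{
	struct cfs_tg_state *state = fake_alloc_percpu();
	struct task_group tg = { .cfs_rq = &state->cfs_rq };
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d cfs_rq at %p\n", cpu, (void *)tg_cfs_rq(&tg, cpu));

	free(state);
	return 0;
}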
Signed-off-by: Zecheng Li
Reviewed-by: K Prateek Nayak
Tested-by: K Prateek Nayak
---
 kernel/sched/core.c  | 35 +++++++++++----------------
 kernel/sched/fair.c  | 54 ++++++++++++++++++--------------------------
 kernel/sched/sched.h | 14 ++++++++----
 3 files changed, 45 insertions(+), 58 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8e2a67cecee9..80e8f4eb3f87 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8549,7 +8549,7 @@ static struct kmem_cache *task_group_cache __ro_after_init;
 
 void __init sched_init(void)
 {
-	unsigned long ptr = 0;
+	unsigned long __maybe_unused ptr = 0;
 	int i;
 
 	/* Make sure the linker didn't screw up */
@@ -8565,33 +8565,24 @@ void __init sched_init(void)
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += nr_cpu_ids * sizeof(void **);
-#endif
-#ifdef CONFIG_RT_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
-#endif
-	if (ptr) {
-		ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.cfs_rq = &runqueues.cfs;
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-		root_task_group.cfs_rq = (struct cfs_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
-
-		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
-		init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
+	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+	init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 #ifdef CONFIG_EXT_GROUP_SCHED
-		scx_tg_init(&root_task_group);
+	scx_tg_init(&root_task_group);
 #endif /* CONFIG_EXT_GROUP_SCHED */
 #ifdef CONFIG_RT_GROUP_SCHED
-		root_task_group.rt_se = (struct sched_rt_entity **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.rt_se = (struct sched_rt_entity **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);
 
-		root_task_group.rt_rq = (struct rt_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	root_task_group.rt_rq = (struct rt_rq **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);
 
 #endif /* CONFIG_RT_GROUP_SCHED */
-	}
 
 	init_defrootdomain();
 
@@ -9492,7 +9483,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg,
 	}
 
 	for_each_online_cpu(i) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, i);
 		struct rq *rq = cfs_rq->rq;
 
 		guard(rq_lock_irq)(rq);
@@ -9660,7 +9651,7 @@ static u64 throttled_time_self(struct task_group *tg)
 	u64 total = 0;
 
 	for_each_possible_cpu(i) {
-		total += READ_ONCE(tg->cfs_rq[i]->throttled_clock_self_time);
+		total += READ_ONCE(tg_cfs_rq(tg, i)->throttled_clock_self_time);
 	}
 
 	return total;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8872d003af98..bc023704acd1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -327,7 +327,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	 * to a tree or when we reach the top of the tree
 	 */
 	if (cfs_rq->tg->parent &&
-	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
+	    tg_cfs_rq(cfs_rq->tg->parent, cpu)->on_list) {
 		/*
 		 * If parent is already on the list, we add the child
 		 * just before. Thanks to circular linked property of
@@ -335,7 +335,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 		 * of the list that starts by parent.
 		 */
 		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
+			&(tg_cfs_rq(cfs_rq->tg->parent, cpu)->leaf_cfs_rq_list));
 		/*
 		 * The branch is now connected to its tree so we can
 		 * reset tmp_alone_branch to the beginning of the
@@ -4153,7 +4153,7 @@ static void __maybe_unused clear_tg_offline_cfs_rqs(struct rq *rq)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		clear_tg_load_avg(cfs_rq);
 	}
@@ -5689,7 +5689,7 @@ static inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
 
 static inline int lb_throttled_hierarchy(struct task_struct *p, int dst_cpu)
 {
-	return throttled_hierarchy(task_group(p)->cfs_rq[dst_cpu]);
+	return throttled_hierarchy(tg_cfs_rq(task_group(p), dst_cpu));
 }
 
 static inline bool task_is_throttled(struct task_struct *p)
@@ -5835,7 +5835,7 @@ static void enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags);
 static int tg_unthrottle_up(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 	struct task_struct *p, *tmp;
 
 	if (--cfs_rq->throttle_count)
@@ -5906,7 +5906,7 @@ static void record_throttle_clock(struct cfs_rq *cfs_rq)
 static int tg_throttle_down(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 	if (cfs_rq->throttle_count++)
 		return 0;
@@ -6379,8 +6379,8 @@ static void sync_throttle(struct task_group *tg, int cpu)
 	if (!tg->parent)
 		return;
 
-	cfs_rq = tg->cfs_rq[cpu];
-	pcfs_rq = tg->parent->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(tg, cpu);
+	pcfs_rq = tg_cfs_rq(tg->parent, cpu);
 
 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
 	cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
@@ -6572,7 +6572,7 @@ static void __maybe_unused update_runtime_enabled(struct rq *rq)
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
 		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		raw_spin_lock(&cfs_b->lock);
 		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
@@ -6601,7 +6601,7 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		if (!cfs_rq->runtime_enabled)
 			continue;
@@ -9408,7 +9408,7 @@ static inline int task_is_ineligible_on_dst_cpu(struct task_struct *p, int dest_
 	struct cfs_rq *dst_cfs_rq;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	dst_cfs_rq = task_group(p)->cfs_rq[dest_cpu];
+	dst_cfs_rq = tg_cfs_rq(task_group(p), dest_cpu);
 #else
 	dst_cfs_rq = &cpu_rq(dest_cpu)->cfs;
 #endif
@@ -13346,7 +13346,7 @@ static int task_is_throttled_fair(struct task_struct *p, int cpu)
 	struct cfs_rq *cfs_rq;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	cfs_rq = task_group(p)->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(task_group(p), cpu);
 #else
 	cfs_rq = &cpu_rq(cpu)->cfs;
 #endif
@@ -13612,39 +13612,31 @@ static void task_change_group_fair(struct task_struct *p)
 
 void free_fair_sched_group(struct task_group *tg)
 {
-	int i;
-
-	for_each_possible_cpu(i) {
-		if (tg->cfs_rq)
-			kfree(tg->cfs_rq[i]);
-	}
-
-	kfree(tg->cfs_rq);
+	free_percpu(tg->cfs_rq);
 }
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 {
-	struct cfs_tg_state *state;
+	struct cfs_tg_state __percpu *state;
 	struct sched_entity *se;
 	struct cfs_rq *cfs_rq;
 	int i;
 
-	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
-	if (!tg->cfs_rq)
+	state = alloc_percpu_gfp(struct cfs_tg_state, GFP_KERNEL);
+	if (!state)
 		goto err;
 
+	tg->cfs_rq = &state->cfs_rq;
 	tg->shares = NICE_0_LOAD;
 
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));
 
 	for_each_possible_cpu(i) {
-		state = kzalloc_node(sizeof(*state),
-				     GFP_KERNEL, cpu_to_node(i));
-		if (!state)
+		cfs_rq = tg_cfs_rq(tg, i);
+		if (!cfs_rq)
 			goto err;
 
-		cfs_rq = &state->cfs_rq;
-		se = &state->se;
+		se = tg_se(tg, i);
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
@@ -13681,7 +13673,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 	destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));
 
 	for_each_possible_cpu(cpu) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu);
 		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);
 
@@ -13718,8 +13710,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	cfs_rq->rq = rq;
 	init_cfs_rq_runtime(cfs_rq);
 
-	tg->cfs_rq[cpu] = cfs_rq;
-
 	/* se could be NULL for root_task_group */
 	if (!se)
 		return;
@@ -13812,7 +13802,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
 		struct sched_entity *se = tg_se(tg, i);
-		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *grp_cfs_rq = tg_cfs_rq(tg, i);
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
 		struct rq_flags rf;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 530b1d06e2d5..a05be7a8e04a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -477,7 +477,7 @@ struct task_group {
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* runqueue "owned" by this group on each CPU */
-	struct cfs_rq **cfs_rq;
+	struct cfs_rq __percpu *cfs_rq;
 	unsigned long shares;
 	/*
 	 * load_avg can be heavily contended at clock tick time, so put
@@ -2201,6 +2201,12 @@ struct cfs_tg_state {
 	struct sched_statistics	stats;
 } __no_randomize_layout;
 
+/* Access a specific CPU's cfs_rq from a task group */
+static inline struct cfs_rq *tg_cfs_rq(struct task_group *tg, int cpu)
+{
+	return per_cpu_ptr(tg->cfs_rq, cpu);
+}
+
 static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
 {
 	struct cfs_tg_state *state;
@@ -2208,7 +2214,7 @@ static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
 	if (is_root_task_group(tg))
 		return NULL;
 
-	state = container_of(tg->cfs_rq[cpu], struct cfs_tg_state, cfs_rq);
+	state = container_of(tg_cfs_rq(tg, cpu), struct cfs_tg_state, cfs_rq);
 	return &state->se;
 }
 
@@ -2232,8 +2238,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
-	p->se.cfs_rq = tg->cfs_rq[cpu];
+	set_task_rq_fair(&p->se, p->se.cfs_rq, tg_cfs_rq(tg, cpu));
+	p->se.cfs_rq = tg_cfs_rq(tg, cpu);
 	p->se.parent = tg_se(tg, cpu);
 	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif
-- 
2.52.0