Date: Wed, 4 Jun 2025 19:58:41 +0000
Message-ID: <20250604195846.193159-2-zecheng@google.com>
In-Reply-To: <20250604195846.193159-1-zecheng@google.com>
References: <20250604195846.193159-1-zecheng@google.com>
Subject: [RFC PATCH 1/3] sched/fair: Embed sched_entity into cfs_rq
From: Zecheng Li
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Xu Liu, Blake Jones, Josh Don,
    linux-kernel@vger.kernel.org, Zecheng Li

To improve data locality and avoid pointer chasing, embed struct
sched_entity within struct cfs_rq. This co-locates the runqueue state
(cfs_rq) and the entity's scheduling state (se).

This patch implements the following:

- Adds a struct sched_entity field to struct cfs_rq.

- Modifies alloc_fair_sched_group() and free_fair_sched_group() to
  remove the separate allocation and freeing logic for sched_entity
  objects themselves.

- The task_group->se pointer array (struct sched_entity **se) is
  retained. The pointers in this array are updated to point to the
  corresponding embedded &cfs_rq->se for each CPU.

Signed-off-by: Zecheng Li

---
 kernel/sched/fair.c  | 10 +---------
 kernel/sched/sched.h |  4 ++++
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0fb9bf995a47..c2af9896eef4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13343,8 +13343,6 @@ void free_fair_sched_group(struct task_group *tg)
 	for_each_possible_cpu(i) {
 		if (tg->cfs_rq)
 			kfree(tg->cfs_rq[i]);
-		if (tg->se)
-			kfree(tg->se[i]);
 	}

 	kfree(tg->cfs_rq);
@@ -13374,11 +13372,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		if (!cfs_rq)
 			goto err;

-		se = kzalloc_node(sizeof(struct sched_entity_stats),
-				  GFP_KERNEL, cpu_to_node(i));
-		if (!se)
-			goto err_free_rq;
-
+		se = &cfs_rq->se;
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
 		init_entity_runnable_average(se);
@@ -13386,8 +13380,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)

 	return 1;

-err_free_rq:
-	kfree(cfs_rq);
 err:
 	return 0;
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 47972f34ea70..6e26b7d59c13 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -738,6 +738,10 @@ struct cfs_rq {
 	struct list_head	throttled_csd_list;
#endif /* CONFIG_CFS_BANDWIDTH */
#endif /* CONFIG_FAIR_GROUP_SCHED */
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	/* sched_entity on parent runqueue */
+	struct sched_entity	se ____cacheline_aligned;
+#endif
 };

 #ifdef CONFIG_SCHED_CLASS_EXT
--
2.50.0
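The locality claim is easiest to see in a small userspace sketch
(placeholder structs and fields, not the kernel's): with a pointer
array, reaching a group's entity takes a second dependent load into a
separately allocated object; with embedding, the entity sits at a
fixed offset inside the cfs_rq the scheduler is already touching.

#include <stddef.h>
#include <stdio.h>

struct entity { long weight; };

/* Before: the entity hangs off a per-CPU pointer array, so se[cpu]
 * is a load whose result must itself be dereferenced again. */
struct group_old {
	struct entity **se;		/* se[cpu] -> separate allocation */
};

/* After: the entity is embedded, so its address is rq + constant. */
struct rq_new {
	long nr_running;		/* stand-in for hot cfs_rq state */
	struct entity se;		/* co-located with that state */
};

int main(void)
{
	struct rq_new rq = { .nr_running = 1, .se = { .weight = 1024 } };

	/* No pointer chase: the compiler folds this into one address. */
	printf("se at offset %zu, weight %ld\n",
	       offsetof(struct rq_new, se), rq.se.weight);
	return 0;
}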
Date: Wed, 4 Jun 2025 19:58:42 +0000
Message-ID: <20250604195846.193159-3-zecheng@google.com>
In-Reply-To: <20250604195846.193159-1-zecheng@google.com>
References: <20250604195846.193159-1-zecheng@google.com>
Subject: [RFC PATCH 2/3] sched/fair: Remove task_group->se pointer
From: Zecheng Li
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Xu Liu, Blake Jones, Josh Don,
    linux-kernel@vger.kernel.org, Zecheng Li

Following the embedding of sched_entity into cfs_rq, the
task_group->se pointer array is now redundant. All accesses to the
sched_entity should go via the cfs_rq.

This patch performs the access conversion:

- Adds a macro is_root_task_group(tg) to check whether a task group is
  the root task group. This helper compares the task group's address
  with the global root_task_group variable.

- Adds a helper tg_se(tg, cpu) which retrieves the cfs_rq and returns
  the address of the embedded se. The helper checks whether tg is the
  root task group, so it behaves exactly like the previous tg->se[cpu]
  (which held NULL for the root task group). Replaces all accesses
  that use the tg->se[cpu] pointer array with calls to the new
  tg_se(tg, cpu) accessor.

- Adds a helper cfs_rq_se(cfs_rq) to convert access paths like
  cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))] to the direct embedded
  cfs_rq->se. This helper performs the same root task group check.

- Removes the se field (struct sched_entity **se) from struct
  task_group.

Since tg_se() is not on very hot code paths, and the added branch is a
register comparison with an immediate value (&root_task_group), the
performance impact is expected to be low.
Signed-off-by: Zecheng Li

---
 kernel/sched/core.c  |  7 ++-----
 kernel/sched/debug.c |  2 +-
 kernel/sched/fair.c  | 27 ++++++++++-----------------
 kernel/sched/sched.h | 25 ++++++++++++++++++++-----
 4 files changed, 33 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c81cf642dba0..8598492854fc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8544,7 +8544,7 @@ void __init sched_init(void)
 	wait_bit_init();

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr += nr_cpu_ids * sizeof(void **);
 #endif
 #ifdef CONFIG_RT_GROUP_SCHED
 	ptr += 2 * nr_cpu_ids * sizeof(void **);
@@ -8553,9 +8553,6 @@ void __init sched_init(void)
 	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	root_task_group.se = (struct sched_entity **)ptr;
-	ptr += nr_cpu_ids * sizeof(void **);
-
 	root_task_group.cfs_rq = (struct cfs_rq **)ptr;
 	ptr += nr_cpu_ids * sizeof(void **);

@@ -9743,7 +9740,7 @@ static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
 		int i;

 		for_each_possible_cpu(i) {
-			stats = __schedstats_from_se(tg->se[i]);
+			stats = __schedstats_from_se(tg_se(tg, i));
 			ws += schedstat_val(stats->wait_sum);
 		}

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 56ae54e0ce6a..385076d5741c 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -653,7 +653,7 @@ void dirty_sched_domain_sysctl(int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group *tg)
 {
-	struct sched_entity *se = tg->se[cpu];
+	struct sched_entity *se = tg_se(tg, cpu);

 #define P(F) SEQ_printf(m, "  .%-30s: %lld\n", #F, (long long)F)
 #define P_SCHEDSTAT(F) SEQ_printf(m, "  .%-30s: %lld\n", \
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c2af9896eef4..2ca01587f956 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5907,7 +5907,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	if (!dequeue)
 		return false; /* Throttle no longer required. */

-	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
+	se = cfs_rq_se(cfs_rq);

 	/* freeze hierarchy runnable averages while throttled */
 	rcu_read_lock();
@@ -5992,7 +5992,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	long queued_delta, runnable_delta, idle_delta;
 	long rq_h_nr_queued = rq->cfs.h_nr_queued;

-	se = cfs_rq->tg->se[cpu_of(rq)];
+	se = cfs_rq_se(cfs_rq);

 	cfs_rq->throttled = 0;

@@ -9787,7 +9787,6 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 {
 	struct cfs_rq *cfs_rq, *pos;
 	bool decayed = false;
-	int cpu = cpu_of(rq);

 	/*
	 * Iterates the task_group tree in a bottom up fashion, see
@@ -9807,7 +9806,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 		}

 		/* Propagate pending load changes to the parent, if any: */
-		se = cfs_rq->tg->se[cpu];
+		se = cfs_rq_se(cfs_rq);
 		if (se && !skip_blocked_update(se))
 			update_load_avg(cfs_rq_of(se), se, UPDATE_TG);

@@ -9833,8 +9832,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
  */
 static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 {
-	struct rq *rq = rq_of(cfs_rq);
-	struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)];
+	struct sched_entity *se = cfs_rq_se(cfs_rq);
 	unsigned long now = jiffies;
 	unsigned long load;

@@ -13346,7 +13344,6 @@ void free_fair_sched_group(struct task_group *tg)
 	}

 	kfree(tg->cfs_rq);
-	kfree(tg->se);
 }

 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
@@ -13358,9 +13355,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
 	if (!tg->cfs_rq)
 		goto err;
-	tg->se = kcalloc(nr_cpu_ids, sizeof(se), GFP_KERNEL);
-	if (!tg->se)
-		goto err;

 	tg->shares = NICE_0_LOAD;

@@ -13374,7 +13368,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)

 		se = &cfs_rq->se;
 		init_cfs_rq(cfs_rq);
-		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
+		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
 	}

@@ -13393,7 +13387,7 @@ void online_fair_sched_group(struct task_group *tg)

 	for_each_possible_cpu(i) {
 		rq = cpu_rq(i);
-		se = tg->se[i];
+		se = tg_se(tg, i);
 		rq_lock_irq(rq, &rf);
 		update_rq_clock(rq);
 		attach_entity_cfs_rq(se);
@@ -13410,7 +13404,7 @@ void unregister_fair_sched_group(struct task_group *tg)

 	for_each_possible_cpu(cpu) {
 		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
-		struct sched_entity *se = tg->se[cpu];
+		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);

 		if (se) {
@@ -13447,7 +13441,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	init_cfs_rq_runtime(cfs_rq);

 	tg->cfs_rq[cpu] = cfs_rq;
-	tg->se[cpu] = se;

 	/* se could be NULL for root_task_group */
 	if (!se)
@@ -13478,7 +13471,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	/*
	 * We can't change the weight of the root cgroup.
	 */
-	if (!tg->se[0])
+	if (is_root_task_group(tg))
 		return -EINVAL;

 	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
@@ -13489,7 +13482,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	tg->shares = shares;
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct rq_flags rf;

 		/* Propagate contribution to hierarchy */
@@ -13540,7 +13533,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)

 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 6e26b7d59c13..644e19476c83 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -437,8 +437,6 @@ struct task_group {
 #endif

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	/* schedulable entities of this group on each CPU */
-	struct sched_entity	**se;
 	/* runqueue "owned" by this group on each CPU */
 	struct cfs_rq		**cfs_rq;
 	unsigned long		shares;
@@ -897,7 +895,8 @@ struct dl_rq {
 };

 #ifdef CONFIG_FAIR_GROUP_SCHED
-
+/* Check whether a task group is root tg */
+#define is_root_task_group(tg)	((tg) == &root_task_group)
 /* An entity is a task if it doesn't "own" a runqueue */
 #define entity_is_task(se)	(!se->my_q)

@@ -1571,6 +1570,22 @@ static inline struct task_struct *task_of(struct sched_entity *se)
 	return container_of(se, struct task_struct, se);
 }

+static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
+{
+	if (is_root_task_group(tg))
+		return NULL;
+
+	return &tg->cfs_rq[cpu]->se;
+}
+
+static inline struct sched_entity *cfs_rq_se(struct cfs_rq *cfs_rq)
+{
+	if (is_root_task_group(cfs_rq->tg))
+		return NULL;
+
+	return &cfs_rq->se;
+}
+
 static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
 {
 	return p->se.cfs_rq;
@@ -2145,8 +2160,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
-	p->se.parent = tg->se[cpu];
-	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
+	p->se.parent = tg_se(tg, cpu);
+	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif

 #ifdef CONFIG_RT_GROUP_SCHED
--
2.50.0
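The NULL-for-root convention that the tg_se() helper above preserves
can be exercised in a standalone userspace sketch; the structs below
are simplified stand-ins for the kernel's, with only the fields the
helper needs.

#include <stdio.h>

struct sched_entity { int depth; };
struct cfs_rq { struct sched_entity se; };

struct task_group {
	struct cfs_rq **cfs_rq;	/* one runqueue per CPU */
};

static struct task_group root_task_group;

#define is_root_task_group(tg) ((tg) == &root_task_group)

/* Mirrors the kernel helper: NULL for the root group, exactly like
 * the old tg->se[cpu] entry, so existing NULL checks keep working. */
static struct sched_entity *tg_se(struct task_group *tg, int cpu)
{
	if (is_root_task_group(tg))
		return NULL;
	return &tg->cfs_rq[cpu]->se;
}

int main(void)
{
	struct cfs_rq rq = { .se = { .depth = 1 } };
	struct cfs_rq *per_cpu[1] = { &rq };
	struct task_group child = { .cfs_rq = per_cpu };

	printf("root se: %p\n", (void *)tg_se(&root_task_group, 0));
	printf("child se depth: %d\n", tg_se(&child, 0)->depth);
	return 0;
}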
Date: Wed, 4 Jun 2025 19:58:43 +0000
Message-ID: <20250604195846.193159-4-zecheng@google.com>
In-Reply-To: <20250604195846.193159-1-zecheng@google.com>
References: <20250604195846.193159-1-zecheng@google.com>
Subject: [RFC PATCH 3/3] sched/fair: Allocate cfs_rq structs per-cpu
From: Zecheng Li
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Xu Liu, Blake Jones, Josh Don,
    linux-kernel@vger.kernel.org, Zecheng Li

To remove the cfs_rq pointer array in task_group and improve cache
performance, allocate the cfs_rq structs (which now contain the
embedded sched_entity) with the per-cpu allocator.

This patch implements the following:

- Changes task_group->cfs_rq from struct cfs_rq ** to
  struct cfs_rq __percpu *.

- Updates alloc_fair_sched_group() and free_fair_sched_group() to use
  alloc_percpu() and free_percpu() respectively for the cfs_rq data
  associated with a task group.

- Uses the inline accessor tg_cfs_rq(tg, cpu), built on per_cpu_ptr(),
  to retrieve the cfs_rq pointer for a given task group and CPU.

- Replaces direct accesses like the former tg->cfs_rq[cpu] with calls
  to the new tg_cfs_rq(tg, cpu) helper.

- Handles the root_task_group: since struct rq is already a per-cpu
  variable (runqueues), its embedded cfs_rq (rq->cfs) is also per-cpu.
  Therefore, we assign root_task_group.cfs_rq = &runqueues.cfs.

- Cleans up the initialization code for the root task group
  accordingly.

This change places each CPU's cfs_rq and sched_entity in its local
per-cpu memory area.
Signed-off-by: Zecheng Li

---
 kernel/sched/core.c  | 35 +++++++++++++--------------------
 kernel/sched/fair.c  | 46 +++++++++++++++++---------------------------
 kernel/sched/sched.h | 13 +++++++++----
 3 files changed, 40 insertions(+), 54 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8598492854fc..60b9872e4b01 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8526,7 +8526,7 @@ static struct kmem_cache *task_group_cache __ro_after_init;

 void __init sched_init(void)
 {
-	unsigned long ptr = 0;
+	unsigned long __maybe_unused ptr = 0;
 	int i;

 	/* Make sure the linker didn't screw up */
@@ -8544,33 +8544,24 @@ void __init sched_init(void)
 	wait_bit_init();

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += nr_cpu_ids * sizeof(void **);
-#endif
-#ifdef CONFIG_RT_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
-#endif
-	if (ptr) {
-		ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.cfs_rq = &runqueues.cfs;

-#ifdef CONFIG_FAIR_GROUP_SCHED
-		root_task_group.cfs_rq = (struct cfs_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
-
-		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
-		init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
+	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+	init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 #ifdef CONFIG_EXT_GROUP_SCHED
-		root_task_group.scx_weight = CGROUP_WEIGHT_DFL;
+	root_task_group.scx_weight = CGROUP_WEIGHT_DFL;
 #endif /* CONFIG_EXT_GROUP_SCHED */
 #ifdef CONFIG_RT_GROUP_SCHED
-		root_task_group.rt_se = (struct sched_rt_entity **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.rt_se = (struct sched_rt_entity **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);

-		root_task_group.rt_rq = (struct rt_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	root_task_group.rt_rq = (struct rt_rq **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);

 #endif /* CONFIG_RT_GROUP_SCHED */
-	}

 #ifdef CONFIG_SMP
 	init_defrootdomain();
@@ -9511,7 +9502,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota,
 	}

 	for_each_online_cpu(i) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, i);
 		struct rq *rq = cfs_rq->rq;

 		guard(rq_lock_irq)(rq);
@@ -9759,7 +9750,7 @@ static u64 throttled_time_self(struct task_group *tg)
 	u64 total = 0;

 	for_each_possible_cpu(i) {
-		total += READ_ONCE(tg->cfs_rq[i]->throttled_clock_self_time);
+		total += READ_ONCE(tg_cfs_rq(tg, i)->throttled_clock_self_time);
 	}

 	return total;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2ca01587f956..c24d8f16ff54 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -329,7 +329,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
	 * to a tree or when we reach the top of the tree
	 */
 	if (cfs_rq->tg->parent &&
-	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
+	    tg_cfs_rq(cfs_rq->tg->parent, cpu)->on_list) {
 		/*
		 * If parent is already on the list, we add the child
		 * just before. Thanks to circular linked property of
@@ -337,7 +337,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
		 * of the list that starts by parent.
		 */
 		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
+			&(tg_cfs_rq(cfs_rq->tg->parent, cpu)->leaf_cfs_rq_list));
 		/*
		 * The branch is now connected to its tree so we can
		 * reset tmp_alone_branch to the beginning of the
@@ -4168,7 +4168,7 @@ static void __maybe_unused clear_tg_offline_cfs_rqs(struct rq *rq)

 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));

 		clear_tg_load_avg(cfs_rq);
 	}
@@ -5823,8 +5823,8 @@ static inline int throttled_lb_pair(struct task_group *tg,
 {
 	struct cfs_rq *src_cfs_rq, *dest_cfs_rq;

-	src_cfs_rq = tg->cfs_rq[src_cpu];
-	dest_cfs_rq = tg->cfs_rq[dest_cpu];
+	src_cfs_rq = tg_cfs_rq(tg, src_cpu);
+	dest_cfs_rq = tg_cfs_rq(tg, dest_cpu);

 	return throttled_hierarchy(src_cfs_rq) ||
 	       throttled_hierarchy(dest_cfs_rq);
@@ -5833,7 +5833,7 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));

 	cfs_rq->throttle_count--;
 	if (!cfs_rq->throttle_count) {
@@ -5862,7 +5862,7 @@ static int tg_throttle_down(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));

 	/* group is entering throttled state, stop time */
 	if (!cfs_rq->throttle_count) {
@@ -6449,8 +6449,8 @@ static void sync_throttle(struct task_group *tg, int cpu)
 	if (!tg->parent)
 		return;

-	cfs_rq = tg->cfs_rq[cpu];
-	pcfs_rq = tg->parent->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(tg, cpu);
+	pcfs_rq = tg_cfs_rq(tg->parent, cpu);

 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
 	cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
@@ -6635,7 +6635,7 @@ static void __maybe_unused update_runtime_enabled(struct rq *rq)
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
 		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));

 		raw_spin_lock(&cfs_b->lock);
 		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
@@ -6664,7 +6664,7 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)

 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));

 		if (!cfs_rq->runtime_enabled)
 			continue;
@@ -9364,7 +9364,7 @@ static inline int task_is_ineligible_on_dst_cpu(struct task_struct *p, int dest_cpu)
 	struct cfs_rq *dst_cfs_rq;

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	dst_cfs_rq = task_group(p)->cfs_rq[dest_cpu];
+	dst_cfs_rq = tg_cfs_rq(task_group(p), dest_cpu);
 #else
 	dst_cfs_rq = &cpu_rq(dest_cpu)->cfs;
 #endif
@@ -13080,7 +13080,7 @@ static int task_is_throttled_fair(struct task_struct *p, int cpu)
 	struct cfs_rq *cfs_rq;

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	cfs_rq = task_group(p)->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(task_group(p), cpu);
 #else
 	cfs_rq = &cpu_rq(cpu)->cfs;
 #endif
@@ -13336,14 +13336,7 @@ static void task_change_group_fair(struct task_struct *p)

 void free_fair_sched_group(struct task_group *tg)
 {
-	int i;
-
-	for_each_possible_cpu(i) {
-		if (tg->cfs_rq)
-			kfree(tg->cfs_rq[i]);
-	}
-
-	kfree(tg->cfs_rq);
+	free_percpu(tg->cfs_rq);
 }

 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
@@ -13352,7 +13345,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	struct cfs_rq *cfs_rq;
 	int i;

-	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
+	tg->cfs_rq = alloc_percpu_gfp(struct cfs_rq, GFP_KERNEL);
 	if (!tg->cfs_rq)
 		goto err;

@@ -13361,8 +13354,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));

 	for_each_possible_cpu(i) {
-		cfs_rq = kzalloc_node(sizeof(struct cfs_rq),
-				      GFP_KERNEL, cpu_to_node(i));
+		cfs_rq = tg_cfs_rq(tg, i);
 		if (!cfs_rq)
 			goto err;

@@ -13403,7 +13395,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 	destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));

 	for_each_possible_cpu(cpu) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu);
 		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);

@@ -13440,8 +13432,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	cfs_rq->rq = rq;
 	init_cfs_rq_runtime(cfs_rq);

-	tg->cfs_rq[cpu] = cfs_rq;
-
 	/* se could be NULL for root_task_group */
 	if (!se)
 		return;
@@ -13534,7 +13524,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
 		struct sched_entity *se = tg_se(tg, i);
-		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *grp_cfs_rq = tg_cfs_rq(tg, i);
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
 		struct rq_flags rf;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 644e19476c83..9b46b0bc9d3d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -438,7 +438,7 @@ struct task_group {

 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* runqueue "owned" by this group on each CPU */
-	struct cfs_rq		**cfs_rq;
+	struct cfs_rq __percpu	*cfs_rq;
 	unsigned long		shares;
 #ifdef CONFIG_SMP
 	/*
@@ -1569,13 +1569,18 @@ static inline struct task_struct *task_of(struct sched_entity *se)
 	WARN_ON_ONCE(!entity_is_task(se));
 	return container_of(se, struct task_struct, se);
 }
+/* Access a specific CPU's cfs_rq from a task group */
+static inline struct cfs_rq *tg_cfs_rq(struct task_group *tg, int cpu)
+{
+	return per_cpu_ptr(tg->cfs_rq, cpu);
+}

 static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
 {
 	if (is_root_task_group(tg))
 		return NULL;

-	return &tg->cfs_rq[cpu]->se;
+	return &tg_cfs_rq(tg, cpu)->se;
 }

 static inline struct sched_entity *cfs_rq_se(struct cfs_rq *cfs_rq)
@@ -2158,8 +2163,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
-	p->se.cfs_rq = tg->cfs_rq[cpu];
+	set_task_rq_fair(&p->se, p->se.cfs_rq, tg_cfs_rq(tg, cpu));
+	p->se.cfs_rq = tg_cfs_rq(tg, cpu);
 	p->se.parent = tg_se(tg, cpu);
 	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif

 #ifdef CONFIG_RT_GROUP_SCHED
--
2.50.0
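The per-cpu allocation pattern the series converges on (alloc_percpu()
on group creation, per_cpu_ptr() for access, free_percpu() on
teardown) can be sketched as a hypothetical, self-contained minimal
module; demo_rq is an invented stand-in for struct cfs_rq, not kernel
code from the patches.

#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>

struct demo_rq {			/* stand-in for struct cfs_rq */
	unsigned long nr_running;
};

static struct demo_rq __percpu *demo;	/* like task_group->cfs_rq */

static int __init demo_init(void)
{
	int cpu;

	demo = alloc_percpu(struct demo_rq);	/* one copy per CPU */
	if (!demo)
		return -ENOMEM;

	for_each_possible_cpu(cpu) {
		/* like tg_cfs_rq(): resolve the CPU-local instance */
		struct demo_rq *rq = per_cpu_ptr(demo, cpu);

		rq->nr_running = 0;
	}
	return 0;
}

static void __exit demo_exit(void)
{
	free_percpu(demo);	/* a single call frees every CPU's copy */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");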