From nobody Sun Feb 8 09:53:05 2026
Date: Mon, 9 Jun 2025 19:38:31 +0000
In-Reply-To: <20250609193834.2556866-1-zecheng@google.com>
References: <20250609193834.2556866-1-zecheng@google.com>
Message-ID: <20250609193834.2556866-2-zecheng@google.com>
Subject: [RFC PATCH v2 1/3] sched/fair: Co-locate cfs_rq and sched_entity
From: Zecheng Li
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Xu Liu, Blake Jones, Josh Don,
    linux-kernel@vger.kernel.org, Zecheng Li

Improve data locality and reduce pointer chasing by allocating struct
cfs_rq and struct sched_entity together for non-root task groups. This
is achieved by introducing a new combined struct cfs_rq_with_se, which
holds both objects in contiguous memory.

This patch:

- Defines the new struct cfs_rq_with_se.

- Modifies alloc_fair_sched_group() and free_fair_sched_group() to
  allocate and free the new struct as a single unit.

- Modifies the per-CPU pointers in task_group->se and task_group->cfs_rq
  to point to the members of the new combined structure.

Signed-off-by: Zecheng Li
---
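Not part of the patch: a minimal stand-alone sketch of the co-location
idiom, using stand-in types and a userspace container_of() so it compiles
outside the kernel; the real struct cfs_rq_with_se and helpers are in the
hunks below.

/* Illustration only: one allocation holds the runqueue and its entity. */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct cfs_rq { int nr_queued; };		/* stand-in */
struct sched_entity { int weight; };		/* stand-in */

struct cfs_rq_with_se {
	struct cfs_rq cfs_rq;
	struct sched_entity se;
};

int main(void)
{
	struct cfs_rq_with_se *combined = calloc(1, sizeof(*combined));
	struct cfs_rq *cfs_rq;

	if (!combined)
		return 1;
	cfs_rq = &combined->cfs_rq;

	/* Given only the cfs_rq pointer, the co-located se is recovered by
	 * address arithmetic instead of a second pointer array. */
	struct sched_entity *se =
		&container_of(cfs_rq, struct cfs_rq_with_se, cfs_rq)->se;

	printf("se sits %zu bytes after its cfs_rq\n",
	       (size_t)((char *)se - (char *)cfs_rq));
	free(combined);
	return 0;
}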
 kernel/sched/fair.c  | 23 ++++++++++-------------
 kernel/sched/sched.h |  8 ++++++++
 2 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0fb9bf995a47..cd090ceec633 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13341,10 +13341,11 @@ void free_fair_sched_group(struct task_group *tg)
 	int i;
 
 	for_each_possible_cpu(i) {
-		if (tg->cfs_rq)
-			kfree(tg->cfs_rq[i]);
-		if (tg->se)
-			kfree(tg->se[i]);
+		if (tg->cfs_rq && tg->cfs_rq[i]) {
+			struct cfs_rq_with_se *combined =
+				container_of(tg->cfs_rq[i], struct cfs_rq_with_se, cfs_rq);
+			kfree(combined);
+		}
 	}
 
 	kfree(tg->cfs_rq);
@@ -13353,6 +13354,7 @@ void free_fair_sched_group(struct task_group *tg)
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 {
+	struct cfs_rq_with_se *combined;
 	struct sched_entity *se;
 	struct cfs_rq *cfs_rq;
 	int i;
@@ -13369,16 +13371,13 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));
 
 	for_each_possible_cpu(i) {
-		cfs_rq = kzalloc_node(sizeof(struct cfs_rq),
+		combined = kzalloc_node(sizeof(struct cfs_rq_with_se),
 				      GFP_KERNEL, cpu_to_node(i));
-		if (!cfs_rq)
+		if (!combined)
 			goto err;
 
-		se = kzalloc_node(sizeof(struct sched_entity_stats),
-				  GFP_KERNEL, cpu_to_node(i));
-		if (!se)
-			goto err_free_rq;
-
+		cfs_rq = &combined->cfs_rq;
+		se = &combined->se;
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
 		init_entity_runnable_average(se);
@@ -13386,8 +13385,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 
 	return 1;
 
-err_free_rq:
-	kfree(cfs_rq);
 err:
 	return 0;
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 47972f34ea70..af23917194fb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -740,6 +740,14 @@ struct cfs_rq {
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 };
 
+#ifdef CONFIG_FAIR_GROUP_SCHED
+struct cfs_rq_with_se {
+	struct cfs_rq cfs_rq;
+	/* cfs_rq's sched_entity on parent runqueue */
+	struct sched_entity se ____cacheline_aligned;
+};
+#endif
+
 #ifdef CONFIG_SCHED_CLASS_EXT
 /* scx_rq->flags, protected by the rq lock */
 enum scx_rq_flags {
-- 
2.50.0.rc0.604.gd4ff7b7c86-goog

From nobody Sun Feb 8 09:53:05 2026
Date: Mon, 9 Jun 2025 19:38:32 +0000
In-Reply-To: <20250609193834.2556866-1-zecheng@google.com>
References: <20250609193834.2556866-1-zecheng@google.com>
Message-ID: <20250609193834.2556866-3-zecheng@google.com>
Subject: [RFC PATCH v2 2/3] sched/fair: Remove task_group->se pointer array
From: Zecheng Li
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Xu Liu, Blake Jones, Josh Don,
    linux-kernel@vger.kernel.org, Zecheng Li

Now that struct sched_entity is co-located with struct cfs_rq for
non-root task groups, the task_group->se pointer array is redundant:
the associated sched_entity can be loaded directly from the cfs_rq.

This patch performs the access conversion with the following helpers:

- is_root_task_group(tg): checks whether a task group is the root task
  group by comparing its address with the global root_task_group
  variable.

- tg_se(tg, cpu): retrieves the cfs_rq and returns the address of the
  co-located se. The helper checks whether tg is the root task group so
  that it behaves the same as the previous tg->se[cpu] lookup, which
  yielded NULL for the root group. All accesses through the tg->se[cpu]
  pointer array are replaced with calls to this accessor.

- cfs_rq_se(cfs_rq): simplifies access paths such as cfs_rq->tg->se[...]
  by using the co-located sched_entity. It also checks whether the task
  group is the root task group to preserve the previous behavior.

Since tg_se() is not on very hot code paths, and the added branch is a
register comparison against an immediate value (&root_task_group), the
performance impact is expected to be negligible.

Signed-off-by: Zecheng Li
---
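Not part of the patch: a compile-checkable sketch (stand-in types,
userspace container_of()) of the tg_se() idea and its root-group special
case; the actual helpers added to kernel/sched/sched.h are in the hunks
below.

/* Illustration only: shape of the accessor, with stand-in types. */
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct cfs_rq { int dummy; };			/* stand-in */
struct sched_entity { int dummy; };		/* stand-in */
struct cfs_rq_with_se {
	struct cfs_rq cfs_rq;
	struct sched_entity se;
};
struct task_group { struct cfs_rq **cfs_rq; };	/* stand-in */

static struct task_group root_task_group;
#define is_root_task_group(tg)	((tg) == &root_task_group)

static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
{
	/* The root group owns no sched_entity; the old tg->se[cpu] array
	 * held NULL there, so the accessor keeps returning NULL for it. */
	if (is_root_task_group(tg))
		return NULL;

	return &container_of(tg->cfs_rq[cpu],
			     struct cfs_rq_with_se, cfs_rq)->se;
}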
 kernel/sched/core.c  |  7 ++-----
 kernel/sched/debug.c |  2 +-
 kernel/sched/fair.c  | 27 ++++++++++-----------------
 kernel/sched/sched.h | 29 ++++++++++++++++++++++++-----
 4 files changed, 37 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c81cf642dba0..8598492854fc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8544,7 +8544,7 @@ void __init sched_init(void)
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr += nr_cpu_ids * sizeof(void **);
 #endif
 #ifdef CONFIG_RT_GROUP_SCHED
 	ptr += 2 * nr_cpu_ids * sizeof(void **);
@@ -8553,9 +8553,6 @@ void __init sched_init(void)
 	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	root_task_group.se = (struct sched_entity **)ptr;
-	ptr += nr_cpu_ids * sizeof(void **);
-
 	root_task_group.cfs_rq = (struct cfs_rq **)ptr;
 	ptr += nr_cpu_ids * sizeof(void **);
 
@@ -9743,7 +9740,7 @@ static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
 	int i;
 
 	for_each_possible_cpu(i) {
-		stats = __schedstats_from_se(tg->se[i]);
+		stats = __schedstats_from_se(tg_se(tg, i));
 		ws += schedstat_val(stats->wait_sum);
 	}
 
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 56ae54e0ce6a..385076d5741c 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -653,7 +653,7 @@ void dirty_sched_domain_sysctl(int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group *tg)
 {
-	struct sched_entity *se = tg->se[cpu];
+	struct sched_entity *se = tg_se(tg, cpu);
 
 #define P(F) SEQ_printf(m, "  .%-30s: %lld\n", #F, (long long)F)
 #define P_SCHEDSTAT(F) SEQ_printf(m, "  .%-30s: %lld\n", \
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cd090ceec633..b099b593f364 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5907,7 +5907,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	if (!dequeue)
 		return false;  /* Throttle no longer required. */
 
-	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
+	se = cfs_rq_se(cfs_rq);
 
 	/* freeze hierarchy runnable averages while throttled */
 	rcu_read_lock();
@@ -5992,7 +5992,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	long queued_delta, runnable_delta, idle_delta;
 	long rq_h_nr_queued = rq->cfs.h_nr_queued;
 
-	se = cfs_rq->tg->se[cpu_of(rq)];
+	se = cfs_rq_se(cfs_rq);
 
 	cfs_rq->throttled = 0;
 
@@ -9787,7 +9787,6 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 {
 	struct cfs_rq *cfs_rq, *pos;
 	bool decayed = false;
-	int cpu = cpu_of(rq);
 
 	/*
 	 * Iterates the task_group tree in a bottom up fashion, see
@@ -9807,7 +9806,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 		}
 
 		/* Propagate pending load changes to the parent, if any: */
-		se = cfs_rq->tg->se[cpu];
+		se = cfs_rq_se(cfs_rq);
 		if (se && !skip_blocked_update(se))
 			update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
 
@@ -9833,8 +9832,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
  */
 static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 {
-	struct rq *rq = rq_of(cfs_rq);
-	struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)];
+	struct sched_entity *se = cfs_rq_se(cfs_rq);
 	unsigned long now = jiffies;
 	unsigned long load;
 
@@ -13349,7 +13347,6 @@ void free_fair_sched_group(struct task_group *tg)
 	}
 
 	kfree(tg->cfs_rq);
-	kfree(tg->se);
 }
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
@@ -13362,9 +13359,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
 	if (!tg->cfs_rq)
 		goto err;
-	tg->se = kcalloc(nr_cpu_ids, sizeof(se), GFP_KERNEL);
-	if (!tg->se)
-		goto err;
 
 	tg->shares = NICE_0_LOAD;
 
@@ -13379,7 +13373,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		cfs_rq = &combined->cfs_rq;
 		se = &combined->se;
 		init_cfs_rq(cfs_rq);
-		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
+		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
 	}
 
@@ -13398,7 +13392,7 @@ void online_fair_sched_group(struct task_group *tg)
 
 	for_each_possible_cpu(i) {
 		rq = cpu_rq(i);
-		se = tg->se[i];
+		se = tg_se(tg, i);
 		rq_lock_irq(rq, &rf);
 		update_rq_clock(rq);
 		attach_entity_cfs_rq(se);
@@ -13415,7 +13409,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 
 	for_each_possible_cpu(cpu) {
 		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
-		struct sched_entity *se = tg->se[cpu];
+		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);
 
 		if (se) {
@@ -13452,7 +13446,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	init_cfs_rq_runtime(cfs_rq);
 
 	tg->cfs_rq[cpu] = cfs_rq;
-	tg->se[cpu] = se;
 
 	/* se could be NULL for root_task_group */
 	if (!se)
@@ -13483,7 +13476,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	/*
 	 * We can't change the weight of the root cgroup.
 	 */
-	if (!tg->se[0])
+	if (is_root_task_group(tg))
 		return -EINVAL;
 
 	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
@@ -13494,7 +13487,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	tg->shares = shares;
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct rq_flags rf;
 
 		/* Propagate contribution to hierarchy */
@@ -13545,7 +13538,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index af23917194fb..08e17746ea01 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -437,8 +437,6 @@ struct task_group {
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	/* schedulable entities of this group on each CPU */
-	struct sched_entity **se;
 	/* runqueue "owned" by this group on each CPU */
 	struct cfs_rq **cfs_rq;
 	unsigned long shares;
@@ -901,7 +899,8 @@ struct dl_rq {
 };
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-
+/* Check whether a task group is root tg */
+#define is_root_task_group(tg)	((tg) == &root_task_group)
 /* An entity is a task if it doesn't "own" a runqueue */
 #define entity_is_task(se)	(!se->my_q)
 
@@ -1575,6 +1574,26 @@ static inline struct task_struct *task_of(struct sched_entity *se)
 	return container_of(se, struct task_struct, se);
 }
 
+static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
+{
+	if (is_root_task_group(tg))
+		return NULL;
+
+	struct cfs_rq_with_se *combined =
+		container_of(tg->cfs_rq[cpu], struct cfs_rq_with_se, cfs_rq);
+	return &combined->se;
+}
+
+static inline struct sched_entity *cfs_rq_se(struct cfs_rq *cfs_rq)
+{
+	if (is_root_task_group(cfs_rq->tg))
+		return NULL;
+
+	struct cfs_rq_with_se *combined =
+		container_of(cfs_rq, struct cfs_rq_with_se, cfs_rq);
+	return &combined->se;
+}
+
 static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
 {
 	return p->se.cfs_rq;
@@ -2149,8 +2168,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
-	p->se.parent = tg->se[cpu];
-	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
+	p->se.parent = tg_se(tg, cpu);
+	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
-- 
2.50.0.rc0.604.gd4ff7b7c86-goog

From nobody Sun Feb 8 09:53:05 2026
Date: Mon, 9 Jun 2025 19:38:33 +0000
In-Reply-To: <20250609193834.2556866-1-zecheng@google.com>
References: <20250609193834.2556866-1-zecheng@google.com>
Message-ID: <20250609193834.2556866-4-zecheng@google.com>
Subject: [RFC PATCH v2 3/3] sched/fair: Allocate the combined cfs_rq/se struct per-cpu
From: Zecheng Li
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, Xu Liu, Blake Jones, Josh Don,
    linux-kernel@vger.kernel.org, Zecheng Li

To remove the cfs_rq pointer array in task_group, allocate the combined
cfs_rq and sched_entity with the per-cpu allocator. This patch
implements the following:

- Changes task_group->cfs_rq from struct cfs_rq ** to
  struct cfs_rq __percpu *.

- Updates the memory allocation in alloc_fair_sched_group() and
  free_fair_sched_group() to use alloc_percpu() and free_percpu()
  respectively.

- Uses the inline accessor tg_cfs_rq(tg, cpu), built on per_cpu_ptr(),
  to retrieve the cfs_rq pointer for a given task group and CPU.

- Replaces direct accesses to tg->cfs_rq[cpu] with calls to the new
  tg_cfs_rq(tg, cpu) helper.

- Handles the root_task_group: since struct rq is already a per-cpu
  variable (runqueues), its embedded cfs_rq (rq->cfs) is also per-cpu.
  Therefore, root_task_group.cfs_rq is simply set to &runqueues.cfs.

- Cleans up the initialization of the root task group accordingly.

This change places each CPU's cfs_rq and sched_entity in that CPU's
local per-cpu memory area.

Signed-off-by: Zecheng Li
---
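Not part of the patch: a toy userspace model of the member-address trick
the per-cpu conversion relies on. The per-cpu areas are modeled here as a
plain array of replicas, one per CPU; the real code uses
alloc_percpu()/per_cpu_ptr(), whose per-CPU offsets are not a simple array
stride, but the member-address arithmetic behaves the same way.

/* Illustration only: per-cpu storage modeled as an array of replicas. */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

struct cfs_rq { int nr_queued; };		/* stand-in */
struct sched_entity { int weight; };		/* stand-in */
struct cfs_rq_with_se {
	struct cfs_rq cfs_rq;
	struct sched_entity se;
};

/* Model of per_cpu_ptr(base, cpu): in this toy each CPU's copy sits one
 * struct stride away; the real allocator uses per-CPU area offsets. */
static struct cfs_rq *model_tg_cfs_rq(struct cfs_rq *base, int cpu)
{
	return (struct cfs_rq *)((char *)base +
				 (size_t)cpu * sizeof(struct cfs_rq_with_se));
}

int main(void)
{
	/* One "per-cpu" allocation of the combined struct for all CPUs. */
	struct cfs_rq_with_se *combined = calloc(NR_CPUS, sizeof(*combined));
	struct cfs_rq *tg_cfs_rq_base;
	int cpu;

	if (!combined)
		return 1;
	/* Like the patch's tg->cfs_rq = &combined->cfs_rq: the task group
	 * records the address of the member, not of the whole struct. */
	tg_cfs_rq_base = &combined->cfs_rq;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		model_tg_cfs_rq(tg_cfs_rq_base, cpu)->nr_queued = cpu;

	printf("cpu2 nr_queued = %d\n",
	       model_tg_cfs_rq(tg_cfs_rq_base, 2)->nr_queued);
	free(combined);
	return 0;
}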
 kernel/sched/core.c  | 35 ++++++++++----------------
 kernel/sched/fair.c  | 59 +++++++++++++++++---------------------------
 kernel/sched/sched.h | 13 +++++++---
 3 files changed, 45 insertions(+), 62 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8598492854fc..60b9872e4b01 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8526,7 +8526,7 @@ static struct kmem_cache *task_group_cache __ro_after_init;
 
 void __init sched_init(void)
 {
-	unsigned long ptr = 0;
+	unsigned long __maybe_unused ptr = 0;
 	int i;
 
 	/* Make sure the linker didn't screw up */
@@ -8544,33 +8544,24 @@ void __init sched_init(void)
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += nr_cpu_ids * sizeof(void **);
-#endif
-#ifdef CONFIG_RT_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
-#endif
-	if (ptr) {
-		ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.cfs_rq = &runqueues.cfs;
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-		root_task_group.cfs_rq = (struct cfs_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
-
-		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
-		init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
+	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+	init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 #ifdef CONFIG_EXT_GROUP_SCHED
-		root_task_group.scx_weight = CGROUP_WEIGHT_DFL;
+	root_task_group.scx_weight = CGROUP_WEIGHT_DFL;
 #endif /* CONFIG_EXT_GROUP_SCHED */
 #ifdef CONFIG_RT_GROUP_SCHED
-		root_task_group.rt_se = (struct sched_rt_entity **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.rt_se = (struct sched_rt_entity **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);
 
-		root_task_group.rt_rq = (struct rt_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	root_task_group.rt_rq = (struct rt_rq **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);
 
 #endif /* CONFIG_RT_GROUP_SCHED */
-	}
 
 #ifdef CONFIG_SMP
 	init_defrootdomain();
@@ -9511,7 +9502,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota,
 	}
 
 	for_each_online_cpu(i) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, i);
 		struct rq *rq = cfs_rq->rq;
 
 		guard(rq_lock_irq)(rq);
@@ -9759,7 +9750,7 @@ static u64 throttled_time_self(struct task_group *tg)
 	u64 total = 0;
 
 	for_each_possible_cpu(i) {
-		total += READ_ONCE(tg->cfs_rq[i]->throttled_clock_self_time);
+		total += READ_ONCE(tg_cfs_rq(tg, i)->throttled_clock_self_time);
 	}
 
 	return total;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b099b593f364..8c13bc1bbe08 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -329,7 +329,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	 * to a tree or when we reach the top of the tree
 	 */
 	if (cfs_rq->tg->parent &&
-	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
+	    tg_cfs_rq(cfs_rq->tg->parent, cpu)->on_list) {
 		/*
 		 * If parent is already on the list, we add the child
 		 * just before. Thanks to circular linked property of
@@ -337,7 +337,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 		 * of the list that starts by parent.
 		 */
 		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
+			&(tg_cfs_rq(cfs_rq->tg->parent, cpu)->leaf_cfs_rq_list));
 		/*
 		 * The branch is now connected to its tree so we can
 		 * reset tmp_alone_branch to the beginning of the
@@ -4168,7 +4168,7 @@ static void __maybe_unused clear_tg_offline_cfs_rqs(struct rq *rq)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		clear_tg_load_avg(cfs_rq);
 	}
@@ -5823,8 +5823,8 @@ static inline int throttled_lb_pair(struct task_group *tg,
 {
 	struct cfs_rq *src_cfs_rq, *dest_cfs_rq;
 
-	src_cfs_rq = tg->cfs_rq[src_cpu];
-	dest_cfs_rq = tg->cfs_rq[dest_cpu];
+	src_cfs_rq = tg_cfs_rq(tg, src_cpu);
+	dest_cfs_rq = tg_cfs_rq(tg, dest_cpu);
 
 	return throttled_hierarchy(src_cfs_rq) ||
 	       throttled_hierarchy(dest_cfs_rq);
@@ -5833,7 +5833,7 @@ static inline int throttled_lb_pair(struct task_group *tg,
 static int tg_unthrottle_up(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 	cfs_rq->throttle_count--;
 	if (!cfs_rq->throttle_count) {
@@ -5862,7 +5862,7 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 static int tg_throttle_down(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 	/* group is entering throttled state, stop time */
 	if (!cfs_rq->throttle_count) {
@@ -6449,8 +6449,8 @@ static void sync_throttle(struct task_group *tg, int cpu)
 	if (!tg->parent)
 		return;
 
-	cfs_rq = tg->cfs_rq[cpu];
-	pcfs_rq = tg->parent->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(tg, cpu);
+	pcfs_rq = tg_cfs_rq(tg->parent, cpu);
 
 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
 	cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
@@ -6635,7 +6635,7 @@ static void __maybe_unused update_runtime_enabled(struct rq *rq)
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
 		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		raw_spin_lock(&cfs_b->lock);
 		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
@@ -6664,7 +6664,7 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		if (!cfs_rq->runtime_enabled)
 			continue;
@@ -9364,7 +9364,7 @@ static inline int task_is_ineligible_on_dst_cpu(struct task_struct *p, int dest_cpu)
 	struct cfs_rq *dst_cfs_rq;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	dst_cfs_rq = task_group(p)->cfs_rq[dest_cpu];
+	dst_cfs_rq = tg_cfs_rq(task_group(p), dest_cpu);
 #else
 	dst_cfs_rq = &cpu_rq(dest_cpu)->cfs;
 #endif
@@ -13080,7 +13080,7 @@ static int task_is_throttled_fair(struct task_struct *p, int cpu)
 	struct cfs_rq *cfs_rq;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	cfs_rq = task_group(p)->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(task_group(p), cpu);
 #else
 	cfs_rq = &cpu_rq(cpu)->cfs;
 #endif
@@ -13336,42 +13336,31 @@ static void task_change_group_fair(struct task_struct *p)
 
 void free_fair_sched_group(struct task_group *tg)
 {
-	int i;
-
-	for_each_possible_cpu(i) {
-		if (tg->cfs_rq && tg->cfs_rq[i]) {
-			struct cfs_rq_with_se *combined =
-				container_of(tg->cfs_rq[i], struct cfs_rq_with_se, cfs_rq);
-			kfree(combined);
-		}
-	}
-
-	kfree(tg->cfs_rq);
+	free_percpu(tg->cfs_rq);
 }
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 {
-	struct cfs_rq_with_se *combined;
+	struct cfs_rq_with_se __percpu *combined;
 	struct sched_entity *se;
 	struct cfs_rq *cfs_rq;
 	int i;
 
-	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
-	if (!tg->cfs_rq)
+	combined = alloc_percpu_gfp(struct cfs_rq_with_se, GFP_KERNEL);
+	if (!combined)
 		goto err;
 
+	tg->cfs_rq = &combined->cfs_rq;
 	tg->shares = NICE_0_LOAD;
 
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));
 
 	for_each_possible_cpu(i) {
-		combined = kzalloc_node(sizeof(struct cfs_rq_with_se),
-					GFP_KERNEL, cpu_to_node(i));
-		if (!combined)
+		cfs_rq = tg_cfs_rq(tg, i);
+		if (!cfs_rq)
 			goto err;
 
-		cfs_rq = &combined->cfs_rq;
-		se = &combined->se;
+		se = tg_se(tg, i);
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
@@ -13408,7 +13397,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 	destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));
 
 	for_each_possible_cpu(cpu) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu);
 		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);
 
@@ -13445,8 +13434,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	cfs_rq->rq = rq;
 	init_cfs_rq_runtime(cfs_rq);
 
-	tg->cfs_rq[cpu] = cfs_rq;
-
 	/* se could be NULL for root_task_group */
 	if (!se)
 		return;
@@ -13539,7 +13526,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
 		struct sched_entity *se = tg_se(tg, i);
-		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *grp_cfs_rq = tg_cfs_rq(tg, i);
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
 		struct rq_flags rf;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 08e17746ea01..a04d20fc9ff2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -438,7 +438,7 @@ struct task_group {
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* runqueue "owned" by this group on each CPU */
-	struct cfs_rq **cfs_rq;
+	struct cfs_rq __percpu *cfs_rq;
 	unsigned long shares;
 #ifdef CONFIG_SMP
 	/*
@@ -1573,6 +1573,11 @@ static inline struct task_struct *task_of(struct sched_entity *se)
 	WARN_ON_ONCE(!entity_is_task(se));
 	return container_of(se, struct task_struct, se);
 }
+/* Access a specific CPU's cfs_rq from a task group */
+static inline struct cfs_rq *tg_cfs_rq(struct task_group *tg, int cpu)
+{
+	return per_cpu_ptr(tg->cfs_rq, cpu);
+}
 
 static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
 {
@@ -1580,7 +1585,7 @@ static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
 		return NULL;
 
 	struct cfs_rq_with_se *combined =
-		container_of(tg->cfs_rq[cpu], struct cfs_rq_with_se, cfs_rq);
+		container_of(tg_cfs_rq(tg, cpu), struct cfs_rq_with_se, cfs_rq);
 	return &combined->se;
 }
 
@@ -2166,8 +2171,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
-	p->se.cfs_rq = tg->cfs_rq[cpu];
+	set_task_rq_fair(&p->se, p->se.cfs_rq, tg_cfs_rq(tg, cpu));
+	p->se.cfs_rq = tg_cfs_rq(tg, cpu);
 	p->se.parent = tg_se(tg, cpu);
 	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif
-- 
2.50.0.rc0.604.gd4ff7b7c86-goog