From: Zecheng Li
Date: Fri, 7 Nov 2025 04:38:03 +0000
Subject: [PATCH v5 1/3] sched/fair: Co-locate cfs_rq and sched_entity
Message-ID: <20251107043807.1758889-2-zecheng@google.com>
In-Reply-To: <20251107043807.1758889-1-zecheng@google.com>
References: <20251107043807.1758889-1-zecheng@google.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider, Rik van Riel, Chris Mason, Madadi Vineeth Reddy,
 Xu Liu, Blake Jones, Josh Don, linux-kernel@vger.kernel.org, Zecheng Li

Improve data locality and reduce pointer chasing by allocating struct
cfs_rq and struct sched_entity together for non-root task groups. This
is achieved by introducing a new combined struct cfs_rq_with_se that
holds both objects in a single allocation.

This patch:

- Defines the new struct cfs_rq_with_se.

- Modifies alloc_fair_sched_group() and free_fair_sched_group() to
  allocate and free the new struct as a single unit.

- Modifies the per-CPU pointers in task_group->se and
  task_group->cfs_rq to point to the members of the new combined
  structure.

Signed-off-by: Zecheng Li
---
 kernel/sched/fair.c  | 23 ++++++++++-------------
 kernel/sched/sched.h |  8 ++++++++
 2 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 273e2871b59e..1676119e302b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13359,10 +13359,11 @@ void free_fair_sched_group(struct task_group *tg)
 	int i;
 
 	for_each_possible_cpu(i) {
-		if (tg->cfs_rq)
-			kfree(tg->cfs_rq[i]);
-		if (tg->se)
-			kfree(tg->se[i]);
+		if (tg->cfs_rq && tg->cfs_rq[i]) {
+			struct cfs_rq_with_se *combined =
+				container_of(tg->cfs_rq[i], struct cfs_rq_with_se, cfs_rq);
+			kfree(combined);
+		}
 	}
 
 	kfree(tg->cfs_rq);
@@ -13371,6 +13372,7 @@ void free_fair_sched_group(struct task_group *tg)
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 {
+	struct cfs_rq_with_se *combined;
 	struct sched_entity *se;
 	struct cfs_rq *cfs_rq;
 	int i;
@@ -13387,16 +13389,13 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));
 
 	for_each_possible_cpu(i) {
-		cfs_rq = kzalloc_node(sizeof(struct cfs_rq),
+		combined = kzalloc_node(sizeof(struct cfs_rq_with_se),
 				      GFP_KERNEL, cpu_to_node(i));
-		if (!cfs_rq)
+		if (!combined)
 			goto err;
 
-		se = kzalloc_node(sizeof(struct sched_entity_stats),
-				  GFP_KERNEL, cpu_to_node(i));
-		if (!se)
-			goto err_free_rq;
-
+		cfs_rq = &combined->cfs_rq;
+		se = &combined->se;
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
 		init_entity_runnable_average(se);
@@ -13404,8 +13403,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 
 	return 1;
 
-err_free_rq:
-	kfree(cfs_rq);
 err:
 	return 0;
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d04e007608a3..8db53f4d4d06 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -769,6 +769,14 @@ struct cfs_rq {
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 };
 
+#ifdef CONFIG_FAIR_GROUP_SCHED
+struct cfs_rq_with_se {
+	struct cfs_rq cfs_rq;
+	/* cfs_rq's sched_entity on parent runqueue */
+	struct sched_entity se ____cacheline_aligned;
+};
+#endif
+
 #ifdef CONFIG_SCHED_CLASS_EXT
 /* scx_rq->flags, protected by the rq lock */
 enum scx_rq_flags {
-- 
2.51.2.1041.gc1ab5b90ca-goog
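
The co-location pattern in this patch can be sketched in a few lines of
self-contained userspace C. In the sketch below, rq_stub, se_stub, and
rq_with_se are hypothetical stand-ins for the kernel's cfs_rq,
sched_entity, and cfs_rq_with_se, and container_of() is open-coded so it
builds outside the kernel:

#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct rq_stub { long load; };		/* stand-in for struct cfs_rq */
struct se_stub { long weight; };	/* stand-in for struct sched_entity */

struct rq_with_se {
	struct rq_stub rq;
	struct se_stub se;	/* co-located in the same allocation */
};

int main(void)
{
	/* One allocation yields both objects (cf. alloc_fair_sched_group()). */
	struct rq_with_se *combined = calloc(1, sizeof(*combined));
	if (!combined)
		return 1;

	/* Callers keep pointers to the embedded members only. */
	struct rq_stub *rq = &combined->rq;
	struct se_stub *se = &combined->se;
	se->weight = 1024;
	rq->load = se->weight;

	/*
	 * The wrapper is recovered from the member pointer, so a single
	 * free() releases both objects (cf. free_fair_sched_group()).
	 */
	assert(container_of(rq, struct rq_with_se, rq) == combined);
	free(container_of(rq, struct rq_with_se, rq));
	return 0;
}

One free() on the recovered wrapper releases both objects, which is why
the reworked free_fair_sched_group() only needs the cfs_rq pointer.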
From: Zecheng Li
Date: Fri, 7 Nov 2025 04:38:04 +0000
Subject: [PATCH v5 2/3] sched/fair: Remove task_group->se pointer array
Message-ID: <20251107043807.1758889-3-zecheng@google.com>
In-Reply-To: <20251107043807.1758889-1-zecheng@google.com>
References: <20251107043807.1758889-1-zecheng@google.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider, Rik van Riel, Chris Mason, Madadi Vineeth Reddy,
 Xu Liu, Blake Jones, Josh Don, linux-kernel@vger.kernel.org, Zecheng Li

Now that struct sched_entity is co-located with struct cfs_rq for
non-root task groups, the task_group->se pointer array is redundant:
the associated sched_entity can be loaded directly from the cfs_rq.

This patch performs the access conversion with the following helpers:

- is_root_task_group(tg): checks whether a task group is the root task
  group by comparing its address with the global root_task_group
  variable.

- tg_se(tg, cpu): retrieves the cfs_rq and returns the address of the
  co-located se. It checks whether tg is the root task group so that it
  behaves exactly like the previous tg->se[cpu] access. All accesses
  through the tg->se[cpu] pointer array are replaced with calls to this
  accessor.

- cfs_rq_se(cfs_rq): simplifies access paths such as
  cfs_rq->tg->se[...] to use the co-located sched_entity. It also
  checks whether the owning tg is the root task group, for the same
  reason.

Since tg_se() is not on very hot code paths, and the branch is a
register comparison with an immediate value (`&root_task_group`), the
performance impact is expected to be negligible.

Signed-off-by: Zecheng Li
---
 kernel/sched/core.c  |  7 ++-----
 kernel/sched/debug.c |  2 +-
 kernel/sched/fair.c  | 27 ++++++++++-----------------
 kernel/sched/sched.h | 29 ++++++++++++++++++++++++-----
 4 files changed, 37 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 67b5f2faab36..12ebe1b4c8ae 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8558,7 +8558,7 @@ void __init sched_init(void)
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr += nr_cpu_ids * sizeof(void **);
 #endif
 #ifdef CONFIG_RT_GROUP_SCHED
 	ptr += 2 * nr_cpu_ids * sizeof(void **);
@@ -8567,9 +8567,6 @@ void __init sched_init(void)
 	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	root_task_group.se = (struct sched_entity **)ptr;
-	ptr += nr_cpu_ids * sizeof(void **);
-
 	root_task_group.cfs_rq = (struct cfs_rq **)ptr;
 	ptr += nr_cpu_ids * sizeof(void **);
 
@@ -9635,7 +9632,7 @@ static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
 	int i;
 
 	for_each_possible_cpu(i) {
-		stats = __schedstats_from_se(tg->se[i]);
+		stats = __schedstats_from_se(tg_se(tg, i));
 		ws += schedstat_val(stats->wait_sum);
 	}
 
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 02e16b70a790..16542596d4b0 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -644,7 +644,7 @@ void dirty_sched_domain_sysctl(int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group *tg)
 {
-	struct sched_entity *se = tg->se[cpu];
+	struct sched_entity *se = tg_se(tg, cpu);
 
 #define P(F) SEQ_printf(m, "  .%-30s: %lld\n", #F, (long long)F)
 #define P_SCHEDSTAT(F) SEQ_printf(m, "  .%-30s: %lld\n", \
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1676119e302b..f9fb07d73a03 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6021,7 +6021,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
-	struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)];
+	struct sched_entity *se;
 
 	/*
 	 * It's possible we are called with !runtime_remaining due to things
@@ -6036,7 +6036,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	if (cfs_rq->runtime_enabled && cfs_rq->runtime_remaining <= 0)
 		return;
 
-	se = cfs_rq->tg->se[cpu_of(rq)];
+	se = cfs_rq_se(cfs_rq);
 
 	cfs_rq->throttled = 0;
 
@@ -9788,7 +9788,6 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 {
 	struct cfs_rq *cfs_rq, *pos;
 	bool decayed = false;
-	int cpu = cpu_of(rq);
 
 	/*
 	 * Iterates the task_group tree in a bottom up fashion, see
@@ -9808,7 +9807,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 		}
 
 		/* Propagate pending load changes to the parent, if any: */
-		se = cfs_rq->tg->se[cpu];
+		se = cfs_rq_se(cfs_rq);
 		if (se && !skip_blocked_update(se))
 			update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
 
@@ -9834,8 +9833,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
  */
 static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 {
-	struct rq *rq = rq_of(cfs_rq);
-	struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)];
+	struct sched_entity *se = cfs_rq_se(cfs_rq);
 	unsigned long now = jiffies;
 	unsigned long load;
 
@@ -13367,7 +13365,6 @@ void free_fair_sched_group(struct task_group *tg)
 	}
 
 	kfree(tg->cfs_rq);
-	kfree(tg->se);
 }
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
@@ -13380,9 +13377,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
 	if (!tg->cfs_rq)
 		goto err;
-	tg->se = kcalloc(nr_cpu_ids, sizeof(se), GFP_KERNEL);
-	if (!tg->se)
-		goto err;
 
 	tg->shares = NICE_0_LOAD;
 
@@ -13397,7 +13391,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		cfs_rq = &combined->cfs_rq;
 		se = &combined->se;
 		init_cfs_rq(cfs_rq);
-		init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
+		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
 	}
 
@@ -13416,7 +13410,7 @@ void online_fair_sched_group(struct task_group *tg)
 
 	for_each_possible_cpu(i) {
 		rq = cpu_rq(i);
-		se = tg->se[i];
+		se = tg_se(tg, i);
 		rq_lock_irq(rq, &rf);
 		update_rq_clock(rq);
 		attach_entity_cfs_rq(se);
@@ -13433,7 +13427,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 
 	for_each_possible_cpu(cpu) {
 		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
-		struct sched_entity *se = tg->se[cpu];
+		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);
 
 		if (se) {
@@ -13470,7 +13464,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	init_cfs_rq_runtime(cfs_rq);
 
 	tg->cfs_rq[cpu] = cfs_rq;
-	tg->se[cpu] = se;
 
 	/* se could be NULL for root_task_group */
 	if (!se)
@@ -13501,7 +13494,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	/*
 	 * We can't change the weight of the root cgroup.
 	 */
-	if (!tg->se[0])
+	if (is_root_task_group(tg))
 		return -EINVAL;
 
 	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
@@ -13512,7 +13505,7 @@ static int __sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	tg->shares = shares;
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct rq_flags rf;
 
 		/* Propagate contribution to hierarchy */
@@ -13563,7 +13556,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
-		struct sched_entity *se = tg->se[i];
+		struct sched_entity *se = tg_se(tg, i);
 		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 8db53f4d4d06..1133910a13c2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -476,8 +476,6 @@ struct task_group {
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	/* schedulable entities of this group on each CPU */
-	struct sched_entity **se;
 	/* runqueue "owned" by this group on each CPU */
 	struct cfs_rq **cfs_rq;
 	unsigned long shares;
@@ -923,7 +921,8 @@ struct dl_rq {
 };
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-
+/* Check whether a task group is root tg */
+#define is_root_task_group(tg) ((tg) == &root_task_group)
 /* An entity is a task if it doesn't "own" a runqueue */
 #define entity_is_task(se)	(!se->my_q)
 
@@ -1609,6 +1608,26 @@ static inline struct task_struct *task_of(struct sched_entity *se)
 	return container_of(se, struct task_struct, se);
 }
 
+static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
+{
+	if (is_root_task_group(tg))
+		return NULL;
+
+	struct cfs_rq_with_se *combined =
+		container_of(tg->cfs_rq[cpu], struct cfs_rq_with_se, cfs_rq);
+	return &combined->se;
+}
+
+static inline struct sched_entity *cfs_rq_se(struct cfs_rq *cfs_rq)
+{
+	if (is_root_task_group(cfs_rq->tg))
+		return NULL;
+
+	struct cfs_rq_with_se *combined =
+		container_of(cfs_rq, struct cfs_rq_with_se, cfs_rq);
+	return &combined->se;
+}
+
 static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
 {
 	return p->se.cfs_rq;
@@ -2182,8 +2201,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
-	p->se.parent = tg->se[cpu];
-	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
+	p->se.parent = tg_se(tg, cpu);
+	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
-- 
2.51.2.1041.gc1ab5b90ca-goog
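
The root-group special case in tg_se() and cfs_rq_se() reduces to one
pointer comparison against a global sentinel. A self-contained
userspace sketch of the same shape, with hypothetical stand-in names
(grp, grp_se, root_grp) rather than the kernel's task_group helpers:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct rq_stub { long load; };
struct se_stub { int depth; };

struct rq_with_se {
	struct rq_stub rq;
	struct se_stub se;
};

struct grp {
	struct rq_stub *rq;	/* points into an rq_with_se for non-root groups */
};

static struct grp root_grp;	/* the root group has no parent entity */

/* cf. is_root_task_group(): one compare against a known address */
static inline int grp_is_root(const struct grp *g)
{
	return g == &root_grp;
}

/* cf. tg_se(): NULL for the root, otherwise the co-located entity */
static inline struct se_stub *grp_se(struct grp *g)
{
	if (grp_is_root(g))
		return NULL;
	return &container_of(g->rq, struct rq_with_se, rq)->se;
}

int main(void)
{
	static struct rq_with_se combined = { .se = { .depth = 1 } };
	struct grp child = { .rq = &combined.rq };

	printf("root se=%p, child depth=%d\n",
	       (void *)grp_se(&root_grp), grp_se(&child)->depth);
	return 0;
}

Since &root_grp is a link-time constant, the check compiles down to
comparing a register with an immediate, which matches the
negligible-cost argument in the commit message above.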
From: Zecheng Li
Date: Fri, 7 Nov 2025 04:38:05 +0000
Subject: [PATCH v5 3/3] sched/fair: Allocate both cfs_rq and sched_entity with per-cpu
Message-ID: <20251107043807.1758889-4-zecheng@google.com>
In-Reply-To: <20251107043807.1758889-1-zecheng@google.com>
References: <20251107043807.1758889-1-zecheng@google.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider, Rik van Riel, Chris Mason, Madadi Vineeth Reddy,
 Xu Liu, Blake Jones, Josh Don, linux-kernel@vger.kernel.org, Zecheng Li

To remove the cfs_rq pointer array in task_group, allocate the combined
cfs_rq and sched_entity using the per-cpu allocator.

This patch implements the following:

- Changes task_group->cfs_rq from struct cfs_rq ** to
  struct cfs_rq __percpu *.

- Updates memory allocation in alloc_fair_sched_group() and
  free_fair_sched_group() to use alloc_percpu() and free_percpu()
  respectively.

- Adds the inline accessor tg_cfs_rq(tg, cpu), built on per_cpu_ptr(),
  to retrieve the cfs_rq pointer for a given task group and CPU.

- Replaces direct accesses tg->cfs_rq[cpu] with calls to the new
  tg_cfs_rq(tg, cpu) helper.

- Handles the root_task_group: since struct rq is already a per-cpu
  variable (runqueues), its embedded cfs_rq (rq->cfs) is also per-cpu.
  Therefore, we assign root_task_group.cfs_rq = &runqueues.cfs.

- Cleans up the code that initializes the root task group.

This change places each CPU's cfs_rq and sched_entity in its local
per-cpu memory area and removes the per-task_group pointer arrays.

Signed-off-by: Zecheng Li
---
 kernel/sched/core.c  | 35 ++++++++++-----------------
 kernel/sched/fair.c  | 57 +++++++++++++++++---------------------------
 kernel/sched/sched.h | 13 ++++++----
 3 files changed, 44 insertions(+), 61 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 12ebe1b4c8ae..376d27f4bbdb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8542,7 +8542,7 @@ static struct kmem_cache *task_group_cache __ro_after_init;
 
 void __init sched_init(void)
 {
-	unsigned long ptr = 0;
+	unsigned long __maybe_unused ptr = 0;
 	int i;
 
 	/* Make sure the linker didn't screw up */
@@ -8558,33 +8558,24 @@ void __init sched_init(void)
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	ptr += nr_cpu_ids * sizeof(void **);
-#endif
-#ifdef CONFIG_RT_GROUP_SCHED
-	ptr += 2 * nr_cpu_ids * sizeof(void **);
-#endif
-	if (ptr) {
-		ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.cfs_rq = &runqueues.cfs;
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-		root_task_group.cfs_rq = (struct cfs_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
-
-		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
-		init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
+	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+	init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 #ifdef CONFIG_EXT_GROUP_SCHED
-		scx_tg_init(&root_task_group);
+	scx_tg_init(&root_task_group);
 #endif /* CONFIG_EXT_GROUP_SCHED */
 #ifdef CONFIG_RT_GROUP_SCHED
-		root_task_group.rt_se = (struct sched_rt_entity **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	ptr += 2 * nr_cpu_ids * sizeof(void **);
+	ptr = (unsigned long)kzalloc(ptr, GFP_NOWAIT);
+	root_task_group.rt_se = (struct sched_rt_entity **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);
 
-		root_task_group.rt_rq = (struct rt_rq **)ptr;
-		ptr += nr_cpu_ids * sizeof(void **);
+	root_task_group.rt_rq = (struct rt_rq **)ptr;
+	ptr += nr_cpu_ids * sizeof(void **);
 
 #endif /* CONFIG_RT_GROUP_SCHED */
-	}
 
 	init_defrootdomain();
 
@@ -9483,7 +9474,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg,
 	}
 
 	for_each_online_cpu(i) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, i);
 		struct rq *rq = cfs_rq->rq;
 
 		guard(rq_lock_irq)(rq);
@@ -9651,7 +9642,7 @@ static u64 throttled_time_self(struct task_group *tg)
 	u64 total = 0;
 
 	for_each_possible_cpu(i) {
-		total += READ_ONCE(tg->cfs_rq[i]->throttled_clock_self_time);
+		total += READ_ONCE(tg_cfs_rq(tg, i)->throttled_clock_self_time);
 	}
 
 	return total;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f9fb07d73a03..a5403f5900d9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -327,7 +327,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	 * to a tree or when we reach the top of the tree
 	 */
 	if (cfs_rq->tg->parent &&
-	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
+	    tg_cfs_rq(cfs_rq->tg->parent, cpu)->on_list) {
 		/*
 		 * If parent is already on the list, we add the child
 		 * just before. Thanks to circular linked property of
@@ -335,7 +335,7 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 		 * of the list that starts by parent.
 		 */
 		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
+			&(tg_cfs_rq(cfs_rq->tg->parent, cpu)->leaf_cfs_rq_list));
 		/*
 		 * The branch is now connected to its tree so we can
 		 * reset tmp_alone_branch to the beginning of the
@@ -4141,7 +4141,7 @@ static void __maybe_unused clear_tg_offline_cfs_rqs(struct rq *rq)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		clear_tg_load_avg(cfs_rq);
 	}
@@ -5739,7 +5739,7 @@ static inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
 
 static inline int lb_throttled_hierarchy(struct task_struct *p, int dst_cpu)
 {
-	return throttled_hierarchy(task_group(p)->cfs_rq[dst_cpu]);
+	return throttled_hierarchy(tg_cfs_rq(task_group(p), dst_cpu));
 }
 
 static inline bool task_is_throttled(struct task_struct *p)
@@ -5885,7 +5885,7 @@ static void enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags);
 static int tg_unthrottle_up(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 	struct task_struct *p, *tmp;
 
 	if (--cfs_rq->throttle_count)
@@ -5956,7 +5956,7 @@ static void record_throttle_clock(struct cfs_rq *cfs_rq)
 static int tg_throttle_down(struct task_group *tg, void *data)
 {
 	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 	if (cfs_rq->throttle_count++)
 		return 0;
@@ -6432,8 +6432,8 @@ static void sync_throttle(struct task_group *tg, int cpu)
 	if (!tg->parent)
 		return;
 
-	cfs_rq = tg->cfs_rq[cpu];
-	pcfs_rq = tg->parent->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(tg, cpu);
+	pcfs_rq = tg_cfs_rq(tg->parent, cpu);
 
 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
 	cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
@@ -6625,7 +6625,7 @@ static void __maybe_unused update_runtime_enabled(struct rq *rq)
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
 		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		raw_spin_lock(&cfs_b->lock);
 		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
@@ -6654,7 +6654,7 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu_of(rq));
 
 		if (!cfs_rq->runtime_enabled)
 			continue;
@@ -9357,7 +9357,7 @@ static inline int task_is_ineligible_on_dst_cpu(struct task_struct *p, int dest_
 	struct cfs_rq *dst_cfs_rq;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	dst_cfs_rq = task_group(p)->cfs_rq[dest_cpu];
+	dst_cfs_rq = tg_cfs_rq(task_group(p), dest_cpu);
 #else
 	dst_cfs_rq = &cpu_rq(dest_cpu)->cfs;
 #endif
@@ -13094,7 +13094,7 @@ static int task_is_throttled_fair(struct task_struct *p, int cpu)
 	struct cfs_rq *cfs_rq;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	cfs_rq = task_group(p)->cfs_rq[cpu];
+	cfs_rq = tg_cfs_rq(task_group(p), cpu);
 #else
 	cfs_rq = &cpu_rq(cpu)->cfs;
 #endif
@@ -13354,42 +13354,31 @@ static void task_change_group_fair(struct task_struct *p)
 
 void free_fair_sched_group(struct task_group *tg)
 {
-	int i;
-
-	for_each_possible_cpu(i) {
-		if (tg->cfs_rq && tg->cfs_rq[i]) {
-			struct cfs_rq_with_se *combined =
-				container_of(tg->cfs_rq[i], struct cfs_rq_with_se, cfs_rq);
-			kfree(combined);
-		}
-	}
-
-	kfree(tg->cfs_rq);
+	free_percpu(tg->cfs_rq);
 }
 
 int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 {
-	struct cfs_rq_with_se *combined;
+	struct cfs_rq_with_se __percpu *combined;
 	struct sched_entity *se;
 	struct cfs_rq *cfs_rq;
 	int i;
 
-	tg->cfs_rq = kcalloc(nr_cpu_ids, sizeof(cfs_rq), GFP_KERNEL);
-	if (!tg->cfs_rq)
+	combined = alloc_percpu_gfp(struct cfs_rq_with_se, GFP_KERNEL);
+	if (!combined)
 		goto err;
 
+	tg->cfs_rq = &combined->cfs_rq;
 	tg->shares = NICE_0_LOAD;
 
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));
 
 	for_each_possible_cpu(i) {
-		combined = kzalloc_node(sizeof(struct cfs_rq_with_se),
-					GFP_KERNEL, cpu_to_node(i));
-		if (!combined)
+		cfs_rq = tg_cfs_rq(tg, i);
+		if (!cfs_rq)
 			goto err;
 
-		cfs_rq = &combined->cfs_rq;
-		se = &combined->se;
+		se = tg_se(tg, i);
 		init_cfs_rq(cfs_rq);
 		init_tg_cfs_entry(tg, cfs_rq, se, i, tg_se(parent, i));
 		init_entity_runnable_average(se);
@@ -13426,7 +13415,7 @@ void unregister_fair_sched_group(struct task_group *tg)
 	destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));
 
 	for_each_possible_cpu(cpu) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];
+		struct cfs_rq *cfs_rq = tg_cfs_rq(tg, cpu);
 		struct sched_entity *se = tg_se(tg, cpu);
 		struct rq *rq = cpu_rq(cpu);
 
@@ -13463,8 +13452,6 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	cfs_rq->rq = rq;
 	init_cfs_rq_runtime(cfs_rq);
 
-	tg->cfs_rq[cpu] = cfs_rq;
-
 	/* se could be NULL for root_task_group */
 	if (!se)
 		return;
@@ -13557,7 +13544,7 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	for_each_possible_cpu(i) {
 		struct rq *rq = cpu_rq(i);
 		struct sched_entity *se = tg_se(tg, i);
-		struct cfs_rq *grp_cfs_rq = tg->cfs_rq[i];
+		struct cfs_rq *grp_cfs_rq = tg_cfs_rq(tg, i);
 		bool was_idle = cfs_rq_is_idle(grp_cfs_rq);
 		long idle_task_delta;
 		struct rq_flags rf;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1133910a13c2..132e6098c058 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -477,7 +477,7 @@ struct task_group {
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* runqueue "owned" by this group on each CPU */
-	struct cfs_rq **cfs_rq;
+	struct cfs_rq __percpu *cfs_rq;
 	unsigned long shares;
 	/*
 	 * load_avg can be heavily contended at clock tick time, so put
@@ -1607,6 +1607,11 @@ static inline struct task_struct *task_of(struct sched_entity *se)
 	WARN_ON_ONCE(!entity_is_task(se));
 	return container_of(se, struct task_struct, se);
 }
+/* Access a specific CPU's cfs_rq from a task group */
+static inline struct cfs_rq *tg_cfs_rq(struct task_group *tg, int cpu)
+{
+	return per_cpu_ptr(tg->cfs_rq, cpu);
+}
 
 static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
 {
@@ -1614,7 +1619,7 @@ static inline struct sched_entity *tg_se(struct task_group *tg, int cpu)
 		return NULL;
 
 	struct cfs_rq_with_se *combined =
-		container_of(tg->cfs_rq[cpu], struct cfs_rq_with_se, cfs_rq);
+		container_of(tg_cfs_rq(tg, cpu), struct cfs_rq_with_se, cfs_rq);
 	return &combined->se;
 }
 
@@ -2199,8 +2204,8 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
-	p->se.cfs_rq = tg->cfs_rq[cpu];
+	set_task_rq_fair(&p->se, p->se.cfs_rq, tg_cfs_rq(tg, cpu));
+	p->se.cfs_rq = tg_cfs_rq(tg, cpu);
 	p->se.parent = tg_se(tg, cpu);
 	p->se.depth = p->se.parent ? p->se.parent->depth + 1 : 0;
 #endif
 
-- 
2.51.2.1041.gc1ab5b90ca-goog
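
The per-cpu allocation introduced in this last patch can be
approximated in userspace. In the sketch below, a flat NR_CPUS array
stands in for the kernel's per-cpu areas, so percpu_alloc() and
percpu_ptr() mirror only the shape of alloc_percpu() and
per_cpu_ptr(), not their CPU-local placement; all names are
hypothetical stand-ins:

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

struct rq_stub { long load; };
struct se_stub { long weight; };

struct rq_with_se {
	struct rq_stub rq;	/* first member, so &combined->rq aliases the base */
	struct se_stub se;
};

/* cf. alloc_percpu_gfp(struct cfs_rq_with_se, GFP_KERNEL): one zeroed
 * copy per CPU; the real allocator places each copy in CPU-local memory. */
static struct rq_with_se *percpu_alloc(void)
{
	return calloc(NR_CPUS, sizeof(struct rq_with_se));
}

/* cf. per_cpu_ptr(base, cpu): select one CPU's copy */
#define percpu_ptr(base, cpu)	(&(base)[cpu])

int main(void)
{
	struct rq_with_se *combined = percpu_alloc();
	if (!combined)
		return 1;

	/*
	 * One base pointer replaces the old NR_CPUS-sized pointer array;
	 * each CPU's rq and se live side by side in that CPU's copy.
	 */
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		percpu_ptr(combined, cpu)->rq.load = cpu;
		percpu_ptr(combined, cpu)->se.weight = 1024;
	}

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: load=%ld weight=%ld\n", cpu,
		       percpu_ptr(combined, cpu)->rq.load,
		       percpu_ptr(combined, cpu)->se.weight);

	free(combined);
	return 0;
}

The real per_cpu_ptr() applies a per-CPU offset to the base pointer
rather than indexing an array, which is also what allows
root_task_group.cfs_rq to alias &runqueues.cfs directly, as the commit
message describes.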