From: Chengming Zhou
To: tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com, surenb@google.com
Cc: gregkh@linuxfoundation.org, corbet@lwn.net, mingo@redhat.com,
	peterz@infradead.org, songmuchun@bytedance.com, cgroups@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	Chengming Zhou
Subject: [PATCH v3 09/10] sched/psi: cache parent psi_group to speed up group iteration
Date: Wed, 24 Aug 2022 16:18:28 +0800
Message-Id: <20220824081829.33748-10-zhouchengming@bytedance.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220824081829.33748-1-zhouchengming@bytedance.com>
References: <20220824081829.33748-1-zhouchengming@bytedance.com>

We use iterate_groups() to iterate over each level's psi_group when
updating PSI stats, which is a very hot path. In the current code,
iterate_groups() has to go through several branches and call
cgroup_parent() to find the parent psi_group at every level, which is
not very efficient.

This patch caches the parent psi_group in struct psi_group, so we only
need to look up the psi_group of the task itself once and can then
follow group->parent to walk up the hierarchy.

Signed-off-by: Chengming Zhou
Acked-by: Johannes Weiner
---
 include/linux/psi_types.h |  2 ++
 kernel/sched/psi.c        | 47 ++++++++++++++++-----------------------
 2 files changed, 21 insertions(+), 28 deletions(-)

diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index 40c28171cd91..a0b746258c68 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -151,6 +151,8 @@ struct psi_trigger {
 };
 
 struct psi_group {
+	struct psi_group *parent;
+
 	/* Protects data used by the aggregator */
 	struct mutex avgs_lock;
 
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 7aab6f13ed12..814e99b1fed3 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -772,30 +772,18 @@ static void psi_group_change(struct psi_group *group, int cpu,
 		schedule_delayed_work(&group->avgs_work, PSI_FREQ);
 }
 
-static struct psi_group *iterate_groups(struct task_struct *task, void **iter)
+static inline struct psi_group *task_psi_group(struct task_struct *task)
 {
-	if (*iter == &psi_system)
-		return NULL;
-
 #ifdef CONFIG_CGROUPS
-	if (static_branch_likely(&psi_cgroups_enabled)) {
-		struct cgroup *cgroup = NULL;
-
-		if (!*iter)
-			cgroup = task->cgroups->dfl_cgrp;
-		else
-			cgroup = cgroup_parent(*iter);
-
-		if (cgroup && cgroup_parent(cgroup)) {
-			*iter = cgroup;
-			return cgroup_psi(cgroup);
-		}
-	}
+	if (static_branch_likely(&psi_cgroups_enabled))
+		return cgroup_psi(task_dfl_cgroup(task));
 #endif
-	*iter = &psi_system;
 	return &psi_system;
 }
 
+#define for_each_psi_group(group) \
+	for (; group; group = group->parent)
+
 static void psi_flags_change(struct task_struct *task, int clear, int set)
 {
 	if (((task->psi_flags & set) ||
@@ -815,7 +803,6 @@ void psi_task_change(struct task_struct *task, int clear, int set)
 {
 	int cpu = task_cpu(task);
 	struct psi_group *group;
-	void *iter = NULL;
 	u64 now;
 
 	if (!task->pid)
@@ -825,7 +812,8 @@ void psi_task_change(struct task_struct *task, int clear, int set)
 
 	now = cpu_clock(cpu);
 
-	while ((group = iterate_groups(task, &iter)))
+	group = task_psi_group(task);
+	for_each_psi_group(group)
 		psi_group_change(group, cpu, clear, set, now, true);
 }
 
@@ -834,7 +822,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 {
 	struct psi_group *group, *common = NULL;
 	int cpu = task_cpu(prev);
-	void *iter;
 	u64 now = cpu_clock(cpu);
 
 	if (next->pid) {
@@ -845,8 +832,8 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 		 * we reach the first common ancestor. Iterate @next's
 		 * ancestors only until we encounter @prev's ONCPU.
		 */
-		iter = NULL;
-		while ((group = iterate_groups(next, &iter))) {
+		group = task_psi_group(next);
+		for_each_psi_group(group) {
 			if (per_cpu_ptr(group->pcpu, cpu)->state_mask &
 			    PSI_ONCPU) {
 				common = group;
@@ -887,9 +874,12 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 
 	psi_flags_change(prev, clear, set);
 
-	iter = NULL;
-	while ((group = iterate_groups(prev, &iter)) && group != common)
+	group = task_psi_group(prev);
+	for_each_psi_group(group) {
+		if (group == common)
+			break;
 		psi_group_change(group, cpu, clear, set, now, wake_clock);
+	}
 
 	/*
 	 * TSK_ONCPU is handled up to the common ancestor. If we're tasked
@@ -897,7 +887,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 	 */
 	if (sleep || unlikely(prev->in_memstall != next->in_memstall)) {
 		clear &= ~TSK_ONCPU;
-		for (; group; group = iterate_groups(prev, &iter))
+		for_each_psi_group(group)
 			psi_group_change(group, cpu, clear, set, now, wake_clock);
 	}
 }
@@ -907,7 +897,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 void psi_account_irqtime(struct task_struct *task, u32 delta)
 {
 	int cpu = task_cpu(task);
-	void *iter = NULL;
 	struct psi_group *group;
 	struct psi_group_cpu *groupc;
 	u64 now;
@@ -917,7 +906,8 @@ void psi_account_irqtime(struct task_struct *task, u32 delta)
 
 	now = cpu_clock(cpu);
 
-	while ((group = iterate_groups(task, &iter))) {
+	group = task_psi_group(task);
+	for_each_psi_group(group) {
 		groupc = per_cpu_ptr(group->pcpu, cpu);
 
 		write_seqcount_begin(&groupc->seq);
@@ -1009,6 +999,7 @@ int psi_cgroup_alloc(struct cgroup *cgroup)
 		return -ENOMEM;
 	}
 	group_init(cgroup->psi);
+	cgroup->psi->parent = cgroup_psi(cgroup_parent(cgroup));
 	return 0;
 }
 
-- 
2.37.2
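
For readers who want to try the iteration pattern outside the kernel, below
is a minimal userspace sketch of what the commit message describes. None of
this code is part of the patch: the struct name, the three-level hierarchy,
and for_each_group_sketch() are made up for illustration; only the cached
parent-pointer walk mirrors the new task_psi_group()/for_each_psi_group()
pair.

/* Standalone sketch (userspace, not kernel code): walking a cached parent chain. */
#include <stdio.h>

struct psi_group_sketch {			/* hypothetical stand-in for struct psi_group */
	struct psi_group_sketch *parent;	/* cached once when the group is created */
	const char *name;
};

/* Same shape as the patch's for_each_psi_group(): follow the cached parents up. */
#define for_each_group_sketch(group) \
	for (; (group); (group) = (group)->parent)

int main(void)
{
	/* Made-up three-level hierarchy: task's cgroup -> parent cgroup -> psi_system. */
	struct psi_group_sketch system = { .parent = NULL,    .name = "psi_system" };
	struct psi_group_sketch parent = { .parent = &system, .name = "parent cgroup" };
	struct psi_group_sketch leaf   = { .parent = &parent, .name = "task's cgroup" };

	/* task_psi_group() analogue: a single lookup of the task's own group... */
	struct psi_group_sketch *group = &leaf;

	/* ...then a plain pointer chase instead of repeated cgroup_parent() calls. */
	for_each_group_sketch(group)
		printf("psi_group_change() would run for: %s\n", group->name);

	return 0;
}

The effect of the caching is visible even in this toy version: the per-level
work in the loop body no longer depends on how the hierarchy is discovered,
so the hot paths (psi_task_change(), psi_task_switch(), psi_account_irqtime())
pay for one group lookup plus a pointer dereference per level.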