Date: Wed, 31 Aug 2022 15:49:03 -0700
Message-ID: <20220831224903.454303-1-joshdon@google.com>
Subject: [PATCH] cgroup: add pids.peak interface for pids controller
From: Josh Don
To: Tejun Heo, Zefan Li, Johannes Weiner
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Josh Don

pids.peak tracks the high watermark of usage for the number of pids. This
gives a better baseline on which to set pids.max. Polling pids.current
isn't really feasible, since it would miss short-lived spikes. This
interface is analogous to memory.peak.
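For context, a userspace consumer reads the new file like any other cgroup control file: it contains a single decimal value. The sketch below is illustrative only and not part of this patch; the helper names (`read_pids_peak`, `parse_cgroup_s64`) are made up, and it assumes cgroup v2 is mounted at /sys/fs/cgroup:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: cgroup interface files such as pids.peak and
 * pids.current contain a single decimal value followed by a newline. */
static int64_t parse_cgroup_s64(const char *buf)
{
	return strtoll(buf, NULL, 10);
}

/* Hypothetical helper: read <cgroup_dir>/pids.peak; returns -1 on error.
 * Assumes a cgroup v2 mount at /sys/fs/cgroup. */
static int64_t read_pids_peak(const char *cgroup_dir)
{
	char path[4096], buf[32];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/pids.peak", cgroup_dir);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return parse_cgroup_s64(buf);
}
```

One could then size pids.max relative to the observed peak rather than to a guess from sampling pids.current.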
Signed-off-by: Josh Don
---
 kernel/cgroup/pids.c | 37 +++++++++++++++++++++++++++++++++++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
index 511af87f685e..7695e60bcb40 100644
--- a/kernel/cgroup/pids.c
+++ b/kernel/cgroup/pids.c
@@ -47,6 +47,7 @@ struct pids_cgroup {
 	 */
 	atomic64_t		counter;
 	atomic64_t		limit;
+	int64_t			watermark;
 
 	/* Handle for "pids.events" */
 	struct cgroup_file	events_file;
@@ -85,6 +86,16 @@ static void pids_css_free(struct cgroup_subsys_state *css)
 	kfree(css_pids(css));
 }
 
+static void pids_update_watermark(struct pids_cgroup *p, int64_t nr_pids)
+{
+	/*
+	 * This is racy, but we don't need perfectly accurate tallying of
+	 * the watermark, and this lets us avoid extra atomic overhead.
+	 */
+	if (nr_pids > READ_ONCE(p->watermark))
+		WRITE_ONCE(p->watermark, nr_pids);
+}
+
 /**
  * pids_cancel - uncharge the local pid count
  * @pids: the pid cgroup state
@@ -128,8 +139,11 @@ static void pids_charge(struct pids_cgroup *pids, int num)
 {
 	struct pids_cgroup *p;
 
-	for (p = pids; parent_pids(p); p = parent_pids(p))
-		atomic64_add(num, &p->counter);
+	for (p = pids; parent_pids(p); p = parent_pids(p)) {
+		int64_t new = atomic64_add_return(num, &p->counter);
+
+		pids_update_watermark(p, new);
+	}
 }
 
 /**
@@ -156,6 +170,12 @@ static int pids_try_charge(struct pids_cgroup *pids, int num)
 		 */
 		if (new > limit)
 			goto revert;
+
+		/*
+		 * Not technically accurate if we go over limit somewhere up
+		 * the hierarchy, but that's tolerable for the watermark.
+		 */
+		pids_update_watermark(p, new);
 	}
 
 	return 0;
@@ -311,6 +331,14 @@ static s64 pids_current_read(struct cgroup_subsys_state *css,
 	return atomic64_read(&pids->counter);
 }
 
+static s64 pids_peak_read(struct cgroup_subsys_state *css,
+			  struct cftype *cft)
+{
+	struct pids_cgroup *pids = css_pids(css);
+
+	return READ_ONCE(pids->watermark);
+}
+
 static int pids_events_show(struct seq_file *sf, void *v)
 {
 	struct pids_cgroup *pids = css_pids(seq_css(sf));
@@ -331,6 +359,11 @@ static struct cftype pids_files[] = {
 		.read_s64	= pids_current_read,
 		.flags		= CFTYPE_NOT_ON_ROOT,
 	},
+	{
+		.name		= "peak",
+		.flags		= CFTYPE_NOT_ON_ROOT,
+		.read_s64	= pids_peak_read,
+	},
 	{
 		.name		= "events",
 		.seq_show	= pids_events_show,
-- 
2.37.2.672.g94769d06f0-goog
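The racy check-then-store in pids_update_watermark() can be illustrated in userspace. The sketch below is not kernel code: C11 relaxed atomics stand in for the kernel's atomic64_t and READ_ONCE()/WRITE_ONCE(), and the struct and function names are invented for illustration:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Userspace analogue (illustrative, not the kernel struct): an atomic
 * usage counter plus a watermark accessed with relaxed atomics, standing
 * in for the kernel's atomic64_t and READ_ONCE()/WRITE_ONCE(). */
struct pids_counter {
	atomic_int_fast64_t counter;
	atomic_int_fast64_t watermark;
};

/* Racy check-then-store, mirroring pids_update_watermark(): two
 * concurrent updaters can race and briefly record the smaller value,
 * which is tolerable because the peak only needs to be approximate. */
static void update_watermark(struct pids_counter *p, int64_t nr_pids)
{
	if (nr_pids > atomic_load_explicit(&p->watermark, memory_order_relaxed))
		atomic_store_explicit(&p->watermark, nr_pids,
				      memory_order_relaxed);
}

/* Mirrors pids_charge(): bump the counter, then feed the post-charge
 * total into the watermark. atomic_fetch_add_explicit() returns the
 * old value, so num is added back (the kernel's atomic64_add_return()
 * returns the new value directly). */
static int64_t charge(struct pids_counter *p, int64_t num)
{
	int64_t new_count = atomic_fetch_add_explicit(&p->counter, num,
						      memory_order_relaxed) + num;
	update_watermark(p, new_count);
	return new_count;
}
```

Charging 5, then 3, then uncharging 6 leaves the counter at 2 while the watermark stays at 8, which is the behavior the patch exposes via pids.peak.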