From nobody Sat Oct 4 01:42:12 2025
Date: Thu, 21 Aug 2025 18:37:52 -0700
In-Reply-To:
<20250822013749.3268080-6-ynaffit@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250822013749.3268080-6-ynaffit@google.com>
X-Mailer: git-send-email 2.51.0.rc2.233.g662b1ed5c5-goog
Message-ID: <20250822013749.3268080-7-ynaffit@google.com>
Subject: [PATCH v4 1/2] cgroup: cgroup.stat.local time accounting
From: Tiffany Yang
To: linux-kernel@vger.kernel.org
Cc: John Stultz, Thomas Gleixner, Stephen Boyd, Anna-Maria Behnsen,
    Frederic Weisbecker, Tejun Heo, Johannes Weiner, Michal Koutný,
    "Rafael J. Wysocki", Pavel Machek, Roman Gushchin, Chen Ridong,
    kernel-team@android.com, Jonathan Corbet, Shuah Khan,
    cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kselftest@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

There isn't yet a clear way to identify a set of "lost" time that
everyone (or at least a wider group of users) cares about. However,
users can perform some delay accounting by iterating over the
components they are interested in. This patch allows cgroup v2 freezing
time to be one of those components. Track the cumulative time that each
v2 cgroup spends freezing and expose it to userland via a new local
stat file in cgroupfs.

Thank you to Michal, who provided the ASCII art in the updated
documentation.

To access this value:

$ mkdir /sys/fs/cgroup/test
$ cat /sys/fs/cgroup/test/cgroup.stat.local
frozen_usec 0

Ensure consistent freeze time reads with freeze_seq, a per-cgroup
sequence counter. Writes are serialized using the css_set_lock.

Signed-off-by: Tiffany Yang
Cc: Tejun Heo
Cc: Michal Koutný
---
v3 -> v4:
 * Replace "freeze_time_total" with "frozen_usec" and expose the stat
   via cgroup.stat.local, as recommended by Tejun.
 * Use the same timestamp when freezing/unfreezing a cgroup as its
   descendants, as suggested by Michal.
v2 -> v3:
 * Use seqcount along with css_set_lock to guard freeze time accesses,
   as suggested by Michal.

v1 -> v2:
 * Track per-cgroup freezing time instead of per-task frozen time, as
   suggested by Tejun.
---
 Documentation/admin-guide/cgroup-v2.rst | 18 ++++++++++++++++
 include/linux/cgroup-defs.h             | 17 +++++++++++++++
 kernel/cgroup/cgroup.c                  | 28 +++++++++++++++++++++++++
 kernel/cgroup/freezer.c                 | 16 ++++++++++----
 4 files changed, 75 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 51c0bc4c2dc5..a1e3d431974c 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1001,6 +1001,24 @@ All cgroup core files are prefixed with "cgroup."
 	  Total number of dying cgroup subsystems (e.g. memory
 	  cgroup) at and beneath the current cgroup.
 
+  cgroup.stat.local
+	A read-only flat-keyed file which exists in non-root cgroups.
+	The following entry is defined:
+
+	  frozen_usec
+		Cumulative time that this cgroup has spent between freezing and
+		thawing, regardless of whether by self or ancestor groups.
+		NB: (not) reaching the "frozen" state is not accounted here.
+
+	  Using the following ASCII representation of a cgroup's freezer
+	  state, ::
+
+	              1  _____
+	     frozen 0 __/     \__
+	                ab    cd
+
+	  the duration being measured is the span between a and c.
+
   cgroup.freeze
 	A read-write single value file which exists on non-root
 	cgroups. Allowed values are "0" and "1". The default is "0".
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 6b93a64115fe..539c64eeef38 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -433,6 +433,23 @@ struct cgroup_freezer_state {
 	 * frozen, SIGSTOPped, and PTRACEd.
 	 */
 	int nr_frozen_tasks;
+
+	/* Freeze time data consistency protection */
+	seqcount_t freeze_seq;
+
+	/*
+	 * Most recent time the cgroup was requested to freeze.
+	 * Accesses guarded by freeze_seq counter. Writes serialized
+	 * by css_set_lock.
+	 */
+	u64 freeze_start_nsec;
+
+	/*
+	 * Total duration the cgroup has spent freezing.
+	 * Accesses guarded by freeze_seq counter. Writes serialized
+	 * by css_set_lock.
+	 */
+	u64 frozen_nsec;
 };
 
 struct cgroup {
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 312c6a8b55bb..ab096b884bbc 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -3763,6 +3763,27 @@ static int cgroup_stat_show(struct seq_file *seq, void *v)
 	return 0;
 }
 
+static int cgroup_core_local_stat_show(struct seq_file *seq, void *v)
+{
+	struct cgroup *cgrp = seq_css(seq)->cgroup;
+	unsigned int sequence;
+	u64 freeze_time;
+
+	do {
+		sequence = read_seqcount_begin(&cgrp->freezer.freeze_seq);
+		freeze_time = cgrp->freezer.frozen_nsec;
+		/* Add in current freezer interval if the cgroup is freezing. */
+		if (test_bit(CGRP_FREEZE, &cgrp->flags))
+			freeze_time += (ktime_get_ns() -
+					cgrp->freezer.freeze_start_nsec);
+	} while (read_seqcount_retry(&cgrp->freezer.freeze_seq, sequence));
+
+	seq_printf(seq, "frozen_usec %llu\n",
+		   (unsigned long long) freeze_time / NSEC_PER_USEC);
+
+	return 0;
+}
+
 #ifdef CONFIG_CGROUP_SCHED
 /**
  * cgroup_tryget_css - try to get a cgroup's css for the specified subsystem
@@ -5354,6 +5375,11 @@ static struct cftype cgroup_base_files[] = {
 		.name = "cgroup.stat",
 		.seq_show = cgroup_stat_show,
 	},
+	{
+		.name = "cgroup.stat.local",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.seq_show = cgroup_core_local_stat_show,
+	},
 	{
 		.name = "cgroup.freeze",
 		.flags = CFTYPE_NOT_ON_ROOT,
@@ -5763,6 +5789,7 @@ static struct cgroup *cgroup_create(struct cgroup *parent, const char *name,
 	 * if the parent has to be frozen, the child has too.
 	 */
 	cgrp->freezer.e_freeze = parent->freezer.e_freeze;
+	seqcount_init(&cgrp->freezer.freeze_seq);
 	if (cgrp->freezer.e_freeze) {
 		/*
 		 * Set the CGRP_FREEZE flag, so when a process will be
@@ -5771,6 +5798,7 @@ static struct cgroup *cgroup_create(struct cgroup *parent, const char *name,
 		 * consider it frozen immediately.
 		 */
 		set_bit(CGRP_FREEZE, &cgrp->flags);
+		cgrp->freezer.freeze_start_nsec = ktime_get_ns();
 		set_bit(CGRP_FROZEN, &cgrp->flags);
 	}
 
diff --git a/kernel/cgroup/freezer.c b/kernel/cgroup/freezer.c
index bf1690a167dd..6c18854bff34 100644
--- a/kernel/cgroup/freezer.c
+++ b/kernel/cgroup/freezer.c
@@ -171,7 +171,7 @@ static void cgroup_freeze_task(struct task_struct *task, bool freeze)
 /*
  * Freeze or unfreeze all tasks in the given cgroup.
  */
-static void cgroup_do_freeze(struct cgroup *cgrp, bool freeze)
+static void cgroup_do_freeze(struct cgroup *cgrp, bool freeze, u64 ts_nsec)
 {
 	struct css_task_iter it;
 	struct task_struct *task;
@@ -179,10 +179,16 @@ static void cgroup_do_freeze(struct cgroup *cgrp, bool freeze)
 	lockdep_assert_held(&cgroup_mutex);
 
 	spin_lock_irq(&css_set_lock);
-	if (freeze)
+	write_seqcount_begin(&cgrp->freezer.freeze_seq);
+	if (freeze) {
 		set_bit(CGRP_FREEZE, &cgrp->flags);
-	else
+		cgrp->freezer.freeze_start_nsec = ts_nsec;
+	} else {
 		clear_bit(CGRP_FREEZE, &cgrp->flags);
+		cgrp->freezer.frozen_nsec += (ts_nsec -
+					      cgrp->freezer.freeze_start_nsec);
+	}
+	write_seqcount_end(&cgrp->freezer.freeze_seq);
 	spin_unlock_irq(&css_set_lock);
 
 	if (freeze)
@@ -260,6 +266,7 @@ void cgroup_freeze(struct cgroup *cgrp, bool freeze)
 	struct cgroup *parent;
 	struct cgroup *dsct;
 	bool applied = false;
+	u64 ts_nsec;
 	bool old_e;
 
 	lockdep_assert_held(&cgroup_mutex);
@@ -271,6 +278,7 @@ void cgroup_freeze(struct cgroup *cgrp, bool freeze)
 		return;
 
 	cgrp->freezer.freeze = freeze;
+	ts_nsec = ktime_get_ns();
 
 	/*
 	 * Propagate changes downwards the cgroup tree.
@@ -298,7 +306,7 @@ void cgroup_freeze(struct cgroup *cgrp, bool freeze)
 	/*
 	 * Do change actual state: freeze or unfreeze.
 	 */
-	cgroup_do_freeze(dsct, freeze);
+	cgroup_do_freeze(dsct, freeze, ts_nsec);
 	applied = true;
 }
 
-- 
2.51.0.rc2.233.g662b1ed5c5-goog