From: Chuyi Zhou <zhouchuyi@bytedance.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org, tj@kernel.org, linux-kernel@vger.kernel.org, Chuyi Zhou <zhouchuyi@bytedance.com>
Subject: [RESEND PATCH bpf-next v6 1/8] cgroup: Prepare for using css_task_iter_*() in BPF
Date: Wed, 18 Oct 2023 14:17:39 +0800
Message-Id: <20231018061746.111364-2-zhouchuyi@bytedance.com>
In-Reply-To: <20231018061746.111364-1-zhouchuyi@bytedance.com>
References: <20231018061746.111364-1-zhouchuyi@bytedance.com>
This patch makes some preparations for using css_task_iter_*() in BPF
programs:

1. The CSS_TASK_ITER_* flags are #defines, so they never make it into BTF
   and BPF programs cannot use them directly. Convert them to an enum so
   that BPF programs can take them from vmlinux.h (a usage sketch follows
   at the end of this mail).

2. The next patch will expose css_task_iter_*() as common kfuncs, which is
   not safe as things stand: css_task_iter_*() uses spin_unlock_irq(),
   which unconditionally re-enables interrupts and can therefore corrupt
   the irq state depending on the context the BPF program runs in. Switch
   to the irqsave/irqrestore variants; the change is harmless for existing
   callers.

Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Acked-by: Tejun Heo <tj@kernel.org>
---
 include/linux/cgroup.h | 12 +++++-------
 kernel/cgroup/cgroup.c | 18 ++++++++++++------
 2 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index b307013b9c6c..0ef0af66080e 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -40,13 +40,11 @@ struct kernel_clone_args;
 #define CGROUP_WEIGHT_DFL		100
 #define CGROUP_WEIGHT_MAX		10000
 
-/* walk only threadgroup leaders */
-#define CSS_TASK_ITER_PROCS		(1U << 0)
-/* walk all threaded css_sets in the domain */
-#define CSS_TASK_ITER_THREADED		(1U << 1)
-
-/* internal flags */
-#define CSS_TASK_ITER_SKIPPED		(1U << 16)
+enum {
+	CSS_TASK_ITER_PROCS    = (1U << 0),  /* walk only threadgroup leaders */
+	CSS_TASK_ITER_THREADED = (1U << 1),  /* walk all threaded css_sets in the domain */
+	CSS_TASK_ITER_SKIPPED  = (1U << 16), /* internal flags */
+};
 
 /* a css_task_iter should be treated as an opaque object */
 struct css_task_iter {
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 1fb7f562289d..b6d64f3b8888 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4917,9 +4917,11 @@ static void css_task_iter_advance(struct css_task_iter *it)
 void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
 			 struct css_task_iter *it)
 {
+	unsigned long irqflags;
+
 	memset(it, 0, sizeof(*it));
 
-	spin_lock_irq(&css_set_lock);
+	spin_lock_irqsave(&css_set_lock, irqflags);
 
 	it->ss = css->ss;
 	it->flags = flags;
@@ -4933,7 +4935,7 @@ void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
 
 	css_task_iter_advance(it);
 
-	spin_unlock_irq(&css_set_lock);
+	spin_unlock_irqrestore(&css_set_lock, irqflags);
 }
 
 /**
@@ -4946,12 +4948,14 @@ void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
  */
 struct task_struct *css_task_iter_next(struct css_task_iter *it)
 {
+	unsigned long irqflags;
+
 	if (it->cur_task) {
 		put_task_struct(it->cur_task);
 		it->cur_task = NULL;
 	}
 
-	spin_lock_irq(&css_set_lock);
+	spin_lock_irqsave(&css_set_lock, irqflags);
 
 	/* @it may be half-advanced by skips, finish advancing */
 	if (it->flags & CSS_TASK_ITER_SKIPPED)
@@ -4964,7 +4968,7 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it)
 		css_task_iter_advance(it);
 	}
 
-	spin_unlock_irq(&css_set_lock);
+	spin_unlock_irqrestore(&css_set_lock, irqflags);
 
 	return it->cur_task;
 }
@@ -4977,11 +4981,13 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it)
  */
 void css_task_iter_end(struct css_task_iter *it)
 {
+	unsigned long irqflags;
+
 	if (it->cur_cset) {
-		spin_lock_irq(&css_set_lock);
+		spin_lock_irqsave(&css_set_lock, irqflags);
 		list_del(&it->iters_node);
 		put_css_set_locked(it->cur_cset);
-		spin_unlock_irq(&css_set_lock);
+		spin_unlock_irqrestore(&css_set_lock, irqflags);
 	}
 
 	if (it->cur_dcset)
-- 
2.20.1
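
For reference, here is a minimal sketch of the kind of BPF program this
series is preparing for. The bpf_iter_css_task_* kfuncs, the other kfunc
declarations, and the attach point below come from later patches in this
series (or elsewhere in bpf-next), not from this patch, so treat them as
assumptions for illustration only; the point is simply that
CSS_TASK_ITER_PROCS can be referenced via vmlinux.h once it is an enum.

/* css_iter_demo.bpf.c -- a sketch, not part of this patch. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* kfuncs assumed to be available; added by later patches / elsewhere in bpf-next */
struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
int bpf_iter_css_task_new(struct bpf_iter_css_task *it,
			  struct cgroup_subsys_state *css, unsigned int flags) __ksym;
struct task_struct *bpf_iter_css_task_next(struct bpf_iter_css_task *it) __ksym;
void bpf_iter_css_task_destroy(struct bpf_iter_css_task *it) __ksym;

const volatile u64 target_cgid;	/* cgroup id, set by user space before load */
u64 nr_procs;			/* number of thread-group leaders seen */

SEC("lsm.s/bprm_committed_creds")	/* illustrative sleepable attach point */
void BPF_PROG(count_cgroup_procs, struct linux_binprm *bprm)
{
	struct cgroup_subsys_state *css;
	struct bpf_iter_css_task it;
	struct task_struct *task;
	struct cgroup *cgrp;

	cgrp = bpf_cgroup_from_id(target_cgid);
	if (!cgrp)
		return;
	css = &cgrp->self;

	/* CSS_TASK_ITER_PROCS is taken from vmlinux.h thanks to the enum conversion */
	bpf_iter_css_task_new(&it, css, CSS_TASK_ITER_PROCS);
	while ((task = bpf_iter_css_task_next(&it)))
		nr_procs++;
	bpf_iter_css_task_destroy(&it);

	bpf_cgroup_release(cgrp);
}

Which program types may call these kfuncs is restricted by the later
patches; the attach point above is only one plausible choice.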