From: Miaohe Lin
To: , ,
Cc: , ,
Subject: [PATCH] cgroup: remove unused task_cgroup_path()
Date: Sat, 10 Jun 2023 11:57:37 +0800
Message-ID: <20230610035737.1046651-1-linmiaohe@huawei.com>
X-Mailer: git-send-email 2.23.0

task_cgroup_path() is not used anymore, so remove it.

Signed-off-by: Miaohe Lin
---
 include/linux/cgroup.h |  1 -
 kernel/cgroup/cgroup.c | 39 ---------------------------------------
 2 files changed, 40 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 567c547cf371..b307013b9c6c 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -118,7 +118,6 @@ int cgroup_rm_cftypes(struct cftype *cfts);
 void cgroup_file_notify(struct cgroup_file *cfile);
 void cgroup_file_show(struct cgroup_file *cfile, bool show);
 
-int task_cgroup_path(struct task_struct *task, char *buf, size_t buflen);
 int cgroupstats_build(struct cgroupstats *stats, struct dentry *dentry);
 int proc_cgroup_show(struct seq_file *m, struct pid_namespace *ns,
 		     struct pid *pid, struct task_struct *tsk);
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index d0d033a48f61..48278adb7522 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2364,45 +2364,6 @@ int cgroup_path_ns(struct cgroup *cgrp, char *buf, size_t buflen,
 }
 EXPORT_SYMBOL_GPL(cgroup_path_ns);
 
-/**
- * task_cgroup_path - cgroup path of a task in the first cgroup hierarchy
- * @task: target task
- * @buf: the buffer to write the path into
- * @buflen: the length of the buffer
- *
- * Determine @task's cgroup on the first (the one with the lowest non-zero
- * hierarchy_id) cgroup hierarchy and copy its path into @buf. This
- * function grabs cgroup_mutex and shouldn't be used inside locks used by
- * cgroup controller callbacks.
- *
- * Return value is the same as kernfs_path().
- */
-int task_cgroup_path(struct task_struct *task, char *buf, size_t buflen)
-{
-	struct cgroup_root *root;
-	struct cgroup *cgrp;
-	int hierarchy_id = 1;
-	int ret;
-
-	cgroup_lock();
-	spin_lock_irq(&css_set_lock);
-
-	root = idr_get_next(&cgroup_hierarchy_idr, &hierarchy_id);
-
-	if (root) {
-		cgrp = task_cgroup_from_root(task, root);
-		ret = cgroup_path_ns_locked(cgrp, buf, buflen, &init_cgroup_ns);
-	} else {
-		/* if no hierarchy exists, everyone is in "/" */
-		ret = strscpy(buf, "/", buflen);
-	}
-
-	spin_unlock_irq(&css_set_lock);
-	cgroup_unlock();
-	return ret;
-}
-EXPORT_SYMBOL_GPL(task_cgroup_path);
-
 /**
  * cgroup_attach_lock - Lock for ->attach()
  * @lock_threadgroup: whether to down_write cgroup_threadgroup_rwsem
-- 
2.27.0
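
[Editor's note, not part of the patch] For readers unfamiliar with the removed
helper, the deleted kernel-doc above explains its contract: it resolved the
task's cgroup on the lowest-numbered hierarchy, took cgroup_mutex internally,
and returned like kernfs_path(). A minimal, purely hypothetical caller sketch
is shown below; report_task_cgroup() and the PATH_MAX-sized kmalloc buffer are
illustrative assumptions, not code from the kernel tree, and such a caller can
no longer exist once this patch lands.

#include <linux/cgroup.h>
#include <linux/limits.h>
#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/slab.h>

/* Hypothetical sketch of a pre-removal caller, for context only. */
static void report_task_cgroup(struct task_struct *tsk)
{
	char *buf;
	int ret;

	buf = kmalloc(PATH_MAX, GFP_KERNEL);
	if (!buf)
		return;

	/*
	 * task_cgroup_path() grabbed cgroup_mutex itself, so it could not
	 * be called from cgroup controller callbacks that already nest
	 * inside cgroup locks. Return value followed kernfs_path():
	 * negative errno on failure.
	 */
	ret = task_cgroup_path(tsk, buf, PATH_MAX);
	if (ret >= 0)
		pr_info("%s (pid %d) is in cgroup %s\n",
			tsk->comm, tsk->pid, buf);

	kfree(buf);
}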