From nobody Mon Oct 6 10:16:40 2025
From: Chen Ridong
To: tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com, lizefan@huawei.com
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, lujialin4@huawei.com, chenridong@huawei.com, gaoyingjie@uniontech.com
Subject: [PATCH v2 -next] cgroup: remove offline draining in root destruction to avoid hung_tasks
Date: Tue, 22 Jul 2025 11:27:33 +0000
Message-Id: <20250722112733.4113237-1-chenridong@huaweicloud.com>
X-Mailer: git-send-email 2.34.1

From: Chen Ridong

A hung task can occur during LTP cgroup testing [1] when repeatedly
mounting/unmounting the perf_event and net_prio controllers with
systemd.unified_cgroup_hierarchy=1. The hang manifests in
cgroup_lock_and_drain_offline() during root destruction.

Related cases:
  cgroup_fj_function_perf_event   cgroup_fj_function.sh perf_event
  cgroup_fj_function_net_prio     cgroup_fj_function.sh net_prio

Call Trace:
  cgroup_lock_and_drain_offline+0x14c/0x1e8
  cgroup_destroy_root+0x3c/0x2c0
  css_free_rwork_fn+0x248/0x338
  process_one_work+0x16c/0x3b8
  worker_thread+0x22c/0x3b0
  kthread+0xec/0x100
  ret_from_fork+0x10/0x20

Root Cause:

  CPU0                                  CPU1
  mount perf_event                      umount net_prio
  cgroup1_get_tree                      cgroup_kill_sb
  rebind_subsystems                     // root destruction enqueues
                                        // cgroup_destroy_wq
  // kill all perf_event css
  // one perf_event css A is dying
  // css A offline enqueues
  // cgroup_destroy_wq
                                        // root destruction will be
                                        // executed first
                                        css_free_rwork_fn
                                        cgroup_destroy_root
                                        cgroup_lock_and_drain_offline
                                        // some perf descendants are dying
                                        // cgroup_destroy_wq max_active = 1
                                        // waiting for css A to die

Problem scenario:
1. CPU0 mounts perf_event (rebind_subsystems).
2. CPU1 unmounts net_prio (cgroup_kill_sb), queuing the root destruction
   work on cgroup_destroy_wq.
3. A dying perf_event css gets queued for offline on cgroup_destroy_wq
   after the root destruction work.
4. Root destruction waits for the offline to complete, but the offline
   work is queued behind root destruction in cgroup_destroy_wq
   (max_active = 1), so neither can make progress.

Solution:
Move cgroup_lock_and_drain_offline() to the start of the unmount
operation. This ensures:
1. cgroup_lock_and_drain_offline() is never called from within the
   cgroup_destroy_wq context.
2. No new dying csses for the subsystem being unmounted can appear in
   cgrp_dfl_root between the start of the unmount and the subsystem
   rebinding.

[1] https://github.com/linux-test-project/ltp/blob/master/runtest/controllers

Fixes: 334c3679ec4b ("cgroup: reimplement rebind_subsystems() using cgroup_apply_control() and friends")
Reported-by: Gao Yingjie
Signed-off-by: Chen Ridong
---
 kernel/cgroup/cgroup.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 312c6a8b55bb..af81a90f8c92 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -1346,8 +1346,7 @@ static void cgroup_destroy_root(struct cgroup_root *root)
 
 	trace_cgroup_destroy_root(root);
 
-	cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp);
-
+	cgroup_lock();
 	BUG_ON(atomic_read(&root->nr_cgrps));
 	BUG_ON(!list_empty(&cgrp->self.children));
 
@@ -2336,6 +2335,8 @@ static void cgroup_kill_sb(struct super_block *sb)
 	 *
 	 * And don't kill the default root.
 	 */
+	cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp);
+	cgroup_unlock();
 	if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
 	    !percpu_ref_is_dying(&root->cgrp.self.refcnt))
 		percpu_ref_kill(&root->cgrp.self.refcnt);
-- 
2.34.1
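
P.S. For readers unfamiliar with ordered workqueues, below is a minimal
userspace sketch of the deadlock pattern described above. It is plain
pthreads, not kernel code, and every name in it is illustrative: a
serial queue (the analogue of cgroup_destroy_wq with max_active == 1)
runs "root destruction" first, which then waits on "css offline", an
item queued behind it that can never run. A timed wait is used so the
demo terminates where the kernel would instead report a hung task.

/*
 * Standalone userspace sketch of the deadlock (all names illustrative,
 * none of this is kernel code): a serial queue runs items strictly in
 * order, and the first item waits on the second.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t offline_done_cv = PTHREAD_COND_INITIALIZER;
static bool css_offline_done;

/* Queued second: would mark css A as fully offline. */
static void css_offline_work(void)
{
	pthread_mutex_lock(&lock);
	css_offline_done = true;
	pthread_cond_broadcast(&offline_done_cv);
	pthread_mutex_unlock(&lock);
}

/* Queued first: stands in for cgroup_destroy_root() draining offlines. */
static void root_destroy_work(void)
{
	struct timespec deadline;

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += 3;	/* stand-in for the hung_task watchdog */

	pthread_mutex_lock(&lock);
	while (!css_offline_done) {
		/* Waits on work that sits behind us in the same queue. */
		if (pthread_cond_timedwait(&offline_done_cv, &lock,
					   &deadline)) {
			printf("hung: drain ran inside the serial queue\n");
			break;
		}
	}
	pthread_mutex_unlock(&lock);
}

/* The serial workqueue (max_active == 1): strict FIFO, one at a time. */
static void (*queue[])(void) = { root_destroy_work, css_offline_work };

static void *worker(void *arg)
{
	unsigned int i;

	(void)arg;
	for (i = 0; i < sizeof(queue) / sizeof(queue[0]); i++)
		queue[i]();
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	pthread_join(&t, NULL);
	return 0;
}

Built with "gcc -pthread", this prints the "hung" message after the
timeout, since css_offline_work() cannot run until root_destroy_work()
returns. The patch breaks exactly this cycle by doing the drain in
cgroup_kill_sb(), i.e. before anything is queued on cgroup_destroy_wq.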