From nobody Tue Feb 10 02:01:12 2026
From: Zheng Qixing
To: tj@kernel.org, josef@toxicpanda.com, axboe@kernel.dk, yukuai3@huawei.com, hch@infradead.org
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, mkoutny@suse.com, yi.zhang@huawei.com, yangerkun@huawei.com, houtao1@huawei.com, zhengqixing@huawei.com
Subject: [PATCH v2 1/3] blk-cgroup: fix race between policy activation and blkg destruction
Date: Tue, 13 Jan 2026 14:10:33 +0800
Message-Id: <20260113061035.1902522-2-zhengqixing@huaweicloud.com>
In-Reply-To: <20260113061035.1902522-1-zhengqixing@huaweicloud.com>
References: <20260113061035.1902522-1-zhengqixing@huaweicloud.com>

From: Zheng Qixing

When switching an IO scheduler on a block device, blkcg_activate_policy()
allocates blkg_policy_data (pd) for all blkgs attached to the queue.
However, blkcg_activate_policy() may race with concurrent blkcg deletion,
leading to use-after-free and memory leak issues.

The use-after-free occurs in the following race:

T1 (blkcg_activate_policy):
- Successfully allocates pd for blkg1 (loop0->queue, blkcgA)
- Fails to allocate pd for blkg2 (loop0->queue, blkcgB)
- Enters the enomem rollback path to release blkg1 resources

T2 (blkcg deletion):
- blkcgA is deleted concurrently
- blkg1 is freed via blkg_free_workfn()
- blkg1->pd is freed

T1 (continued):
- Rollback path accesses blkg1->pd->online after pd is freed
- Triggers use-after-free

In addition, blkg_free_workfn() frees pd before removing the blkg from
q->blkg_list. This allows blkcg_activate_policy() to allocate a new pd for
a blkg that is being destroyed, leaving the newly allocated pd unreachable
when the blkg is finally freed.

Fix these races by extending blkcg_mutex coverage to serialize
blkcg_activate_policy() rollback and blkg destruction, ensuring the pd
lifecycle is synchronized with blkg list visibility.
Link: https://lore.kernel.org/all/20260108014416.3656493-3-zhengqixing@huaweicloud.com/
Fixes: f1c006f1c685 ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()")
Signed-off-by: Zheng Qixing
---
 block/blk-cgroup.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 3cffb68ba5d8..600f8c5843ea 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1596,6 +1596,8 @@ int blkcg_activate_policy(struct gendisk *disk, const struct blkcg_policy *pol)
 
 	if (queue_is_mq(q))
 		memflags = blk_mq_freeze_queue(q);
+
+	mutex_lock(&q->blkcg_mutex);
 retry:
 	spin_lock_irq(&q->queue_lock);
 
@@ -1658,6 +1660,7 @@ int blkcg_activate_policy(struct gendisk *disk, const struct blkcg_policy *pol)
 
 	spin_unlock_irq(&q->queue_lock);
 out:
+	mutex_unlock(&q->blkcg_mutex);
 	if (queue_is_mq(q))
 		blk_mq_unfreeze_queue(q, memflags);
 	if (pinned_blkg)
-- 
2.39.2
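
For context, the destruction side that this locking change serializes
against looks roughly like the sketch below, condensed from the
blkg_free_workfn() shape introduced by commit f1c006f1c685 (the exact
in-tree function has more steps, so treat this as an illustration rather
than the real code). Both the pd_free_fn() calls and the q->blkg_list
removal sit under q->blkcg_mutex, which is why holding the same mutex
across the activation rollback closes both the use-after-free and the pd
leak window described above:

/* Condensed sketch of the blkg free path, not the exact in-tree code. */
static void blkg_free_workfn(struct work_struct *work)
{
	struct blkcg_gq *blkg = container_of(work, struct blkcg_gq, free_work);
	struct request_queue *q = blkg->q;
	int i;

	mutex_lock(&q->blkcg_mutex);
	/* pd is gone after this loop ... */
	for (i = 0; i < BLKCG_MAX_POLS; i++)
		if (blkg->pd[i])
			blkcg_policy[i]->pd_free_fn(blkg->pd[i]);

	/* ... but the blkg stays visible on q->blkg_list until here. */
	spin_lock_irq(&q->queue_lock);
	list_del_init(&blkg->q_node);
	spin_unlock_irq(&q->queue_lock);
	mutex_unlock(&q->blkcg_mutex);

	kfree(blkg);
}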
From nobody Tue Feb 10 02:01:12 2026
From: Zheng Qixing
To: tj@kernel.org, josef@toxicpanda.com, axboe@kernel.dk, yukuai3@huawei.com, hch@infradead.org
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, mkoutny@suse.com, yi.zhang@huawei.com, yangerkun@huawei.com, houtao1@huawei.com, zhengqixing@huawei.com
Subject: [PATCH v2 2/3] blk-cgroup: skip dying blkg in blkcg_activate_policy()
Date: Tue, 13 Jan 2026 14:10:34 +0800
Message-Id: <20260113061035.1902522-3-zhengqixing@huaweicloud.com>
In-Reply-To: <20260113061035.1902522-1-zhengqixing@huaweicloud.com>
References: <20260113061035.1902522-1-zhengqixing@huaweicloud.com>

From: Zheng Qixing

When switching IO schedulers on a block device, blkcg_activate_policy()
can race with concurrent blkcg deletion, leading to a use-after-free in
rcu_accelerate_cbs.

T1:                                  T2:
                                     blkg_destroy
                                       kill(&blkg->refcnt) // blkg->refcnt=1->0
                                     blkg_release // call_rcu(__blkg_release)
                                     ...
                                     blkg_free_workfn
                                       ->pd_free_fn(pd)
elv_iosched_store
  elevator_switch
    ...
    iterate blkg list
      blkg_get(blkg) // blkg->refcnt=0->1
                                     list_del_init(&blkg->q_node)
blkg_put(pinned_blkg) // blkg->refcnt=1->0
blkg_release // call_rcu again
rcu_accelerate_cbs // uaf

Fix this by replacing blkg_get() with blkg_tryget(), which fails if the
blkg's refcount has already reached zero. If blkg_tryget() fails, skip
processing this blkg since it's already being destroyed.

Link: https://lore.kernel.org/all/20260108014416.3656493-4-zhengqixing@huaweicloud.com/
Fixes: f1c006f1c685 ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()")
Signed-off-by: Zheng Qixing
Reviewed-by: Christoph Hellwig
Reviewed-by: Michal Koutný
---
 block/blk-cgroup.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 600f8c5843ea..5dbc107eec53 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1622,9 +1622,10 @@ int blkcg_activate_policy(struct gendisk *disk, const struct blkcg_policy *pol)
 			 * GFP_NOWAIT failed. Free the existing one and
 			 * prealloc for @blkg w/ GFP_KERNEL.
 			 */
+			if (!blkg_tryget(blkg))
+				continue;
 			if (pinned_blkg)
 				blkg_put(pinned_blkg);
-			blkg_get(blkg);
 			pinned_blkg = blkg;
 
 			spin_unlock_irq(&q->queue_lock);
-- 
2.39.2
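
The fix hinges on the difference between the two reference helpers in the
block layer's internal blk-cgroup header: blkg_get() bumps the percpu ref
unconditionally and can therefore revive a blkg whose count already hit
zero, while blkg_tryget() fails in that case. A sketch of the two helpers,
assuming the usual percpu_ref-based definitions (the exact in-tree code may
differ slightly):

/* Sketch of the helpers the fix switches between. */
static inline void blkg_get(struct blkcg_gq *blkg)
{
	/* Unconditional get: happily takes refcnt 0->1 on a dying blkg. */
	percpu_ref_get(&blkg->refcnt);
}

static inline bool blkg_tryget(struct blkcg_gq *blkg)
{
	/* Fails once the refcount has already dropped to zero. */
	return blkg && percpu_ref_tryget(&blkg->refcnt);
}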
From nobody Tue Feb 10 02:01:12 2026
From: Zheng Qixing
To: tj@kernel.org, josef@toxicpanda.com, axboe@kernel.dk, yukuai3@huawei.com, hch@infradead.org
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, mkoutny@suse.com, yi.zhang@huawei.com, yangerkun@huawei.com, houtao1@huawei.com, zhengqixing@huawei.com
Subject: [PATCH v2 3/3] blk-cgroup: factor policy pd teardown loop into helper
Date: Tue, 13 Jan 2026 14:10:35 +0800
Message-Id: <20260113061035.1902522-4-zhengqixing@huaweicloud.com>
In-Reply-To: <20260113061035.1902522-1-zhengqixing@huaweicloud.com>
References: <20260113061035.1902522-1-zhengqixing@huaweicloud.com>

From: Zheng Qixing

Move the teardown sequence which offlines and frees per-policy
blkg_policy_data (pd) into a helper for readability.

No functional change intended.

Signed-off-by: Zheng Qixing
Reviewed-by: Christoph Hellwig
Reviewed-by: Yu Kuai
---
 block/blk-cgroup.c | 58 +++++++++++++++++++++-------------------
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 5dbc107eec53..78227ab0c1d7 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1559,6 +1559,31 @@ struct cgroup_subsys io_cgrp_subsys = {
 };
 EXPORT_SYMBOL_GPL(io_cgrp_subsys);
 
+/*
+ * Tear down per-blkg policy data for @pol on @q.
+ */
+static void blkcg_policy_teardown_pds(struct request_queue *q,
+				      const struct blkcg_policy *pol)
+{
+	struct blkcg_gq *blkg;
+
+	list_for_each_entry(blkg, &q->blkg_list, q_node) {
+		struct blkcg *blkcg = blkg->blkcg;
+		struct blkg_policy_data *pd;
+
+		spin_lock(&blkcg->lock);
+		pd = blkg->pd[pol->plid];
+		if (pd) {
+			if (pd->online && pol->pd_offline_fn)
+				pol->pd_offline_fn(pd);
+			pd->online = false;
+			pol->pd_free_fn(pd);
+			blkg->pd[pol->plid] = NULL;
+		}
+		spin_unlock(&blkcg->lock);
+	}
+}
+
 /**
  * blkcg_activate_policy - activate a blkcg policy on a gendisk
  * @disk: gendisk of interest
@@ -1673,21 +1698,7 @@ int blkcg_activate_policy(struct gendisk *disk, const struct blkcg_policy *pol)
 enomem:
 	/* alloc failed, take down everything */
 	spin_lock_irq(&q->queue_lock);
-	list_for_each_entry(blkg, &q->blkg_list, q_node) {
-		struct blkcg *blkcg = blkg->blkcg;
-		struct blkg_policy_data *pd;
-
-		spin_lock(&blkcg->lock);
-		pd = blkg->pd[pol->plid];
-		if (pd) {
-			if (pd->online && pol->pd_offline_fn)
-				pol->pd_offline_fn(pd);
-			pd->online = false;
-			pol->pd_free_fn(pd);
-			blkg->pd[pol->plid] = NULL;
-		}
-		spin_unlock(&blkcg->lock);
-	}
+	blkcg_policy_teardown_pds(q, pol);
 	spin_unlock_irq(&q->queue_lock);
 	ret = -ENOMEM;
 	goto out;
@@ -1706,7 +1717,6 @@ void blkcg_deactivate_policy(struct gendisk *disk,
 			     const struct blkcg_policy *pol)
 {
 	struct request_queue *q = disk->queue;
-	struct blkcg_gq *blkg;
 	unsigned int memflags;
 
 	if (!blkcg_policy_enabled(q, pol))
@@ -1717,22 +1727,8 @@ void blkcg_deactivate_policy(struct gendisk *disk,
 
 	mutex_lock(&q->blkcg_mutex);
 	spin_lock_irq(&q->queue_lock);
-
 	__clear_bit(pol->plid, q->blkcg_pols);
-
-	list_for_each_entry(blkg, &q->blkg_list, q_node) {
-		struct blkcg *blkcg = blkg->blkcg;
-
-		spin_lock(&blkcg->lock);
-		if (blkg->pd[pol->plid]) {
-			if (blkg->pd[pol->plid]->online && pol->pd_offline_fn)
-				pol->pd_offline_fn(blkg->pd[pol->plid]);
-			pol->pd_free_fn(blkg->pd[pol->plid]);
-			blkg->pd[pol->plid] = NULL;
-		}
-		spin_unlock(&blkcg->lock);
-	}
-
+	blkcg_policy_teardown_pds(q, pol);
 	spin_unlock_irq(&q->queue_lock);
 	mutex_unlock(&q->blkcg_mutex);
 
-- 
2.39.2
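
The new helper keeps the locking contract of the two open-coded loops it
replaces: the caller holds q->queue_lock with interrupts disabled, and the
helper takes each blkcg->lock itself. A minimal call-site sketch, simply
mirroring the hunks above:

	spin_lock_irq(&q->queue_lock);
	blkcg_policy_teardown_pds(q, pol);	/* takes blkcg->lock per blkg */
	spin_unlock_irq(&q->queue_lock);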