[PATCH v2 01/19] blk-cgroup: protect iterating blkgs with blkcg->lock in blkcg_print_stat()

From: Yu Kuai <yukuai3@huawei.com>

blkcg_print_one_stat() is called for each blkg and it:
- accesses blkg->iostat, which is freed from the RCU callback path via
  blkg_free_workfn();
- accesses policy data from pd_stat_fn(), which is freed by pd_free_fn(),
  and pd_free_fn() can be called either when the blkcg is removed or when
  the policy is deactivated (see the sketch below).
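
For illustration, the freeing side looks roughly like the following
(a simplified sketch, not the exact upstream code; only the parts relevant
to this race are kept):

  static void blkg_free_workfn(struct work_struct *work)
  {
          struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
                                               free_work);
          int i;

          /* policy data accessed by pd_stat_fn() is released here */
          for (i = 0; i < BLKCG_MAX_POLS; i++)
                  if (blkg->pd[i])
                          blkcg_policy[i]->pd_free_fn(blkg->pd[i]);

          free_percpu(blkg->iostat_cpu);
          /* blkg->iostat is embedded in the blkg and goes away with it */
          kfree(blkg);
  }

Without serialization against this path, blkcg_print_one_stat() can end up
dereferencing freed memory.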

Holding blkcg->lock guarantees that the iterated blkgs are still online,
and that neither blkg->iostat nor the policy data of activated policies
will be freed under the reader.
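
For context, guard(spinlock)() is the scoped lock helper defined via
<linux/cleanup.h>; the new loop is roughly equivalent to the following
(a sketch of the resulting semantics, not of the macro expansion):

  spin_lock(&blkcg->lock);
  hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node)
          blkcg_print_one_stat(blkg, sf);
  /* the guard drops the lock automatically when the scope is left */
  spin_unlock(&blkcg->lock);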

This prepares for converting the protection of blkgs from the
request_queue's queue_lock to a mutex.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-cgroup.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index f93de34fe87d..0f6039d468a6 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1242,13 +1242,10 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
 	else
 		css_rstat_flush(&blkcg->css);
 
-	rcu_read_lock();
-	hlist_for_each_entry_rcu(blkg, &blkcg->blkg_list, blkcg_node) {
-		spin_lock_irq(&blkg->q->queue_lock);
+	guard(spinlock)(&blkcg->lock);
+	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node)
 		blkcg_print_one_stat(blkg, sf);
-		spin_unlock_irq(&blkg->q->queue_lock);
-	}
-	rcu_read_unlock();
+
 	return 0;
 }
 
-- 
2.51.0