From: Li RongQing <lirongqing@baidu.com>
Since spin_lock_irq() already disables preemption and task_css_set()
is protected by css_set_lock, the rcu_read_lock() calls are unnecessary
within the critical section. Remove them to simplify the code.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
kernel/cgroup/cgroup.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 312c6a8..db9e00a 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2944,14 +2944,12 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
 
 	/* look up all src csets */
 	spin_lock_irq(&css_set_lock);
-	rcu_read_lock();
 	task = leader;
 	do {
 		cgroup_migrate_add_src(task_css_set(task), dst_cgrp, &mgctx);
 		if (!threadgroup)
 			break;
 	} while_each_thread(leader, task);
-	rcu_read_unlock();
 	spin_unlock_irq(&css_set_lock);
 
 	/* prepare dst csets and commit */
--
2.9.4
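[Editorial context: why css_set_lock alone keeps the dereference legal. task_css_set() is built on rcu_dereference_check() with a lockdep expression that explicitly accepts css_set_lock as an alternative to an RCU read-side critical section. Below is a paraphrased sketch adapted from include/linux/cgroup.h; the exact condition list varies across kernel versions.]

	/* Paraphrased from include/linux/cgroup.h; details vary by version. */
	#define task_css_set_check(task, __c)				\
		rcu_dereference_check((task)->cgroups,			\
			rcu_read_lock_sched_held() ||			\
			lockdep_is_held(&cgroup_mutex) ||		\
			lockdep_is_held(&css_set_lock) ||		\
			((task)->flags & PF_EXITING) || (__c))

Because lockdep_is_held(&css_set_lock) is one of the accepted conditions, dereferencing task->cgroups under spin_lock_irq(&css_set_lock) raises no RCU-lockdep warning even without an explicit rcu_read_lock().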
Hello RongQing.

On Fri, Aug 15, 2025 at 05:14:30PM +0800, lirongqing <lirongqing@baidu.com> wrote:
> From: Li RongQing <lirongqing@baidu.com>
>
> Since spin_lock_irq() already disables preemption and task_css_set()
> is protected by css_set_lock, the rcu_read_lock() calls are unnecessary
> within the critical section. Remove them to simplify the code.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>

So there is some inconsistency between cgroup_migrate() and
cgroup_attach_task() (see also 674b745e22b3c ("cgroup: remove
rcu_read_lock()/rcu_read_unlock() in critical section of
spin_lock_irq()")) -- that'd warrant unification. Have you spotted
other instances of this?

The RCU lock is there not only because of task_css_set() but also for
while_each_thread(). I'd slightly prefer honoring the advice from Paul
[1] and keeping a redundant rcu_read_lock() for more robustness to
reworks; I'm not convinced this simplification has other visible
benefits.

Thanks,
Michal

[1] https://lore.kernel.org/all/20220107213612.GQ4202@paulmck-ThinkPad-P17-Gen-1/
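[Editorial context: Paul's advice [1] rests on rcu_read_lock()/rcu_read_unlock() being essentially free, so a technically redundant pair costs nothing while guarding against future locking reworks. A rough sketch of the two read-side entry points, paraphrased from the RCU internals; this is not the literal kernel source, and details differ across configs and versions.]

	/* Paraphrased; not the literal kernel source. */
	#ifdef CONFIG_PREEMPT_RCU
	static inline void __rcu_read_lock(void)
	{
		current->rcu_read_lock_nesting++;	/* non-atomic, per-task */
		barrier();
	}
	#else
	static inline void __rcu_read_lock(void)
	{
		preempt_disable();	/* just a barrier() on !CONFIG_PREEMPT */
	}
	#endif

Either way the cost is at most a plain per-task increment, which is why keeping the redundant pair for documentation and robustness is cheap.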
On 2025/8/15 17:14, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
>
> Since spin_lock_irq() already disables preemption and task_css_set()
> is protected by css_set_lock, the rcu_read_lock() calls are unnecessary
> within the critical section. Remove them to simplify the code.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  kernel/cgroup/cgroup.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 312c6a8..db9e00a 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -2944,14 +2944,12 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
>
>  	/* look up all src csets */
>  	spin_lock_irq(&css_set_lock);
> -	rcu_read_lock();
>  	task = leader;
>  	do {
>  		cgroup_migrate_add_src(task_css_set(task), dst_cgrp, &mgctx);
>  		if (!threadgroup)
>  			break;
>  	} while_each_thread(leader, task);
> -	rcu_read_unlock();
>  	spin_unlock_irq(&css_set_lock);
>
>  	/* prepare dst csets and commit */

LGTM

--
Best regards,
Ridong