A full memory barrier in the RCU-PREEMPT task unblock path is advertised
to order the context switch (or rather the accesses prior to
rcu_read_unlock()) with the expedited grace period fastpath.
However the grace period cannot complete without this path calling into
rcu_report_exp_rnp() with the node locked. This reports the quiescent
state in a fully ordered fashion against the updater's accesses thanks to:
1) The READ-SIDE smp_mb__after_unlock_lock() barrier across node
locking while propagating the QS up to the root.
2) The UPDATE-SIDE smp_mb__after_unlock_lock() barrier while holding the
root rnp to wait/check for the GP completion.
3) The (perhaps redundant given steps 1) and 2)) smp_mb() in rcu_seq_end()
before the grace period completes.
This makes the explicit barrier in this place superfluous. Therefore
remove it as it is confusing.
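For illustration, the ordering chain above can be modeled by the following
minimal userspace sketch. This is not the actual tree.c code: the
node_lock, exp_done and reader_data names and the two-thread structure are
invented for this example, a pthread mutex stands in for the rcu_node
lock, and C11 seq_cst fences stand in for smp_mb__after_unlock_lock().
The point is that once the updater observes the reported QS under the
node lock, the lock ordering plus the full barriers guarantee that it
also observes the reader's prior accesses, with no extra smp_mb() needed
in the unblock path:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t node_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for rnp->lock */
static atomic_int reader_data;	/* models the accesses prior to rcu_read_unlock() */
static atomic_int exp_done;	/* models the expedited QS/GP-done state */

static void *reader(void *arg)
{
	(void)arg;

	/* Access inside the RCU read-side critical section. */
	atomic_store_explicit(&reader_data, 1, memory_order_relaxed);

	/*
	 * Models rcu_report_exp_rnp(): the QS is reported with the node
	 * locked, the fence after locking standing in for
	 * smp_mb__after_unlock_lock() (step 1 above).
	 */
	pthread_mutex_lock(&node_lock);
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store_explicit(&exp_done, 1, memory_order_relaxed);
	pthread_mutex_unlock(&node_lock);
	return NULL;
}

static void *updater(void *arg)
{
	int done, data;

	(void)arg;

	/*
	 * Models the updater checking sync_rcu_exp_done() under the node
	 * lock, again with a full barrier after locking (step 2 above).
	 */
	pthread_mutex_lock(&node_lock);
	atomic_thread_fence(memory_order_seq_cst);
	done = atomic_load_explicit(&exp_done, memory_order_relaxed);
	pthread_mutex_unlock(&node_lock);

	data = atomic_load_explicit(&reader_data, memory_order_relaxed);
	if (done && !data)
		printf("ordering violated\n");	/* must never fire */
	return NULL;
}

int main(void)
{
	pthread_t r, u;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&u, NULL, updater, NULL);
	pthread_join(r, NULL);
	pthread_join(u, NULL);
	return 0;
}

Build with "cc -pthread"; whatever the interleaving, the "ordering
violated" message can never appear, which is the property the removed
smp_mb() was (redundantly) trying to provide.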
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/rcu/tree_plugin.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 3c0bbbbb686f..d51cc7a5dfc7 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -534,7 +534,6 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
WARN_ON_ONCE(rnp->completedqs == rnp->gp_seq &&
(!empty_norm || rnp->qsmask));
empty_exp = sync_rcu_exp_done(rnp);
- smp_mb(); /* ensure expedited fastpath sees end of RCU c-s. */
np = rcu_next_node_entry(t, rnp);
list_del_init(&t->rcu_node_entry);
t->rcu_blocked_node = NULL;
--
2.48.1
On Fri, Mar 14, 2025 at 03:36:39PM +0100, Frederic Weisbecker wrote:
> A full memory barrier in the RCU-PREEMPT task unblock path is advertised
> to order the context switch (or rather the accesses prior to
> rcu_read_unlock()) with the expedited grace period fastpath.
[...]
> This makes the explicit barrier in this place superfluous. Therefore
> remove it as it is confusing.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Still cannot see a problem with this, but still a bit nervous.

Acked-by: Paul E. McKenney <paulmck@kernel.org>
On Tue, Mar 18, 2025 at 10:18:12AM -0700, Paul E. McKenney wrote:
> On Fri, Mar 14, 2025 at 03:36:39PM +0100, Frederic Weisbecker wrote:
> [...]
> > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
>
> Still cannot see a problem with this, but still a bit nervous.

Where is the challenge in life if we manage to fall asleep within a minute
at bedtime?

> Acked-by: Paul E. McKenney <paulmck@kernel.org>

Thanks!
On Wed, Mar 19, 2025 at 10:01:36AM +0100, Frederic Weisbecker wrote:
> On Tue, Mar 18, 2025 at 10:18:12AM -0700, Paul E. McKenney wrote:
> [...]
> > Still cannot see a problem with this, but still a bit nervous.
>
> Where is the challenge in life if we manage to fall asleep within a minute
> at bedtime?

;-) ;-) ;-)

Suppose that there was an issue with this that we are somehow not
spotting.  How would you go about debugging it?

							Thanx, Paul

> > Acked-by: Paul E. McKenney <paulmck@kernel.org>
>
> Thanks!