An invalid pointer dereference bug was reported on arm64 CPUs; it has
not yet been seen on x86. A partial oops looks like:
Call trace:
update_cfs_rq_h_load+0x80/0xb0
wake_affine+0x158/0x168
select_task_rq_fair+0x364/0x3a8
try_to_wake_up+0x154/0x648
wake_up_q+0x68/0xd0
futex_wake_op+0x280/0x4c8
do_futex+0x198/0x1c0
__arm64_sys_futex+0x11c/0x198
Link: https://lore.kernel.org/all/20251013071820.1531295-1-CruzZhao@linux.alibaba.com/
We found that the task_group corresponding to the problematic se
is not in the parent task_group’s children list, indicating that
h_load_next points to an invalid address. Consider the following
cgroup and task hierarchy:
      A
     / \
    /   \
   B     E
  / \    |
 /   \   t2
C     D
|     |
t0    t1
Here follows a timing sequence that may be responsible for triggering
the problem:
CPU X           CPU Y                   CPU Z
                wakeup t0
                set list A->B->C
                traverse A->B->C
t0 exits
destroy C
                wakeup t2
                set list A->E           wakeup t1
                                        set list A->B->D
                traverse A->B->C
                panic
CPU Z sets the ->h_load_next list to A->B->D, but due to arm64's weaker
memory ordering, CPU Y may observe A->B before it sees B->D; in this
window it can traverse A->B->C and reach an invalid se.

We can avoid such stale pointer accesses by clearing ->h_load_next as
the list is traversed, so that a racing traversal breaks out early
instead of following a stale entry.
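For reference, update_cfs_rq_h_load() publishes the path with an up-walk
and consumes it with a down-walk; below is a simplified sketch of the
mainline function (the last_h_load_update caching is omitted), with the
change this patch makes marked:

        static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
        {
                struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
                unsigned long load;

                /* Up-walk: publish the path from the root down to cfs_rq. */
                WRITE_ONCE(cfs_rq->h_load_next, NULL);
                for_each_sched_entity(se) {
                        cfs_rq = cfs_rq_of(se);
                        WRITE_ONCE(cfs_rq->h_load_next, se);
                }

                if (!se)
                        cfs_rq->h_load = cfs_rq_load_avg(cfs_rq);

                /* Down-walk: follow the published path, scaling h_load. */
                while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
                        WRITE_ONCE(cfs_rq->h_load_next, NULL); /* this patch */
                        load = cfs_rq->h_load;
                        load = div64_ul(load * se->avg.load_avg,
                                        cfs_rq_load_avg(cfs_rq) + 1);
                        cfs_rq = group_cfs_rq(se);
                        cfs_rq->h_load = load;
                }
        }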
Fixes: 685207963be9 ("sched: Move h_load calculation to task_h_load()")
Cc: <stable@vger.kernel.org>
Co-developed-by: Cruz Zhao <CruzZhao@linux.alibaba.com>
Signed-off-by: Cruz Zhao <CruzZhao@linux.alibaba.com>
Signed-off-by: Peng Wang <peng_wang@linux.alibaba.com>
---
kernel/sched/fair.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc0b7ce8a65d..da7baba35e60 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9847,6 +9847,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 	}
 
 	while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
+		WRITE_ONCE(cfs_rq->h_load_next, NULL);
 		load = cfs_rq->h_load;
 		load = div64_ul(load * se->avg.load_avg,
 				cfs_rq_load_avg(cfs_rq) + 1);
--
2.27.0
On Wed, Oct 15, 2025 at 08:19:50PM +0800, Peng Wang wrote:
> We found that the task_group corresponding to the problematic se
> is not in the parent task_group’s children list, indicating that
> h_load_next points to an invalid address. Consider the following
> cgroup and task hierarchy:
>
>       A
>      / \
>     /   \
>    B     E
>   / \    |
>  /   \   t2
> C     D
> |     |
> t0    t1
>
> Here follows a timing sequence that may be responsible for triggering
> the problem:
>
> CPU X           CPU Y                   CPU Z
>                 wakeup t0
>                 set list A->B->C
>                 traverse A->B->C
> t0 exits
> destroy C
>                 wakeup t2
>                 set list A->E           wakeup t1
>                                         set list A->B->D
>                 traverse A->B->C
>                 panic
>
> CPU Z sets ->h_load_next list to A->B->D, but due to arm64 weaker memory
> ordering, Y may observe A->B before it sees B->D, then in this time window,
> it can traverse A->B->C and reach an invalid se.
Hmm, I rather think we should ensure update_cfs_rq_h_load() is
serialized against unregister_fair_sched_group().
And I'm thinking that really shouldn't be hard; note how
sched_unregister_group() already has an RCU grace period. So all we need
to ensure is that task_h_load() is called in a context that stops RCU
grace periods (rcu_read_lock(), preempt_disable(), local_irq_disable(),
local_bh_disable()).
A very quick scan makes me think at the very least the usage in
task_numa_migrate()
task_numa_find_cpu()
task_h_load()
fails here; probably more.
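Something like this at the affected call sites would suffice (sketch
only; 'p' being the task whose load is queried):

	rcu_read_lock();
	load = task_h_load(p);
	rcu_read_unlock();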
On Wed, Oct 15, 2025 at 02:44:22PM +0200, Peter Zijlstra wrote:
> On Wed, Oct 15, 2025 at 08:19:50PM +0800, Peng Wang wrote:
>
> > We found that the task_group corresponding to the problematic se
> > is not in the parent task_group’s children list, indicating that
> > h_load_next points to an invalid address. Consider the following
> > cgroup and task hierarchy:
> >
> >       A
> >      / \
> >     /   \
> >    B     E
> >   / \    |
> >  /   \   t2
> > C     D
> > |     |
> > t0    t1
> >
> > Here follows a timing sequence that may be responsible for triggering
> > the problem:
> >
> > CPU X           CPU Y                   CPU Z
> >                 wakeup t0
> >                 set list A->B->C
> >                 traverse A->B->C
> > t0 exits
> > destroy C
> >                 wakeup t2
> >                 set list A->E           wakeup t1
> >                                         set list A->B->D
> >                 traverse A->B->C
> >                 panic
> >
> > CPU Z sets ->h_load_next list to A->B->D, but due to arm64 weaker memory
> > ordering, Y may observe A->B before it sees B->D, then in this time window,
> > it can traverse A->B->C and reach an invalid se.
>
> Hmm, I rather think we should ensure update_cfs_rq_h_load() is
> serialized against unregister_fair_sched_group().
I might be mistaken, but it seems that, even with RCU protection around
update_cfs_rq_h_load(), there remains a risk of reading stale values.
CPU X           CPU Y                   CPU Z
                wakeup t0
                rcu_read_lock()
                set list A->B->C
                traverse A->B->C
                rcu_read_unlock()
t0 exits
destroy C
After the prior RCU grace period has elapsed, C has already been reclaimed,
yet the stale A->B->C remains.
                wakeup t2
                rcu_read_lock()
                set list A->E           wakeup t1
                                        rcu_read_lock()
                                        set list A->B->D
                ...
                traverse A->B->C
                panic
A subsequent rcu_read_lock() only guarantees that A/B/D/E will not be
reclaimed while the list is being traversed; C had already been freed
before the next grace period even began.
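Reduced to a litmus-style sketch (A/B being the cfs_rq's and se_* the
sched entities from the diagram above; no barriers pair the two sides):

	/* CPU Z: up-walk for t1's wakeup; stores the child link first */
	WRITE_ONCE(B->h_load_next, se_D);
	WRITE_ONCE(A->h_load_next, se_B);

	/* CPU Y: down-walk; nothing forces Z's stores to be seen in order */
	se = READ_ONCE(A->h_load_next);	/* observes the new se_B */
	...
	se = READ_ONCE(B->h_load_next);	/* may still observe the stale se_C */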
>
> And I'm thinking that really shouldn't be hard; note how
> sched_unregister_group() already has an RCU grace period. So all we need
> to ensure is that task_h_load() is called in a context that stops RCU
> grace periods (rcu_read_lock(), preempt_disable(), local_irq_disable(),
> local_bh_disable()).
>
> A very quick scan makes me think at the very least the usage in
>
> task_numa_migrate()
> task_numa_find_cpu()
> task_h_load()
>
> fails here; probably more.
On Thu, Oct 16, 2025 at 11:06:17AM +0800, Peng Wang wrote:
> On Wed, Oct 15, 2025 at 02:44:22PM +0200, Peter Zijlstra wrote:
> > On Wed, Oct 15, 2025 at 08:19:50PM +0800, Peng Wang wrote:
> >
> > > We found that the task_group corresponding to the problematic se
> > > is not in the parent task_group’s children list, indicating that
> > > h_load_next points to an invalid address. Consider the following
> > > cgroup and task hierarchy:
> > >
> > >       A
> > >      / \
> > >     /   \
> > >    B     E
> > >   / \    |
> > >  /   \   t2
> > > C     D
> > > |     |
> > > t0    t1
> > >
> > > Here follows a timing sequence that may be responsible for triggering
> > > the problem:
> > >
> > > CPU X           CPU Y                   CPU Z
> > >                 wakeup t0
> > >                 set list A->B->C
> > >                 traverse A->B->C
> > > t0 exits
> > > destroy C
> > >                 wakeup t2
> > >                 set list A->E           wakeup t1
> > >                                         set list A->B->D
> > >                 traverse A->B->C
> > >                 panic
> > >
> > > CPU Z sets ->h_load_next list to A->B->D, but due to arm64 weaker memory
> > > ordering, Y may observe A->B before it sees B->D, then in this time window,
> > > it can traverse A->B->C and reach an invalid se.
> >
> > Hmm, I rather think we should ensure update_cfs_rq_h_load() is
> > serialized against unregister_fair_sched_group().
>
> I might be mistaken, but it seems that, even with RCU protection around
> update_cfs_rq_h_load(), there remains a risk of reading stale values.
>
> CPU X           CPU Y                   CPU Z
>                 wakeup t0
>                 rcu_read_lock()
>                 set list A->B->C
>                 traverse A->B->C
>                 rcu_read_unlock()
> t0 exits
> destroy C
>
> After the prior RCU grace period has elapsed, C has already been reclaimed,
> yet the stale A->B->C remains.
>
>                 wakeup t2
>                 rcu_read_lock()
>                 set list A->E           wakeup t1
>                                         rcu_read_lock()
>                                         set list A->B->D
>                 ...
>                 traverse A->B->C
>                 panic
>
> A subsequent rcu_read_lock() only guarantees that A/B/D/E will not be
> reclaimed while the list is being traversed; C had already been freed
> before the next grace period even began.

FWIW, I've caught arm64 machines running into this problem recently on
6.x kernels. These particular systems are small enough that they have
just a single memory node and no NUMA balancing enabled.

Would the scheduling experts be willing to consider picking up Peng's
fix while the 6.18 release is still open for bug fixes?

-K
On Wed, 15 Oct 2025 at 14:44, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Oct 15, 2025 at 08:19:50PM +0800, Peng Wang wrote:
>
> > We found that the task_group corresponding to the problematic se
> > is not in the parent task_group’s children list, indicating that
> > h_load_next points to an invalid address. Consider the following
> > cgroup and task hierarchy:
> >
> >       A
> >      / \
> >     /   \
> >    B     E
> >   / \    |
> >  /   \   t2
> > C     D
> > |     |
> > t0    t1
> >
> > Here follows a timing sequence that may be responsible for triggering
> > the problem:
> >
> > CPU X           CPU Y                   CPU Z
> >                 wakeup t0
> >                 set list A->B->C
> >                 traverse A->B->C
> > t0 exits
> > destroy C
> >                 wakeup t2
> >                 set list A->E           wakeup t1
> >                                         set list A->B->D
> >                 traverse A->B->C
> >                 panic
> >
> > CPU Z sets ->h_load_next list to A->B->D, but due to arm64 weaker memory
> > ordering, Y may observe A->B before it sees B->D, then in this time window,
> > it can traverse A->B->C and reach an invalid se.
>
> Hmm, I rather think we should ensure update_cfs_rq_h_load() is
> serialized against unregister_fair_sched_group().
The bug has been reported against v5.10, which probably doesn't have the
fix done "recently":
commit b027789e5e50 ("sched/fair: Prevent dead task groups from
regaining cfs_rq's")
>
> And I'm thinking that really shouldn't be hard; note how
> sched_unregister_group() already has an RCU grace period. So all we need
> to ensure is that task_h_load() is called in a context that stops RCU
> grace periods (rcu_read_lock(), preempt_disable(), local_irq_disable(),
> local_bh_disable()).
>
> A very quick scan makes me think at the very least the usage in
>
> task_numa_migrate()
> task_numa_find_cpu()
> task_h_load()
>
> fails here; probably more.
On Wed, Oct 15, 2025 at 03:14:37PM +0200, Vincent Guittot wrote:
> On Wed, 15 Oct 2025 at 14:44, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Wed, Oct 15, 2025 at 08:19:50PM +0800, Peng Wang wrote:
> >
> > > We found that the task_group corresponding to the problematic se
> > > is not in the parent task_group’s children list, indicating that
> > > h_load_next points to an invalid address. Consider the following
> > > cgroup and task hierarchy:
> > >
> > >       A
> > >      / \
> > >     /   \
> > >    B     E
> > >   / \    |
> > >  /   \   t2
> > > C     D
> > > |     |
> > > t0    t1
> > >
> > > Here follows a timing sequence that may be responsible for triggering
> > > the problem:
> > >
> > > CPU X           CPU Y                   CPU Z
> > >                 wakeup t0
> > >                 set list A->B->C
> > >                 traverse A->B->C
> > > t0 exits
> > > destroy C
> > >                 wakeup t2
> > >                 set list A->E           wakeup t1
> > >                                         set list A->B->D
> > >                 traverse A->B->C
> > >                 panic
> > >
> > > CPU Z sets ->h_load_next list to A->B->D, but due to arm64 weaker memory
> > > ordering, Y may observe A->B before it sees B->D, then in this time window,
> > > it can traverse A->B->C and reach an invalid se.
> >
> > Hmm, I rather think we should ensure update_cfs_rq_h_load() is
> > serialized against unregister_fair_sched_group().
>
> The bug has been reported against v5.10, which probably doesn't have the
> fix done "recently":
> commit b027789e5e50 ("sched/fair: Prevent dead task groups from
> regaining cfs_rq's")
Hi, Vincent and Peter,
We have already integrated this commit, but the bug persists.
Do you think we should explicitly clear the h_load_next list?
Even though update_cfs_rq_h_load runs under an RCU lock, ARM's
weak memory ordering could still allow readers to observe stale
values in the list.
>
> >
> > And I'm thinking that really shouldn't be hard; note how
> > sched_unregister_group() already has an RCU grace period. So all we need
> > to ensure is that task_h_load() is called in a context that stops RCU
> > grace periods (rcu_read_lock(), preempt_disable(), local_irq_disable(),
> > local_bh_disable()).
> >
> > A very quick scan makes me think at the very least the usage in
> >
> > task_numa_migrate()
> > task_numa_find_cpu()
> > task_h_load()
> >
> > fails here; probably more.
On Wed, 22 Oct 2025 at 11:00, Peng Wang <peng_wang@linux.alibaba.com> wrote:
>
> On Wed, Oct 15, 2025 at 03:14:37PM +0200, Vincent Guittot wrote:
> > On Wed, 15 Oct 2025 at 14:44, Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > On Wed, Oct 15, 2025 at 08:19:50PM +0800, Peng Wang wrote:
> > >
> > > > We found that the task_group corresponding to the problematic se
> > > > is not in the parent task_group’s children list, indicating that
> > > > h_load_next points to an invalid address. Consider the following
> > > > cgroup and task hierarchy:
> > > >
> > > >       A
> > > >      / \
> > > >     /   \
> > > >    B     E
> > > >   / \    |
> > > >  /   \   t2
> > > > C     D
> > > > |     |
> > > > t0    t1
> > > >
> > > > Here follows a timing sequence that may be responsible for triggering
> > > > the problem:
> > > >
> > > > CPU X           CPU Y                   CPU Z
> > > >                 wakeup t0
> > > >                 set list A->B->C
> > > >                 traverse A->B->C
> > > > t0 exits
> > > > destroy C
> > > >                 wakeup t2
> > > >                 set list A->E           wakeup t1
> > > >                                         set list A->B->D
> > > >                 traverse A->B->C
> > > >                 panic
> > > >
> > > > CPU Z sets ->h_load_next list to A->B->D, but due to arm64 weaker memory
> > > > ordering, Y may observe A->B before it sees B->D, then in this time window,
> > > > it can traverse A->B->C and reach an invalid se.
> > >
> > > Hmm, I rather think we should ensure update_cfs_rq_h_load() is
> > > serialized against unregister_fair_sched_group().
> >
> > The bug has been reported against v5.10, which probably doesn't have the
> > fix done "recently":
> > commit b027789e5e50 ("sched/fair: Prevent dead task groups from
> > regaining cfs_rq's")
>
> Hi, Vincent and Peter,
>
> We have already integrated this commit, but the bug persists.
>
> Do you think we should explicitly clear the h_load_next list?
>
> Even though update_cfs_rq_h_load runs under an RCU lock, ARM's
> weak memory ordering could still allow readers to observe stale
> values in the list.
I'm worried about the increase in cache contention from doing this write
on every traversal.

Could we check cfs_rq->h_load_next and clear it if needed in
unregister_fair_sched_group() instead?
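i.e. roughly the following in unregister_fair_sched_group()'s per-cpu
loop (untested sketch):

	struct cfs_rq *parent_cfs_rq = cfs_rq_of(se);

	if (READ_ONCE(parent_cfs_rq->h_load_next) == se)
		WRITE_ONCE(parent_cfs_rq->h_load_next, NULL);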
>
> >
> > >
> > > And I'm thinking that really shouldn't be hard; note how
> > > sched_unregister_group() already has an RCU grace period. So all we need
> > > to ensure is that task_h_load() is called in a context that stops RCU
> > > grace periods (rcu_read_lock(), preempt_disable(), local_irq_disable(),
> > > local_bh_disable()).
> > >
> > > A very quick scan makes me think at the very least the usage in
> > >
> > > task_numa_migrate()
> > > task_numa_find_cpu()
> > > task_h_load()
> > >
> > > fails here; probably more.
An invalid pointer dereference bug was reported on arm64 CPUs; it has
not yet been seen on x86. A partial oops looks like:
Call trace:
update_cfs_rq_h_load+0x80/0xb0
wake_affine+0x158/0x168
select_task_rq_fair+0x364/0x3a8
try_to_wake_up+0x154/0x648
wake_up_q+0x68/0xd0
futex_wake_op+0x280/0x4c8
do_futex+0x198/0x1c0
__arm64_sys_futex+0x11c/0x198
Link: https://lore.kernel.org/all/20251013071820.1531295-1-CruzZhao@linux.alibaba.com/
We found that the task_group corresponding to the problematic se
is not in the parent task_group’s children list, indicating that
h_load_next points to an invalid address. Consider the following
cgroup and task hierarchy:
      A
     / \
    /   \
   B     E
  / \    |
 /   \   t2
C     D
|     |
t0    t1
Here follows a timing sequence that may be responsible for triggering
the problem:
CPU X           CPU Y                   CPU Z
                wakeup t0
                set list A->B->C
                traverse A->B->C
t0 exits
destroy C
                wakeup t2
                set list A->E           wakeup t1
                                        set list A->B->D
                traverse A->B->C
                panic
CPU Z sets the ->h_load_next list to A->B->D, but due to arm64's weaker
memory ordering, CPU Y may observe A->B before it sees B->D; in this
window it can traverse A->B->C and reach an invalid se.

We can avoid such stale pointer accesses by clearing ->h_load_next when
unregistering the cgroup.
Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Fixes: 685207963be9 ("sched: Move h_load calculation to task_h_load()")
Cc: <stable@vger.kernel.org>
Co-developed-by: Cruz Zhao <CruzZhao@linux.alibaba.com>
Signed-off-by: Cruz Zhao <CruzZhao@linux.alibaba.com>
Signed-off-by: Peng Wang <peng_wang@linux.alibaba.com>
---
kernel/sched/fair.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cee1793e8277..a5fce15093d3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13427,6 +13427,14 @@ void unregister_fair_sched_group(struct task_group *tg)
 			list_del_leaf_cfs_rq(cfs_rq);
 		}
 		remove_entity_load_avg(se);
+		/*
+		 * Clear parent's h_load_next if it points to the
+		 * sched_entity being freed to avoid stale pointer.
+		 */
+		struct cfs_rq *parent_cfs_rq = cfs_rq_of(se);
+
+		if (READ_ONCE(parent_cfs_rq->h_load_next) == se)
+			WRITE_ONCE(parent_cfs_rq->h_load_next, NULL);
 	}
 
 	/*
--
2.27.0
On Wed, Oct 15, 2025 at 03:14:37PM +0200, Vincent Guittot wrote:
> On Wed, 15 Oct 2025 at 14:44, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Wed, Oct 15, 2025 at 08:19:50PM +0800, Peng Wang wrote:
> >
> > > We found that the task_group corresponding to the problematic se
> > > is not in the parent task_group's children list, indicating that
> > > h_load_next points to an invalid address. Consider the following
> > > cgroup and task hierarchy:
> > >
> > >       A
> > >      / \
> > >     /   \
> > >    B     E
> > >   / \    |
> > >  /   \   t2
> > > C     D
> > > |     |
> > > t0    t1
> > >
> > > Here follows a timing sequence that may be responsible for triggering
> > > the problem:
> > >
> > > CPU X           CPU Y                   CPU Z
> > >                 wakeup t0
> > >                 set list A->B->C
> > >                 traverse A->B->C
> > > t0 exits
> > > destroy C
> > >                 wakeup t2
> > >                 set list A->E           wakeup t1
> > >                                         set list A->B->D
> > >                 traverse A->B->C
> > >                 panic
> > >
> > > CPU Z sets ->h_load_next list to A->B->D, but due to arm64 weaker memory
> > > ordering, Y may observe A->B before it sees B->D, then in this time window,
> > > it can traverse A->B->C and reach an invalid se.
> >
> > Hmm, I rather think we should ensure update_cfs_rq_h_load() is
> > serialized against unregister_fair_sched_group().
>
> The bug has been reported against v5.10, which probably doesn't have the
> fix done "recently":
> commit b027789e5e50 ("sched/fair: Prevent dead task groups from
> regaining cfs_rq's")
Yeah, but nobody is going to develop against that ancient thing. So the
above is just one more patch that would need to be backported.