for_each_node_numadist() can lead to hard lockups on kernels built
without CONFIG_NUMA. For instance, the following was triggered by
sched_ext:
watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 5
...
RIP: 0010:_find_first_and_bit+0x8/0x60
...
Call Trace:
<TASK>
cpumask_any_and_distribute+0x49/0x80
pick_idle_cpu_in_node+0xcf/0x140
scx_bpf_pick_idle_cpu_node+0xaa/0x110
bpf_prog_16ee5b1f077af006_pick_idle_cpu+0x57f/0x5de
bpf_prog_df2ce5cfac58ce09_bpfland_select_cpu+0x37/0xf4
bpf__sched_ext_ops_select_cpu+0x4b/0xb3
This happens because nearest_node_nodemask() always returns NUMA_NO_NODE
(-1) when CONFIG_NUMA is disabled, causing the loop to never terminate,
as the condition node >= MAX_NUMNODES is never satisfied.
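For reference, a rough sketch of the !CONFIG_NUMA behavior described above
(paraphrased for illustration, not quoted verbatim from the headers):

/*
 * Without CONFIG_NUMA there is no distance information, so the lookup
 * always fails (paraphrased stub):
 */
static inline int nearest_node_nodemask(int node, nodemask_t *mask)
{
	return NUMA_NO_NODE;	/* -1 */
}

/*
 * The iterator's continue condition is "(node) < MAX_NUMNODES".  With
 * node == NUMA_NO_NODE == -1 that comparison is always true, and every
 * subsequent nearest_node_nodemask() call returns -1 again, so the loop
 * can never reach its exit condition.
 */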
Prevent this by handling NUMA_NO_NODE explicitly in the exit condition.
Fixes: f09177ca5f242 ("sched/topology: Introduce for_each_node_numadist() iterator")
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
include/linux/topology.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/topology.h b/include/linux/topology.h
index cd6b4bdc9cfd3..095cda6dbf041 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -310,7 +310,7 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
 #define for_each_node_numadist(node, unvisited)				\
 	for (int __start = (node),					\
 	     (node) = nearest_node_nodemask((__start), &(unvisited));	\
-	     (node) < MAX_NUMNODES;					\
+	     (node) < MAX_NUMNODES && (node) != NUMA_NO_NODE;		\
 	     node_clear((node), (unvisited)),				\
 	     (node) = nearest_node_nodemask((__start), &(unvisited)))
--
2.49.0
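For context, a sketch of how a caller would typically use this iterator;
try_pick_idle_cpu_on_node() and start_node below are illustrative
placeholders, not the actual sched_ext call sites:

/*
 * Illustrative only: visit candidate nodes from nearest to farthest
 * from start_node, stopping at the first node that yields an idle CPU.
 */
nodemask_t unvisited = node_states[N_ONLINE];	/* nodes still to try */
int node = start_node;				/* measure distance from here */
int cpu = -1;

for_each_node_numadist(node, unvisited) {
	cpu = try_pick_idle_cpu_on_node(node);	/* hypothetical helper */
	if (cpu >= 0)
		break;
}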
On Tue, Jun 03, 2025 at 10:04:02AM +0200, Andrea Righi wrote:
> for_each_node_numadist() can lead to hard lockups on kernels built
> without CONFIG_NUMA. For instance, the following was triggered by
> sched_ext:
>
> watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 5
> ...
> RIP: 0010:_find_first_and_bit+0x8/0x60
> ...
> Call Trace:
> <TASK>
> cpumask_any_and_distribute+0x49/0x80
> pick_idle_cpu_in_node+0xcf/0x140
> scx_bpf_pick_idle_cpu_node+0xaa/0x110
> bpf_prog_16ee5b1f077af006_pick_idle_cpu+0x57f/0x5de
> bpf_prog_df2ce5cfac58ce09_bpfland_select_cpu+0x37/0xf4
> bpf__sched_ext_ops_select_cpu+0x4b/0xb3
>
> This happens because nearest_node_nodemask() always returns NUMA_NO_NODE
> (-1) when CONFIG_NUMA is disabled, causing the loop to never terminate,
> as the condition node >= MAX_NUMNODES is never satisfied.
>
> Prevent this by handling NUMA_NO_NODE explicitly in the exit condition.
>
> Fixes: f09177ca5f242 ("sched/topology: Introduce for_each_node_numadist() iterator")
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> ---
> include/linux/topology.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index cd6b4bdc9cfd3..095cda6dbf041 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -310,7 +310,7 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> #define for_each_node_numadist(node, unvisited) \
> for (int __start = (node), \
> (node) = nearest_node_nodemask((__start), &(unvisited)); \
> - (node) < MAX_NUMNODES; \
> + (node) < MAX_NUMNODES && (node) != NUMA_NO_NODE; \
> node_clear((node), (unvisited)), \
> (node) = nearest_node_nodemask((__start), &(unvisited)))
When NUMA is enabled, you add an extra conditional which is never
true.
When disabled, I think this macro should not be available, or more
likely have a stub implementation, similar to for_each_node_mask()
Thanks,
Yury
Hi Yury,
On Wed, Jun 04, 2025 at 10:13:21AM -0400, Yury Norov wrote:
> On Tue, Jun 03, 2025 at 10:04:02AM +0200, Andrea Righi wrote:
> > for_each_node_numadist() can lead to hard lockups on kernels built
> > without CONFIG_NUMA. For instance, the following was triggered by
> > sched_ext:
> >
> > watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 5
> > ...
> > RIP: 0010:_find_first_and_bit+0x8/0x60
> > ...
> > Call Trace:
> > <TASK>
> > cpumask_any_and_distribute+0x49/0x80
> > pick_idle_cpu_in_node+0xcf/0x140
> > scx_bpf_pick_idle_cpu_node+0xaa/0x110
> > bpf_prog_16ee5b1f077af006_pick_idle_cpu+0x57f/0x5de
> > bpf_prog_df2ce5cfac58ce09_bpfland_select_cpu+0x37/0xf4
> > bpf__sched_ext_ops_select_cpu+0x4b/0xb3
> >
> > This happens because nearest_node_nodemask() always returns NUMA_NO_NODE
> > (-1) when CONFIG_NUMA is disabled, causing the loop to never terminate,
> > as the condition node >= MAX_NUMNODES is never satisfied.
> >
> > Prevent this by handling NUMA_NO_NODE explicitly in the exit condition.
> >
> > Fixes: f09177ca5f242 ("sched/topology: Introduce for_each_node_numadist() iterator")
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> > ---
> > include/linux/topology.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > index cd6b4bdc9cfd3..095cda6dbf041 100644
> > --- a/include/linux/topology.h
> > +++ b/include/linux/topology.h
> > @@ -310,7 +310,7 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> > #define for_each_node_numadist(node, unvisited) \
> > for (int __start = (node), \
> > (node) = nearest_node_nodemask((__start), &(unvisited)); \
> > - (node) < MAX_NUMNODES; \
> > + (node) < MAX_NUMNODES && (node) != NUMA_NO_NODE; \
> > node_clear((node), (unvisited)), \
> > (node) = nearest_node_nodemask((__start), &(unvisited)))
>
> When NUMA is enabled, you add an extra conditional which is never
> true.
>
> When disabled, I think this macro should not be available, or more
> likely have a stub implementation, similar to for_each_node_mask()
Makes sense. I like the idea of having a stub implementation; I'll send a
v2 with that.
Thanks!
-Andrea
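For illustration, one possible shape for such a !CONFIG_NUMA stub, loosely
modeled on the single-node for_each_node_mask() pattern; this is only a
sketch, not necessarily what the v2 ends up looking like:

#ifndef CONFIG_NUMA
/*
 * Sketch only: without NUMA there is at most one node, so iterating
 * "by increasing distance" degenerates to visiting node 0 once, if it
 * is set in the mask.  Unlike the CONFIG_NUMA version, this reuses the
 * caller's variable instead of shadowing it.
 */
#define for_each_node_numadist(node, unvisited)				\
	for ((node) = 0;						\
	     (node) == 0 && node_isset(0, (unvisited));			\
	     node_clear(0, (unvisited)))
#endif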