for_each_cpu() is widely used in the kernel, and it's beneficial to have
a NUMA-aware version of the macro.

The recently added for_each_numa_hop_mask() works, but switching the
existing codebase over to it is not an easy process.

This series adds for_each_numa_cpu(), which is designed to look and feel
like for_each_cpu(). Converting existing code to be NUMA-aware is as
simple as adding a hop iterator variable and passing it into the new
macro; for_each_numa_cpu() takes care of the rest.
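For reference, here is a minimal sketch of how such a macro can be
expressed on top of the sched_numa_find_next_cpu() helper introduced
later in this series. The exact kernel-doc and definition live in the
corresponding patch; treat the snippet below as an illustration of the
intended shape rather than the authoritative implementation:

	/*
	 * Sketch: walk the CPUs of @mask in order of increasing NUMA
	 * distance (hop) from @node. The helper returns the next CPU of
	 * @mask at the current hop, bumps @hop once a hop is exhausted,
	 * and returns nr_cpu_ids when all hops are done.
	 */
	#define for_each_numa_cpu(cpu, hop, node, mask)			\
		for ((cpu) = 0, (hop) = 0;				\
		     (cpu) = sched_numa_find_next_cpu((mask), (cpu), (node), &(hop)), \
		     (cpu) < nr_cpu_ids;				\
		     (cpu)++)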
At the moment we have two users of NUMA-aware enumerators. One is
Mellanox's in-tree driver, and the other is Intel's in-review driver:
https://lore.kernel.org/lkml/20230216145455.661709-1-pawel.chmielewski@intel.com/
Both real-life examples follow the same pattern:

	for_each_numa_hop_mask(cpus, prev, node) {
		for_each_cpu_andnot(cpu, cpus, prev) {
			if (cnt++ == max_num)
				goto out;
			do_something(cpu);
		}
		prev = cpus;
	}
With the new macro, the same logic takes a more conventional form:

	for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}
A straight conversion of the existing for_each_cpu() codebase to a
NUMA-aware version with for_each_numa_hop_mask() is difficult, because
that macro doesn't take a user-provided cpumask and eventually ends up
as an open-coded double loop. With for_each_numa_cpu() it shouldn't be
a brainteaser.
Consider the NUMA-ignorant example:
	cpumask_t cpus = get_mask();
	int cnt = 0, cpu;

	for_each_cpu(cpu, cpus) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}
Converting it to the NUMA-aware version is as simple as:

	cpumask_t cpus = get_mask();
	int node = get_node();
	int cnt = 0, hop, cpu;

	for_each_numa_cpu(cpu, hop, node, cpus) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}
The latter is only slightly more verbose, and it avoids open-coding that
annoying double loop. Another advantage is that it exposes a 'hop'
parameter with the clear meaning of NUMA distance, and it doesn't force
people unfamiliar with the enumerator internals to bother with the
current/previous masks machinery.
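Because 'hop' directly encodes relative NUMA distance, callers can also
bound the search by distance instead of (or in addition to) counting
CPUs. A hypothetical example follows; spread_work_near_node() and
do_something() are made-up placeholders, and the RCU locking shown is an
assumption about the iterator's requirements, not something stated in
this cover letter:

	/*
	 * Spread work over online CPUs near @node, stopping once the
	 * iterator has to reach further than @max_hop hops away. Hop 0
	 * is assumed to cover the CPUs local to @node.
	 */
	static void spread_work_near_node(int node, int max_hop, int max_num)
	{
		int cpu, hop, cnt = 0;

		rcu_read_lock();
		for_each_numa_cpu(cpu, hop, node, cpu_online_mask) {
			if (hop > max_hop || cnt++ == max_num)
				break;
			do_something(cpu);
		}
		rcu_read_unlock();
	}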
v2: https://lore.kernel.org/netdev/ZD3l6FBnUh9vTIGc@yury-ThinkPad/T/
v3:
- fix sched_numa_find_{next,nth}_cpu() when CONFIG_NUMA is off to
only traverse online CPUs;
- don't export sched_domains_numa_levels for testing purposes. In
the test, use for_each_node() macro;
- extend the test for for_each_node();
- in comments, mention that only online CPUs are traversed;
- rebase on top of 6.3.
Yury Norov (8):
sched: fix sched_numa_find_nth_cpu() in non-NUMA case
lib/find: add find_next_and_andnot_bit()
sched/topology: introduce sched_numa_find_next_cpu()
sched/topology: add for_each_numa_{,online}_cpu() macro
net: mlx5: switch comp_irqs_request() to using for_each_numa_cpu
lib/cpumask: update comment to cpumask_local_spread()
sched: drop for_each_numa_hop_mask()
lib: test for_each_numa_cpus()
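Patch 2 above adds find_next_and_andnot_bit(). The in-tree version is
built on the word-at-a-time FIND_NEXT_BIT() machinery in lib/find_bit.c;
the naive, bit-at-a-time sketch below is only meant to illustrate the
intended semantics (the first bit at or after @offset that is set in
@addr1 and @addr2 but clear in @addr3, or @size if there is none):

	static unsigned long sketch_find_next_and_andnot_bit(const unsigned long *addr1,
							     const unsigned long *addr2,
							     const unsigned long *addr3,
							     unsigned long size,
							     unsigned long offset)
	{
		unsigned long bit;

		/* Naive reference loop; the real helper works a word at a time. */
		for (bit = offset; bit < size; bit++)
			if (test_bit(bit, addr1) && test_bit(bit, addr2) &&
			    !test_bit(bit, addr3))
				return bit;

		return size;
	}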
drivers/net/ethernet/mellanox/mlx5/core/eq.c | 16 ++---
include/linux/find.h | 43 ++++++++++++
include/linux/topology.h | 40 ++++++-----
kernel/sched/topology.c | 53 ++++++++-------
lib/cpumask.c | 7 +-
lib/find_bit.c | 12 ++++
lib/test_bitmap.c | 70 +++++++++++++++++++-
7 files changed, 183 insertions(+), 58 deletions(-)
--
2.37.2
On Sun, Apr 30, 2023 at 10:18:01AM -0700, Yury Norov wrote:
> for_each_cpu() is widely used in kernel, and it's beneficial to create
> a NUMA-aware version of the macro.
>
> Recently added for_each_numa_hop_mask() works, but switching existing
> codebase to it is not an easy process.
>
> This series adds for_each_numa_cpu(), which is designed to be similar to
> the for_each_cpu(). It allows to convert existing code to NUMA-aware as
> simple as adding a hop iterator variable and passing it inside new macro.
> for_each_numa_cpu() takes care of the rest.

Hi Jakub,

Now that the series reviewed, can you consider taking it in sched
tree?

Thanks,
Yury
On Wed, 31 May 2023 08:43:46 -0700 Yury Norov wrote:
> On Sun, Apr 30, 2023 at 10:18:01AM -0700, Yury Norov wrote:
> > for_each_cpu() is widely used in kernel, and it's beneficial to create
> > a NUMA-aware version of the macro.
> >
> > Recently added for_each_numa_hop_mask() works, but switching existing
> > codebase to it is not an easy process.
> >
> > This series adds for_each_numa_cpu(), which is designed to be similar to
> > the for_each_cpu(). It allows to convert existing code to NUMA-aware as
> > simple as adding a hop iterator variable and passing it inside new macro.
> > for_each_numa_cpu() takes care of the rest.
>
> Hi Jakub,
>
> Now that the series reviewed, can you consider taking it in sched
> tree?

Do you mean someone else or did you mean the net-next tree?
On Wed, May 31, 2023 at 10:01:25AM -0700, Jakub Kicinski wrote:
> On Wed, 31 May 2023 08:43:46 -0700 Yury Norov wrote:
> > On Sun, Apr 30, 2023 at 10:18:01AM -0700, Yury Norov wrote:
> > > for_each_cpu() is widely used in kernel, and it's beneficial to create
> > > a NUMA-aware version of the macro.
> > >
> > > Recently added for_each_numa_hop_mask() works, but switching existing
> > > codebase to it is not an easy process.
> > >
> > > This series adds for_each_numa_cpu(), which is designed to be similar to
> > > the for_each_cpu(). It allows to convert existing code to NUMA-aware as
> > > simple as adding a hop iterator variable and passing it inside new macro.
> > > for_each_numa_cpu() takes care of the rest.
> >
> > Hi Jakub,
> >
> > Now that the series reviewed, can you consider taking it in sched
> > tree?
>
> Do you mean someone else or did you mean the net-next tree?

Sorry, net-next.
On Wed, 31 May 2023 10:08:58 -0700 Yury Norov wrote:
> On Wed, May 31, 2023 at 10:01:25AM -0700, Jakub Kicinski wrote:
> > On Wed, 31 May 2023 08:43:46 -0700 Yury Norov wrote:
> > > Now that the series reviewed, can you consider taking it in sched
> > > tree?
> >
> > Do you mean someone else or did you mean the net-next tree?
>
> Sorry, net-next.

I'm a bit of a coward. I don't trust my ability to judge this code,
and it seems Linus has opinions about it :(

The mlx5 patch looks like a small refactoring which can wait until 6.6.

I don't feel like net-next is the best path downstream for this series :(
On 30/04/23 10:18, Yury Norov wrote:
> for_each_cpu() is widely used in kernel, and it's beneficial to create
> a NUMA-aware version of the macro.
>
> Recently added for_each_numa_hop_mask() works, but switching existing
> codebase to it is not an easy process.
>
> This series adds for_each_numa_cpu(), which is designed to be similar to
> the for_each_cpu(). It allows to convert existing code to NUMA-aware as
> simple as adding a hop iterator variable and passing it inside new macro.
> for_each_numa_cpu() takes care of the rest.
>
> At the moment, we have 2 users of NUMA-aware enumerators. One is
> Melanox's in-tree driver, and another is Intel's in-review driver:
>
> https://lore.kernel.org/lkml/20230216145455.661709-1-pawel.chmielewski@intel.com/
>
> Both real-life examples follow the same pattern:
>
> for_each_numa_hop_mask(cpus, prev, node) {
> for_each_cpu_andnot(cpu, cpus, prev) {
> if (cnt++ == max_num)
> goto out;
> do_something(cpu);
> }
> prev = cpus;
> }
>
> With the new macro, it has a more standard look, like this:
>
> for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
> if (cnt++ == max_num)
> break;
> do_something(cpu);
> }
>
> Straight conversion of existing for_each_cpu() codebase to NUMA-aware
> version with for_each_numa_hop_mask() is difficult because it doesn't
> take a user-provided cpu mask, and eventually ends up with open-coded
> double loop. With for_each_numa_cpu() it shouldn't be a brainteaser.
> Consider the NUMA-ignorant example:
>
> cpumask_t cpus = get_mask();
> int cnt = 0, cpu;
>
> for_each_cpu(cpu, cpus) {
> if (cnt++ == max_num)
> break;
> do_something(cpu);
> }
>
> Converting it to NUMA-aware version would be as simple as:
>
> cpumask_t cpus = get_mask();
> int node = get_node();
> int cnt = 0, hop, cpu;
>
> for_each_numa_cpu(cpu, hop, node, cpus) {
> if (cnt++ == max_num)
> break;
> do_something(cpu);
> }
>
> The latter looks more verbose and avoids from open-coding that annoying
> double loop. Another advantage is that it works with a 'hop' parameter with
> the clear meaning of NUMA distance, and doesn't make people not familiar
> to enumerator internals bothering with current and previous masks machinery.
>
LGTM, I ran the tests on a few NUMA topologies and that all seems to behave
as expected. Thanks for working on this!
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
> LGTM, I ran the tests on a few NUMA topologies and that all seems to behave
> as expected. Thanks for working on this!
>
> Reviewed-by: Valentin Schneider <vschneid@redhat.com>

Thank you Valentin. If you spent time testing the series, why
don't you add your Tested-by?
On 02/05/23 14:58, Yury Norov wrote:
>> LGTM, I ran the tests on a few NUMA topologies and that all seems to behave
>> as expected. Thanks for working on this!
>>
>> Reviewed-by: Valentin Schneider <vschneid@redhat.com>
>
> Thank you Valentin. If you spent time testing the series, why
> don't you add your Tested-by?

Well, I only ran the test_bitmap stuff and checked the output of the
iterator then, I didn't get to test on actual hardware with a mellanox
card :-)

But yeah, I suppose that does count for the rest, so feel free to add to
all patches but #5:

Tested-by: Valentin Schneider <vschneid@redhat.com>