Hi,
=== Current situation: problems ===
Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
set to the housekeeping CPUs, while for !WQ_UNBOUND workqueues the local CPU
is selected. This leads to two different scenarios when a work item is
scheduled on an isolated CPU, depending on whether the "delay" value is 0 or
greater than 0:

schedule_delayed_work(&dwork, 0);

is handled by __queue_work(), which queues the work item on the current
local (isolated) CPU, while:

schedule_delayed_work(&dwork, 1);

moves the timer to a housekeeping CPU, and schedules the work there.
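For context, a trimmed paraphrase of the delayed work path in
kernel/workqueue.c (the branch structure reflects recent kernels; error
checks and bookkeeping are omitted):

	static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
					 struct delayed_work *dwork,
					 unsigned long delay)
	{
		struct timer_list *timer = &dwork->timer;

		if (!delay) {
			/* delay == 0: queue immediately; with
			 * cpu == WORK_CPU_UNBOUND and a per-cpu wq this
			 * picks the local (possibly isolated) CPU. */
			__queue_work(cpu, wq, &dwork->work);
			return;
		}

		dwork->wq = wq;
		dwork->cpu = cpu;
		timer->expires = jiffies + delay;

		if (housekeeping_enabled(HK_TYPE_TIMER)) {
			/* delay > 0: keep the timer off isolated CPUs, so
			 * the work is later queued from a housekeeping CPU. */
			cpu = smp_processor_id();
			if (!housekeeping_test_cpu(cpu, HK_TYPE_TIMER))
				cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
			add_timer_on(timer, cpu);
		} else {
			add_timer(timer);
		}
	}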
Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-cpu wq), and queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again
makes use of WORK_CPU_UNBOUND.
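For reference, the relevant wrappers in include/linux/workqueue.h boil down
to (simplified):

	/* schedule_work(w) is defined as: */
	queue_work(system_wq, w);
	/* ...which in turn is queue_work_on(WORK_CPU_UNBOUND, system_wq, w) */

	/* schedule_delayed_work(dw, d) is defined as: */
	queue_delayed_work(system_wq, dw, d);
	/* ...which is queue_delayed_work_on(WORK_CPU_UNBOUND, system_wq, dw, d) */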
This lack of consistency cannot be addressed without refactoring the API.
=== Recent changes to the WQ API ===
The following commits introduced the recent changes to the Workqueue API:
- commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
- commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
The old workqueues will be removed in a future release cycle.
=== Changes introduced by this series ===
1) [P 1] Replace uses of system_wq and system_unbound_wq
   system_unbound_wq is to be used when locality is not required.
   Because of that, system_unbound_wq has been replaced with
   system_dfl_wq, to make it the default choice when locality is not
   important. system_dfl_wq has the same behavior as the old
   system_unbound_wq.
2) [P 2-5] WQ_PERCPU added to alloc_workqueue()
   This change adds a new WQ_PERCPU flag to explicitly request a per-cpu
   workqueue from alloc_workqueue() when WQ_UNBOUND has not been
   specified (a short before/after sketch of both changes follows).
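To make 1) and 2) concrete, a before/after sketch in diff form (the
workqueue name and work item are illustrative, not taken from a specific
patch in this series):

	/* 1) locality not required: move to the default unbound wq */
	-	queue_work(system_unbound_wq, &dev->cleanup_work);
	+	queue_work(system_dfl_wq, &dev->cleanup_work);

	/* 2) per-cpu workqueues are now requested explicitly */
	-	wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM, 0);
	+	wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);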
Thanks!
Marco Crivellari (5):
RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with
system_dfl_wq
RDMA/core: WQ_PERCPU added to alloc_workqueue users
hfi1: WQ_PERCPU added to alloc_workqueue users
RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
drivers/infiniband/core/cm.c | 2 +-
drivers/infiniband/core/device.c | 4 ++--
drivers/infiniband/core/ucma.c | 2 +-
drivers/infiniband/hw/hfi1/init.c | 4 ++--
drivers/infiniband/hw/hfi1/opfn.c | 4 ++--
drivers/infiniband/hw/mlx4/cm.c | 2 +-
drivers/infiniband/hw/mlx5/odp.c | 4 ++--
drivers/infiniband/sw/rdmavt/cq.c | 3 ++-
8 files changed, 13 insertions(+), 12 deletions(-)
--
2.51.0
Hi,

On Sat, Nov 1, 2025 at 5:31 PM Marco Crivellari
<marco.crivellari@suse.com> wrote:
> Marco Crivellari (5):
>   RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with
>     system_dfl_wq
>   RDMA/core: WQ_PERCPU added to alloc_workqueue users
>   hfi1: WQ_PERCPU added to alloc_workqueue users
>   RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
>   IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
>
>  drivers/infiniband/core/cm.c      | 2 +-
>  drivers/infiniband/core/device.c  | 4 ++--
>  drivers/infiniband/core/ucma.c    | 2 +-
>  drivers/infiniband/hw/hfi1/init.c | 4 ++--
>  drivers/infiniband/hw/hfi1/opfn.c | 4 ++--
>  drivers/infiniband/hw/mlx4/cm.c   | 2 +-
>  drivers/infiniband/hw/mlx5/odp.c  | 4 ++--
>  drivers/infiniband/sw/rdmavt/cq.c | 3 ++-
>  8 files changed, 13 insertions(+), 12 deletions(-)

Gentle ping.

Thanks!

--
Marco Crivellari
L3 Support Engineer, Technology & Product
On Tue, Dec 02, 2025 at 02:22:55PM +0100, Marco Crivellari wrote:
> Hi,
>
> On Sat, Nov 1, 2025 at 5:31 PM Marco Crivellari
> <marco.crivellari@suse.com> wrote:
> > Marco Crivellari (5):
> >   RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with
> >     system_dfl_wq
> >   RDMA/core: WQ_PERCPU added to alloc_workqueue users
> >   hfi1: WQ_PERCPU added to alloc_workqueue users
> >   RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
> >   IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
> >
> >  drivers/infiniband/core/cm.c      | 2 +-
> >  drivers/infiniband/core/device.c  | 4 ++--
> >  drivers/infiniband/core/ucma.c    | 2 +-
> >  drivers/infiniband/hw/hfi1/init.c | 4 ++--
> >  drivers/infiniband/hw/hfi1/opfn.c | 4 ++--
> >  drivers/infiniband/hw/mlx4/cm.c   | 2 +-
> >  drivers/infiniband/hw/mlx5/odp.c  | 4 ++--
> >  drivers/infiniband/sw/rdmavt/cq.c | 3 ++-
> >  8 files changed, 13 insertions(+), 12 deletions(-)
>
> Gentle ping.

It looks like it was picked up, the thank you email must have become lost:

5c467151f6197d IB/isert: add WQ_PERCPU to alloc_workqueue users
65d21dee533755 IB/iser: add WQ_PERCPU to alloc_workqueue users
7196156b0ce3dc IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
5267feda50680c RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
5f93287fa9d0db hfi1: WQ_PERCPU added to alloc_workqueue users
e60c5583b661da RDMA/core: WQ_PERCPU added to alloc_workqueue users
f673fb3449fcd8 RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq

Jason
On Tue, Dec 2, 2025 at 8:17 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
> It looks like it was picked up, the thank you email must have become lost:
>
> 5c467151f6197d IB/isert: add WQ_PERCPU to alloc_workqueue users
> 65d21dee533755 IB/iser: add WQ_PERCPU to alloc_workqueue users
> 7196156b0ce3dc IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
> 5267feda50680c RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
> 5f93287fa9d0db hfi1: WQ_PERCPU added to alloc_workqueue users
> e60c5583b661da RDMA/core: WQ_PERCPU added to alloc_workqueue users
> f673fb3449fcd8 RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq
>
> Jason

Aha, thank you and sorry for the useless email!

--
Marco Crivellari
L3 Support Engineer, Technology & Product