Hi,
=== Current situation: problems ===
Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
set to the housekeeping CPUs, while for !WQ_UNBOUND work the local CPU is
selected.

This leads to different behavior when a work item is scheduled on an
isolated CPU, depending on whether the "delay" value is 0 or greater than 0:
schedule_delayed_work(, 0);
This will be handled by __queue_work(), which will queue the work item on
the current local (isolated) CPU, while:

schedule_delayed_work(, 1);

will move the timer to a housekeeping CPU and schedule the work there.
Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
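
For reference, the helpers mentioned above are currently wired roughly as
follows (paraphrased from include/linux/workqueue.h; exact signatures may
differ between kernel versions). Note how schedule_work() picks the per-cpu
system_wq while still going through WORK_CPU_UNBOUND:

    static inline bool schedule_work(struct work_struct *work)
    {
            return queue_work(system_wq, work);
    }

    static inline bool queue_work(struct workqueue_struct *wq,
                                  struct work_struct *work)
    {
            return queue_work_on(WORK_CPU_UNBOUND, wq, work);
    }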
This lack of consistency cannot be addressed without refactoring the API.
=== Recent changes to the WQ API ===
This series builds on the following recent changes to the workqueue API:
- commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
- commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
The old workqueues will be removed in a future release cycle.
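
For queue_work() callers, the intended usage after those commits looks
roughly like this (my_work is a placeholder work item, not something from
this series):

    /* explicitly per-cpu: queued on the local CPU's worker pool */
    queue_work(system_percpu_wq, &my_work);

    /* explicitly unbound: placement honours wq_unbound_cpumask */
    queue_work(system_dfl_wq, &my_work);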
=== Changes introduced by this series ===

1) [P 1-2] WQ_PERCPU added to alloc_workqueue()

This adds the new WQ_PERCPU flag to the alloc_workqueue() callers, to
explicitly request per-cpu behavior where WQ_UNBOUND has not been
specified.
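
As an illustration of the kind of change in patches 1-2 (the workqueue
name and the extra flag here are hypothetical, not the actual vduse or
virtio_balloon call sites):

    /* before: per-cpu by default, but only implicitly */
    wq = alloc_workqueue("my_drv_wq", WQ_FREEZABLE, 0);

    /* after: per-cpu behavior requested explicitly */
    wq = alloc_workqueue("my_drv_wq", WQ_PERCPU | WQ_FREEZABLE, 0);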
Thanks!
Marco Crivellari (2):
virtio_balloon: add WQ_PERCPU to alloc_workqueue users
vduse: add WQ_PERCPU to alloc_workqueue users
drivers/vdpa/vdpa_user/vduse_dev.c | 3 ++-
drivers/virtio/virtio_balloon.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
--
2.51.1
On Fri, Nov 07, 2025 at 04:49:15PM +0100, Marco Crivellari wrote:
> [...]
>
> Marco Crivellari (2):
>   virtio_balloon: add WQ_PERCPU to alloc_workqueue users
>   vduse: add WQ_PERCPU to alloc_workqueue users

To make sure, this does not seem to introduce any
functional change - you want me to queue this now?

On Mon, Nov 17, 2025 at 11:17 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> [...]
> To make sure, this does not seem to introduce any
> functional change - you want me to queue this now?

Hi,

Yes please, there are no functional changes, as you said. We are just
explicitly marking this workqueue as per-cpu.

Thanks!

--

Marco Crivellari

L3 Support Engineer, Technology & Product