Currently, if a user enqueues a work item with schedule_delayed_work(), the
workqueue used is system_wq (a per-CPU workqueue), while queue_delayed_work()
uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again uses
WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This continues the effort to refactor workqueue APIs, which began with
the introduction of new workqueues and a new alloc_workqueue flag in:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that does not explicitly specify WQ_UNBOUND
must now pass WQ_PERCPU to request a per-CPU workqueue. This change
converts the shpchp caller accordingly.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
drivers/pci/hotplug/shpchp_core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/hotplug/shpchp_core.c b/drivers/pci/hotplug/shpchp_core.c
index 0c341453afc6..56308515ecba 100644
--- a/drivers/pci/hotplug/shpchp_core.c
+++ b/drivers/pci/hotplug/shpchp_core.c
@@ -80,7 +80,8 @@ static int init_slots(struct controller *ctrl)
slot->device = ctrl->slot_device_offset + i;
slot->number = ctrl->first_slot + (ctrl->slot_num_inc * i);
- slot->wq = alloc_workqueue("shpchp-%d", 0, 0, slot->number);
+ slot->wq = alloc_workqueue("shpchp-%d", WQ_PERCPU, 0,
+ slot->number);
if (!slot->wq) {
retval = -ENOMEM;
goto error_slot;
--
2.51.1
[+cc Mani]
On Fri, Nov 07, 2025 at 03:36:24PM +0100, Marco Crivellari wrote:
> [...]
Squashed with similar patches [1] and applied on pci/workqueue for
v6.20, thanks!
[1] https://lore.kernel.org/r/20251229163858.GA63361@bhelgaas
On Mon, Dec 29, 2025 at 5:41 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> Squashed with similar patches [1] and applied on pci/workqueue for
> v6.20, thanks!
>
> See https://lore.kernel.org/r/20251229163858.GA63361@bhelgaas

Many thanks!

--
Marco Crivellari
L3 Support Engineer
On Fri, Nov 7, 2025 at 3:36 PM Marco Crivellari
<marco.crivellari@suse.com> wrote:
> [...]
Gentle ping.
Thanks!
--
Marco Crivellari
L3 Support Engineer