Currently, if a user enqueues a work item using schedule_delayed_work(),
the workqueue used is "system_wq" (a per-CPU workqueue), while
queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not
specified). The same applies to schedule_work(), which uses system_wq,
and queue_work(), which again makes use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
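
For reference, here is a minimal sketch of how these wrappers relate,
simplified from the static inlines in include/linux/workqueue.h:

	static inline bool queue_work(struct workqueue_struct *wq,
				      struct work_struct *work)
	{
		/* No CPU specified: placement is WORK_CPU_UNBOUND. */
		return queue_work_on(WORK_CPU_UNBOUND, wq, work);
	}

	static inline bool schedule_work(struct work_struct *work)
	{
		/* Hardwired to the per-CPU system_wq. */
		return queue_work(system_wq, work);
	}

So schedule_work() is implicitly tied to the per-CPU system_wq, while a
direct queue_work() caller chooses the workqueue but still gets
WORK_CPU_UNBOUND placement.
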
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This change adds the WQ_UNBOUND flag to sync_wq, to make it explicit that
this workqueue can be unbound and that it does not benefit from per-CPU
work.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
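
As a hypothetical example (the "example_wq" name here is illustrative
only), a caller now states its intent explicitly either way:

	/* Explicitly per-CPU, the historical default: */
	wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);

	/* Explicitly unbound, as done for sync_wq below: */
	wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
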
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4f3175df5999..7137723f8f95 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4888,7 +4888,7 @@ void __init rcu_init(void)
 	rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	WARN_ON(!rcu_gp_wq);
 
-	sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM, 0);
+	sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
 	WARN_ON(!sync_wq);
 
 	/* Respect if explicitly disabled via a boot parameter. */
--
2.51.0

On Fri, Sep 19, 2025 at 04:50:39PM +0200, Marco Crivellari wrote:
> Currently, if a user enqueues a work item using schedule_delayed_work(),
> the workqueue used is "system_wq" (a per-CPU workqueue), while
> queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not
> specified). The same applies to schedule_work(), which uses system_wq,
> and queue_work(), which again makes use of WORK_CPU_UNBOUND.
> This lack of consistency cannot be addressed without refactoring the API.
>
> alloc_workqueue() treats all queues as per-CPU by default, while unbound
> workqueues must opt in via WQ_UNBOUND.
>
> This default is suboptimal: most workloads benefit from unbound queues,
> allowing the scheduler to place worker threads where they’re needed and
> reducing noise when CPUs are isolated.
>
> This change adds the WQ_UNBOUND flag to sync_wq, to make it explicit that
> this workqueue can be unbound and that it does not benefit from per-CPU
> work.
>
> Once migration is complete, WQ_UNBOUND can be removed and unbound will
> become the implicit default.
>
> With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
> any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
> must now use WQ_PERCPU.
>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

> ---
> kernel/rcu/tree.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 4f3175df5999..7137723f8f95 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -4888,7 +4888,7 @@ void __init rcu_init(void)
>  	rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
>  	WARN_ON(!rcu_gp_wq);
> 
> -	sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM, 0);
> +	sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
>  	WARN_ON(!sync_wq);
> 
>  	/* Respect if explicitly disabled via a boot parameter. */
> --
> 2.51.0
>
--
Frederic Weisbecker
SUSE Labs

On Mon, Sep 22, 2025 at 04:14:46PM +0200, Frederic Weisbecker wrote:
> On Fri, Sep 19, 2025 at 04:50:39PM +0200, Marco Crivellari wrote:
> > Currently, if a user enqueues a work item using schedule_delayed_work(),
> > the workqueue used is "system_wq" (a per-CPU workqueue), while
> > queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not
> > specified). The same applies to schedule_work(), which uses system_wq,
> > and queue_work(), which again makes use of WORK_CPU_UNBOUND.
> > This lack of consistency cannot be addressed without refactoring the API.
> >
> > alloc_workqueue() treats all queues as per-CPU by default, while unbound
> > workqueues must opt in via WQ_UNBOUND.
> >
> > This default is suboptimal: most workloads benefit from unbound queues,
> > allowing the scheduler to place worker threads where they’re needed and
> > reducing noise when CPUs are isolated.
> >
> > This change adds the WQ_UNBOUND flag to sync_wq, to make it explicit that
> > this workqueue can be unbound and that it does not benefit from per-CPU
> > work.
> >
> > Once migration is complete, WQ_UNBOUND can be removed and unbound will
> > become the implicit default.
> >
> > With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
> > any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
> > must now use WQ_PERCPU.
> >
> > Suggested-by: Tejun Heo <tj@kernel.org>
> > Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
>
> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

Applied, thank you both!

I will push these out on my next rebase.

							Thanx, Paul

> > ---
> > kernel/rcu/tree.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 4f3175df5999..7137723f8f95 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -4888,7 +4888,7 @@ void __init rcu_init(void)
> >  	rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
> >  	WARN_ON(!rcu_gp_wq);
> > 
> > -	sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM, 0);
> > +	sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
> >  	WARN_ON(!sync_wq);
> > 
> >  	/* Respect if explicitly disabled via a boot parameter. */
> > --
> > 2.51.0
> >
>
> --
> Frederic Weisbecker
> SUSE Labs

On Mon, Sep 22, 2025 at 4:27 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> Applied, thank you both!
>
> I will push these out on my next rebase.

Many thanks, Paul!

--
Marco Crivellari
L3 Support Engineer, Technology & Product