Hi,

=== Current situation: problems ===

Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
set to the housekeeping CPUs, while for !WQ_UNBOUND the local CPU is selected.

This leads to different scenarios when a work item is scheduled on an
isolated CPU, depending on whether the "delay" value is 0 or greater than 0:

schedule_delayed_work(, 0);

This will be handled by __queue_work(), which will queue the work item on the
current local (isolated) CPU, while:

schedule_delayed_work(, 1);

will move the timer to a housekeeping CPU and schedule the work there.
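
To make the two paths concrete, here is a minimal sketch of a caller running
on an isolated CPU (the foo_* names are placeholders, not part of this series):

#include <linux/workqueue.h>

static void foo_work_fn(struct work_struct *work)
{
	/* the work body is irrelevant for the discussion above */
}

static DECLARE_DELAYED_WORK(foo_dwork, foo_work_fn);

static void foo_kick_from_isolated_cpu(void)
{
	/* delay == 0: __queue_work() keeps the item on the local (isolated) CPU */
	schedule_delayed_work(&foo_dwork, 0);

	/* delay > 0: the timer is moved to a housekeeping CPU and the work runs there */
	schedule_delayed_work(&foo_dwork, 1);
}
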
Currently, when a user enqueues a work item using schedule_delayed_work(),
the workqueue used is "system_wq" (a per-CPU wq), while queue_delayed_work()
uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again uses
WORK_CPU_UNBOUND.
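
For reference, the wrappers involved look roughly like this (paraphrased from
include/linux/workqueue.h; kerneldoc and unrelated details omitted):

/* the generic entry points pass WORK_CPU_UNBOUND ("no CPU specified")... */
static inline bool queue_work(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}

static inline bool queue_delayed_work(struct workqueue_struct *wq,
				      struct delayed_work *dwork,
				      unsigned long delay)
{
	return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}

/* ...while the schedule_*() helpers pick the per-CPU system_wq implicitly */
static inline bool schedule_work(struct work_struct *work)
{
	return queue_work(system_wq, work);
}

static inline bool schedule_delayed_work(struct delayed_work *dwork,
					 unsigned long delay)
{
	return queue_delayed_work(system_wq, dwork, delay);
}
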
This lack of consistency cannot be addressed without refactoring the API.

=== Recent changes to the WQ API ===

This series builds on the following recent changes to the workqueue API:

- commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
- commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

The old workqueues will be removed in a future release cycle.
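
As an illustration of the WQ_PERCPU flag added by the second commit, a
driver-private workqueue can now request per-CPU behavior explicitly (the
foo_* names below are hypothetical, not something this series touches):

#include <linux/workqueue.h>

static struct workqueue_struct *foo_wq;

static int foo_init(void)
{
	/* per-CPU behavior is stated explicitly instead of being an implicit default */
	foo_wq = alloc_workqueue("foo", WQ_PERCPU, 0);
	if (!foo_wq)
		return -ENOMEM;

	return 0;
}
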
=== Changes introduced by this series ===

1) [P 1-2-3] Replace uses of system_wq and system_unbound_wq

system_wq is a per-CPU workqueue, but its name does not make that clear.
system_unbound_wq is meant to be used when locality is not required.

Because of that, system_wq has been replaced with system_percpu_wq, and
system_unbound_wq has been replaced with system_dfl_wq.
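
In practice the conversion is a one-to-one substitution along the following
lines (struct foo_device and its work items are hypothetical, shown only to
illustrate the mapping; the actual patches touch the DRM helpers listed below):

#include <linux/workqueue.h>

struct foo_device {
	struct work_struct commit_work;
	struct work_struct output_poll_work;
};

/* before: the names do not say where the work will run */
static void foo_kick_old(struct foo_device *foo)
{
	queue_work(system_unbound_wq, &foo->commit_work);	/* locality not required */
	schedule_work(&foo->output_poll_work);			/* implicitly system_wq */
}

/* after: same behavior, explicit names */
static void foo_kick_new(struct foo_device *foo)
{
	queue_work(system_dfl_wq, &foo->commit_work);
	queue_work(system_percpu_wq, &foo->output_poll_work);
}
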
Thanks!

Marco Crivellari (3):
  drm/atomic-helper: replace use of system_unbound_wq with system_dfl_wq
  drm/probe-helper: replace use of system_wq with system_percpu_wq
  drm/self_refresh: replace use of system_wq with system_percpu_wq

 drivers/gpu/drm/drm_atomic_helper.c       | 6 +++---
 drivers/gpu/drm/drm_probe_helper.c        | 2 +-
 drivers/gpu/drm/drm_self_refresh_helper.c | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

--
2.51.0
Hi
On 30.10.25 17:20, Marco Crivellari wrote:
> Hi,
>
> === Current situation: problems ===
>
> Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
> set to the housekeeping CPUs, while for !WQ_UNBOUND the local CPU is selected.
>
> This leads to different scenarios when a work item is scheduled on an
> isolated CPU, depending on whether the "delay" value is 0 or greater than 0:
>
> schedule_delayed_work(, 0);
>
> This will be handled by __queue_work(), which will queue the work item on the
> current local (isolated) CPU, while:
>
> schedule_delayed_work(, 1);
>
> will move the timer to a housekeeping CPU and schedule the work there.
>
> Currently, when a user enqueues a work item using schedule_delayed_work(),
> the workqueue used is "system_wq" (a per-CPU wq), while queue_delayed_work()
> uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
> schedule_work(), which uses system_wq, and queue_work(), which again uses
> WORK_CPU_UNBOUND.
>
> This lack of consistency cannot be addressed without refactoring the API.
>
> === Recent changes to the WQ API ===
>
> This series builds on the following recent changes to the workqueue API:
>
> - commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
> - commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
>
> The old workqueues will be removed in a future release cycle.
>
> === Changes introduced by this series ===
>
> 1) [P 1-2-3] Replace uses of system_wq and system_unbound_wq
>
> system_wq is a per-CPU workqueue, but its name does not make that clear.
> system_unbound_wq is meant to be used when locality is not required.
>
> Because of that, system_wq has been replaced with system_percpu_wq, and
> system_unbound_wq has been replaced with system_dfl_wq.
From the description, I've found it hard to see if there's a change in
semantics here. But this series is effectively about renaming AFAICT. If so,

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

for all patches.
Best regards
Thomas
>
> Thanks!
>
>
> Marco Crivellari (3):
> drm/atomic-helper: replace use of system_unbound_wq with system_dfl_wq
> drm/probe-helper: replace use of system_wq with system_percpu_wq
> drm/self_refresh: replace use of system_wq with system_percpu_wq
>
> drivers/gpu/drm/drm_atomic_helper.c | 6 +++---
> drivers/gpu/drm/drm_probe_helper.c | 2 +-
> drivers/gpu/drm/drm_self_refresh_helper.c | 2 +-
> 3 files changed, 5 insertions(+), 5 deletions(-)
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
On Wed, Feb 4, 2026 at 12:58 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> [...]
> From the description, I've found it hard to see if there's a change in
> semantics here. But this series is effectively about renaming AFAICT. If so,
>
> Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> for all patches.

Hi Thomas,

The new version of the changelog is clearer than this one. In case you
want a cleaner version, I can submit a new version.

Anyhow, yes: in short, the change is the introduction of system_percpu_wq
and system_dfl_wq without changing the behavior:

system_wq -> system_percpu_wq
system_unbound_wq -> system_dfl_wq

Many thanks!

--
Marco Crivellari
L3 Support Engineer
Hi

On 04.02.26 14:36, Marco Crivellari wrote:
> On Wed, Feb 4, 2026 at 12:58 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>> [...]
>> From the description, I've found it hard to see if there's a change in
>> semantics here. But this series is effectively about renaming AFAICT. If so,
>>
>> Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
>>
>> for all patches.
> Hi Thomas,
>
> The new version of the changelog is clearer than this one. In case you
> want a cleaner version, I can submit a new version.

No need for an update.

>
> Anyhow, yes: in short, the change is the introduction of system_percpu_wq
> and system_dfl_wq without changing the behavior:
>
> system_wq -> system_percpu_wq
> system_unbound_wq -> system_dfl_wq
>
>
> Many thanks!

These patches go through DRM trees, right? I can merge them if no other
reviews come in. Ping me if they get lost again.

Best regards
Thomas

>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
Hi,

On Thu, Feb 5, 2026 at 8:22 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> [...]
> These patches go through DRM trees, right? I can merge them if no other
> reviews come in. Ping me if they get lost again.

Yes, sure I will, thank you!

--
Marco Crivellari
L3 Support Engineer
Hi,

On Thu, Oct 30, 2025 at 5:20 PM Marco Crivellari <marco.crivellari@suse.com> wrote:
> Marco Crivellari (3):
>   drm/atomic-helper: replace use of system_unbound_wq with system_dfl_wq
>   drm/probe-helper: replace use of system_wq with system_percpu_wq
>   drm/self_refresh: replace use of system_wq with system_percpu_wq
>
> drivers/gpu/drm/drm_atomic_helper.c | 6 +++---
> drivers/gpu/drm/drm_probe_helper.c | 2 +-
> drivers/gpu/drm/drm_self_refresh_helper.c | 2 +-
> 3 files changed, 5 insertions(+), 5 deletions(-)

Gentle ping.

Thanks!

--
Marco Crivellari
L3 Support Engineer, Technology & Product
On Tue, Dec 2, 2025 at 2:21 PM Marco Crivellari <marco.crivellari@suse.com> wrote:
> On Thu, Oct 30, 2025 at 5:20 PM Marco Crivellari
> <marco.crivellari@suse.com> wrote:
> > Marco Crivellari (3):
> >   drm/atomic-helper: replace use of system_unbound_wq with system_dfl_wq
> >   drm/probe-helper: replace use of system_wq with system_percpu_wq
> >   drm/self_refresh: replace use of system_wq with system_percpu_wq
> >
> > drivers/gpu/drm/drm_atomic_helper.c | 6 +++---
> > drivers/gpu/drm/drm_probe_helper.c | 2 +-
> > drivers/gpu/drm/drm_self_refresh_helper.c | 2 +-
> > 3 files changed, 5 insertions(+), 5 deletions(-)
>
> Gentle ping.

Hi,

Gentle ping.

Thanks.

--
Marco Crivellari
L3 Support Engineer