Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-CPU wq), while queue_delayed_work()
uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same
applies to schedule_work(), which uses system_wq, while queue_work()
again makes use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.
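For reference, a minimal sketch of how these helpers relate today,
simplified from include/linux/workqueue.h (not part of this patch):

	static inline bool schedule_work(struct work_struct *work)
	{
		/* schedule_work() hard-codes the per-CPU system_wq... */
		return queue_work(system_wq, work);
	}

	static inline bool queue_work(struct workqueue_struct *wq,
				      struct work_struct *work)
	{
		/* ...while queue_work() only says "no specific CPU". */
		return queue_work_on(WORK_CPU_UNBOUND, wq, work);
	}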
This patch continues the effort to refactor the workqueue APIs, which
began with the changes that introduced new workqueues and a new
alloc_workqueue flag:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
system_dfl_wq should be the default workqueue so as not to enforce
locality constraints for random work whenever it's not required.
The old system_unbound_wq will be kept for a few release cycles.
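For illustration only (not part of this patch; the my_* names below are
made up), a driver following the new convention would either use the
unbound default or explicitly request per-CPU execution, roughly:

	#include <linux/workqueue.h>

	static struct workqueue_struct *my_percpu_wq;
	static struct work_struct my_work;

	static void my_work_fn(struct work_struct *work)
	{
		/* work handler body */
	}

	static int my_setup(void)
	{
		INIT_WORK(&my_work, my_work_fn);

		/* default: unbound, no locality constraint implied */
		queue_work(system_dfl_wq, &my_work);

		/* only callers that need CPU locality ask for it explicitly */
		my_percpu_wq = alloc_workqueue("my_percpu_wq", WQ_PERCPU, 0);
		if (!my_percpu_wq)
			return -ENOMEM;

		return 0;
	}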
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
drivers/soc/xilinx/zynqmp_power.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/soc/xilinx/zynqmp_power.c b/drivers/soc/xilinx/zynqmp_power.c
index ae59bf16659a..6145c4fe192e 100644
--- a/drivers/soc/xilinx/zynqmp_power.c
+++ b/drivers/soc/xilinx/zynqmp_power.c
@@ -82,7 +82,7 @@ static void subsystem_restart_event_callback(const u32 *payload, void *data)
memcpy(zynqmp_pm_init_restart_work->args, &payload[0],
sizeof(zynqmp_pm_init_restart_work->args));
- queue_work(system_unbound_wq, &zynqmp_pm_init_restart_work->callback_work);
+ queue_work(system_dfl_wq, &zynqmp_pm_init_restart_work->callback_work);
}
static void suspend_event_callback(const u32 *payload, void *data)
@@ -95,7 +95,7 @@ static void suspend_event_callback(const u32 *payload, void *data)
memcpy(zynqmp_pm_init_suspend_work->args, &payload[1],
sizeof(zynqmp_pm_init_suspend_work->args));
- queue_work(system_unbound_wq, &zynqmp_pm_init_suspend_work->callback_work);
+ queue_work(system_dfl_wq, &zynqmp_pm_init_suspend_work->callback_work);
}
static irqreturn_t zynqmp_pm_isr(int irq, void *data)
@@ -140,7 +140,7 @@ static void ipi_receive_callback(struct mbox_client *cl, void *data)
memcpy(zynqmp_pm_init_suspend_work->args, &payload[1],
sizeof(zynqmp_pm_init_suspend_work->args));
- queue_work(system_unbound_wq,
+ queue_work(system_dfl_wq,
&zynqmp_pm_init_suspend_work->callback_work);
/* Send NULL message to mbox controller to ack the message */
--
2.51.1
On 11/4/25 11:39, Marco Crivellari wrote:
> [full patch quoted above, snipped]
Applied.
M
On Mon, Dec 15, 2025 at 8:47 AM Michal Simek <michal.simek@amd.com> wrote:
> Applied.

Many thanks!

--
Marco Crivellari
L3 Support Engineer
Hi,

On Tue, Nov 4, 2025 at 11:39 AM Marco Crivellari
<marco.crivellari@suse.com> wrote:
> drivers/soc/xilinx/zynqmp_power.c | 6 +++---

Gentle ping.

Thanks!

--
Marco Crivellari
L3 Support Engineer, Technology & Product
Hi,

On 12/2/25 14:25, Marco Crivellari wrote:
> Hi,
>
> On Tue, Nov 4, 2025 at 11:39 AM Marco Crivellari
> <marco.crivellari@suse.com> wrote:
>> drivers/soc/xilinx/zynqmp_power.c | 6 +++---
>

I will queue it after rc1 tag.

Thanks,
Michal
On Tue, Dec 2, 2025 at 2:36 PM Michal Simek <michal.simek@amd.com> wrote:
>
> Hi,
>
> On 12/2/25 14:25, Marco Crivellari wrote:
> > Hi,
> >
> > On Tue, Nov 4, 2025 at 11:39 AM Marco Crivellari
> > <marco.crivellari@suse.com> wrote:
> >> drivers/soc/xilinx/zynqmp_power.c | 6 +++---
> >
>
> I will queue it after rc1 tag.
>
> Thanks,
> Michal

Hi,

Sure, many thanks!
I realized after I sent the email that 6.18 was already out...

--
Marco Crivellari
L3 Support Engineer, Technology & Product