From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
To ensure non-reentrancy, __queue_work() attempts to enqueue a work
item to the pool of the currently executing worker. This is not only
unnecessary for an ordered workqueue, where ordering inherently implies
non-reentrancy, but it can also disrupt the execution sequence if the
item is not enqueued on the newest PWQ.

Just queue it to the newest PWQ and let order management guarantee
non-reentrancy.
Fixes: 4c065dbce1e8 ("workqueue: Enable unbound cpumask update on ordered workqueues")
Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
kernel/workqueue.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c910f3c28664..d4fecd23ea44 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2271,9 +2271,13 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
* If @work was previously on a different pool, it might still be
* running there, in which case the work needs to be queued on that
* pool to guarantee non-reentrancy.
+ *
+ * For ordered workqueue, work items must be queued on the newest pwq
+ * for accurate order management. Guaranteed order also guarantees
+ * non-reentrancy. See the comments above unplug_oldest_pwq().
*/
last_pool = get_work_pool(work);
- if (last_pool && last_pool != pool) {
+ if (last_pool && last_pool != pool && !(wq->flags & __WQ_ORDERED)) {
struct worker *worker;
raw_spin_lock(&last_pool->lock);
--
2.19.1.6.gb485710b
On 7/3/24 05:27, Lai Jiangshan wrote:
Thanks for the fix again.
Acked-by: Waiman Long <longman@redhat.com>
On Wed, Jul 03, 2024 at 05:27:41PM +0800, Lai Jiangshan wrote:
Applied to wq/for-6.10-fixes w/ stable cc added.
Thanks.
--
tejun