From: Li RongQing <lirongqing@baidu.com>
The workqueue watchdog currently reports that drain_vmap_area_work
hogs the CPU for more than 10ms. This typically happens during heavy
memory pressure or high-frequency vmap/vunmap operations where the lazy
drain list grows large.
[ 2069.796205] workqueue: drain_vmap_area_work hogged CPU for >10000us 4 times, consider switching to WQ_UNBOUND
[ 2192.823225] workqueue: drain_vmap_area_work hogged CPU for >10000us 5 times, consider switching to WQ_UNBOUND
[ 3225.388966] workqueue: drain_vmap_area_work hogged CPU for >10000us 7 times, consider switching to WQ_UNBOUND
Since vmap area draining is a background housekeeping task that does
not require strict CPU affinity or cache locality, it is a prime
candidate for an unbound workqueue.
Switching from schedule_work() to queue_work(system_unbound_wq, ...)
moves the draining work to an unbound worker, which the scheduler can
place on any available CPU. This prevents the work item from
monopolizing the CPU it was queued on and resolves the
"consider switching to WQ_UNBOUND" warning.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
mm/vmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 61caa55..5f2218a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2471,7 +2471,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 
 	/* After this point, we may free va at any time */
 	if (unlikely(nr_lazy > nr_lazy_max))
-		schedule_work(&drain_vmap_work);
+		queue_work(system_unbound_wq, &drain_vmap_work);
 }
 
 /*
--
2.9.4
On Wed, Mar 18, 2026 at 03:36:30AM -0400, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
>
> [...]
>
> @@ -2471,7 +2471,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
>
>  	/* After this point, we may free va at any time */
>  	if (unlikely(nr_lazy > nr_lazy_max))
> -		schedule_work(&drain_vmap_work);
> +		queue_work(system_unbound_wq, &drain_vmap_work);
>  }
>
> --
> 2.9.4
>
We free memory here. Therefore it is time to switch to our own workqueue
with WQ_MEM_RECLAIM | WQ_UNBOUND flags.

--
Uladzislau Rezki
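For reference, a v2 along the lines of this suggestion might look
roughly like the sketch below. This is not the actual v2 patch; the
queue name "drain_vmap", the init-helper name, and its placement are
illustrative assumptions. WQ_MEM_RECLAIM guarantees a rescuer thread so
the drain can make forward progress under memory pressure, and
WQ_UNBOUND keeps the work off the queuing CPU.

```c
/* Sketch only: a dedicated workqueue for vmap draining, as suggested.
 * Names and init placement are illustrative, not the real v2 patch.
 */
static struct workqueue_struct *drain_vmap_wq;

static void __init vmap_init_drain_wq(void)
{
	/* WQ_MEM_RECLAIM: rescuer thread, safe on the reclaim path.
	 * WQ_UNBOUND: worker may run on any CPU.
	 */
	drain_vmap_wq = alloc_workqueue("drain_vmap",
					WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	BUG_ON(!drain_vmap_wq);
}

/* ...and in free_vmap_area_noflush(), queue onto the dedicated wq: */
	if (unlikely(nr_lazy > nr_lazy_max))
		queue_work(drain_vmap_wq, &drain_vmap_work);
```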
On Wed, Mar 18, 2026, Uladzislau Rezki wrote:
> We free memory here. Therefore it is time to switch to our own workqueue
> with WQ_MEM_RECLAIM | WQ_UNBOUND flags.

Ok, I will send v2

Thanks

[Li,Rongqing]