drain_freelist() can be called with a very large number of slabs to free,
such as for kmem_cache_shrink(), or depending on various settings of the
slab cache when doing periodic reaping.
If there is a potentially long list of slabs to drain, periodically
reschedule to ensure we aren't saturating the CPU for too long.
Signed-off-by: David Rientjes <rientjes@google.com>
---
mm/slab.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
 		raw_spin_unlock_irq(&n->list_lock);
 		slab_destroy(cache, slab);
 		nr_freed++;
+
+		cond_resched();
 	}
 out:
 	return nr_freed;
On 12/28/22 07:05, David Rientjes wrote:
> drain_freelist() can be called with a very large number of slabs to free,
> such as for kmem_cache_shrink(), or depending on various settings of the
> slab cache when doing periodic reaping.
>
> If there is a potentially long list of slabs to drain, periodically
> schedule to ensure we aren't saturating the cpu for too long.
>
> Signed-off-by: David Rientjes <rientjes@google.com>

Thanks, added to slab/for-6.2-rc3/fixes

> ---
>  mm/slab.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/slab.c b/mm/slab.c
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
>  		raw_spin_unlock_irq(&n->list_lock);
>  		slab_destroy(cache, slab);
>  		nr_freed++;
> +
> +		cond_resched();
>  	}
>  out:
>  	return nr_freed;
On Tue, Dec 27, 2022 at 10:05:48PM -0800, David Rientjes wrote:
> drain_freelist() can be called with a very large number of slabs to free,
> such as for kmem_cache_shrink(), or depending on various settings of the
> slab cache when doing periodic reaping.
>
> If there is a potentially long list of slabs to drain, periodically
> schedule to ensure we aren't saturating the cpu for too long.
>
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  mm/slab.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/slab.c b/mm/slab.c
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
>  		raw_spin_unlock_irq(&n->list_lock);
>  		slab_destroy(cache, slab);
>  		nr_freed++;
> +
> +		cond_resched();
>  	}
>  out:
>  	return nr_freed;

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

--
Thanks,
Hyeonggon