The SLUB allocator relies on the percpu allocator to initialize its
->cpu_slab during early boot. For that, the dynamic chunk of percpu
memory which serves early allocations needs to be large enough to
satisfy the kmalloc cache creation.
However, the current BUILD_BUG_ON() in alloc_kmem_cache_cpus() doesn't
take the NR_KMALLOC_TYPES dimension of the kmalloc caches array into
account. Fix that with the correct calculation.
Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/slub.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 157527d7101b..8ac3bb9a122a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4017,7 +4017,8 @@ init_kmem_cache_node(struct kmem_cache_node *n)
static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
{
BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
- KMALLOC_SHIFT_HIGH * sizeof(struct kmem_cache_cpu));
+ NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH *
+ sizeof(struct kmem_cache_cpu));
/*
* Must align to double word boundary for the double cmpxchg
--
2.34.1
On 10/24/22 10:14, Baoquan He wrote:
> The SLUB allocator relies on the percpu allocator to initialize its
> ->cpu_slab during early boot. For that, the dynamic chunk of percpu
> memory which serves early allocations needs to be large enough to
> satisfy the kmalloc cache creation.
>
> However, the current BUILD_BUG_ON() in alloc_kmem_cache_cpus() doesn't
> take the NR_KMALLOC_TYPES dimension of the kmalloc caches array into
> account. Fix that with the correct calculation.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
> mm/slub.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
As only slub is touched and there are no prerequisites in the previous
patches, I took this to the slab tree, branch
slab/for-6.2/cleanups
Thanks!
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 157527d7101b..8ac3bb9a122a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4017,7 +4017,8 @@ init_kmem_cache_node(struct kmem_cache_node *n)
> static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
> {
> BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
> - KMALLOC_SHIFT_HIGH * sizeof(struct kmem_cache_cpu));
> + NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH *
> + sizeof(struct kmem_cache_cpu));
>
> /*
> * Must align to double word boundary for the double cmpxchg
On 11/06/22 at 09:56pm, Vlastimil Babka wrote:
> On 10/24/22 10:14, Baoquan He wrote:
> [...]
>
> As only slub is touched and there are no prerequisites in the previous
> patches, I took this to the slab tree, branch
> slab/for-6.2/cleanups

Yes, it only changes slub code. Thanks for taking it.

I will resend v2 with the remaining 7 percpu-only patches, updated.
Hi Baoquan,

On Mon, Nov 07, 2022 at 12:35:56PM +0800, Baoquan He wrote:
> On 11/06/22 at 09:56pm, Vlastimil Babka wrote:
> > [...]
> >
> > As only slub is touched and there are no prerequisites in the previous
> > patches, I took this to the slab tree, branch
> > slab/for-6.2/cleanups
>
> Yes, it only changes slub code. Thanks for taking it.
>
> I will resend v2 with the remaining 7 percpu-only patches, updated.

Don't worry about resending them, I'll pick them up tomorrow morning.

Thanks,
Dennis
On 11/06/22 at 11:20pm, Dennis Zhou wrote:
> Hi Baoquan,
>
> [...]
>
> Don't worry about resending them, I'll pick them up tomorrow morning.

That's great. Thanks a lot, Dennis.
On Mon, Oct 24, 2022 at 04:14:35PM +0800, Baoquan He wrote:
> The SLUB allocator relies on the percpu allocator to initialize its
> ->cpu_slab during early boot. For that, the dynamic chunk of percpu
> memory which serves early allocations needs to be large enough to
> satisfy the kmalloc cache creation.
>
> However, the current BUILD_BUG_ON() in alloc_kmem_cache_cpus() doesn't
> take the NR_KMALLOC_TYPES dimension of the kmalloc caches array into
> account. Fix that with the correct calculation.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
> mm/slub.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 157527d7101b..8ac3bb9a122a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4017,7 +4017,8 @@ init_kmem_cache_node(struct kmem_cache_node *n)
> static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
> {
> BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
> - KMALLOC_SHIFT_HIGH * sizeof(struct kmem_cache_cpu));
> + NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH *
> + sizeof(struct kmem_cache_cpu));
>
> /*
> * Must align to double word boundary for the double cmpxchg
> --
> 2.34.1
>
Acked-by: Dennis Zhou <dennis@kernel.org>
Thanks,
Dennis
On Mon, Oct 24, 2022 at 04:14:35PM +0800, Baoquan He wrote:
> The SLUB allocator relies on the percpu allocator to initialize its
> ->cpu_slab during early boot. For that, the dynamic chunk of percpu
> memory which serves early allocations needs to be large enough to
> satisfy the kmalloc cache creation.
>
> However, the current BUILD_BUG_ON() in alloc_kmem_cache_cpus() doesn't
> take the NR_KMALLOC_TYPES dimension of the kmalloc caches array into
> account. Fix that with the correct calculation.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
> mm/slub.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 157527d7101b..8ac3bb9a122a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4017,7 +4017,8 @@ init_kmem_cache_node(struct kmem_cache_node *n)
> static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
> {
> BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
> - KMALLOC_SHIFT_HIGH * sizeof(struct kmem_cache_cpu));
> + NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH *
> + sizeof(struct kmem_cache_cpu));
>
> /*
> * Must align to double word boundary for the double cmpxchg
Looks good to me.
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Thanks!
> --
> 2.34.1
>
--
Thanks,
Hyeonggon