From: Arnd Bergmann <arnd@arndb.de>
When both PREEMPT_RT and RANDOM_KMALLOC_CACHES are enabled, the slub
allocator runs into a build-time failure:
In file included from <command-line>:
In function 'alloc_kmem_cache_cpus',
inlined from 'do_kmem_cache_create' at mm/slub.c:6041:7:
include/linux/compiler_types.h:517:45: error: call to '__compiletime_assert_598' declared with attribute error: BUILD_BUG_ON failed: PERCPU_DYNAMIC_EARLY_SIZE < NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH * sizeof(struct kmem_cache_cpu)
517 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^
include/linux/compiler_types.h:498:25: note: in definition of macro '__compiletime_assert'
498 | prefix ## suffix(); \
| ^~~~~~
include/linux/compiler_types.h:517:9: note: in expansion of macro '_compiletime_assert'
517 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^~~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
39 | #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
| ^~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:50:9: note: in expansion of macro 'BUILD_BUG_ON_MSG'
50 | BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
| ^~~~~~~~~~~~~~~~
mm/slub.c:5133:9: note: in expansion of macro 'BUILD_BUG_ON'
5133 | BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
| ^~~~~~~~~~~~
The problem is the additional size overhead from local_lock in
struct kmem_cache_cpu. Avoid this by preallocating a larger area.
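To make the failing check concrete, here is a minimal userspace sketch of the
inequality quoted in the error above. All figures (19 kmalloc types, 288 bytes
per struct kmem_cache_cpu, 64KiB pages) are assumptions matching the worst-case
configuration discussed further down in this thread, and the definition of
PERCPU_DYNAMIC_EARLY_SIZE as 20 << PERCPU_DYNAMIC_SIZE_SHIFT is inferred from
the "20 << 12" figures quoted there; this is an illustration, not kernel code.

/*
 * Minimal userspace sketch of the size check quoted in the build error
 * above (the BUILD_BUG_ON() in alloc_kmem_cache_cpus()).  The numbers
 * below are assumptions for one worst-case config (64KiB pages,
 * PREEMPT_RT + LOCKDEP + LOCK_STAT).
 */
#include <assert.h>

#define PAGE_SHIFT			16	/* assumed: 64KiB pages */
#define KMALLOC_SHIFT_HIGH		(PAGE_SHIFT + 1)
#define NR_KMALLOC_TYPES		19	/* assumed worst case incl. MEMCG */
#define SIZEOF_KMEM_CACHE_CPU		288	/* assumed size with RT + lockdep + lockstat */

/* 12 today; 13 with this patch on PREEMPT_RT */
#define PERCPU_DYNAMIC_SIZE_SHIFT	13
#define PERCPU_DYNAMIC_EARLY_SIZE	(20 << PERCPU_DYNAMIC_SIZE_SHIFT)

int main(void)
{
	/*
	 * With shift 12 this assert fires (81920 < 93024), which is what
	 * the BUILD_BUG_ON() reports at compile time; with shift 13 the
	 * preallocated early area is large enough (163840 >= 93024).
	 */
	assert(PERCPU_DYNAMIC_EARLY_SIZE >=
	       NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH * SIZEOF_KMEM_CACHE_CPU);
	return 0;
}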
Fixes: d8fccd9ca5f9 ("arm64: Allow to enable PREEMPT_RT.")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202410020326.iaZIteIx-lkp@intel.com/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
There is a good chance that there is a better way to address this, this
version was the first I came up with and I verified that it fixes all of
the broken configs.
See https://pastebin.com/raw/tuPgfPzu for a .config from a failing
randconfig build on 6.12-rc1.
---
include/linux/percpu.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index b6321fc49159..4083295da27f 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -41,7 +41,11 @@
 	PCPU_MIN_ALLOC_SHIFT)
 
 #ifdef CONFIG_RANDOM_KMALLOC_CACHES
+#ifdef CONFIG_PREEMPT_RT
+#define PERCPU_DYNAMIC_SIZE_SHIFT 13
+#else
 #define PERCPU_DYNAMIC_SIZE_SHIFT 12
+#endif
 #else
 #define PERCPU_DYNAMIC_SIZE_SHIFT 10
 #endif
--
2.39.2
On 2024-10-04 09:56:56 [+0000], Arnd Bergmann wrote:
> The problem is the additional size overhead from local_lock in
> struct kmem_cache_cpu. Avoid this by preallocating a larger area.

The worst case would be enabling additionally MEMCG so NR_KMALLOC_TYPES
increases by one. And then we have:

	PERCPU_DYNAMIC_EARLY_SIZE <
		NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH * sizeof(struct kmem_cache_cpu));

There is more to it than just RANDOM_KMALLOC_CACHES and PREEMPT_RT.
There is additionally CONFIG_LOCKDEP, which increases the size of
local_lock_t further, plus CONFIG_LOCK_STAT. The last one is kind of bad
in terms of required pad area. Then we have CONFIG_PAGE_SIZE_64KB set,
which is the culprit. But 16K_PAGES also fails in this full-blown case.

	PERCPU_DYNAMIC_EARLY_SIZE < NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH * sizeof(struct kmem_cache_cpu));

	4KiB	20 << 12 < 19 * (12 + 1) * 288		80KiB < 69.46875KiB
	16KiB	20 << 12 < 19 * (14 + 1) * 288		80KiB < 80.15625KiB
	64KiB	20 << 12 < 19 * (16 + 1) * 288		80KiB < 90.84375KiB
	128KiB	20 << 12 < 19 * (17 + 1) * 288		80KiB < 96.1875KiB

Just disabling CONFIG_LOCK_STAT makes the 16KiB PAGE_SIZE case work
again (75.703125KiB); the 64KiB case still fails (85.796875KiB).

> There is a good chance that there is a better way to address this, this
> version was the first I came up with and I verified that it fixes all of
> the broken configs.

How bad is it, to have PERCPU_DYNAMIC_SIZE_SHIFT unconditionally set to
13? If it is bad could we restrict it with LOCKDEP and PAGE_SIZE > 4KiB?

So maybe something like this:

diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index b6321fc491598..52b5ea663b9f0 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -41,7 +41,11 @@
 	PCPU_MIN_ALLOC_SHIFT)
 
 #ifdef CONFIG_RANDOM_KMALLOC_CACHES
-#define PERCPU_DYNAMIC_SIZE_SHIFT 12
+# if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PAGE_SIZE_4KB)
+# define PERCPU_DYNAMIC_SIZE_SHIFT 13
+# else
+# define PERCPU_DYNAMIC_SIZE_SHIFT 12
+#endif /* LOCKDEP and PAGE_SIZE > 4KiB */
 #else
 #define PERCPU_DYNAMIC_SIZE_SHIFT 10
 #endif

Sebastian
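For anyone who wants to reproduce the arithmetic in the table above, here is a
small userspace sketch. The 19 kmalloc types and the 288-byte
struct kmem_cache_cpu are the worst-case figures quoted above and are
assumptions tied to that particular config; the shift selection mimics the
proposed LOCKDEP/page-size conditional, assuming LOCKDEP is enabled.

#include <stdio.h>

int main(void)
{
	const int nr_kmalloc_types = 19;	/* assumed: RANDOM_KMALLOC_CACHES + MEMCG worst case */
	const int sizeof_kmem_cache_cpu = 288;	/* assumed: PREEMPT_RT + LOCKDEP + LOCK_STAT */
	const int page_shifts[] = { 12, 14, 16, 17 };	/* 4K, 16K, 64K, 128K pages */

	for (int i = 0; i < 4; i++) {
		/* KMALLOC_SHIFT_HIGH is PAGE_SHIFT + 1 for these page sizes */
		long need = (long)nr_kmalloc_types * (page_shifts[i] + 1) *
			    sizeof_kmem_cache_cpu;
		/* proposed: shift 13 when LOCKDEP && PAGE_SIZE > 4KiB, else 12 */
		int shift = page_shifts[i] > 12 ? 13 : 12;
		long early = 20L << shift;

		printf("%3dKiB pages: need %8.3f KiB, early %3ld KiB -> %s\n",
		       1 << (page_shifts[i] - 10), need / 1024.0, early >> 10,
		       early >= need ? "ok" : "BUILD_BUG_ON");
	}
	return 0;
}

With the conditional applied, all four rows of the table fit into the
preallocated early area; with an unconditional shift of 12, the 16KiB,
64KiB and 128KiB rows trip the BUILD_BUG_ON().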
On Mon, Oct 7, 2024, at 10:43, Sebastian Andrzej Siewior wrote:
> On 2024-10-04 09:56:56 [+0000], Arnd Bergmann wrote:
> How bad is it, to have PERCPU_DYNAMIC_SIZE_SHIFT unconditionally set to
> 13? If it is bad could we restrict it with LOCKDEP and PAGE_SIZE > 4KiB?
>
> So maybe something like this:
>
> diff --git a/include/linux/percpu.h b/include/linux/percpu.h
> index b6321fc491598..52b5ea663b9f0 100644
> --- a/include/linux/percpu.h
> +++ b/include/linux/percpu.h
> @@ -41,7 +41,11 @@
>  	PCPU_MIN_ALLOC_SHIFT)
>
>  #ifdef CONFIG_RANDOM_KMALLOC_CACHES
> -#define PERCPU_DYNAMIC_SIZE_SHIFT 12
> +# if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PAGE_SIZE_4KB)
> +# define PERCPU_DYNAMIC_SIZE_SHIFT 13
> +# else
> +# define PERCPU_DYNAMIC_SIZE_SHIFT 12
> +#endif /* LOCKDEP and PAGE_SIZE > 4KiB */
>  #else
>  #define PERCPU_DYNAMIC_SIZE_SHIFT 10
>  #endif

I think that's fine. If you have lockdep and large page sizes, the
percpu memory area is entirely lost in the noise of the overhead you
already get. For your version:

Acked-by: Arnd Bergmann <arnd@arndb.de>
Reported-by: Arnd Bergmann <arnd@arndb.de>

Can you pick that up for rt fixes (if you already have a tree) or send
it to Andrew for the mm tree? Feel free to take my changelog text.

     Arnd
On 2024-10-07 10:59:36 [+0000], Arnd Bergmann wrote:
>
> Can you pick that up for rt fixes (if you already have a tree)
> or send it to Andrew for the mm tree? Feel free to take my
> changelog text.

I'll make a patch and send it to mm so akpm/Vlastimil can pick it up.
Thanks Arnd.

> Arnd

Sebastian