[PATCH v5 12/19] x86: Minimal SLAB alignment

Posted by Maciej Wieczor-Retman 1 month, 1 week ago
The 8-byte minimal SLAB alignment interferes with KASAN's 16-byte
granularity and causes a lot of out-of-bounds errors for unaligned
8-byte allocations.
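
As an aside (not part of the patch), the clash can be modelled with a small
userspace sketch. The granule size and the one-tag-per-granule bookkeeping
below are simplified assumptions about the tag-based mode, not actual KASAN
code.

/*
 * Simplified model: one shadow tag byte per 16-byte granule. Two 8-byte
 * objects packed into the same granule cannot both record their own tag,
 * so accesses to one of them look like tag mismatches.
 */
#include <stdint.h>
#include <stdio.h>

#define GRANULE 16

static uint8_t shadow[4];	/* one tag per 16-byte granule */
static uint8_t ptr_tag[64];	/* tag each object's pointer would carry */

static void assign(unsigned long addr, unsigned long size, uint8_t tag)
{
	unsigned long a;

	for (a = addr; a < addr + size; a += GRANULE)
		shadow[a / GRANULE] = tag;	/* last object wins the granule */
	for (a = addr; a < addr + size; a++)
		ptr_tag[a] = tag;
}

static int access_ok(unsigned long addr)
{
	return ptr_tag[addr] == shadow[addr / GRANULE];
}

int main(void)
{
	assign(0, 8, 0xAA);	/* 8-byte object A */
	assign(8, 8, 0xBB);	/* 8-byte object B shares A's granule */

	printf("A ok: %d, B ok: %d\n", access_ok(0), access_ok(8));
	/* prints "A ok: 0, B ok: 1" - A now reads as an invalid access */
	return 0;
}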

Compared to a kernel with KASAN disabled, the memory footprint
increases because all kmalloc-8 allocations are now served from
kmalloc-16, which has twice the object size. More importantly, when
compared to a kernel with generic KASAN enabled, there is no
difference: because of generic KASAN's redzones, the kmalloc-8 and
kmalloc-16 object sizes are the same (48 bytes). So raising the minimal
SLAB alignment for the tag-based mode has no negative impact relative
to the other software KASAN mode.
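
If one wanted to observe the size effect at runtime, a throwaway debug
module along these lines could do it (hypothetical, not part of the patch;
the module name and message are made up):

#include <linux/module.h>
#include <linux/printk.h>
#include <linux/slab.h>

static int __init minalign_demo_init(void)
{
	void *p = kmalloc(8, GFP_KERNEL);

	if (!p)
		return -ENOMEM;

	/* with a 16-byte minimal alignment this is expected to print 16 */
	pr_info("kmalloc(8) usable object size: %zu\n", ksize(p));

	kfree(p);
	return 0;
}

static void __exit minalign_demo_exit(void)
{
}

module_init(minalign_demo_init);
module_exit(minalign_demo_exit);
MODULE_LICENSE("GPL");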

Adjust the x86 minimal SLAB alignment to match the KASAN granule size.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Extend the patch message with some more context and impact
  information.

Changelog v3:
- Fix typo in patch message 4 -> 16.
- Change define location to arch/x86/include/asm/cache.h.

 arch/x86/include/asm/cache.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 69404eae9983..3232583b5487 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -21,4 +21,8 @@
 #endif
 #endif
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
 #endif /* _ASM_X86_CACHE_H */
-- 
2.50.1
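
For reference, a quick check of the arithmetic behind the new define; the
KASAN_SHADOW_SCALE_SHIFT value of 4 below is the software tag-based value
implied by the 16-byte granularity mentioned above, hard-coded here only so
the sketch stands alone:

/* standalone sketch, not the kernel header itself */
#define KASAN_SHADOW_SCALE_SHIFT 4
#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)

_Static_assert(ARCH_SLAB_MINALIGN == 16,
	       "minimal SLAB alignment matches the 16-byte KASAN granule");
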
Re: [PATCH v5 12/19] x86: Minimal SLAB alignment
Posted by Andrey Konovalov 3 weeks, 6 days ago
On Mon, Aug 25, 2025 at 10:29 PM Maciej Wieczor-Retman
<maciej.wieczor-retman@intel.com> wrote:
>
> The 8-byte minimal SLAB alignment interferes with KASAN's 16-byte
> granularity and causes a lot of out-of-bounds errors for unaligned
> 8-byte allocations.
>
> Compared to a kernel with KASAN disabled, the memory footprint
> increases because all kmalloc-8 allocations are now served from
> kmalloc-16, which has twice the object size. More importantly, when
> compared to a kernel with generic KASAN enabled, there is no
> difference: because of generic KASAN's redzones, the kmalloc-8 and
> kmalloc-16 object sizes are the same (48 bytes). So raising the minimal
> SLAB alignment for the tag-based mode has no negative impact relative
> to the other software KASAN mode.
>
> Adjust the x86 minimal SLAB alignment to match the KASAN granule size.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> ---
> Changelog v4:
> - Extend the patch message with some more context and impact
>   information.
>
> Changelog v3:
> - Fix typo in patch message 4 -> 16.
> - Change define location to arch/x86/include/asm/cache.h.
>
>  arch/x86/include/asm/cache.h | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
> index 69404eae9983..3232583b5487 100644
> --- a/arch/x86/include/asm/cache.h
> +++ b/arch/x86/include/asm/cache.h
> @@ -21,4 +21,8 @@
>  #endif
>  #endif
>
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
> +#endif
> +
>  #endif /* _ASM_X86_CACHE_H */
> --
> 2.50.1
>

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>