From: andrey.konovalov@linux.dev
To: Marco Elver, Mark Rutland
Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin,
    Vincenzo Frascino, kasan-dev@googlegroups.com, Andrew Morton,
    linux-mm@kvack.org, Catalin Marinas, Peter Collingbourne, Feng Tang,
    stable@vger.kernel.org, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-kernel@vger.kernel.org,
    Andrey Konovalov
Subject: [PATCH] kasan, slub: fix HW_TAGS zeroing with slub_debug
Date: Wed, 5 Jul 2023 14:44:02 +0200
Message-Id: <678ac92ab790dba9198f9ca14f405651b97c8502.1688561016.git.andreyknvl@google.com>

From: Andrey Konovalov

Commit 946fa0dbf2d8 ("mm/slub: extend redzone check to extra allocated
kmalloc space than requested") added precise kmalloc redzone poisoning
to the slub_debug functionality.

However, that commit did not account for HW_TAGS KASAN fully
initializing the object via its built-in memory initialization feature.
Even though HW_TAGS KASAN memory initialization contains special
handling for when slub_debug is enabled, it does not account for
in-object slub_debug redzones. As a result, HW_TAGS KASAN can overwrite
these redzones and cause false-positive slub_debug reports.

To fix the issue, avoid HW_TAGS KASAN memory initialization entirely
when slub_debug is enabled. Implement this by moving the
__slub_debug_enabled check to slab_post_alloc_hook. Common slab code
seems like a more appropriate place for a slub_debug check anyway.
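To make the failure mode concrete, here is a minimal userspace sketch of
the interaction (the 16-byte granule matches MTE, but the redzone
pattern, sizes, and names are illustrative stand-ins rather than the
kernel's exact layout):

#include <stdio.h>
#include <string.h>

#define KASAN_GRANULE_SIZE 16   /* HW_TAGS tags memory per 16-byte granule */
#define REDZONE_BYTE 0xcc       /* stand-in for slub_debug's redzone pattern */

int main(void)
{
	/* kmalloc(10) from a 16-byte cache: 10 usable bytes, 6 redzone bytes */
	unsigned char object[16];
	size_t orig_size = 10;

	/* slub_debug poisons the in-object redzone past the requested size */
	memset(object + orig_size, REDZONE_BYTE, sizeof(object) - orig_size);

	/*
	 * HW_TAGS integrated init works at granule granularity, so zeroing
	 * the 10 requested bytes rounds up to the whole 16-byte granule ...
	 */
	size_t init_size = (orig_size + KASAN_GRANULE_SIZE - 1) &
			   ~(size_t)(KASAN_GRANULE_SIZE - 1);
	memset(object, 0, init_size);

	/* ... and the redzone check now sees corrupted (zeroed) bytes */
	for (size_t i = orig_size; i < sizeof(object); i++)
		if (object[i] != REDZONE_BYTE)
			printf("redzone overwritten at offset %zu\n", i);
	return 0;
}

Run as-is, the sketch reports all six trailing redzone bytes as
overwritten; in the kernel, the slub_debug consistency checks flag the
same corruption as a false-positive report.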
Fixes: 946fa0dbf2d8 ("mm/slub: extend redzone check to extra allocated kmalloc space than requested")
Cc: stable@vger.kernel.org
Reported-by: Mark Rutland
Signed-off-by: Andrey Konovalov
Acked-by: Marco Elver
Acked-by: Vlastimil Babka
Reported-by: Will Deacon
Tested-by: Will Deacon
---
 mm/kasan/kasan.h | 12 ------------
 mm/slab.h        | 16 ++++++++++++++--
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b799f11e45dc..2e973b36fe07 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -466,18 +466,6 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
-	/*
-	 * Explicitly initialize the memory with the precise object size to
-	 * avoid overwriting the slab redzone. This disables initialization in
-	 * the arch code and may thus lead to performance penalty. This penalty
-	 * does not affect production builds, as slab redzones are not enabled
-	 * there.
-	 */
-	if (__slub_debug_enabled() &&
-	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
-		init = false;
-		memzero_explicit((void *)addr, size);
-	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
diff --git a/mm/slab.h b/mm/slab.h
index 6a5633b25eb5..9c0e09d0f81f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -723,6 +723,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					unsigned int orig_size)
 {
 	unsigned int zero_size = s->object_size;
+	bool kasan_init = init;
 	size_t i;
 
 	flags &= gfp_allowed_mask;
@@ -739,6 +740,17 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	    (s->flags & SLAB_KMALLOC))
 		zero_size = orig_size;
 
+	/*
+	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * into KASAN and instead zero out the memory via the memset below with
+	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
+	 * cause false-positive reports. This does not lead to a performance
+	 * penalty on production builds, as slub_debug is not intended to be
+	 * enabled there.
+	 */
+	if (__slub_debug_enabled())
+		kasan_init = false;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -747,8 +759,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
 	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], flags, init);
-		if (p[i] && init && !kasan_has_integrated_init())
+		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
+		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
 			memset(p[i], 0, zero_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
-- 
2.25.1
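For completeness, the initialization decision that the patch leaves in
slab_post_alloc_hook can be condensed into a standalone truth-table
sketch (simplified and hypothetical: it ignores the p[i] NULL check and
the zero_size computation, and the flag names merely mirror the kernel
helpers):

#include <stdbool.h>
#include <stdio.h>

/* One row of the truth table for the post-fix zeroing decision. */
static void decide(bool init, bool slub_debug, bool integrated_init)
{
	bool kasan_init = init && !slub_debug;      /* the new check */
	bool kasan_zeroes = kasan_init && integrated_init;
	bool memset_zeroes = init && !kasan_zeroes; /* fallback memset path */

	printf("init=%d slub_debug=%d integrated=%d -> kasan zeroes: %d, memset zeroes: %d\n",
	       init, slub_debug, integrated_init, kasan_zeroes, memset_zeroes);
}

int main(void)
{
	for (int init = 0; init <= 1; init++)
		for (int dbg = 0; dbg <= 1; dbg++)
			for (int hw = 0; hw <= 1; hw++)
				decide(init, dbg, hw);
	return 0;
}

For every combination where init is set, exactly one of the two paths
zeroes the object, and with slub_debug enabled it is always the memset
path, whose zero_size excludes the in-object redzone.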