Date: Wed, 10 Dec 2025 17:30:14 +0000
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
    Vincenzo Frascino, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, x86@kernel.org, "H. Peter Anvin", Andrew Morton
From: Maciej Wieczor-Retman
Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman, kasan-dev@googlegroups.com,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v7 14/15] x86/kasan: Logical bit shift for kasan_mem_to_shadow
Message-ID: <4dd0d4481bbd89d04bcc85a37a1b9d4ec08522c4.1765386422.git.m.wieczorretman@pm.me>

From: Maciej Wieczor-Retman

The tag-based KASAN mode adopts an arithmetic bit shift to convert a
memory address to a shadow memory address. While it makes a lot of
sense on arm64, it doesn't work well for all cases on x86: either the
non-canonical hook becomes quite complex for different paging levels,
or the inline mode would need a lot more adjustments. Thus the best
working scheme is the logical bit shift and non-canonical shadow
offset that x86 uses for generic KASAN, adjusted for the increased
granularity from 8 to 16 bytes.

Add an arch-specific implementation of kasan_mem_to_shadow() that uses
the logical bit shift. The non-canonical hook tries to calculate
whether an address came from kasan_mem_to_shadow().
First it checks whether this address fits into the legal set of values
possible to output from the mem to shadow function.

Tie both generic and tag-based x86 KASAN modes to the address range
check associated with generic KASAN.

Signed-off-by: Maciej Wieczor-Retman
---
Changelog v7:
- Redo the patch message and add a comment to __kasan_mem_to_shadow()
  to provide a better explanation of why x86 doesn't work well with
  the arithmetic bit shift approach (Marco).
Changelog v4:
- Add this patch to the series.

 arch/x86/include/asm/kasan.h | 15 +++++++++++++++
 mm/kasan/report.c            |  5 +++--
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 6e083d45770d..395e133d551d 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -49,6 +49,21 @@
 #include

 #ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * Using the non-arch specific implementation of __kasan_mem_to_shadow() with an
+ * arithmetic bit shift can cause high code complexity in KASAN's non-canonical
+ * hook for x86 or might not work for some paging level and KASAN mode
+ * combinations. The inline mode compiler support could also suffer from higher
+ * complexity for no specific benefit. Therefore the generic mode's logical
+ * shift implementation is used.
+ */
+static inline void *__kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT) +
+		KASAN_SHADOW_OFFSET;
+}
+
+#define kasan_mem_to_shadow(addr) __kasan_mem_to_shadow(addr)
 #define __tag_shifted(tag)	FIELD_PREP(GENMASK_ULL(60, 57), tag)
 #define __tag_reset(addr)	(sign_extend64((u64)(addr), 56))
 #define __tag_get(addr)	((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index b5beb1b10bd2..db6a9a3d01b2 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,13 +642,14 @@ void kasan_non_canonical_hook(unsigned long addr)
 	const char *bug_type;

 	/*
-	 * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+	 * For Generic KASAN and Software Tag-Based mode on the x86
+	 * architecture, kasan_mem_to_shadow() uses the logical right shift
 	 * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
 	 * both x86 and arm64). Thus, the possible shadow addresses (even for
 	 * bogus pointers) belong to a single contiguous region that is the
 	 * result of kasan_mem_to_shadow() applied to the whole address space.
 	 */
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64)) {
 		if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
 		    addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
 			return;
-- 
2.52.0