From: Maciej Wieczor-Retman
Date: Wed, 10 Dec 2025 17:28:29 +0000
Subject: [PATCH v7 01/15] kasan: sw_tags: Use arithmetic shift for shadow computation

From: Samuel Holland

Currently, kasan_mem_to_shadow() uses a logical right shift, which turns
canonical kernel addresses into non-canonical addresses by clearing the
high KASAN_SHADOW_SCALE_SHIFT bits. The value of KASAN_SHADOW_OFFSET is
then chosen so that the addition results in a canonical address for the
shadow memory.

For KASAN_GENERIC, this shift/add combination is ABI with the compiler,
because KASAN_SHADOW_OFFSET is used in compiler-generated inline tag
checks[1], which must only attempt to dereference canonical addresses.

However, for KASAN_SW_TAGS there is some freedom to change the algorithm
without breaking the ABI. Because TBI is enabled for kernel addresses,
the top bits of shadow memory addresses computed during tag checks are
irrelevant, and so likewise are the top bits of KASAN_SHADOW_OFFSET.
This is demonstrated by the fact that LLVM uses a logical right shift
in the tag check fast path[2] but a sbfx (signed bitfield extract)
instruction in the slow path[3] without causing any issues.

Using an arithmetic shift in kasan_mem_to_shadow() provides a number of
benefits:

1) The memory layout doesn't change but is easier to understand.
KASAN_SHADOW_OFFSET becomes a canonical memory address, and the shifted
pointer becomes a negative offset, so KASAN_SHADOW_OFFSET ==
KASAN_SHADOW_END regardless of the shift amount or the size of the
virtual address space.

2) KASAN_SHADOW_OFFSET becomes a simpler constant, requiring only one
instruction to load instead of two. Since it must be loaded in each
function with a tag check, this decreases kernel text size by 0.5%.

3) This shift and the sign extension from kasan_reset_tag() can be
combined into a single sbfx instruction. When this same algorithm change
is applied to the compiler, it removes an instruction from each inline
tag check, further reducing kernel text size by an additional 4.6%.

These benefits extend to other architectures as well. On RISC-V, where
the baseline ISA has neither shifted addition nor an equivalent to the
sbfx instruction, loading KASAN_SHADOW_OFFSET is reduced from 3 to 2
instructions, and kasan_mem_to_shadow(kasan_reset_tag(addr)) similarly
combines two consecutive right shifts.

Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1316 [1]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#L895 [2]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp#L669 [3]
Signed-off-by: Samuel Holland
Co-developed-by: Maciej Wieczor-Retman
Signed-off-by: Maciej Wieczor-Retman
Acked-by: Catalin Marinas
---
Changelog v7: (Maciej)
- Change UL to ULL in report.c to fix some compilation warnings.

Changelog v6: (Maciej)
- Add Catalin's acked-by.
- Move x86 gdb snippet here from the last patch.

Changelog v5: (Maciej)
- (u64) -> (unsigned long) in report.c

Changelog v4: (Maciej)
- Revert x86 to signed mem_to_shadow mapping.
- Remove last two paragraphs since they were just poorer duplication of
  the comments in kasan_non_canonical_hook().

Changelog v3: (Maciej)
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
  reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
  account.
- Made changes to the kasan_non_canonical_hook() according to upstream
  discussion. Settled on overflow on both ranges and separate checks for
  x86 and arm.

Changelog v2: (Maciej)
- Correct address range that's checked in kasan_non_canonical_hook().
  Adjust the comment inside.
- Remove part of comment from arch/arm64/include/asm/memory.h.
- Append patch message paragraph about the overflow in
  kasan_non_canonical_hook().

 Documentation/arch/arm64/kasan-offsets.sh |  8 +++--
 arch/arm64/Kconfig                        | 10 +++----
 arch/arm64/include/asm/memory.h           | 14 ++++++++-
 arch/arm64/mm/kasan_init.c                |  7 +++--
 include/linux/kasan.h                     | 10 +++++--
 mm/kasan/report.c                         | 36 ++++++++++++++++++++---
 scripts/gdb/linux/kasan.py                |  5 +++-
 scripts/gdb/linux/mm.py                   |  5 ++--
 8 files changed, 76 insertions(+), 19 deletions(-)

diff --git a/Documentation/arch/arm64/kasan-offsets.sh b/Documentation/arch/arm64/kasan-offsets.sh
index 2dc5f9e18039..ce777c7c7804 100644
--- a/Documentation/arch/arm64/kasan-offsets.sh
+++ b/Documentation/arch/arm64/kasan-offsets.sh
@@ -5,8 +5,12 @@

 print_kasan_offset () {
 	printf "%02d\t" $1
-	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
-			- (1 << (64 - 32 - $2)) ))
+	if [[ $2 -ne 4 ]]; then
+		printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+				- (1 << (64 - 32 - $2)) ))
+	else
+		printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) ))
+	fi
 }

 echo KASAN_SHADOW_SCALE_SHIFT = 3

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6663ffd23f25..ac50ba2d760b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -433,11 +433,11 @@ config KASAN_SHADOW_OFFSET
 	default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
 	default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
 	default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
-	default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
-	default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
-	default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
-	default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
+	default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
+	default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
 	default 0xffffffffffffffff

 config UNWIND_TABLES

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index f1505c4acb38..7bbebde59a75 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -89,7 +89,15 @@
  *
  * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
  * the upper bound of possible virtual kernel memory addresses UL(1) << 64
- * according to the mapping formula.
+ * according to the mapping formula. For Generic KASAN, the address in the
+ * mapping formula is treated as unsigned (part of the compiler's ABI), so the
+ * end of the shadow memory region is at a large positive offset from
+ * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
+ * formula is treated as signed. Since all kernel addresses are negative, they
+ * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
+ * itself the end of the shadow memory region. (User pointers are positive and
+ * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
+ * not allocated for them.)
  *
  * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
The sha= dow * memory start must map to the lowest possible kernel virtual memory addr= ess @@ -100,7 +108,11 @@ */ #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL) +#ifdef CONFIG_KASAN_GENERIC #define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KAS= AN_SHADOW_OFFSET) +#else +#define KASAN_SHADOW_END KASAN_SHADOW_OFFSET +#endif #define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (UL(1) << ((va) - KASA= N_SHADOW_SCALE_SHIFT))) #define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual) #define PAGE_END KASAN_SHADOW_START diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c index abeb81bf6ebd..937f6eb8115b 100644 --- a/arch/arm64/mm/kasan_init.c +++ b/arch/arm64/mm/kasan_init.c @@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr) /* The early shadow maps everything to a single page of zeroes */ asmlinkage void __init kasan_early_init(void) { - BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=3D - KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT))); + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) + BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=3D + KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT))); + else + BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=3D KASAN_SHADOW_END); BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN)); BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN)); BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN)); diff --git a/include/linux/kasan.h b/include/linux/kasan.h index d12e1a5f5a9a..670de5427c32 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -61,8 +61,14 @@ int kasan_populate_early_shadow(const void *shadow_start, #ifndef kasan_mem_to_shadow static inline void *kasan_mem_to_shadow(const void *addr) { - return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT) - + KASAN_SHADOW_OFFSET; + void *scaled; + + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) + scaled =3D (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT); + else + scaled =3D (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT); + + return KASAN_SHADOW_OFFSET + scaled; } #endif =20 diff --git a/mm/kasan/report.c b/mm/kasan/report.c index 62c01b4527eb..b5beb1b10bd2 100644 --- a/mm/kasan/report.c +++ b/mm/kasan/report.c @@ -642,11 +642,39 @@ void kasan_non_canonical_hook(unsigned long addr) const char *bug_type; =20 /* - * All addresses that came as a result of the memory-to-shadow mapping - * (even for bogus pointers) must be >=3D KASAN_SHADOW_OFFSET. + * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift + * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on + * both x86 and arm64). Thus, the possible shadow addresses (even for + * bogus pointers) belong to a single contiguous region that is the + * result of kasan_mem_to_shadow() applied to the whole address space. */ - if (addr < KASAN_SHADOW_OFFSET) - return; + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) { + if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) || + addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL))) + return; + } + + /* + * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the + * arithmetic shift. Normally, this would make checking for a possible + * shadow address complicated, as the shadow address computation + * operation would overflow only for some memory addresses. 
+	 * to the chosen KASAN_SHADOW_OFFSET values and the fact the
+	 * kasan_mem_to_shadow() only operates on pointers with the tag reset,
+	 * the overflow always happens.
+	 *
+	 * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
+	 * possible shadow addresses belong to a region that is the result of
+	 * kasan_mem_to_shadow() applied to the memory range
+	 * [0xFF00000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
+	 * resulting possible shadow region is contiguous, as the overflow
+	 * happens for both 0xFF00000000000000 and 0xFFFFFFFFFFFFFFFF.
+	 */
+	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) && IS_ENABLED(CONFIG_ARM64)) {
+		if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0xFFULL << 56)) ||
+		    addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
+			return;
+	}

 	orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);

diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index 56730b3fde0b..4b86202b155f 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -7,7 +7,8 @@
 #

 import gdb
-from linux import constants, mm
+from linux import constants, utils, mm
+from ctypes import c_int64 as s64

 def help():
     t = """Usage: lx-kasan_mem_to_shadow [Hex memory addr]
@@ -39,6 +40,8 @@ class KasanMemToShadow(gdb.Command):
         else:
             help()
     def kasan_mem_to_shadow(self, addr):
+        if constants.CONFIG_KASAN_SW_TAGS and not utils.is_target_arch('x86'):
+            addr = s64(addr)
         return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET

 KasanMemToShadow()

diff --git a/scripts/gdb/linux/mm.py b/scripts/gdb/linux/mm.py
index 7571aebbe650..2e63f3dedd53 100644
--- a/scripts/gdb/linux/mm.py
+++ b/scripts/gdb/linux/mm.py
@@ -110,12 +110,13 @@ class aarch64_page_ops():
         self.KERNEL_END = gdb.parse_and_eval("_end")

         if constants.LX_CONFIG_KASAN_GENERIC or constants.LX_CONFIG_KASAN_SW_TAGS:
+            self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
             if constants.LX_CONFIG_KASAN_GENERIC:
                 self.KASAN_SHADOW_SCALE_SHIFT = 3
+                self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
             else:
                 self.KASAN_SHADOW_SCALE_SHIFT = 4
-            self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
-            self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
+                self.KASAN_SHADOW_END = self.KASAN_SHADOW_OFFSET
             self.PAGE_END = self.KASAN_SHADOW_END - (1 << (self.vabits_actual - self.KASAN_SHADOW_SCALE_SHIFT))
         else:
             self.PAGE_END = self._PAGE_END(self.VA_BITS_MIN)
--
2.52.0
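As a quick illustration of benefit 1), here is a minimal user-space C
sketch of the signed mapping (SCALE_SHIFT and SHADOW_OFFSET are made-up
stand-ins for the Kconfig values, not the kernel's macros):

#include <stdint.h>
#include <stdio.h>

#define SCALE_SHIFT	4			/* KASAN_SHADOW_SCALE_SHIFT stand-in */
#define SHADOW_OFFSET	0xffff800000000000UL	/* doubles as the shadow end */

/* arithmetic shift: negative (kernel) addresses land below SHADOW_OFFSET */
static uint64_t shadow_signed(uint64_t addr)
{
	return (uint64_t)((int64_t)addr >> SCALE_SHIFT) + SHADOW_OFFSET;
}

int main(void)
{
	uint64_t kaddr = 0xffff000012345678UL;

	/* prints a shadow address just below SHADOW_OFFSET */
	printf("shadow(%#llx) = %#llx\n", (unsigned long long)kaddr,
	       (unsigned long long)shadow_signed(kaddr));
	return 0;
}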
From: Maciej Wieczor-Retman
Date: Wed, 10 Dec 2025 17:28:43 +0000
Subject: [PATCH v7 02/15] kasan: arm64: x86: Make special tags arch specific

From: Samuel Holland

KASAN's tag-based mode defines multiple special tag values. They're
reserved for:
- Native kernel value. On arm64 it's 0xFF and it causes an early return
  in the tag checking function.
- Invalid value. 0xFE marks an area as freed / unallocated. It's also
  the value that is used to initialize regions of shadow memory.
- Min and max values. 0xFD is the highest value that can be randomly
  generated for a new tag. 0 is the minimal value with the exception of
  arm64's hardware mode where it is equal to 0xF0.

A metadata macro is also defined:
- Tag width equal to 8.

Tag-based mode on x86 is going to use 4-bit-wide tags, so all the above
values need to be changed accordingly.

Make the tag width and the native kernel tag arch specific for x86 and
arm64. Base the invalid tag value and the max value on the native kernel
tag since they follow the same pattern on both mentioned architectures.
Also generalize KASAN_SHADOW_INIT and the 0xff used in the various
page_kasan_tag* helpers.
Give KASAN_TAG_MIN the default value of zero, and move the special value
for hw_tags arm64 to its arch specific kasan-tags.h.

Signed-off-by: Samuel Holland
Co-developed-by: Maciej Wieczor-Retman
Signed-off-by: Maciej Wieczor-Retman
---
Changelog v7:
- Reorder defines of arm64 tag width to prevent redefinition warnings.
- Remove KASAN_TAG_MASK so it's only defined in mmzone.h (Andrey
  Konovalov)
- Merge the 'support tag widths less than 8 bits' with this patch since
  they do similar things and overwrite each other. (Alexander)

Changelog v6:
- Add hardware tags KASAN_TAG_WIDTH value to the arm64 arch file.
- Keep KASAN_TAG_MASK in the mmzone.h.
- Remove ifndef from KASAN_SHADOW_INIT.

Changelog v5:
- Move KASAN_TAG_MIN to the arm64 kasan-tags.h for the hardware KASAN
  mode case.

Changelog v4:
- Move KASAN_TAG_MASK to kasan-tags.h.

Changelog v2:
- Remove risc-v from the patch.

 MAINTAINERS                         |  2 +-
 arch/arm64/include/asm/kasan-tags.h | 14 ++++++++++++++
 arch/arm64/include/asm/kasan.h      |  2 --
 arch/arm64/include/asm/uaccess.h    |  1 +
 arch/x86/include/asm/kasan-tags.h   |  9 +++++++++
 include/linux/kasan-tags.h          | 19 ++++++++++++++-----
 include/linux/kasan.h               |  3 +--
 include/linux/mm.h                  |  6 +++---
 include/linux/page-flags-layout.h   |  9 +--------
 9 files changed, 44 insertions(+), 21 deletions(-)
 create mode 100644 arch/arm64/include/asm/kasan-tags.h
 create mode 100644 arch/x86/include/asm/kasan-tags.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 7bf6385efe04..a591598cc4b5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13420,7 +13420,7 @@ L: kasan-dev@googlegroups.com
 S: Maintained
 B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
 F: Documentation/dev-tools/kasan.rst
-F: arch/*/include/asm/*kasan.h
+F: arch/*/include/asm/*kasan*.h
 F: arch/*/mm/kasan_init*
 F: include/linux/kasan*.h
 F: lib/Kconfig.kasan

diff --git a/arch/arm64/include/asm/kasan-tags.h b/arch/arm64/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..259952677443
--- /dev/null
+++ b/arch/arm64/include/asm/kasan-tags.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN		0xF0 /* minimum value for random tags */
+#define KASAN_TAG_WIDTH		4
+#else
+#define KASAN_TAG_WIDTH		8
+#endif
+
+#endif /* ASM_KASAN_TAGS_H */

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index e1b57c13f8a4..d2841e0fb908 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -6,8 +6,6 @@

 #include
 #include
-#include
-#include

 #define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)
 #define arch_kasan_reset_tag(addr)	__tag_reset(addr)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 6490930deef8..ccd41a39e3a1 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include

diff --git a/arch/x86/include/asm/kasan-tags.h b/arch/x86/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..68ba385bc75c
--- /dev/null
+++ b/arch/x86/include/asm/kasan-tags.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL	0xF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH		4
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index 4f85f562512c..ad5c11950233 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,13 +2,22 @@
 #ifndef _LINUX_KASAN_TAGS_H
 #define _LINUX_KASAN_TAGS_H

+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+#include <asm/kasan-tags.h>
+#endif
+
+#ifndef KASAN_TAG_WIDTH
+#define KASAN_TAG_WIDTH	0
+#endif
+
+#ifndef KASAN_TAG_KERNEL
 #define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
-#define KASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
-#define KASAN_TAG_MAX		0xFD /* maximum value for random tags */
+#endif
+
+#define KASAN_TAG_INVALID	(KASAN_TAG_KERNEL - 1) /* inaccessible memory tag */
+#define KASAN_TAG_MAX		(KASAN_TAG_KERNEL - 2) /* maximum value for random tags */

-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN		0xF0 /* minimum value for random tags */
-#else
+#ifndef KASAN_TAG_MIN
 #define KASAN_TAG_MIN		0x00 /* minimum value for random tags */
 #endif

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 670de5427c32..5cb21b90a2ec 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -39,8 +39,7 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
 /* Software KASAN implementations use shadow memory. */

 #ifdef CONFIG_KASAN_SW_TAGS
-/* This matches KASAN_TAG_INVALID. */
-#define KASAN_SHADOW_INIT 0xFE
+#define KASAN_SHADOW_INIT KASAN_TAG_INVALID
 #else
 #define KASAN_SHADOW_INIT 0
 #endif

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8e9268cf929e..b61090a80e3f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1762,7 +1762,7 @@ static inline u8 page_kasan_tag(const struct page *page)

 	if (kasan_enabled()) {
 		tag = (page->flags.f >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
-		tag ^= 0xff;
+		tag ^= KASAN_TAG_KERNEL;
 	}

 	return tag;
@@ -1775,7 +1775,7 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
 	if (!kasan_enabled())
 		return;

-	tag ^= 0xff;
+	tag ^= KASAN_TAG_KERNEL;
 	old_flags = READ_ONCE(page->flags.f);
 	do {
 		flags = old_flags;
@@ -1794,7 +1794,7 @@ static inline void page_kasan_tag_reset(struct page *page)

 static inline u8 page_kasan_tag(const struct page *page)
 {
-	return 0xff;
+	return KASAN_TAG_KERNEL;
 }

 static inline void page_kasan_tag_set(struct page *page, u8 tag) { }

diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 760006b1c480..b2cc4cb870e0 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -3,6 +3,7 @@
 #define PAGE_FLAGS_LAYOUT_H

 #include
+#include <linux/kasan-tags.h>
 #include

 /*
@@ -72,14 +73,6 @@
 #define NODE_NOT_IN_PAGE_FLAGS	1
 #endif

-#if defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_TAG_WIDTH 8
-#elif defined(CONFIG_KASAN_HW_TAGS)
-#define KASAN_TAG_WIDTH 4
-#else
-#define KASAN_TAG_WIDTH 0
-#endif
-
 #ifdef CONFIG_NUMA_BALANCING
 #define LAST__PID_SHIFT 8
 #define LAST__PID_MASK  ((1 << LAST__PID_SHIFT)-1)
--
2.52.0
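The derivation pattern the patch introduces can be checked in isolation;
a small user-space sketch with the 4-bit x86 values quoted above (the
defines mirror the patch, the program itself is only illustrative):

#include <assert.h>

#define KASAN_TAG_KERNEL	0xF			/* native kernel pointers tag */
#define KASAN_TAG_INVALID	(KASAN_TAG_KERNEL - 1)	/* inaccessible memory tag */
#define KASAN_TAG_MAX		(KASAN_TAG_KERNEL - 2)	/* maximum random tag */

int main(void)
{
	/* same pattern that yields 0xFE/0xFD from arm64's 0xFF kernel tag */
	assert(KASAN_TAG_INVALID == 0xE);
	assert(KASAN_TAG_MAX == 0xD);
	return 0;
}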
From: Maciej Wieczor-Retman
Date: Wed, 10 Dec 2025 17:28:51 +0000
Subject: [PATCH v7 03/15] kasan: Fix inline mode for x86 tag-based mode

From: Maciej Wieczor-Retman

The LLVM compiler uses the hwasan-instrument-with-calls parameter to set
up inline or outline mode in tag-based KASAN. If zeroed, the
instrumentation implementation is pasted into each relevant location
along with KASAN related constants during compilation. If set to one,
all function instrumentation is done with function calls instead.

The default hwasan-instrument-with-calls value for the x86 architecture
in the compiler is "1", which is not true for other architectures.
Because of this, enabling inline mode in software tag-based KASAN
doesn't work on x86, as the kernel script doesn't zero out the parameter
and always sets up the outline mode.

Explicitly zero out hwasan-instrument-with-calls when enabling inline
mode in tag-based KASAN.
Signed-off-by: Maciej Wieczor-Retman
Reviewed-by: Andrey Konovalov
Reviewed-by: Alexander Potapenko
---
Changelog v7:
- Add Alexander's Reviewed-by tag.

Changelog v6:
- Add Andrey's Reviewed-by tag.

Changelog v3:
- Add this patch to the series.

 scripts/Makefile.kasan | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 0ba2aac3b8dc..e485814df3e9 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -76,8 +76,11 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress
 RUSTFLAGS_KASAN := -Zsanitizer=kernel-hwaddress \
		   -Zsanitizer-recover=kernel-hwaddress

+# LLVM sets hwasan-instrument-with-calls to 1 on x86 by default. Set it to 0
+# when inline mode is enabled.
 ifdef CONFIG_KASAN_INLINE
	kasan_params += hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
+	kasan_params += hwasan-instrument-with-calls=0
 else
	kasan_params += hwasan-instrument-with-calls=1
 endif
--
2.52.0
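For intuition about what the parameter toggles, a standalone C model of
the check that inline mode pastes at each access (the single shadow byte
and the bits-60:57 tag layout are simplifying assumptions for the
example, not compiler output):

#include <stdint.h>
#include <stdio.h>

static uint8_t shadow_byte;	/* stands in for one byte of shadow memory */

/* inline mode open-codes this comparison at every instrumented access;
 * outline mode instead emits a call to a checking helper function */
static int access_ok(uint64_t addr)
{
	uint8_t ptr_tag = (addr >> 57) & 0xf;

	return ptr_tag == shadow_byte;
}

int main(void)
{
	shadow_byte = 0x7;
	printf("%d\n", access_ok((0x7ULL << 57) | 0x1000));	/* 1: tags match */
	printf("%d\n", access_ok((0x3ULL << 57) | 0x1000));	/* 0: mismatch */
	return 0;
}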
Peter Anvin" , Andrew Morton , David Hildenbrand , Lorenzo Stoakes , "Liam R. Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Axel Rasmussen , Yuanchu Xie , Wei Xu From: Maciej Wieczor-Retman Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v7 04/15] x86/kasan: Add arch specific kasan functions Message-ID: <406416dea492be82578c2cf4ee70e45d98200081.1765386422.git.m.wieczorretman@pm.me> In-Reply-To: References: Feedback-ID: 164464600:user:proton X-Pm-Message-ID: 5a34045bd28f1283c14c644ca9f08dc44d52c0f8 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Maciej Wieczor-Retman KASAN's software tag-based mode needs multiple macros/functions to handle tag and pointer interactions - to set, retrieve and reset tags from the top bits of a pointer. Mimic functions currently used by arm64 but change the tag's position to bits [60:57] in the pointer. Signed-off-by: Maciej Wieczor-Retman --- Changelog v7: - Add KASAN_TAG_BYTE_MASK to avoid circular includes and avoid removing KASAN_TAG_MASK from mmzone.h. - Remove Andrey's Acked-by tag. Changelog v6: - Remove empty line after ifdef CONFIG_KASAN_SW_TAGS - Add ifdef 64 bit to avoid problems in vdso32. - Add Andrey's Acked-by tag. Changelog v4: - Rewrite __tag_set() without pointless casts and make it more readable. Changelog v3: - Reorder functions so that __tag_*() etc are above the arch_kasan_*() ones. - Remove CONFIG_KASAN condition from __tag_set() arch/x86/include/asm/kasan.h | 42 ++++++++++++++++++++++++++++++++++-- include/linux/kasan-tags.h | 2 ++ include/linux/mmzone.h | 2 +- 3 files changed, 43 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h index d7e33c7f096b..eab12527ed7f 100644 --- a/arch/x86/include/asm/kasan.h +++ b/arch/x86/include/asm/kasan.h @@ -3,6 +3,8 @@ #define _ASM_X86_KASAN_H =20 #include +#include +#include #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL) #define KASAN_SHADOW_SCALE_SHIFT 3 =20 @@ -24,8 +26,43 @@ KASAN_SHADOW_SCALE_SHIFT))) =20 #ifndef __ASSEMBLER__ +#include +#include +#include + +#ifdef CONFIG_KASAN_SW_TAGS +#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag) +#define __tag_reset(addr) (sign_extend64((u64)(addr), 56)) +#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr)) +#else +#define __tag_shifted(tag) 0UL +#define __tag_reset(addr) (addr) +#define __tag_get(addr) 0 +#endif /* CONFIG_KASAN_SW_TAGS */ + +#ifdef CONFIG_64BIT +static inline void *__tag_set(const void *__addr, u8 tag) +{ + u64 addr =3D (u64)__addr; + + addr &=3D ~__tag_shifted(KASAN_TAG_BYTE_MASK); + addr |=3D __tag_shifted(tag & KASAN_TAG_BYTE_MASK); + + return (void *)addr; +} +#else +static inline void *__tag_set(void *__addr, u8 tag) +{ + return __addr; +} +#endif + +#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag) +#define arch_kasan_reset_tag(addr) __tag_reset(addr) +#define arch_kasan_get_tag(addr) __tag_get(addr) =20 #ifdef CONFIG_KASAN + void __init kasan_early_init(void); void __init kasan_init(void); void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid= ); @@ -34,8 +71,9 @@ static inline void kasan_early_init(void) { } static inline void kasan_init(void) { } static inline void kasan_populate_shadow_for_vaddr(void *va, 
						   size_t size, int nid) { }
-#endif

-#endif
+#endif /* CONFIG_KASAN */
+
+#endif /* __ASSEMBLER__ */

 #endif

diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index ad5c11950233..e4f26bec3673 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -10,6 +10,8 @@
 #define KASAN_TAG_WIDTH	0
 #endif

+#define KASAN_TAG_BYTE_MASK ((1UL << KASAN_TAG_WIDTH) - 1)
+
 #ifndef KASAN_TAG_KERNEL
 #define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
 #endif

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7fb7331c5725..aa35f8331a4b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1181,7 +1181,7 @@ static inline bool zone_is_empty(const struct zone *zone)
 #define NODES_MASK		((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)
 #define LAST_CPUPID_MASK	((1UL << LAST_CPUPID_SHIFT) - 1)
-#define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
+#define KASAN_TAG_MASK		KASAN_TAG_BYTE_MASK
 #define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)

 static inline enum zone_type memdesc_zonenum(memdesc_flags_t flags)
--
2.52.0
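A user-space model of the three helpers, with FIELD_PREP()/GENMASK_ULL()
and sign_extend64() open-coded (a sketch for illustration only, not the
kernel header):

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT	57
#define TAG_MASK	0xfULL

static uint64_t tag_set(uint64_t addr, uint8_t tag)
{
	addr &= ~(TAG_MASK << TAG_SHIFT);
	return addr | (((uint64_t)tag & TAG_MASK) << TAG_SHIFT);
}

static uint64_t tag_reset(uint64_t addr)
{
	/* sign-extend from bit 56, mirroring sign_extend64(addr, 56) */
	return (uint64_t)((int64_t)(addr << 7) >> 7);
}

static uint8_t tag_get(uint64_t addr)
{
	return (addr >> TAG_SHIFT) & TAG_MASK;
}

int main(void)
{
	uint64_t p = tag_set(0xffff888000001000ULL, 0x5);

	printf("tagged=%#llx tag=%#x untagged=%#llx\n",
	       (unsigned long long)p, tag_get(p),
	       (unsigned long long)tag_reset(p));
	return 0;
}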
From: Maciej Wieczor-Retman
Date: Wed, 10 Dec 2025 17:29:14 +0000
Subject: [PATCH v7 05/15] x86/mm: Reset tag for virtual to physical address conversions

From: Maciej Wieczor-Retman

Any place where pointer arithmetic is used to convert a virtual address
into a physical one can raise errors if the virtual address is tagged.

Reset the pointer's tag by sign extending the tag bits in macros that do
pointer arithmetic in address conversions. There will be no change in
compiled code with KASAN disabled since the compiler will optimize the
__tag_reset() out.

Signed-off-by: Maciej Wieczor-Retman
Acked-by: Alexander Potapenko
---
Changelog v7:
- Add Alexander's Acked-by tag.

Changelog v5:
- Move __tag_reset() calls into __phys_addr_nodebug() and
  __virt_addr_valid() instead of calling it on the arguments of higher
  level functions.

Changelog v4:
- Simplify page_to_virt() by removing pointless casts.
- Remove change in __is_canonical_address() because it's taken care of
  in a later patch due to a LAM compatible definition of canonical.

 arch/x86/include/asm/page.h    | 8 ++++++++
 arch/x86/include/asm/page_64.h | 1 +
 arch/x86/mm/physaddr.c         | 2 ++
 3 files changed, 11 insertions(+)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 9265f2fca99a..bcf5cad3da36 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -7,6 +7,7 @@
 #ifdef __KERNEL__

 #include
+#include

 #ifdef CONFIG_X86_64
 #include
@@ -65,6 +66,13 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
  * virt_to_page(kaddr) returns a valid pointer if and only if
  * virt_addr_valid(kaddr) returns true.
  */
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define page_to_virt(x) ({ \
+	void *__addr = __va(page_to_pfn((struct page *)x) << PAGE_SHIFT); \
+	__tag_set(__addr, page_kasan_tag(x)); \
+})
+#endif
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 extern bool __virt_addr_valid(unsigned long kaddr);
 #define virt_addr_valid(kaddr)	__virt_addr_valid((unsigned long) (kaddr))

diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 2f0e47be79a4..01f9e6233bba 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -22,6 +22,7 @@ extern unsigned long direct_map_physmem_end;

 static __always_inline unsigned long __phys_addr_nodebug(unsigned long x)
 {
+	x = __tag_reset(x);
 	unsigned long y = x - __START_KERNEL_map;

 	/* use the carry flag to determine if x was < __START_KERNEL_map */

diff --git a/arch/x86/mm/physaddr.c b/arch/x86/mm/physaddr.c
index 8d31c6b9e184..8f18273be0d2 100644
--- a/arch/x86/mm/physaddr.c
+++ b/arch/x86/mm/physaddr.c
@@ -14,6 +14,7 @@
 #ifdef CONFIG_DEBUG_VIRTUAL
 unsigned long __phys_addr(unsigned long x)
 {
+	x = __tag_reset(x);
 	unsigned long y = x - __START_KERNEL_map;

 	/* use the carry flag to determine if x was < __START_KERNEL_map */
@@ -35,6 +36,7 @@ EXPORT_SYMBOL(__phys_addr);

 bool __virt_addr_valid(unsigned long x)
 {
+	x = __tag_reset(x);
 	unsigned long y = x - __START_KERNEL_map;

 	/* use the carry flag to determine if x was < __START_KERNEL_map */
--
2.52.0
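To see why the reset matters, a small user-space model of the
__phys_addr_nodebug()-style arithmetic (the map base and the addresses
are made up for the example):

#include <stdint.h>
#include <stdio.h>

#define START_KERNEL_MAP	0xffffffff80000000ULL

static uint64_t tag_reset(uint64_t addr)
{
	/* sign-extend from bit 56, as __tag_reset() does */
	return (uint64_t)((int64_t)(addr << 7) >> 7);
}

int main(void)
{
	uint64_t clean  = 0xffffffff81234000ULL;	/* untagged kernel VA */
	uint64_t tagged = (clean & ~(0xfULL << 57)) | (0x5ULL << 57);

	printf("pa(clean)  = %#llx\n",
	       (unsigned long long)(clean - START_KERNEL_MAP));
	printf("pa(tagged) = %#llx (bogus without the reset)\n",
	       (unsigned long long)(tagged - START_KERNEL_MAP));
	printf("pa(reset)  = %#llx\n",
	       (unsigned long long)(tag_reset(tagged) - START_KERNEL_MAP));
	return 0;
}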
From: Maciej Wieczor-Retman
Date: Wed, 10 Dec 2025 17:29:22 +0000
Subject: [PATCH v7 06/15] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic

From: Maciej Wieczor-Retman

ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.

vm_reset_perms() calculates the range's start and end addresses using
the min() and max() functions. To do that it compares pointers, but with
KASAN's software tag-based mode enabled some of them are tagged - the
addr variable is, while the start and end variables aren't. This can
cause the wrong address to be chosen and result in various errors in
different places.

Reset the tag in the address used as the function argument in min() and
max().

execmem_cache_add() adds tagged pointers to a maple tree structure,
which are then incorrectly compared when walking the tree. That results
in different pointers being returned later and page permission violation
errors panicking the kernel.

Reset the tag of the address range inserted into the maple tree inside
execmem_vmalloc(), which then gets propagated to execmem_cache_add().

Signed-off-by: Maciej Wieczor-Retman
Acked-by: Alexander Potapenko
---
Changelog v7:
- Add Alexander's acked-by tag.
- Add comments on why these tag resets are needed (Alexander)

Changelog v6:
- Move back the tag reset from execmem_cache_add() to execmem_vmalloc()
  (Mike Rapoport)
- Rewrite the changelogs to match the code changes from v6 and v5.

Changelog v5:
- Remove the within_range() change.
- arch_kasan_reset_tag -> kasan_reset_tag.

Changelog v4:
- Add patch to the series.

 mm/execmem.c | 9 ++++++++-
 mm/vmalloc.c | 7 ++++++-
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/mm/execmem.c b/mm/execmem.c
index 810a4ba9c924..dc7422222cf7 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -59,7 +59,14 @@ static void *execmem_vmalloc(struct execmem_range *range, size_t size,
 		return NULL;
 	}

-	return p;
+	/*
+	 * Resetting the tag here is necessary to avoid the tagged address
+	 * ending up in the maple tree structure. There its linear address
+	 * can be incorrectly compared with other addresses, which can result
+	 * in a wrong address being picked down the line and, for example, a
+	 * page permission violation error panicking the kernel.
+	 */
+	return kasan_reset_tag(p);
 }

 struct vm_struct *execmem_vmap(size_t size)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 798b2ed21e46..ead22a610b18 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3328,7 +3328,12 @@ static void vm_reset_perms(struct vm_struct *area)
	 * the vm_unmap_aliases() flush includes the direct map.
	 */
	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
-		unsigned long addr = (unsigned long)page_address(area->pages[i]);
+		/*
+		 * The address's tag needs resetting so it can be properly used
+		 * in the min() and max() below. Otherwise wrong start or end
+		 * values might be favoured.
+		 */
+		unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));

		if (addr) {
			unsigned long page_size;
--
2.52.0
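The min()/max() skew is easy to reproduce in user space; in this sketch
the tagged pointer compares below an untagged one even though its real
address is higher (tag placement in bits 60:57 is assumed as in the x86
patches):

#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static uint64_t tag_set(uint64_t addr, uint8_t tag)
{
	return (addr & ~(0xfULL << 57)) | ((uint64_t)tag << 57);
}

int main(void)
{
	uint64_t lo = 0xffff888000001000ULL;			/* lower VA, untagged */
	uint64_t hi = tag_set(0xffff888000002000ULL, 0x5);	/* higher VA, tagged */

	/* the tag clears 1-bits in 60:57, so the higher address wins min() */
	printf("min = %#llx (expected %#llx)\n",
	       (unsigned long long)MIN(lo, hi), (unsigned long long)lo);
	return 0;
}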
Peter Anvin" From: Maciej Wieczor-Retman Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman , linux-kernel@vger.kernel.org Subject: [PATCH v7 07/15] x86/mm: Physical address comparisons in fill_p*d/pte Message-ID: In-Reply-To: References: Feedback-ID: 164464600:user:proton X-Pm-Message-ID: 1deacf41134359a53e7c7ed65d546b96e245276b Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Maciej Wieczor-Retman Calculating page offset returns a pointer without a tag. When comparing the calculated offset to a tagged page pointer an error is raised because they are not equal. Change pointer comparisons to physical address comparisons as to avoid issues with tagged pointers that pointer arithmetic would create. Open code pte_offset_kernel(), pmd_offset(), pud_offset() and p4d_offset(). Because one parameter is always zero and the rest of the function insides are enclosed inside __va(), removing that layer lowers the complexity of final assembly. Signed-off-by: Maciej Wieczor-Retman --- Changelog v7: - Swap ternary operator outcomes in fill_p4d since the order was incorrect. Changelog v2: - Open code *_offset() to avoid it's internal __va(). arch/x86/mm/init_64.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 0e4270e20fad..ae173f8c2aa7 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -269,7 +269,10 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr) if (pgd_none(*pgd)) { p4d_t *p4d =3D (p4d_t *)spp_getpage(); pgd_populate(&init_mm, pgd, p4d); - if (p4d !=3D p4d_offset(pgd, 0)) + + if (__pa(p4d) !=3D (pgtable_l5_enabled() ? + (unsigned long)pgd_val(*pgd) & PTE_PFN_MASK : + __pa(pgd))) printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n", p4d, p4d_offset(pgd, 0)); } @@ -281,7 +284,7 @@ static pud_t *fill_pud(p4d_t *p4d, unsigned long vaddr) if (p4d_none(*p4d)) { pud_t *pud =3D (pud_t *)spp_getpage(); p4d_populate(&init_mm, p4d, pud); - if (pud !=3D pud_offset(p4d, 0)) + if (__pa(pud) !=3D (p4d_val(*p4d) & p4d_pfn_mask(*p4d))) printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n", pud, pud_offset(p4d, 0)); } @@ -293,7 +296,7 @@ static pmd_t *fill_pmd(pud_t *pud, unsigned long vaddr) if (pud_none(*pud)) { pmd_t *pmd =3D (pmd_t *) spp_getpage(); pud_populate(&init_mm, pud, pmd); - if (pmd !=3D pmd_offset(pud, 0)) + if (__pa(pmd) !=3D (pud_val(*pud) & pud_pfn_mask(*pud))) printk(KERN_ERR "PAGETABLE BUG #02! 
			       pmd, pmd_offset(pud, 0));
	}
@@ -305,7 +308,7 @@ static pte_t *fill_pte(pmd_t *pmd, unsigned long vaddr)
	if (pmd_none(*pmd)) {
		pte_t *pte = (pte_t *) spp_getpage();
		pmd_populate_kernel(&init_mm, pmd, pte);
-		if (pte != pte_offset_kernel(pmd, 0))
+		if (__pa(pte) != (pmd_val(*pmd) & pmd_pfn_mask(*pmd)))
			printk(KERN_ERR "PAGETABLE BUG #03!\n");
	}
	return pte_offset_kernel(pmd, vaddr);
--
2.52.0
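A compact model of why the physical comparison is immune to tags while
the pointer comparison is not (the page table layout and constants are
simplified assumptions, not the kernel's):

#include <stdint.h>
#include <stdio.h>

#define PAGE_OFFSET	0xffff888000000000ULL
#define PTE_PFN_MASK	0x000ffffffffff000ULL

static uint64_t tag_reset(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << 7) >> 7);
}

static uint64_t pa(uint64_t va)
{
	return tag_reset(va) - PAGE_OFFSET;
}

int main(void)
{
	/* freshly allocated table whose pointer carries tag 0x9 */
	uint64_t table_va = (0xffff888000345000ULL & ~(0xfULL << 57)) | (0x9ULL << 57);
	uint64_t entry = (pa(table_va) & PTE_PFN_MASK) | 0x067;	/* present bits */

	/* pointer comparison fails: the recomputed VA has no tag */
	printf("va match:   %d\n", table_va == (entry & PTE_PFN_MASK) + PAGE_OFFSET);
	/* physical comparison succeeds regardless of the tag */
	printf("phys match: %d\n", pa(table_va) == (entry & PTE_PFN_MASK));
	return 0;
}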
Peter Anvin" From: Maciej Wieczor-Retman Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org Subject: [PATCH v7 08/15] x86/kasan: KASAN raw shadow memory PTE init Message-ID: In-Reply-To: References: Feedback-ID: 164464600:user:proton X-Pm-Message-ID: d96ecac2f6d36d49de04f91f75d4a5b61f3385a3 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Maciej Wieczor-Retman In KASAN's generic mode the default value in shadow memory is zero. During initialization of shadow memory pages they are allocated and zeroed. In KASAN's tag-based mode the default tag for the arm64 architecture is 0xFE which corresponds to any memory that should not be accessed. On x86 (where tags are 4-bit wide instead of 8-bit wide) that tag is 0xE so during the initializations all the bytes in shadow memory pages should be filled with it. Use memblock_alloc_try_nid_raw() instead of memblock_alloc_try_nid() to avoid zeroing out the memory so it can be set with the KASAN invalid tag. Signed-off-by: Maciej Wieczor-Retman Reviewed-by: Alexander Potapenko --- Changelog v7: - Fix flipped arguments in memset(). - Add Alexander's reviewed-by tag. Changelog v2: - Remove dense mode references, use memset() instead of kasan_poison(). arch/x86/mm/kasan_init_64.c | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c index 998b6010d6d3..7f5c11328ec1 100644 --- a/arch/x86/mm/kasan_init_64.c +++ b/arch/x86/mm/kasan_init_64.c @@ -34,6 +34,18 @@ static __init void *early_alloc(size_t size, int nid, bo= ol should_panic) return ptr; } =20 +static __init void *early_raw_alloc(size_t size, int nid, bool should_pani= c) +{ + void *ptr =3D memblock_alloc_try_nid_raw(size, size, + __pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, nid); + + if (!ptr && should_panic) + panic("%pS: Failed to allocate page, nid=3D%d from=3D%lx\n", + (void *)_RET_IP_, nid, __pa(MAX_DMA_ADDRESS)); + + return ptr; +} + static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr, unsigned long end, int nid) { @@ -63,8 +75,9 @@ static void __init kasan_populate_pmd(pmd_t *pmd, unsigne= d long addr, if (!pte_none(*pte)) continue; =20 - p =3D early_alloc(PAGE_SIZE, nid, true); - entry =3D pfn_pte(PFN_DOWN(__pa(p)), PAGE_KERNEL); + p =3D early_raw_alloc(PAGE_SIZE, nid, true); + memset(p, KASAN_SHADOW_INIT, PAGE_SIZE); + entry =3D pfn_pte(PFN_DOWN(__pa_nodebug(p)), PAGE_KERNEL); set_pte_at(&init_mm, addr, pte, entry); } while (pte++, addr +=3D PAGE_SIZE, addr !=3D end); } @@ -436,7 +449,7 @@ void __init kasan_init(void) * it may contain some garbage. Now we can clear and write protect it, * since after the TLB flush no one should write to it. 
From nobody Sun Dec 14 11:11:41 2025
Date: Wed, 10 Dec 2025 17:29:47 +0000
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin"
From: Maciej Wieczor-Retman
Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman, Alexander Potapenko,
 linux-kernel@vger.kernel.org
Subject: [PATCH v7 09/15] x86/mm: LAM compatible non-canonical definition

From: Maciej Wieczor-Retman

For an address to be canonical it has to have its top bits equal to
each other. The number of bits depends on the paging level, and whether
they're supposed to be ones or zeroes depends on whether the address
points to kernel or user space.
With Linear Address Masking (LAM) enabled, the definition of linear
address canonicality is modified. Not all of the previously required
bits need to be equal, only the first and last bits of the previously
checked span. So, for example, a 5-level paging kernel address needs to
have bits [63] and [56] set.

Change the canonical checking function to use bit masks instead of bit
shifts.

Signed-off-by: Maciej Wieczor-Retman
Acked-by: Alexander Potapenko
---
Changelog v7:
- Add Alexander's acked-by tag.
- Add parentheses around vaddr_bits as suggested by checkpatch.
- Apply the bitmasks to the __canonical_address() function which is
  used in kvm code.

Changelog v6:
- Use bitmasks to check both kernel and userspace addresses in the
  __is_canonical_address() (Dave Hansen and Samuel Holland).

Changelog v4:
- Add patch to the series.

 arch/x86/include/asm/page.h | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index bcf5cad3da36..b7940fa49e64 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
 	return __va(pfn << PAGE_SHIFT);
 }
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
+#else
+#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
+#endif
+
+/*
+ * To make an address canonical, either set or clear the bits defined by
+ * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
+ * bit is a zero. Set the bits for kernel addresses if the top address bit is a
+ * one.
+ */
 static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
 {
-	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
+	return (vaddr & BIT_ULL(63)) ? vaddr | CANONICAL_MASK(vaddr_bits) :
+				       vaddr & ~CANONICAL_MASK(vaddr_bits);
}
 
 static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
-- 
2.52.0
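A userspace sketch of the LAM rule above: with 5-level paging
(vaddr_bits = 57) only bits 63 and 56 carry canonicality, so a kernel
pointer with a KASAN tag in bits 60:57 remains canonical. The sample
pointer and tag are illustrative only.

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n) (1ULL << (n))

/* LAM rule from the patch: only the outermost bits must match. */
static uint64_t canonical_mask(unsigned int vaddr_bits)
{
	return BIT_ULL(63) | BIT_ULL(vaddr_bits - 1);
}

static uint64_t canonical_address(uint64_t vaddr, unsigned int vaddr_bits)
{
	return (vaddr & BIT_ULL(63)) ? vaddr | canonical_mask(vaddr_bits)
				     : vaddr & ~canonical_mask(vaddr_bits);
}

int main(void)
{
	/* Hypothetical tagged kernel pointer: bits 63 and 56 set, tag in 60:57. */
	uint64_t tagged = 0xebff888000000000ULL;

	printf("%#llx canonical under LAM: %d\n",
	       (unsigned long long)tagged,
	       canonical_address(tagged, 57) == tagged);
	return 0;
}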
From nobody Sun Dec 14 11:11:41 2025
Date: Wed, 10 Dec 2025 17:29:52 +0000
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra
From: Maciej Wieczor-Retman
Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman, Alexander Potapenko,
 linux-kernel@vger.kernel.org
Subject: [PATCH v7 10/15] x86/mm: LAM initialization
Message-ID: <45dec73dcc8eefeaf00f0d0ab7dd65c6d1cd13d8.1765386422.git.m.wieczorretman@pm.me>

From: Maciej Wieczor-Retman

To make use of KASAN's tag-based mode on x86, Linear Address Masking
(LAM) needs to be enabled, which requires setting bit 28 in CR4. Set
the bit in early memory initialization.

When launching secondary CPUs, the LAM bit gets lost.
To avoid this, add it to the mask in head_64.S that permits selected
CR4 bits to pass from the primary CPU to the secondary CPUs without
being cleared.

Signed-off-by: Maciej Wieczor-Retman
Acked-by: Alexander Potapenko
---
Changelog v7:
- Add Alexander's acked-by tag.

Changelog v6:
- boot_cpu_has() -> cpu_feature_enabled()

 arch/x86/kernel/head_64.S | 3 +++
 arch/x86/mm/init.c        | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 21816b48537c..c5a0bfbe280d 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -209,6 +209,9 @@ SYM_INNER_LABEL(common_startup_64, SYM_L_LOCAL)
 	 * there will be no global TLB entries after the execution."
 	 */
 	movl	$(X86_CR4_PAE | X86_CR4_LA57), %edx
+#ifdef CONFIG_ADDRESS_MASKING
+	orl	$X86_CR4_LAM_SUP, %edx
+#endif
 #ifdef CONFIG_X86_MCE
 	/*
 	 * Preserve CR4.MCE if the kernel will enable #MC support.
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 8bf6ad4b9400..a8442b255481 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -764,6 +764,9 @@ void __init init_mem_mapping(void)
 	probe_page_size_mask();
 	setup_pcid();
 
+	if (cpu_feature_enabled(X86_FEATURE_LAM) && IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+		cr4_set_bits_and_update_boot(X86_CR4_LAM_SUP);
+
 #ifdef CONFIG_X86_64
 	end = max_pfn << PAGE_SHIFT;
 #else
-- 
2.52.0
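A userspace sketch of what the head_64.S mask accomplishes: only bits
named in the mask survive from the boot CPU's CR4 to a secondary CPU.
The bit positions are the architectural ones; the mask composition
mirrors the patch, not the full kernel startup logic.

#include <stdint.h>
#include <stdio.h>

#define X86_CR4_PAE     (1ULL << 5)
#define X86_CR4_LA57    (1ULL << 12)
#define X86_CR4_LAM_SUP (1ULL << 28)

int main(void)
{
	uint64_t boot_cr4 = X86_CR4_PAE | X86_CR4_LA57 | X86_CR4_LAM_SUP;
	uint64_t keep = X86_CR4_PAE | X86_CR4_LA57;	/* before the patch */

	keep |= X86_CR4_LAM_SUP;			/* after the patch */
	printf("secondary CR4 = %#llx (LAM preserved: %d)\n",
	       (unsigned long long)(boot_cr4 & keep),
	       !!(boot_cr4 & keep & X86_CR4_LAM_SUP));
	return 0;
}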
From nobody Sun Dec 14 11:11:41 2025
Date: Wed, 10 Dec 2025 17:29:57 +0000
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin"
From: Maciej Wieczor-Retman
Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman, Andrey Konovalov,
 linux-kernel@vger.kernel.org
Subject: [PATCH v7 11/15] x86: Minimal SLAB alignment
Message-ID: <740bd4f4dbf4ce17e4bbbd3f3624bea2d24d72d3.1765386422.git.m.wieczorretman@pm.me>

From: Maciej Wieczor-Retman

The 8-byte minimal SLAB alignment interferes with KASAN's granularity
of 16 bytes and causes a lot of out-of-bounds errors for unaligned
8-byte allocations.

Compared to a kernel with KASAN disabled, the memory footprint
increases because all kmalloc-8 allocations are now realized as
kmalloc-16, which has twice the object size. More meaningfully, when
compared to a kernel with generic KASAN enabled, there is no
difference: because of redzones in generic KASAN, the object size of
kmalloc-8 and kmalloc-16 is the same (48 bytes). So changing the
minimal SLAB alignment of the tag-based mode doesn't have any negative
impact when compared to the other software KASAN mode.

Adjust the x86 minimal SLAB alignment to match the KASAN granularity
size.

Signed-off-by: Maciej Wieczor-Retman
Reviewed-by: Andrey Konovalov
---
Changelog v6:
- Add Andrey's Reviewed-by tag.

Changelog v4:
- Extend the patch message with some more context and impact
  information.

Changelog v3:
- Fix typo in patch message 4 -> 16.
- Change define location to arch/x86/include/asm/cache.h.
 arch/x86/include/asm/cache.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 69404eae9983..3232583b5487 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -21,4 +21,8 @@
 #endif
 #endif
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
 #endif /* _ASM_X86_CACHE_H */
-- 
2.52.0
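Why 8-byte alignment misbehaves can be shown with a few shifts: in
sw-tags mode one shadow byte (and thus one tag) covers a 16-byte
granule, so two adjacent 8-byte objects would share a tag. A userspace
sketch with made-up slab addresses:

#include <stdint.h>
#include <stdio.h>

#define KASAN_GRANULE_SHIFT 4	/* 16-byte granules in sw-tags mode */

int main(void)
{
	uint64_t a = 0x1000, b = 0x1008;	/* 8-byte-aligned neighbors */
	uint64_t c = 0x1000, d = 0x1010;	/* 16-byte minimum alignment */

	/* 1: both objects land in one granule, so their tags collide. */
	printf("8-byte align, shared shadow byte: %d\n",
	       (a >> KASAN_GRANULE_SHIFT) == (b >> KASAN_GRANULE_SHIFT));
	/* 0: each object gets its own granule and its own tag. */
	printf("16-byte align, shared shadow byte: %d\n",
	       (c >> KASAN_GRANULE_SHIFT) == (d >> KASAN_GRANULE_SHIFT));
	return 0;
}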
Peter Anvin" , Andrey Ryabinin , Alexander Potapenko , Andrey Konovalov , Dmitry Vyukov , Vincenzo Frascino , Andy Lutomirski , Peter Zijlstra , Nathan Chancellor , Nick Desaulniers , Bill Wendling , Justin Stitt From: Maciej Wieczor-Retman Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman , linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, llvm@lists.linux.dev Subject: [PATCH v7 12/15] x86/kasan: Handle UD1 for inline KASAN reports Message-ID: <13fa5da13adf927abbb7dd85d19fbaa8e4fadc84.1765386422.git.m.wieczorretman@pm.me> In-Reply-To: References: Feedback-ID: 164464600:user:proton X-Pm-Message-ID: 7032ee222089e9740a5e84130b4a9493439885f2 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Maciej Wieczor-Retman Inline KASAN on x86 should do tag mismatch reports by passing the metadata through the UD1 instruction and the faulty address through RDI, a scheme that's already used by UBSan and is easy to extend. The current LLVM way of passing KASAN software tag mode metadata is done using the INT3 instruction. However that should be changed because it doesn't align to how the kernel already handles UD1 for similar use cases. Since inline software tag-based KASAN doesn't work on x86 due to missing compiler support it can be fixed and the INT3 can be changed to UD1 at the same time. Add a kasan component to the #UD decoding and handling functions. Make part of that hook - which decides whether to die or recover from a tag mismatch - arch independent to avoid duplicating a long comment on both x86 and arm64 architectures. Signed-off-by: Maciej Wieczor-Retman --- Changelog v7: - Redo the #UD handling that's based on Peter Zijlstra WARN() patches. - Rename kasan_inline.c -> kasan_sw_tags.c (Alexander) Changelog v6: - Change the whole patch from using INT3 to UD1. Changelog v5: - Add die to argument list of kasan_inline_recover() in arch/arm64/kernel/traps.c. Changelog v4: - Make kasan_handler() a stub in a header file. Remove #ifdef from traps.c. - Consolidate the "recover" comment into one place. - Make small changes to the patch message. 
 MAINTAINERS                  |  2 +-
 arch/x86/include/asm/bug.h   |  1 +
 arch/x86/include/asm/kasan.h | 20 ++++++++++++++++++++
 arch/x86/kernel/traps.c      | 13 ++++++++++++-
 arch/x86/mm/Makefile         |  2 ++
 arch/x86/mm/kasan_sw_tags.c  | 19 +++++++++++++++++++
 include/linux/kasan.h        | 23 +++++++++++++++++++++++
 7 files changed, 78 insertions(+), 2 deletions(-)
 create mode 100644 arch/x86/mm/kasan_sw_tags.c

diff --git a/MAINTAINERS b/MAINTAINERS
index a591598cc4b5..ff1c036ae39f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13421,7 +13421,7 @@ S:	Maintained
 B:	https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
 F:	Documentation/dev-tools/kasan.rst
 F:	arch/*/include/asm/*kasan*.h
-F:	arch/*/mm/kasan_init*
+F:	arch/*/mm/kasan_*
 F:	include/linux/kasan*.h
 F:	lib/Kconfig.kasan
 F:	mm/kasan/
diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
index 83b0fb38732d..eb733ac14598 100644
--- a/arch/x86/include/asm/bug.h
+++ b/arch/x86/include/asm/bug.h
@@ -32,6 +32,7 @@
 #define BUG_UD1		0xfffd
 #define BUG_UD1_UBSAN	0xfffc
 #define BUG_UD1_WARN	0xfffb
+#define BUG_UD1_KASAN	0xfffa
 #define BUG_UDB		0xffd6
 #define BUG_LOCK	0xfff0
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index eab12527ed7f..6e083d45770d 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -6,6 +6,24 @@
 #include
 #include
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+/*
+ * LLVM ABI for reporting tag mismatches in inline KASAN mode.
+ * On x86 the UD1 instruction is used to carry metadata in the ECX register
+ * to the KASAN report. ECX is used to differentiate KASAN from UBSan when
+ * decoding the UD1 instruction.
+ *
+ * SIZE refers to how many bytes the faulty memory access requested.
+ * WRITE bit, when set, indicates the access was a write, otherwise
+ * it was a read.
+ * RECOVER bit, when set, should allow the kernel to carry on after
+ * a tag mismatch. Otherwise die() is called.
+ */
+#define KASAN_ECX_RECOVER	0x20
+#define KASAN_ECX_WRITE		0x10
+#define KASAN_ECX_SIZE_MASK	0x0f
+#define KASAN_ECX_SIZE(ecx)	(1 << ((ecx) & KASAN_ECX_SIZE_MASK))
 #define KASAN_SHADOW_SCALE_SHIFT 3
 
 /*
@@ -34,10 +52,12 @@
 #define __tag_shifted(tag)	FIELD_PREP(GENMASK_ULL(60, 57), tag)
 #define __tag_reset(addr)	(sign_extend64((u64)(addr), 56))
 #define __tag_get(addr)		((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+void kasan_inline_handler(struct pt_regs *regs, unsigned int metadata, u64 addr);
 #else
 #define __tag_shifted(tag)	0UL
 #define __tag_reset(addr)	(addr)
 #define __tag_get(addr)		0
+static inline void kasan_inline_handler(struct pt_regs *regs, unsigned int metadata, u64 addr) { }
 #endif /* CONFIG_KASAN_SW_TAGS */
 
 #ifdef CONFIG_64BIT
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index cb324cc1fd99..e55e5441fc83 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -102,6 +102,7 @@ __always_inline int is_valid_bugaddr(unsigned long addr)
  * FineIBT:     f0 75 f9          lock jne . - 6
  * UBSan{0}:    67 0f b9 00       ud1    (%eax),%eax
  * UBSan{10}:   67 0f b9 40 10    ud1    0x10(%eax),%eax
+ * KASAN:       48 0f b9 41 XX    ud1    0xXX(%rcx),%reg
  * static_call: 0f b9 cc          ud1    %esp,%ecx
  * __WARN_trap: 67 48 0f b9 3a    ud1    (%edx),%reg
  *
@@ -190,6 +191,10 @@ __always_inline int decode_bug(unsigned long addr, s32 *imm, int *len)
 			addr += 1;
 			if (rm == 0)	/* (%eax) */
 				type = BUG_UD1_UBSAN;
+			if (rm == 1) {	/* (%ecx) */
+				type = BUG_UD1_KASAN;
+				*imm += reg << 8;
+			}
 			break;
 
 		case 2: *imm = *(s32 *)addr;
@@ -399,7 +404,7 @@ static inline void handle_invalid_op(struct pt_regs *regs)
 
 static noinstr bool handle_bug(struct pt_regs *regs)
 {
-	unsigned long addr = regs->ip;
+	unsigned long kasan_addr, addr = regs->ip;
 	bool handled = false;
 	int ud_type, ud_len;
 	s32 ud_imm;
@@ -454,6 +459,12 @@ static noinstr bool handle_bug(struct pt_regs *regs)
 		}
 		break;
 
+	case BUG_UD1_KASAN:
+		kasan_addr = (u64)pt_regs_val(regs, ud_imm >> 8);
+		kasan_inline_handler(regs, ud_imm, kasan_addr);
+		handled = true;
+		break;
+
 	default:
 		break;
 	}
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5b9908f13dcf..b562963a866e 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -36,7 +36,9 @@ obj-$(CONFIG_PTDUMP)		+= dump_pagetables.o
 obj-$(CONFIG_PTDUMP_DEBUGFS)	+= debug_pagetables.o
 
 KASAN_SANITIZE_kasan_init_$(BITS).o := n
+KASAN_SANITIZE_kasan_sw_tags.o := n
 obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
+obj-$(CONFIG_KASAN_SW_TAGS)	+= kasan_sw_tags.o
 
 KMSAN_SANITIZE_kmsan_shadow.o := n
 obj-$(CONFIG_KMSAN)		+= kmsan_shadow.o
diff --git a/arch/x86/mm/kasan_sw_tags.c b/arch/x86/mm/kasan_sw_tags.c
new file mode 100644
index 000000000000..93b63be584fd
--- /dev/null
+++ b/arch/x86/mm/kasan_sw_tags.c
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+
+void kasan_inline_handler(struct pt_regs *regs, unsigned int metadata, u64 addr)
+{
+	u64 pc = regs->ip;
+	bool recover = metadata & KASAN_ECX_RECOVER;
+	bool write = metadata & KASAN_ECX_WRITE;
+	size_t size = KASAN_ECX_SIZE(metadata);
+
+	if (user_mode(regs))
+		return;
+
+	if (!kasan_report((void *)addr, size, write, pc))
+		return;
+
+	kasan_die_unless_recover(recover, "Oops - KASAN", regs, metadata, die);
+}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 5cb21b90a2ec..03e263fb9fa1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -669,4 +669,27 @@ void kasan_non_canonical_hook(unsigned long addr);
 static inline void kasan_non_canonical_hook(unsigned long addr) { }
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
+#ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * The instrumentation allows to control whether we can proceed after
+ * a crash was detected. This is done by passing the -recover flag to
+ * the compiler. Disabling recovery allows to generate more compact
+ * code.
+ *
+ * Unfortunately disabling recovery doesn't work for the kernel right
+ * now. KASAN reporting is disabled in some contexts (for example when
+ * the allocator accesses slab object metadata; this is controlled by
+ * current->kasan_depth). All these accesses are detected by the tool,
+ * even though the reports for them are not printed.
+ *
+ * This is something that might be fixed at some point in the future.
+ */
+static inline void kasan_die_unless_recover(bool recover, char *msg, struct pt_regs *regs,
+		unsigned long err, void die_fn(const char *str, struct pt_regs *regs, long err))
+{
+	if (!recover)
+		die_fn(msg, regs, err);
+}
+#endif
+
 #endif /* LINUX_KASAN_H */
-- 
2.52.0
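The KASAN_ECX_* layout defined in the patch can be exercised in
isolation. A userspace sketch with an arbitrary sample value (0x33 =
recover | write | size field 3, i.e. a 2^3 = 8-byte access):

#include <stdbool.h>
#include <stdio.h>

#define KASAN_ECX_RECOVER	0x20
#define KASAN_ECX_WRITE		0x10
#define KASAN_ECX_SIZE_MASK	0x0f
#define KASAN_ECX_SIZE(ecx)	(1 << ((ecx) & KASAN_ECX_SIZE_MASK))

int main(void)
{
	unsigned int metadata = 0x33;	/* hypothetical UD1 immediate */
	bool recover = metadata & KASAN_ECX_RECOVER;
	bool write = metadata & KASAN_ECX_WRITE;
	unsigned int size = KASAN_ECX_SIZE(metadata);

	printf("recover=%d write=%d size=%u\n", recover, write, size);
	return 0;
}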
From nobody Sun Dec 14 11:11:41 2025
Date: Wed, 10 Dec 2025 17:30:10 +0000
To: Catalin Marinas, Will Deacon
From: Maciej Wieczor-Retman
Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman, Alexander Potapenko,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 13/15] arm64: Unify software tag-based KASAN inline
 recovery path
Message-ID: <3cb831d9ec43485b42b2c6ff765367c4a92c5d5f.1765386422.git.m.wieczorretman@pm.me>

From: Maciej Wieczor-Retman

To avoid every architecture that uses the software tag-based mode
carrying a copy of the long comment explaining the intricacies of the
inline KASAN recovery path, a unified kasan_die_unless_recover()
function was added.

Use kasan_die_unless_recover() in the KASAN brk handler so the long
comment is kept only in the arch-independent KASAN code.

Signed-off-by: Maciej Wieczor-Retman
Acked-by: Catalin Marinas
Acked-by: Alexander Potapenko
---
Changelog v7:
- Add Alexander's Acked-by tag.

Changelog v6:
- Add Catalin's Acked-by tag.

Changelog v5:
- Split arm64 portion of patch 13/18 into this one. (Peter Zijlstra)

 arch/arm64/kernel/traps.c | 17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 681939ef5d16..b1efc11c3b5a 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -1071,22 +1071,7 @@ int kasan_brk_handler(struct pt_regs *regs, unsigned long esr)
 
 	kasan_report(addr, size, write, pc);
 
-	/*
-	 * The instrumentation allows to control whether we can proceed after
-	 * a crash was detected. This is done by passing the -recover flag to
-	 * the compiler. Disabling recovery allows to generate more compact
-	 * code.
-	 *
-	 * Unfortunately disabling recovery doesn't work for the kernel right
-	 * now. KASAN reporting is disabled in some contexts (for example when
-	 * the allocator accesses slab object metadata; this is controlled by
-	 * current->kasan_depth). All these accesses are detected by the tool,
-	 * even though the reports for them are not printed.
-	 *
-	 * This is something that might be fixed at some point in the future.
-	 */
-	if (!recover)
-		die("Oops - KASAN", regs, esr);
+	kasan_die_unless_recover(recover, "Oops - KASAN", regs, esr, die);
 
 	/* If thread survives, skip over the brk instruction and continue: */
 	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
-- 
2.52.0
From nobody Sun Dec 14 11:11:41 2025
Date: Wed, 10 Dec 2025 17:30:14 +0000
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Dave Hansen, x86@kernel.org, "H. Peter Anvin", Andrew Morton
From: Maciej Wieczor-Retman
Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman,
 kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v7 14/15] x86/kasan: Logical bit shift for kasan_mem_to_shadow
Message-ID: <4dd0d4481bbd89d04bcc85a37a1b9d4ec08522c4.1765386422.git.m.wieczorretman@pm.me>

From: Maciej Wieczor-Retman

The tag-based KASAN mode adopts an arithmetic bit shift to convert a
memory address to a shadow memory address. While that makes a lot of
sense on arm64, it doesn't work well for all cases on x86: either the
non-canonical hook becomes quite complex for different paging levels,
or the inline mode would need a lot more adjustments. The best working
scheme is thus the logical bit shift and the non-canonical shadow
offset that x86 uses for generic KASAN, adjusted for the increased
granularity from 8 to 16 bytes.

Add an arch specific implementation of kasan_mem_to_shadow() that uses
the logical bit shift.

The non-canonical hook tries to calculate whether an address came from
kasan_mem_to_shadow(). First it checks whether this address fits into
the legal set of values possible to output from the mem to shadow
function. Tie both generic and tag-based x86 KASAN modes to the address
range check associated with generic KASAN.

Signed-off-by: Maciej Wieczor-Retman
---
Changelog v7:
- Redo the patch message and add a comment to __kasan_mem_to_shadow()
  to provide a better explanation of why x86 doesn't work well with the
  arithmetic bit shift approach (Marco).

Changelog v4:
- Add this patch to the series.
 arch/x86/include/asm/kasan.h | 15 +++++++++++++++
 mm/kasan/report.c            |  5 +++--
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 6e083d45770d..395e133d551d 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -49,6 +49,21 @@
 #include
 
 #ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * Using the non-arch specific implementation of __kasan_mem_to_shadow() with
+ * an arithmetic bit shift can cause high code complexity in KASAN's
+ * non-canonical hook for x86 or might not work for some paging level and
+ * KASAN mode combinations. The inline mode compiler support could also
+ * suffer from higher complexity for no specific benefit. Therefore the
+ * generic mode's logical shift implementation is used.
+ */
+static inline void *__kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+#define kasan_mem_to_shadow(addr) __kasan_mem_to_shadow(addr)
 #define __tag_shifted(tag)	FIELD_PREP(GENMASK_ULL(60, 57), tag)
 #define __tag_reset(addr)	(sign_extend64((u64)(addr), 56))
 #define __tag_get(addr)		((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index b5beb1b10bd2..db6a9a3d01b2 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,13 +642,14 @@ void kasan_non_canonical_hook(unsigned long addr)
 	const char *bug_type;
 
 	/*
-	 * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+	 * For Generic KASAN and Software Tag-Based mode on the x86
+	 * architecture, kasan_mem_to_shadow() uses the logical right shift
 	 * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
 	 * both x86 and arm64). Thus, the possible shadow addresses (even for
 	 * bogus pointers) belong to a single contiguous region that is the
 	 * result of kasan_mem_to_shadow() applied to the whole address space.
 	 */
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64)) {
 		if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
 		    addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
 			return;
-- 
2.52.0
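The contiguity property the non-canonical hook relies on can be checked
numerically. A userspace sketch, using a scale shift of 4 and the
sw-tags KASAN_SHADOW_OFFSET that patch 15 adds to Kconfig: the logical
shift maps the entire 64-bit space onto one contiguous shadow range,
bounded by shadow(0) and shadow(~0).

#include <stdint.h>
#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 4
#define KASAN_SHADOW_OFFSET      0xeffffc0000000000ULL

static uint64_t mem_to_shadow_logical(uint64_t addr)
{
	/* Logical (unsigned) shift, as in the x86 override above. */
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	printf("shadow(0)     = %#llx\n",
	       (unsigned long long)mem_to_shadow_logical(0));
	printf("shadow(~0ULL) = %#llx\n",
	       (unsigned long long)mem_to_shadow_logical(~0ULL));
	return 0;
}

This prints 0xeffffc0000000000 and 0xfffffbffffffffff, the bounds the
hook compares against.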
From nobody Sun Dec 14 11:11:41 2025
Date: Wed, 10 Dec 2025 17:30:23 +0000
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Jonathan Corbet, Andrey Ryabinin,
 Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Andy Lutomirski, Peter Zijlstra, Andrew Morton
From: Maciej Wieczor-Retman
Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 kasan-dev@googlegroups.com
Subject: [PATCH v7 15/15] x86/kasan: Make software tag-based kasan available
Message-ID: <97b033941d8e146a71827cf31d4d0a12303bcc41.1765386422.git.m.wieczorretman@pm.me>

From: Maciej Wieczor-Retman

Make CONFIG_KASAN_SW_TAGS available for x86 machines that have
ADDRESS_MASKING enabled (LAM), which works similarly to the Top-Byte
Ignore (TBI) feature that enables the software tag-based mode on the
arm64 platform.

The sw_tags value of KASAN_SHADOW_OFFSET was calculated by rearranging
the formulas for KASAN_SHADOW_START and KASAN_SHADOW_END from
arch/x86/include/asm/kasan.h - the only prerequisites being a
KASAN_SHADOW_SCALE_SHIFT of 4 and a KASAN_SHADOW_END equal to the one
from KASAN's generic mode.

Set the scale macro based on the KASAN mode: in software tag-based mode
16 bytes of memory map to one shadow byte, in generic mode 8 bytes do.

Disable CONFIG_KASAN_INLINE and CONFIG_KASAN_STACK when
CONFIG_KASAN_SW_TAGS is enabled on x86 until the appropriate compiler
support is available.

Signed-off-by: Maciej Wieczor-Retman
---
Changelog v7:
- Add a paragraph to the patch message explaining how the various
  addresses and the KASAN_SHADOW_OFFSET were calculated.

Changelog v6:
- Don't enable KASAN if LAM is not supported.
- Move kasan_init_tags() to kasan_init_64.c to not clutter the setup.c
  file.
- Move the #ifdef for the KASAN scale shift here.
- Move the gdb code to patch "Use arithmetic shift for shadow
  computation".
- Return "depends on KASAN" line to Kconfig.
- Add the defer kasan config option so KASAN can be disabled on
  hardware that doesn't have LAM.

Changelog v4:
- Add x86 specific kasan_mem_to_shadow().
- Revert x86 to the older unsigned KASAN_SHADOW_OFFSET. Do the same to
  KASAN_SHADOW_START/END.
- Modify scripts/gdb/linux/kasan.py to keep x86 using unsigned offset.
- Disable inline and stack support when software tags are enabled on
  x86.

Changelog v3:
- Remove runtime_const from previous patch and merge the rest here.
- Move scale shift definition back to header file.
- Add new kasan offset for software tag based mode.
- Fix patch message typo 32 -> 16, and 16 -> 8.
- Update lib/Kconfig.kasan with x86 now having software tag-based
  support.

Changelog v2:
- Remove KASAN dense code.
 Documentation/arch/x86/x86_64/mm.rst | 6 ++++--
 arch/x86/Kconfig                     | 4 ++++
 arch/x86/boot/compressed/misc.h      | 1 +
 arch/x86/include/asm/kasan.h         | 4 ++++
 arch/x86/mm/kasan_init_64.c          | 6 ++++++
 lib/Kconfig.kasan                    | 3 ++-
 6 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/Documentation/arch/x86/x86_64/mm.rst b/Documentation/arch/x86/x86_64/mm.rst
index a6cf05d51bd8..ccbdbb4cda36 100644
--- a/Documentation/arch/x86/x86_64/mm.rst
+++ b/Documentation/arch/x86/x86_64/mm.rst
@@ -60,7 +60,8 @@ Complete virtual memory map with 4-level page tables
    ffffe90000000000 | -23    TB | ffffe9ffffffffff |    1 TB | ... unused hole
    ffffea0000000000 | -22    TB | ffffeaffffffffff |    1 TB | virtual memory map (vmemmap_base)
    ffffeb0000000000 | -21    TB | ffffebffffffffff |    1 TB | ... unused hole
-   ffffec0000000000 | -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory
+   ffffec0000000000 | -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory (generic mode)
+   fffff40000000000 |  -8    TB | fffffbffffffffff |    8 TB | KASAN shadow memory (software tag-based mode)
   __________________|____________|__________________|_________|____________________________________________________________
                                                               |
                                                               | Identical layout to the 56-bit one from here on:
@@ -130,7 +131,8 @@ Complete virtual memory map with 5-level page tables
    ffd2000000000000 | -11.5  PB | ffd3ffffffffffff |  0.5 PB | ... unused hole
    ffd4000000000000 | -11    PB | ffd5ffffffffffff |  0.5 PB | virtual memory map (vmemmap_base)
    ffd6000000000000 | -10.5  PB | ffdeffffffffffff | 2.25 PB | ... unused hole
-   ffdf000000000000 | -8.25  PB | fffffbffffffffff |   ~8 PB | KASAN shadow memory
+   ffdf000000000000 | -8.25  PB | fffffbffffffffff |   ~8 PB | KASAN shadow memory (generic mode)
+   ffeffc0000000000 | -6     PB | fffffbffffffffff |    4 PB | KASAN shadow memory (software tag-based mode)
   __________________|____________|__________________|_________|____________________________________________________________
                                                               |
                                                               | Identical layout to the 47-bit one from here on:
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a26dc3bad804..b5275e322061 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -67,6 +67,7 @@ config X86
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_CONFIGURES_CPU_MITIGATIONS
 	select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
+	select ARCH_DISABLE_KASAN_INLINE if X86_64 && KASAN_SW_TAGS
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
 	select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64
 	select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
@@ -196,6 +197,8 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if X86_64
 	select HAVE_ARCH_KASAN_VMALLOC if X86_64
+	select HAVE_ARCH_KASAN_SW_TAGS if ADDRESS_MASKING
+	select ARCH_NEEDS_DEFER_KASAN if ADDRESS_MASKING
 	select HAVE_ARCH_KFENCE
 	select HAVE_ARCH_KMSAN if X86_64
 	select HAVE_ARCH_KGDB
@@ -408,6 +411,7 @@ config AUDIT_ARCH
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
+	default 0xeffffc0000000000 if KASAN_SW_TAGS
 	default 0xdffffc0000000000
 
 config HAVE_INTEL_TXT
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index db1048621ea2..ded92b439ada 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -13,6 +13,7 @@
 #undef CONFIG_PARAVIRT_SPINLOCKS
 #undef CONFIG_KASAN
 #undef CONFIG_KASAN_GENERIC
+#undef CONFIG_KASAN_SW_TAGS
 
 #define __NO_FORTIFY
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 395e133d551d..3fa63036c93c 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -7,6 +7,7 @@
 #include
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 
+#ifdef CONFIG_KASAN_SW_TAGS
 /*
  * LLVM ABI for reporting tag mismatches in inline KASAN mode.
  * On x86 the UD1 instruction is used to carry metadata in the ECX register
@@ -24,7 +25,10 @@
 #define KASAN_ECX_WRITE		0x10
 #define KASAN_ECX_SIZE_MASK	0x0f
 #define KASAN_ECX_SIZE(ecx)	(1 << ((ecx) & KASAN_ECX_SIZE_MASK))
+#define KASAN_SHADOW_SCALE_SHIFT 4
+#else
 #define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
 
 /*
  * Compiler uses shadow offset assuming that addresses start
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 7f5c11328ec1..3a5577341805 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -465,4 +465,10 @@ void __init kasan_init(void)
 
 	init_task.kasan_depth = 0;
 	kasan_init_generic();
+	pr_info("KernelAddressSanitizer initialized\n");
+
+	if (boot_cpu_has(X86_FEATURE_LAM))
+		kasan_init_sw_tags();
+	else
+		pr_info("KernelAddressSanitizer not initialized (sw-tags): hardware doesn't support LAM\n");
 }
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index a4bb610a7a6f..d13ea8da7bfd 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -112,7 +112,8 @@ config KASAN_SW_TAGS
 
 	  Requires GCC 11+ or Clang.
 
-	  Supported only on arm64 CPUs and relies on Top Byte Ignore.
+	  Supported on arm64 CPUs that support Top Byte Ignore and on x86 CPUs
+	  that support Linear Address Masking.
 
 	  Consumes about 1/16th of available memory at kernel start and
 	  add an overhead of ~20% for dynamic allocations.
-- 
2.52.0
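The offset derivation described in the patch message can be checked
with a few lines of arithmetic. A userspace sketch: take the generic
mode's KASAN_SHADOW_END (offset 0xdffffc0000000000, scale shift 3) and
solve end = (~0 >> shift) + offset + 1 for the sw-tags offset with
scale shift 4.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Generic mode: shadow end shared by both modes. */
	uint64_t generic_end = 0xdffffc0000000000ULL + (~0ULL >> 3) + 1;
	/* Rearranged for sw-tags: offset = end - (address space >> 4). */
	uint64_t sw_tags_off = generic_end - ((~0ULL >> 4) + 1);

	printf("shared KASAN_SHADOW_END = %#llx\n", (unsigned long long)generic_end);
	printf("sw-tags offset          = %#llx\n", (unsigned long long)sw_tags_off);
	return 0;
}

This prints 0xfffffc0000000000 and 0xeffffc0000000000, matching the
KASAN_SHADOW_OFFSET default the patch adds to arch/x86/Kconfig.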