Date: Mon, 12 Jan 2026 17:28:29 +0000
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Andrew Morton
From: Maciej Wieczor-Retman
Cc: m.wieczorretman@pm.me, Maciej Wieczor-Retman, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v8 13/14] x86/kasan: Logical bit shift for kasan_mem_to_shadow
Message-ID:
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Maciej Wieczor-Retman

The tag-based KASAN mode uses an arithmetic bit shift to convert a memory
address to a shadow memory address. While that makes a lot of sense on
arm64, it doesn't work well for all cases on x86 - either the
non-canonical hook becomes quite complex for different paging levels, or
the inline mode would need a lot more adjustments.
Thus the best working scheme is the logical bit shift and the
non-canonical shadow offset that x86 already uses for generic KASAN, of
course adjusted for the increased granularity from 8 to 16 bytes.

Add an arch-specific implementation of kasan_mem_to_shadow() that uses
the logical bit shift.

The non-canonical hook tries to calculate whether an address came from
kasan_mem_to_shadow(). First it checks whether this address fits into the
legal set of values that the mem-to-shadow function can output. Tie both
generic and tag-based x86 KASAN modes to the address range check
associated with generic KASAN.

Signed-off-by: Maciej Wieczor-Retman
---
Changelog v7:
- Redo the patch message and add a comment to __kasan_mem_to_shadow() to
  provide a better explanation of why x86 doesn't work well with the
  arithmetic bit shift approach (Marco).
Changelog v4:
- Add this patch to the series.

 arch/x86/include/asm/kasan.h | 15 +++++++++++++++
 mm/kasan/report.c            |  5 +++--
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index eab12527ed7f..9b7951a79753 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -31,6 +31,21 @@
 #include
 
 #ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * Using the non-arch specific implementation of __kasan_mem_to_shadow() with an
+ * arithmetic bit shift can cause high code complexity in KASAN's non-canonical
+ * hook for x86 or might not work for some paging level and KASAN mode
+ * combinations. The inline mode compiler support could also suffer from higher
+ * complexity for no specific benefit. Therefore the generic mode's logical
+ * shift implementation is used.
+ */
+static inline void *__kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT) +
+		KASAN_SHADOW_OFFSET;
+}
+
+#define kasan_mem_to_shadow(addr) __kasan_mem_to_shadow(addr)
 #define __tag_shifted(tag)	FIELD_PREP(GENMASK_ULL(60, 57), tag)
 #define __tag_reset(addr)	(sign_extend64((u64)(addr), 56))
 #define __tag_get(addr)	((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index b5beb1b10bd2..db6a9a3d01b2 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,13 +642,14 @@ void kasan_non_canonical_hook(unsigned long addr)
 	const char *bug_type;
 
 	/*
-	 * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+	 * For Generic KASAN and Software Tag-Based mode on the x86
+	 * architecture, kasan_mem_to_shadow() uses the logical right shift
 	 * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
 	 * both x86 and arm64). Thus, the possible shadow addresses (even for
 	 * bogus pointers) belong to a single contiguous region that is the
 	 * result of kasan_mem_to_shadow() applied to the whole address space.
 	 */
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64)) {
 		if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
 		    addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
 			return;
-- 
2.52.0
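
For reference, a minimal user-space sketch contrasting the two mappings the
changelog discusses. The shift of 4 matches the 16-byte tag-based granule
mentioned above; the shadow offset and the sample pointer are made-up
placeholders, not the values the kernel actually uses:

/*
 * Standalone illustration (not kernel code): compares the arithmetic and
 * the logical right shift when mapping an address to its shadow address.
 * SCALE_SHIFT of 4 corresponds to 16-byte granules; FAKE_SHADOW_OFFSET
 * and the sample pointer below are placeholders for illustration only.
 */
#include <stdio.h>

#define SCALE_SHIFT        4                    /* 16-byte granules */
#define FAKE_SHADOW_OFFSET 0xffeffc0000000000UL /* placeholder only */

/* Arithmetic shift: sign-extends the top bits, as the common tag-based
 * mapping described in the changelog does. */
static unsigned long mem_to_shadow_arith(unsigned long addr)
{
	return (unsigned long)((long)addr >> SCALE_SHIFT) + FAKE_SHADOW_OFFSET;
}

/* Logical shift: zero-fills the top bits, as the x86 override added by
 * this patch does. */
static unsigned long mem_to_shadow_logical(unsigned long addr)
{
	return (addr >> SCALE_SHIFT) + FAKE_SHADOW_OFFSET;
}

int main(void)
{
	/* A sample kernel-half pointer carrying a tag in bits 60:57. */
	unsigned long addr = 0xf9ff888012345678UL;

	printf("arithmetic shift: %#lx\n", mem_to_shadow_arith(addr));
	printf("logical shift:    %#lx\n", mem_to_shadow_logical(addr));
	return 0;
}

The two calls land in different shadow regions for the same input, which is
why the non-canonical hook's range check has to match the mapping the
architecture actually uses.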