From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
For an address to be canonical, its top bits have to be equal to each
other. How many bits that is depends on the paging level, and whether
they must be ones or zeroes depends on whether the address points to
kernel or user space.

With Linear Address Masking (LAM) enabled, the definition of linear
address canonicality is relaxed: not all of the previously required
bits need to be equal, only the first and the last bit of that range.
For example, a 5-level paging kernel address only needs to have bits
[63] and [56] set.
Change the canonical checking function to use bit masks instead of bit
shifts.
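
As a standalone illustration (not part of the patch), the two rules can
be modeled in plain userspace C. The tag value and the addresses below
are made up for the example:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Classic rule: all of bits [63:vaddr_bits-1] must equal bit (vaddr_bits-1). */
static bool canonical_classic(uint64_t vaddr, unsigned int vaddr_bits)
{
	int shift = 64 - vaddr_bits;

	return (uint64_t)((int64_t)(vaddr << shift) >> shift) == vaddr;
}

/* LAM-style rule: only bit 63 and bit (vaddr_bits-1) have to match. */
static bool canonical_lam(uint64_t vaddr, unsigned int vaddr_bits)
{
	bool top = vaddr & (1ULL << 63);
	bool low = vaddr & (1ULL << (vaddr_bits - 1));

	return top == low;
}

int main(void)
{
	/* Made-up 5-level kernel pointer with a tag of 0b101010 in bits [62:57]. */
	uint64_t tagged = 0xd5ffffff00000000ULL;

	printf("classic, 57 bits: %d\n", canonical_classic(tagged, 57)); /* 0 */
	printf("LAM,     57 bits: %d\n", canonical_lam(tagged, 57));     /* 1 */
	return 0;
}
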
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Alexander Potapenko <glider@google.com>
---
Changelog v7:
- Add Alexander's acked-by tag.
- Add parentheses around vaddr_bits as suggested by checkpatch.
- Apply the bitmasks to the __canonical_address() function which is used
  in kvm code.

Changelog v6:
- Use bitmasks to check both kernel and userspace addresses in the
  __is_canonical_address() (Dave Hansen and Samuel Holland).

Changelog v4:
- Add patch to the series.

 arch/x86/include/asm/page.h | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index bcf5cad3da36..b7940fa49e64 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
 	return __va(pfn << PAGE_SHIFT);
 }
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
+#else
+#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
+#endif
+
+/*
+ * To make an address canonical either set or clear the bits defined by the
+ * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
+ * bit is a zero. Set the bits for kernel addresses if the top address bit is a
+ * one.
+ */
 static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
 {
-	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
+	return (vaddr & BIT_ULL(63)) ? vaddr | CANONICAL_MASK(vaddr_bits) :
+				       vaddr & ~CANONICAL_MASK(vaddr_bits);
 }
 
 static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
--
2.52.0
On 1/12/26 6:28 PM, Maciej Wieczor-Retman wrote:
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> For an address to be canonical, its top bits have to be equal to each
> other. How many bits that is depends on the paging level, and whether
> they must be ones or zeroes depends on whether the address points to
> kernel or user space.
>
> With Linear Address Masking (LAM) enabled, the definition of linear
> address canonicality is relaxed: not all of the previously required
> bits need to be equal, only the first and the last bit of that range.
> For example, a 5-level paging kernel address only needs to have bits
> [63] and [56] set.
>
> Change the canonical checking function to use bit masks instead of bit
> shifts.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> Acked-by: Alexander Potapenko <glider@google.com>
> ---
> Changelog v7:
> - Add Alexander's acked-by tag.
> - Add parentheses around vaddr_bits as suggested by checkpatch.
> - Apply the bitmasks to the __canonical_address() function which is used
> in kvm code.
>
> Changelog v6:
> - Use bitmasks to check both kernel and userspace addresses in the
> __is_canonical_address() (Dave Hansen and Samuel Holland).
>
> Changelog v4:
> - Add patch to the series.
>
> arch/x86/include/asm/page.h | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> index bcf5cad3da36..b7940fa49e64 100644
> --- a/arch/x86/include/asm/page.h
> +++ b/arch/x86/include/asm/page.h
> @@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
> return __va(pfn << PAGE_SHIFT);
> }
>
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
why is the choice of CANONICAL_MASK() gated at compile time? Shouldn’t this be a
runtime decision based on whether LAM is enabled or not on the running system?
> +#else
> +#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
> +#endif
> +
> +/*
> + * To make an address canonical either set or clear the bits defined by the
> + * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
> + * bit is a zero. Set the bits for kernel addresses if the top address bit is a
> + * one.
> + */
> static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
+Cc KVM
This is used extensively in KVM code. As far as I can tell, it may be used to determine
whether a guest virtual address is canonical or not. If that’s the case, the result should
depend on whether LAM is enabled for the guest, not the host (and certainly not a host's compile-time option).
> {
> - return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
> + return (vaddr & BIT_ULL(63)) ? vaddr | CANONICAL_MASK(vaddr_bits) :
> + vaddr & ~CANONICAL_MASK(vaddr_bits);
> }
>
> static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
On Fri, Jan 16, 2026, Andrey Ryabinin wrote:
> On 1/12/26 6:28 PM, Maciej Wieczor-Retman wrote:
> > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> > index bcf5cad3da36..b7940fa49e64 100644
> > --- a/arch/x86/include/asm/page.h
> > +++ b/arch/x86/include/asm/page.h
> > @@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
> > return __va(pfn << PAGE_SHIFT);
> > }
> >
> > +#ifdef CONFIG_KASAN_SW_TAGS
> > +#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
>
> why is the choice of CANONICAL_MASK() gated at compile time? Shouldn’t this be a
> runtime decision based on whether LAM is enabled or not on the running system?
>
> > +#else
> > +#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
> > +#endif
> > +
> > +/*
> > + * To make an address canonical either set or clear the bits defined by the
> > + * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
> > + * bit is a zero. Set the bits for kernel addresses if the top address bit is a
> > + * one.
> > + */
> > static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>
> +Cc KVM
Thanks!
> This is used extensively in KVM code. As far as I can tell, it may be used to
> determine whether a guest virtual address is canonical or not.
Yep, KVM uses this both to check canonical addresses and to force a canonical
address (Intel and AMD disagree on the MSR_IA32_SYSENTER_{EIP,ESP} semantics in
64-bit mode) for guest addresses. This change will break KVM badly if KASAN_SW_TAGS=y.
> case, the result should depend on whether LAM is enabled for the guest, not
> the host (and certainly not a host's compile-time option).
Ya, KVM could roll its own versions, but IMO these super low level helpers should
do exactly what they say. E.g. at a glance, I'm not sure pt_event_addr_filters_sync()
should be subjected to KASAN_SW_TAGS either. If that's true, then AFAICT the
_only_ code that wants the LAM-aware behavior is copy_from_kernel_nofault_allowed(),
so maybe just handle it there? Not sure that's a great long-term maintenance
story either though.
On 2026-01-16 at 06:57:04 -0800, Sean Christopherson wrote:
>On Fri, Jan 16, 2026, Andrey Ryabinin wrote:
>> On 1/12/26 6:28 PM, Maciej Wieczor-Retman wrote:
>> > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
>> > index bcf5cad3da36..b7940fa49e64 100644
>> > --- a/arch/x86/include/asm/page.h
>> > +++ b/arch/x86/include/asm/page.h
>> > @@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
>> > return __va(pfn << PAGE_SHIFT);
>> > }
>> >
>> > +#ifdef CONFIG_KASAN_SW_TAGS
>> > +#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
>>
>> why is the choice of CANONICAL_MASK() gated at compile time? Shouldn’t this be a
>> runtime decision based on whether LAM is enabled or not on the running system?
What would be appropriate for KVM? Instead of using #ifdefs checking
if(cpu_feature_enabled(X86_FEATURE_LAM))?
>>
>> > +#else
>> > +#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
>> > +#endif
>> > +
>> > +/*
>> > + * To make an address canonical either set or clear the bits defined by the
>> > + * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
>> > + * bit is a zero. Set the bits for kernel addresses if the top address bit is a
>> > + * one.
>> > + */
>> > static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>>
>> +Cc KVM
>
>Thanks!
>
>> This is used extensively in KVM code. As far as I can tell, it may be used to
>> determine whether a guest virtual address is canonical or not.
>
>Yep, KVM uses this both to check canonical addresses and to force a canonical
>address (Intel and AMD disagree on the MSR_IA32_SYSENTER_{EIP,ESP} semantics in
>64-bit mode) for guest addresses. This change will break KVM badly if KASAN_SW_TAGS=y.
Oh, thanks! That's good to know.
>
>> case, the result should depend on whether LAM is enabled for the guest, not
>> the host (and certainly not a host's compile-time option).
>
>Ya, KVM could roll its own versions, but IMO these super low level helpers should
>do exactly what they say. E.g. at a glance, I'm not sure pt_event_addr_filters_sync()
>should be subjected to KASAN_SW_TAGS either. If that's true, then AFAICT the
>_only_ code that wants the LAM-aware behavior is copy_from_kernel_nofault_allowed(),
>so maybe just handle it there? Not sure that's a great long-term maintenance
>story either though.
Yes, longterm it's probably best to just get it right in here.
--
Kind regards
Maciej Wieczór-Retman
On Fri, Jan 16, 2026, Maciej Wieczor-Retman wrote:
> On 2026-01-16 at 06:57:04 -0800, Sean Christopherson wrote:
> >On Fri, Jan 16, 2026, Andrey Ryabinin wrote:
> >> On 1/12/26 6:28 PM, Maciej Wieczor-Retman wrote:
> >> > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> >> > index bcf5cad3da36..b7940fa49e64 100644
> >> > --- a/arch/x86/include/asm/page.h
> >> > +++ b/arch/x86/include/asm/page.h
> >> > @@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
> >> > return __va(pfn << PAGE_SHIFT);
> >> > }
> >> >
> >> > +#ifdef CONFIG_KASAN_SW_TAGS
> >> > +#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
> >>
> >> why is the choice of CANONICAL_MASK() gated at compile time? Shouldn’t this be a
> >> runtime decision based on whether LAM is enabled or not on the running system?
>
> What would be appropriate for KVM? Instead of using #ifdefs checking
> if(cpu_feature_enabled(X86_FEATURE_LAM))?
None of the above. Practically speaking, the kernel APIs simply can't automatically
handle the checks, because they are dependent on guest virtual CPU state, _and_
on the exact operation. E.g. LAM doesn't apply to inputs to TLB invalidation
instructions like INVVPID and INVPCID.
By the time KVM invokes __is_canonical_address(), KVM has already done the necessary
LAM unmasking. E.g. having KVM pass in a flag saying it doesn't need LAM masking
would be rather silly.
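
A rough userspace sketch of that ordering, with invented type and helper
names (this is not KVM's actual code): the caller strips the LAM metadata
bits itself, based on what it knows about the vCPU and the operation, and
only then performs the plain canonicality check:

#include <stdint.h>
#include <stdbool.h>

struct vcpu_model {			/* invented stand-in for guest state */
	bool	lam_active;		/* guest has LAM enabled */
	uint8_t	vaddr_bits;		/* 48 or 57 */
};

static uint64_t sign_extend64(uint64_t value, unsigned int index)
{
	unsigned int shift = 63 - index;

	return (uint64_t)((int64_t)(value << shift) >> shift);
}

static bool is_canonical(uint64_t vaddr, uint8_t vaddr_bits)
{
	return sign_extend64(vaddr, vaddr_bits - 1) == vaddr;
}

static bool guest_vaddr_is_canonical(const struct vcpu_model *vcpu, uint64_t gva,
				     bool op_honors_lam)
{
	/* op_honors_lam would be false for e.g. TLB invalidation inputs. */
	if (vcpu->lam_active && op_honors_lam) {
		/* Overwrite the metadata bits, but keep bit 63 as-is. */
		uint64_t top = gva & (1ULL << 63);

		gva = (sign_extend64(gva, vcpu->vaddr_bits - 1) & ~(1ULL << 63)) | top;
	}

	return is_canonical(gva, vcpu->vaddr_bits);
}

int main(void)
{
	struct vcpu_model vcpu = { .lam_active = true, .vaddr_bits = 57 };

	/* Made-up tagged guest pointer: canonical once the metadata bits are ignored. */
	return !guest_vaddr_is_canonical(&vcpu, 0xd5ffffff00000000ULL, true);
}
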
> >> > +#else
> >> > +#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
> >> > +#endif
> >> > +
> >> > +/*
> >> > + * To make an address canonical either set or clear the bits defined by the
> >> > + * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
> >> > + * bit is a zero. Set the bits for kernel addresses if the top address bit is a
> >> > + * one.
> >> > + */
> >> > static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
> >>
> >> +Cc KVM
> >
> >Thanks!
> >
> >> This is used extensively in KVM code. As far as I can tell, it may be used to
> >> determine whether a guest virtual address is canonical or not.
> >
> >Yep, KVM uses this both to check canonical addresses and to force a canonical
> >address (Intel and AMD disagree on the MSR_IA32_SYSENTER_{EIP,ESP} semantics in
> >64-bit mode) for guest addresses. This change will break KVM badly if KASAN_SW_TAGS=y.
>
> Oh, thanks! That's good to know.
>
> >
> >> case, the result should depend on whether LAM is enabled for the guest, not
> >> the host (and certainly not a host's compile-time option).
> >
> >Ya, KVM could roll its own versions, but IMO these super low level helpers should
> >do exactly what they say. E.g. at a glance, I'm not sure pt_event_addr_filters_sync()
> >should be subjected to KASAN_SW_TAGS either. If that's true, then AFAICT the
> >_only_ code that wants the LAM-aware behavior is copy_from_kernel_nofault_allowed(),
> >so maybe just handle it there? Not sure that's a great long-term maintenance
> >story either though.
>
> Yes, longterm it's probably best to just get it right in here.
As above, I don't think that's feasible, because the context of both the current
(virtual) CPU and the usage matters. In other words, making __canonical_address()
itself LAM-aware feels wrong.
Actually, the kernel already has to deal with masking LAM bits for userspace
addresses, and this series needs to unmask kernel addresses in other flows that
effectively consume virtual addresses in software, so why not just do something
similar for copy_from_kernel_nofault_allowed()?
diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
index 42115ac079cf..0b3c96f8902a 100644
--- a/arch/x86/mm/maccess.c
+++ b/arch/x86/mm/maccess.c
@@ -33,7 +33,8 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
 	if (!boot_cpu_data.x86_virt_bits)
 		return true;
 
-	return __is_canonical_address(vaddr, boot_cpu_data.x86_virt_bits);
+	return __is_canonical_address(__tag_reset(vaddr),
+				      boot_cpu_data.x86_virt_bits);
 }
 #else
 bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
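
For context, a rough userspace model of what the __tag_reset() step above
amounts to, assuming the software tag sits in bits [62:57] of a kernel
pointer; the real macro in the series may differ in detail:

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT	57	/* assumed lowest tag bit, for this sketch only */

/* Sign-extend from the bit just below the tag field, wiping the tag. */
static uint64_t tag_reset(uint64_t vaddr)
{
	unsigned int shift = 64 - TAG_SHIFT;

	return (uint64_t)((int64_t)(vaddr << shift) >> shift);
}

int main(void)
{
	uint64_t tagged = 0xd5ffffff00000000ULL;	/* made-up tagged kernel pointer */

	printf("tagged:   0x%016llx\n", (unsigned long long)tagged);
	printf("untagged: 0x%016llx\n", (unsigned long long)tag_reset(tagged)); /* 0xffffffff00000000 */
	return 0;
}

The idea being that, with the tag bits folded back into copies of bit 56,
the plain __is_canonical_address() check in copy_from_kernel_nofault_allowed()
passes again for tagged kernel pointers.
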
On 2026-01-16 at 09:00:42 -0800, Sean Christopherson wrote:
>On Fri, Jan 16, 2026, Maciej Wieczor-Retman wrote:
>> On 2026-01-16 at 06:57:04 -0800, Sean Christopherson wrote:
>> >On Fri, Jan 16, 2026, Andrey Ryabinin wrote:
>> >> On 1/12/26 6:28 PM, Maciej Wieczor-Retman wrote:
>> >> > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
>> >> > index bcf5cad3da36..b7940fa49e64 100644
>> >> > --- a/arch/x86/include/asm/page.h
>> >> > +++ b/arch/x86/include/asm/page.h
>> >> > @@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
>> >> > return __va(pfn << PAGE_SHIFT);
>> >> > }
>> >> >
>> >> > +#ifdef CONFIG_KASAN_SW_TAGS
>> >> > +#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
>> >>
>> >> why is the choice of CANONICAL_MASK() gated at compile time? Shouldn’t this be a
>> >> runtime decision based on whether LAM is enabled or not on the running system?
>>
>> What would be appropriate for KVM? Instead of using #ifdefs checking
>> if(cpu_feature_enabled(X86_FEATURE_LAM))?
>
>None of the above. Practically speaking, the kernel APIs simply can't automatically
>handle the checks, because they are dependent on guest virtual CPU state, _and_
>on the exact operation. E.g. LAM doesn't apply to inputs to TLB invalidation
>instructions like INVVPID and INVPCID.
>
>By the time KVM invokes __is_canonical_address(), KVM has already done the necessary
>LAM unmasking. E.g. having KVM pass in a flag saying it doesn't need LAM masking
>would be rather silly.
Oh good, then I'll leave this function alone and try to work it out differently.
Thanks!
>
>> >> > +#else
>> >> > +#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
>> >> > +#endif
>> >> > +
>> >> > +/*
>> >> > + * To make an address canonical either set or clear the bits defined by the
>> >> > + * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
>> >> > + * bit is a zero. Set the bits for kernel addresses if the top address bit is a
>> >> > + * one.
>> >> > + */
>> >> > static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>> >>
>> >> +Cc KVM
>> >
>> >Thanks!
>> >
>> >> This is used extensively in KVM code. As far as I can tell, it may be used to
>> >> determine whether a guest virtual address is canonical or not.
>> >
>> >Yep, KVM uses this both to check canonical addresses and to force a canonical
>> >address (Intel and AMD disagree on the MSR_IA32_SYSENTER_{EIP,ESP} semantics in
>> >64-bit mode) for guest addresses. This change will break KVM badly if KASAN_SW_TAGS=y.
>>
>> Oh, thanks! That's good to know.
>>
>> >
>> >> case, the result should depend on whether LAM is enabled for the guest, not
>> >> the host (and certainly not a host's compile-time option).
>> >
>> >Ya, KVM could roll its own versions, but IMO these super low level helpers should
>> >do exactly what they say. E.g. at a glance, I'm not sure pt_event_addr_filters_sync()
>> >should be subjected to KASAN_SW_TAGS either. If that's true, then AFAICT the
>> >_only_ code that wants the LAM-aware behavior is copy_from_kernel_nofault_allowed(),
>> >so maybe just handle it there? Not sure that's a great long-term maintenance
>> >story either though.
>>
>> Yes, longterm it's probably best to just get it right in here.
>
>As above, I don't think that's feasible, because the context of both the current
>(virtual) CPU and the usage matters. In other words, making __canonical_address()
>itself LAM-aware feels wrong.
>
>Actually, the kernel already has to deal with masking LAM bits for userspace
>addresses, and this series needs to unmask kernel addresses in other flows that
>effectively consume virtual addresses in software, so why not just do something
>similar for copy_from_kernel_nofault_allowed()?
>
>diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
>index 42115ac079cf..0b3c96f8902a 100644
>--- a/arch/x86/mm/maccess.c
>+++ b/arch/x86/mm/maccess.c
>@@ -33,7 +33,8 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
> if (!boot_cpu_data.x86_virt_bits)
> return true;
>
>- return __is_canonical_address(vaddr, boot_cpu_data.x86_virt_bits);
>+ return __is_canonical_address(__tag_reset(vaddr),
>+ boot_cpu_data.x86_virt_bits);
> }
> #else
> bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
Thanks! I'll try that :)
--
Kind regards
Maciej Wieczór-Retman