Linear Address Space Separation (LASS) is a security feature that prevents
malicious virtual address space accesses across user/kernel mode.

Such mode-based access protection already exists today with paging and
features such as SMEP and SMAP. However, to enforce these protections, the
processor must traverse the paging structures in memory. Malicious software
can use timing information resulting from this traversal to determine
details about the paging structures, and these details may in turn be used
to determine the layout of kernel memory.

The LASS mechanism provides the same mode-based protections as paging, but
without traversing the paging structures. Because the protections enforced
by LASS are applied before paging, software cannot derive paging-based
timing information from the various caching structures such as the TLBs,
mid-level caches, page walker, data caches, etc. LASS thereby blocks
probing techniques based on double page faults, TLB flush and reload, and
software prefetch instructions. See [2], [3] and [4] for some research on
the related attack vectors. Had it been available, LASS alone would have
mitigated Meltdown. (Hindsight is 20/20 :) In addition, LASS prevents an
attack vector described in the Spectre LAM (SLAM) whitepaper [7].

LASS enforcement relies on the typical kernel implementation that divides
the 64-bit virtual address space into two halves:

  Addr[63]=0 -> User address space
  Addr[63]=1 -> Kernel address space

Any data access or code execution across address spaces typically results
in a #GP fault.

Kernel accesses usually only touch the kernel address space. However, there
are valid reasons for the kernel to access memory in the user half. For
these cases (such as text poking and EFI runtime accesses), the kernel can
temporarily suspend LASS enforcement by toggling SMAP (Supervisor Mode
Access Prevention) with the stac()/clac() instructions, and in one instance
by outright disabling LASS for an EFI runtime call.

User space cannot access any kernel address while LASS is enabled.
Unfortunately, the legacy vsyscall functions are located in the address
range 0xffffffffff600000 - 0xffffffffff601000 and are emulated in the
kernel. To avoid breaking user applications when LASS is enabled, extend
the vsyscall emulation in execute (XONLY) mode to the #GP fault handler.

In contrast, the vsyscall EMULATE mode is deprecated and not expected to be
used by anyone. Supporting EMULATE mode with LASS would need complex
instruction decoding in the #GP fault handler and is probably not worth the
hassle. Disable LASS in this rare case, when someone absolutely needs
vsyscall=emulate and enables it via the command line.
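To make the stac()/clac() toggling above concrete, here is a minimal
sketch, assuming LASS is enumerated as X86_FEATURE_LASS and the helpers end
up named lass_disable_enforcement()/lass_enable_enforcement() as mentioned
in the changelog below; the actual arch/x86/include/asm/smap.h changes in
this series may differ in detail. The pattern mirrors the existing
stac()/clac() helpers, so the calls compile to NOPs on CPUs without LASS:

/*
 * Minimal sketch, not necessarily the series' exact code: LASS enforcement
 * for supervisor data accesses is suspended the same way SMAP is, by
 * setting EFLAGS.AC.  Modelled on the existing stac()/clac() helpers in
 * arch/x86/include/asm/smap.h; the feature bit name is assumed.  Requires
 * <asm/alternative.h> and <asm/cpufeatures.h>.
 */
static __always_inline void lass_disable_enforcement(void)
{
	/* Patched to "stac" only when the CPU enumerates LASS; NOP otherwise */
	alternative("", "stac", X86_FEATURE_LASS);
}

static __always_inline void lass_enable_enforcement(void)
{
	/* Patched to "clac" only when the CPU enumerates LASS; NOP otherwise */
	alternative("", "clac", X86_FEATURE_LASS);
}

Suspending enforcement this way covers supervisor data accesses to the user
half; the one EFI case mentioned above instead disables LASS entirely (via
its CR4 bit) around the set_virtual_address_map() call.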
Changes from v6[10]:
 - Rework #SS handler to work properly on FRED;
 - Do not require X86_PF_INSTR to emulate vsyscall;
 - Move lass_clac()/stac() definitions to the patch where they are used;
 - Rename lass_clac()/stac() to lass_disable/enable_enforcement();
 - Fix several build issues around inline memcpy and memset;
 - Fix sparse warning;
 - Adjust comments and commit messages;
 - Drop "x86/efi: Move runtime service initialization to arch/x86" patch
   as it got applied;

Changes from v5[9]:
 - Report LASS violation as NULL pointer dereference if the address is
   in the first page frame;
 - Provide a helpful error message on #SS due to LASS violation;
 - Fold patch for vsyscall=emulate documentation into the patch that
   disables LASS with vsyscall=emulate;
 - Rewrite __inline_memset() and __inline_memcpy();
 - Adjust comments and commit messages;

Changes from v4[8]:
 - Added PeterZ's Originally-by and SoB to 2/16
 - Added lass_clac()/lass_stac() to differentiate from SMAP-necessitated
   clac()/stac() and to be NOPs on CPUs that don't support LASS
 - Moved the LASS enabling patch to the end to avoid rendering machines
   unbootable until the patch that disables LASS around EFI initialization
 - Reverted Pawan's LAM disabling commit

Changes from v3[6]:
 - Made LAM dependent on LASS
 - Moved EFI runtime initialization to the x86 side of things
 - Suspended LASS validation around the EFI set_virtual_address_map call
 - Added a message for the case of a kernel-side LASS violation
 - Moved inline memset/memcpy versions to the common string.h

Changes from v2[5]:
 - Added myself to the SoB chain

Changes from v1[1]:
 - Emulate vsyscall violations in execute mode in the #GP fault handler
 - Use inline memcpy and memset while patching alternatives
 - Remove CONFIG_X86_LASS
 - Make LASS depend on SMAP
 - Dropped the minimal KVM enabling patch

[1] https://lore.kernel.org/lkml/20230110055204.3227669-1-yian.chen@intel.com/
[2] “Practical Timing Side Channel Attacks against Kernel Space ASLR”,
    https://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf
[3] “Prefetch Side-Channel Attacks: Bypassing SMAP and Kernel ASLR”,
    http://doi.acm.org/10.1145/2976749.2978356
[4] “Harmful prefetch on Intel”,
    https://ioactive.com/harmful-prefetch-on-intel/ (H/T Anders)
[5] https://lore.kernel.org/all/20230530114247.21821-1-alexander.shishkin@linux.intel.com/
[6] https://lore.kernel.org/all/20230609183632.48706-1-alexander.shishkin@linux.intel.com/
[7] https://download.vusec.net/papers/slam_sp24.pdf
[8] https://lore.kernel.org/all/20240710160655.3402786-1-alexander.shishkin@linux.intel.com/
[9] https://lore.kernel.org/all/20241028160917.1380714-1-alexander.shishkin@linux.intel.com
[10] https://lore.kernel.org/all/20250620135325.3300848-1-kirill.shutemov@linux.intel.com/

Alexander Shishkin (4):
  x86/cpu: Defer CR pinning setup until after EFI initialization
  efi: Disable LASS around set_virtual_address_map() EFI call
  x86/traps: Communicate a LASS violation in #GP message
  x86/cpu: Make LAM depend on LASS
Kirill A. Shutemov (4):
  x86/asm: Introduce inline memcpy and memset
  x86/vsyscall: Do not require X86_PF_INSTR to emulate vsyscall
  x86/traps: Handle LASS thrown #SS
  x86: Re-enable Linear Address Masking

Sohil Mehta (7):
  x86/cpu: Enumerate the LASS feature bits
  x86/alternatives: Disable LASS when patching kernel alternatives
  x86/vsyscall: Reorganize the #PF emulation code
  x86/traps: Consolidate user fixups in exc_general_protection()
  x86/vsyscall: Add vsyscall emulation for #GP
  x86/vsyscall: Disable LASS if vsyscall mode is set to EMULATE
  x86/cpu: Enable LASS during CPU initialization

Yian Chen (1):
  x86/cpu: Set LASS CR4 bit as pinning sensitive

 .../admin-guide/kernel-parameters.txt        |  4 +-
 arch/x86/Kconfig                             |  1 -
 arch/x86/Kconfig.cpufeatures                 |  4 +
 arch/x86/entry/vsyscall/vsyscall_64.c        | 69 +++++++++++------
 arch/x86/include/asm/cpufeatures.h           |  1 +
 arch/x86/include/asm/smap.h                  | 27 ++++++-
 arch/x86/include/asm/string.h                | 46 ++++++++++++
 arch/x86/include/asm/uaccess_64.h            | 38 +++-------
 arch/x86/include/asm/vsyscall.h              | 14 +++-
 arch/x86/include/uapi/asm/processor-flags.h  |  2 +
 arch/x86/kernel/alternative.c                | 14 +++-
 arch/x86/kernel/cpu/common.c                 | 21 ++++--
 arch/x86/kernel/cpu/cpuid-deps.c             |  2 +
 arch/x86/kernel/traps.c                      | 75 +++++++++++++++----
 arch/x86/kernel/umip.c                       |  3 +
 arch/x86/lib/clear_page_64.S                 | 10 ++-
 arch/x86/mm/fault.c                          |  2 +-
 arch/x86/platform/efi/efi.c                  | 15 ++++
 tools/arch/x86/include/asm/cpufeatures.h     |  1 +
 19 files changed, 264 insertions(+), 85 deletions(-)

-- 
2.47.2
On 25/06/2025 14:50, Kirill A. Shutemov wrote:
> Linear Address Space Separation (LASS) is a security feature that intends to
> prevent malicious virtual address space accesses across user/kernel mode.

I applied these patches on top of tip/master and when I try to boot it
fails with errno 12 (ENOMEM - Cannot allocate memory):

[    1.517526] Kernel panic - not syncing: Requested init /bin/bash failed (error -12).

Just using standard defconfig and booting in qemu/KVM with 2G RAM.

Bisect lands on "x86/asm: Introduce inline memcpy and memset".

Thanks,
Vegard
On 26/06/2025 11:22, Vegard Nossum wrote:
> 
> On 25/06/2025 14:50, Kirill A. Shutemov wrote:
>> Linear Address Space Separation (LASS) is a security feature that
>> intends to
>> prevent malicious virtual address space accesses across user/kernel mode.
> 
> I applied these patches on top of tip/master and when I try to boot it
> fails with errno 12 (ENOMEM - Cannot allocate memory):
> 
> [    1.517526] Kernel panic - not syncing: Requested init /bin/bash
> failed (error -12).
> 
> Just using standard defconfig and booting in qemu/KVM with 2G RAM.
> 
> Bisect lands on "x86/asm: Introduce inline memcpy and memset".

I think the newly added mulq to rep_stos_alternative clobbers %rdx, at
least this patch fixed it for me:

diff --git a/arch/x86/include/asm/string.h b/arch/x86/include/asm/string.h
index 5cd0f18a431fe..bc096526432a1 100644
--- a/arch/x86/include/asm/string.h
+++ b/arch/x86/include/asm/string.h
@@ -28,7 +28,7 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
 		     "2:\n\t"
 		     _ASM_EXTABLE_UA(1b, 2b)
 		     :"+c" (len), "+D" (to), "+S" (from), ASM_CALL_CONSTRAINT
-		     : : "memory", _ASM_AX);
+		     : : "memory", _ASM_AX, _ASM_DX);
 
 	return ret + len;
 }
@@ -44,7 +44,7 @@ static __always_inline void *__inline_memset(void *addr, int v, size_t len)
 		     _ASM_EXTABLE_UA(1b, 2b)
 		     : "+c" (len), "+D" (addr), ASM_CALL_CONSTRAINT
 		     : "a" ((uint8_t)v)
-		     : "memory", _ASM_SI);
+		     : "memory", _ASM_SI, _ASM_DX);
 
 	return ret + len;
 }
diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
index ca94828def624..77cfd75718623 100644
--- a/arch/x86/lib/clear_page_64.S
+++ b/arch/x86/lib/clear_page_64.S
@@ -64,6 +64,7 @@ EXPORT_SYMBOL_GPL(clear_page_erms)
  *
  * Output:
  * rcx: uncleared bytes or 0 if successful.
+ * rdx: clobbered
  */
 SYM_FUNC_START(rep_stos_alternative)
 	ANNOTATE_NOENDBR

Thanks,
Vegard
On Thu, Jun 26, 2025 at 11:35:21AM +0200, Vegard Nossum wrote:
> 
> On 26/06/2025 11:22, Vegard Nossum wrote:
> > 
> > On 25/06/2025 14:50, Kirill A. Shutemov wrote:
> > > Linear Address Space Separation (LASS) is a security feature that
> > > intends to
> > > prevent malicious virtual address space accesses across user/kernel mode.
> > 
> > I applied these patches on top of tip/master and when I try to boot it
> > fails with errno 12 (ENOMEM - Cannot allocate memory):
> > 
> > [    1.517526] Kernel panic - not syncing: Requested init /bin/bash
> > failed (error -12).

For some reason, I failed to reproduce it. What is your toolchain?

> > Just using standard defconfig and booting in qemu/KVM with 2G RAM.
> > 
> > Bisect lands on "x86/asm: Introduce inline memcpy and memset".
> 
> I think the newly added mulq to rep_stos_alternative clobbers %rdx,

Yes, it makes sense.

> at least this patch fixed it for me:
> 
> diff --git a/arch/x86/include/asm/string.h b/arch/x86/include/asm/string.h
> index 5cd0f18a431fe..bc096526432a1 100644
> --- a/arch/x86/include/asm/string.h
> +++ b/arch/x86/include/asm/string.h
> @@ -28,7 +28,7 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
>  		     "2:\n\t"
>  		     _ASM_EXTABLE_UA(1b, 2b)
>  		     :"+c" (len), "+D" (to), "+S" (from), ASM_CALL_CONSTRAINT
> -		     : : "memory", _ASM_AX);
> +		     : : "memory", _ASM_AX, _ASM_DX);
> 
>  	return ret + len;
>  }

This part is not needed. rep_movs_alternative() doesn't touch RDX.

I will fold the patch below.

Or maybe some asm guru can suggest a better way to fix it without
clobbering RDX?

diff --git a/arch/x86/include/asm/string.h b/arch/x86/include/asm/string.h
index 5cd0f18a431f..b0a26a3f11e0 100644
--- a/arch/x86/include/asm/string.h
+++ b/arch/x86/include/asm/string.h
@@ -44,7 +44,7 @@ static __always_inline void *__inline_memset(void *addr, int v, size_t len)
 		     _ASM_EXTABLE_UA(1b, 2b)
 		     : "+c" (len), "+D" (addr), ASM_CALL_CONSTRAINT
 		     : "a" ((uint8_t)v)
-		     : "memory", _ASM_SI);
+		     : "memory", _ASM_SI, _ASM_DX);
 
 	return ret + len;
 }
diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
index ca94828def62..d904c781fa3f 100644
--- a/arch/x86/lib/clear_page_64.S
+++ b/arch/x86/lib/clear_page_64.S
@@ -64,12 +64,15 @@ EXPORT_SYMBOL_GPL(clear_page_erms)
  *
  * Output:
  * rcx: uncleared bytes or 0 if successful.
+ * rdx: clobbered
  */
 SYM_FUNC_START(rep_stos_alternative)
 	ANNOTATE_NOENDBR
 	movzbq %al, %rsi
 	movabs $0x0101010101010101, %rax
+
+	/* %rdx:%rax = %rax * %rsi */
 	mulq %rsi
 
 	cmpq $64,%rcx

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
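As background for the clobber: the multiply in question is the byte-splat
that rep_stos_alternative performs before the fill. A small user-space
illustration (hypothetical helper, not kernel code) of the arithmetic and
of why %rdx gets written:

#include <stdint.h>
#include <stdio.h>

/*
 * Multiplying 0x0101010101010101 by an 8-bit value replicates that value
 * into every byte of a 64-bit word.  rep_stos_alternative does this with a
 * one-operand mulq, which always writes the 128-bit product to %rdx:%rax,
 * so %rdx is clobbered even though the high half is zero for this multiply.
 */
static uint64_t splat_byte(uint8_t v)
{
	return 0x0101010101010101ULL * v;
}

int main(void)
{
	/* prints abababababababab */
	printf("%016llx\n", (unsigned long long)splat_byte(0xab));
	return 0;
}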
On Thu, 26 Jun 2025 15:47:36 +0300
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> wrote:

> On Thu, Jun 26, 2025 at 11:35:21AM +0200, Vegard Nossum wrote:
> > 
> > On 26/06/2025 11:22, Vegard Nossum wrote:
> > > 
> > > On 25/06/2025 14:50, Kirill A. Shutemov wrote:
> > > > Linear Address Space Separation (LASS) is a security feature that
> > > > intends to
> > > > prevent malicious virtual address space accesses across user/kernel mode.
> > > 
> > > I applied these patches on top of tip/master and when I try to boot it
> > > fails with errno 12 (ENOMEM - Cannot allocate memory):
> > > 
> > > [    1.517526] Kernel panic - not syncing: Requested init /bin/bash
> > > failed (error -12).
> 
> For some reason, I failed to reproduce it. What is your toolchain?
> 
> > > Just using standard defconfig and booting in qemu/KVM with 2G RAM.
> > > 
> > > Bisect lands on "x86/asm: Introduce inline memcpy and memset".
> > 
> > I think the newly added mulq to rep_stos_alternative clobbers %rdx,
> 
> Yes, it makes sense.
> 
> > at least this patch fixed it for me:
> > 
> > diff --git a/arch/x86/include/asm/string.h b/arch/x86/include/asm/string.h
> > index 5cd0f18a431fe..bc096526432a1 100644
> > --- a/arch/x86/include/asm/string.h
> > +++ b/arch/x86/include/asm/string.h
> > @@ -28,7 +28,7 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
> >  		     "2:\n\t"
> >  		     _ASM_EXTABLE_UA(1b, 2b)
> >  		     :"+c" (len), "+D" (to), "+S" (from), ASM_CALL_CONSTRAINT
> > -		     : : "memory", _ASM_AX);
> > +		     : : "memory", _ASM_AX, _ASM_DX);
> > 
> >  	return ret + len;
> >  }
> 
> This part is not needed. rep_movs_alternative() doesn't touch RDX.
> 
> I will fold the patch below.
> 
> Or maybe some asm guru can suggest a better way to fix it without
> clobbering RDX?

Or separate out the code where the value is a compile-time zero.
That is pretty much 99% of the calls.

	David
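A rough sketch of that suggestion (hypothetical names, not code from the
series): dispatch on __builtin_constant_p() so constant-zero callers never
reach the splat-and-mul path. Only the compile-time dispatch is shown;
both branches simply fall back to the C library memset here:

#include <stddef.h>
#include <string.h>

static inline void *sketch_memset(void *addr, int v, size_t len)
{
	/*
	 * Constant zero (the common case): no byte-splat multiply is needed,
	 * so a rep_stos-style helper on this path would not clobber %rdx.
	 */
	if (__builtin_constant_p(v) && v == 0)
		return memset(addr, 0, len);

	/* Generic path: the fill value must be splatted across a word first. */
	return memset(addr, v, len);
}

int main(void)
{
	char buf[16];

	sketch_memset(buf, 0, sizeof(buf));	/* constant-zero path */
	sketch_memset(buf, 0x5a, sizeof(buf));	/* generic path */
	return 0;
}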
On 26/06/2025 14:47, Kirill A. Shutemov wrote:
> On Thu, Jun 26, 2025 at 11:35:21AM +0200, Vegard Nossum wrote:
>> On 26/06/2025 11:22, Vegard Nossum wrote:
>>> On 25/06/2025 14:50, Kirill A. Shutemov wrote:
>>>> Linear Address Space Separation (LASS) is a security feature that
>>>> intends to
>>>> prevent malicious virtual address space accesses across user/kernel mode.
>>>
>>> I applied these patches on top of tip/master and when I try to boot it
>>> fails with errno 12 (ENOMEM - Cannot allocate memory):
>>>
>>> [    1.517526] Kernel panic - not syncing: Requested init /bin/bash
>>> failed (error -12).
> 
> For some reason, I failed to reproduce it. What is your toolchain?

$ gcc --version
gcc (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2.1.0.1)

I tried to diff vmlinux with and without the clobber change and I see a
bunch of changed functions; the first one I looked at is calling
put_user() -- I guess anything could be affected, really.

>> @@ -28,7 +28,7 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
>>  		     "2:\n\t"
>>  		     _ASM_EXTABLE_UA(1b, 2b)
>>  		     :"+c" (len), "+D" (to), "+S" (from), ASM_CALL_CONSTRAINT
>> -		     : : "memory", _ASM_AX);
>> +		     : : "memory", _ASM_AX, _ASM_DX);
>>
>>  	return ret + len;
>>  }
> 
> This part is not needed. rep_movs_alternative() doesn't touch RDX.

True, I didn't look closely enough...

> I will fold the patch below.

Thanks,
Vegard