[tip: x86/cpu] x86/cpu: Defer LASS enabling until userspace comes up

The following commit has been merged into the x86/cpu branch of tip:

Commit-ID:     b3226af5ad7bbfcba79d26f547fe6582baf20ce9
Gitweb:        https://git.kernel.org/tip/b3226af5ad7bbfcba79d26f547fe6582baf20ce9
Author:        Sohil Mehta <sohil.mehta@intel.com>
AuthorDate:    Tue, 20 Jan 2026 15:47:28 -08:00
Committer:     Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 03 Mar 2026 09:49:44 -08:00

x86/cpu: Defer LASS enabling until userspace comes up

LASS blocks any kernel access to the lower half of the virtual address
space. Unfortunately, some EFI accesses happen during boot with bit 63
cleared, which causes a #GP fault when LASS is enabled.

Notably, the SetVirtualAddressMap() (SVAM) call can only happen in EFI
physical mode. Also, EFI_BOOT_SERVICES_CODE/_DATA could be accessed even
after ExitBootServices(). The boot services memory is only truly freed
by efi_free_boot_services() after SVAM has completed.

To prevent EFI from tripping LASS, at a minimum, LASS enabling must be
deferred until EFI has completely finished entering virtual mode
(including freeing boot services memory). Moving setup_lass() to
arch_cpu_finalize_init() would do the trick, but it would make the
implementation fragile: something else might come along later that
requires moving LASS enabling yet again.

In general, security features such as LASS provide limited value before
userspace comes up. They aren't necessary during early boot while only
trusted ring 0 code is executing. Introduce a generic late initcall to
defer activating such CPU features until just before userspace starts.

For now, only move the LASS CR4 programming to this initcall. As APs are
already up by the time late initcalls run, some extra steps are needed
to enable LASS on all CPUs. Use a CPU hotplug callback instead of
on_each_cpu() or smp_call_function(). This ensures that LASS is enabled
on every CPU that is currently online as well as any future CPUs that
come online later. Note that even though hotplug callbacks run with
preemption enabled, cr4_set_bits() disables interrupts while updating
CR4.

Keep the existing logic in place to clear the LASS feature bits early.
setup_clear_cpu_cap() must be called before boot_cpu_data is finalized
and alternatives are patched. Eventually, the entire setup_lass() logic
can go away once the restrictions based on vsyscall emulation and EFI
are removed.

Signed-off-by: Sohil Mehta <sohil.mehta@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Tested-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Link: https://patch.msgid.link/20260120234730.2215498-2-sohil.mehta@intel.com
---
 arch/x86/kernel/cpu/common.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 1c3261c..8c56d59 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -422,11 +422,32 @@ static __always_inline void setup_lass(struct cpuinfo_x86 *c)
 	if (IS_ENABLED(CONFIG_X86_VSYSCALL_EMULATION) ||
 	    IS_ENABLED(CONFIG_EFI)) {
 		setup_clear_cpu_cap(X86_FEATURE_LASS);
-		return;
 	}
+}
 
+static int enable_lass(unsigned int cpu)
+{
 	cr4_set_bits(X86_CR4_LASS);
+
+	return 0;
+}
+
+/*
+ * Finalize features that need to be enabled just before entering
+ * userspace. Note that this only runs on a single CPU. Use appropriate
+ * callbacks if all the CPUs need to reflect the same change.
+ */
+static int cpu_finalize_pre_userspace(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_LASS))
+		return 0;
+
+	/* Runs on all online CPUs and future CPUs that come online. */
+	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/lass:enable", enable_lass, NULL);
+
+	return 0;
 }
+late_initcall(cpu_finalize_pre_userspace);
 
 /* These bits should not change their value after CPU init is finished. */
 static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP |