From: Valentin Schneider <vschneid@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra,
    Arnaldo Carvalho de Melo, Josh Poimboeuf, Paolo Bonzini,
    Arnd Bergmann, Frederic Weisbecker, "Paul E. McKenney", Jason Baron,
    Steven Rostedt, Ard Biesheuvel, Sami Tolvanen, "David S. Miller",
    Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
    Uladzislau Rezki, Mathieu Desnoyers, Mel Gorman, Andrew Morton,
    Masahiro Yamada, Han Shen, Rik van Riel, Jann Horn, Dan Carpenter,
    Oleg Nesterov, Juri Lelli, Clark Williams, Tomas Glozar,
    Yair Podemsky, Marcelo Tosatti, Daniel Wagner, Petr Tesarik,
    Shrikanth Hegde
Subject: [RFC PATCH v8 08/10] x86/mm/pti: Introduce a kernel/user CR3 software signal
Date: Tue, 24 Mar 2026 10:47:59 +0100
Message-ID: <20260324094801.3092968-9-vschneid@redhat.com>
In-Reply-To: <20260324094801.3092968-1-vschneid@redhat.com>
References: <20260324094801.3092968-1-vschneid@redhat.com>

Later commits will rely on being able to check whether a remote CPU is
using the kernel or the user CR3. This software signal needs to be
updated before the actual CR3 write, IOW the update always immediately
precedes the write:

  KERNEL_CR3_LOADED := 1
  SWITCH_TO_KERNEL_CR3
  [...]
  KERNEL_CR3_LOADED := 0
  SWITCH_TO_USER_CR3

The variable also gets mapped into the user-space visible pages. I
tried really hard not to do that, and at some point had something
mostly working that aliased the variable through the cpu_entry_area,
accessed like so before the switch to the kernel CR3:

  subq	$10, %rsp
  sgdt	(%rsp)
  movq	2(%rsp), \scratch_reg	/* GDT address */
  addq	$10, %rsp
  movl	$1, CPU_ENTRY_AREA_kernel_cr3(\scratch_reg)

However, this explodes when running 64-bit user code that invokes
SYSCALL, since the scratch reg is %rsp itself, and I figured this was
enough headaches.

This will only be really useful for NOHZ_FULL CPUs, but it should be
cheaper to unconditionally update a never-used per-CPU variable living
in its own cacheline than to check a shared cpumask such as
housekeeping_cpumask(HK_TYPE_KERNEL_NOISE) at every entry.

Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
 arch/x86/Kconfig                | 14 +++++++++++++
 arch/x86/entry/calling.h        | 13 ++++++++++++
 arch/x86/entry/syscall_64.c     |  4 ++++
 arch/x86/include/asm/tlbflush.h |  3 +++
 arch/x86/mm/pti.c               | 36 ++++++++++++++++++++++-----------
 5 files changed, 58 insertions(+), 12 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 80527299f859a..f680e83cd5962 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2192,6 +2192,20 @@ config ADDRESS_MASKING
 	  The capability can be used for efficient address sanitizers (ASAN)
 	  implementation and for optimizations in JITs.
 
+config TRACK_CR3
+	def_bool n
+	prompt "Track which CR3 is in use"
+	depends on X86_64 && MITIGATION_PAGE_TABLE_ISOLATION && NO_HZ_FULL
+	help
+	  This option adds a software signal that allows checking remotely
+	  whether a CPU is using the user or the kernel page table.
+
+	  This allows further optimizations for NOHZ_FULL CPUs.
+
+	  This obviously makes the user<->kernel transition overhead even worse.
+
+	  If unsure, say N.
+
 config HOTPLUG_CPU
 	def_bool y
 	depends on SMP
diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 77e2d920a6407..4099b7d86efd9 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 
 /*
 
@@ -170,8 +171,17 @@ For 32-bit we have the following conventions - kernel is built with
 	andq	$(~PTI_USER_PGTABLE_AND_PCID_MASK), \reg
 .endm
 
+.macro NOTE_CR3_SWITCH scratch_reg:req in_kernel:req
+#ifdef CONFIG_TRACK_CR3
+	STATIC_BRANCH_FALSE_LIKELY housekeeping_overridden, .Lend_\@
+	movl	\in_kernel, PER_CPU_VAR(kernel_cr3_loaded)
+.Lend_\@:
+#endif // CONFIG_TRACK_CR3
+.endm
+
 .macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
 	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+	NOTE_CR3_SWITCH \scratch_reg $1
 	mov	%cr3, \scratch_reg
 	ADJUST_KERNEL_CR3 \scratch_reg
 	mov	\scratch_reg, %cr3
@@ -182,6 +192,7 @@ For 32-bit we have the following conventions - kernel is built with
 	PER_CPU_VAR(cpu_tlbstate + TLB_STATE_user_pcid_flush_mask)
 
 .macro SWITCH_TO_USER_CR3 scratch_reg:req scratch_reg2:req
+	NOTE_CR3_SWITCH \scratch_reg $0
 	mov	%cr3, \scratch_reg
 
 	ALTERNATIVE "jmp .Lwrcr3_\@", "", X86_FEATURE_PCID
@@ -229,6 +240,7 @@ For 32-bit we have the following conventions - kernel is built with
 
 .macro SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg:req save_reg:req
 	ALTERNATIVE "jmp .Ldone_\@", "", X86_FEATURE_PTI
+	NOTE_CR3_SWITCH \scratch_reg $1
 	movq	%cr3, \scratch_reg
 	movq	\scratch_reg, \save_reg
 	/*
@@ -257,6 +269,7 @@ For 32-bit we have the following conventions - kernel is built with
 	bt	$PTI_USER_PGTABLE_BIT, \save_reg
 	jnc	.Lend_\@
 
+	NOTE_CR3_SWITCH \scratch_reg $0
 	ALTERNATIVE "jmp .Lwrcr3_\@", "", X86_FEATURE_PCID
 
 	/*
diff --git a/arch/x86/entry/syscall_64.c b/arch/x86/entry/syscall_64.c
index b6e68ea98b839..7583f71978856 100644
--- a/arch/x86/entry/syscall_64.c
+++ b/arch/x86/entry/syscall_64.c
@@ -83,6 +83,10 @@ static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr)
 	return false;
 }
 
+#ifdef CONFIG_TRACK_CR3
+DEFINE_PER_CPU_PAGE_ALIGNED(bool, kernel_cr3_loaded) = true;
+#endif
+
 /* Returns true to return using SYSRET, or false to use IRET */
 __visible noinstr bool do_syscall_64(struct pt_regs *regs, int nr)
 {
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 00daedfefc1b0..3b3aceee701e6 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -17,6 +17,9 @@
 #include
 
 DECLARE_PER_CPU(u64, tlbstate_untag_mask);
+#ifdef CONFIG_TRACK_CR3
+DECLARE_PER_CPU_PAGE_ALIGNED(bool, kernel_cr3_loaded);
+#endif
 
 void __flush_tlb_all(void);
 
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index f7546e9e8e896..e75450cabd3a6 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -440,6 +440,18 @@ static void __init pti_clone_p4d(unsigned long addr)
 	*user_p4d = *kernel_p4d;
 }
 
+static void __init pti_clone_percpu(unsigned long va)
+{
+	phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
+	pte_t *target_pte;
+
+	target_pte = pti_user_pagetable_walk_pte(va, false);
+	if (WARN_ON(!target_pte))
+		return;
+
+	*target_pte = pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL);
+}
+
 /*
  * Clone the CPU_ENTRY_AREA and associated data into the user space visible
  * page table.
@@ -450,25 +462,25 @@ static void __init pti_clone_user_shared(void)
 
 	pti_clone_p4d(CPU_ENTRY_AREA_BASE);
 
+	/*
+	 * This is done for all possible CPUs during boot to ensure that it's
+	 * propagated to all mms.
+	 */
 	for_each_possible_cpu(cpu) {
 		/*
 		 * The SYSCALL64 entry code needs one word of scratch space
 		 * in which to spill a register. It lives in the sp2 slot
 		 * of the CPU's TSS.
-		 *
-		 * This is done for all possible CPUs during boot to ensure
-		 * that it's propagated to all mms.
 		 */
+		pti_clone_percpu((unsigned long)&per_cpu(cpu_tss_rw, cpu));
 
-		unsigned long va = (unsigned long)&per_cpu(cpu_tss_rw, cpu);
-		phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
-		pte_t *target_pte;
-
-		target_pte = pti_user_pagetable_walk_pte(va, false);
-		if (WARN_ON(!target_pte))
-			return;
-
-		*target_pte = pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL);
+#ifdef CONFIG_TRACK_CR3
+		/*
+		 * The entry code needs access to the @kernel_cr3_loaded percpu
+		 * variable before the kernel CR3 is loaded.
+		 */
+		pti_clone_percpu((unsigned long)&per_cpu(kernel_cr3_loaded, cpu));
+#endif
 	}
 }
 
-- 
2.52.0
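
P.S. To illustrate the read side of the signal: nothing in this patch
consumes the flag yet, and the helper below is purely a sketch of what
a later commit could do from a housekeeping CPU; the name
cpu_on_kernel_cr3() is made up and appears nowhere in this series.

  #include <asm/tlbflush.h>	/* kernel_cr3_loaded declaration */

  /*
   * Illustrative sketch only -- not part of this patch.
   *
   * The NOTE_CR3_SWITCH write always precedes the CR3 write, so:
   *  - true  means @cpu is running (or about to run) on the kernel CR3;
   *  - false means @cpu is running (or about to run) on the user CR3,
   *    and will pass through SWITCH_TO_KERNEL_CR3 on its next kernel
   *    entry before touching kernel-only mappings.
   */
  static inline bool cpu_on_kernel_cr3(int cpu)
  {
  	return READ_ONCE(per_cpu(kernel_cr3_loaded, cpu));
  }

A NOHZ_FULL-aware IPI path could then, for instance, skip CPUs for
which this returns false and let the pending work be picked up at
their next kernel entry, modulo whatever ordering guarantees such a
consumer would need on top of the plain READ_ONCE().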