From nobody Fri Apr 3 04:09:03 2026
Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16])
	by smtp.subspace.kernel.org (Postfix) with ESMTPS;
	Fri, 3 Apr 2026 00:30:49 +0000 (UTC)
Date: Thu, 2 Apr 2026 17:30:47 -0700
From: Pawan Gupta
To: x86@kernel.org, Jon Kohler, Nikolay Borisov, "H. Peter Anvin",
	Josh Poimboeuf, David Kaplan, Sean Christopherson, Borislav Petkov,
	Dave Hansen, Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, KP Singh, Jiri Olsa, "David S. Miller",
	David Laight, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
	David Ahern, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, John Fastabend, Stanislav Fomichev, Hao Luo,
	Paolo Bonzini, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick,
	Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org,
	linux-doc@vger.kernel.org
Subject: [PATCH v9 01/10] x86/bhi: x86/vmscape: Move LFENCE out of clear_bhb_loop()
Message-ID: <20260402-vmscape-bhb-v9-1-94d16bc29774@linux.intel.com>
X-Mailer: b4 0.15-dev
References: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, the BHB clearing sequence is followed by an LFENCE to prevent
subsequent indirect branches from executing transiently before the BHB is
fully cleared. However, the LFENCE can be unnecessary in some cases, e.g.
when the kernel is using the BHI_DIS_S mitigation and BHB clearing is only
needed for userspace. In such cases the LFENCE is redundant because the
ring transition already provides the necessary serialization.

Below is a quick recap of the BHI mitigation options:

On Alder Lake and newer:

  BHI_DIS_S:  Hardware control that mitigates BHI in ring0, with low
              performance overhead.

  Long loop:  Alternatively, a longer version of the BHB clearing
              sequence can be used to mitigate BHI. It can also be used
              to mitigate the BHI variant of VMSCAPE. This is not yet
              implemented in Linux.

On older CPUs:

  Short loop: Clears the BHB at kernel entry and VMexit. The "long
              loop" is effective on older CPUs as well, but should be
              avoided there because of its unnecessary overhead.

On Alder Lake and newer CPUs, eIBRS isolates the indirect targets between
guest and host. But when affected by the BHI variant of VMSCAPE, a guest's
branch history may still influence indirect branches in userspace. This
also means the big-hammer IBPB could be replaced with a cheaper option
that clears the BHB at exit-to-userspace after a VMexit.

In preparation for adding support for the BHB sequence (without LFENCE) on
newer CPUs, move the LFENCE to the caller side, after clear_bhb_loop() is
executed, and let callers decide whether they need the LFENCE. This adds a
few extra bytes to the call sites, but it obviates the need for multiple
variants of clear_bhb_loop().

Suggested-by: Dave Hansen
Tested-by: Jon Kohler
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S            | 5 ++++-
 arch/x86/include/asm/nospec-branch.h | 4 ++--
 arch/x86/net/bpf_jit_comp.c          | 2 ++
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 42447b1e1dff..3a180a36ca0e 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1528,6 +1528,9 @@ SYM_CODE_END(rewind_stack_and_make_dead)
  * refactored in the future if needed. The .skips are for safety, to ensure
  * that all RETs are in the second half of a cacheline to mitigate Indirect
  * Target Selection, rather than taking the slowpath via its_return_thunk.
+ *
+ * Note, callers should use a speculation barrier like LFENCE immediately after
+ * a call to this function to ensure BHB is cleared before indirect branches.
  */
 SYM_FUNC_START(clear_bhb_loop)
 	ANNOTATE_NOENDBR
@@ -1562,7 +1565,7 @@ SYM_FUNC_START(clear_bhb_loop)
 	sub $1, %ecx
 	jnz 1b
 .Lret2:	RET
-5:	lfence
+5:
 	pop %rbp
 	RET
 SYM_FUNC_END(clear_bhb_loop)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 4f4b5e8a1574..70b377fcbc1c 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -331,11 +331,11 @@
 
 #ifdef CONFIG_X86_64
 .macro CLEAR_BRANCH_HISTORY
-	ALTERNATIVE "", "call clear_bhb_loop", X86_FEATURE_CLEAR_BHB_LOOP
+	ALTERNATIVE "", "call clear_bhb_loop; lfence", X86_FEATURE_CLEAR_BHB_LOOP
 .endm
 
 .macro CLEAR_BRANCH_HISTORY_VMEXIT
-	ALTERNATIVE "", "call clear_bhb_loop", X86_FEATURE_CLEAR_BHB_VMEXIT
+	ALTERNATIVE "", "call clear_bhb_loop; lfence", X86_FEATURE_CLEAR_BHB_VMEXIT
 .endm
 #else
 #define CLEAR_BRANCH_HISTORY

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e9b78040d703..63d6c9fa5e80 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1624,6 +1624,8 @@ static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,
 
 		if (emit_call(&prog, func, ip))
 			return -EINVAL;
+		/* Don't speculate past this until BHB is cleared */
+		EMIT_LFENCE();
 		EMIT1(0x59); /* pop rcx */
 		EMIT1(0x58); /* pop rax */
 	}
-- 
2.34.1
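The caller-decides pattern introduced by this patch can be sketched in plain C. This is only an illustrative model, not kernel code: the names (clear_bhb, speculation_barrier, the two path functions) and the counters are invented for the sketch, and the "barrier" merely increments a counter where the real code would execute LFENCE.

```c
#include <assert.h>

/* Model state: pretend branch-history contents and a barrier counter. */
static int bhb_entries = 8;     /* non-zero means stale history     */
static int barriers_issued;     /* how many barriers were executed  */

/* Stand-in for LFENCE: in this model it just records that a barrier ran. */
static void speculation_barrier(void)
{
    barriers_issued++;
}

/* The clearing work no longer ends with a barrier of its own. */
static void clear_bhb(void)
{
    bhb_entries = 0;
}

/* A path where an indirect branch may follow immediately, so the
 * caller adds the barrier itself. */
static void kernel_entry_path(void)
{
    clear_bhb();
    speculation_barrier();
}

/* A path where a later ring transition already serializes, so no
 * explicit barrier is needed after clearing. */
static void exit_to_user_path(void)
{
    clear_bhb();
}
```

The trade-off matches the commit message: each call site grows by the barrier it chooses to add, but only one clearing routine is needed.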
From nobody Fri Apr 3 04:09:03 2026
Date: Thu, 2 Apr 2026 17:31:04 -0700
From: Pawan Gupta
Subject: [PATCH v9 02/10] x86/bhi: Make clear_bhb_loop() effective on newer CPUs
Message-ID: <20260402-vmscape-bhb-v9-2-94d16bc29774@linux.intel.com>
References: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

As a mitigation for BHI, clear_bhb_loop() executes branches that overwrite
the Branch History Buffer (BHB). On Alder Lake and newer parts this
sequence is not sufficient because it doesn't clear enough entries. That
has not been an issue so far because these CPUs use the BHI_DIS_S hardware
mitigation in the kernel.
Now, with the BHI variant of VMSCAPE, branch history must also be isolated
between guests and userspace. Since BHI_DIS_S only protects the kernel,
the newer CPUs currently use IBPB for this. A cheaper alternative to the
IBPB mitigation is clear_bhb_loop(), but it does not clear enough BHB
entries to be effective on newer CPUs with a larger BHB.

At boot, dynamically set the loop counts of clear_bhb_loop() such that it
is effective on newer CPUs too. Use the X86_FEATURE_BHI_CTRL feature flag
to select the appropriate loop counts.

Suggested-by: Dave Hansen
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S            |  8 +++++---
 arch/x86/include/asm/nospec-branch.h |  2 ++
 arch/x86/kernel/cpu/bugs.c           | 13 +++++++++++++
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 3a180a36ca0e..bbd4b1c7ec04 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1536,7 +1536,9 @@ SYM_FUNC_START(clear_bhb_loop)
 	ANNOTATE_NOENDBR
 	push %rbp
 	mov %rsp, %rbp
-	movl $5, %ecx
+
+	movzbl bhb_seq_outer_loop(%rip), %ecx
+
 	ANNOTATE_INTRA_FUNCTION_CALL
 	call 1f
 	jmp 5f
@@ -1556,8 +1558,8 @@ SYM_FUNC_START(clear_bhb_loop)
 	 * This should be ideally be: .skip 32 - (.Lret2 - 2f), 0xcc
 	 * but some Clang versions (e.g. 18) don't like this.
 	 */
-	.skip 32 - 18, 0xcc
-2:	movl $5, %eax
+	.skip 32 - 20, 0xcc
+2:	movzbl bhb_seq_inner_loop(%rip), %eax
 3:	jmp 4f
 	nop
 4:	sub $1, %eax

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 70b377fcbc1c..87b83ae7c97f 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -548,6 +548,8 @@ DECLARE_PER_CPU(u64, x86_spec_ctrl_current);
 extern void update_spec_ctrl_cond(u64 val);
 extern u64 spec_ctrl_current(void);
 
+extern u8 bhb_seq_inner_loop, bhb_seq_outer_loop;
+
 /*
  * With retpoline, we must use IBRS to restrict branch prediction
  * before calling into firmware.

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 83f51cab0b1e..2cb4a96247d8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2047,6 +2047,10 @@ enum bhi_mitigations {
 static enum bhi_mitigations bhi_mitigation __ro_after_init =
 	IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_AUTO : BHI_MITIGATION_OFF;
 
+/* Default to short BHB sequence values */
+u8 bhb_seq_outer_loop __ro_after_init = 5;
+u8 bhb_seq_inner_loop __ro_after_init = 5;
+
 static int __init spectre_bhi_parse_cmdline(char *str)
 {
 	if (!str)
@@ -3242,6 +3246,15 @@ void __init cpu_select_mitigations(void)
 		x86_spec_ctrl_base &= ~SPEC_CTRL_MITIGATIONS_MASK;
 	}
 
+	/*
+	 * Switch to long BHB clear sequence on newer CPUs (with BHI_CTRL
+	 * support), see Intel's BHI guidance.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_BHI_CTRL)) {
+		bhb_seq_outer_loop = 12;
+		bhb_seq_inner_loop = 7;
+	}
+
 	x86_arch_cap_msr = x86_read_arch_cap_msr();
 
 	cpu_print_attack_vectors();
-- 
2.34.1
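The boot-time selection of the loop counts can be modeled in a few lines of C. The counts (5/5 by default, 12/7 with BHI_CTRL) are taken from the patch; the function and variable names here are illustrative only, and the feature check is reduced to a plain boolean parameter.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Defaults match the short BHB clearing sequence. */
static uint8_t bhb_outer = 5;
static uint8_t bhb_inner = 5;

/* At boot, switch to the long-sequence counts on CPUs that report
 * BHI_CTRL (Alder Lake and newer, per the patch). */
static void select_bhb_seq(bool has_bhi_ctrl)
{
    if (has_bhi_ctrl) {
        bhb_outer = 12;
        bhb_inner = 7;
    }
}

/* The number of inner-loop passes scales with both counts, which is
 * why the long sequence overwrites a larger branch history buffer. */
static unsigned total_inner_passes(void)
{
    return (unsigned)bhb_outer * bhb_inner;
}
```

With the defaults the sketch yields 5 * 5 = 25 inner passes, and 12 * 7 = 84 once the long-sequence counts are selected, illustrating why a single parameterized loop can serve both CPU generations.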
From nobody Fri Apr 3 04:09:03 2026
Date: Thu, 2 Apr 2026 17:31:21 -0700
From: Pawan Gupta
Subject: [PATCH v9 03/10] x86/bhi: Rename clear_bhb_loop() to clear_bhb_loop_nofence()
Message-ID: <20260402-vmscape-bhb-v9-3-94d16bc29774@linux.intel.com>
References: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Rename clear_bhb_loop() to clear_bhb_loop_nofence() to reflect the recent
change that moved the LFENCE to the caller side.
Suggested-by: Borislav Petkov
Reviewed-by: Nikolay Borisov
Tested-by: Jon Kohler
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S            | 8 ++++----
 arch/x86/include/asm/nospec-branch.h | 6 +++---
 arch/x86/net/bpf_jit_comp.c          | 2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index bbd4b1c7ec04..1f56d086d312 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1532,7 +1532,7 @@ SYM_CODE_END(rewind_stack_and_make_dead)
  * Note, callers should use a speculation barrier like LFENCE immediately after
  * a call to this function to ensure BHB is cleared before indirect branches.
  */
-SYM_FUNC_START(clear_bhb_loop)
+SYM_FUNC_START(clear_bhb_loop_nofence)
 	ANNOTATE_NOENDBR
 	push %rbp
 	mov %rsp, %rbp
@@ -1570,6 +1570,6 @@
 5:
 	pop %rbp
 	RET
-SYM_FUNC_END(clear_bhb_loop)
-EXPORT_SYMBOL_FOR_KVM(clear_bhb_loop)
-STACK_FRAME_NON_STANDARD(clear_bhb_loop)
+SYM_FUNC_END(clear_bhb_loop_nofence)
+EXPORT_SYMBOL_FOR_KVM(clear_bhb_loop_nofence)
+STACK_FRAME_NON_STANDARD(clear_bhb_loop_nofence)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 87b83ae7c97f..157eb69c7f0f 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -331,11 +331,11 @@
 
 #ifdef CONFIG_X86_64
 .macro CLEAR_BRANCH_HISTORY
-	ALTERNATIVE "", "call clear_bhb_loop; lfence", X86_FEATURE_CLEAR_BHB_LOOP
+	ALTERNATIVE "", "call clear_bhb_loop_nofence; lfence", X86_FEATURE_CLEAR_BHB_LOOP
 .endm
 
 .macro CLEAR_BRANCH_HISTORY_VMEXIT
-	ALTERNATIVE "", "call clear_bhb_loop; lfence", X86_FEATURE_CLEAR_BHB_VMEXIT
+	ALTERNATIVE "", "call clear_bhb_loop_nofence; lfence", X86_FEATURE_CLEAR_BHB_VMEXIT
 .endm
 #else
 #define CLEAR_BRANCH_HISTORY
@@ -389,7 +389,7 @@
 extern void entry_untrain_ret(void);
 extern void write_ibpb(void);
 
 #ifdef CONFIG_X86_64
-extern void clear_bhb_loop(void);
+extern void clear_bhb_loop_nofence(void);
 #endif
 
 extern void (*x86_return_thunk)(void);

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 63d6c9fa5e80..f40e88f87273 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1619,7 +1619,7 @@ static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,
 		EMIT1(0x51); /* push rcx */
 		ip += 2;
 
-		func = (u8 *)clear_bhb_loop;
+		func = (u8 *)clear_bhb_loop_nofence;
 		ip += x86_call_depth_emit_accounting(&prog, func, ip);
 
 		if (emit_call(&prog, func, ip))
-- 
2.34.1
From nobody Fri Apr 3 04:09:03 2026
Date: Thu, 2 Apr 2026 17:31:36 -0700
From: Pawan Gupta
Subject: [PATCH v9 04/10] x86/vmscape: Rename x86_ibpb_exit_to_user to x86_predictor_flush_exit_to_user
Message-ID: <20260402-vmscape-bhb-v9-4-94d16bc29774@linux.intel.com>
References: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

With the upcoming changes, x86_ibpb_exit_to_user will also be used when
the BHB clearing sequence is the mitigation. Rename it to cover both
cases. No functional change.
Suggested-by: Sean Christopherson
Tested-by: Jon Kohler
Acked-by: Sean Christopherson
Signed-off-by: Pawan Gupta
---
 arch/x86/include/asm/entry-common.h  | 6 +++---
 arch/x86/include/asm/nospec-branch.h | 2 +-
 arch/x86/kernel/cpu/bugs.c           | 4 ++--
 arch/x86/kvm/x86.c                   | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce3eb6d5fdf9..c45858db16c9 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -94,11 +94,11 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	 */
 	choose_random_kstack_offset(rdtsc());
 
-	/* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */
+	/* Avoid unnecessary reads of 'x86_predictor_flush_exit_to_user' */
 	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
-	    this_cpu_read(x86_ibpb_exit_to_user)) {
+	    this_cpu_read(x86_predictor_flush_exit_to_user)) {
 		indirect_branch_prediction_barrier();
-		this_cpu_write(x86_ibpb_exit_to_user, false);
+		this_cpu_write(x86_predictor_flush_exit_to_user, false);
 	}
 }
 #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 157eb69c7f0f..0381db59c39d 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -533,7 +533,7 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
 		: "memory");
 }
 
-DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user);
+DECLARE_PER_CPU(bool, x86_predictor_flush_exit_to_user);
 
 static inline void indirect_branch_prediction_barrier(void)
 {

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2cb4a96247d8..002bf4adccc3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,8 +65,8 @@ EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
  * be needed to before running userspace. That IBPB will flush the branch
 * predictor content.
 */
-DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);
-EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user);
+DEFINE_PER_CPU(bool, x86_predictor_flush_exit_to_user);
+EXPORT_PER_CPU_SYMBOL_GPL(x86_predictor_flush_exit_to_user);
 
 u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd1c4a36b593..45d7cfedc507 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11464,7 +11464,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * may migrate to.
 	 */
 	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
-		this_cpu_write(x86_ibpb_exit_to_user, true);
+		this_cpu_write(x86_predictor_flush_exit_to_user, true);
 
 	/*
 	 * Consume any pending interrupts, including the possible source of
-- 
2.34.1
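The deferred-flush pattern that the renamed per-CPU variable supports can be sketched as follows. This is a model, not kernel code: the flag and the counter are invented names, and the "flush" stands in for whatever barrier (IBPB today, possibly a BHB clear later) the real exit path executes.

```c
#include <assert.h>
#include <stdbool.h>

/* Models the per-CPU "a predictor flush is owed" flag. */
static bool predictor_flush_pending;
static int flushes_done;

/* VMexit is hot: only mark the CPU as needing a flush. */
static void vmexit_path(void)
{
    predictor_flush_pending = true;
}

/* The flush runs once, at the next exit to userspace, and the
 * flag is cleared so repeated exits don't pay for it again. */
static void exit_to_user_prepare(void)
{
    if (predictor_flush_pending) {
        flushes_done++;     /* stands in for IBPB / BHB clear */
        predictor_flush_pending = false;
    }
}
```

The design choice this models: any number of VMexits between two userspace exits collapse into a single flush, which is why deferring the barrier to exit-to-user is cheaper than flushing on every VMexit.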
From nobody Fri Apr 3 04:09:03 2026
Date: Thu, 2 Apr 2026 17:31:51 -0700
From: Pawan Gupta
Subject: [PATCH v9 05/10] x86/vmscape: Move mitigation selection to a switch()
Message-ID: <20260402-vmscape-bhb-v9-5-94d16bc29774@linux.intel.com>
References: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Moving the mitigation selection into a switch() ensures that all
mitigation modes are handled explicitly, while keeping the selection
logic for each mode together. It also prepares for adding a BHB-clearing
mitigation mode for VMSCAPE.
Tested-by: Jon Kohler
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/kernel/cpu/bugs.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 002bf4adccc3..636280c612f0 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -3088,17 +3088,33 @@ early_param("vmscape", vmscape_parse_cmdline);
 
 static void __init vmscape_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_VMSCAPE) ||
-	    !boot_cpu_has(X86_FEATURE_IBPB)) {
+	if (!boot_cpu_has_bug(X86_BUG_VMSCAPE)) {
 		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
 		return;
 	}
 
-	if (vmscape_mitigation == VMSCAPE_MITIGATION_AUTO) {
-		if (should_mitigate_vuln(X86_BUG_VMSCAPE))
+	if ((vmscape_mitigation == VMSCAPE_MITIGATION_AUTO) &&
+	    !should_mitigate_vuln(X86_BUG_VMSCAPE))
+		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+
+	switch (vmscape_mitigation) {
+	case VMSCAPE_MITIGATION_NONE:
+		break;
+
+	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
+		if (!boot_cpu_has(X86_FEATURE_IBPB))
+			vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+		break;
+
+	case VMSCAPE_MITIGATION_AUTO:
+		if (boot_cpu_has(X86_FEATURE_IBPB))
 			vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
 		else
 			vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+		break;
+
+	default:
+		break;
 	}
 }
 
-- 
2.34.1

From nobody Fri Apr 3 04:09:03 2026
Date: Thu, 2 Apr 2026 17:32:06 -0700
From: Pawan Gupta
To: x86@kernel.org, Jon Kohler, Nikolay Borisov, "H. Peter Anvin", Josh Poimboeuf, David Kaplan, Sean Christopherson, Borislav Petkov, Dave Hansen, Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh, Jiri Olsa, "David S. Miller", David Laight, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, David Ahern, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, Stanislav Fomichev, Hao Luo, Paolo Bonzini, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick, Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-doc@vger.kernel.org
Subject: [PATCH v9 06/10] x86/vmscape: Use write_ibpb() instead of indirect_branch_prediction_barrier()
Message-ID: <20260402-vmscape-bhb-v9-6-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>

indirect_branch_prediction_barrier() is a wrapper around write_ibpb()
that also checks whether the CPU supports IBPB.
For VMSCAPE, the call to indirect_branch_prediction_barrier() is only
reachable when the CPU supports IBPB. Simply call write_ibpb() directly
to avoid the unnecessary alternative patching.

Suggested-by: Dave Hansen
Tested-by: Jon Kohler
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/include/asm/entry-common.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index c45858db16c9..78b143673ca7 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -97,7 +97,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	/* Avoid unnecessary reads of 'x86_predictor_flush_exit_to_user' */
 	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
 	    this_cpu_read(x86_predictor_flush_exit_to_user)) {
-		indirect_branch_prediction_barrier();
+		write_ibpb();
 		this_cpu_write(x86_predictor_flush_exit_to_user, false);
 	}
 }
-- 
2.34.1

From nobody Fri Apr 3 04:09:03 2026
Date: Thu, 2 Apr 2026 17:32:21 -0700
From: Pawan Gupta
To: x86@kernel.org, Jon Kohler, Nikolay Borisov, "H. Peter Anvin", Josh Poimboeuf, David Kaplan, Sean Christopherson, Borislav Petkov, Dave Hansen, Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh, Jiri Olsa, "David S. Miller", David Laight, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, David Ahern, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, Stanislav Fomichev, Hao Luo, Paolo Bonzini, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick, Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-doc@vger.kernel.org
Subject: [PATCH v9 07/10] x86/vmscape: Use static_call() for predictor flush
Message-ID: <20260402-vmscape-bhb-v9-7-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>

Adding more exit-to-userspace mitigation options for VMSCAPE would
normally require a series of checks to decide which mitigation to use.
Here, the mitigation is performed by calling a function that is chosen
at boot, so adding more feature flags and multiple checks can be avoided
by using a static_call() to the mitigating function.

Replace the flag-based mitigation selector with a static_call(). This
also frees the existing X86_FEATURE_IBPB_EXIT_TO_USER flag.
Suggested-by: Dave Hansen
Tested-by: Jon Kohler
Signed-off-by: Pawan Gupta
---
 arch/x86/Kconfig                     |  1 +
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/entry-common.h  |  7 +++----
 arch/x86/include/asm/nospec-branch.h |  3 +++
 arch/x86/include/asm/processor.h     |  1 +
 arch/x86/kernel/cpu/bugs.c           | 14 +++++++++++++-
 arch/x86/kvm/x86.c                   |  2 +-
 7 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index e2df1b147184..5b8def9ddb98 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2720,6 +2720,7 @@ config MITIGATION_TSA
 config MITIGATION_VMSCAPE
 	bool "Mitigate VMSCAPE"
 	depends on KVM
+	depends on HAVE_STATIC_CALL
 	default y
 	help
 	  Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dbe104df339b..b4d529dd6d30 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -503,7 +503,7 @@
 #define X86_FEATURE_TSA_SQ_NO		(21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
 #define X86_FEATURE_TSA_L1_NO		(21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
 #define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* Clear CPU buffers using VERW before VMRUN */
-#define X86_FEATURE_IBPB_EXIT_TO_USER	(21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
+/* Free */
 #define X86_FEATURE_ABMC		(21*32+15) /* Assignable Bandwidth Monitoring Counters */
 #define X86_FEATURE_MSR_IMM		(21*32+16) /* MSR immediate form instructions */
 #define X86_FEATURE_SGX_EUPDATESVN	(21*32+17) /* Support for ENCLS[EUPDATESVN] instruction */
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index 78b143673ca7..783e7cb50cae 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -4,6 +4,7 @@
 
 #include
 #include
+#include
 
 #include
 #include
@@ -94,10 +95,8 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	 */
 	choose_random_kstack_offset(rdtsc());
 
-	/* Avoid unnecessary reads of 'x86_predictor_flush_exit_to_user' */
-	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
-	    this_cpu_read(x86_predictor_flush_exit_to_user)) {
-		write_ibpb();
+	if (unlikely(this_cpu_read(x86_predictor_flush_exit_to_user))) {
+		static_call_cond(vmscape_predictor_flush)();
 		this_cpu_write(x86_predictor_flush_exit_to_user, false);
 	}
 }
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 0381db59c39d..066fd8095200 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -542,6 +542,9 @@ static inline void indirect_branch_prediction_barrier(void)
 		:: "rax", "rcx", "rdx", "memory");
 }
 
+#include
+DECLARE_STATIC_CALL(vmscape_predictor_flush, write_ibpb);
+
 /* The Intel SPEC CTRL MSR base value cache */
 extern u64 x86_spec_ctrl_base;
 DECLARE_PER_CPU(u64, x86_spec_ctrl_current);
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index a24c7805acdb..20ab4dd588c6 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -753,6 +753,7 @@ enum mds_mitigations {
 };
 
 extern bool gds_ucode_mitigated(void);
+extern bool vmscape_mitigation_enabled(void);
 
 /*
  * Make previous memory operations globally visible before
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 636280c612f0..2f431d0be3d9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -144,6 +144,12 @@ EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
  */
 DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
+/*
+ * Controls how vmscape is mitigated, e.g. via IBPB or BHB-clear
+ * sequence. This defaults to no mitigation.
+ */
+DEFINE_STATIC_CALL_NULL(vmscape_predictor_flush, write_ibpb);
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"mitigations: " fmt
 
@@ -3133,8 +3139,14 @@ static void __init vmscape_update_mitigation(void)
 static void __init vmscape_apply_mitigation(void)
 {
 	if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
-		setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER);
+		static_call_update(vmscape_predictor_flush, write_ibpb);
+}
+
+bool vmscape_mitigation_enabled(void)
+{
+	return !!static_call_query(vmscape_predictor_flush);
 }
+EXPORT_SYMBOL_FOR_KVM(vmscape_mitigation_enabled);
 
 #undef pr_fmt
 #define pr_fmt(fmt)	fmt
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 45d7cfedc507..e204482e64f3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11463,7 +11463,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * set for the CPU that actually ran the guest, and not the CPU that it
 	 * may migrate to.
 	 */
-	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
+	if (vmscape_mitigation_enabled())
 		this_cpu_write(x86_predictor_flush_exit_to_user, true);
 
 	/*
-- 
2.34.1

From nobody Fri Apr 3 04:09:03 2026
Date: Thu, 2 Apr 2026 17:32:36 -0700
From: Pawan Gupta
To: x86@kernel.org, Jon Kohler, Nikolay Borisov, "H. Peter Anvin", Josh Poimboeuf, David Kaplan, Sean Christopherson, Borislav Petkov, Dave Hansen, Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh, Jiri Olsa, "David S. Miller", David Laight, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, David Ahern, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, Stanislav Fomichev, Hao Luo, Paolo Bonzini, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick, Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-doc@vger.kernel.org
Subject: [PATCH v9 08/10] x86/vmscape: Deploy BHB clearing mitigation
Message-ID: <20260402-vmscape-bhb-v9-8-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>

The IBPB mitigation for VMSCAPE is overkill on CPUs that are only
affected by the BHI variant of VMSCAPE. On such CPUs, eIBRS already
provides indirect branch isolation between guest and host userspace.
However, branch history from the guest may still influence the indirect
branches in host userspace. To mitigate the BHI aspect, use the BHB
clearing sequence.
Since IBPB is no longer the only mitigation for VMSCAPE, update the
documentation to reflect that =auto may select either the IBPB or the
BHB-clear mitigation, depending on the CPU.

Reviewed-by: Nikolay Borisov
Tested-by: Jon Kohler
Signed-off-by: Pawan Gupta
---
 Documentation/admin-guide/hw-vuln/vmscape.rst   | 11 ++++++++-
 Documentation/admin-guide/kernel-parameters.txt |  4 +++-
 arch/x86/include/asm/entry-common.h             |  4 ++++
 arch/x86/include/asm/nospec-branch.h            |  2 ++
 arch/x86/kernel/cpu/bugs.c                      | 30 +++++++++++++++++++------
 5 files changed, 42 insertions(+), 9 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/vmscape.rst b/Documentation/admin-guide/hw-vuln/vmscape.rst
index d9b9a2b6c114..7c40cf70ad7a 100644
--- a/Documentation/admin-guide/hw-vuln/vmscape.rst
+++ b/Documentation/admin-guide/hw-vuln/vmscape.rst
@@ -86,6 +86,10 @@ The possible values in this file are:
    run a potentially malicious guest and issues an IBPB before the first exit
    to userspace after VM-exit.
 
+ * 'Mitigation: Clear BHB before exit to userspace':
+
+   As above, conditional BHB clearing mitigation is enabled.
+
  * 'Mitigation: IBPB on VMEXIT':
 
    IBPB is issued on every VM-exit. This occurs when other mitigations like
@@ -102,9 +106,14 @@ The mitigation can be controlled via the ``vmscape=`` command line parameter:
 
  * ``vmscape=ibpb``:
 
-   Enable conditional IBPB mitigation (default when CONFIG_MITIGATION_VMSCAPE=y).
+   Enable conditional IBPB mitigation.
 
  * ``vmscape=force``:
 
    Force vulnerability detection and mitigation even on processors that are
    not known to be affected.
+
+ * ``vmscape=auto``:
+
+   Choose the mitigation based on the VMSCAPE variant the CPU is affected by.
+   (default when CONFIG_MITIGATION_VMSCAPE=y)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 03a550630644..3853c7109419 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -8378,9 +8378,11 @@ Kernel parameters
 
 			off   - disable the mitigation
 			ibpb  - use Indirect Branch Prediction Barrier
-				(IBPB) mitigation (default)
+				(IBPB) mitigation
 			force - force vulnerability detection even on
 				unaffected processors
+			auto  - (default) use IBPB or BHB clear
+				mitigation based on CPU
 
 	vsyscall=	[X86-64,EARLY]
 			Controls the behavior of vsyscalls (i.e. calls to
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index 783e7cb50cae..13db31472f3a 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -96,6 +96,10 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	choose_random_kstack_offset(rdtsc());
 
 	if (unlikely(this_cpu_read(x86_predictor_flush_exit_to_user))) {
+		/*
+		 * Since the mitigation is for userspace, an explicit
+		 * speculation barrier is not required after flush.
+		 */
 		static_call_cond(vmscape_predictor_flush)();
 		this_cpu_write(x86_predictor_flush_exit_to_user, false);
 	}
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 066fd8095200..38478383139b 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -390,6 +390,8 @@ extern void write_ibpb(void);
 
 #ifdef CONFIG_X86_64
 extern void clear_bhb_loop_nofence(void);
+#else
+static inline void clear_bhb_loop_nofence(void) {}
 #endif
 
 extern void (*x86_return_thunk)(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2f431d0be3d9..c7946cd809f7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -61,9 +61,8 @@ DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
 EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
 
 /*
- * Set when the CPU has run a potentially malicious guest. An IBPB will
- * be needed to before running userspace. That IBPB will flush the branch
- * predictor content.
+ * Set when the CPU has run a potentially malicious guest. Indicates that a
+ * branch predictor flush is needed before running userspace.
  */
 DEFINE_PER_CPU(bool, x86_predictor_flush_exit_to_user);
 EXPORT_PER_CPU_SYMBOL_GPL(x86_predictor_flush_exit_to_user);
@@ -3060,13 +3059,15 @@ enum vmscape_mitigations {
 	VMSCAPE_MITIGATION_AUTO,
 	VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
 	VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
+	VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER,
 };
 
 static const char * const vmscape_strings[] = {
-	[VMSCAPE_MITIGATION_NONE]		= "Vulnerable",
+	[VMSCAPE_MITIGATION_NONE]			= "Vulnerable",
 	/* [VMSCAPE_MITIGATION_AUTO] */
-	[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER]	= "Mitigation: IBPB before exit to userspace",
-	[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT",
+	[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER]		= "Mitigation: IBPB before exit to userspace",
+	[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT]		= "Mitigation: IBPB on VMEXIT",
+	[VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER]	= "Mitigation: Clear BHB before exit to userspace",
 };
 
 static enum vmscape_mitigations vmscape_mitigation __ro_after_init =
@@ -3084,6 +3085,8 @@ static int __init vmscape_parse_cmdline(char *str)
 	} else if (!strcmp(str, "force")) {
 		setup_force_cpu_bug(X86_BUG_VMSCAPE);
 		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
+	} else if (!strcmp(str, "auto")) {
+		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
 	} else {
 		pr_err("Ignoring unknown vmscape=%s option.\n", str);
 	}
@@ -3113,7 +3116,17 @@ static void __init vmscape_select_mitigation(void)
 		break;
 
 	case VMSCAPE_MITIGATION_AUTO:
-		if (boot_cpu_has(X86_FEATURE_IBPB))
+		/*
+		 * CPUs with BHI_CTRL (ADL and newer) can avoid the IBPB and use
+		 * the BHB clear sequence. These CPUs are only vulnerable to the
+		 * BHI variant of the VMSCAPE attack, and thus they do not
+		 * require a full predictor flush.
+		 *
+		 * Note, in 32-bit mode the BHB clear sequence is not supported.
+		 */
+		if (boot_cpu_has(X86_FEATURE_BHI_CTRL) && IS_ENABLED(CONFIG_X86_64))
+			vmscape_mitigation = VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER;
+		else if (boot_cpu_has(X86_FEATURE_IBPB))
 			vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
 		else
 			vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
@@ -3140,6 +3153,8 @@ static void __init vmscape_apply_mitigation(void)
 {
 	if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
 		static_call_update(vmscape_predictor_flush, write_ibpb);
+	else if (vmscape_mitigation == VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER)
+		static_call_update(vmscape_predictor_flush, clear_bhb_loop_nofence);
 }
 
 bool vmscape_mitigation_enabled(void)
@@ -3237,6 +3252,7 @@ void cpu_bugs_smt_update(void)
 		break;
 	case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
 	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
+	case VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER:
 		/*
 		 * Hypervisors can be attacked across-threads, warn for SMT when
 		 * STIBP is not already enabled system-wide.
-- 
2.34.1

From nobody Fri Apr 3 04:09:03 2026
([10.165.239.46]) by orviesa006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Apr 2026 17:32:52 -0700 Date: Thu, 2 Apr 2026 17:32:51 -0700 From: Pawan Gupta To: x86@kernel.org, Jon Kohler , Nikolay Borisov , "H. Peter Anvin" , Josh Poimboeuf , David Kaplan , Sean Christopherson , Borislav Petkov , Dave Hansen , Peter Zijlstra , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , KP Singh , Jiri Olsa , "David S. Miller" , David Laight , Andy Lutomirski , Thomas Gleixner , Ingo Molnar , David Ahern , Martin KaFai Lau , Eduard Zingerman , Song Liu , Yonghong Song , John Fastabend , Stanislav Fomichev , Hao Luo , Paolo Bonzini , Jonathan Corbet Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick , Tao Zhang , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-doc@vger.kernel.org Subject: [PATCH v9 09/10] x86/vmscape: Resolve conflict between attack-vectors and vmscape=force Message-ID: <20260402-vmscape-bhb-v9-9-94d16bc29774@linux.intel.com> X-Mailer: b4 0.15-dev References: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com> Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" vmscape=3Dforce option currently defaults to AUTO mitigation. This lets attack-vector controls to override the vmscape mitigation. Preventing the user from being able to force VMSCAPE mitigation. When vmscape mitigation is forced, allow it be deployed irrespective of attack vectors. Introduce VMSCAPE_MITIGATION_ON that wins over attack-vector controls. 
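The precedence rule this patch introduces can be modeled in a stand-alone
sketch (hypothetical userspace code, not the kernel implementation: the
enum constants mirror the patch, but `resolve()` and `vector_deselects`
are invented for illustration). AUTO yields to attack-vector controls,
while ON does not:

```c
#include <assert.h>
#include <stdbool.h>

/* Subset of the kernel's enum, as extended by this patch. */
enum vmscape_mitigations {
	VMSCAPE_MITIGATION_NONE,
	VMSCAPE_MITIGATION_AUTO,
	VMSCAPE_MITIGATION_ON,
};

/*
 * Model of the selection step: 'vector_deselects' stands in for
 * attack-vector controls deciding the guest-to-host vector does not
 * need mitigating.
 */
static enum vmscape_mitigations
resolve(enum vmscape_mitigations requested, bool vector_deselects)
{
	/* ON was explicitly requested on the command line: it wins. */
	if (requested == VMSCAPE_MITIGATION_ON)
		return VMSCAPE_MITIGATION_ON;
	/* AUTO defers to attack-vector controls. */
	if (requested == VMSCAPE_MITIGATION_AUTO && vector_deselects)
		return VMSCAPE_MITIGATION_NONE;
	return requested;
}
```

This is why vmscape=force must map to ON rather than AUTO: with AUTO,
the attack-vector filtering could still turn the mitigation off.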
Tested-by: Jon Kohler
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/kernel/cpu/bugs.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c7946cd809f7..ba8389df467a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -3057,6 +3057,7 @@ static void __init srso_apply_mitigation(void)
 enum vmscape_mitigations {
 	VMSCAPE_MITIGATION_NONE,
 	VMSCAPE_MITIGATION_AUTO,
+	VMSCAPE_MITIGATION_ON,
 	VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
 	VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
 	VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER,
@@ -3065,6 +3066,7 @@ enum vmscape_mitigations {
 static const char * const vmscape_strings[] = {
 	[VMSCAPE_MITIGATION_NONE]			= "Vulnerable",
 	/* [VMSCAPE_MITIGATION_AUTO] */
+	/* [VMSCAPE_MITIGATION_ON] */
 	[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER]		= "Mitigation: IBPB before exit to userspace",
 	[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT]		= "Mitigation: IBPB on VMEXIT",
 	[VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER]	= "Mitigation: Clear BHB before exit to userspace",
@@ -3084,7 +3086,7 @@ static int __init vmscape_parse_cmdline(char *str)
 		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
 	} else if (!strcmp(str, "force")) {
 		setup_force_cpu_bug(X86_BUG_VMSCAPE);
-		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
+		vmscape_mitigation = VMSCAPE_MITIGATION_ON;
 	} else if (!strcmp(str, "auto")) {
 		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
 	} else {
@@ -3116,6 +3118,7 @@ static void __init vmscape_select_mitigation(void)
 		break;
 
 	case VMSCAPE_MITIGATION_AUTO:
+	case VMSCAPE_MITIGATION_ON:
 		/*
 		 * CPUs with BHI_CTRL (ADL and newer) can avoid the IBPB and use
 		 * the BHB clear sequence.
 		 * These CPUs are only vulnerable to the BHI
@@ -3249,6 +3252,7 @@ void cpu_bugs_smt_update(void)
 	switch (vmscape_mitigation) {
 	case VMSCAPE_MITIGATION_NONE:
 	case VMSCAPE_MITIGATION_AUTO:
+	case VMSCAPE_MITIGATION_ON:
 		break;
 	case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
 	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:

-- 
2.34.1

From nobody Fri Apr 3 04:09:03 2026
Date: Thu, 2 Apr 2026 17:33:07 -0700
From: Pawan Gupta
To: x86@kernel.org, Jon Kohler, Nikolay Borisov, "H. Peter Anvin",
 Josh Poimboeuf, David Kaplan, Sean Christopherson, Borislav Petkov,
 Dave Hansen, Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann,
 Andrii Nakryiko, KP Singh, Jiri Olsa, "David S. Miller", David Laight,
 Andy Lutomirski, Thomas Gleixner, Ingo Molnar, David Ahern,
 Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
 John Fastabend, Stanislav Fomichev, Hao Luo, Paolo Bonzini,
 Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick,
 Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org,
 linux-doc@vger.kernel.org
Subject: [PATCH v9 10/10] x86/vmscape: Add cmdline vmscape=on to override
 attack vector controls
Message-ID: <20260402-vmscape-bhb-v9-10-94d16bc29774@linux.intel.com>
In-Reply-To: <20260402-vmscape-bhb-v9-0-94d16bc29774@linux.intel.com>

In general, individual mitigation knobs override the attack-vector
controls. For VMSCAPE, =ibpb exists, but there is nothing to select the
BHB clearing mitigation. The =force option would select BHB clearing
when supported, but with the side effect of also forcing the bug, hence
deploying the mitigation on unaffected parts too.

Add a new cmdline option, vmscape=on, to enable the mitigation based on
the VMSCAPE variant the CPU is affected by.
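The option handling this patch extends can be sketched as a small
userspace model (hypothetical: `parse_vmscape()` and `force_bug` are
invented stand-ins for the kernel's vmscape_parse_cmdline() and
setup_force_cpu_bug(); unrecognized strings fall back to AUTO here
purely for simplicity). The key distinction: "force" both selects ON
and marks the CPU as affected, while the new "on" only selects ON:

```c
#include <string.h>

/* Subset of the kernel's enum, including the ON value from patch 09. */
enum vmscape_mitigations {
	VMSCAPE_MITIGATION_NONE,
	VMSCAPE_MITIGATION_AUTO,
	VMSCAPE_MITIGATION_ON,
	VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
};

/* Maps the vmscape= option string to a mitigation; *force_bug models
 * whether the CPU would additionally be marked as affected. */
static enum vmscape_mitigations
parse_vmscape(const char *str, int *force_bug)
{
	*force_bug = 0;
	if (!strcmp(str, "off"))
		return VMSCAPE_MITIGATION_NONE;
	if (!strcmp(str, "ibpb"))
		return VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
	if (!strcmp(str, "force")) {
		*force_bug = 1;	/* also treats unaffected CPUs as buggy */
		return VMSCAPE_MITIGATION_ON;
	}
	if (!strcmp(str, "on"))
		return VMSCAPE_MITIGATION_ON;
	/* "auto" (and, in this sketch, anything else) */
	return VMSCAPE_MITIGATION_AUTO;
}
```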
Reviewed-by: Nikolay Borisov
Tested-by: Jon Kohler
Signed-off-by: Pawan Gupta
---
 Documentation/admin-guide/hw-vuln/vmscape.rst   | 4 ++++
 Documentation/admin-guide/kernel-parameters.txt | 2 ++
 arch/x86/kernel/cpu/bugs.c                      | 2 ++
 3 files changed, 8 insertions(+)

diff --git a/Documentation/admin-guide/hw-vuln/vmscape.rst b/Documentation/admin-guide/hw-vuln/vmscape.rst
index 7c40cf70ad7a..2558a5c3d956 100644
--- a/Documentation/admin-guide/hw-vuln/vmscape.rst
+++ b/Documentation/admin-guide/hw-vuln/vmscape.rst
@@ -117,3 +117,7 @@ The mitigation can be controlled via the ``vmscape=`` command line parameter:
 
   Choose the mitigation based on the VMSCAPE variant the CPU is affected by.
   (default when CONFIG_MITIGATION_VMSCAPE=y)
+
+ * ``vmscape=on``:
+
+   Same as ``auto``, except that it overrides attack vector controls.

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3853c7109419..98204d464477 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -8383,6 +8383,8 @@ Kernel parameters
 				unaffected processors
 			auto  - (default) use IBPB or BHB clear mitigation
 				based on CPU
+			on    - same as "auto", but override attack
+				vector control
 
 	vsyscall=	[X86-64,EARLY]
 			Controls the behavior of vsyscalls (i.e. calls to

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ba8389df467a..366ebe1e1fb9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -3087,6 +3087,8 @@ static int __init vmscape_parse_cmdline(char *str)
 	} else if (!strcmp(str, "force")) {
 		setup_force_cpu_bug(X86_BUG_VMSCAPE);
 		vmscape_mitigation = VMSCAPE_MITIGATION_ON;
+	} else if (!strcmp(str, "on")) {
+		vmscape_mitigation = VMSCAPE_MITIGATION_ON;
 	} else if (!strcmp(str, "auto")) {
 		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
 	} else {

-- 
2.34.1
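As a usage note (illustrative and distribution-specific, not part of the
patch): with these changes applied, the override would be requested at
boot by appending the option to the kernel command line, for example via
a bootloader configuration fragment:

```
# /etc/default/grub -- illustrative fragment; regenerate the GRUB config
# (e.g. with update-grub) after editing. "vmscape=on" deploys the
# mitigation even when attack-vector controls would otherwise skip it,
# without marking unaffected CPUs as buggy (unlike "vmscape=force").
GRUB_CMDLINE_LINUX="vmscape=on"
```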