[PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers

Posted by Yosry Ahmed 10 months ago
indirect_branch_prediction_barrier() only performs the MSR write if
X86_FEATURE_USE_IBPB is set, using alternative_msr_write(). In
preparation for removing X86_FEATURE_USE_IBPB, move the feature check
into the callers so that they can be addressed one-by-one, and use
X86_FEATURE_IBPB instead to guard the MSR write.

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
 arch/x86/include/asm/nospec-branch.h | 2 +-
 arch/x86/kernel/cpu/bugs.c           | 2 +-
 arch/x86/kvm/svm/svm.c               | 3 ++-
 arch/x86/kvm/vmx/nested.c            | 3 ++-
 arch/x86/kvm/vmx/vmx.c               | 3 ++-
 arch/x86/mm/tlb.c                    | 7 ++++---
 6 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 7e8bf78c03d5d..7cbb76a2434b9 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -515,7 +515,7 @@ extern u64 x86_pred_cmd;
 
 static inline void indirect_branch_prediction_barrier(void)
 {
-	alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_USE_IBPB);
+	alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_IBPB);
 }
 
 /* The Intel SPEC CTRL MSR base value cache */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a5d0998d76049..fc7ce7a2fc495 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2272,7 +2272,7 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
 		if (ctrl == PR_SPEC_FORCE_DISABLE)
 			task_set_spec_ib_force_disable(task);
 		task_update_spec_tif(task);
-		if (task == current)
+		if (task == current && cpu_feature_enabled(X86_FEATURE_USE_IBPB))
 			indirect_branch_prediction_barrier();
 		break;
 	default:
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a713c803a3a37..a4ba5b4e3d682 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1559,7 +1559,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (sd->current_vmcb != svm->vmcb) {
 		sd->current_vmcb = svm->vmcb;
 
-		if (!cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT))
+		if (!cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT) &&
+		    cpu_feature_enabled(X86_FEATURE_USE_IBPB))
 			indirect_branch_prediction_barrier();
 	}
 	if (kvm_vcpu_apicv_active(vcpu))
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ca18c3eec76d8..504f328907ad4 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5026,7 +5026,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 	 * doesn't isolate different VMCSs, i.e. in this case, doesn't provide
 	 * separate modes for L2 vs L1.
 	 */
-	if (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL))
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
+	    cpu_feature_enabled(X86_FEATURE_USE_IBPB))
 		indirect_branch_prediction_barrier();
 
 	/* Update any VMCS fields that might have changed while L2 ran */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6c56d5235f0f3..729a8ee24037b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1478,7 +1478,8 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
 		 * may switch the active VMCS multiple times).
 		 */
 		if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
-			indirect_branch_prediction_barrier();
+			if (cpu_feature_enabled(X86_FEATURE_USE_IBPB))
+				indirect_branch_prediction_barrier();
 	}
 
 	if (!already_loaded) {
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ffc25b3480415..be0c1a509869c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -437,7 +437,8 @@ static void cond_mitigation(struct task_struct *next)
 		 * both have the IBPB bit set.
 		 */
 		if (next_mm != prev_mm &&
-		    (next_mm | prev_mm) & LAST_USER_MM_IBPB)
+		    (next_mm | prev_mm) & LAST_USER_MM_IBPB &&
+		    cpu_feature_enabled(X86_FEATURE_USE_IBPB))
 			indirect_branch_prediction_barrier();
 	}
 
@@ -447,8 +448,8 @@ static void cond_mitigation(struct task_struct *next)
 		 * different context than the user space task which ran
 		 * last on this CPU.
 		 */
-		if ((prev_mm & ~LAST_USER_MM_SPEC_MASK) !=
-					(unsigned long)next->mm)
+		if ((prev_mm & ~LAST_USER_MM_SPEC_MASK) != (unsigned long)next->mm &&
+		    cpu_feature_enabled(X86_FEATURE_USE_IBPB))
 			indirect_branch_prediction_barrier();
 	}
 
-- 
2.48.1.601.g30ceb7b040-goog
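
Net effect of the patch, as a logically equivalent sketch (not the literal kernel code: the real helper patches the WRMSR in via alternative_msr_write() rather than taking a runtime branch, and the call site shown is the ib_prctl_set() one from the hunk above):

/* Helper: the MSR write is now keyed off the hardware X86_FEATURE_IBPB bit. */
static inline void indirect_branch_prediction_barrier(void)
{
	if (boot_cpu_has(X86_FEATURE_IBPB))
		wrmsrl(MSR_IA32_PRED_CMD, x86_pred_cmd);
}

/* Callers: the software policy check (X86_FEATURE_USE_IBPB) moves out here. */
if (task == current && cpu_feature_enabled(X86_FEATURE_USE_IBPB))
	indirect_branch_prediction_barrier();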
Re: [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers
Posted by Sean Christopherson 9 months, 3 weeks ago
On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 6c56d5235f0f3..729a8ee24037b 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1478,7 +1478,8 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
>  		 * may switch the active VMCS multiple times).
>  		 */
>  		if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
> -			indirect_branch_prediction_barrier();
> +			if (cpu_feature_enabled(X86_FEATURE_USE_IBPB))

Combine this into a single if-statement, to make it readable and because as-is
the outer if would need curly braces.

And since this check will stay around in the form of a static_branch, I vote to
check it first so that the checks on "buddy" are elided if vcpu_load_ibpb is disabled.
That'll mean the WARN_ON_ONCE() won't fire if we have a bug and someone is running
with mitigations disabled, but I'm a-ok with that.
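
Concretely, the combined check being suggested would look roughly like this (illustrative sketch only; "vcpu_load_ibpb" is the static-key spelling used above and may be named differently in the final series):

	if (static_branch_likely(&vcpu_load_ibpb) &&
	    (!buddy || WARN_ON_ONCE(buddy->vmcs != prev)))
		indirect_branch_prediction_barrier();

With the key disabled, the jump label short-circuits before the buddy/prev comparison, which is why the WARN_ON_ONCE() would no longer fire in that case.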
Re: [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers
Posted by Yosry Ahmed 9 months, 3 weeks ago
February 25, 2025 at 11:47 AM, "Sean Christopherson" <seanjc@google.com> wrote:
> On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 6c56d5235f0f3..729a8ee24037b 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -1478,7 +1478,8 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
> >  		 * may switch the active VMCS multiple times).
> >  		 */
> >  		if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
> > -			indirect_branch_prediction_barrier();
> > +			if (cpu_feature_enabled(X86_FEATURE_USE_IBPB))
> 
> Combine this into a single if-statement, to make it readable and because as-is
> the outer if would need curly braces.
> 
> And since this check will stay around in the form of a static_branch, I vote to
> check it first so that the checks on "buddy" are elided if vcpu_load_ibpb is disabled.
> That'll mean the WARN_ON_ONCE() won't fire if we have a bug and someone is running
> with mitigations disabled, but I'm a-ok with that.


SGTM, will do that in the next version. Thanks!