[PATCH v2 05/22] KVM: x86: Retry to-be-emulated insn in "slow" unprotect path iff sp is zapped

Posted by Sean Christopherson 1 year, 3 months ago
Resume the guest and thus skip emulation of a non-PTE-writing instruction
if and only if unprotecting the gfn actually zapped at least one shadow
page.  If the gfn is write-protected for some reason other than shadow
paging, attempting to unprotect the gfn will effectively fail, and thus
retrying the instruction is all but guaranteed to be pointless.  This bug
has existed for a long time, but was effectively fudged around by the
retry RIP+address anti-loop detection.
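
For reference, the reordered tail of retry_instruction() after this patch
reduces to roughly the following (a simplified sketch with the surrounding
checks omitted, not the exact upstream code; kvm_mmu_unprotect_page()
returns true only if it zapped at least one shadow page for the gfn):

	if (!vcpu->arch.mmu->root_role.direct)
		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);

	/*
	 * If nothing was zapped, the gfn is write-protected for some reason
	 * other than shadow paging, and re-executing the instruction will
	 * simply fault again, so don't retry.
	 */
	if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
		return false;

	/* Record the retry only when a retry is actually being attempted. */
	vcpu->arch.last_retry_eip = ctxt->eip;
	vcpu->arch.last_retry_addr = cr2_or_gpa;
	return true;

A side effect of the reordering, visible in the diff below, is that the
last_retry_{eip,addr} bookkeeping is now updated only when a retry is
actually performed, rather than unconditionally before the unprotect.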

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 966fb301d44b..c4cb6c6d605b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8961,14 +8961,14 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
 	if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa)
 		return false;
 
+	if (!vcpu->arch.mmu->root_role.direct)
+		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
+
+	if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
+		return false;
+
 	vcpu->arch.last_retry_eip = ctxt->eip;
 	vcpu->arch.last_retry_addr = cr2_or_gpa;
-
-	if (!vcpu->arch.mmu->root_role.direct)
-		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
-
-	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
-
 	return true;
 }
 
-- 
2.46.0.469.g59c65b2a67-goog
Re: [PATCH v2 05/22] KVM: x86: Retry to-be-emulated insn in "slow" unprotect path iff sp is zapped
Posted by Yuan Yao 1 year, 3 months ago
On Fri, Aug 30, 2024 at 05:15:20PM -0700, Sean Christopherson wrote:
> Resume the guest and thus skip emulation of a non-PTE-writing instruction
> if and only if unprotecting the gfn actually zapped at least one shadow
> page.  If the gfn is write-protected for some reason other than shadow
> paging, attempting to unprotect the gfn will effectively fail, and thus
> retrying the instruction is all but guaranteed to be pointless.  This bug
> has existed for a long time, but was effectively fudged around by the
> retry RIP+address anti-loop detection.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/x86.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 966fb301d44b..c4cb6c6d605b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8961,14 +8961,14 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
>  	if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa)
>  		return false;
>
> +	if (!vcpu->arch.mmu->root_role.direct)
> +		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
> +
> +	if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
> +		return false;
> +

Reviewed-by: Yuan Yao <yuan.yao@intel.com>

>  	vcpu->arch.last_retry_eip = ctxt->eip;
>  	vcpu->arch.last_retry_addr = cr2_or_gpa;
> -
> -	if (!vcpu->arch.mmu->root_role.direct)
> -		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
> -
> -	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> -
>  	return true;
>  }
>
> --
> 2.46.0.469.g59c65b2a67-goog
>