nested_svm_vmrun() currently stores the return value of
nested_svm_copy_vmcb12_to_cache() in a local variable 'err', separate
from the generally used 'ret' variable. This is done to have a single
call to kvm_skip_emulated_instruction(), such that we can store the
return value of kvm_skip_emulated_instruction() in 'ret', and then
re-check the return value of nested_svm_copy_vmcb12_to_cache() in 'err'.

The code is unnecessarily confusing. Instead, call
kvm_skip_emulated_instruction() in the failure path of
nested_svm_copy_vmcb12_to_cache() if the return value is not -EFAULT,
and drop 'err'.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b191c6cab57db..6d4c053778b21 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1079,7 +1079,7 @@ static int nested_svm_copy_vmcb12_to_cache(struct kvm_vcpu *vcpu, u64 vmcb12_gpa
int nested_svm_vmrun(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
- int ret, err;
+ int ret;
u64 vmcb12_gpa;
	struct vmcb *vmcb01 = svm->vmcb01.ptr;

@@ -1104,19 +1104,20 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
		return -EINVAL;

vmcb12_gpa = svm->vmcb->save.rax;
- err = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
- if (err == -EFAULT) {
+ ret = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
+
+ /*
+ * Advance RIP if #GP or #UD are not injected, but otherwise
+ * stop if copying and checking vmcb12 failed.
+ */
+ if (ret == -EFAULT) {
kvm_inject_gp(vcpu, 0);
return 1;
+ } else if (ret) {
+ return kvm_skip_emulated_instruction(vcpu);
	}

- /*
- * Advance RIP if #GP or #UD are not injected, but otherwise stop if
- * copying and checking vmcb12 failed.
- */
ret = kvm_skip_emulated_instruction(vcpu);
- if (err)
-		return ret;

/*
* Since vmcb01 is not in use, we can use it to store some of the L1
--
2.53.0.473.g4a7958ca14-goog
On Fri, Mar 06, 2026, Yosry Ahmed wrote:
> nested_svm_vmrun() currently stores the return value of
> nested_svm_copy_vmcb12_to_cache() in a local variable 'err', separate
> from the generally used 'ret' variable. This is done to have a single
> call to kvm_skip_emulated_instruction(), such that we can store the
> return value of kvm_skip_emulated_instruction() in 'ret', and then
> re-check the return value of nested_svm_copy_vmcb12_to_cache() in 'err'.
>
> The code is unnecessarily confusing. Instead, call
> kvm_skip_emulated_instruction() in the failure path of
> nested_svm_copy_vmcb12_to_cache() if the return value is not -EFAULT,
> and drop 'err'.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Yosry Ahmed <yosry@kernel.org>
FYI, I'm going to grab this right now to make it slightly easier to resolve the
merge conflict with Paolo's SMM fixes (the ret vs. err stuff is so confusing).
> ---
> arch/x86/kvm/svm/nested.c | 19 ++++++++++---------
> 1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index b191c6cab57db..6d4c053778b21 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1079,7 +1079,7 @@ static int nested_svm_copy_vmcb12_to_cache(struct kvm_vcpu *vcpu, u64 vmcb12_gpa
> int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> {
> struct vcpu_svm *svm = to_svm(vcpu);
> - int ret, err;
> + int ret;
> u64 vmcb12_gpa;
> struct vmcb *vmcb01 = svm->vmcb01.ptr;
>
> @@ -1104,19 +1104,20 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> return -EINVAL;
>
> vmcb12_gpa = svm->vmcb->save.rax;
> - err = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
> - if (err == -EFAULT) {
> + ret = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
> +
> + /*
> + * Advance RIP if #GP or #UD are not injected, but otherwise
> + * stop if copying and checking vmcb12 failed.
> + */
> + if (ret == -EFAULT) {
> kvm_inject_gp(vcpu, 0);
> return 1;
> + } else if (ret) {
> + return kvm_skip_emulated_instruction(vcpu);
> }
I strongly dislike the if-elif approach, because it makes it unnecessarily hard
to see that *all* ret != 0 cases are handled, i.e. that overwriting ret below is ok.
The comment is also super confusing, because there's no #UD in sight, but there
is a #GP.
This is what I have locally and am planning on pushing to kvm-x86/next.
ret = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
if (ret) {
if (ret == -EFAULT) {
kvm_inject_gp(vcpu, 0);
return 1;
}
/* Advance RIP past VMRUN as part of the nested #VMEXIT. */
return kvm_skip_emulated_instruction(vcpu);
}
/* At this point, VMRUN is guaranteed to not fault; advance RIP. */
ret = kvm_skip_emulated_instruction(vcpu);
>
> - /*
> - * Advance RIP if #GP or #UD are not injected, but otherwise stop if
> - * copying and checking vmcb12 failed.
> - */
> ret = kvm_skip_emulated_instruction(vcpu);
> - if (err)
> - return ret;
>
> /*
> * Since vmcb01 is not in use, we can use it to store some of the L1
> --
> 2.53.0.473.g4a7958ca14-goog
>
> I strongly dislike the if-elif approach, because it makes it unnecessarily hard
> to see that *all* ret != 0 cases are handled, i.e. that overwriting ret below is ok.
>
> The comment is also super confusing, because there's no #UD in sight, but there
> is a #GP.
>
> This is what I have locally and am planning on pushing to kvm-x86/next.
>
> ret = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
> if (ret) {
> if (ret == -EFAULT) {
> kvm_inject_gp(vcpu, 0);
> return 1;
> }
>
> /* Advance RIP past VMRUN as part of the nested #VMEXIT. */
> return kvm_skip_emulated_instruction(vcpu);
> }
>
> /* At this point, VMRUN is guaranteed to not fault; advance RIP. */
> ret = kvm_skip_emulated_instruction(vcpu);
Looks good. I will rebase on top of this and drop the patch before
sending the next version.