From: ZhuangYanying <ann.zhuangyanying@huawei.com>
When a spin_lock_irqsave() deadlock occurs inside the guest, the vCPU
threads other than the lock holder enter the S (sleeping) state because
of pvspinlock. If an NMI is then injected via the libvirt API
"inject-nmi", the NMI cannot be delivered to the VM.
The reason is:
1. Calling the KVM_NMI ioctl from QEMU sets nmi_queued to 1, and
   do_inject_external_nmi() sets cpu->kvm_vcpu_dirty to true at the
   same time.
2. Because cpu->kvm_vcpu_dirty is true, process_nmi() sets nmi_queued
   back to 0 (transferring the NMI to nmi_pending) before the guest is
   entered.
Checking nmi_queued alone is therefore not enough to decide whether to
stay in vcpu_block(). An NMI should be injected immediately in any
situation. Also check nmi_pending, and test KVM_REQ_NMI instead of
nmi_queued in kvm_vcpu_has_events().
Do the same change for SMIs.
Signed-off-by: Zhuang Yanying <ann.zhuangyanying@huawei.com>
---
v1->v2
- simplify message. The complete description is here:
http://www.spinics.net/lists/kvm/msg150380.html
- Testing KVM_REQ_NMI replaces nmi_pending.
- Add Testing kvm_x86_ops->nmi_allowed(vcpu).
v2->v3
- Testing KVM_REQ_NMI replaces nmi_queued, not nmi_pending.
- Do the same change for SMIs.
---
arch/x86/kvm/x86.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 02363e3..a2cd099 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8394,10 +8394,13 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.pv.pv_unhalted)
 		return true;
 
-	if (atomic_read(&vcpu->arch.nmi_queued))
+	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
+	    (vcpu->arch.nmi_pending &&
+	     kvm_x86_ops->nmi_allowed(vcpu)))
 		return true;
 
-	if (kvm_test_request(KVM_REQ_SMI, vcpu))
+	if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
+	    (vcpu->arch.smi_pending && !is_smm(vcpu)))
 		return true;
 
 	if (kvm_arch_interrupt_allowed(vcpu) &&
--
1.8.3.1
2017-05-26 13:16+0800, Zhuangyanying:
> -	if (atomic_read(&vcpu->arch.nmi_queued))
> +	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
> +	    (vcpu->arch.nmi_pending &&
> +	     kvm_x86_ops->nmi_allowed(vcpu)))

I think the logic should be

  if ((kvm_test_request(KVM_REQ_NMI, vcpu) || vcpu->arch.nmi_pending) &&
      kvm_x86_ops->nmi_allowed(vcpu))

because there is no reason to resume the VCPU if we cannot inject.

> -	if (kvm_test_request(KVM_REQ_SMI, vcpu))
> +	if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
> +	    (vcpu->arch.smi_pending && !is_smm(vcpu)))

Ditto. We'll then be consistent with other interrupts.

Thanks.
On 30/05/2017 15:36, Radim Krčmář wrote:
>> -	if (atomic_read(&vcpu->arch.nmi_queued))
>> +	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
>> +	    (vcpu->arch.nmi_pending &&
>
> I think the logic should be
>
>   if ((kvm_test_request(KVM_REQ_NMI, vcpu) || vcpu->arch.nmi_pending) &&
>       kvm_x86_ops->nmi_allowed(vcpu))
>
> because there is no reason to resume the VCPU if we cannot inject.

KVM_REQ_NMI would be processed anyway, and would clear nmi_queued. Of
course, it would very soon go back to sleep. Even before Yanying's
patch, nmi_queued > 0 would have woken up the vCPU in this manner.

So I'm applying the patch. Thanks!

Paolo