[Xen-devel] [PATCH for-4.13 v3] x86/passthrough: fix migration of MSI when using posted interrupts

Roger Pau Monne posted 1 patch 4 years, 4 months ago
git fetch https://github.com/patchew-project/xen tags/patchew/20191108133414.96381-1-roger.pau@citrix.com
[Xen-devel] [PATCH for-4.13 v3] x86/passthrough: fix migration of MSI when using posted interrupts
Posted by Roger Pau Monne 4 years, 4 months ago
When using posted interrupts, and the guest migrates an MSI between
vCPUs, Xen needs to flush any pending PIRR vectors on the previous
vCPU, or else those vectors could be wrongly injected at a later
point, when the MSI fields have already been updated.

Rename sync_pir_to_irr to vlapic_sync_pir_to_irr and export it so it
can be called when updating the binding of physical interrupts to
guests.

Note that PIRR is synced to IRR both in pt_irq_destroy_bind and
pt_irq_create_bind when the interrupt delivery data is being updated.

Also store the vCPU ID in multi-destination mode when using posted
interrupts so that the interrupt is always injected to a known vCPU in
order to be able to flush the PIRR when modifying the binding.

Reported-by: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Joe Jin <joe.jin@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
---
I would like to see a bug fix for this issue in 4.13. The fix here only
affects posted interrupts, hence I think the risk of breaking anything
else is low.
---
Changes since v2:
 - Also sync PIRR with IRR when using CPU posted interrupts.
 - Force the selection of a specific vCPU when using posted interrupts
   for multi-dest.
 - Change vmsi_deliver_pirq to honor dest_vcpu_id.

Changes since v1:
 - Store the vcpu id also in multi-dest mode if the interrupt is bound
   to a vcpu for posted delivery.
 - s/#if/#ifdef/.
---
 xen/arch/x86/hvm/vlapic.c        |  6 +++---
 xen/arch/x86/hvm/vmsi.c          | 11 ++++++++++-
 xen/drivers/passthrough/io.c     | 29 +++++++++++++++++++++++------
 xen/include/asm-x86/hvm/vlapic.h |  2 ++
 4 files changed, 38 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 9466258d6f..d255ad8db7 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -106,7 +106,7 @@ static void vlapic_clear_irr(int vector, struct vlapic *vlapic)
     vlapic_clear_vector(vector, &vlapic->regs->data[APIC_IRR]);
 }
 
-static void sync_pir_to_irr(struct vcpu *v)
+void vlapic_sync_pir_to_irr(struct vcpu *v)
 {
     if ( hvm_funcs.sync_pir_to_irr )
         alternative_vcall(hvm_funcs.sync_pir_to_irr, v);
@@ -114,7 +114,7 @@ static void sync_pir_to_irr(struct vcpu *v)
 
 static int vlapic_find_highest_irr(struct vlapic *vlapic)
 {
-    sync_pir_to_irr(vlapic_vcpu(vlapic));
+    vlapic_sync_pir_to_irr(vlapic_vcpu(vlapic));
 
     return vlapic_find_highest_vector(&vlapic->regs->data[APIC_IRR]);
 }
@@ -1493,7 +1493,7 @@ static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
     if ( !has_vlapic(v->domain) )
         return 0;
 
-    sync_pir_to_irr(v);
+    vlapic_sync_pir_to_irr(v);
 
     return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
 }
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index 6597d9f719..fe488ccc7d 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -118,7 +118,16 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
 
     ASSERT(pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI);
 
-    vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
+    if ( hvm_funcs.deliver_posted_intr && pirq_dpci->gmsi.dest_vcpu_id != -1 )
+        /*
+         * When using posted interrupts multi-destination delivery mode is
+         * forced to select a specific vCPU so that the PIRR can be synced into
+         * IRR when the interrupt is destroyed or moved.
+         */
+        vmsi_inj_irq(vcpu_vlapic(d->vcpu[pirq_dpci->gmsi.dest_vcpu_id]),
+                     vector, trig_mode, delivery_mode);
+    else
+        vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
 }
 
 /* Return value, -1 : multi-dests, non-negative value: dest_vcpu_id */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index b292e79382..d3f1ae5c39 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -341,7 +341,7 @@ int pt_irq_create_bind(
     {
         uint8_t dest, delivery_mode;
         bool dest_mode;
-        int dest_vcpu_id;
+        int dest_vcpu_id, prev_vcpu_id = -1;
         const struct vcpu *vcpu;
         uint32_t gflags = pt_irq_bind->u.msi.gflags &
                           ~XEN_DOMCTL_VMSI_X86_UNMASKED;
@@ -411,6 +411,7 @@ int pt_irq_create_bind(
 
                 pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
                 pirq_dpci->gmsi.gflags = gflags;
+                prev_vcpu_id = pirq_dpci->gmsi.dest_vcpu_id;
             }
         }
         /* Calculate dest_vcpu_id for MSI-type pirq migration. */
@@ -426,14 +427,24 @@ int pt_irq_create_bind(
 
         pirq_dpci->gmsi.posted = false;
         vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
-        if ( iommu_intpost )
+        if ( hvm_funcs.deliver_posted_intr && delivery_mode == dest_LowestPrio )
         {
-            if ( delivery_mode == dest_LowestPrio )
-                vcpu = vector_hashing_dest(d, dest, dest_mode,
-                                           pirq_dpci->gmsi.gvec);
+            /*
+             * NB: when using posted interrupts the vector is signaled
+             * on the PIRR, and hence Xen needs to force interrupts to be
+             * delivered to a specific vCPU in order to be able to sync PIRR
+             * with IRR when the interrupt binding is destroyed, or else
+             * pending interrupts in the previous vCPU PIRR field could be
+             * delivered after the update.
+             */
+            vcpu = vector_hashing_dest(d, dest, dest_mode,
+                                       pirq_dpci->gmsi.gvec);
             if ( vcpu )
-                pirq_dpci->gmsi.posted = true;
+                pirq_dpci->gmsi.dest_vcpu_id = vcpu->vcpu_id;
         }
+        if ( iommu_intpost && vcpu )
+            pirq_dpci->gmsi.posted = true;
+
         if ( vcpu && is_iommu_enabled(d) )
             hvm_migrate_pirq(pirq_dpci, vcpu);
 
@@ -442,6 +453,9 @@ int pt_irq_create_bind(
             pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
                            info, pirq_dpci->gmsi.gvec);
 
+        if ( hvm_funcs.deliver_posted_intr && prev_vcpu_id >= 0 )
+            vlapic_sync_pir_to_irr(d->vcpu[prev_vcpu_id]);
+
         if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
         {
             unsigned long flags;
@@ -731,6 +745,9 @@ int pt_irq_destroy_bind(
     else if ( pirq_dpci && pirq_dpci->gmsi.posted )
         pi_update_irte(NULL, pirq, 0);
 
+    if ( hvm_funcs.deliver_posted_intr && pirq_dpci->gmsi.dest_vcpu_id >= 0 )
+        vlapic_sync_pir_to_irr(d->vcpu[pirq_dpci->gmsi.dest_vcpu_id]);
+
     if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
          list_empty(&pirq_dpci->digl_list) )
     {
diff --git a/xen/include/asm-x86/hvm/vlapic.h b/xen/include/asm-x86/hvm/vlapic.h
index dde66b4f0f..b0017d1dae 100644
--- a/xen/include/asm-x86/hvm/vlapic.h
+++ b/xen/include/asm-x86/hvm/vlapic.h
@@ -150,4 +150,6 @@ bool_t vlapic_match_dest(
     const struct vlapic *target, const struct vlapic *source,
     int short_hand, uint32_t dest, bool_t dest_mode);
 
+void vlapic_sync_pir_to_irr(struct vcpu *v);
+
 #endif /* __ASM_X86_HVM_VLAPIC_H__ */
-- 
2.23.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Re: [Xen-devel] [PATCH for-4.13 v3] x86/passthrough: fix migration of MSI when using posted interrupts
Posted by Joe Jin 4 years, 4 months ago
This patch syncs PIRR with IRR when the MSI-X table is updated. I ran
the same test for over 1.5 hours and could not reproduce the issue;
without the patch, I could reproduce it within 10 minutes.

Tested-by: Joe Jin <joe.jin@oracle.com>

Thanks,
Joe

Re: [Xen-devel] [PATCH for-4.13 v3] x86/passthrough: fix migration of MSI when using posted interrupts
Posted by Jan Beulich 4 years, 4 months ago
On 08.11.2019 14:34, Roger Pau Monne wrote:
> [...]
> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> index b292e79382..d3f1ae5c39 100644
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -341,7 +341,7 @@ int pt_irq_create_bind(
>      {
>          uint8_t dest, delivery_mode;
>          bool dest_mode;
> -        int dest_vcpu_id;
> +        int dest_vcpu_id, prev_vcpu_id = -1;
>          const struct vcpu *vcpu;
>          uint32_t gflags = pt_irq_bind->u.msi.gflags &
>                            ~XEN_DOMCTL_VMSI_X86_UNMASKED;
> @@ -411,6 +411,7 @@ int pt_irq_create_bind(
>  
>                  pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
>                  pirq_dpci->gmsi.gflags = gflags;
> +                prev_vcpu_id = pirq_dpci->gmsi.dest_vcpu_id;
>              }
>          }
>          /* Calculate dest_vcpu_id for MSI-type pirq migration. */
> @@ -426,14 +427,24 @@ int pt_irq_create_bind(
>  
>          pirq_dpci->gmsi.posted = false;
>          vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
> -        if ( iommu_intpost )
> +        if ( hvm_funcs.deliver_posted_intr && delivery_mode == dest_LowestPrio )
>          {
> -            if ( delivery_mode == dest_LowestPrio )
> -                vcpu = vector_hashing_dest(d, dest, dest_mode,
> -                                           pirq_dpci->gmsi.gvec);
> +            /*
> +             * NB: when using posted interrupts the vector is signaled
> +             * on the PIRR, and hence Xen needs to force interrupts to be
> +             * delivered to a specific vCPU in order to be able to sync PIRR
> +             * with IRR when the interrupt binding is destroyed, or else
> +             * pending interrupts in the previous vCPU PIRR field could be
> +             * delivered after the update.
> +             */
> +            vcpu = vector_hashing_dest(d, dest, dest_mode,
> +                                       pirq_dpci->gmsi.gvec);
>              if ( vcpu )
> -                pirq_dpci->gmsi.posted = true;
> +                pirq_dpci->gmsi.dest_vcpu_id = vcpu->vcpu_id;
>          }
> +        if ( iommu_intpost && vcpu )
> +            pirq_dpci->gmsi.posted = true;
> +
>          if ( vcpu && is_iommu_enabled(d) )
>              hvm_migrate_pirq(pirq_dpci, vcpu);

I'm afraid I don't really agree with this as a whole, though in fair
part because of shortcomings of the original code. For one, I can't
figure out how hvm_girq_dest_2_vcpu_id() can possibly produce a useful
result without being passed the delivery mode. I think what the
function does should much more closely resemble vmsi_deliver(),
just without actually delivering anything.

Next you seem to be assuming that multi-dest can only be a result
of lowest-priority delivery mode. But look at vmsi_deliver(): Fixed
mode too can result in multiple destinations, just that in this
case it ends up being a multicast.

When there are multiple destinations, we simply shouldn't post the
interrupt. Note that lowest-priority delivery mode, once
hvm_girq_dest_2_vcpu_id() actually learns to honor it, would not
have to result in multiple destinations, again as per vmsi_deliver()
(i.e. posting would still be possible in this case, just that
arbitration then happens in software and [unfortunately] ahead of
the time the interrupt actually occurs; this may still represent a
problem though, but it's unclear to me how this case is actually
intended to work with an IRTE only being able to point at a single
PI descriptor - Kevin?).

Jan

Re: [Xen-devel] [PATCH for-4.13 v3] x86/passthrough: fix migration of MSI when using posted interrupts
Posted by Tian, Kevin 4 years, 4 months ago
> From: Roger Pau Monne [mailto:roger.pau@citrix.com]
> Sent: Friday, November 8, 2019 9:34 PM
> 
> When using posted interrupts and the guest migrates MSI from vCPUs Xen
> needs to flush any pending PIRR vectors on the previous vCPU, or else
> those vectors could get wrongly injected at a later point when the MSI
> fields are already updated.

I may be overlooking something, but isn't it the guest's responsibility
to handle such a case? Even on bare metal, an in-flight interrupt may be
delivered to the wrong CPU while an MSI is being migrated.

Re: [Xen-devel] [PATCH for-4.13 v3] x86/passthrough: fix migration of MSI when using posted interrupts
Posted by Roger Pau Monné 4 years, 4 months ago
On Fri, Nov 15, 2019 at 05:23:51AM +0000, Tian, Kevin wrote:
> > From: Roger Pau Monne [mailto:roger.pau@citrix.com]
> > Sent: Friday, November 8, 2019 9:34 PM
> > 
> > When using posted interrupts and the guest migrates MSI from vCPUs Xen
> > needs to flush any pending PIRR vectors on the previous vCPU, or else
> > those vectors could get wrongly injected at a later point when the MSI
> > fields are already updated.
> 
> I may overlook but isn't it the guest's responsibility of handling such
> case? Even on bare metal, an in-fly interrupt may be delivered to
> wrong CPU when MSI is being migrated?

According to Joe, Linux already takes care of that by checking IRR
when migrating interrupts between CPUs, but it seems the vector is not
pending in IRR (my hypothesis is that it's pending in PIR but lacking
a sync into IRR).

After digging more into the posted interrupt code, I think there's an
issue somewhere else, and the sync on MSI reconfiguration done by this
patch is just covering that up.

There shouldn't be any interrupts pending in the PIR when the vCPU is
running, and any pending vectors in the PIR should be synced into IRR
before executing the vCPU.

AFAICT there's an issue with how PIR is synced into IRR, it relies on
vlapic_find_highest_irr being called from vlapic_has_pending_irq, but
depending on which interrupts are pending it's possible that
vlapic_has_pending_irq is not called by hvm_vcpu_has_pending_irq, thus
leaving IRR stale.
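
The stale-IRR scenario can be modeled with a few lines of C (all
`model_`-prefixed names are hypothetical, not the real Xen identifiers;
this is only a sketch of the mechanism):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Rough model of the PIR -> IRR sync under discussion.  PIR is a
 * 256-bit bitmap of posted vectors; a sync must move each 32-bit group
 * out of PIR and OR it into the vlapic IRR, otherwise vectors posted
 * by hardware stay invisible to the IRR-based pending checks.
 */
#define MODEL_NR_VECTORS 256
#define MODEL_PIR_GROUPS (MODEL_NR_VECTORS / 32)

struct model_vcpu {
    uint32_t pir[MODEL_PIR_GROUPS]; /* posted-interrupt requests */
    uint32_t irr[MODEL_PIR_GROUPS]; /* vlapic IRR */
};

/* Mark a vector pending in PIR, as posted delivery would. */
static void model_post_vector(struct model_vcpu *v, uint8_t vec)
{
    v->pir[vec / 32] |= 1u << (vec % 32);
}

/* Clear each PIR group and merge it into IRR (Xen uses an atomic xchg). */
static void model_sync_pir_to_irr(struct model_vcpu *v)
{
    for ( unsigned int g = 0; g < MODEL_PIR_GROUPS; g++ )
    {
        uint32_t val = v->pir[g];

        v->pir[g] = 0;
        v->irr[g] |= val;
    }
}

static int model_vector_in_irr(const struct model_vcpu *v, uint8_t vec)
{
    return !!(v->irr[vec / 32] & (1u << (vec % 32)));
}
```

Until the sync runs, a vector posted into PIR is not visible in IRR,
which is exactly the window closed by forcing the sync on every
vmentry in the patch below.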

The patch below should solve that, and also simplifies
__vmx_deliver_posted_interrupt: there's no reason to raise a softirq
there, since if the vCPU is the one currently running, or if it's not
running at all, the sync of PIR to IRR will happen on vmentry without
any softirq being set. Also note the raise_softirq in
__vmx_deliver_posted_interrupt should have been
cpu_raise_softirq(cpu, VCPU_KICK_SOFTIRQ) instead.
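
The simplified delivery decision can be sketched as follows
(`model_`-prefixed names are hypothetical stand-ins, not the actual
Xen code, which operates on struct vcpu and sends a real IPI):

```c
#include <assert.h>
#include <stdbool.h>

enum model_kick { MODEL_KICK_NONE, MODEL_KICK_IPI };

struct model_target {
    bool runnable;      /* vCPU can run (not blocked) */
    bool is_current;    /* vCPU is the one running on this pCPU */
    unsigned int processor; /* pCPU the vCPU is assigned to */
};

/*
 * Only a vCPU runnable on some other pCPU needs an IPI, so that the
 * hardware (in non-root mode) or the interrupt handler (in root mode)
 * picks up the posted vector.  A blocked vCPU, or the current one,
 * will sync PIR to IRR on vmentry anyway, so nothing is needed.
 */
static enum model_kick model_deliver_posted(const struct model_target *t)
{
    if ( t->runnable && !t->is_current )
        return MODEL_KICK_IPI;  /* send_IPI_mask(cpumask_of(t->processor)) */
    return MODEL_KICK_NONE;
}
```

This mirrors the `vcpu_runnable(v) && v != current` test in the hunk
below: the softirq branch of the old code disappears entirely.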

Joe, can you give a try to the patch below?

Thanks, Roger.
---8<---
commit 9ab79fcbc4dece15551fba177b59b51631101563
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 15 11:58:18 2019 +0100

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 0d097cf1f2..ce70f4bc75 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -232,6 +232,14 @@ void vmx_intr_assist(void)
     enum hvm_intblk intblk;
     int pt_vector;
 
+    if ( cpu_has_vmx_posted_intr_processing )
+        /*
+         * Always force PIR to be synced to IRR before vmentry, this is also
+         * done by vlapic_has_pending_irq but it's possible other pending
+         * interrupts prevent the execution of that function.
+         */
+        vmx_sync_pir_to_irr(v);
+
     /* Block event injection when single step with MTF. */
     if ( unlikely(v->arch.hvm.single_step) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 33e68eaddf..82a1b972c5 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1945,8 +1945,6 @@ static void vmx_process_isr(int isr, struct vcpu *v)
 
 static void __vmx_deliver_posted_interrupt(struct vcpu *v)
 {
-    bool_t running = v->is_running;
-
     vcpu_unblock(v);
     /*
      * Just like vcpu_kick(), nothing is needed for the following two cases:
@@ -1954,48 +1952,28 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
      * 2. The target vCPU is the current vCPU and we're in non-interrupt
      * context.
      */
-    if ( running && (in_irq() || (v != current)) )
-    {
+    if ( vcpu_runnable(v) && v != current )
         /*
-         * Note: Only two cases will reach here:
-         * 1. The target vCPU is running on other pCPU.
-         * 2. The target vCPU is the current vCPU.
+         * If the vCPU is running on another pCPU send an IPI to the pCPU. When
+         * the IPI arrives, the target vCPU may be running in non-root mode,
+         * running in root mode, runnable or blocked. If the target vCPU is
+         * running in non-root mode, the hardware will sync PIR to vIRR for
+         * 'posted_intr_vector' is a special vector handled directly by the
+         * hardware.
          *
-         * Note2: Don't worry the v->processor may change. The vCPU being
-         * moved to another processor is guaranteed to sync PIR to vIRR,
-         * due to the involved scheduling cycle.
+         * If the target vCPU is running in root-mode, the interrupt handler
+         * starts to run.  Considering an IPI may arrive in the window between
+         * the call to vmx_intr_assist() and interrupts getting disabled, the
+         * interrupt handler should raise a softirq to ensure events will be
+         * delivered in time.
          */
-        unsigned int cpu = v->processor;
+        send_IPI_mask(cpumask_of(v->processor), posted_intr_vector);
 
-        /*
-         * For case 1, we send an IPI to the pCPU. When an IPI arrives, the
-         * target vCPU maybe is running in non-root mode, running in root
-         * mode, runnable or blocked. If the target vCPU is running in
-         * non-root mode, the hardware will sync PIR to vIRR for
-         * 'posted_intr_vector' is special to the pCPU. If the target vCPU is
-         * running in root-mode, the interrupt handler starts to run.
-         * Considering an IPI may arrive in the window between the call to
-         * vmx_intr_assist() and interrupts getting disabled, the interrupt
-         * handler should raise a softirq to ensure events will be delivered
-         * in time. If the target vCPU is runnable, it will sync PIR to
-         * vIRR next time it is chose to run. In this case, a IPI and a
-         * softirq is sent to a wrong vCPU which will not have any adverse
-         * effect. If the target vCPU is blocked, since vcpu_block() checks
-         * whether there is an event to be delivered through
-         * local_events_need_delivery() just after blocking, the vCPU must
-         * have synced PIR to vIRR. Similarly, there is a IPI and a softirq
-         * sent to a wrong vCPU.
-         */
-        if ( cpu != smp_processor_id() )
-            send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
-        /*
-         * For case 2, raising a softirq ensures PIR will be synced to vIRR.
-         * As any softirq will do, as an optimization we only raise one if
-         * none is pending already.
-         */
-        else if ( !softirq_pending(cpu) )
-            raise_softirq(VCPU_KICK_SOFTIRQ);
-    }
+    /*
+     * If the vCPU is not runnable or if it's the one currently running in this
+     * pCPU there's nothing to do, the vmentry code will already sync the PIR
+     * to IRR when resuming execution.
+     */
 }
 
 static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
@@ -2048,7 +2026,7 @@ static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
     vcpu_kick(v);
 }
 
-static void vmx_sync_pir_to_irr(struct vcpu *v)
+void vmx_sync_pir_to_irr(struct vcpu *v)
 {
     struct vlapic *vlapic = vcpu_vlapic(v);
     unsigned int group, i;
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index ebaa74449b..c43fab7c4f 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -101,6 +101,7 @@ void vmx_update_debug_state(struct vcpu *v);
 void vmx_update_exception_bitmap(struct vcpu *v);
 void vmx_update_cpu_exec_control(struct vcpu *v);
 void vmx_update_secondary_exec_control(struct vcpu *v);
+void vmx_sync_pir_to_irr(struct vcpu *v);
 
 #define POSTED_INTR_ON  0
 #define POSTED_INTR_SN  1


Re: [Xen-devel] [PATCH for-4.13 v3] x86/passthrough: fix migration of MSI when using posted interrupts
Posted by Joe Jin 4 years, 4 months ago
On 11/15/19 6:06 AM, Roger Pau Monné wrote:
> On Fri, Nov 15, 2019 at 05:23:51AM +0000, Tian, Kevin wrote:
>>> From: Roger Pau Monne [mailto:roger.pau@citrix.com]
>>> Sent: Friday, November 8, 2019 9:34 PM
>>>
>>> When using posted interrupts and the guest migrates MSI from vCPUs Xen
>>> needs to flush any pending PIRR vectors on the previous vCPU, or else
>>> those vectors could get wrongly injected at a later point when the MSI
>>> fields are already updated.
>> I may overlook but isn't it the guest's responsibility of handling such
>> case? Even on bare metal, an in-fly interrupt may be delivered to
>> wrong CPU when MSI is being migrated?
> According to Joe from Oracle Linux already takes care of that by
> checking IRR when migrating interrupts between CPUs, but it seems like
> the vector is not pending in IRR (my hypothesis is that it's pending
> in PIR but lacking a sync into IRR).
> 
> After digging more into the posted interrupt code, I think there's an
> issue somewhere else, and the sync on MSI reconfiguration done by this
> patch is just covering that up.
> 
> There shouldn't be any interrupts pending in the PIR when the vCPU is
> running, and any pending vectors in the PIR should be synced into IRR
> before executing the vCPU.
> 
> AFAICT there's an issue with how PIR is synced into IRR, it relies on
> vlapic_find_highest_irr being called from vlapic_has_pending_irq, but
> depending on which interrupts are pending it's possible that
> vlapic_has_pending_irq is not called by hvm_vcpu_has_pending_irq, thus
> leaving IRR stale.
> 
> The patch below should solve that and also simplify
> __vmx_deliver_posted_interrupt, there's no reason to raise a softirq
> in __vmx_deliver_posted_interrupt: if the vCPU is the one currently
> running or if it's not running at all the sync of PIR to IRR will
> happen on vmentry, without the need of any softirq being set. Also
> note the raise_softirq in __vmx_deliver_posted_interrupt should have
> been a cpu_raise_softirq(cpu, VCPU_KICK_SOFTIRQ) instead.
> 
> Joe, can you give a try to the patch below?

This patch fixed my issue as well.

Thanks,
Joe
