There are two issues. First, pi_test_and_clear_on() pulls the cache-line to
the CPU and dirties it even if there's nothing outstanding. Second, the
final bitmap_for_each() is O(256) when O(8) would do, and the O(8) form
would also avoid multiple atomic updates to the same IRR word.
Rewrite it from scratch, explaining what's going on at each step.
Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
removal of the calls to find_{first,next}_bit() hidden behind
bitmap_for_each().
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
The main purpose of this is to get rid of bitmap_for_each().
v2:
* Extend the comments
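
To illustrate the O(256) vs O(8) point in the commit message, here is a
minimal standalone sketch (not the patch itself; the array shapes and helper
names are made up for the example, and the real IRR is scattered through the
vAPIC page rather than contiguous):

#include <stdint.h>

#define NR_VECTORS 256

/* Old shape: walk every vector, one locked RMW per set bit.  That is an
 * O(256) scan, with up to 32 atomic ORs landing on the same IRR word. */
static void merge_per_bit(const uint64_t pending[4], uint32_t irr[8])
{
    for ( unsigned int v = 0; v < NR_VECTORS; v++ )
        if ( pending[v / 64] & (1ULL << (v % 64)) )
            __atomic_fetch_or(&irr[v / 32], 1u << (v % 32), __ATOMIC_RELAXED);
}

/* New shape: walk the eight 32-bit words, skip empty ones, and issue at
 * most one atomic OR per IRR word. */
static void merge_per_word(const uint32_t pending[8], uint32_t irr[8])
{
    for ( unsigned int i = 0; i < 8; i++ )
        if ( pending[i] )
            __atomic_fetch_or(&irr[i], pending[i], __ATOMIC_RELAXED);
}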
---
xen/arch/x86/hvm/vmx/vmx.c | 70 +++++++++++++++++++++++++++++++++-----
1 file changed, 62 insertions(+), 8 deletions(-)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 12f8a66458db..1baed7e816c4 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2317,18 +2317,72 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
{
- struct vlapic *vlapic = vcpu_vlapic(v);
- unsigned int group, i;
- DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
+ struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
+ union {
+ uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
+ uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
+ } vec;
+ uint32_t *irr;
+ bool on;
- if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
+ /*
+ * The PIR is a contended cacheline which bounces between the CPU(s) and
+ * IOMMU(s). An IOMMU updates the entire PIR atomically, but we can't
+ * express the same on the CPU side, so care has to be taken.
+ *
+ * First, do a plain read of ON. If the PIR hasn't been modified, this
+ * will keep the cacheline Shared and not pull it Exclusive on the current
+ * CPU.
+ */
+ if ( !pi_test_on(desc) )
return;
- for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
- pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
+ /*
+ * Second, if the plain read said that ON was set, we must clear it with
+ * an atomic action. This will bring the cacheline to Exclusive on the
+ * current CPU.
+ *
+ * This should always succeed because no one else should be playing with
+ * the PIR behind our back, but assert so just in case.
+ */
+ on = pi_test_and_clear_on(desc);
+ ASSERT(on);
+
+ /*
+ * The cacheline is now Exclusive on the current CPU, and because ON was
+ * set, some other entity (an IOMMU, or Xen on another CPU) has indicated
+ * that the PIR needs re-scanning.
+ *
+ * Note: Entities which can't update the entire cacheline atomically
+ * (i.e. Xen on another CPU) are required to update PIR first, then
+ * set ON. Therefore, there is a corner case where we may have
+ * found and processed the PIR updates "last time around" and only
+ * found ON this time around. This is fine; the logic still
+ * operates correctly.
+ *
+ * Atomically read and clear the entire pending bitmap as fast as we can, to
+ * reduce the window where another entity may steal the cacheline back
+ * from us. This is a performance concern, not a correctness concern. If
+ * another entity does steal the cacheline back, we'll just wait to
+ * get it back again.
+ */
+ for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
+ vec._64[i] = xchg(&desc->pir[i], 0);
+
+ /*
+ * Finally, merge the pending vectors into IRR. The IRR register is
+ * scattered in memory, so we have to do this 32 bits at a time.
+ */
+ irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
+ for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
+ {
+ if ( !vec._32[i] )
+ continue;
- bitmap_for_each ( i, pending_intr, X86_NR_VECTORS )
- vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
+ asm ( "lock or %[val], %[irr]"
+ : [irr] "+m" (irr[i * 4])
+ : [val] "r" (vec._32[i]) );
+ }
}
static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
--
2.39.2
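
As an aside, the cacheline-friendly sequence described in the comments above
can be sketched in portable C11 atomics roughly as follows. This is only an
illustration of the ordering (plain read of ON, atomic clear, then one
exchange per word); the structure layout and names are simplified stand-ins,
not the real Xen definitions, and the real IRR is not contiguous:

#include <stdatomic.h>
#include <stdint.h>

struct fake_pi_desc {
    _Atomic uint64_t pir[4];    /* 256 pending-vector bits */
    _Atomic uint32_t control;   /* bit 0 stands in for "ON" */
};

static void sync_pir_sketch(struct fake_pi_desc *desc, _Atomic uint32_t irr[8])
{
    uint64_t pending[4];

    /* Step 1: plain read of ON; if clear, don't touch the line further. */
    if ( !(atomic_load_explicit(&desc->control, memory_order_relaxed) & 1) )
        return;

    /* Step 2: atomically clear ON, pulling the line Exclusive once. */
    atomic_fetch_and_explicit(&desc->control, ~1u, memory_order_acquire);

    /* Step 3: drain the pending bits, one exchange per 64-bit word. */
    for ( unsigned int i = 0; i < 4; i++ )
        pending[i] = atomic_exchange_explicit(&desc->pir[i], 0,
                                              memory_order_acq_rel);

    /* Step 4: merge into the (here contiguous) IRR, one OR per 32-bit word. */
    for ( unsigned int i = 0; i < 8; i++ )
    {
        uint32_t val = pending[i / 2] >> ((i % 2) * 32);

        if ( val )
            atomic_fetch_or_explicit(&irr[i], val, memory_order_relaxed);
    }
}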
On 27.08.2024 15:57, Andrew Cooper wrote:
> There are two issues. First, pi_test_and_clear_on() pulls the cache-line to
> the CPU and dirties it even if there's nothing outstanding, but the final
> for_each_set_bit() is O(256) when O(8) would do,

Nit: That's bitmap_for_each() now, I think. And again ...

> and would avoid multiple
> atomic updates to the same IRR word.
>
> Rewrite it from scratch, explaining what's going on at each step.
>
> Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
> removal calls to __find_{first,next}_bit() hidden behind for_each_set_bit().

... here, and no underscore prefixes on the two find functions.

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2317,18 +2317,72 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>
> static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
> {
> - struct vlapic *vlapic = vcpu_vlapic(v);
> - unsigned int group, i;
> - DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
> + struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
> + union {
> + uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];

Using unsigned long here would imo be better, as that's what matches
struct pi_desc's DECLARE_BITMAP().

> + uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
> + } vec;
> + uint32_t *irr;
> + bool on;
>
> - if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
> + /*
> + * The PIR is a contended cacheline which bounces between the CPU(s) and
> + * IOMMU(s). An IOMMU updates the entire PIR atomically, but we can't
> + * express the same on the CPU side, so care has to be taken.
> + *
> + * First, do a plain read of ON. If the PIR hasn't been modified, this
> + * will keep the cacheline Shared and not pull it Excusive on the current
> + * CPU.
> + */
> + if ( !pi_test_on(desc) )
> return;
>
> - for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
> - pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
> + /*
> + * Second, if the plain read said that ON was set, we must clear it with
> + * an atomic action. This will bring the cachline to Exclusive on the

Nit (from my spell checker): cacheline.

> + * current CPU.
> + *
> + * This should always succeed because noone else should be playing with
> + * the PIR behind our back, but assert so just in case.
> + */
> + on = pi_test_and_clear_on(desc);
> + ASSERT(on);
> +
> + /*
> + * The cacheline is now Exclusive on the current CPU, and because ON was

"is" is pretty ambitious. We can only hope it (still) is.

> + * set, some other entity (an IOMMU, or Xen on another CPU) has indicated
> + * that at PIR needs re-scanning.

Stray "at"?

> + *
> + * Note: Entities which can't update the entire cacheline atomically
> + * (i.e. Xen on another CPU) are required to update PIR first, then
> + * set ON. Therefore, there is a corner case where we may have
> + * found and processed the PIR updates "last time around" and only
> + * found ON this time around. This is fine; the logic still
> + * operates correctly.
> + *
> + * Atomically read and clear the entire pending bitmap as fast as we, to

Missing "can" before the comma?

> + * reduce the window where another entity may steal the cacheline back
> + * from us. This is a performance concern, not a correctness concern. If
> + * the another entity does steal the cacheline back, we'll just wait to

"the other"?

> + * get it back again.
> + */
> + for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
> + vec._64[i] = xchg(&desc->pir[i], 0);
> +
> + /*
> + * Finally, merge the pending vectors into IRR. The IRR register is
> + * scattered in memory, so we have to do this 32 bits at a time.
> + */
> + irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
> + for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
> + {
> + if ( !vec._32[i] )
> + continue;
>
> - bitmap_for_each ( i, pending_intr, X86_NR_VECTORS )
> - vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
> + asm ( "lock or %[val], %[irr]"
> + : [irr] "+m" (irr[i * 0x10])

This wants to be irr * 4 only, to account for sizeof(*irr) == 4.

Jan
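
To spell out the indexing being pointed at here (illustrative only; vec32
below stands in for the patch's vec._32):

#include <stdint.h>

/*
 * The vlapic's IRR is stored in the virtual APIC page as eight 32-bit
 * chunks, 16 bytes (0x10) apart: offsets 0x200, 0x210, ..., 0x270.
 *
 * irr is a uint32_t pointer, so its element stride is 4 bytes.  Chunk i is
 * therefore irr[i * 4] (i * 16 bytes past APIC_IRR), whereas irr[i * 0x10]
 * jumps i * 64 bytes and lands the ORs in the wrong registers.
 */
static void scatter_to_irr(uint32_t *irr /* -> APIC_IRR */,
                           const uint32_t vec32[8])
{
    for ( unsigned int i = 0; i < 8; i++ )
        irr[i * 4] |= vec32[i];  /* conceptually; the patch uses a locked OR */
}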
On 28/08/2024 10:19 am, Jan Beulich wrote:
> On 27.08.2024 15:57, Andrew Cooper wrote:
>> There are two issues. First, pi_test_and_clear_on() pulls the cache-line to
>> the CPU and dirties it even if there's nothing outstanding, but the final
>> for_each_set_bit() is O(256) when O(8) would do,
> Nit: That's bitmap_for_each() now, I think. And again ...
>
>> and would avoid multiple
>> atomic updates to the same IRR word.
>>
>> Rewrite it from scratch, explaining what's going on at each step.
>>
>> Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
>> removal calls to __find_{first,next}_bit() hidden behind for_each_set_bit().
> ... here, and no underscore prefixes on the two find functions.

Yes, and fixed.

>
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -2317,18 +2317,72 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>>
>> static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>> {
>> - struct vlapic *vlapic = vcpu_vlapic(v);
>> - unsigned int group, i;
>> - DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
>> + struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
>> + union {
>> + uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
> Using unsigned long here would imo be better, as that's what matches
> struct pi_desc's DECLARE_BITMAP().

Why? It was also the primary contribution to particularly-bad code
generation in this function.

>
>> + uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
>> + } vec;
>> + uint32_t *irr;
>> + bool on;
>>
>> - if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
>> + /*
>> + * The PIR is a contended cacheline which bounces between the CPU(s) and
>> + * IOMMU(s). An IOMMU updates the entire PIR atomically, but we can't
>> + * express the same on the CPU side, so care has to be taken.
>> + *
>> + * First, do a plain read of ON. If the PIR hasn't been modified, this
>> + * will keep the cacheline Shared and not pull it Excusive on the current
>> + * CPU.
>> + */
>> + if ( !pi_test_on(desc) )
>> return;
>>
>> - for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
>> - pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
>> + /*
>> + * Second, if the plain read said that ON was set, we must clear it with
>> + * an atomic action. This will bring the cachline to Exclusive on the
> Nit (from my spell checker): cacheline.
>
>> + * current CPU.
>> + *
>> + * This should always succeed because noone else should be playing with
>> + * the PIR behind our back, but assert so just in case.
>> + */
>> + on = pi_test_and_clear_on(desc);
>> + ASSERT(on);
>> +
>> + /*
>> + * The cacheline is now Exclusive on the current CPU, and because ON was
> "is" is pretty ambitious. We can only hope it (still) is.

I can't think of a clearer way of saying this. "will have become
Exclusive" perhaps, but this is getting into some subtle tense gymnastics.

>> + * get it back again.
>> + */
>> + for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
>> + vec._64[i] = xchg(&desc->pir[i], 0);
>> +
>> + /*
>> + * Finally, merge the pending vectors into IRR. The IRR register is
>> + * scattered in memory, so we have to do this 32 bits at a time.
>> + */
>> + irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
>> + for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
>> + {
>> + if ( !vec._32[i] )
>> + continue;
>>
>> - bitmap_for_each ( i, pending_intr, X86_NR_VECTORS )
>> - vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
>> + asm ( "lock or %[val], %[irr]"
>> + : [irr] "+m" (irr[i * 0x10])
> This wants to be irr * 4 only, to account for sizeof(*irr) == 4.

Ah, that will be where the AlderLake interrupts are disappearing to.

~Andrew
On 28.08.2024 20:08, Andrew Cooper wrote:
> On 28/08/2024 10:19 am, Jan Beulich wrote:
>> On 27.08.2024 15:57, Andrew Cooper wrote:
>>> There are two issues. First, pi_test_and_clear_on() pulls the cache-line to
>>> the CPU and dirties it even if there's nothing outstanding, but the final
>>> for_each_set_bit() is O(256) when O(8) would do,
>> Nit: That's bitmap_for_each() now, I think. And again ...
>>
>>> and would avoid multiple
>>> atomic updates to the same IRR word.
>>>
>>> Rewrite it from scratch, explaining what's going on at each step.
>>>
>>> Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
>>> removal calls to __find_{first,next}_bit() hidden behind for_each_set_bit().
>> ... here, and no underscore prefixes on the two find functions.
>
> Yes, and fixed.
>
>>
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -2317,18 +2317,72 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>>>
>>> static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>>> {
>>> - struct vlapic *vlapic = vcpu_vlapic(v);
>>> - unsigned int group, i;
>>> - DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
>>> + struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
>>> + union {
>>> + uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
>> Using unsigned long here would imo be better, as that's what matches
>> struct pi_desc's DECLARE_BITMAP().
>
> Why? It was also the primary contribution to particularly-bad code
> generation in this function.

I answered the "why" already: Because of you copying from something ...

>>> + uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
>>> + } vec;
>>> + uint32_t *irr;
>>> + bool on;
>>>
>>> - if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
>>> + /*
>>> + * The PIR is a contended cacheline which bounces between the CPU(s) and
>>> + * IOMMU(s). An IOMMU updates the entire PIR atomically, but we can't
>>> + * express the same on the CPU side, so care has to be taken.
>>> + *
>>> + * First, do a plain read of ON. If the PIR hasn't been modified, this
>>> + * will keep the cacheline Shared and not pull it Excusive on the current
>>> + * CPU.
>>> + */
>>> + if ( !pi_test_on(desc) )
>>> return;
>>>
>>> - for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
>>> - pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
>>> + /*
>>> + * Second, if the plain read said that ON was set, we must clear it with
>>> + * an atomic action. This will bring the cachline to Exclusive on the
>> Nit (from my spell checker): cacheline.
>>
>>> + * current CPU.
>>> + *
>>> + * This should always succeed because noone else should be playing with
>>> + * the PIR behind our back, but assert so just in case.
>>> + */
>>> + on = pi_test_and_clear_on(desc);
>>> + ASSERT(on);
>>> +
>>> + /*
>>> + * The cacheline is now Exclusive on the current CPU, and because ON was
>> "is" is pretty ambitious. We can only hope it (still) is.
>
> I can't think of a clearer way of saying this. "will have become
> Exclusive" perhaps, but this is getting into some subtle tense gymnastics.
>
>>> + * get it back again.
>>> + */
>>> + for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
>>> + vec._64[i] = xchg(&desc->pir[i], 0);

... that is the result of DECLARE_BITMAP(), i.e. an array of unsigned longs.
If you make that part of the new union unsigned long[] too, you'll have code
which is bitness-independent (i.e. would also have worked correctly in 32-bit
Xen, and would work correctly in hypothetical 128-bit Xen).

I don't think the array _type_ was "the primary contribution to
particularly-bad code generation in this function"; it was how that bitmap
was used.

Jan
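
For reference, the bitness-independent shape being suggested would look
roughly like this in the context of the patched function (sketch only, field
and macro names as already used there):

    union {
        unsigned long _long[X86_NR_VECTORS / (sizeof(unsigned long) * 8)];
        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
    } vec;

    /* Matches DECLARE_BITMAP(pir, X86_NR_VECTORS) in struct pi_desc, so the
     * drain loop works for whatever width unsigned long happens to be. */
    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._long); ++i )
        vec._long[i] = xchg(&desc->pir[i], 0);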
On 28/08/2024 7:08 pm, Andrew Cooper wrote:
> On 28/08/2024 10:19 am, Jan Beulich wrote:
>> On 27.08.2024 15:57, Andrew Cooper wrote:
>>> + * get it back again.
>>> + */
>>> + for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
>>> + vec._64[i] = xchg(&desc->pir[i], 0);
>>> +
>>> + /*
>>> + * Finally, merge the pending vectors into IRR. The IRR register is
>>> + * scattered in memory, so we have to do this 32 bits at a time.
>>> + */
>>> + irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
>>> + for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
>>> + {
>>> + if ( !vec._32[i] )
>>> + continue;
>>>
>>> - bitmap_for_each ( i, pending_intr, X86_NR_VECTORS )
>>> - vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
>>> + asm ( "lock or %[val], %[irr]"
>>> + : [irr] "+m" (irr[i * 0x10])
>> This wants to be irr * 4 only, to account for sizeof(*irr) == 4.
> Ah, that will be where the AlderLake interrupts are disappearing to.

Indeed. It's much happier now.

https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1431047447

~Andrew
On 27/08/2024 2:57 pm, Andrew Cooper wrote:
> There are two issues. First, pi_test_and_clear_on() pulls the cache-line to
> the CPU and dirties it even if there's nothing outstanding, but the final
> for_each_set_bit() is O(256) when O(8) would do, and would avoid multiple
> atomic updates to the same IRR word.
>
> Rewrite it from scratch, explaining what's going on at each step.
>
> Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
> removal calls to __find_{first,next}_bit() hidden behind for_each_set_bit().
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> The main purpose of this is to get rid of bitmap_for_each().
>
> v2:
> * Extend the comments

FWIW, Gitlab CI has gained one reliable failure for this series (which
includes the hweight series too, because of how I've got my branch arranged).

It is a timeout (domU not reporting in after boot), and as it is specific to
the AlderLake runner, it's very likely to be this patch.

I guess I need to triple-check the IRR scatter logic...

~Andrew