Using dynamically allocated / maintained vectors has several downsides:
- possible nesting of IRQs due to the effects of IRQ migration,
- reduction of vectors available for devices,
- IRQs not moving as intended if there's a shortage of vectors,
- higher runtime overhead.
As the vector also doesn't need to be of any particular priority (first and
foremost it really shouldn't be of higher or the same priority as the timer
IRQ, as that raises TIMER_SOFTIRQ anyway), simply use the lowest one above the
legacy range. The vector needs reserving early, until it is known whether
it actually is used. If it isn't, it's made available for general use.
With a fixed vector, less updating is now necessary in
set_channel_irq_affinity(); in particular channels no longer need transient
masking, as the necessary update is now atomic. To fully leverage
this, however, we want to stop using hpet_msi_set_affinity() there. With
the transient masking dropped, we're no longer at risk of missing events.
Fixes: 996576b965cc ("xen: allow up to 16383 cpus")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko<oleksii.kurochko@gmail.com>
---
This is an alternative proposal to
https://lists.xen.org/archives/html/xen-devel/2014-03/msg00399.html.
Should we keep hpet_msi_set_affinity() at all? We'd better not have the
generic IRQ subsystem play with our IRQs' affinities, and fixup_irqs()
isn't relevant here. (If so, this likely would want to be a separate
patch, though.)
The hpet_enable_channel() call could in principle be made (effectively)
conditional, at the price of introducing a check in there. However, just as
eliminating the masking didn't help with the many excess (early) IRQs I'm
observing on Intel hardware, making the call conditional doesn't help either.
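For concreteness, a minimal sketch of what such a (hypothetical) conditional
variant could look like, skipping the config write when the enable bit is
already set:

static void hpet_enable_channel(struct hpet_event_channel *ch)
{
    u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));

    /* Hypothetical check: only write back when not already enabled. */
    if ( !(cfg & HPET_TN_ENABLE) )
        hpet_write32(cfg | HPET_TN_ENABLE, HPET_Tn_CFG(ch->idx));

    ch->msi.msi_attrib.host_masked = 0;
}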
The Fixes: tag indicates where the problem got significantly worse; in
principle it was there already before (crashing at perhaps 6 or 7 levels
of nested IRQs).
---
v3: Switch to using vector 0x30, to unbreak AMD, including an adjustment
to AMD IOMMU intremap logic. Adjust condition around assertions in
set_channel_irq_affinity().
v2: Re-work set_channel_irq_affinity() intensively. Re-base over the
dropping of another patch. Drop setup_vector_irq() change.
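(For reference, combining the irq-vectors.h definitions below with the v3
note: with HPET_BROADCAST_VECTOR being 0x30 and defined as
LAST_LEGACY_VECTOR + 1, the layout works out as

    FIRST_LEGACY_VECTOR   = FIRST_DYNAMIC_VECTOR      = 0x20
    LAST_LEGACY_VECTOR    = FIRST_LEGACY_VECTOR + 0xf = 0x2f
    HPET_BROADCAST_VECTOR = LAST_LEGACY_VECTOR + 1    = 0x30

i.e. the lowest-priority vector above the legacy range.)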
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -9,17 +9,19 @@
#include <xen/timer.h>
#include <xen/smp.h>
#include <xen/softirq.h>
+#include <xen/cpuidle.h>
#include <xen/irq.h>
#include <xen/numa.h>
#include <xen/param.h>
#include <xen/sched.h>
#include <asm/apic.h>
-#include <asm/fixmap.h>
#include <asm/div64.h>
+#include <asm/fixmap.h>
+#include <asm/genapic.h>
#include <asm/hpet.h>
+#include <asm/irq-vectors.h>
#include <asm/msi.h>
-#include <xen/cpuidle.h>
#define MAX_DELTA_NS MILLISECS(10*1000)
#define MIN_DELTA_NS MICROSECS(20)
@@ -251,10 +253,9 @@ static void cf_check hpet_interrupt_hand
ch->event_handler(ch);
}
-static void cf_check hpet_msi_unmask(struct irq_desc *desc)
+static void hpet_enable_channel(struct hpet_event_channel *ch)
{
u32 cfg;
- struct hpet_event_channel *ch = desc->action->dev_id;
cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
cfg |= HPET_TN_ENABLE;
@@ -262,6 +263,11 @@ static void cf_check hpet_msi_unmask(str
ch->msi.msi_attrib.host_masked = 0;
}
+static void cf_check hpet_msi_unmask(struct irq_desc *desc)
+{
+ hpet_enable_channel(desc->action->dev_id);
+}
+
static void cf_check hpet_msi_mask(struct irq_desc *desc)
{
u32 cfg;
@@ -303,15 +309,13 @@ static void cf_check hpet_msi_set_affini
struct hpet_event_channel *ch = desc->action->dev_id;
struct msi_msg msg = ch->msi.msg;
- msg.dest32 = set_desc_affinity(desc, mask);
- if ( msg.dest32 == BAD_APICID )
- return;
+ /* This really is only for dump_irqs(). */
+ cpumask_copy(desc->arch.cpu_mask, mask);
- msg.data &= ~MSI_DATA_VECTOR_MASK;
- msg.data |= MSI_DATA_VECTOR(desc->arch.vector);
+ msg.dest32 = cpu_mask_to_apicid(mask);
msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
msg.address_lo |= MSI_ADDR_DEST_ID(msg.dest32);
- if ( msg.data != ch->msi.msg.data || msg.dest32 != ch->msi.msg.dest32 )
+ if ( msg.dest32 != ch->msi.msg.dest32 )
hpet_msi_write(ch, &msg);
}
@@ -324,7 +328,7 @@ static hw_irq_controller hpet_msi_type =
.shutdown = hpet_msi_shutdown,
.enable = hpet_msi_unmask,
.disable = hpet_msi_mask,
- .ack = ack_nonmaskable_msi_irq,
+ .ack = irq_actor_none,
.end = end_nonmaskable_irq,
.set_affinity = hpet_msi_set_affinity,
};
@@ -343,6 +347,12 @@ static int __init hpet_setup_msi_irq(str
u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
irq_desc_t *desc = irq_to_desc(ch->msi.irq);
+ clear_irq_vector(ch->msi.irq);
+ ret = bind_irq_vector(ch->msi.irq, HPET_BROADCAST_VECTOR, &cpu_online_map);
+ if ( ret )
+ return ret;
+ cpumask_setall(desc->affinity);
+
if ( iommu_intremap != iommu_intremap_off )
{
ch->msi.hpet_id = hpet_blockid;
@@ -472,19 +482,50 @@ static struct hpet_event_channel *hpet_g
static void set_channel_irq_affinity(struct hpet_event_channel *ch)
{
struct irq_desc *desc = irq_to_desc(ch->msi.irq);
+ struct msi_msg msg = ch->msi.msg;
ASSERT(!local_irq_is_enabled());
spin_lock(&desc->lock);
- hpet_msi_mask(desc);
- hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
- hpet_msi_unmask(desc);
+
+ per_cpu(vector_irq, ch->cpu)[HPET_BROADCAST_VECTOR] = ch->msi.irq;
+
+ /*
+ * Open-coding a reduced form of hpet_msi_set_affinity() here. With the
+ * actual update below (either of the IRTE or of [just] message address;
+ * with interrupt remapping message address/data don't change) now being
+ * atomic, we can avoid masking the IRQ around the update. As a result
+ * we're no longer at risk of missing IRQs (provided hpet_broadcast_enter()
+ * keeps setting the new deadline only afterwards).
+ */
+ cpumask_copy(desc->arch.cpu_mask, cpumask_of(ch->cpu));
+
spin_unlock(&desc->lock);
- spin_unlock(&ch->lock);
+ msg.dest32 = cpu_physical_id(ch->cpu);
+ msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
+ msg.address_lo |= MSI_ADDR_DEST_ID(msg.dest32);
+ if ( msg.dest32 != ch->msi.msg.dest32 )
+ {
+ ch->msi.msg = msg;
- /* We may have missed an interrupt due to the temporary masking. */
- if ( ch->event_handler && ch->next_event < NOW() )
- ch->event_handler(ch);
+ if ( iommu_intremap != iommu_intremap_off )
+ {
+ int rc = iommu_update_ire_from_msi(&ch->msi, &msg);
+
+ ASSERT(rc <= 0);
+ if ( rc >= 0 )
+ {
+ ASSERT(msg.data == hpet_read32(HPET_Tn_ROUTE(ch->idx)));
+ ASSERT(msg.address_lo ==
+ hpet_read32(HPET_Tn_ROUTE(ch->idx) + 4));
+ }
+ }
+ else
+ hpet_write32(msg.address_lo, HPET_Tn_ROUTE(ch->idx) + 4);
+ }
+
+ hpet_enable_channel(ch);
+ spin_unlock(&ch->lock);
}
static void hpet_attach_channel(unsigned int cpu,
@@ -622,6 +663,12 @@ void __init hpet_broadcast_init(void)
hpet_events->flags = HPET_EVT_LEGACY;
}
+void __init hpet_broadcast_late_init(void)
+{
+ if ( !num_hpets_used )
+ free_lopriority_vector(HPET_BROADCAST_VECTOR);
+}
+
void hpet_broadcast_resume(void)
{
u32 cfg;
--- a/xen/arch/x86/include/asm/hpet.h
+++ b/xen/arch/x86/include/asm/hpet.h
@@ -90,6 +90,7 @@ void hpet_disable_legacy_replacement_mod
* rather than using the LAPIC timer. Used for Cx state entry.
*/
void hpet_broadcast_init(void);
+void hpet_broadcast_late_init(void);
void hpet_broadcast_resume(void);
void cf_check hpet_broadcast_enter(void);
void cf_check hpet_broadcast_exit(void);
--- a/xen/arch/x86/include/asm/irq.h
+++ b/xen/arch/x86/include/asm/irq.h
@@ -116,6 +116,7 @@ void cf_check call_function_interrupt(vo
void cf_check irq_move_cleanup_interrupt(void);
uint8_t alloc_hipriority_vector(void);
+void free_lopriority_vector(uint8_t vector);
void set_direct_apic_vector(uint8_t vector, void (*handler)(void));
void alloc_direct_apic_vector(uint8_t *vector, void (*handler)(void));
--- a/xen/arch/x86/include/asm/irq-vectors.h
+++ b/xen/arch/x86/include/asm/irq-vectors.h
@@ -22,6 +22,9 @@
#define FIRST_LEGACY_VECTOR FIRST_DYNAMIC_VECTOR
#define LAST_LEGACY_VECTOR (FIRST_LEGACY_VECTOR + 0xf)
+/* HPET broadcast is statically allocated and wants to be low priority. */
+#define HPET_BROADCAST_VECTOR (LAST_LEGACY_VECTOR + 1)
+
#ifdef CONFIG_PV32
#define HYPERCALL_VECTOR 0x82
#endif
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -468,6 +468,12 @@ int __init init_irq_data(void)
vector++ )
__set_bit(vector, used_vectors);
+ /*
+ * Prevent the HPET broadcast vector from being used, until it is known
+ * whether it's actually needed.
+ */
+ __set_bit(HPET_BROADCAST_VECTOR, used_vectors);
+
return 0;
}
@@ -991,6 +997,13 @@ void alloc_direct_apic_vector(uint8_t *v
spin_unlock(&lock);
}
+/* This could free any vectors, but is needed only for low-prio ones. */
+void __init free_lopriority_vector(uint8_t vector)
+{
+ ASSERT(vector < FIRST_HIPRIORITY_VECTOR);
+ clear_bit(vector, used_vectors);
+}
+
static void cf_check irq_ratelimit_timer_fn(void *data)
{
struct irq_desc *desc, *tmp;
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -2675,6 +2675,8 @@ static int __init cf_check disable_pit_i
"Force enable with 'cpuidle'.\n");
}
+ hpet_broadcast_late_init();
+
return 0;
}
__initcall(disable_pit_irq);
--- a/xen/drivers/passthrough/amd/iommu_intr.c
+++ b/xen/drivers/passthrough/amd/iommu_intr.c
@@ -551,6 +551,13 @@ int cf_check amd_iommu_msi_msg_update_ir
for ( i = 1; i < nr; ++i )
msi_desc[i].remap_index = msi_desc->remap_index + i;
msg->data = data;
+ /*
+ * While the low address bits don't matter, "canonicalize" the address
+ * by zapping the bits that were transferred to the IRTE. This way
+ * callers can check for there actually needing to be an update to
+ * wherever the address is put.
+ */
+ msg->address_lo &= ~(MSI_ADDR_DESTMODE_MASK | MSI_ADDR_DEST_ID_MASK);
}
return rc;
On Thu, Oct 23, 2025 at 05:50:17PM +0200, Jan Beulich wrote:
> Using dynamically allocated / maintained vectors has several downsides:
> - possible nesting of IRQs due to the effects of IRQ migration,
> - reduction of vectors available for devices,
> - IRQs not moving as intended if there's a shortage of vectors,
> - higher runtime overhead.
>
> As the vector also doesn't need to be of any particular priority (first and
> foremost it really shouldn't be of higher or the same priority as the timer
> IRQ, as that raises TIMER_SOFTIRQ anyway), simply use the lowest one above the
> legacy range. The vector needs reserving early, until it is known whether
> it actually is used. If it isn't, it's made available for general use.
>
> With a fixed vector, less updating is now necessary in
> set_channel_irq_affinity(); in particular channels no longer need transient
> masking, as the necessary update is now atomic. To fully leverage
> this, however, we want to stop using hpet_msi_set_affinity() there. With
> the transient masking dropped, we're no longer at risk of missing events.
>
> Fixes: 996576b965cc ("xen: allow up to 16383 cpus")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Release-Acked-by: Oleksii Kurochko<oleksii.kurochko@gmail.com>
^ space?
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
I've got some comments below, but they shouldn't affect functionality.
I leave it up to you whether you think some of those might be
beneficial.
> ---
> This is an alternative proposal to
> https://lists.xen.org/archives/html/xen-devel/2014-03/msg00399.html.
>
> Should we keep hpet_msi_set_affinity() at all? We'd better not have the
> generic IRQ subsystem play with our IRQs' affinities, and fixup_irqs()
> isn't relevant here. (If so, this likely would want to be a separate
> patch, though.)
>
> The hpet_enable_channel() call could in principle be made (effectively)
> conditional, at the price of introducing a check in there. However, just as
> eliminating the masking didn't help with the many excess (early) IRQs I'm
> observing on Intel hardware, making the call conditional doesn't help either.
>
> The Fixes: tag indicates where the problem got significantly worse; in
> principle it was there already before (crashing at perhaps 6 or 7 levels
> of nested IRQs).
> ---
> v3: Switch to using vector 0x30, to unbreak AMD, including an adjustment
> to AMD IOMMU intremap logic. Adjust condition around assertions in
> set_channel_irq_affinity().
> v2: Re-work set_channel_irq_affinity() intensively. Re-base over the
> dropping of another patch. Drop setup_vector_irq() change.
>
> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -9,17 +9,19 @@
> #include <xen/timer.h>
> #include <xen/smp.h>
> #include <xen/softirq.h>
> +#include <xen/cpuidle.h>
> #include <xen/irq.h>
> #include <xen/numa.h>
> #include <xen/param.h>
> #include <xen/sched.h>
>
> #include <asm/apic.h>
> -#include <asm/fixmap.h>
> #include <asm/div64.h>
> +#include <asm/fixmap.h>
> +#include <asm/genapic.h>
> #include <asm/hpet.h>
> +#include <asm/irq-vectors.h>
> #include <asm/msi.h>
> -#include <xen/cpuidle.h>
>
> #define MAX_DELTA_NS MILLISECS(10*1000)
> #define MIN_DELTA_NS MICROSECS(20)
> @@ -251,10 +253,9 @@ static void cf_check hpet_interrupt_hand
> ch->event_handler(ch);
> }
>
> -static void cf_check hpet_msi_unmask(struct irq_desc *desc)
> +static void hpet_enable_channel(struct hpet_event_channel *ch)
> {
> u32 cfg;
> - struct hpet_event_channel *ch = desc->action->dev_id;
>
> cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
> cfg |= HPET_TN_ENABLE;
> @@ -262,6 +263,11 @@ static void cf_check hpet_msi_unmask(str
> ch->msi.msi_attrib.host_masked = 0;
> }
>
> +static void cf_check hpet_msi_unmask(struct irq_desc *desc)
> +{
> + hpet_enable_channel(desc->action->dev_id);
> +}
> +
> static void cf_check hpet_msi_mask(struct irq_desc *desc)
> {
> u32 cfg;
> @@ -303,15 +309,13 @@ static void cf_check hpet_msi_set_affini
> struct hpet_event_channel *ch = desc->action->dev_id;
> struct msi_msg msg = ch->msi.msg;
>
> - msg.dest32 = set_desc_affinity(desc, mask);
> - if ( msg.dest32 == BAD_APICID )
> - return;
> + /* This really is only for dump_irqs(). */
> + cpumask_copy(desc->arch.cpu_mask, mask);
To add some extra checks here for correctness, do you think it would
be helpful to add:
ASSERT(cpumask_weight(mask) == 1);
ASSERT(cpumask_intersects(mask, &cpu_online_mask));
Or that's too pedantic?
>
> - msg.data &= ~MSI_DATA_VECTOR_MASK;
> - msg.data |= MSI_DATA_VECTOR(desc->arch.vector);
> + msg.dest32 = cpu_mask_to_apicid(mask);
> msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
> msg.address_lo |= MSI_ADDR_DEST_ID(msg.dest32);
> - if ( msg.data != ch->msi.msg.data || msg.dest32 != ch->msi.msg.dest32 )
> + if ( msg.dest32 != ch->msi.msg.dest32 )
> hpet_msi_write(ch, &msg);
> }
>
> @@ -324,7 +328,7 @@ static hw_irq_controller hpet_msi_type =
> .shutdown = hpet_msi_shutdown,
> .enable = hpet_msi_unmask,
> .disable = hpet_msi_mask,
> - .ack = ack_nonmaskable_msi_irq,
> + .ack = irq_actor_none,
> .end = end_nonmaskable_irq,
> .set_affinity = hpet_msi_set_affinity,
> };
> @@ -343,6 +347,12 @@ static int __init hpet_setup_msi_irq(str
> u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
> irq_desc_t *desc = irq_to_desc(ch->msi.irq);
>
> + clear_irq_vector(ch->msi.irq);
> + ret = bind_irq_vector(ch->msi.irq, HPET_BROADCAST_VECTOR, &cpu_online_map);
By passing cpu_online_map here, it leads to _bind_irq_vector() doing:
cpumask_copy(desc->arch.cpu_mask, &cpu_online_map);
Which strictly speaking is wrong. However this is just a cosmetic
issue until the irq is used for the first time, at which point it will
be assigned to a concrete CPU.
You could do:
cpumask_clear(desc->arch.cpu_mask);
cpumask_set_cpu(cpumask_any(&cpu_online_map), desc->arch.cpu_mask);
(Or equivalent)
To assign the interrupt to a concrete CPU and reflect it on the
cpu_mask after the bind_irq_vector() call, but I can live with it
being like this. I have patches to adjust _bind_irq_vector() myself,
which I hope I will be able to post soon.
> + if ( ret )
> + return ret;
> + cpumask_setall(desc->affinity);
> +
> if ( iommu_intremap != iommu_intremap_off )
> {
> ch->msi.hpet_id = hpet_blockid;
> @@ -472,19 +482,50 @@ static struct hpet_event_channel *hpet_g
> static void set_channel_irq_affinity(struct hpet_event_channel *ch)
> {
> struct irq_desc *desc = irq_to_desc(ch->msi.irq);
> + struct msi_msg msg = ch->msi.msg;
>
> ASSERT(!local_irq_is_enabled());
> spin_lock(&desc->lock);
> - hpet_msi_mask(desc);
> - hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
> - hpet_msi_unmask(desc);
> +
> + per_cpu(vector_irq, ch->cpu)[HPET_BROADCAST_VECTOR] = ch->msi.irq;
> +
> + /*
> + * Open-coding a reduced form of hpet_msi_set_affinity() here. With the
> + * actual update below (either of the IRTE or of [just] message address;
> + * with interrupt remapping message address/data don't change) now being
> + * atomic, we can avoid masking the IRQ around the update. As a result
> + * we're no longer at risk of missing IRQs (provided hpet_broadcast_enter()
> + * keeps setting the new deadline only afterwards).
> + */
> + cpumask_copy(desc->arch.cpu_mask, cpumask_of(ch->cpu));
> +
> spin_unlock(&desc->lock);
>
> - spin_unlock(&ch->lock);
> + msg.dest32 = cpu_physical_id(ch->cpu);
> + msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
> + msg.address_lo |= MSI_ADDR_DEST_ID(msg.dest32);
> + if ( msg.dest32 != ch->msi.msg.dest32 )
> + {
> + ch->msi.msg = msg;
>
> - /* We may have missed an interrupt due to the temporary masking. */
> - if ( ch->event_handler && ch->next_event < NOW() )
> - ch->event_handler(ch);
> + if ( iommu_intremap != iommu_intremap_off )
> + {
> + int rc = iommu_update_ire_from_msi(&ch->msi, &msg);
> +
> + ASSERT(rc <= 0);
> + if ( rc >= 0 )
I don't think the rc > 0 part of this check is meaningful, as any rc
value > 0 will trigger the ASSERT(rc <= 0) ahead of it. The code
inside of the if block itself only contains ASSERTs, so it's only
relevant for debug=y builds that will also have the rc <= 0 ASSERT.
You could possibly use:
ASSERT(rc <= 0);
if ( !rc )
{
ASSERT(...
And achieve the same result?
> + {
> + ASSERT(msg.data == hpet_read32(HPET_Tn_ROUTE(ch->idx)));
> + ASSERT(msg.address_lo ==
> + hpet_read32(HPET_Tn_ROUTE(ch->idx) + 4));
> + }
> + }
> + else
> + hpet_write32(msg.address_lo, HPET_Tn_ROUTE(ch->idx) + 4);
> + }
> +
> + hpet_enable_channel(ch);
> + spin_unlock(&ch->lock);
> }
>
> static void hpet_attach_channel(unsigned int cpu,
> @@ -622,6 +663,12 @@ void __init hpet_broadcast_init(void)
> hpet_events->flags = HPET_EVT_LEGACY;
> }
>
> +void __init hpet_broadcast_late_init(void)
> +{
> + if ( !num_hpets_used )
> + free_lopriority_vector(HPET_BROADCAST_VECTOR);
> +}
> +
> void hpet_broadcast_resume(void)
> {
> u32 cfg;
> --- a/xen/arch/x86/include/asm/hpet.h
> +++ b/xen/arch/x86/include/asm/hpet.h
> @@ -90,6 +90,7 @@ void hpet_disable_legacy_replacement_mod
> * rather than using the LAPIC timer. Used for Cx state entry.
> */
> void hpet_broadcast_init(void);
> +void hpet_broadcast_late_init(void);
> void hpet_broadcast_resume(void);
> void cf_check hpet_broadcast_enter(void);
> void cf_check hpet_broadcast_exit(void);
> --- a/xen/arch/x86/include/asm/irq.h
> +++ b/xen/arch/x86/include/asm/irq.h
> @@ -116,6 +116,7 @@ void cf_check call_function_interrupt(vo
> void cf_check irq_move_cleanup_interrupt(void);
>
> uint8_t alloc_hipriority_vector(void);
> +void free_lopriority_vector(uint8_t vector);
>
> void set_direct_apic_vector(uint8_t vector, void (*handler)(void));
> void alloc_direct_apic_vector(uint8_t *vector, void (*handler)(void));
> --- a/xen/arch/x86/include/asm/irq-vectors.h
> +++ b/xen/arch/x86/include/asm/irq-vectors.h
> @@ -22,6 +22,9 @@
> #define FIRST_LEGACY_VECTOR FIRST_DYNAMIC_VECTOR
> #define LAST_LEGACY_VECTOR (FIRST_LEGACY_VECTOR + 0xf)
>
> +/* HPET broadcast is statically allocated and wants to be low priority. */
> +#define HPET_BROADCAST_VECTOR (LAST_LEGACY_VECTOR + 1)
> +
> #ifdef CONFIG_PV32
> #define HYPERCALL_VECTOR 0x82
> #endif
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -468,6 +468,12 @@ int __init init_irq_data(void)
> vector++ )
> __set_bit(vector, used_vectors);
>
> + /*
> + * Prevent the HPET broadcast vector from being used, until it is known
> + * whether it's actually needed.
> + */
> + __set_bit(HPET_BROADCAST_VECTOR, used_vectors);
> +
> return 0;
> }
>
> @@ -991,6 +997,13 @@ void alloc_direct_apic_vector(uint8_t *v
> spin_unlock(&lock);
> }
>
> +/* This could free any vectors, but is needed only for low-prio ones. */
> +void __init free_lopriority_vector(uint8_t vector)
> +{
> + ASSERT(vector < FIRST_HIPRIORITY_VECTOR);
> + clear_bit(vector, used_vectors);
> +}
I'm undecided whether we want to have such a helper. This is all very
specific to the single use by the HPET vector, and hence might be best
to simply put the clear_bit() inside of hpet_broadcast_late_init()
itself.
I could see for example other callers wanting to use this also
requiring cleanup of the per cpu vector_irq arrays. Given its (so
far) very limited usage it might be clearer to open-code the
clear_bit().
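Purely as illustration, and assuming used_vectors were made visible to
hpet.c (it currently isn't), the open-coded form might look like:

void __init hpet_broadcast_late_init(void)
{
    /* Drop the early vector reservation if no HPET channels are in use. */
    if ( !num_hpets_used )
        clear_bit(HPET_BROADCAST_VECTOR, used_vectors);
}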
> +
> static void cf_check irq_ratelimit_timer_fn(void *data)
> {
> struct irq_desc *desc, *tmp;
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -2675,6 +2675,8 @@ static int __init cf_check disable_pit_i
> "Force enable with 'cpuidle'.\n");
> }
>
> + hpet_broadcast_late_init();
> +
> return 0;
> }
> __initcall(disable_pit_irq);
> --- a/xen/drivers/passthrough/amd/iommu_intr.c
> +++ b/xen/drivers/passthrough/amd/iommu_intr.c
> @@ -551,6 +551,13 @@ int cf_check amd_iommu_msi_msg_update_ir
> for ( i = 1; i < nr; ++i )
> msi_desc[i].remap_index = msi_desc->remap_index + i;
> msg->data = data;
> + /*
> + * While the low address bits don't matter, "canonicalize" the address
> + * by zapping the bits that were transferred to the IRTE. This way
> + * callers can check for there actually needing to be an update to
> + * wherever the address is put.
> + */
> + msg->address_lo &= ~(MSI_ADDR_DESTMODE_MASK | MSI_ADDR_DEST_ID_MASK);
You might want to mention this change in the commit message also, as
it could look unrelated to the rest of the code?
Thanks, Roger.
On 24.10.2025 15:24, Roger Pau Monné wrote:
> On Thu, Oct 23, 2025 at 05:50:17PM +0200, Jan Beulich wrote:
>> Using dynamically allocated / maintained vectors has several downsides:
>> - possible nesting of IRQs due to the effects of IRQ migration,
>> - reduction of vectors available for devices,
>> - IRQs not moving as intended if there's a shortage of vectors,
>> - higher runtime overhead.
>>
>> As the vector also doesn't need to be of any particular priority (first and
>> foremost it really shouldn't be of higher or the same priority as the timer
>> IRQ, as that raises TIMER_SOFTIRQ anyway), simply use the lowest one above the
>> legacy range. The vector needs reserving early, until it is known whether
>> it actually is used. If it isn't, it's made available for general use.
>>
>> With a fixed vector, less updating is now necessary in
>> set_channel_irq_affinity(); in particular channels don't need transiently
>> masking anymore, as the necessary update is now atomic. To fully leverage
>> this, however, we want to stop using hpet_msi_set_affinity() there. With
>> the transient masking dropped, we're no longer at risk of missing events.
>>
>> Fixes: 996576b965cc ("xen: allow up to 16383 cpus")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Release-Acked-by: Oleksii Kurochko<oleksii.kurochko@gmail.com>
> ^ space?
Looks like I simply took what was provided; I've added the blank now (also
in patch 1).
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Thanks.
>> @@ -303,15 +309,13 @@ static void cf_check hpet_msi_set_affini
>> struct hpet_event_channel *ch = desc->action->dev_id;
>> struct msi_msg msg = ch->msi.msg;
>>
>> - msg.dest32 = set_desc_affinity(desc, mask);
>> - if ( msg.dest32 == BAD_APICID )
>> - return;
>> + /* This really is only for dump_irqs(). */
>> + cpumask_copy(desc->arch.cpu_mask, mask);
>
> To add some extra checks here for correctness, do you think it would
> be helpful to add:
>
> ASSERT(cpumask_weight(mask) == 1);
> ASSERT(cpumask_intersects(mask, &cpu_online_mask));
>
> Or that's too pedantic?
Imo that would be pretty pointless in particular since the function is to go
away anyway.
>> @@ -343,6 +347,12 @@ static int __init hpet_setup_msi_irq(str
>> u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
>> irq_desc_t *desc = irq_to_desc(ch->msi.irq);
>>
>> + clear_irq_vector(ch->msi.irq);
>> + ret = bind_irq_vector(ch->msi.irq, HPET_BROADCAST_VECTOR, &cpu_online_map);
>
> By passing cpu_online_map here, it leads to _bind_irq_vector() doing:
>
> cpumask_copy(desc->arch.cpu_mask, &cpu_online_map);
>
> Which strictly speaking is wrong. However this is just a cosmetic
> issue until the irq is used for the first time, at which point it will
> be assigned to a concrete CPU.
>
> You could do:
>
> cpumask_clear(desc->arch.cpu_mask);
> cpumask_set_cpu(cpumask_any(&cpu_online_map), desc->arch.cpu_mask);
>
> (Or equivalent)
>
> To assign the interrupt to a concrete CPU and reflect it on the
> cpu_mask after the bind_irq_vector() call, but I can live with it
> being like this. I have patches to adjust _bind_irq_vector() myself,
> which I hope I will be able to post soon.
Hmm, I wrongly memorized hpet_broadcast_init() as being pre-SMP-init only.
It has three call sites:
- mwait_idle_init(), called from cpuidle_presmp_init(),
- amd_cpuidle_init(), calling in only when invoked the very first time,
which is again from cpuidle_presmp_init(),
- _disable_pit_irq(), called from the regular initcall disable_pit_irq().
I.e. for the latter you're right that the CPU mask is too broad (in only a
cosmetic way though). Would you be okay if I used cpumask_of(0) in place
of &cpu_online_map?
>> @@ -472,19 +482,50 @@ static struct hpet_event_channel *hpet_g
>> static void set_channel_irq_affinity(struct hpet_event_channel *ch)
>> {
>> struct irq_desc *desc = irq_to_desc(ch->msi.irq);
>> + struct msi_msg msg = ch->msi.msg;
>>
>> ASSERT(!local_irq_is_enabled());
>> spin_lock(&desc->lock);
>> - hpet_msi_mask(desc);
>> - hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
>> - hpet_msi_unmask(desc);
>> +
>> + per_cpu(vector_irq, ch->cpu)[HPET_BROADCAST_VECTOR] = ch->msi.irq;
>> +
>> + /*
>> + * Open-coding a reduced form of hpet_msi_set_affinity() here. With the
>> + * actual update below (either of the IRTE or of [just] message address;
>> + * with interrupt remapping message address/data don't change) now being
>> + * atomic, we can avoid masking the IRQ around the update. As a result
>> + * we're no longer at risk of missing IRQs (provided hpet_broadcast_enter()
>> + * keeps setting the new deadline only afterwards).
>> + */
>> + cpumask_copy(desc->arch.cpu_mask, cpumask_of(ch->cpu));
>> +
>> spin_unlock(&desc->lock);
>>
>> - spin_unlock(&ch->lock);
>> + msg.dest32 = cpu_physical_id(ch->cpu);
>> + msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
>> + msg.address_lo |= MSI_ADDR_DEST_ID(msg.dest32);
>> + if ( msg.dest32 != ch->msi.msg.dest32 )
>> + {
>> + ch->msi.msg = msg;
>>
>> - /* We may have missed an interrupt due to the temporary masking. */
>> - if ( ch->event_handler && ch->next_event < NOW() )
>> - ch->event_handler(ch);
>> + if ( iommu_intremap != iommu_intremap_off )
>> + {
>> + int rc = iommu_update_ire_from_msi(&ch->msi, &msg);
>> +
>> + ASSERT(rc <= 0);
>> + if ( rc >= 0 )
>
> I don't think the rc > 0 part of this check is meaningful, as any rc
> value > 0 will trigger the ASSERT(rc <= 0) ahead of it. The code
> inside of the if block itself only contains ASSERTs, so it's only
> relevant for debug=y builds that will also have the rc <= 0 ASSERT.
>
> You could possibly use:
>
> ASSERT(rc <= 0);
> if ( !rc )
> {
> ASSERT(...
>
> And achieve the same result?
Yes, except that I'd like to keep the >= to cover the case if the first
assertion was dropped / commented out, as well as to have a doc effect.
>> @@ -991,6 +997,13 @@ void alloc_direct_apic_vector(uint8_t *v
>> spin_unlock(&lock);
>> }
>>
>> +/* This could free any vectors, but is needed only for low-prio ones. */
>> +void __init free_lopriority_vector(uint8_t vector)
>> +{
>> + ASSERT(vector < FIRST_HIPRIORITY_VECTOR);
>> + clear_bit(vector, used_vectors);
>> +}
>
> I'm undecided whether we want to have such a helper. This is all very
> specific to the single use by the HPET vector, and hence might be best
> to simply put the clear_bit() inside of hpet_broadcast_late_init()
> itself.
I wanted to avoid making used_vectors non-static.
> I could see for example other callers wanting to use this also
> requiring cleanup of the per cpu vector_irq arrays. Given its (so
> far) very limited usage it might be clearer to open-code the
> clear_bit().
Dealing with vector_irq[] is a separate thing, though, isn't it?
>> --- a/xen/drivers/passthrough/amd/iommu_intr.c
>> +++ b/xen/drivers/passthrough/amd/iommu_intr.c
>> @@ -551,6 +551,13 @@ int cf_check amd_iommu_msi_msg_update_ir
>> for ( i = 1; i < nr; ++i )
>> msi_desc[i].remap_index = msi_desc->remap_index + i;
>> msg->data = data;
>> + /*
>> + * While the low address bits don't matter, "canonicalize" the address
>> + * by zapping the bits that were transferred to the IRTE. This way
>> + * callers can check for there actually needing to be an update to
>> + * wherever the address is put.
>> + */
>> + msg->address_lo &= ~(MSI_ADDR_DESTMODE_MASK | MSI_ADDR_DEST_ID_MASK);
>
> You might want to mention this change in the commit message also, as
> it could look unrelated to the rest of the code?
I thought the comment here provided enough context and detail. I've added
"AMD interrupt remapping code so far didn't "return" a consistent MSI
address when translating an MSI message. Clear respective fields there, to
keep the respective assertion in set_channel_irq_affinity() from
triggering."
Jan
On Mon, Oct 27, 2025 at 11:23:58AM +0100, Jan Beulich wrote:
> On 24.10.2025 15:24, Roger Pau Monné wrote:
> > On Thu, Oct 23, 2025 at 05:50:17PM +0200, Jan Beulich wrote:
> >> @@ -343,6 +347,12 @@ static int __init hpet_setup_msi_irq(str
> >> u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
> >> irq_desc_t *desc = irq_to_desc(ch->msi.irq);
> >>
> >> + clear_irq_vector(ch->msi.irq);
> >> + ret = bind_irq_vector(ch->msi.irq, HPET_BROADCAST_VECTOR, &cpu_online_map);
> >
> > By passing cpu_online_map here, it leads to _bind_irq_vector() doing:
> >
> > cpumask_copy(desc->arch.cpu_mask, &cpu_online_map);
> >
> > Which strictly speaking is wrong. However this is just a cosmetic
> > issue until the irq is used for the first time, at which point it will
> > be assigned to a concrete CPU.
> >
> > You could do:
> >
> > cpumask_clear(desc->arch.cpu_mask);
> > cpumask_set_cpu(cpumask_any(&cpu_online_map), desc->arch.cpu_mask);
> >
> > (Or equivalent)
> >
> > To assign the interrupt to a concrete CPU and reflect it on the
> > cpu_mask after the bind_irq_vector() call, but I can live with it
> > being like this. I have patches to adjust _bind_irq_vector() myself,
> > which I hope I will be able to post soon.
>
> Hmm, I wrongly memorized hpet_broadcast_init() as being pre-SMP-init only.
> It has three call sites:
> - mwait_idle_init(), called from cpuidle_presmp_init(),
> - amd_cpuidle_init(), calling in only when invoked the very first time,
> which is again from cpuidle_presmp_init(),
> - _disable_pit_irq(), called from the regular initcall disable_pit_irq().
> I.e. for the latter you're right that the CPU mask is too broad (in only a
> cosmetic way though). Would you be okay if I used cpumask_of(0) in place
> of &cpu_online_map?
Using cpumask_of(0) would be OK, as the per-cpu vector_irq array will
be updated ahead of assigning the interrupt to a CPU, and hence it
doesn't need to be done for all possible online CPUs in
_bind_irq_vector().
In the context here it would be more accurate to provide an empty CPU
mask, as the interrupt is not yet targeting any CPU. Using CPU 0
would be a placeholder, which seems fine for the purpose.
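That is, presumably the call in hpet_setup_msi_irq() would then read:

    ret = bind_irq_vector(ch->msi.irq, HPET_BROADCAST_VECTOR, cpumask_of(0));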
> >> @@ -472,19 +482,50 @@ static struct hpet_event_channel *hpet_g
> >> static void set_channel_irq_affinity(struct hpet_event_channel *ch)
> >> {
> >> struct irq_desc *desc = irq_to_desc(ch->msi.irq);
> >> + struct msi_msg msg = ch->msi.msg;
> >>
> >> ASSERT(!local_irq_is_enabled());
> >> spin_lock(&desc->lock);
> >> - hpet_msi_mask(desc);
> >> - hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
> >> - hpet_msi_unmask(desc);
> >> +
> >> + per_cpu(vector_irq, ch->cpu)[HPET_BROADCAST_VECTOR] = ch->msi.irq;
> >> +
> >> + /*
> >> + * Open-coding a reduced form of hpet_msi_set_affinity() here. With the
> >> + * actual update below (either of the IRTE or of [just] message address;
> >> + * with interrupt remapping message address/data don't change) now being
> >> + * atomic, we can avoid masking the IRQ around the update. As a result
> >> + * we're no longer at risk of missing IRQs (provided hpet_broadcast_enter()
> >> + * keeps setting the new deadline only afterwards).
> >> + */
> >> + cpumask_copy(desc->arch.cpu_mask, cpumask_of(ch->cpu));
> >> +
> >> spin_unlock(&desc->lock);
> >>
> >> - spin_unlock(&ch->lock);
> >> + msg.dest32 = cpu_physical_id(ch->cpu);
> >> + msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
> >> + msg.address_lo |= MSI_ADDR_DEST_ID(msg.dest32);
> >> + if ( msg.dest32 != ch->msi.msg.dest32 )
> >> + {
> >> + ch->msi.msg = msg;
> >>
> >> - /* We may have missed an interrupt due to the temporary masking. */
> >> - if ( ch->event_handler && ch->next_event < NOW() )
> >> - ch->event_handler(ch);
> >> + if ( iommu_intremap != iommu_intremap_off )
> >> + {
> >> + int rc = iommu_update_ire_from_msi(&ch->msi, &msg);
> >> +
> >> + ASSERT(rc <= 0);
> >> + if ( rc >= 0 )
> >
> > I don't think the rc > 0 part of this check is meaningful, as any rc
> > value > 0 will trigger the ASSERT(rc <= 0) ahead of it. The code
> > inside of the if block itself only contains ASSERTs, so it's only
> > relevant for debug=y builds that will also have the rc <= 0 ASSERT.
> >
> > You could possibly use:
> >
> > ASSERT(rc <= 0);
> > if ( !rc )
> > {
> > ASSERT(...
> >
> > And achieve the same result?
>
> Yes, except that I'd like to keep the >= to cover the case if the first
> assertion was dropped / commented out, as well as to have a doc effect.
Oh, OK. Fair enough, I wasn't taking into account that this could be
done in case the code is modified.
> >> @@ -991,6 +997,13 @@ void alloc_direct_apic_vector(uint8_t *v
> >> spin_unlock(&lock);
> >> }
> >>
> >> +/* This could free any vectors, but is needed only for low-prio ones. */
> >> +void __init free_lopriority_vector(uint8_t vector)
> >> +{
> >> + ASSERT(vector < FIRST_HIPRIORITY_VECTOR);
> >> + clear_bit(vector, used_vectors);
> >> +}
> >
> > I'm undecided whether we want to have such a helper. This is all very
> > specific to the single use by the HPET vector, and hence might be best
> > to simply put the clear_bit() inside of hpet_broadcast_late_init()
> > itself.
>
> I wanted to avoid making used_vectors non-static.
>
> > I could see for example other callers wanting to use this also
> > requiring cleanup of the per cpu vector_irq arrays. Given its (so
> > far) very limited usage it might be clearer to open-code the
> > clear_bit().
>
> Dealing with vector_irq[] is a separate thing, though, isn't it?
Possibly, that's part of the binding, rather than the allocation
itself (which is what you cover here).
> >> --- a/xen/drivers/passthrough/amd/iommu_intr.c
> >> +++ b/xen/drivers/passthrough/amd/iommu_intr.c
> >> @@ -551,6 +551,13 @@ int cf_check amd_iommu_msi_msg_update_ir
> >> for ( i = 1; i < nr; ++i )
> >> msi_desc[i].remap_index = msi_desc->remap_index + i;
> >> msg->data = data;
> >> + /*
> >> + * While the low address bits don't matter, "canonicalize" the address
> >> + * by zapping the bits that were transferred to the IRTE. This way
> >> + * callers can check for there actually needing to be an update to
> >> + * wherever the address is put.
> >> + */
> >> + msg->address_lo &= ~(MSI_ADDR_DESTMODE_MASK | MSI_ADDR_DEST_ID_MASK);
> >
> > You might want to mention this change in the commit message also, as
> > it could look unrelated to the rest of the code?
>
> I thought the comment here provided enough context and detail. I've added
> "AMD interrupt remapping code so far didn't "return" a consistent MSI
> address when translating an MSI message. Clear respective fields there, to
> keep the respective assertion in set_channel_irq_affinity() from
> triggering."
LGTM, I would possibly remove the last "respective" for being
repetitive given the previous one in the sentence.
Thanks, Roger.
On 27.10.2025 12:33, Roger Pau Monné wrote:
> On Mon, Oct 27, 2025 at 11:23:58AM +0100, Jan Beulich wrote:
>> On 24.10.2025 15:24, Roger Pau Monné wrote:
>>> On Thu, Oct 23, 2025 at 05:50:17PM +0200, Jan Beulich wrote:
>>>> @@ -343,6 +347,12 @@ static int __init hpet_setup_msi_irq(str
>>>> u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
>>>> irq_desc_t *desc = irq_to_desc(ch->msi.irq);
>>>>
>>>> + clear_irq_vector(ch->msi.irq);
>>>> + ret = bind_irq_vector(ch->msi.irq, HPET_BROADCAST_VECTOR, &cpu_online_map);
>>>
>>> By passing cpu_online_map here, it leads to _bind_irq_vector() doing:
>>>
>>> cpumask_copy(desc->arch.cpu_mask, &cpu_online_map);
>>>
>>> Which strictly speaking is wrong. However this is just a cosmetic
>>> issue until the irq is used for the first time, at which point it will
>>> be assigned to a concrete CPU.
>>>
>>> You could do:
>>>
>>> cpumask_clear(desc->arch.cpu_mask);
>>> cpumask_set_cpu(cpumask_any(&cpu_online_map), desc->arch.cpu_mask);
>>>
>>> (Or equivalent)
>>>
>>> To assign the interrupt to a concrete CPU and reflect it on the
>>> cpu_mask after the bind_irq_vector() call, but I can live with it
>>> being like this. I have patches to adjust _bind_irq_vector() myself,
>>> which I hope I will be able to post soon.
>>
>> Hmm, I wrongly memorized hpet_broadcast_init() as being pre-SMP-init only.
>> It has three call sites:
>> - mwait_idle_init(), called from cpuidle_presmp_init(),
>> - amd_cpuidle_init(), calling in only when invoked the very first time,
>> which is again from cpuidle_presmp_init(),
>> - _disable_pit_irq(), called from the regular initcall disable_pit_irq().
>> I.e. for the latter you're right that the CPU mask is too broad (in only a
>> cosmetic way though). Would you be okay if I used cpumask_of(0) in place
>> of &cpu_online_map?
>
> Using cpumask_of(0) would be OK, as the per-cpu vector_irq array will
> be updated ahead of assigning the interrupt to a CPU, and hence it
> doesn't need to be done for all possible online CPUs in
> _bind_irq_vector().
>
> In the context here it would be more accurate to provide an empty CPU
> mask, as the interrupt is not yet targeting any CPU. Using CPU 0
> would be a placeholder, which seems fine for the purpose.

Putting an empty mask there, while indeed logically correct, would (I fear)
again put us at risk with other code making various assumptions. I'll go
with cpumask_of(0).

>>>> --- a/xen/drivers/passthrough/amd/iommu_intr.c
>>>> +++ b/xen/drivers/passthrough/amd/iommu_intr.c
>>>> @@ -551,6 +551,13 @@ int cf_check amd_iommu_msi_msg_update_ir
>>>> for ( i = 1; i < nr; ++i )
>>>> msi_desc[i].remap_index = msi_desc->remap_index + i;
>>>> msg->data = data;
>>>> + /*
>>>> + * While the low address bits don't matter, "canonicalize" the address
>>>> + * by zapping the bits that were transferred to the IRTE. This way
>>>> + * callers can check for there actually needing to be an update to
>>>> + * wherever the address is put.
>>>> + */
>>>> + msg->address_lo &= ~(MSI_ADDR_DESTMODE_MASK | MSI_ADDR_DEST_ID_MASK);
>>>
>>> You might want to mention this change in the commit message also, as
>>> it could look unrelated to the rest of the code?
>>
>> I thought the comment here provided enough context and detail. I've added
>> "AMD interrupt remapping code so far didn't "return" a consistent MSI
>> address when translating an MSI message. Clear respective fields there, to
>> keep the respective assertion in set_channel_irq_affinity() from
>> triggering."
>
> LGTM, I would possibly remove the last "respective" for being
> repetitive given the previous one in the sentence.

Oh, indeed. Replaced it by "related" rather than dropping it completely.

Jan
On Mon, Oct 27, 2025 at 12:53:34PM +0100, Jan Beulich wrote:
> On 27.10.2025 12:33, Roger Pau Monné wrote:
> > On Mon, Oct 27, 2025 at 11:23:58AM +0100, Jan Beulich wrote:
> >> On 24.10.2025 15:24, Roger Pau Monné wrote:
> >>> On Thu, Oct 23, 2025 at 05:50:17PM +0200, Jan Beulich wrote:
> >>>> @@ -343,6 +347,12 @@ static int __init hpet_setup_msi_irq(str
> >>>> u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
> >>>> irq_desc_t *desc = irq_to_desc(ch->msi.irq);
> >>>>
> >>>> + clear_irq_vector(ch->msi.irq);
> >>>> + ret = bind_irq_vector(ch->msi.irq, HPET_BROADCAST_VECTOR, &cpu_online_map);
> >>>
> >>> By passing cpu_online_map here, it leads to _bind_irq_vector() doing:
> >>>
> >>> cpumask_copy(desc->arch.cpu_mask, &cpu_online_map);
> >>>
> >>> Which strictly speaking is wrong. However this is just a cosmetic
> >>> issue until the irq is used for the first time, at which point it will
> >>> be assigned to a concrete CPU.
> >>>
> >>> You could do:
> >>>
> >>> cpumask_clear(desc->arch.cpu_mask);
> >>> cpumask_set_cpu(cpumask_any(&cpu_online_map), desc->arch.cpu_mask);
> >>>
> >>> (Or equivalent)
> >>>
> >>> To assign the interrupt to a concrete CPU and reflect it on the
> >>> cpu_mask after the bind_irq_vector() call, but I can live with it
> >>> being like this. I have patches to adjust _bind_irq_vector() myself,
> >>> which I hope I will be able to post soon.
> >>
> >> Hmm, I wrongly memorized hpet_broadcast_init() as being pre-SMP-init only.
> >> It has three call sites:
> >> - mwait_idle_init(), called from cpuidle_presmp_init(),
> >> - amd_cpuidle_init(), calling in only when invoked the very first time,
> >> which is again from cpuidle_presmp_init(),
> >> - _disable_pit_irq(), called from the regular initcall disable_pit_irq().
> >> I.e. for the latter you're right that the CPU mask is too broad (in only a
> >> cosmetic way though). Would you be okay if I used cpumask_of(0) in place
> >> of &cpu_online_map?
> >
> > Using cpumask_of(0) would be OK, as the per-cpu vector_irq array will
> > be updated ahead of assigning the interrupt to a CPU, and hence it
> > doesn't need to be done for all possible online CPUs in
> > _bind_irq_vector().
> >
> > In the context here it would be more accurate to provide an empty CPU
> > mask, as the interrupt is not yet targeting any CPU. Using CPU 0
> > would be a placeholder, which seems fine for the purpose.
>
> Putting an empty mask there, while indeed logically correct, would (I fear)
> again put us at risk with other code making various assumptions. I'll go
> with cpumask_of(0).

Yeah, that's what I fear also. It's in principle possible for the
cpu_mask to be empty, but since this targets 4.21 I think it's too
risky. Using cpumask_of(0) is fine.

Please keep my RB.

Thanks, Roger.
On 27.10.2025 12:53, Jan Beulich wrote:
> On 27.10.2025 12:33, Roger Pau Monné wrote:
>> On Mon, Oct 27, 2025 at 11:23:58AM +0100, Jan Beulich wrote:
>>> On 24.10.2025 15:24, Roger Pau Monné wrote:
>>>> On Thu, Oct 23, 2025 at 05:50:17PM +0200, Jan Beulich wrote:
>>>>> @@ -343,6 +347,12 @@ static int __init hpet_setup_msi_irq(str
>>>>> u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
>>>>> irq_desc_t *desc = irq_to_desc(ch->msi.irq);
>>>>>
>>>>> + clear_irq_vector(ch->msi.irq);
>>>>> + ret = bind_irq_vector(ch->msi.irq, HPET_BROADCAST_VECTOR, &cpu_online_map);
>>>>
>>>> By passing cpu_online_map here, it leads to _bind_irq_vector() doing:
>>>>
>>>> cpumask_copy(desc->arch.cpu_mask, &cpu_online_map);
>>>>
>>>> Which strictly speaking is wrong. However this is just a cosmetic
>>>> issue until the irq is used for the first time, at which point it will
>>>> be assigned to a concrete CPU.
>>>>
>>>> You could do:
>>>>
>>>> cpumask_clear(desc->arch.cpu_mask);
>>>> cpumask_set_cpu(cpumask_any(&cpu_online_map), desc->arch.cpu_mask);
>>>>
>>>> (Or equivalent)
>>>>
>>>> To assign the interrupt to a concrete CPU and reflect it on the
>>>> cpu_mask after the bind_irq_vector() call, but I can live with it
>>>> being like this. I have patches to adjust _bind_irq_vector() myself,
>>>> which I hope I will be able to post soon.
>>>
>>> Hmm, I wrongly memorized hpet_broadcast_init() as being pre-SMP-init only.
>>> It has three call sites:
>>> - mwait_idle_init(), called from cpuidle_presmp_init(),
>>> - amd_cpuidle_init(), calling in only when invoked the very first time,
>>> which is again from cpuidle_presmp_init(),
>>> - _disable_pit_irq(), called from the regular initcall disable_pit_irq().
>>> I.e. for the latter you're right that the CPU mask is too broad (in only a
>>> cosmetic way though). Would you be okay if I used cpumask_of(0) in place
>>> of &cpu_online_map?
>>
>> Using cpumask_of(0) would be OK, as the per-cpu vector_irq array will
>> be updated ahead of assigning the interrupt to a CPU, and hence it
>> doesn't need to be done for all possible online CPUs in
>> _bind_irq_vector().
>>
>> In the context here it would be more accurate to provide an empty CPU
>> mask, as the interrupt is not yet targeting any CPU. Using CPU 0
>> would be a placeholder, which seems fine for the purpose.
>
> Putting an empty mask there, while indeed logically correct, would (I fear)
> again put us at risk with other code making various assumptions.

And indeed: _bind_irq_vector() would reject an empty mask.

Jan