Hello Thomas
I did some testing with pci_msix_alloc_irq_at() and I noticed that the
affinity provided, via “struct irq_affinity_desc *af_desc”, doesn’t have
any affect.
After some digging, I found out that irq_setup_affinity(), which is
called by request_irq(), is setting the affinity as all the CPUs online,
ignoring the affinity provided in pci_msix_alloc_irq_at().
Is this on purpose or a bug?
P.S. The bellow diff honors the affinity provided in
pci_msix_alloc_irq_at()
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -530,6 +530,7 @@ static int alloc_descs(unsigned int start, unsigned
int cnt, int node,
flags = IRQD_AFFINITY_MANAGED |
IRQD_MANAGED_SHUTDOWN;
}
+ flags |= IRQD_AFFINITY_SET;
mask = &affinity->mask;
node = cpu_to_node(cpumask_first(mask));
affinity++;
thanks
Shay Drori
On Thu, Jul 25 2024 at 08:34, Shay Drori wrote:
> I did some testing with pci_msix_alloc_irq_at() and I noticed that the
> affinity provided, via “struct irq_affinity_desc *af_desc”, doesn’t have
> any affect.
>
> After some digging, I found out that irq_setup_affinity(), which is
> called by request_irq(), is setting the affinity as all the CPUs online,
> ignoring the affinity provided in pci_msix_alloc_irq_at().
> Is this on purpose or a bug?
It's an oversight. So far this has only been used with managed
interrupts and the non-managed parts at the beginning or end of the
interrupt group have been assigned the default affinity which makes this
obviously a non-problem because the startup code uses the default
affinity too.
> P.S. The bellow diff honors the affinity provided in
> pci_msix_alloc_irq_at()
>
> --- a/kernel/irq/irqdesc.c
> +++ b/kernel/irq/irqdesc.c
> @@ -530,6 +530,7 @@ static int alloc_descs(unsigned int start, unsigned
> int cnt, int node,
> flags = IRQD_AFFINITY_MANAGED |
> IRQD_MANAGED_SHUTDOWN;
> }
> + flags |= IRQD_AFFINITY_SET;
> mask = &affinity->mask;
> node = cpu_to_node(cpumask_first(mask));
> affinity++;
Looks about right, though the diff is whitespace damaged.
Care to submit a proper patch?
Thanks,
tglx
On 26/07/2024 16:48, Thomas Gleixner wrote: > External email: Use caution opening links or attachments > > > On Thu, Jul 25 2024 at 08:34, Shay Drori wrote: >> I did some testing with pci_msix_alloc_irq_at() and I noticed that the >> affinity provided, via “struct irq_affinity_desc *af_desc”, doesn’t have >> any affect. >> >> After some digging, I found out that irq_setup_affinity(), which is >> called by request_irq(), is setting the affinity as all the CPUs online, >> ignoring the affinity provided in pci_msix_alloc_irq_at(). >> Is this on purpose or a bug? > > It's an oversight. So far this has only been used with managed > interrupts and the non-managed parts at the beginning or end of the > interrupt group have been assigned the default affinity which makes this > obviously a non-problem because the startup code uses the default > affinity too. > >> P.S. The bellow diff honors the affinity provided in >> pci_msix_alloc_irq_at() >> >> --- a/kernel/irq/irqdesc.c >> +++ b/kernel/irq/irqdesc.c >> @@ -530,6 +530,7 @@ static int alloc_descs(unsigned int start, unsigned >> int cnt, int node, >> flags = IRQD_AFFINITY_MANAGED | >> IRQD_MANAGED_SHUTDOWN; >> } >> + flags |= IRQD_AFFINITY_SET; >> mask = &affinity->mask; >> node = cpu_to_node(cpumask_first(mask)); >> affinity++; > > Looks about right, though the diff is whitespace damaged. > > Care to submit a proper patch? sorry for the late reply, Yes. on-top of which kernel branch to create the patch? > > Thanks, > > tglx
On Mon, Aug 05 2024 at 08:34, Shay Drori wrote:
> On 26/07/2024 16:48, Thomas Gleixner wrote:
>> Looks about right, though the diff is whitespace damaged.
>>
>> Care to submit a proper patch?
>
> sorry for the late reply, Yes.
> on-top of which kernel branch to create the patch?
Mainline.
Thanks,
tglx
© 2016 - 2025 Red Hat, Inc.