[PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()

Posted by Roger Pau Monne 5 months, 2 weeks ago
Currently there's logic in fixup_irqs() that attempts to prevent
_assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
interrupts from the CPUs not present in the input mask.  The current logic in
fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.

Instead of attempting to fix up the interrupt descriptor in fixup_irqs() so
that _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
to deal with interrupts that have either of move_{in_progress,cleanup_count}
set and no remaining online CPUs in ->arch.cpu_mask.

If _assign_irq_vector() is requested to move an interrupt in the state
described above, first check whether ->arch.old_cpu_mask contains any valid
CPUs that could be used as a fallback, and if that's the case move the
interrupt back to the previous destination.  Note this is easier because the
vector hasn't been released yet, so there's no need to allocate and set up a
new vector on the destination.

Due to the logic in fixup_irqs() that clears offline CPUs from
->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
shouldn't be possible to get into _assign_irq_vector() with
->arch.move_{in_progress,cleanup_count} set but no online CPUs in
->arch.old_cpu_mask.
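
For context, the fixup_irqs() logic referred to above is roughly the
following (an illustration only, not a literal quote of the earlier patch in
this series):

    /* Drop offline CPUs from the old destination mask. */
    cpumask_and(desc->arch.old_cpu_mask, desc->arch.old_cpu_mask,
                &cpu_online_map);

    /* No CPU of the old destination left: release the old vector. */
    if ( cpumask_empty(desc->arch.old_cpu_mask) )
    {
        release_old_vec(desc);
        desc->arch.move_in_progress = 0;
    }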

However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt
has also changed affinity, it's possible that the members of
->arch.old_cpu_mask are no longer part of the affinity set.  In that case move
the interrupt to a different CPU that's part of the provided mask, and keep
the current ->arch.old_{cpu_mask,vector} for the pending interrupt movement to
be completed.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Further refine the logic in _assign_irq_vector().
---
 xen/arch/x86/irq.c | 87 ++++++++++++++++++++++++++++++----------------
 1 file changed, 58 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index f07e09b63b53..54eabd23995c 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
     }
 
     if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
-        return -EAGAIN;
+    {
+        /*
+         * If the current destination is online refuse to shuffle.  Retry after
+         * the in-progress movement has finished.
+         */
+        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
+            return -EAGAIN;
+
+        /*
+         * Due to the logic in fixup_irqs() that clears offlined CPUs from
+         * ->arch.old_cpu_mask it shouldn't be possible to get here with
+         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
+         * ->arch.old_cpu_mask.
+         */
+        ASSERT(valid_irq_vector(desc->arch.old_vector));
+        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
+
+        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
+        {
+            /*
+             * Fallback to the old destination if moving is in progress and the
+             * current destination is to be offlined.  This is only possible if
+             * the CPUs in old_cpu_mask intersect with the affinity mask passed
+             * in the 'mask' parameter.
+             */
+            desc->arch.vector = desc->arch.old_vector;
+            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
+
+            /* Undo any possibly done cleanup. */
+            for_each_cpu(cpu, desc->arch.cpu_mask)
+                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
+
+            /* Cancel the pending move. */
+            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
+            cpumask_clear(desc->arch.old_cpu_mask);
+            desc->arch.move_in_progress = 0;
+            desc->arch.move_cleanup_count = 0;
+
+            return 0;
+        }
+
+        /*
+         * There's an interrupt movement in progress but the destination(s) in
+         * ->arch.old_cpu_mask are not suitable given the passed 'mask', go
+         * through the full logic to find a new vector in a suitable CPU.
+         */
+    }
 
     err = -ENOSPC;
 
@@ -600,7 +646,17 @@ next:
         current_vector = vector;
         current_offset = offset;
 
-        if ( valid_irq_vector(old_vector) )
+        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
+        {
+            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
+            /*
+             * Special case when evacuating an interrupt from a CPU to be
+             * offlined and the interrupt was already in the process of being
+             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
+             * replace ->arch.{cpu_mask,vector} with the new destination.
+             */
+        }
+        else if ( valid_irq_vector(old_vector) )
         {
             cpumask_and(desc->arch.old_cpu_mask, desc->arch.cpu_mask,
                         &cpu_online_map);
@@ -2622,33 +2678,6 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
             continue;
         }
 
-        /*
-         * In order for the affinity adjustment below to be successful, we
-         * need _assign_irq_vector() to succeed. This in particular means
-         * clearing desc->arch.move_in_progress if this would otherwise
-         * prevent the function from succeeding. Since there's no way for the
-         * flag to get cleared anymore when there's no possible destination
-         * left (the only possibility then would be the IRQs enabled window
-         * after this loop), there's then also no race with us doing it here.
-         *
-         * Therefore the logic here and there need to remain in sync.
-         */
-        if ( desc->arch.move_in_progress &&
-             !cpumask_intersects(mask, desc->arch.cpu_mask) )
-        {
-            unsigned int cpu;
-
-            cpumask_and(affinity, desc->arch.old_cpu_mask, &cpu_online_map);
-
-            spin_lock(&vector_lock);
-            for_each_cpu(cpu, affinity)
-                per_cpu(vector_irq, cpu)[desc->arch.old_vector] = ~irq;
-            spin_unlock(&vector_lock);
-
-            release_old_vec(desc);
-            desc->arch.move_in_progress = 0;
-        }
-
         if ( !cpumask_intersects(mask, desc->affinity) )
         {
             break_affinity = true;
-- 
2.44.0


Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Jan Beulich 5 months, 2 weeks ago
On 10.06.2024 16:20, Roger Pau Monne wrote:
> Currently there's logic in fixup_irqs() that attempts to prevent
> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> interrupts from the CPUs not present in the input mask.  The current logic in
> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> 
> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> and no remaining online CPUs in ->arch.cpu_mask.
> 
> If _assign_irq_vector() is requested to move an interrupt in the state
> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> CPUs that could be used as fallback, and if that's the case do move the
> interrupt back to the previous destination.  Note this is easier because the
> vector hasn't been released yet, so there's no need to allocate and setup a new
> vector on the destination.
> 
> Due to the logic in fixup_irqs() that clears offline CPUs from
> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> shouldn't be possible to get into _assign_irq_vector() with
> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> ->arch.old_cpu_mask.
> 
> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> longer part of the affinity set,

I'm having trouble relating this (->arch.old_cpu_mask related) to ...

> move the interrupt to a different CPU part of
> the provided mask

... this (->arch.cpu_mask related).

> and keep the current ->arch.old_{cpu_mask,vector} for the
> pending interrupt movement to be completed.

Right, that's to clean up state from before the initial move. What isn't
clear to me is what's to happen with the state of the intermediate
placement. Description and code changes leave me with the impression that
it's okay to simply abandon, without any cleanup, yet I can't quite figure
why that would be an okay thing to do.

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>      }
>  
>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> -        return -EAGAIN;
> +    {
> +        /*
> +         * If the current destination is online refuse to shuffle.  Retry after
> +         * the in-progress movement has finished.
> +         */
> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> +            return -EAGAIN;
> +
> +        /*
> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> +         * ->arch.old_cpu_mask.
> +         */
> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> +
> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> +        {
> +            /*
> +             * Fallback to the old destination if moving is in progress and the
> +             * current destination is to be offlined.  This is only possible if
> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> +             * in the 'mask' parameter.
> +             */
> +            desc->arch.vector = desc->arch.old_vector;
> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> +
> +            /* Undo any possibly done cleanup. */
> +            for_each_cpu(cpu, desc->arch.cpu_mask)
> +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
> +
> +            /* Cancel the pending move. */
> +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> +            cpumask_clear(desc->arch.old_cpu_mask);
> +            desc->arch.move_in_progress = 0;
> +            desc->arch.move_cleanup_count = 0;
> +
> +            return 0;
> +        }

In how far is this guaranteed to respect the (new) affinity that was set,
presumably having led to the movement in the first place?

> @@ -600,7 +646,17 @@ next:
>          current_vector = vector;
>          current_offset = offset;
>  
> -        if ( valid_irq_vector(old_vector) )
> +        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> +        {
> +            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
> +            /*
> +             * Special case when evacuating an interrupt from a CPU to be
> +             * offlined and the interrupt was already in the process of being
> +             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
> +             * replace ->arch.{cpu_mask,vector} with the new destination.
> +             */

And where's the cleaning up of ->arch.old_* going to be taken care of then?

Jan
Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Roger Pau Monné 5 months, 2 weeks ago
On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > Currently there's logic in fixup_irqs() that attempts to prevent
> > _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> > interrupts from the CPUs not present in the input mask.  The current logic in
> > fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> > move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> > 
> > Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> > _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> > to deal with interrupts that have either move_{in_progress,cleanup_count} set
> > and no remaining online CPUs in ->arch.cpu_mask.
> > 
> > If _assign_irq_vector() is requested to move an interrupt in the state
> > described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> > CPUs that could be used as fallback, and if that's the case do move the
> > interrupt back to the previous destination.  Note this is easier because the
> > vector hasn't been released yet, so there's no need to allocate and setup a new
> > vector on the destination.
> > 
> > Due to the logic in fixup_irqs() that clears offline CPUs from
> > ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> > shouldn't be possible to get into _assign_irq_vector() with
> > ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> > ->arch.old_cpu_mask.
> > 
> > However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> > also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> > longer part of the affinity set,
> 
> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
> 
> > move the interrupt to a different CPU part of
> > the provided mask
> 
> ... this (->arch.cpu_mask related).

No, the "provided mask" here is the "mask" parameter, not
->arch.cpu_mask.

> 
> > and keep the current ->arch.old_{cpu_mask,vector} for the
> > pending interrupt movement to be completed.
> 
> Right, that's to clean up state from before the initial move. What isn't
> clear to me is what's to happen with the state of the intermediate
> placement. Description and code changes leave me with the impression that
> it's okay to simply abandon, without any cleanup, yet I can't quite figure
> why that would be an okay thing to do.

There isn't much we can do with the intermediate placement, as the CPU
is going offline.  However we can drain any pending interrupts from
IRR after the new destination has been set, since setting the
destination is done from the CPU that's the current target of the
interrupts.  So we can ensure the draining is done strictly after the
target has been switched, hence ensuring no further interrupts from
this source will be delivered to the current CPU.
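
For illustration, a minimal sketch of such draining (placement and locals are
assumed; the IRR check uses the usual local APIC register access pattern):

    /*
     * Sketch: run on the to-be-offlined CPU, strictly after ->arch.vector
     * and ->arch.cpu_mask have been switched to the new destination.
     * 'vector' is assumed to be the vector that previously targeted this
     * CPU.
     */
    if ( apic_read(APIC_IRR + (vector / 32) * 0x10) &
         (1u << (vector % 32)) )
        /* Forward the pending interrupt to one CPU of the new target. */
        send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
                      desc->arch.vector);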

> > --- a/xen/arch/x86/irq.c
> > +++ b/xen/arch/x86/irq.c
> > @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >      }
> >  
> >      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> > -        return -EAGAIN;
> > +    {
> > +        /*
> > +         * If the current destination is online refuse to shuffle.  Retry after
> > +         * the in-progress movement has finished.
> > +         */
> > +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> > +            return -EAGAIN;
> > +
> > +        /*
> > +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> > +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> > +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> > +         * ->arch.old_cpu_mask.
> > +         */
> > +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> > +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> > +
> > +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> > +        {
> > +            /*
> > +             * Fallback to the old destination if moving is in progress and the
> > +             * current destination is to be offlined.  This is only possible if
> > +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> > +             * in the 'mask' parameter.
> > +             */
> > +            desc->arch.vector = desc->arch.old_vector;
> > +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> > +
> > +            /* Undo any possibly done cleanup. */
> > +            for_each_cpu(cpu, desc->arch.cpu_mask)
> > +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
> > +
> > +            /* Cancel the pending move. */
> > +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> > +            cpumask_clear(desc->arch.old_cpu_mask);
> > +            desc->arch.move_in_progress = 0;
> > +            desc->arch.move_cleanup_count = 0;
> > +
> > +            return 0;
> > +        }
> 
> In how far is this guaranteed to respect the (new) affinity that was set,
> presumably having led to the movement in the first place?

The 'mask' parameter should account for the new affinity, hence the
cpumask_intersects() check guarantees we are moving to a CPU still in
the affinity mask.

> > @@ -600,7 +646,17 @@ next:
> >          current_vector = vector;
> >          current_offset = offset;
> >  
> > -        if ( valid_irq_vector(old_vector) )
> > +        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> > +        {
> > +            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
> > +            /*
> > +             * Special case when evacuating an interrupt from a CPU to be
> > +             * offlined and the interrupt was already in the process of being
> > +             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
> > +             * replace ->arch.{cpu_mask,vector} with the new destination.
> > +             */
> 
> And where's the cleaning up of ->arch.old_* going to be taken care of then?

Such cleaning will be handled normally by the interrupt still having
->arch.move_{in_progress,cleanup_count} set.  The CPUs in
->arch.old_cpu_mask must not all be offline, otherwise the logic in
fixup_irqs() would have already released the old vector.
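
For reference, a heavily condensed sketch of that cleanup path, modeled on the
move-cleanup IPI handler (the real handler also re-checks the local APIC IRR
before invalidating a mapping):

    /* Runs on each CPU of the old destination; 'me' is this CPU's id. */
    for ( vector = FIRST_DYNAMIC_VECTOR;
          vector <= LAST_HIPRIORITY_VECTOR; vector++ )
    {
        unsigned int irq = per_cpu(vector_irq, me)[vector];
        struct irq_desc *desc;

        if ( (int)irq < 0 )
            continue;

        desc = irq_to_desc(irq);
        spin_lock(&desc->lock);
        if ( desc->arch.move_cleanup_count &&
             /* Skip the (possibly shared) new destination vector. */
             !(vector == desc->arch.vector &&
               cpumask_test_cpu(me, desc->arch.cpu_mask)) )
        {
            per_cpu(vector_irq, me)[vector] = ~irq;
            if ( !--desc->arch.move_cleanup_count )
                release_old_vec(desc); /* Last reference: free old vector. */
        }
        spin_unlock(&desc->lock);
    }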

Thanks, Roger.
Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Jan Beulich 5 months, 2 weeks ago
On 12.06.2024 12:39, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>> Currently there's logic in fixup_irqs() that attempts to prevent
>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
>>> interrupts from the CPUs not present in the input mask.  The current logic in
>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
>>>
>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
>>> and no remaining online CPUs in ->arch.cpu_mask.
>>>
>>> If _assign_irq_vector() is requested to move an interrupt in the state
>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
>>> CPUs that could be used as fallback, and if that's the case do move the
>>> interrupt back to the previous destination.  Note this is easier because the
>>> vector hasn't been released yet, so there's no need to allocate and setup a new
>>> vector on the destination.
>>>
>>> Due to the logic in fixup_irqs() that clears offline CPUs from
>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
>>> shouldn't be possible to get into _assign_irq_vector() with
>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
>>> ->arch.old_cpu_mask.
>>>
>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
>>> longer part of the affinity set,
>>
>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
>>
>>> move the interrupt to a different CPU part of
>>> the provided mask
>>
>> ... this (->arch.cpu_mask related).
> 
> No, the "provided mask" here is the "mask" parameter, not
> ->arch.cpu_mask.

Oh, so this describes the case of "hitting" the comment at the very bottom of
the first hunk then? (I probably was misreading this because I was expecting
it to describe a code change, rather than the case where original behavior
needs retaining. IOW - all fine here then.)

>>> and keep the current ->arch.old_{cpu_mask,vector} for the
>>> pending interrupt movement to be completed.
>>
>> Right, that's to clean up state from before the initial move. What isn't
>> clear to me is what's to happen with the state of the intermediate
>> placement. Description and code changes leave me with the impression that
>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
>> why that would be an okay thing to do.
> 
> There isn't much we can do with the intermediate placement, as the CPU
> is going offline.  However we can drain any pending interrupts from
> IRR after the new destination has been set, since setting the
> destination is done from the CPU that's the current target of the
> interrupts.  So we can ensure the draining is done strictly after the
> target has been switched, hence ensuring no further interrupts from
> this source will be delivered to the current CPU.

Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
the ...

>>> --- a/xen/arch/x86/irq.c
>>> +++ b/xen/arch/x86/irq.c
>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>>>      }
>>>  
>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>> -        return -EAGAIN;
>>> +    {
>>> +        /*
>>> +         * If the current destination is online refuse to shuffle.  Retry after
>>> +         * the in-progress movement has finished.
>>> +         */
>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
>>> +            return -EAGAIN;
>>> +
>>> +        /*
>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
>>> +         * ->arch.old_cpu_mask.
>>> +         */
>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
>>> +
>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
>>> +        {
>>> +            /*
>>> +             * Fallback to the old destination if moving is in progress and the
>>> +             * current destination is to be offlined.  This is only possible if
>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
>>> +             * in the 'mask' parameter.
>>> +             */
>>> +            desc->arch.vector = desc->arch.old_vector;
>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);

... replacing of vector (and associated mask), without any further accounting.

>>> +            /* Undo any possibly done cleanup. */
>>> +            for_each_cpu(cpu, desc->arch.cpu_mask)
>>> +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
>>> +
>>> +            /* Cancel the pending move. */
>>> +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
>>> +            cpumask_clear(desc->arch.old_cpu_mask);
>>> +            desc->arch.move_in_progress = 0;
>>> +            desc->arch.move_cleanup_count = 0;
>>> +
>>> +            return 0;
>>> +        }
>>
>> In how far is this guaranteed to respect the (new) affinity that was set,
>> presumably having led to the movement in the first place?
> 
> The 'mask' parameter should account for the new affinity, hence the
> cpumask_intersects() check guarantees we are moving to a CPU still in
> the affinity mask.

Ah, right, I must have been confused.

>>> @@ -600,7 +646,17 @@ next:
>>>          current_vector = vector;
>>>          current_offset = offset;
>>>  
>>> -        if ( valid_irq_vector(old_vector) )
>>> +        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>> +        {
>>> +            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
>>> +            /*
>>> +             * Special case when evacuating an interrupt from a CPU to be
>>> +             * offlined and the interrupt was already in the process of being
>>> +             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
>>> +             * replace ->arch.{cpu_mask,vector} with the new destination.
>>> +             */
>>
>> And where's the cleaning up of ->arch.old_* going to be taken care of then?
> 
> Such cleaning will be handled normally by the interrupt still having
> ->arch.move_{in_progress,cleanup_count} set.  The CPUs in
> ->arch.old_cpu_mask must not all be offline, otherwise the logic in
> fixup_irqs() would have already released the old vector.

Maybe add "Cleanup will be done normally" to the comment?

Jan

Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Roger Pau Monné 5 months, 2 weeks ago
On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
> On 12.06.2024 12:39, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
> >> On 10.06.2024 16:20, Roger Pau Monne wrote:
> >>> Currently there's logic in fixup_irqs() that attempts to prevent
> >>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> >>> interrupts from the CPUs not present in the input mask.  The current logic in
> >>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> >>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> >>>
> >>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> >>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> >>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> >>> and no remaining online CPUs in ->arch.cpu_mask.
> >>>
> >>> If _assign_irq_vector() is requested to move an interrupt in the state
> >>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> >>> CPUs that could be used as fallback, and if that's the case do move the
> >>> interrupt back to the previous destination.  Note this is easier because the
> >>> vector hasn't been released yet, so there's no need to allocate and setup a new
> >>> vector on the destination.
> >>>
> >>> Due to the logic in fixup_irqs() that clears offline CPUs from
> >>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> >>> shouldn't be possible to get into _assign_irq_vector() with
> >>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> >>> ->arch.old_cpu_mask.
> >>>
> >>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> >>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> >>> longer part of the affinity set,
> >>
> >> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
> >>
> >>> move the interrupt to a different CPU part of
> >>> the provided mask
> >>
> >> ... this (->arch.cpu_mask related).
> > 
> > No, the "provided mask" here is the "mask" parameter, not
> > ->arch.cpu_mask.
> 
> Oh, so this describes the case of "hitting" the comment at the very bottom of
> the first hunk then? (I probably was misreading this because I was expecting
> it to describe a code change, rather than the case where original behavior
> needs retaining. IOW - all fine here then.)
> 
> >>> and keep the current ->arch.old_{cpu_mask,vector} for the
> >>> pending interrupt movement to be completed.
> >>
> >> Right, that's to clean up state from before the initial move. What isn't
> >> clear to me is what's to happen with the state of the intermediate
> >> placement. Description and code changes leave me with the impression that
> >> it's okay to simply abandon, without any cleanup, yet I can't quite figure
> >> why that would be an okay thing to do.
> > 
> > There isn't much we can do with the intermediate placement, as the CPU
> > is going offline.  However we can drain any pending interrupts from
> > IRR after the new destination has been set, since setting the
> > destination is done from the CPU that's the current target of the
> > interrupts.  So we can ensure the draining is done strictly after the
> > target has been switched, hence ensuring no further interrupts from
> > this source will be delivered to the current CPU.
> 
> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
> the ...
> 
> >>> --- a/xen/arch/x86/irq.c
> >>> +++ b/xen/arch/x86/irq.c
> >>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >>>      }
> >>>  
> >>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> >>> -        return -EAGAIN;
> >>> +    {
> >>> +        /*
> >>> +         * If the current destination is online refuse to shuffle.  Retry after
> >>> +         * the in-progress movement has finished.
> >>> +         */
> >>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> >>> +            return -EAGAIN;
> >>> +
> >>> +        /*
> >>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> >>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> >>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> >>> +         * ->arch.old_cpu_mask.
> >>> +         */
> >>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> >>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> >>> +
> >>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> >>> +        {
> >>> +            /*
> >>> +             * Fallback to the old destination if moving is in progress and the
> >>> +             * current destination is to be offlined.  This is only possible if
> >>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> >>> +             * in the 'mask' parameter.
> >>> +             */
> >>> +            desc->arch.vector = desc->arch.old_vector;
> >>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> 
> ... replacing of vector (and associated mask), without any further accounting.

It's quite likely I'm missing something here, but what further
accounting would you like to do?

The CPUs in the current target of the interrupt (->arch.cpu_mask prior
to the cpumask_and()) are all going offline, so any attempt to record
them in ->arch.old_cpu_mask would just result in stale (offline) CPUs
getting set in ->arch.old_cpu_mask, which previous patches attempted to
solve.

Maybe by "further accounting" you meant something else not related to
->arch.old_{cpu_mask,vector}?

> >>> @@ -600,7 +646,17 @@ next:
> >>>          current_vector = vector;
> >>>          current_offset = offset;
> >>>  
> >>> -        if ( valid_irq_vector(old_vector) )
> >>> +        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> >>> +        {
> >>> +            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
> >>> +            /*
> >>> +             * Special case when evacuating an interrupt from a CPU to be
> >>> +             * offlined and the interrupt was already in the process of being
> >>> +             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
> >>> +             * replace ->arch.{cpu_mask,vector} with the new destination.
> >>> +             */
> >>
> >> And where's the cleaning up of ->arch.old_* going to be taken care of then?
> > 
> > Such cleaning will be handled normally by the interrupt still having
> > ->arch.move_{in_progress,cleanup_count} set.  The CPUs in
> > ->arch.old_cpu_mask must not all be offline, otherwise the logic in
> > fixup_irqs() would have already released the old vector.
> 
> Maybe add "Cleanup will be done normally" to the comment?


Can do.

Thanks, Roger.

Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Jan Beulich 5 months, 2 weeks ago
On 12.06.2024 17:36, Roger Pau Monné wrote:
> On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
>> On 12.06.2024 12:39, Roger Pau Monné wrote:
>>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
>>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>>>> Currently there's logic in fixup_irqs() that attempts to prevent
>>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
>>>>> interrupts from the CPUs not present in the input mask.  The current logic in
>>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
>>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
>>>>>
>>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
>>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
>>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
>>>>> and no remaining online CPUs in ->arch.cpu_mask.
>>>>>
>>>>> If _assign_irq_vector() is requested to move an interrupt in the state
>>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
>>>>> CPUs that could be used as fallback, and if that's the case do move the
>>>>> interrupt back to the previous destination.  Note this is easier because the
>>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
>>>>> vector on the destination.
>>>>>
>>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
>>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
>>>>> shouldn't be possible to get into _assign_irq_vector() with
>>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
>>>>> ->arch.old_cpu_mask.
>>>>>
>>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
>>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
>>>>> longer part of the affinity set,
>>>>
>>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
>>>>
>>>>> move the interrupt to a different CPU part of
>>>>> the provided mask
>>>>
>>>> ... this (->arch.cpu_mask related).
>>>
>>> No, the "provided mask" here is the "mask" parameter, not
>>> ->arch.cpu_mask.
>>
>> Oh, so this describes the case of "hitting" the comment at the very bottom of
>> the first hunk then? (I probably was misreading this because I was expecting
>> it to describe a code change, rather than the case where original behavior
>> needs retaining. IOW - all fine here then.)
>>
>>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
>>>>> pending interrupt movement to be completed.
>>>>
>>>> Right, that's to clean up state from before the initial move. What isn't
>>>> clear to me is what's to happen with the state of the intermediate
>>>> placement. Description and code changes leave me with the impression that
>>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
>>>> why that would be an okay thing to do.
>>>
>>> There isn't much we can do with the intermediate placement, as the CPU
>>> is going offline.  However we can drain any pending interrupts from
>>> IRR after the new destination has been set, since setting the
>>> destination is done from the CPU that's the current target of the
>>> interrupts.  So we can ensure the draining is done strictly after the
>>> target has been switched, hence ensuring no further interrupts from
>>> this source will be delivered to the current CPU.
>>
>> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
>> the ...
>>
>>>>> --- a/xen/arch/x86/irq.c
>>>>> +++ b/xen/arch/x86/irq.c
>>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>>>>>      }
>>>>>  
>>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>>>> -        return -EAGAIN;
>>>>> +    {
>>>>> +        /*
>>>>> +         * If the current destination is online refuse to shuffle.  Retry after
>>>>> +         * the in-progress movement has finished.
>>>>> +         */
>>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
>>>>> +            return -EAGAIN;
>>>>> +
>>>>> +        /*
>>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
>>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
>>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
>>>>> +         * ->arch.old_cpu_mask.
>>>>> +         */
>>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
>>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
>>>>> +
>>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
>>>>> +        {
>>>>> +            /*
>>>>> +             * Fallback to the old destination if moving is in progress and the
>>>>> +             * current destination is to be offlined.  This is only possible if
>>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
>>>>> +             * in the 'mask' parameter.
>>>>> +             */
>>>>> +            desc->arch.vector = desc->arch.old_vector;
>>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
>>
>> ... replacing of vector (and associated mask), without any further accounting.
> 
> It's quite likely I'm missing something here, but what further
> accounting you would like to do?
> 
> The current target of the interrupt (->arch.cpu_mask previous to
> cpumask_and()) is all going offline, so any attempt to set it in
> ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
> set in ->arch.old_cpu_mask, which previous patches attempted to
> solve.
> 
> Maybe by "further accounting" you meant something else not related to
> ->arch.old_{cpu_mask,vector}?

Indeed. What I'm thinking of is what normally release_old_vec() would
do (of which only desc->arch.used_vectors updating would appear to be
relevant, seeing the CPU's going offline). The other one I was thinking
of, updating vector_irq[], likely is also unnecessary, again because
that's per-CPU data of a CPU going down.
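
For reference, release_old_vec() currently reads roughly as follows; the
->arch.used_vectors update is the accounting in question:

    static void release_old_vec(struct irq_desc *desc)
    {
        unsigned int vector = desc->arch.old_vector;

        desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
        cpumask_clear(desc->arch.old_cpu_mask);

        if ( !valid_irq_vector(vector) )
            ASSERT_UNREACHABLE();
        else if ( desc->arch.used_vectors )
        {
            ASSERT(test_bit(vector, desc->arch.used_vectors));
            clear_bit(vector, desc->arch.used_vectors);
        }
    }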

Jan

Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Roger Pau Monné 5 months, 2 weeks ago
On Thu, Jun 13, 2024 at 10:38:35AM +0200, Jan Beulich wrote:
> On 12.06.2024 17:36, Roger Pau Monné wrote:
> > On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
> >> On 12.06.2024 12:39, Roger Pau Monné wrote:
> >>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
> >>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
> >>>>> Currently there's logic in fixup_irqs() that attempts to prevent
> >>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> >>>>> interrupts from the CPUs not present in the input mask.  The current logic in
> >>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> >>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> >>>>>
> >>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> >>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> >>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> >>>>> and no remaining online CPUs in ->arch.cpu_mask.
> >>>>>
> >>>>> If _assign_irq_vector() is requested to move an interrupt in the state
> >>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> >>>>> CPUs that could be used as fallback, and if that's the case do move the
> >>>>> interrupt back to the previous destination.  Note this is easier because the
> >>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
> >>>>> vector on the destination.
> >>>>>
> >>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
> >>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> >>>>> shouldn't be possible to get into _assign_irq_vector() with
> >>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> >>>>> ->arch.old_cpu_mask.
> >>>>>
> >>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> >>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> >>>>> longer part of the affinity set,
> >>>>
> >>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
> >>>>
> >>>>> move the interrupt to a different CPU part of
> >>>>> the provided mask
> >>>>
> >>>> ... this (->arch.cpu_mask related).
> >>>
> >>> No, the "provided mask" here is the "mask" parameter, not
> >>> ->arch.cpu_mask.
> >>
> >> Oh, so this describes the case of "hitting" the comment at the very bottom of
> >> the first hunk then? (I probably was misreading this because I was expecting
> >> it to describe a code change, rather than the case where original behavior
> >> needs retaining. IOW - all fine here then.)
> >>
> >>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
> >>>>> pending interrupt movement to be completed.
> >>>>
> >>>> Right, that's to clean up state from before the initial move. What isn't
> >>>> clear to me is what's to happen with the state of the intermediate
> >>>> placement. Description and code changes leave me with the impression that
> >>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
> >>>> why that would be an okay thing to do.
> >>>
> >>> There isn't much we can do with the intermediate placement, as the CPU
> >>> is going offline.  However we can drain any pending interrupts from
> >>> IRR after the new destination has been set, since setting the
> >>> destination is done from the CPU that's the current target of the
> >>> interrupts.  So we can ensure the draining is done strictly after the
> >>> target has been switched, hence ensuring no further interrupts from
> >>> this source will be delivered to the current CPU.
> >>
> >> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
> >> the ...
> >>
> >>>>> --- a/xen/arch/x86/irq.c
> >>>>> +++ b/xen/arch/x86/irq.c
> >>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >>>>>      }
> >>>>>  
> >>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> >>>>> -        return -EAGAIN;
> >>>>> +    {
> >>>>> +        /*
> >>>>> +         * If the current destination is online refuse to shuffle.  Retry after
> >>>>> +         * the in-progress movement has finished.
> >>>>> +         */
> >>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> >>>>> +            return -EAGAIN;
> >>>>> +
> >>>>> +        /*
> >>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> >>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> >>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> >>>>> +         * ->arch.old_cpu_mask.
> >>>>> +         */
> >>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> >>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> >>>>> +
> >>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> >>>>> +        {
> >>>>> +            /*
> >>>>> +             * Fallback to the old destination if moving is in progress and the
> >>>>> +             * current destination is to be offlined.  This is only possible if
> >>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> >>>>> +             * in the 'mask' parameter.
> >>>>> +             */
> >>>>> +            desc->arch.vector = desc->arch.old_vector;
> >>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> >>
> >> ... replacing of vector (and associated mask), without any further accounting.
> > 
> > It's quite likely I'm missing something here, but what further
> > accounting you would like to do?
> > 
> > The current target of the interrupt (->arch.cpu_mask previous to
> > cpumask_and()) is all going offline, so any attempt to set it in
> > ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
> > set in ->arch.old_cpu_mask, which previous patches attempted to
> > solve.
> > 
> > Maybe by "further accounting" you meant something else not related to
> > ->arch.old_{cpu_mask,vector}?
> 
> Indeed. What I'm thinking of is what normally release_old_vec() would
> do (of which only desc->arch.used_vectors updating would appear to be
> relevant, seeing the CPU's going offline). The other one I was thinking
> of, updating vector_irq[], likely is also unnecessary, again because
> that's per-CPU data of a CPU going down.

I think updating vector_irq[] should be explicitly avoided, as doing
so would prevent us from correctly draining any pending interrupts:
the vector -> irq mapping would be broken by the time the interrupt
enable window at the bottom of fixup_irqs() is reached.

For used_vectors: we might clean it.  I'm a bit worried however that at
some point we insert a check in the do_IRQ() path that ensures
vector_irq[] is in line with desc->arch.used_vectors, which would fail
for interrupts drained at the bottom of fixup_irqs().  Let me attempt
to clear the currently used vector from ->arch.used_vectors.
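
A minimal sketch of what that could look like in the fallback path (assumed
placement, before ->arch.vector is overwritten with the old vector):

    /*
     * Release the vector allocated for the abandoned intermediate
     * destination, so it isn't leaked from ->arch.used_vectors.
     */
    if ( desc->arch.used_vectors )
    {
        ASSERT(test_bit(desc->arch.vector, desc->arch.used_vectors));
        clear_bit(desc->arch.vector, desc->arch.used_vectors);
    }

    desc->arch.vector = desc->arch.old_vector;
    cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);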

Thanks, Roger.

Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Jan Beulich 5 months, 2 weeks ago
On 13.06.2024 13:31, Roger Pau Monné wrote:
> On Thu, Jun 13, 2024 at 10:38:35AM +0200, Jan Beulich wrote:
>> On 12.06.2024 17:36, Roger Pau Monné wrote:
>>> On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
>>>> On 12.06.2024 12:39, Roger Pau Monné wrote:
>>>>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
>>>>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>>>>>> Currently there's logic in fixup_irqs() that attempts to prevent
>>>>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
>>>>>>> interrupts from the CPUs not present in the input mask.  The current logic in
>>>>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
>>>>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
>>>>>>>
>>>>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
>>>>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
>>>>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
>>>>>>> and no remaining online CPUs in ->arch.cpu_mask.
>>>>>>>
>>>>>>> If _assign_irq_vector() is requested to move an interrupt in the state
>>>>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
>>>>>>> CPUs that could be used as fallback, and if that's the case do move the
>>>>>>> interrupt back to the previous destination.  Note this is easier because the
>>>>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
>>>>>>> vector on the destination.
>>>>>>>
>>>>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
>>>>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
>>>>>>> shouldn't be possible to get into _assign_irq_vector() with
>>>>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
>>>>>>> ->arch.old_cpu_mask.
>>>>>>>
>>>>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
>>>>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
>>>>>>> longer part of the affinity set,
>>>>>>
>>>>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
>>>>>>
>>>>>>> move the interrupt to a different CPU part of
>>>>>>> the provided mask
>>>>>>
>>>>>> ... this (->arch.cpu_mask related).
>>>>>
>>>>> No, the "provided mask" here is the "mask" parameter, not
>>>>> ->arch.cpu_mask.
>>>>
>>>> Oh, so this describes the case of "hitting" the comment at the very bottom of
>>>> the first hunk then? (I probably was misreading this because I was expecting
>>>> it to describe a code change, rather than the case where original behavior
>>>> needs retaining. IOW - all fine here then.)
>>>>
>>>>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
>>>>>>> pending interrupt movement to be completed.
>>>>>>
>>>>>> Right, that's to clean up state from before the initial move. What isn't
>>>>>> clear to me is what's to happen with the state of the intermediate
>>>>>> placement. Description and code changes leave me with the impression that
>>>>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
>>>>>> why that would be an okay thing to do.
>>>>>
>>>>> There isn't much we can do with the intermediate placement, as the CPU
>>>>> is going offline.  However we can drain any pending interrupts from
>>>>> IRR after the new destination has been set, since setting the
>>>>> destination is done from the CPU that's the current target of the
>>>>> interrupts.  So we can ensure the draining is done strictly after the
>>>>> target has been switched, hence ensuring no further interrupts from
>>>>> this source will be delivered to the current CPU.
>>>>
>>>> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
>>>> the ...
>>>>
>>>>>>> --- a/xen/arch/x86/irq.c
>>>>>>> +++ b/xen/arch/x86/irq.c
>>>>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>>>>>>>      }
>>>>>>>  
>>>>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>>>>>> -        return -EAGAIN;
>>>>>>> +    {
>>>>>>> +        /*
>>>>>>> +         * If the current destination is online refuse to shuffle.  Retry after
>>>>>>> +         * the in-progress movement has finished.
>>>>>>> +         */
>>>>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
>>>>>>> +            return -EAGAIN;
>>>>>>> +
>>>>>>> +        /*
>>>>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
>>>>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
>>>>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
>>>>>>> +         * ->arch.old_cpu_mask.
>>>>>>> +         */
>>>>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
>>>>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
>>>>>>> +
>>>>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
>>>>>>> +        {
>>>>>>> +            /*
>>>>>>> +             * Fallback to the old destination if moving is in progress and the
>>>>>>> +             * current destination is to be offlined.  This is only possible if
>>>>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
>>>>>>> +             * in the 'mask' parameter.
>>>>>>> +             */
>>>>>>> +            desc->arch.vector = desc->arch.old_vector;
>>>>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
>>>>
>>>> ... replacing of vector (and associated mask), without any further accounting.
>>>
>>> It's quite likely I'm missing something here, but what further
>>> accounting you would like to do?
>>>
>>> The current target of the interrupt (->arch.cpu_mask previous to
>>> cpumask_and()) is all going offline, so any attempt to set it in
>>> ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
>>> set in ->arch.old_cpu_mask, which previous patches attempted to
>>> solve.
>>>
>>> Maybe by "further accounting" you meant something else not related to
>>> ->arch.old_{cpu_mask,vector}?
>>
>> Indeed. What I'm thinking of is what normally release_old_vec() would
>> do (of which only desc->arch.used_vectors updating would appear to be
>> relevant, seeing the CPU's going offline). The other one I was thinking
>> of, updating vector_irq[], likely is also unnecessary, again because
>> that's per-CPU data of a CPU going down.
> 
> I think updating vector_irq[] should be explicitly avoided, as doing
> so would prevent us from correctly draining any pending interrupts
> because the vector -> irq mapping would be broken when the interrupt
> enable window at the bottom of fixup_irqs() is reached.
> 
> For used_vectors: we might clean it, I'm a bit worried however that at
> some point we insert a check in do_IRQ() path that ensures the
> vector_irq[] is inline with desc->arch.used_vectors, which would fail
> for interrupts drained at the bottom of fixup_irqs().  Let me attempt
> to clean the currently used vector from ->arch.used_vectors.

Just to clarify: It may well be that, for draining purposes, the bit can't be
cleared right here. But it then still needs clearing _somewhere_, or else we
risk ending up with inconsistent state (triggering e.g. an assertion
later on) or leaking vectors. My problem here was that I also
couldn't locate any such "somewhere", and the commentary also didn't point me
anywhere.

Jan

Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Roger Pau Monné 5 months, 2 weeks ago
On Thu, Jun 13, 2024 at 01:36:55PM +0200, Jan Beulich wrote:
> On 13.06.2024 13:31, Roger Pau Monné wrote:
> > On Thu, Jun 13, 2024 at 10:38:35AM +0200, Jan Beulich wrote:
> >> On 12.06.2024 17:36, Roger Pau Monné wrote:
> >>> On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
> >>>> On 12.06.2024 12:39, Roger Pau Monné wrote:
> >>>>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
> >>>>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
> >>>>>>> Currently there's logic in fixup_irqs() that attempts to prevent
> >>>>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> >>>>>>> interrupts from the CPUs not present in the input mask.  The current logic in
> >>>>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> >>>>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> >>>>>>>
> >>>>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> >>>>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> >>>>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> >>>>>>> and no remaining online CPUs in ->arch.cpu_mask.
> >>>>>>>
> >>>>>>> If _assign_irq_vector() is requested to move an interrupt in the state
> >>>>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> >>>>>>> CPUs that could be used as fallback, and if that's the case do move the
> >>>>>>> interrupt back to the previous destination.  Note this is easier because the
> >>>>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
> >>>>>>> vector on the destination.
> >>>>>>>
> >>>>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
> >>>>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> >>>>>>> shouldn't be possible to get into _assign_irq_vector() with
> >>>>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> >>>>>>> ->arch.old_cpu_mask.
> >>>>>>>
> >>>>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> >>>>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> >>>>>>> longer part of the affinity set,
> >>>>>>
> >>>>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
> >>>>>>
> >>>>>>> move the interrupt to a different CPU part of
> >>>>>>> the provided mask
> >>>>>>
> >>>>>> ... this (->arch.cpu_mask related).
> >>>>>
> >>>>> No, the "provided mask" here is the "mask" parameter, not
> >>>>> ->arch.cpu_mask.
> >>>>
> >>>> Oh, so this describes the case of "hitting" the comment at the very bottom of
> >>>> the first hunk then? (I probably was misreading this because I was expecting
> >>>> it to describe a code change, rather than the case where original behavior
> >>>> needs retaining. IOW - all fine here then.)
> >>>>
> >>>>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
> >>>>>>> pending interrupt movement to be completed.
> >>>>>>
> >>>>>> Right, that's to clean up state from before the initial move. What isn't
> >>>>>> clear to me is what's to happen with the state of the intermediate
> >>>>>> placement. Description and code changes leave me with the impression that
> >>>>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
> >>>>>> why that would be an okay thing to do.
> >>>>>
> >>>>> There isn't much we can do with the intermediate placement, as the CPU
> >>>>> is going offline.  However we can drain any pending interrupts from
> >>>>> IRR after the new destination has been set, since setting the
> >>>>> destination is done from the CPU that's the current target of the
> >>>>> interrupts.  So we can ensure the draining is done strictly after the
> >>>>> target has been switched, hence ensuring no further interrupts from
> >>>>> this source will be delivered to the current CPU.
> >>>>
> >>>> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
> >>>> the ...
> >>>>
> >>>>>>> --- a/xen/arch/x86/irq.c
> >>>>>>> +++ b/xen/arch/x86/irq.c
> >>>>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >>>>>>>      }
> >>>>>>>  
> >>>>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> >>>>>>> -        return -EAGAIN;
> >>>>>>> +    {
> >>>>>>> +        /*
> >>>>>>> +         * If the current destination is online refuse to shuffle.  Retry after
> >>>>>>> +         * the in-progress movement has finished.
> >>>>>>> +         */
> >>>>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> >>>>>>> +            return -EAGAIN;
> >>>>>>> +
> >>>>>>> +        /*
> >>>>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> >>>>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> >>>>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> >>>>>>> +         * ->arch.old_cpu_mask.
> >>>>>>> +         */
> >>>>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> >>>>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> >>>>>>> +
> >>>>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> >>>>>>> +        {
> >>>>>>> +            /*
> >>>>>>> +             * Fallback to the old destination if moving is in progress and the
> >>>>>>> +             * current destination is to be offlined.  This is only possible if
> >>>>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> >>>>>>> +             * in the 'mask' parameter.
> >>>>>>> +             */
> >>>>>>> +            desc->arch.vector = desc->arch.old_vector;
> >>>>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> >>>>
> >>>> ... replacing of vector (and associated mask), without any further accounting.
> >>>
> >>> It's quite likely I'm missing something here, but what further
> >>> accounting would you like to do?
> >>>
> >>> The current target of the interrupt (->arch.cpu_mask prior to the
> >>> cpumask_and()) is all going offline, so any attempt to set it in
> >>> ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
> >>> set in ->arch.old_cpu_mask, which previous patches attempted to
> >>> solve.
> >>>
> >>> Maybe by "further accounting" you meant something else not related to
> >>> ->arch.old_{cpu_mask,vector}?
> >>
> >> Indeed. What I'm thinking of is what normally release_old_vec() would
> >> do (of which only desc->arch.used_vectors updating would appear to be
> >> relevant, seeing the CPU's going offline). The other one I was thinking
> >> of, updating vector_irq[], likely is also unnecessary, again because
> >> that's per-CPU data of a CPU going down.
> > 
> > I think updating vector_irq[] should be explicitly avoided, as doing
> > so would prevent us from correctly draining any pending interrupts
> > because the vector -> irq mapping would be broken when the interrupt
> > enable window at the bottom of fixup_irqs() is reached.
> > 
> > For used_vectors: we might clear it; I'm a bit worried however that at
> > some point we insert a check in the do_IRQ() path that ensures
> > vector_irq[] is in line with desc->arch.used_vectors, which would fail
> > for interrupts drained at the bottom of fixup_irqs().  Let me attempt
> > to clear the currently used vector from ->arch.used_vectors.
> 
> Just to clarify: It may well be that, for draining, the bit can't be
> cleared right here. But it then still needs clearing _somewhere_, or
> else we risk ending up with inconsistent state (triggering e.g. an
> assertion later on) or leaking vectors. My problem here was that I
> couldn't locate any such "somewhere" either, and the commentary also
> didn't point me anywhere.

You are correct: there's no such place where the cleanup would happen.

I'm afraid the only option I see to deal with this correctly is to do
the cleanup of the old destination in _assign_irq_vector(), and then
drain any pending interrupts from the IRR like I had proposed in
patch 7/7, thus removing the interrupt-enable window at the bottom of
fixup_irqs().
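
As a sketch of what I have in mind (apic_irr_read() here stands for an
assumed helper that tests whether a vector is pending in the local
APIC IRR; it doesn't exist under that name today):

    /*
     * Once the new destination has been set, anything still pending
     * in this CPU's IRR can be forwarded to the new destination,
     * rather than relying on a brief interrupt-enable window for it
     * to be delivered locally.
     */
    if ( apic_irr_read(desc->arch.old_vector) )  /* assumed helper */
        send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
                      desc->arch.vector);

That way the forwarding happens with interrupts disabled, and strictly
after the destination switch, so no further interrupts from this
source can be delivered to the CPU going offline.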

Let me know if that seems sensible.

Thanks, Roger.

Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Posted by Jan Beulich 5 months, 2 weeks ago
On 13.06.2024 14:55, Roger Pau Monné wrote:
> On Thu, Jun 13, 2024 at 01:36:55PM +0200, Jan Beulich wrote:
>> On 13.06.2024 13:31, Roger Pau Monné wrote:
>>> On Thu, Jun 13, 2024 at 10:38:35AM +0200, Jan Beulich wrote:
>>>> On 12.06.2024 17:36, Roger Pau Monné wrote:
>>>>> On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
>>>>>> On 12.06.2024 12:39, Roger Pau Monné wrote:
>>>>>>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
>>>>>>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>>>>>>>> Currently there's logic in fixup_irqs() that attempts to prevent
>>>>>>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
>>>>>>>>> interrupts from the CPUs not present in the input mask.  The current logic in
>>>>>>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
>>>>>>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
>>>>>>>>>
>>>>>>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
>>>>>>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
>>>>>>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
>>>>>>>>> and no remaining online CPUs in ->arch.cpu_mask.
>>>>>>>>>
>>>>>>>>> If _assign_irq_vector() is requested to move an interrupt in the state
>>>>>>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
>>>>>>>>> CPUs that could be used as fallback, and if that's the case do move the
>>>>>>>>> interrupt back to the previous destination.  Note this is easier because the
>>>>>>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
>>>>>>>>> vector on the destination.
>>>>>>>>>
>>>>>>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
>>>>>>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
>>>>>>>>> shouldn't be possible to get into _assign_irq_vector() with
>>>>>>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
>>>>>>>>> ->arch.old_cpu_mask.
>>>>>>>>>
>>>>>>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
>>>>>>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
>>>>>>>>> longer part of the affinity set,
>>>>>>>>
>>>>>>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
>>>>>>>>
>>>>>>>>> move the interrupt to a different CPU part of
>>>>>>>>> the provided mask
>>>>>>>>
>>>>>>>> ... this (->arch.cpu_mask related).
>>>>>>>
>>>>>>> No, the "provided mask" here is the "mask" parameter, not
>>>>>>> ->arch.cpu_mask.
>>>>>>
>>>>>> Oh, so this describes the case of "hitting" the comment at the very bottom of
>>>>>> the first hunk then? (I probably was misreading this because I was expecting
>>>>>> it to describe a code change, rather than the case where original behavior
>>>>>> needs retaining. IOW - all fine here then.)
>>>>>>
>>>>>>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
>>>>>>>>> pending interrupt movement to be completed.
>>>>>>>>
>>>>>>>> Right, that's to clean up state from before the initial move. What isn't
>>>>>>>> clear to me is what's to happen with the state of the intermediate
>>>>>>>> placement. Description and code changes leave me with the impression that
>>>>>>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
>>>>>>>> why that would be an okay thing to do.
>>>>>>>
>>>>>>> There isn't much we can do with the intermediate placement, as the CPU
>>>>>>> is going offline.  However we can drain any pending interrupts from
>>>>>>> IRR after the new destination has been set, since setting the
>>>>>>> destination is done from the CPU that's the current target of the
>>>>>>> interrupts.  So we can ensure the draining is done strictly after the
>>>>>>> target has been switched, hence ensuring no further interrupts from
>>>>>>> this source will be delivered to the current CPU.
>>>>>>
>>>>>> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
>>>>>> the ...
>>>>>>
>>>>>>>>> --- a/xen/arch/x86/irq.c
>>>>>>>>> +++ b/xen/arch/x86/irq.c
>>>>>>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>>>>>>>>>      }
>>>>>>>>>  
>>>>>>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>>>>>>>> -        return -EAGAIN;
>>>>>>>>> +    {
>>>>>>>>> +        /*
>>>>>>>>> +         * If the current destination is online refuse to shuffle.  Retry after
>>>>>>>>> +         * the in-progress movement has finished.
>>>>>>>>> +         */
>>>>>>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
>>>>>>>>> +            return -EAGAIN;
>>>>>>>>> +
>>>>>>>>> +        /*
>>>>>>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
>>>>>>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
>>>>>>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
>>>>>>>>> +         * ->arch.old_cpu_mask.
>>>>>>>>> +         */
>>>>>>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
>>>>>>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
>>>>>>>>> +
>>>>>>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
>>>>>>>>> +        {
>>>>>>>>> +            /*
>>>>>>>>> +             * Fallback to the old destination if moving is in progress and the
>>>>>>>>> +             * current destination is to be offlined.  This is only possible if
>>>>>>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
>>>>>>>>> +             * in the 'mask' parameter.
>>>>>>>>> +             */
>>>>>>>>> +            desc->arch.vector = desc->arch.old_vector;
>>>>>>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
>>>>>>
>>>>>> ... replacing of vector (and associated mask), without any further accounting.
>>>>>
>>>>> It's quite likely I'm missing something here, but what further
>>>>> accounting would you like to do?
>>>>>
>>>>> The current target of the interrupt (->arch.cpu_mask prior to the
>>>>> cpumask_and()) is all going offline, so any attempt to set it in
>>>>> ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
>>>>> set in ->arch.old_cpu_mask, which previous patches attempted to
>>>>> solve.
>>>>>
>>>>> Maybe by "further accounting" you meant something else not related to
>>>>> ->arch.old_{cpu_mask,vector}?
>>>>
>>>> Indeed. What I'm thinking of is what normally release_old_vec() would
>>>> do (of which only desc->arch.used_vectors updating would appear to be
>>>> relevant, seeing the CPU's going offline). The other one I was thinking
>>>> of, updating vector_irq[], likely is also unnecessary, again because
>>>> that's per-CPU data of a CPU going down.
>>>
>>> I think updating vector_irq[] should be explicitly avoided, as doing
>>> so would prevent us from correctly draining any pending interrupts
>>> because the vector -> irq mapping would be broken when the interrupt
>>> enable window at the bottom of fixup_irqs() is reached.
>>>
>>> For used_vectors: we might clear it; I'm a bit worried however that at
>>> some point we insert a check in the do_IRQ() path that ensures
>>> vector_irq[] is in line with desc->arch.used_vectors, which would fail
>>> for interrupts drained at the bottom of fixup_irqs().  Let me attempt
>>> to clear the currently used vector from ->arch.used_vectors.
>>
>> Just to clarify: It may well be that, for draining, the bit can't be
>> cleared right here. But it then still needs clearing _somewhere_, or
>> else we risk ending up with inconsistent state (triggering e.g. an
>> assertion later on) or leaking vectors. My problem here was that I
>> couldn't locate any such "somewhere" either, and the commentary also
>> didn't point me anywhere.
> 
> You are correct: there's no such place where the cleanup would happen.
> 
> I'm afraid the only option I see to deal with this correctly is to do
> the cleanup of the old destination in _assign_irq_vector(), and then
> drain any pending interrupts from the IRR like I had proposed in
> patch 7/7, thus removing the interrupt-enable window at the bottom of
> fixup_irqs().
> 
> Let me know if that seems sensible.

I think it does; doing away with that purely heuristic window would be
pretty nice anyway.
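
(For reference, the window in question is the brief interrupt-enable
period at the bottom of fixup_irqs(), roughly:

    /* Give any interrupts pending on this CPU a chance to arrive. */
    local_irq_enable();
    mdelay(1);
    local_irq_disable();

heuristic in that 1ms may or may not be enough for everything to be
delivered.)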

Jan