[PATCH -next v4 07/19] arm64: entry: Call arm64_preempt_schedule_irq() only if irqs enabled
Posted by Jinjie Ruan 1 month, 1 week ago
There is only a chance to reschedule after an interrupt has been
handled if irqs were enabled when the interrupt was taken, so move
arm64_preempt_schedule_irq() into the branch taken when the
regs_irqs_disabled() check is false.
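
For context, regs_irqs_disabled() was introduced earlier in this
series and simply tests the interrupted context's saved PSTATE; a
rough sketch of the helper, assuming it takes the form used in
arch/arm64/include/asm/ptrace.h:

static __always_inline bool regs_irqs_disabled(struct pt_regs *regs)
{
	/* Was the interrupted context running with IRQs masked (PSTATE.I set)? */
	return !!(regs->pstate & PSR_I_BIT);
}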

As Mark pointed out, this change will have the following key impact:

    "We will not preempt when taking interrupts from a region of kernel
    code where IRQs are enabled but RCU is not watching, matching the
    behaviour of the generic entry code.

    This has the potential to introduce livelock if we can ever have a
    screaming interrupt in such a region, so we'll need to go figure out
    whether that's actually a problem.

    Having this as a separate patch will make it easier to test/bisect
    for that specifically."
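
For reference, with this patch applied the function reads roughly as
below; the lines outside the hunk are reconstructed from context and
may differ in detail, with elided statements marked "...":

static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
					irqentry_state_t state)
{
	mte_check_tfsr_exit();

	lockdep_assert_irqs_disabled();

	if (!regs_irqs_disabled(regs)) {
		if (state.exit_rcu) {
			/* RCU was not watching: undo entry work and
			 * return without preempting. */
			...
			return;
		}

		/* IRQs were enabled at the trap site and RCU is
		 * watching, so it is safe to reschedule here. */
		arm64_preempt_schedule_irq();

		trace_hardirqs_on();
	} else {
		if (state.exit_rcu)
			...
	}
}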

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/entry-common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index e0380812d71e..b57f6dc66115 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -114,8 +114,6 @@ static void __sched arm64_preempt_schedule_irq(void)
 static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
 					irqentry_state_t state)
 {
-	arm64_preempt_schedule_irq();
-
 	mte_check_tfsr_exit();
 
 	lockdep_assert_irqs_disabled();
@@ -129,6 +127,8 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
 			return;
 		}
 
+		arm64_preempt_schedule_irq();
+
 		trace_hardirqs_on();
 	} else {
 		if (state.exit_rcu)
-- 
2.34.1
Re: [PATCH -next v4 07/19] arm64: entry: Call arm64_preempt_schedule_irq() only if irqs enabled
Posted by Mark Rutland 1 month ago
On Fri, Oct 25, 2024 at 06:06:48PM +0800, Jinjie Ruan wrote:
> There is only a chance to reschedule after an interrupt has been
> handled if irqs were enabled when the interrupt was taken, so move
> arm64_preempt_schedule_irq() into the branch taken when the
> regs_irqs_disabled() check is false.
> 
> As Mark pointed out, this change will have the following key impact:
> 
>     "We will not preempt when taking interrupts from a region of kernel
>     code where IRQs are enabled but RCU is not watching, matching the
>     behaviour of the generic entry code.
> 
>     This has the potential to introduce livelock if we can ever have a
>     screaming interrupt in such a region, so we'll need to go figure out
>     whether that's actually a problem.
> 
>     Having this as a separate patch will make it easier to test/bisect
>     for that specifically."
> 
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>

This should be folded into the prior patch.

Mark.
