With PREEMPT_RT, both BH and hard IRQs are handled in threads.
BH is served by ksoftirqd, which runs as SCHED_OTHER; IRQ threads
run as SCHED_FIFO with some priority (the default is 50).
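
As a reminder of where that default comes from, here is a simplified
sketch of how the kernel elevates forced IRQ threads, abridged from
mainline kernel/irq/manage.c and kernel/sched/core.c (details vary
between kernel versions):

/* kernel/sched/core.c: SCHED_FIFO at half of MAX_RT_PRIO (100 / 2 == 50) */
void sched_set_fifo(struct task_struct *p)
{
	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };

	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
}

/* kernel/irq/manage.c: every forced IRQ thread goes through this */
static int setup_irq_thread(struct irqaction *new, unsigned int irq,
			    bool secondary)
{
	struct task_struct *t = kthread_create(irq_thread, new,
					       "irq/%d-%s", irq, new->name);

	if (IS_ERR(t))
		return PTR_ERR(t);

	sched_set_fifo(t);		/* hence the default priority of 50 */
	new->thread = get_task_struct(t);
	return 0;
}
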
On the test platform, the following was observed:

- the ksoftirqd thread is running, servicing softirqs
- a hard IRQ is received by the CPU. The default handler wakes up the
  IRQ thread and exits
- in the IRQ thread, force_irqthreads is defined as (true) for
  PREEMPT_RT, thus the handler is irq_forced_thread_fn
- before calling the IRQ handler, irq_forced_thread_fn calls
  local_bh_disable(), which in turn acquires
  local_lock(&softirq_ctrl.lock); see the sketch after this list
- softirq_ctrl.lock is still owned by the ksoftirqd thread, so the
  slow path is taken (rt_spin_lock_slowlock), causing a context switch
  to ksoftirqd (with properly adjusted priority) and a wait for it to
  release the lock; only then is context switched back to the IRQ
  thread
- as a result, the IRQ handler is invoked with an observed delay of
  several hundred microseconds on a 2 GHz platform
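
The relevant PREEMPT_RT plumbing looks roughly like this, abridged
from mainline kernel/softirq.c (exact code differs between kernel
versions):

struct softirq_ctrl {
	local_lock_t	lock;
	int		cnt;
};

static DEFINE_PER_CPU(struct softirq_ctrl, softirq_ctrl) = {
	.lock	= INIT_LOCAL_LOCK(softirq_ctrl.lock),
};

void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
{
	/* First entry of this task into a BH-disabled section? */
	if (!current->softirq_disable_cnt) {
		if (preemptible()) {
			/*
			 * On RT, local_lock() is a per-CPU rtmutex-based
			 * spinlock. If ksoftirqd holds it, this is where
			 * the IRQ thread blocks and priority inheritance
			 * boosts ksoftirqd until it releases the lock.
			 */
			local_lock(&softirq_ctrl.lock);
			rcu_read_lock();
		}
	}
	/* softirq_ctrl.cnt accounting elided */
}
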
This causes a failure to deliver on real-time latency requirements.

What is the reason for disabling BH in PREEMPT_RT? It looks more
reasonable to let the scheduler run threads according to their
priorities; the IRQ thread will then preempt BHs and run to
completion.

An experiment was conducted with this approach; execution was smooth,
with no spikes in IRQ latency.

I am likely missing corner cases with this approach, so this is a
request for comments. Input welcome.
Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@mobileye.com>
---
kernel/irq/manage.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index bf9ae8a8686f..d7189ac37fa8 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1191,17 +1191,19 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
 {
 	irqreturn_t ret;
 
-	local_bh_disable();
-	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
+		local_bh_disable();
 		local_irq_disable();
+	}
 	ret = action->thread_fn(action->irq, action->dev_id);
 	if (ret == IRQ_HANDLED)
 		atomic_inc(&desc->threads_handled);
 
 	irq_finalize_oneshot(desc, action);
-	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
 		local_irq_enable();
-	local_bh_enable();
+		local_bh_enable();
+	}
 	return ret;
 }
 
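
For reference, irq_forced_thread_fn() after this patch would read as
follows (reconstructed from the hunk above):

static irqreturn_t
irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
{
	irqreturn_t ret;

	/* On PREEMPT_RT, neither BH nor hard IRQs are disabled here */
	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
		local_bh_disable();
		local_irq_disable();
	}
	ret = action->thread_fn(action->irq, action->dev_id);
	if (ret == IRQ_HANDLED)
		atomic_inc(&desc->threads_handled);

	irq_finalize_oneshot(desc, action);
	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
		local_irq_enable();
		local_bh_enable();
	}
	return ret;
}
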
--
2.37.3
On Mon, 2024-04-15 at 14:28 +0300, Vladimir Kondratiev wrote:
> With PREEMPT_RT, both BH and irq handled in threads.
> [...]
> I am likely missing corner cases with this approach, so this is
> request for comments. Input welcome

I don't know if anything is actually depending on the mutual
exclusion, but disabling BH there causes any softirq raised by the
irq to run in local_bh_enable() rather than waking ksoftirqd.

It also appears to be the only thing causing in_interrupt() to return
true inside the threaded handler, which may or may not actually
matter.

-Crystal
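
For context, the local_bh_enable() behaviour Crystal describes comes
from the PREEMPT_RT variant of __local_bh_enable_ip(); roughly,
abridged from mainline kernel/softirq.c (locking and counter
bookkeeping elided, details vary between kernel versions):

void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
{
	bool preempt_on = preemptible();
	u32 pending;

	if (__this_cpu_read(softirq_ctrl.cnt) != cnt)
		goto out;	/* not the outermost enable */

	pending = local_softirq_pending();
	if (!pending)
		goto out;

	if (!preempt_on) {
		/* Non-preemptible context: defer to ksoftirqd. */
		wakeup_softirqd();
		goto out;
	}

	/*
	 * Otherwise the enabling task services pending softirqs itself.
	 * This is why a softirq raised by the threaded handler currently
	 * runs here instead of waking ksoftirqd.
	 */
	__do_softirq();
out:
	;	/* counter decrement and local_unlock() elided */
}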