inactive_task_timer() executes in interrupt (atomic) context. It calls
put_task_struct(), which indirectly acquires sleeping locks under
PREEMPT_RT.
Below is an example of a splat that happened in a test environment:
CPU: 1 PID: 2848 Comm: life Kdump: loaded Tainted: G W ---------
Hardware name: HP ProLiant DL388p Gen8, BIOS P70 07/15/2012
Call Trace:
dump_stack_lvl+0x57/0x7d
mark_lock_irq.cold+0x33/0xba
? stack_trace_save+0x4b/0x70
? save_trace+0x55/0x150
mark_lock+0x1e7/0x400
mark_usage+0x11d/0x140
__lock_acquire+0x30d/0x930
lock_acquire.part.0+0x9c/0x210
? refill_obj_stock+0x3d/0x3a0
? rcu_read_lock_sched_held+0x3f/0x70
? trace_lock_acquire+0x38/0x140
? lock_acquire+0x30/0x80
? refill_obj_stock+0x3d/0x3a0
rt_spin_lock+0x27/0xe0
? refill_obj_stock+0x3d/0x3a0
refill_obj_stock+0x3d/0x3a0
? inactive_task_timer+0x1ad/0x340
kmem_cache_free+0x357/0x560
inactive_task_timer+0x1ad/0x340
? switched_from_dl+0x2d0/0x2d0
__run_hrtimer+0x8a/0x1a0
__hrtimer_run_queues+0x91/0x130
hrtimer_interrupt+0x10f/0x220
__sysvec_apic_timer_interrupt+0x7b/0xd0
sysvec_apic_timer_interrupt+0x4f/0xd0
? asm_sysvec_apic_timer_interrupt+0xa/0x20
asm_sysvec_apic_timer_interrupt+0x12/0x20
RIP: 0033:0x7fff196bf6f5
Instead of calling put_task_struct() directly, we defer it using
call_rcu(). A more natural approach would be a workqueue, but since we
cannot allocate dynamic memory from atomic context on PREEMPT_RT, the
code would become more complex: we would need to embed the work_struct
instance in task_struct and initialize it whenever a new task_struct is
allocated.
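
For illustration, a minimal sketch of that rejected workqueue variant;
the put_work member, its fork-time initialization, and put_task_work()
are hypothetical and not part of this patch:

/* Hypothetical: task_struct would need to grow a member such as
 * "struct work_struct put_work;", initialized at fork time with
 * INIT_WORK(&p->put_work, put_task_work). */
static void put_task_work(struct work_struct *work)
{
	struct task_struct *p = container_of(work, struct task_struct,
					     put_work);

	__put_task_struct(p);
}

/* The timer callback would then do, instead of put_task_struct(p);
 * schedule_work() is safe from hardirq context: */
if (refcount_dec_and_test(&p->usage))
	schedule_work(&p->put_work);

Deferring through call_rcu() avoids both the new field and the
fork-time initialization.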
Signed-off-by: Wander Lairson Costa <wander@redhat.com>
Cc: Paul McKenney <paulmck@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
kernel/sched/build_policy.c | 1 +
kernel/sched/deadline.c | 24 +++++++++++++++++++++++-
2 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
index d9dc9ab3773f..f159304ee792 100644
--- a/kernel/sched/build_policy.c
+++ b/kernel/sched/build_policy.c
@@ -28,6 +28,7 @@
#include <linux/suspend.h>
#include <linux/tsacct_kern.h>
#include <linux/vtime.h>
+#include <linux/rcupdate.h>
#include <uapi/linux/sched/types.h>
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 9ae8f41e3372..ab9301d4cc24 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1405,6 +1405,13 @@ static void update_curr_dl(struct rq *rq)
}
}
+static void delayed_put_task_struct(struct rcu_head *rhp)
+{
+ struct task_struct *task = container_of(rhp, struct task_struct, rcu);
+
+ __put_task_struct(task);
+}
+
static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
{
struct sched_dl_entity *dl_se = container_of(timer,
@@ -1442,7 +1449,22 @@ static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
dl_se->dl_non_contending = 0;
unlock:
task_rq_unlock(rq, p, &rf);
- put_task_struct(p);
+
+ if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+ /*
+ * Decrement the refcount explicitly to avoid unnecessarily
+ * calling call_rcu.
+ */
+ if (refcount_dec_and_test(&p->usage))
+ /*
+ * under PREEMPT_RT, we can't call put_task_struct
+ * in atomic context because it will indirectly
+ * acquire sleeping locks.
+ */
+ call_rcu(&p->rcu, delayed_put_task_struct);
+ } else {
+ put_task_struct(p);
+ }
return HRTIMER_NORESTART;
}
--
2.39.0
On 04/01/23 15:17, Wander Lairson Costa wrote:
> inactive_task_timer() executes in interrupt (atomic) context. It calls
> put_task_struct(), which indirectly acquires sleeping locks under
> PREEMPT_RT.
>
> [...]
>
> Instead of calling put_task_struct() directly, we defer it using
> call_rcu(). A more natural approach would be a workqueue, but since we
> cannot allocate dynamic memory from atomic context on PREEMPT_RT, the
> code would become more complex: we would need to embed the work_struct
> instance in task_struct and initialize it whenever a new task_struct is
> allocated.

Sorry to come back on this; Juri reminded me offline that put_task_struct()
is invoked in other non-sleepable contexts, not just inactive_task_timer().

e.g.

  rto_push_irq_work_func()   // hard irq work so hardirq context
  `\
    push_rt_task()
    `\
      put_task_struct()

Or

  cpu_stopper_thread()       // stopper callbacks must not sleep
  `\
    push_cpu_stop()
    `\
      put_task_struct()

... But then again I'm not aware of any splats happening in these paths. Is
there something special about inactive_task_timer(), or could it be the
issue is there for those other paths but we just haven't had them reported
yet?
On Thu, Jan 19, 2023 at 3:03 PM Valentin Schneider <vschneid@redhat.com> wrote:
>
> Sorry to come back on this; Juri reminded me offline that put_task_struct()
> is invoked in other non-sleepable contexts, not just inactive_task_timer().

I guess there is no splat because the usage count doesn't reach zero in
those code paths.

> e.g.
>
>   rto_push_irq_work_func()   // hard irq work so hardirq context
>   `\
>     push_rt_task()
>     `\
>       put_task_struct()

This is paired with a get_task_struct() a few lines above in the same
function.

> Or
>
>   cpu_stopper_thread()       // stopper callbacks must not sleep
>   `\
>     push_cpu_stop()
>     `\
>       put_task_struct()

This is paired with a get_task_struct() from get_push_task().

> ... But then again I'm not aware of any splats happening in these paths. Is
> there something special about inactive_task_timer(), or could it be the
> issue is there for those other paths but we just haven't had them reported
> yet?

Given that those calls have corresponding get_task_struct() calls that
are close in time, there is a low probability of the usage count
reaching zero and triggering the splat. In any case, I will work on a v2
that also addresses those call sites.
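
A helper along those lines could give all such call sites one safe path.
A sketch only; the name put_task_struct_atomic_safe() is invented here,
not taken from the posted patch:

/* Sketch: callable from atomic context. On PREEMPT_RT the final put is
 * deferred to the RCU callback from the patch above; otherwise it
 * behaves exactly like put_task_struct(). */
static inline void put_task_struct_atomic_safe(struct task_struct *p)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
		if (refcount_dec_and_test(&p->usage))
			call_rcu(&p->rcu, delayed_put_task_struct);
	} else {
		put_task_struct(p);
	}
}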
On Wed, Jan 04, 2023 at 03:17:01PM -0300, Wander Lairson Costa wrote:
> inactive_task_timer() executes in interrupt (atomic) context. It calls
> put_task_struct(), which indirectly acquires sleeping locks under
> PREEMPT_RT.
>
> [...]
>
> +static void delayed_put_task_struct(struct rcu_head *rhp)
> +{
> +	struct task_struct *task = container_of(rhp, struct task_struct, rcu);
> +
> +	__put_task_struct(task);

Please note that BH is disabled here. Don't you therefore need to
schedule a workqueue handler? Perhaps directly from
inactive_task_timer(), or maybe from this point. If the latter, one way
to skip the extra step is to use queue_rcu_work().

							Thanx, Paul

> +}
> +
> [...]
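
For reference, a sketch of the queue_rcu_work() route suggested above.
It assumes a hypothetical "struct rcu_work put_rwork;" member in
task_struct, which the real task_struct does not have (it carries only a
plain rcu_head):

static void delayed_put_task_work(struct work_struct *work)
{
	struct rcu_work *rwork = to_rcu_work(work);
	struct task_struct *p = container_of(rwork, struct task_struct,
					     put_rwork);

	__put_task_struct(p);
}

/* At the call site: the put then runs from a workqueue (sleepable)
 * after a grace period, not from the RCU softirq. */
INIT_RCU_WORK(&p->put_rwork, delayed_put_task_work);
queue_rcu_work(system_wq, &p->put_rwork);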
On Mon, Jan 9, 2023 at 10:40 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> Please note that BH is disabled here. Don't you therefore need to
> schedule a workqueue handler? Perhaps directly from
> inactive_task_timer(), or maybe from this point. If the latter, one way
> to skip the extra step is to use queue_rcu_work().

My initial work was using a workqueue [1,2]. However, I realized I could
reach much simpler code with call_rcu(). I am afraid my ignorance
doesn't allow me to get your point. Does disabling softirq imply atomic
context?

[1] https://gitlab.com/walac/kernel-ark/-/commit/ec8addbe38d5c318f1789b4c0fa480a9d2afdb65
[2] https://gitlab.com/walac/kernel-ark/-/commit/0bde233235ffed233a7466a36a4866bc48064f54
On Tue, Jan 10, 2023 at 05:52:03PM -0300, Wander Lairson Costa wrote:
> My initial work was using a workqueue [1,2]. However, I realized I could
> reach much simpler code with call_rcu(). I am afraid my ignorance
> doesn't allow me to get your point. Does disabling softirq imply atomic
> context?

Given that this problem occurred in PREEMPT_RT, I am assuming that the
appropriate definition of "atomic context" is "cannot call schedule()".
And you are in fact not permitted to call schedule() from a bh-disabled
region.

This also means that you cannot acquire a non-raw spinlock in a
bh-disabled region of code in a PREEMPT_RT kernel, because doing so can
invoke schedule().

Of course, using a workqueue does incur needless overhead in
non-PREEMPT_RT kernels. So one alternative approach is to use the
workqueue only in PREEMPT_RT kernels and to just invoke
__put_task_struct() directly (without call_rcu() along the way)
otherwise.

Does that help, or am I missing your point?

							Thanx, Paul
On 10/01/23 14:27, Paul E. McKenney wrote:
> Given that this problem occurred in PREEMPT_RT, I am assuming that the
> appropriate definition of "atomic context" is "cannot call schedule()".
> And you are in fact not permitted to call schedule() from a bh-disabled
> region.
>
> This also means that you cannot acquire a non-raw spinlock in a
> bh-disabled region of code in a PREEMPT_RT kernel, because doing so can
> invoke schedule().

But per the PREEMPT_RT lock "replacement", non-raw spinlocks end up
invoking schedule_rtlock(), which should be safe vs BH disabled
(local_lock() + rcu_read_lock()):

  6991436c2b5d ("sched/core: Provide a scheduling point for RT locks")

Unless I'm missing something else?
On Wed, Jan 18, 2023 at 03:57:38PM +0000, Valentin Schneider wrote:
> But per the PREEMPT_RT lock "replacement", non-raw spinlocks end up
> invoking schedule_rtlock(), which should be safe vs BH disabled
> (local_lock() + rcu_read_lock()):
>
>   6991436c2b5d ("sched/core: Provide a scheduling point for RT locks")
>
> Unless I'm missing something else?

No, you miss nothing. Apologies for my confusion!

(I could have sworn that someone else corrected me on this earlier, but
I don't see it right off hand.)

							Thanx, Paul
On 18/01/23 10:11, Paul E. McKenney wrote:
> No, you miss nothing. Apologies for my confusion!
>
> (I could have sworn that someone else corrected me on this earlier, but
> I don't see it right off hand.)

Heh, I had a smidge of doubt myself, but since we've cleared this up:

Reviewed-by: Valentin Schneider <vschneid@redhat.com>