[RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

Posted by John Stultz 3 years, 6 months ago
From: Connor O'Brien <connoro@google.com>

In certain audio use cases, scheduling RT threads on cores that
are handling softirqs can lead to glitches. Prevent this
behavior in cases where the softirq is likely to take a long
time. To avoid unnecessary migrations, the old behavior is
preserved for the RCU, SCHED and TIMER softirqs, which are expected to be
relatively quick.

This patch reworks and combines two related changes originally
by John Dias <joaodias@google.com>

Cc: John Dias <joaodias@google.com>
Cc: Connor O'Brien <connoro@google.com>
Cc: Rick Yiu <rickyiu@google.com>
Cc: John Kacur <jkacur@redhat.com>
Cc: Qais Yousef <qais.yousef@arm.com>
Cc: Chris Redpath <chris.redpath@arm.com>
Cc: Abhijeet Dharmapurikar <adharmap@quicinc.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-team@android.com
Signed-off-by: John Dias <joaodias@google.com>
[elavila: Port to mainline, amend commit text]
Signed-off-by: J. Avila <elavila@google.com>
[connoro: Reworked, simplified, and merged two patches together]
Signed-off-by: Connor O'Brien <connoro@google.com>
[jstultz: Further simplified and fixed issues, reworded commit
 message, removed arm64-isms]
Signed-off-by: John Stultz <jstultz@google.com>
---
v2:
* Reformatted Kconfig entry to match coding style
  (Reported-by: Randy Dunlap <rdunlap@infradead.org>)
* Made rt_task_fits_capacity_and_may_preempt static to
  avoid warnings (Reported-by: kernel test robot <lkp@intel.com>)
* Rework to use preempt_count and drop kconfig dependency on ARM64
v3:
* Use introduced __cpu_softirq_pending() to avoid s390 build
  issues (Reported-by: kernel test robot <lkp@intel.com>)
v4:
* Drop TASKLET_SOFTIRQ from LONG_SOFTIRQS (suggested by Qais)
* Depend on !PREEMPT_RT (Suggested by Qais)
* Larger simplification of logic (suggested by Qais)
* Rework LONG_SOFTIRQS to use BIT() macros
* Rename task_may_preempt() to cpu_busy_with_softirqs()
---
 include/linux/interrupt.h |  6 ++++
 init/Kconfig              | 10 +++++++
 kernel/sched/rt.c         | 61 +++++++++++++++++++++++++++++++++------
 kernel/softirq.c          |  9 ++++++
 4 files changed, 77 insertions(+), 9 deletions(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index a749a8663841..e3a4add67e8c 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -582,6 +582,11 @@ enum
  * _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue
  */
 #define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(RCU_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ))
+/* Softirqs whose handling might be long: */
+#define LONG_SOFTIRQ_MASK (BIT(NET_TX_SOFTIRQ)    | \
+			   BIT(NET_RX_SOFTIRQ)    | \
+			   BIT(BLOCK_SOFTIRQ)     | \
+			   BIT(IRQ_POLL_SOFTIRQ))
 
 /* map softirq index to softirq name. update 'softirq_to_name' in
  * kernel/softirq.c when adding a new softirq.
@@ -617,6 +622,7 @@ extern void raise_softirq_irqoff(unsigned int nr);
 extern void raise_softirq(unsigned int nr);
 
 DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
+DECLARE_PER_CPU(u32, active_softirqs);
 
 static inline struct task_struct *this_cpu_ksoftirqd(void)
 {
diff --git a/init/Kconfig b/init/Kconfig
index 532362fcfe31..3d1de6edcfa1 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1284,6 +1284,16 @@ config SCHED_AUTOGROUP
 	  desktop applications.  Task group autogeneration is currently based
 	  upon task session.
 
+config RT_SOFTIRQ_OPTIMIZATION
+	bool "Improve RT scheduling during long softirq execution"
+	depends on SMP && !PREEMPT_RT
+	default n
+	help
+	  Enable an optimization which tries to avoid placing RT tasks on
+	  CPUs that are nonpreemptible while handling a potentially long
+	  softirq, or on CPUs which may soon block preemption, such as a
+	  CPU running a ksoftirqd thread that handles slow softirqs.
+
 config SYSFS_DEPRECATED
 	bool "Enable deprecated sysfs features to support old userspace tools"
 	depends on SYSFS
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 55f39c8f4203..3c628db807c8 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
 #ifdef CONFIG_SMP
 static int find_lowest_rq(struct task_struct *task);
 
+#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
+#define __use_softirq_opt 1
+/*
+ * Return whether the given cpu is currently non-preemptible
+ * while handling a potentially long softirq, or if the current
+ * task is likely to block preemptions soon because it is a
+ * ksoftirqd thread that is handling a slow softirq.
+ */
+static bool cpu_busy_with_softirqs(int cpu)
+{
+	u32 softirqs = per_cpu(active_softirqs, cpu) |
+		       __cpu_softirq_pending(cpu);
+	struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
+	struct task_struct *curr;
+	struct rq *rq = cpu_rq(cpu);
+	int ret;
+
+	rcu_read_lock();
+	curr = READ_ONCE(rq->curr); /* unlocked access */
+	ret = (softirqs & LONG_SOFTIRQ_MASK) &&
+		 (curr == cpu_ksoftirqd ||
+		  preempt_count() & SOFTIRQ_MASK);
+	rcu_read_unlock();
+	return ret;
+}
+#else
+#define __use_softirq_opt 0
+static bool cpu_busy_with_softirqs(int cpu)
+{
+	return false;
+}
+#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
+
+static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
+{
+	return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);
+}
+
 static int
 select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 {
@@ -1637,22 +1675,24 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 	 * This test is optimistic, if we get it wrong the load-balancer
 	 * will have to sort it out.
 	 *
-	 * We take into account the capacity of the CPU to ensure it fits the
-	 * requirement of the task - which is only important on heterogeneous
-	 * systems like big.LITTLE.
+	 * We use rt_task_fits_cpu() to evaluate if the CPU is busy with
+	 * potentially long-running softirq work, as well as take into
+	 * account the capacity of the CPU to ensure it fits the
+	 * requirement of the task - which is only important on
+	 * heterogeneous systems like big.LITTLE.
 	 */
 	test = curr &&
 	       unlikely(rt_task(curr)) &&
 	       (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
 
-	if (test || !rt_task_fits_capacity(p, cpu)) {
+	if (test || !rt_task_fits_cpu(p, cpu)) {
 		int target = find_lowest_rq(p);
 
 		/*
 		 * Bail out if we were forcing a migration to find a better
 		 * fitting CPU but our search failed.
 		 */
-		if (!test && target != -1 && !rt_task_fits_capacity(p, target))
+		if (!test && target != -1 && !rt_task_fits_cpu(p, target))
 			goto out_unlock;
 
 		/*
@@ -1894,14 +1934,17 @@ static int find_lowest_rq(struct task_struct *task)
 		return -1; /* No other targets possible */
 
 	/*
-	 * If we're on asym system ensure we consider the different capacities
-	 * of the CPUs when searching for the lowest_mask.
+	 * If we're using the softirq optimization or if we are
+	 * on asym system, ensure we consider the softirq processing
+	 * or different capacities of the CPUs when searching for the
+	 * lowest_mask.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (__use_softirq_opt ||
+	    static_branch_unlikely(&sched_asym_cpucapacity)) {
 
 		ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
 					  task, lowest_mask,
-					  rt_task_fits_capacity);
+					  rt_task_fits_cpu);
 	} else {
 
 		ret = cpupri_find(&task_rq(task)->rd->cpupri,
diff --git a/kernel/softirq.c b/kernel/softirq.c
index c8a6913c067d..35ee79dd8786 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -60,6 +60,13 @@ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp
 
 DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
 
+/*
+ * active_softirqs -- per cpu, a mask of softirqs that are being handled,
+ * with the expectation that approximate answers are acceptable and
+ * therefore no synchronization is used.
+ */
+DEFINE_PER_CPU(u32, active_softirqs);
+
 const char * const softirq_to_name[NR_SOFTIRQS] = {
 	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
 	"TASKLET", "SCHED", "HRTIMER", "RCU"
@@ -551,6 +558,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 restart:
 	/* Reset the pending bitmask before enabling irqs */
 	set_softirq_pending(0);
+	__this_cpu_write(active_softirqs, pending);
 
 	local_irq_enable();
 
@@ -580,6 +588,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 		pending >>= softirq_bit;
 	}
 
+	__this_cpu_write(active_softirqs, 0);
 	if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
 	    __this_cpu_read(ksoftirqd) == current)
 		rcu_softirq_qs();
-- 
2.38.0.rc1.362.ged0d419d3c-goog
Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Joel Fernandes 3 years, 5 months ago
Hi,

On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> From: Connor O'Brien <connoro@google.com>
> 
> In certain audio use cases, scheduling RT threads on cores that
> are handling softirqs can lead to glitches. Prevent this
> behavior in cases where the softirq is likely to take a long
> time. To avoid unnecessary migrations, the old behavior is
> preserved for the RCU, SCHED and TIMER softirqs, which are expected to be
> relatively quick.
> 
> This patch reworks and combines two related changes originally
> by John Dias <joaodias@google.com>
[..]
> ---
> v2:
> * Reformatted Kconfig entry to match coding style
>   (Reported-by: Randy Dunlap <rdunlap@infradead.org>)
> * Made rt_task_fits_capacity_and_may_preempt static to
>   avoid warnings (Reported-by: kernel test robot <lkp@intel.com>)
> * Rework to use preempt_count and drop kconfig dependency on ARM64
> v3:
> * Use introduced __cpu_softirq_pending() to avoid s390 build
>   issues (Reported-by: kernel test robot <lkp@intel.com>)
> v4:
> * Drop TASKLET_SOFTIRQ from LONG_SOFTIRQS (suggested by Qais)
> * Depend on !PREEMPT_RT (Suggested by Qais)
> * Larger simplification of logic (suggested by Qais)
> * Rework LONG_SOFTIRQS to use BIT() macros
> * Rename task_may_preempt() to cpu_busy_with_softirqs()
> ---
>  include/linux/interrupt.h |  6 ++++
>  init/Kconfig              | 10 +++++++
>  kernel/sched/rt.c         | 61 +++++++++++++++++++++++++++++++++------
>  kernel/softirq.c          |  9 ++++++
>  4 files changed, 77 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index a749a8663841..e3a4add67e8c 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -582,6 +582,11 @@ enum
>   * _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue
>   */
>  #define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(RCU_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ))
> +/* Softirqs whose handling might be long: */
> +#define LONG_SOFTIRQ_MASK (BIT(NET_TX_SOFTIRQ)    | \
> +			   BIT(NET_RX_SOFTIRQ)    | \
> +			   BIT(BLOCK_SOFTIRQ)     | \
> +			   BIT(IRQ_POLL_SOFTIRQ))

Instead of a mask, I wonder if a better approach is to have a heuristic based
on measurement of softirq load. There is already code to measure irq load
(see update_irq_load_avg). Perhaps Vincent would be willing to add the
softirq load in it as well if asked nicely ;)

That's much better because:
1. Any new softirqs added will also trigger this optimization.
2. Systems where these softirqs are quiet will not unnecessarily trigger it.
3. You can also include irq load so that things like IRQ storms also don't
cause RT issues (when there are better "uncompromised" idle CPU candidates).
4. It may not need a CONFIG option at all if the optimization makes
sense for everything. Think of all the systems not running Android.
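
To illustrate, a load-based check might look something like the
following sketch (hypothetical and untested; it assumes the PELT irq
signal in rq->avg_irq were extended to also account softirq time, and
the threshold is a made-up tuning value):

	static bool cpu_busy_with_softirqs(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);

		/* made-up threshold: ~25% of the CPU busy in (soft)irq */
		return READ_ONCE(rq->avg_irq.util_avg) >
		       (SCHED_CAPACITY_SCALE >> 2);
	}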

>  /* map softirq index to softirq name. update 'softirq_to_name' in
>   * kernel/softirq.c when adding a new softirq.
> @@ -617,6 +622,7 @@ extern void raise_softirq_irqoff(unsigned int nr);
>  extern void raise_softirq(unsigned int nr);
>  
>  DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
> +DECLARE_PER_CPU(u32, active_softirqs);
>  
>  static inline struct task_struct *this_cpu_ksoftirqd(void)
>  {
> diff --git a/init/Kconfig b/init/Kconfig
> index 532362fcfe31..3d1de6edcfa1 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1284,6 +1284,16 @@ config SCHED_AUTOGROUP
>  	  desktop applications.  Task group autogeneration is currently based
>  	  upon task session.
>  
> +config RT_SOFTIRQ_OPTIMIZATION

I much dislike this config option name because it does not self-explain what
the option is and it is too long. Moreover, any future such optimizations,
even if rare, will have to come up with another name, causing more confusion.
Instead, CONFIG_RT_AVOID_SOFTIRQS seems cleaner, but I'll defer to scheduler
maintainers since they have to take the patch ultimately. Though I'll give my
tag for a rename / better name. More below:

> +	bool "Improve RT scheduling during long softirq execution"
> +	depends on SMP && !PREEMPT_RT
> +	default n
> +	help
> +	  Enable an optimization which tries to avoid placing RT tasks on
> +	  CPUs that are nonpreemptible while handling a potentially long
> +	  softirq, or on CPUs which may soon block preemption, such as a
> +	  CPU running a ksoftirqd thread that handles slow softirqs.
> +
>  config SYSFS_DEPRECATED
>  	bool "Enable deprecated sysfs features to support old userspace tools"
>  	depends on SYSFS
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 55f39c8f4203..3c628db807c8 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
>  #ifdef CONFIG_SMP
>  static int find_lowest_rq(struct task_struct *task);
>  
> +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> +#define __use_softirq_opt 1

Why not just use IS_ENABLED(CONFIG_RT_SOFTIRQ_OPT..) instead of defining new
macros?
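
e.g., the later call site in find_lowest_rq() could then read (sketch):

	if (IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION) ||
	    static_branch_unlikely(&sched_asym_cpucapacity)) {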

> + * Return whether the given cpu is currently non-preemptible
> + * while handling a potentially long softirq, or if the current
> + * task is likely to block preemptions soon because it is a
> + * ksoftirqd thread that is handling a slow softirq.
> + */
> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> +	u32 softirqs = per_cpu(active_softirqs, cpu) |
> +		       __cpu_softirq_pending(cpu);
> +	struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> +	struct task_struct *curr;
> +	struct rq *rq = cpu_rq(cpu);
> +	int ret;
> +
> +	rcu_read_lock();

Could you clarify what is the RCU read-side critical section protecting?

> +	curr = READ_ONCE(rq->curr); /* unlocked access */
> +	ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> +		 (curr == cpu_ksoftirqd ||
> +		  preempt_count() & SOFTIRQ_MASK);

If the goal is to avoid ksoftirqd when it is running an actual softirq,
doesn't ksoftirqd already run softirqs under local_bh_disable()? If so, then
SOFTIRQ_MASK should already be set. If ksoftirqd is in between softirqs (and
bh is enabled there), then scheduling the RT task on the CPU may preempt
ksoftirqd and give the RT task a chance to run anyway, right?

> +	rcu_read_unlock();
> +	return ret;
> +}
> +#else
> +#define __use_softirq_opt 0

define not needed if using IS_ENABLED.

more comments later :)

thanks,

 - Joel


> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> +	return false;
> +}
> +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> +
> +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
> +{
> +	return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);
> +}
> +
>  static int
>  select_task_rq_rt(struct task_struct *p, int cpu, int flags)
>  {
> @@ -1637,22 +1675,24 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
>  	 * This test is optimistic, if we get it wrong the load-balancer
>  	 * will have to sort it out.
>  	 *
> -	 * We take into account the capacity of the CPU to ensure it fits the
> -	 * requirement of the task - which is only important on heterogeneous
> -	 * systems like big.LITTLE.
> +	 * We use rt_task_fits_cpu() to evaluate if the CPU is busy with
> +	 * potentially long-running softirq work, as well as take into
> +	 * account the capacity of the CPU to ensure it fits the
> +	 * requirement of the task - which is only important on
> +	 * heterogeneous systems like big.LITTLE.
>  	 */
>  	test = curr &&
>  	       unlikely(rt_task(curr)) &&
>  	       (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
>  
> -	if (test || !rt_task_fits_capacity(p, cpu)) {
> +	if (test || !rt_task_fits_cpu(p, cpu)) {
>  		int target = find_lowest_rq(p);
>  
>  		/*
>  		 * Bail out if we were forcing a migration to find a better
>  		 * fitting CPU but our search failed.
>  		 */
> -		if (!test && target != -1 && !rt_task_fits_capacity(p, target))
> +		if (!test && target != -1 && !rt_task_fits_cpu(p, target))
>  			goto out_unlock;
>  
>  		/*
> @@ -1894,14 +1934,17 @@ static int find_lowest_rq(struct task_struct *task)
>  		return -1; /* No other targets possible */
>  
>  	/*
> -	 * If we're on asym system ensure we consider the different capacities
> -	 * of the CPUs when searching for the lowest_mask.
> +	 * If we're using the softirq optimization or if we are
> +	 * on asym system, ensure we consider the softirq processing
> +	 * or different capacities of the CPUs when searching for the
> +	 * lowest_mask.
>  	 */
> -	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> +	if (__use_softirq_opt ||

replace with IS_ENABLED.

> +	    static_branch_unlikely(&sched_asym_cpucapacity)) {
>  
>  		ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
>  					  task, lowest_mask,
> -					  rt_task_fits_capacity);
> +					  rt_task_fits_cpu);
>  	} else {
>  
>  		ret = cpupri_find(&task_rq(task)->rd->cpupri,
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index c8a6913c067d..35ee79dd8786 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -60,6 +60,13 @@ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp
>  
>  DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
>  
> +/*
> + * active_softirqs -- per cpu, a mask of softirqs that are being handled,
> + * with the expectation that approximate answers are acceptable and
> + * therefore no synchronization is used.
> + */
> +DEFINE_PER_CPU(u32, active_softirqs);
> +
>  const char * const softirq_to_name[NR_SOFTIRQS] = {
>  	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
>  	"TASKLET", "SCHED", "HRTIMER", "RCU"
> @@ -551,6 +558,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
>  restart:
>  	/* Reset the pending bitmask before enabling irqs */
>  	set_softirq_pending(0);
> +	__this_cpu_write(active_softirqs, pending);
>  
>  	local_irq_enable();
>  
> @@ -580,6 +588,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
>  		pending >>= softirq_bit;
>  	}
>  
> +	__this_cpu_write(active_softirqs, 0);
>  	if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
>  	    __this_cpu_read(ksoftirqd) == current)
>  		rcu_softirq_qs();
> -- 
> 2.38.0.rc1.362.ged0d419d3c-goog
>
Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by John Stultz 3 years, 4 months ago
On Sat, Oct 22, 2022 at 11:28 AM Joel Fernandes <joel@joelfernandes.org> wrote:
> On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> > diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> > index a749a8663841..e3a4add67e8c 100644
> > --- a/include/linux/interrupt.h
> > +++ b/include/linux/interrupt.h
> > @@ -582,6 +582,11 @@ enum
> >   * _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue
> >   */
> >  #define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(RCU_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ))
> > +/* Softirqs whose handling might be long: */
> > +#define LONG_SOFTIRQ_MASK (BIT(NET_TX_SOFTIRQ)    | \
> > +                        BIT(NET_RX_SOFTIRQ)    | \
> > +                        BIT(BLOCK_SOFTIRQ)     | \
> > +                        BIT(IRQ_POLL_SOFTIRQ))
>
> Instead of a mask, I wonder if a better approach is to have a heuristic based
> on measurement of softirq load. There is already code to measure irq load
> (see update_irq_load_avg). Perhaps Vincent would be willing to add the
> softirq load in it as well if asked nicely ;)
>
> That's much better because:
> 1. Any new softirqs added will also trigger this optimization.
> 2. Systems where these softirqs are quiet will not unnecessarily trigger it.
> 3. You can also include irq load so that things like IRQ storms also don't
> cause RT issues (when there are better "uncompromised" idle CPU candidates).
> 4. It may not need a CONFIG option at all if the optimization makes
> sense for everything. Think of all the systems not running Android.

Hey Joel,
  Thanks again for the feedback, and sorry to take a while to get back to you.

I'll have to think about this some more, but I'm hesitant about
switching to a load-based heuristic approach, mostly because
responding to "load" feels a bit fuzzy to me.
If we suddenly get a burst of softirqs preempting scheduling, we don't
want to have to wait for the load tracking to recognize this, as the
effect of the blocked audio processing will be immediate.

That's why this optimization just avoids scheduling rt tasks on cpus
that are running potentially long-running softirqs, as we can't be
sure in that instance we'll be able to run soon.


> >  /* map softirq index to softirq name. update 'softirq_to_name' in
> >   * kernel/softirq.c when adding a new softirq.
> > @@ -617,6 +622,7 @@ extern void raise_softirq_irqoff(unsigned int nr);
> >  extern void raise_softirq(unsigned int nr);
> >
> >  DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
> > +DECLARE_PER_CPU(u32, active_softirqs);
> >
> >  static inline struct task_struct *this_cpu_ksoftirqd(void)
> >  {
> > diff --git a/init/Kconfig b/init/Kconfig
> > index 532362fcfe31..3d1de6edcfa1 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -1284,6 +1284,16 @@ config SCHED_AUTOGROUP
> >         desktop applications.  Task group autogeneration is currently based
> >         upon task session.
> >
> > +config RT_SOFTIRQ_OPTIMIZATION
>
> I much dislike this config option name because it does not self-explain what
> the option is and it is too long. Moreover, any future such optimizations,
> even if rare, will have to come up with another name, causing more confusion.
> Instead, CONFIG_RT_AVOID_SOFTIRQS seems cleaner, but I'll defer to scheduler
> maintainers since they have to take the patch ultimately. Though I'll give my
> tag for a rename / better name. More below:

That's a fair point.  RT_SOFTIRQ_AWARE_SCHED maybe?


> > diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> > index 55f39c8f4203..3c628db807c8 100644
> > --- a/kernel/sched/rt.c
> > +++ b/kernel/sched/rt.c
> > @@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
> >  #ifdef CONFIG_SMP
> >  static int find_lowest_rq(struct task_struct *task);
> >
> > +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> > +#define __use_softirq_opt 1
>
> Why not just use IS_ENABLED(CONFIG_RT_SOFTIRQ_OPT..) instead of defining new
> macros?

Mostly for readability. The lines where it's added are already fairly long, i.e.:
  if (static_branch_unlikely(&sched_asym_cpucapacity)) {

So it seemed nicer to have the shorter macro.  But I'm not strongly
opinionated on this.

> > + * Return whether the given cpu is currently non-preemptible
> > + * while handling a potentially long softirq, or if the current
> > + * task is likely to block preemptions soon because it is a
> > + * ksoftirqd thread that is handling a slow softirq.
> > + */
> > +static bool cpu_busy_with_softirqs(int cpu)
> > +{
> > +     u32 softirqs = per_cpu(active_softirqs, cpu) |
> > +                    __cpu_softirq_pending(cpu);
> > +     struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> > +     struct task_struct *curr;
> > +     struct rq *rq = cpu_rq(cpu);
> > +     int ret;
> > +
> > +     rcu_read_lock();
>
> Could you clarify what is the RCU read-side critical section protecting?
>
> > +     curr = READ_ONCE(rq->curr); /* unlocked access */
> > +     ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> > +              (curr == cpu_ksoftirqd ||
> > +               preempt_count() & SOFTIRQ_MASK);
>
> If the goal is to avoid ksoftirqd when it is running an actual softirq,
> doesn't ksoftirqd already run softirqs under local_bh_disable()? If so, then
> SOFTIRQ_MASK should already be set. If ksoftirqd is in between softirqs (and
> bh is enabled there), then scheduling the RT task on the CPU may preempt
> ksoftirqd and give the RT task a chance to run anyway, right?

Yeah. I'm refactoring/simplifying this for v5, so hopefully it will be
more straightforward.

Thanks again for the feedback!
-john
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Alexander Gordeev 3 years, 5 months ago
On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> From: Connor O'Brien <connoro@google.com>

Hi John, Connor,

I took a cursory look and have a couple of hopefully meaningful
comments, but mostly - questions.

[...]

> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 55f39c8f4203..3c628db807c8 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
>  #ifdef CONFIG_SMP
>  static int find_lowest_rq(struct task_struct *task);
>  
> +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> +#define __use_softirq_opt 1
> +/*
> + * Return whether the given cpu is currently non-preemptible
> + * while handling a potentially long softirq, or if the current
> + * task is likely to block preemptions soon because it is a
> + * ksoftirqd thread that is handling a slow softirq.

What are slow softirqs in this context, compared to long ones?

> + */
> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> +	u32 softirqs = per_cpu(active_softirqs, cpu) |
> +		       __cpu_softirq_pending(cpu);
> +	struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> +	struct task_struct *curr;
> +	struct rq *rq = cpu_rq(cpu);
> +	int ret;
> +
> +	rcu_read_lock();
> +	curr = READ_ONCE(rq->curr); /* unlocked access */

select_task_rq_rt() takes the lock and reads curr already,
before calling this function. I think there is a way to
decompose it in a better way.

> +	ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> +		 (curr == cpu_ksoftirqd ||

EOL is extra.

> +		  preempt_count() & SOFTIRQ_MASK);

Could you please clarify this whole check in more detail?

What is the point in checking if a remote CPU is handling
a "long" softirq while the local one is handling any softirq?

> +	rcu_read_unlock();

Why does ret need to be calculated under the lock?

> +	return ret;
> +}
> +#else
> +#define __use_softirq_opt 0
> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> +	return false;
> +}
> +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> +
> +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)

To me, the new name is unfortunate, since it strips out the notion
of the reason. Instead of "CPU un/fits, because of capacity", it
reads as "CPU un/fits, because of ..." what?

> +{
> +	return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);

I guess the order needs to be swapped, as rt_task_fits_capacity()
is rather "quick" while cpu_busy_with_softirqs() is rather "slow".

> +}
> +
>  static int
>  select_task_rq_rt(struct task_struct *p, int cpu, int flags)
>  {
> @@ -1894,14 +1934,17 @@ static int find_lowest_rq(struct task_struct *task)
>  		return -1; /* No other targets possible */
>  
>  	/*
> -	 * If we're on asym system ensure we consider the different capacities
> -	 * of the CPUs when searching for the lowest_mask.
> +	 * If we're using the softirq optimization or if we are
> +	 * on asym system, ensure we consider the softirq processing
> +	 * or different capacities of the CPUs when searching for the
> +	 * lowest_mask.
>  	 */
> -	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> +	if (__use_softirq_opt ||

Why use __use_softirq_opt and not IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION)?

> +	    static_branch_unlikely(&sched_asym_cpucapacity)) {
>  
>  		ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
>  					  task, lowest_mask,
> -					  rt_task_fits_capacity);
> +					  rt_task_fits_cpu);
>  	} else {
>  
>  		ret = cpupri_find(&task_rq(task)->rd->cpupri,
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index c8a6913c067d..35ee79dd8786 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -60,6 +60,13 @@ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp
>  
>  DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
>  
> +/*
> + * active_softirqs -- per cpu, a mask of softirqs that are being handled,
> > + * with the expectation that approximate answers are acceptable and
> > + * therefore no synchronization is used.
> + */
> +DEFINE_PER_CPU(u32, active_softirqs);

I guess all active_softirqs uses need to be coupled with
IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION) check.

>  const char * const softirq_to_name[NR_SOFTIRQS] = {
>  	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
>  	"TASKLET", "SCHED", "HRTIMER", "RCU"
> @@ -551,6 +558,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
>  restart:
>  	/* Reset the pending bitmask before enabling irqs */
>  	set_softirq_pending(0);
> +	__this_cpu_write(active_softirqs, pending);
>  
>  	local_irq_enable();
>  
> @@ -580,6 +588,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
>  		pending >>= softirq_bit;
>  	}
>  
> +	__this_cpu_write(active_softirqs, 0);
>  	if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
>  	    __this_cpu_read(ksoftirqd) == current)
>  		rcu_softirq_qs();

Thanks!
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by John Stultz 3 years, 5 months ago
On Mon, Oct 17, 2022 at 5:40 AM Alexander Gordeev
<agordeev@linux.ibm.com> wrote:
>
> On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> > From: Connor O'Brien <connoro@google.com>
> > diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> > index 55f39c8f4203..3c628db807c8 100644
> > --- a/kernel/sched/rt.c
> > +++ b/kernel/sched/rt.c
> > @@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
> >  #ifdef CONFIG_SMP
> >  static int find_lowest_rq(struct task_struct *task);
> >
> > +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> > +#define __use_softirq_opt 1
> > +/*
> > + * Return whether the given cpu is currently non-preemptible
> > + * while handling a potentially long softirq, or if the current
> > + * task is likely to block preemptions soon because it is a
> > + * ksoftirqd thread that is handling a slow softirq.
>
> What are slow softirqs in this context, compared to long ones?

So the long softirqs are any of the softirqs in the LONG_SOFTIRQ_MASK
(net, block, irqpoll).
The "slow softirq" just refers to softirq processing that has already
been pushed out to ksoftirqd on the target cpu.

By default ksoftirqd's priority is likely not high enough to
prevent the rt task from running; however, it does disable preemption
while it's running the softirqs, so it effectively could block the rt
task from running for a while.


> > + */
> > +static bool cpu_busy_with_softirqs(int cpu)
> > +{
> > +     u32 softirqs = per_cpu(active_softirqs, cpu) |
> > +                    __cpu_softirq_pending(cpu);
> > +     struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> > +     struct task_struct *curr;
> > +     struct rq *rq = cpu_rq(cpu);
> > +     int ret;
> > +
> > +     rcu_read_lock();
> > +     curr = READ_ONCE(rq->curr); /* unlocked access */
>
> select_task_rq_rt() takes the lock and reads curr already,
> before calling this function. I think there is a way to
> decompose it in a better way.

Hrm. Suggestions? As select_task_rq_rt() is only one of the callers.
Trying to pass curr into cpu_busy_with_softirqs() would mean
cpupri_find_fitness() would need to read the cpu_rq(cpu)->curr for the
specified cpu and pass that in.

The original patch actually was


> > +     ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> > +              (curr == cpu_ksoftirqd ||
>
> EOL is extra.

Removed - thanks!

>
> > +               preempt_count() & SOFTIRQ_MASK);
>
> Could you please clarify this whole check in more detail?
>
> What is the point in checking if a remote CPU is handling
> a "long" softirq while the local one is handling any softirq?

Good question! This looks like an error from my rework of the earlier
version of the patch.
In v1 it was:
   task_thread_info(curr)->preempt_count & SOFTIRQ_MASK));
And it looks like I erroneously condensed that to preempt_count() &
SOFTIRQ_MASK  treating curr (from the target cpu rq) as if it were
current. :P

I'll fix this. Thank you for catching it!

Just to expand what it should be in detail:
1:  (softirqs & LONG_SOFTIRQ_MASK) &&
2:  (curr == cpu_ksoftirqd ||
3:  task_thread_info(curr)->preempt_count & SOFTIRQ_MASK)

Where we're checking
1) that the active_softirqs and __cpu_softirq_pending() values
indicate a long softirq is running or pending on the target cpu.
AND (
2) The current task on the target cpu is ksoftirqd
OR
3) The preempt_count of the current task on the target cpu has SOFTIRQ entries
)

> > +     rcu_read_unlock();
>
> Why does ret need to be calculated under the lock?

I believe it is the rq->curr == cpu_ksoftirqd (and the erroneously
removed task_thread_info(curr)) check?
Part of me wonders if it should use rcu_dereference(rq->curr) as well?

> > +     return ret;
> > +}
> > +#else
> > +#define __use_softirq_opt 0
> > +static bool cpu_busy_with_softirqs(int cpu)
> > +{
> > +     return false;
> > +}
> > +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> > +
> > +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
>
> To me, the new name is unfortunate, since it strips out the notion
> of the reason. Instead of "CPU un/fits, because of capacity", it
> reads as "CPU un/fits, because of ..." what?

As with all complicated questions: "It depends" :) On the kernel
config specifically.

The earlier version of the patch had:
rt_task_fits_capacity_and_may_preempt() but Qais suggested it be
switched to the generic "fits_cpu" as it would depend on the kernel
config as to which aspect of "fit" was being considered.

I guess it could be rt_task_fits_capacity_and_softirq_free()?

> > +{
> > +     return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);
>
> I guess the order needs to be swapped, as rt_task_fits_capacity()
> is rather "quick" while cpu_busy_with_softirqs() is rather "slow".

Fair point. I've gone ahead and tweaked that. Thanks!

> > @@ -1894,14 +1934,17 @@ static int find_lowest_rq(struct task_struct *task)
> >               return -1; /* No other targets possible */
> >
> >       /*
> > -      * If we're on asym system ensure we consider the different capacities
> > -      * of the CPUs when searching for the lowest_mask.
> > +      * If we're using the softirq optimization or if we are
> > +      * on asym system, ensure we consider the softirq processing
> > +      * or different capacities of the CPUs when searching for the
> > +      * lowest_mask.
> >        */
> > -     if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> > +     if (__use_softirq_opt ||
>
> Why use __use_softirq_opt and not IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION)?

Because IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION) seemed a bit long
and less easy to read?
I'm ambivalent as to whether folks would rather have the CONFIG switch
be explicit here, but to me it seemed nicer to read the flag variable.

> > @@ -60,6 +60,13 @@ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp
> >
> >  DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
> >
> > +/*
> > + * active_softirqs -- per cpu, a mask of softirqs that are being handled,
> > + * with the expectation that approximate answers are acceptable and
> > + * therefore no synchronization is used.
> > + */
> > +DEFINE_PER_CPU(u32, active_softirqs);
>
> I guess all active_softirqs uses need to be coupled with
> IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION) check.

I guess it could be wrapped. I didn't see the per-cpu accounting as
troublesome enough to conditionalize, but the extra per-cpu data isn't
great if no one uses it. So I'll do that.

Thanks so much for the review and feedback! I really appreciate it.
-john
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Qais Yousef 3 years, 5 months ago
On 10/17/22 20:42, John Stultz wrote:

> > > +     return ret;
> > > +}
> > > +#else
> > > +#define __use_softirq_opt 0
> > > +static bool cpu_busy_with_softirqs(int cpu)
> > > +{
> > > +     return false;
> > > +}
> > > +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> > > +
> > > +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
> >
> > To me, the new name is unfortunate, since it strips out the notion
> > of the reason. Instead of "CPU un/fits, because of capacity", it
> > reads as "CPU un/fits, because of ..." what?
> 
> As with all complicated questions: "It depends" :) On the kernel
> config specifically.
> 
> The earlier version of the patch had:
> rt_task_fits_capacity_and_may_preempt() but Qais suggested it be
> switched to the generic "fits_cpu" as it would depend on the kernel
> config as to which aspect of "fit" was being considered.
> 
> I guess it could be rt_task_fits_capacity_and_softirq_free()?

My line of thinking is that we have multiple reasons which could potentially
come and go. Having a simple generic name is future-proof against the
details of the reason.

For example, in the future we might find a better way to handle the softirq
conflict and remove this reason; or we might find a new 'reason' is required.
Documenting that in the function header (if required; I see the one-line
implementation is self-descriptive so far) rather than in the name made
more sense to me, hence the suggestion.

I am already aware of another new reason required to handle thermal pressure
when checking for capacity [1]. The discussion has stalled, but the problem is
still there and we'd need to address it. So I expect this code will be massaged
somewhat in the near future.

Sticking to rt_task_fits_cpu() will reduce the churn IMHO. But if you really
prefer something else, works for me :-)

[1] https://lore.kernel.org/lkml/20220514235513.jm7ul2y6uddj6eh2@airbuntu/


Cheers

--
Qais Yousef
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Alexander Gordeev 3 years, 5 months ago
On Mon, Oct 17, 2022 at 08:42:53PM -0700, John Stultz wrote:
> Hrm. Suggestions? As select_task_rq_rt() is only one of the callers.
> Trying to pass curr into cpu_busy_with_softirqs() would mean
> cpupri_find_fitness() would need to read the cpu_rq(cpu)->curr for the
> specified cpu and pass that in.

Maybe you could have a lightweight checker that accepts rq and curr
and gets called from select_task_rq_rt(). Then you could call that
same checker from cpu_busy_with_softirqs().
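
Something like this hypothetical sketch (names made up, and the
preempt_count part of the check is elided here):

	/* core check; select_task_rq_rt() can pass in the curr it read */
	static bool __cpu_busy_with_softirqs(int cpu, struct task_struct *curr)
	{
		u32 softirqs = per_cpu(active_softirqs, cpu) |
			       __cpu_softirq_pending(cpu);

		return (softirqs & LONG_SOFTIRQ_MASK) &&
		       curr == per_cpu(ksoftirqd, cpu);
	}

	static bool cpu_busy_with_softirqs(int cpu)
	{
		return __cpu_busy_with_softirqs(cpu,
						READ_ONCE(cpu_rq(cpu)->curr));
	}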

> Just to expand what it should be in detail:
> 1:  (softirqs & LONG_SOFTIRQ_MASK) &&
> 2:  (curr == cpu_ksoftirqd ||
> 3:  task_thread_info(curr)->preempt_count & SOFTIRQ_MASK)
> 
> Where we're checking
> 1) that  the active_softirqs and __cpu_softirq_pending() values on the
> target cpu are running a long softirq.
> AND (
> 2) The current task on the target cpu is ksoftirqd
> OR
> 3) The preempt_count of the current task on the target cpu has SOFTIRQ entries
> )

2) When the target CPU is already handling or about to handle long
softirqs, what difference does it make whether it is also running ksoftirqd?

3) What is the point of this check when 1) is true already?

Thanks!
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by John Stultz 3 years, 5 months ago
On Wed, Oct 19, 2022 at 2:11 AM Alexander Gordeev
<agordeev@linux.ibm.com> wrote:
>
> On Mon, Oct 17, 2022 at 08:42:53PM -0700, John Stultz wrote:
> > Hrm. Suggestions? As select_task_rq_rt() is only one of the callers.
> > Trying to pass curr into cpu_busy_with_softirqs() would mean
> > cpupri_find_fitness() would need to read the cpu_rq(cpu)->curr for the
> > specified cpu and pass that in.
>
> Maybe you could have a lightweight checker that accepts rq and curr
> and gets called from select_task_rq_rt(). Then you could call that
> same checker from cpu_busy_with_softirqs().

Fair enough. Though your other questions are making me wonder if this
is necessary.

> > Just to expand what it should be in detail:
> > 1:  (softirqs & LONG_SOFTIRQ_MASK) &&
> > 2:  (curr == cpu_ksoftirqd ||
> > 3:  task_thread_info(curr)->preempt_count & SOFTIRQ_MASK)
> >
> > Where we're checking
> > 1) that  the active_softirqs and __cpu_softirq_pending() values on the
> > target cpu are running a long softirq.
> > AND (
> > 2) The current task on the target cpu is ksoftirqd
> > OR
> > 3) The preempt_count of the current task on the target cpu has SOFTIRQ entries
> > )
>
> 2) When the target CPU is already handling or about to handle long
> softirqs, what difference does it make whether it is also running ksoftirqd?

Again, a good question! From my understanding, the original patch was
basically checking just #2 and #3 above, then additional logic was
added to narrow it to only the LONG_SOFTIRQ_MASK values, so that may
make the older part of the check redundant.

I fret there are some edge cases where softirqs might be pending on
the target cpu but ksoftirqd isn't running yet, maybe due to a
lowish-prio rt task, such that the cpu could still be considered a
good target. But this seems a bit of a stretch.

> 3) What is the point of this check when 1) is true already?

Yeah, the more I think about this, the more duplicative it seems.
Again, there are some edge details about the preempt_count being set
before the active_softirq accounting is set, but the whole decision
here about the target cpus is a bit racy to begin with, so I'm not
sure if that is significant.

So I'll go ahead and simplify the check to just the LONG_SOFTIRQ_MASK
& (active | pending softirqs) check. This should avoid the need to
pull the cpu_rq(cpu)->curr value and simplify things.
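
I.e., roughly (an untested sketch of the planned simplification):

	static bool cpu_busy_with_softirqs(int cpu)
	{
		u32 softirqs = per_cpu(active_softirqs, cpu) |
			       __cpu_softirq_pending(cpu);

		return !!(softirqs & LONG_SOFTIRQ_MASK);
	}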

Will send out a new version once I've been able to validate that
simplification doesn't introduce a regression.

Thanks so much for the feedback and suggestions!
-john
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Alexander Gordeev 3 years, 5 months ago
On Wed, Oct 19, 2022 at 03:09:15PM -0700, John Stultz wrote:

Hi John,

[...]

> So I'll go ahead and simplify the check to just the LONG_SOFTIRQ_MASK
> & (active | pending softirqs) check. This should avoid the need to
> pull the cpu_rq(cpu)->curr value and simplify things.

In my reading of your approach, if you find a way to additionally
indicate long softirqs being handled by the remote ksoftirqd, it
would cover all the obvious/non-corner cases.

> -john
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Joel Fernandes 3 years, 5 months ago
On Thu, Oct 20, 2022 at 02:47:54PM +0200, Alexander Gordeev wrote:
> On Wed, Oct 19, 2022 at 03:09:15PM -0700, John Stultz wrote:
> 
> Hi John,
> 
> [...]
> 
> > So I'll go ahead and simplify the check to just the LONG_SOFTIRQ_MASK
> > & (active | pending softirqs) check. This should avoid the need to
> > pull the cpu_rq(cpu)->curr value and simplify things.
> 
> In my reading of your approach, if you find a way to additionally
> indicate long softirqs being handled by the remote ksoftirqd, it
> would cover all the obvious/non-corner cases.

How will that help? The long softirq executing inside ksoftirqd will disable
preemption and prevent any RT task from executing.

Did I miss something?

thanks,

 - Joel
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Alexander Gordeev 3 years, 5 months ago
On Sat, Oct 22, 2022 at 06:34:37PM +0000, Joel Fernandes wrote:
> > In my reading of your approach, if you find a way to additionally
> > indicate long softirqs being handled by the remote ksoftirqd, it
> > would cover all the obvious/non-corner cases.
> 
> How will that help? The long softirq executing inside ksoftirqd will disable
> preemption and prevent any RT task from executing.

Right. So the check to deem a remote CPU unfit would (logically) look like this:

(active | pending | ksoftirqd) & LONG_SOFTIRQ_MASK

> Did I miss something?

Or me :)

> thanks,
> 
>  - Joel
>
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by John Stultz 3 years, 4 months ago
On Sun, Oct 23, 2022 at 12:45 AM Alexander Gordeev
<agordeev@linux.ibm.com> wrote:
>
> On Sat, Oct 22, 2022 at 06:34:37PM +0000, Joel Fernandes wrote:
> > > In my reading of your approach, if you find a way to additionally
> > > indicate long softirqs being handled by the remote ksoftirqd, it
> > > would cover all the obvious/non-corner cases.
> >
> > How will that help? The long softirq executing inside ksoftirqd will disable
> > preemption and prevent any RT task from executing.
>
> Right. So the check to deem a remote CPU unfit would (logically) look like this:
>
> (active | pending | ksoftirqd) & LONG_SOFTIRQ_MASK
>

Alexander,
  Apologies for the late response here; some other work took priority for a bit.

Thanks again for the feedback. But I wanted to follow up on your
suggestion, as I'm not quite sure I see why we need the separate
ksoftirqd bitmask here?

As run_ksoftirqd() basically looks at the pending set and calls
__do_softirq(), which then moves the bits from the pending mask to
the active mask while they are being run.

So (pending|active)&LONG_SOFTIRQ_MASK seems like it should be a
sufficient check regardless of whether the remote cpu is in softirq or
ksoftirqd, no?
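
For reference, run_ksoftirqd() is roughly the following (lightly
paraphrased from kernel/softirq.c of this era, just to illustrate):

	static void run_ksoftirqd(unsigned int cpu)
	{
		ksoftirqd_run_begin();
		if (local_softirq_pending()) {
			/* handlers run here; pending bits move to active */
			__do_softirq();
			ksoftirqd_run_end();
			cond_resched();
			return;
		}
		ksoftirqd_run_end();
	}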

thanks
-john
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Alexander Gordeev 3 years, 4 months ago
On Mon, Nov 14, 2022 at 11:08:36PM -0800, John Stultz wrote:

Hi John,
...
> > Right. So the check to deem a remote CPU unfit would (logically) look like this:
> >
> > (active | pending | ksoftirqd) & LONG_SOFTIRQ_MASK
...
> As run_ksoftirqd() basically looks at the pending set and calls
> __do_softirq(), which then moves the bits from the pending mask to
> the active mask while they are being run.
> 
> So (pending|active)&LONG_SOFTIRQ_MASK seems like it should be a
> sufficient check regardless of whether the remote cpu is in softirq or
> ksoftirqd, no?

I did not realize run_ksoftirqd()->__do_softirq() covers it.
Sorry for the noise.

Thanks!

> thanks
> -john
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by Qais Yousef 3 years, 5 months ago
On 10/19/22 15:09, John Stultz wrote:

> I fret there are some edge cases where softirqs might be pending on
> the target cpu but ksoftirqd isn't running yet, maybe due to a
> lowish-prio rt task, such that the cpu could still be considered a
> good target. But this seems a bit of a stretch.

Could using ksoftirqd_running() instead help with robustness here?


Cheers

--
Qais Yousef
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by John Stultz 3 years, 5 months ago
On Mon, Oct 17, 2022 at 8:42 PM John Stultz <jstultz@google.com> wrote:
>  On Mon, Oct 17, 2022 at 5:40 AM Alexander Gordeev
> <agordeev@linux.ibm.com> wrote:
> > On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> > > +               preempt_count() & SOFTIRQ_MASK);
> >
> > Could you please clarify this whole check in more detail?
> >
> > What is the point in checking if a remote CPU is handling
> > a "long" softirq while the local one is handling any softirq?
>
> Good question! This looks like an error from my rework of the earlier
> version of the patch.
> In v1 it was:
>    task_thread_info(curr)->preempt_count & SOFTIRQ_MASK));
> And it looks like I erroneously condensed that to preempt_count() &
> SOFTIRQ_MASK  treating curr (from the target cpu rq) as if it were
> current. :P
>
> I'll fix this. Thank you for catching it!

Ah, it's not so trivial to fix, as I see part of my motivation for
condensing it was that task_thread_info(curr)->preempt_count doesn't
build on x86 (which uses percpu values instead of threadinfo).

So I'll have to think a bit more about how to do this generically, and
whether the conditionalization can be otherwise simplified.

Thanks again for pointing the issue out!
-john
Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Posted by John Stultz 3 years, 5 months ago
On Mon, Oct 17, 2022 at 8:42 PM John Stultz <jstultz@google.com> wrote:
>  On Mon, Oct 17, 2022 at 5:40 AM Alexander Gordeev
> <agordeev@linux.ibm.com> wrote:
> > select_task_rq_rt() takes the lock and reads curr already,
> > before calling this funciton. I think there is a way to
> > decompose it in a better way.
>
> Hrm. Suggestions? As select_task_rq_rt() is only one of the callers.
> Trying to pass curr into cpu_busy_with_softirqs() would mean
> cpupri_find_fitness() would need to read the cpu_rq(cpu)->curr for the
> specified cpu and pass that in.
>
> The original patch actually was
>

Whoops, I didn't finish my thought there. I was going to say the
original patch did something similar to your suggestion, passing the
target cpu's curr task value in from select_task_rq_rt().
However, it didn't use cpupri_find_fitness() and duplicated a lot
of similar logic in a way that is not as nice as what the current
version uses. But I'm very much open to suggestions for other ways to
simplify that, as you suggest.

thanks
-john