[PATCH 0/3] sched/core: Fixes and enhancements around spurious need_resched() and idle load balancing

K Prateek Nayak posted 3 patches 1 year, 5 months ago
Since commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()"), an idle CPU in TIF_POLLING_NRFLAG can
be pulled out of idle by setting TIF_NEED_RESCHED instead of sending an
actual IPI. This affects at least three scenarios, described below:

o A need_resched() check within a call function does not necessarily
  indicate a task wakeup since a CPU intending to send an IPI to an
  idle target in TIF_POLLING_NRFLAG mode can simply queue the
  SMP-call-function and set the TIF_NEED_RESCHED flag to pull the
  polling target out of idle. The SMP-call-function will be executed by
  flush_smp_call_function_queue() on the idle-exit path. On x86, where
  mwait_idle_with_hints() sets TIF_POLLING_NRFLAG for long idling,
  this leads to the idle load balancer bailing out early since the
  need_resched() check in nohz_csd_func() returns true in most
  instances.

o A TIF_POLLING_NRFLAG idling CPU woken up to process an IPI will end
  up calling schedule() even in cases where the call function does not
  wake up a new task on the idle CPU, thus delaying the idle re-entry.

o Julia Lawall reported a case where a softirq raised from a
  SMP-call-function on an idle CPU will wake up ksoftirqd since
  flush_smp_call_function_queue() executes in the idle thread's context.
  This can throw off the idle load balancer by making the idle CPU
  appear busy, since ksoftirqd just woke up on that CPU [1].

The three patches address each of the above issues individually, the
first one by removing the need_resched() check in nohz_csd_func() with
a proper justification, the second by introducing a fast-path in
__schedule() to speed up idle re-entry in case TIF_NEED_RESCHED was set
simply to process an IPI that did not perform a wakeup, and the third by
notifying raise_softirq() that the softirq was raised from a
SMP-call-function executed by the idle or migration thread in
flush_smp_call_function_queue(), and waking ksoftirqd is unnecessary
since a call to do_softirq_post_smp_call_flush() will follow soon.

Previous attempts to solve these problems involved introducing a new
TIF_NOTIFY_IPI flag to notify a TIF_POLLING_NRFLAG CPU of a pending IPI
and skip calling __schedule() in such cases, but that involved atomic
ops which could have performance implications [2]. Instead, Peter
suggested the approach outlined in the first two patches of the series.
The third one is an RFC that (hopefully) solves the problem Julia was
chasing down related to idle load balancing.

[1] https://lore.kernel.org/lkml/fcf823f-195e-6c9a-eac3-25f870cb35ac@inria.fr/
[2] https://lore.kernel.org/lkml/20240615014256.GQ8774@noisy.programming.kicks-ass.net/

This series is based on tip:sched/core at commit c793a62823d1
("sched/core: Drop spinlocks on contention iff kernel is preemptible")

--
K Prateek Nayak (2):
  sched/core: Remove the unnecessary need_resched() check in
    nohz_csd_func()
  softirq: Avoid waking up ksoftirqd from
    flush_smp_call_function_queue()

Peter Zijlstra (1):
  sched/core: Introduce SM_IDLE and an idle re-entry fast-path in
    __schedule()

 kernel/sched/core.c | 40 ++++++++++++++++++++--------------------
 kernel/sched/smp.h  |  2 ++
 kernel/smp.c        | 32 ++++++++++++++++++++++++++++++++
 kernel/softirq.c    | 10 +++++++++-
 4 files changed, 63 insertions(+), 21 deletions(-)


base-commit: c793a62823d1ce8f70d9cfc7803e3ea436277cda
-- 
2.34.1
Re: [PATCH 0/3] sched/core: Fixes and enhancements around spurious need_resched() and idle load balancing
Posted by Chen Yu 1 year, 4 months ago
Hi Prateek,

On 2024-07-10 at 09:02:07 +0000, K Prateek Nayak wrote:
> [ ... cover letter snipped ... ]
> This patch is based on tip:sched/core at commit c793a62823d1
> ("sched/core: Drop spinlocks on contention iff kernel is preemptible")
>

As discussed in another thread [1], I ran a simple test to stress
newidle balance with this patch series applied, to see if there is any
impact on newidle balance cost.

This synthetic test script (shown below) launches NR_CPU processes (240
on my platform). Each process loops on nanosleep(1 us), which should
trigger newidle balance as often as possible.
 
Before the 3 patches are applied, on commit c793a62823d1:
 
3.67%     0.33%  [kernel.kallsyms]     [k] sched_balance_newidle
2.85%     2.16%  [kernel.kallsyms]     [k] update_sd_lb_stats.constprop.0
 
After 3 patches are applied:
3.51%     0.32%  [kernel.kallsyms]         [k] sched_balance_newidle
2.71%     2.06%  [kernel.kallsyms]         [k] update_sd_lb_stats.constprop.0
 
It seems there is not much difference in the percentage of newidle
balance. My understanding is that this patch set mainly deals with the
false positive of need_resched() caused by an IPI, thus avoiding newidle
balance when the wakeup is only an IPI. In this synthetic test every
wakeup is a real wakeup, so it does not make much difference. I think
IPI-intensive workloads would benefit most from this change, so I'm
planning to use ipistorm to double-check.
 
In the scenario of this synthetic test, there seems to be no need to
launch newidle balance at all: the sleeping task will be woken up soon,
so there is no need to pull another task to the CPU. Besides, the
statistics calculation is not linearly scalable and remains the
bottleneck of newidle balance. I'm thinking of doing some evaluation
based on your fix.

 
The test code:
i=1;while [ $i -le "240" ]; do ./nano_sleep 1000 & i=$(($i+1)); done;

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv)
{
	int response, sleep_ns;
	struct timespec remaining, request = { 0, 100 };

	if (argc != 2) {
		printf("please specify the sleep duration in nanoseconds\n");
		return -1;
	}
	sleep_ns = atoi(argv[1]);

	while (1) {
		request.tv_sec = 0;
		request.tv_nsec = sleep_ns;
		response = nanosleep(&request, &remaining);
	}
	return 0;
}

[1] https://lore.kernel.org/lkml/20240717121745.GF26750@noisy.programming.kicks-ass.net/

thanks,
Chenyu