Hi,

This series introduces two new scheduler features: UTIL_FITS_CAPACITY
and SELECT_BIAS_PREV. Used together, they achieve a 47% speedup on a
hackbench workload that leaves some idle CPU time on a 192-core AMD
EPYC machine.

The main metrics that are significantly improved are:
- cpu-migrations are reduced by 93%,
- CPU utilization is increased by 22%.
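
For context, both flags are wired through the usual scheduler feature
flag machinery (the diffstat below touches kernel/sched/features.h).
The sketch that follows is only an illustration of that mechanism, not
the actual patches: the default-enabled values and the
pick_target_cpu() helper are hypothetical, while SCHED_FEAT(),
sched_feat(), fits_capacity(), task_util() and capacity_of() are
existing kernel/sched helpers available inside fair.c.

/* kernel/sched/features.h -- sketch only; default values are assumptions. */
SCHED_FEAT(UTIL_FITS_CAPACITY, true)
SCHED_FEAT(SELECT_BIAS_PREV, true)

/*
 * kernel/sched/fair.c -- hypothetical helper showing how such flags are
 * typically consulted: bias the wakee back to its previous CPU when its
 * utilization fits that CPU's capacity.
 */
static int pick_target_cpu(struct task_struct *p, int prev_cpu)
{
	if (sched_feat(SELECT_BIAS_PREV) && sched_feat(UTIL_FITS_CAPACITY) &&
	    fits_capacity(task_util(p), capacity_of(prev_cpu)))
		return prev_cpu;	/* stay put, skip the migration */

	return -1;			/* fall back to the regular selection */
}

Because the flags go through sched_feat(), they can be flipped at run
time from debugfs (/sys/kernel/debug/sched/features on recent kernels),
which should make it easy to measure each feature's effect in isolation
when testing other workloads.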

Feedback is welcome. I am especially interested to learn whether this
series has positive or detrimental effects on the performance of other
workloads.

Thanks,

Mathieu

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Swapnil Sapkal <Swapnil.Sapkal@amd.com>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Chen Yu <yu.c.chen@intel.com>
Cc: Tim Chen <tim.c.chen@intel.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Gautham R. Shenoy <gautham.shenoy@amd.com>
Cc: x86@kernel.org

Mathieu Desnoyers (2):
  sched/fair: Introduce UTIL_FITS_CAPACITY feature
  sched/fair: Introduce SELECT_BIAS_PREV to reduce migrations

kernel/sched/fair.c | 77 ++++++++++++++++++++++++++++++++++++-----
kernel/sched/features.h | 12 +++++++
kernel/sched/sched.h | 5 +++
3 files changed, 86 insertions(+), 8 deletions(-)
--
2.39.2