[PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock
Posted by Changwoo Min 1 year, 1 month ago
Many BPF schedulers (such as scx_central, scx_lavd, scx_rusty, scx_bpfland,
and scx_flash) frequently call bpf_ktime_get_ns() to track tasks' runtime
properties. If supported, bpf_ktime_get_ns() eventually reads a hardware
timestamp counter (TSC). However, reading the hardware TSC is not
performant on some hardware platforms, degrading IPC.

This patchset addresses the performance problem of reading the hardware TSC
by leveraging the rq clock in the scheduler core, introducing a
scx_bpf_now_ns() function for BPF schedulers. Whenever the rq clock
is fresh and valid, scx_bpf_now_ns() returns the rq clock, which has
already been updated by the scheduler core (update_rq_clock), reducing
how often the hardware TSC needs to be read.

When the rq lock is released (rq_unpin_lock), the rq clock is invalidated,
so a subsequent scx_bpf_now_ns() call returns a fresh sched_clock to the caller.

In addition, scx_bpf_now_ns() guarantees that the clock is monotonically
non-decreasing on each CPU, so the clock can never go backward on the
same CPU.
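
To make this concrete, the selection logic can be sketched roughly as
follows. This is a simplified illustration rather than the exact code in
this series; the rq->scx.clock/flags fields and the SCX_RQ_CLK_VALID flag
come from the changelog, while the surrounding plumbing is an assumption.

    /*
     * Illustrative sketch of the scx_bpf_now_ns() idea (not the exact
     * patch code). rq->scx.{clock,flags} and SCX_RQ_CLK_VALID follow
     * the changelog; the rest is standard scheduler-core plumbing.
     */
    __bpf_kfunc u64 scx_bpf_now_ns(void)
    {
            struct rq *rq;
            u64 clock;

            preempt_disable();
            rq = this_rq();

            if (READ_ONCE(rq->scx.flags) & SCX_RQ_CLK_VALID) {
                    /*
                     * The rq clock was updated by the scheduler core
                     * (update_rq_clock) and has not been invalidated
                     * yet, so reuse it instead of reading the clock
                     * source (possibly the TSC) again.
                     */
                    clock = READ_ONCE(rq->scx.clock);
            } else {
                    /*
                     * The rq lock was released (rq_unpin_lock), so the
                     * cached clock may be stale; fall back to a fresh
                     * per-CPU sched_clock.
                     */
                    clock = sched_clock_cpu(cpu_of(rq));
            }

            preempt_enable();
            return clock;
    }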

Using scx_bpf_now_ns() reduces the number of hardware TSC reads by
50-80% (76% for scx_lavd, 82% for scx_bpfland, and 51% for scx_rusty)
in the following benchmark:

    perf bench -f simple sched messaging -t -g 20 -l 6000
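
For reference, converting a call site in a BPF scheduler is a one-line
change along the lines of the sketch below. The callback name, struct
task_ctx, and lookup_task_ctx() are hypothetical placeholders; only
bpf_ktime_get_ns() and scx_bpf_now_ns() are actual interfaces.

    /*
     * Hypothetical BPF scheduler snippet: struct task_ctx and
     * lookup_task_ctx() are illustrative placeholders.
     */
    void BPF_STRUCT_OPS(example_running, struct task_struct *p)
    {
            struct task_ctx *taskc = lookup_task_ctx(p);

            if (!taskc)
                    return;

            /* before: always queries the clock source (possibly the TSC) */
            /* taskc->last_running_at = bpf_ktime_get_ns(); */

            /* after: reuses the rq clock whenever it is fresh and valid */
            taskc->last_running_at = scx_bpf_now_ns();
    }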

ChangeLog v5 -> v6:
  - Drop prev_clock because a race between a process context and an interrupt
    context (e.g., timer interrupt) is not observable from a caller (discussed
    with Tejun offline), so it is not necessary to check prev_clock.

ChangeLog v4 -> v5:
  - Merge patches 2, 3, and 4 into one for readability.
  - Use a time helper (time_after_eq64) for time comparison in scx_bpf_now_ns().
  - Do not validate the rq clock outside the rq critical section for
    more predictable behavior.
  - Improve the comment at scx_bpf_now_ns() for readability.
  - Rename scx_rq_clock_stale() to scx_rq_clock_invalidate().
  - Invalidate all the rq clocks upon unloading to prevent getting outdated
    rq clocks from a previous scx scheduler.
  - Use READ/WRITE_ONCE() when accessing rq->scx.{clock, prev_clock, flags}
    to properly handle concurrent accesses from an interrupt context.
  - Add time helpers for BPF schedulers (a minimal sketch follows this changelog).
  - Update the rdtsc reduction numbers in the cover letter with the latest
    scx_scheds (1.0.7) and the updated scx_bpf_now_ns().
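
The time helpers mentioned above are intended for wrap-safe comparisons of
u64 nanosecond timestamps, modeled on the kernel's time_after_eq64(). A
minimal sketch, with names chosen here for illustration only and not
necessarily matching the helpers added in patch 4/6:

    /* Minimal sketch of wrap-safe u64 time helpers; names are illustrative. */
    static inline s64 time_delta(u64 after, u64 before)
    {
            return (s64)(after - before) > 0 ? (s64)(after - before) : 0;
    }

    static inline bool time_after(u64 a, u64 b)
    {
            return (s64)(b - a) < 0;        /* true if a is later than b */
    }

    static inline bool time_before(u64 a, u64 b)
    {
            return time_after(b, a);
    }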

ChangeLog v3 -> v4:
  - Separate the code relocation related to scx_enabled() into a
    separate patch.
  - Remove scx_rq_clock_stale() after (or before) ops.running() and
    ops.update_idle() calls
  - Rename scx_bpf_clock_get_ns() to scx_bpf_now_ns() and revise it to
    address the comments
  - Move the per-CPU variable holding a prev clock into scx_rq
    (rq->scx.prev_clock)
  - Add a comment describing when the clock could go backward in
    scx_bpf_now_ns()
  - Rebase the code to the tip of Tejun's sched_ext repo (for-next
    branch)

ChangeLog v2 -> v3:
  - To avoid unnecessarily modifying cache lines, scx_rq_clock_update()
    and scx_rq_clock_stale() update the clock and flags only when a
    sched_ext scheduler is enabled.

ChangeLog v1 -> v2:
  - Rename SCX_RQ_CLK_UPDATED to SCX_RQ_CLK_VALID to denote the validity
    of an rq clock clearly.
  - Rearrange the clock and flags fields in struct scx_rq to make sure
    they are in the same cacheline to minimize cache misses.
  - Add an additional explanation to the commit message in the 2/5 patch
    describing when the rq clock will be reused with an example.
  - Fix typos
  - Rebase the code to the tip of Tejun's sched_ext repo

Changwoo Min (6):
  sched_ext: Relocate scx_enabled() related code
  sched_ext: Implement scx_bpf_now_ns()
  sched_ext: Add scx_bpf_now_ns() for BPF scheduler
  sched_ext: Add time helpers for BPF schedulers
  sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now_ns()
  sched_ext: Use time helpers in BPF schedulers

 kernel/sched/core.c                      |  6 +-
 kernel/sched/ext.c                       | 75 ++++++++++++++++++-
 kernel/sched/sched.h                     | 52 +++++++++----
 tools/sched_ext/include/scx/common.bpf.h | 95 ++++++++++++++++++++++++
 tools/sched_ext/include/scx/compat.bpf.h |  5 ++
 tools/sched_ext/scx_central.bpf.c        |  9 +--
 tools/sched_ext/scx_flatcg.bpf.c         | 13 +---
 tools/sched_ext/scx_simple.bpf.c         |  5 --
 8 files changed, 222 insertions(+), 38 deletions(-)

-- 
2.47.1
Re: [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock
Posted by Andrea Righi 1 year, 1 month ago
Hi Changwoo,

On Fri, Dec 20, 2024 at 03:20:19PM +0900, Changwoo Min wrote:
> Many BPF schedulers (such as scx_central, scx_lavd, scx_rusty, scx_bpfland,
> and scx_flash) frequently call bpf_ktime_get_ns() to track tasks' runtime
> properties. If supported, bpf_ktime_get_ns() eventually reads a hardware
> timestamp counter (TSC). However, reading the hardware TSC is not
> performant on some hardware platforms, degrading IPC.
> 
> This patchset addresses the performance problem of reading the hardware TSC
> by leveraging the rq clock in the scheduler core, introducing a
> scx_bpf_now_ns() function for BPF schedulers. Whenever the rq clock
> is fresh and valid, scx_bpf_now_ns() returns the rq clock, which has
> already been updated by the scheduler core (update_rq_clock), reducing
> how often the hardware TSC needs to be read.
> 
> When the rq lock is released (rq_unpin_lock), the rq clock is invalidated,
> so a subsequent scx_bpf_now_ns() call returns a fresh sched_clock to the caller.
> 
> In addition, scx_bpf_now_ns() guarantees that the clock is monotonically
> non-decreasing on each CPU, so the clock can never go backward on the
> same CPU.
> 
> Using scx_bpf_now_ns() reduces the number of hardware TSC reads by
> 50-80% (76% for scx_lavd, 82% for scx_bpfland, and 51% for scx_rusty)
> in the following benchmark:
> 
>     perf bench -f simple sched messaging -t -g 20 -l 6000

I've tested this patch set and I haven't observed any significant
performance improvements (but also no regressions), even though the systems
I've tested are likely quite efficient at reading the hardware TSC.

I'm curious whether we'd see a more significant difference on systems
without hardware virtualization (i.e., qemu without kvm). Have you already
done any testing in such environments?

In any case:

Tested-by: Andrea Righi <arighi@nvidia.com>

-Andrea
Re: [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock
Posted by Changwoo Min 1 year, 1 month ago
Hi Andrea,

On 24. 12. 21. 07:29, Andrea Righi wrote:
> I've tested this patch set and I haven't observed any significant
> performance improvements (but also no regressions), even though the systems
> I've tested are likely quite efficient at reading the hardware TSC.

Thank you for testing. I am glad to hear that there is no
performance regression on a TSC-efficient system.

> I'm curious whether we'd see a more significant difference on systems
> without hardware virtualization (i.e., qemu without kvm). Have you already
> done any testing in such environments?

Well, I guess the gain mostly depends on how inefficient reading the TSC
is on a given system and how heavily a workload stresses the TSC.

Regards,
Changwoo Min