[PATCH 2/8] sched/fair: Limit hrtick work

Posted by Peter Zijlstra 4 months, 3 weeks ago
The task_tick_fair() function does:

 - update the hierarchical runtimes
 - drive NUMA-balancing
 - update load-balance statistics
 - drive force-idle preemption

All but the very first can be limited to the periodic tick. Let hrtick
only update accounting and drive preemption, not load-balancing and
other bits.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13119,6 +13119,12 @@ static void task_tick_fair(struct rq *rq
 		entity_tick(cfs_rq, se, queued);
 	}
 
+	if (queued) {
+		if (!need_resched())
+			hrtick_start_fair(rq, curr);
+		return;
+	}
+
 	if (static_branch_unlikely(&sched_numa_balancing))
 		task_tick_numa(rq, curr);
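
For reference, a simplified sketch of the hrtick re-arm path this change
relies on. hrtick_start_fair() is real, but the body below is an
approximation for illustration, not the exact kernel code:

	static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
	{
		struct sched_entity *se = &p->se;
		u64 ran   = se->sum_exec_runtime - se->prev_sum_exec_runtime;
		s64 delta = se->slice - ran;	/* task-clock time left in the slice */

		if (delta < 0) {		/* slice already exhausted */
			resched_curr(rq);	/* preempt right away */
			return;
		}
		hrtick_start(rq, delta);	/* program the hrtimer for the remainder */
	}

With the hunk above, an hrtick invocation (queued) of task_tick_fair()
does only the entity_tick() walk plus this re-arm; NUMA balancing,
load-balance statistics and force-idle preemption are left to the
periodic tick.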
Re: [PATCH 2/8] sched/fair: Limit hrtick work
Posted by K Prateek Nayak 4 months, 3 weeks ago
Hello Peter,

On 9/18/2025 1:22 PM, Peter Zijlstra wrote:
> @@ -13119,6 +13119,12 @@ static void task_tick_fair(struct rq *rq
>  		entity_tick(cfs_rq, se, queued);
>  	}
>  
> +	if (queued) {
> +		if (!need_resched())
> +			hrtick_start_fair(rq, curr);

Do we need a hrtick_start_fair() here? Queued tick will always do a
resched_curr_lazy() - if another HRTICK fires before the next tick,
all it'll do is resched_curr_lazy() again and the next opportunity to
resched is either exit to userspace or the periodic tick firing and
promoting that LAZY to a full NEED_RESCHED.

The early return does make sense.
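
(Illustrative sketch of the promotion flow described above; the TIF flag
is real but the surrounding code is a simplified stand-in, not the
actual kernel implementation:

	/* resched_curr_lazy(): only ask for preemption at the next
	 * convenient point -- exit to userspace or the periodic tick. */
	set_tsk_thread_flag(rq->curr, TIF_NEED_RESCHED_LAZY);

	/* Periodic tick: promote a pending LAZY request into a full
	 * NEED_RESCHED, which is what actually forces the preemption. */
	if (test_tsk_thread_flag(rq->curr, TIF_NEED_RESCHED_LAZY))
		set_tsk_need_resched(rq->curr);

A second HRTICK firing in between would just set the already-set LAZY
bit again.)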

> +		return;
> +	}
> +
>  	if (static_branch_unlikely(&sched_numa_balancing))
>  		task_tick_numa(rq, curr);
>  

-- 
Thanks and Regards,
Prateek
Re: [PATCH 2/8] sched/fair: Limit hrtick work
Posted by Peter Zijlstra 2 months, 1 week ago
On Fri, Sep 19, 2025 at 08:29:09PM +0530, K Prateek Nayak wrote:
> Hello Peter,
> 
> On 9/18/2025 1:22 PM, Peter Zijlstra wrote:
> > @@ -13119,6 +13119,12 @@ static void task_tick_fair(struct rq *rq
> >  		entity_tick(cfs_rq, se, queued);
> >  	}
> >  
> > +	if (queued) {
> > +		if (!need_resched())
> > +			hrtick_start_fair(rq, curr);
> 
> Do we need a hrtick_start_fair() here? Queued tick will always do a
> resched_curr_lazy() - if another HRTICK fires before the next tick,
> all it'll do is resched_curr_lazy() again and the next opportunity to
> resched is either exit to userspace or the periodic tick firing and
> promoting that LAZY to a full NEED_RESCHED.

I think I had a version where entity_tick() doesn't force need_resched
on queue. In that case the timer, which runs on wallclock time, and
update_curr(), which runs on task clock, might disagree: we might not
have reached the deadline yet, and so we need to try again.
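
A made-up worked example of that disagreement, with illustrative numbers
only:

	u64 slice = 3 * NSEC_PER_MSEC;	/* hrtimer armed for 3ms of wallclock */
	u64 ran   = 2 * NSEC_PER_MSEC;	/* task clock: 1ms was stolen by IRQs */
	s64 delta = slice - ran;	/* 1ms left -> deadline not reached */

	if (delta > 0)
		hrtick_start(rq, delta);	/* re-arm and try again */

In that variant the queued path has to keep re-arming until the task
clock actually catches up with the slice.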
[tip: sched/core] sched/fair: Limit hrtick work
Posted by tip-bot2 for Peter Zijlstra 1 month, 3 weeks ago
The following commit has been merged into the sched/core branch of tip:

Commit-ID:     95a0155224a658965f34ed4b1943b238d9be1fea
Gitweb:        https://git.kernel.org/tip/95a0155224a658965f34ed4b1943b238d9be1fea
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Mon, 01 Sep 2025 22:50:56 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Sun, 14 Dec 2025 08:25:02 +01:00

sched/fair: Limit hrtick work

The task_tick_fair() function does:

 - update the hierarchical runtimes
 - drive NUMA-balancing
 - update load-balance statistics
 - drive force-idle preemption

All but the very first can be limited to the periodic tick. Let hrtick
only update accounting and drive preemption, not load-balancing and
other bits.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20250918080205.563385766@infradead.org
---
 kernel/sched/fair.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 496a30a..f79951f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13332,6 +13332,12 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		entity_tick(cfs_rq, se, queued);
 	}
 
+	if (queued) {
+		if (!need_resched())
+			hrtick_start_fair(rq, curr);
+		return;
+	}
+
 	if (static_branch_unlikely(&sched_numa_balancing))
 		task_tick_numa(rq, curr);