Hey All,

As Peter just queued the Single-RQ portion of the Proxy Execution series, I wanted to start getting some initial review feedback for the next chunk of the series: Donor Migration.

v20 is not very different from v19 of the whole series that I’ve shared previously. I’ve only rebased it upon Peter’s sched/core branch, dropping the queued changes, resolving trivial conflicts, and making some small tweaks to drop CONFIG_SMP conditionals that have been removed in the -tip tree, along with a few minor cleanups.

I’m trying to submit this larger work in smallish digestible pieces, so in this portion of the series, I’m only submitting for review and consideration the logic that allows us to do donor (blocked waiter) migration, allowing us to proxy-execute lock owners that might be on other cpu runqueues. This requires some additional changes to locking and extra state tracking to ensure we don’t accidentally run a migrated donor on a cpu it isn’t affined to, as well as some extra handling to deal with balance callback state that needs to be reset when we decide to pick a different task after doing donor migration.

I’d love to get some initial feedback on any place where these patches are confusing, or could use additional clarification.

Also, you can find the full proxy-exec series here:
  https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-v20-peterz-sched-core/
  https://github.com/johnstultz-work/linux-dev.git proxy-exec-v20-peterz-sched-core

Issues still to address with the full series:
* There’s a new quirk from recent changes for dl_server that is causing
  the ksched_football test in the full series to hang at boot. I’ve
  bisected and reverted the change for now, but I need to better
  understand what’s going wrong.
* I spent some more time thinking about Peter’s suggestion to avoid
  using the blocked_on_state == BO_WAKING check to protect against
  running proxy-migrated tasks on cpus out of their affinity mask.
  His suggestion to just dequeue the task prior to the wakeup in the
  unlock-wakeup path is more elegant, but this would be insufficient to
  protect from other wakeup paths that don’t dequeue. I’m still
  thinking if there is a clean way around this, but I’ve not yet found
  it.
* Need to sort out what is needed for sched_ext to be ok with
  proxy-execution enabled.
* K Prateek Nayak did some testing a bit over a year ago with an
  earlier version of the series and saw ~3-5% regressions in some
  cases. Need to re-evaluate this with the proxy-migration avoidance
  optimization Suleiman suggested now implemented.
* The chain migration functionality needs further iteration and better
  validation to ensure it truly maintains the RT/DL load-balancing
  invariants (despite this being broken in vanilla upstream with
  RT_PUSH_IPI currently).

I’d really appreciate any feedback or review thoughts on the full series as well. I’m trying to keep the chunks small, reviewable and iteratively testable, but if you have any suggestions on how to improve the series, I’m all ears.

Credit/Disclaimer:
--------------------
As always, this Proxy Execution series has a long history with lots of developers that deserve credit: First described in a paper[1] by Watkins, Straub, Niehaus, then from patches from Peter Zijlstra, extended with lots of work by Juri Lelli, Valentin Schneider, and Connor O'Brien. (and thank you to Steven Rostedt for providing additional details here!)

So again, many thanks to those above, as all the credit for this series really is due to them - while the mistakes are likely mine.

Thanks so much!
-john

[1] https://static.lwn.net/images/conf/rtlws11/papers/proc/p38.pdf

Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com

John Stultz (5):
  locking: Add task::blocked_lock to serialize blocked_on state
  kernel/locking: Add blocked_on_state to provide necessary tri-state
    for return migration
  sched: Add logic to zap balance callbacks if we pick again
  sched: Handle blocked-waiter migration (and return migration)
  sched: Migrate whole chain in proxy_migrate_task()

Peter Zijlstra (1):
  sched: Add blocked_donor link to task for smarter mutex handoffs

 include/linux/sched.h     | 107 ++++++++-----
 init/init_task.c          |   4 +
 kernel/fork.c             |   4 +
 kernel/locking/mutex.c    |  80 +++++++--
 kernel/locking/ww_mutex.h |  17 +-
 kernel/sched/core.c       | 329 +++++++++++++++++++++++++++++++++++---
 kernel/sched/fair.c       |   3 +-
 kernel/sched/sched.h      |   2 +-
 8 files changed, 459 insertions(+), 87 deletions(-)

-- 
2.50.0.727.gbf7dc18ff4-goog
Hi,

On 22/07/25 07:05, John Stultz wrote:

...

> Issues still to address with the full series:
> * There’s a new quirk from recent changes for dl_server that
>   is causing the ksched_football test in the full series to hang
>   at boot. I’ve bisected and reverted the change for now, but I
>   need to better understand what’s going wrong.

After our quick chat on IRC, I remembered that there were an additional two fixes for dl-server posted, but still not on tip.

https://lore.kernel.org/lkml/20250615131129.954975-1-kuyo.chang@mediatek.com/
https://lore.kernel.org/lkml/20250627035420.37712-1-yangyicong@huawei.com/

So I went ahead and pushed them to

git@github.com:jlelli/linux.git upstream/fix-dlserver

Could you please check if any (or both together) of the two topmost changes do any good to the issue you are seeing?

Thanks!
Juri
On Wed, Jul 23, 2025 at 7:44 AM Juri Lelli <juri.lelli@redhat.com> wrote:
> On 22/07/25 07:05, John Stultz wrote:
> > Issues still to address with the full series:
> > * There’s a new quirk from recent changes for dl_server that
> >   is causing the ksched_football test in the full series to hang
> >   at boot. I’ve bisected and reverted the change for now, but I
> >   need to better understand what’s going wrong.
>
> After our quick chat on IRC, I remembered that there were an additional
> two fixes for dl-server posted, but still not on tip.
>
> https://lore.kernel.org/lkml/20250615131129.954975-1-kuyo.chang@mediatek.com/
> https://lore.kernel.org/lkml/20250627035420.37712-1-yangyicong@huawei.com/
>
> So I went ahead and pushed them to
>
> git@github.com:jlelli/linux.git upstream/fix-dlserver
>
> Could you please check if any (or both together) of the two topmost
> changes do any good to the issue you are seeing?

Thanks for sharing these! Unfortunately they don't seem to help. :/

I'm still digging down into the behavior. I'm not 100% sure the problem isn't just my test logic starving itself (after creating NR_CPU RT spinners, it's not surprising that creating new threads might be tough if the non-RT kthreadd can't get scheduled), but I don't quite see how the dl_server patch cccb45d7c429 ("sched/deadline: Less agressive dl_server handling") would be the cause of the dramatic behavioral change - esp as this test was also functional prior to the dl_server logic landing.

Also, it's odd that just re-adding the dl_server_stop() call removed from dequeue_entities() seems to make it work again. So I clearly need to dig more to understand the behavior.

Thanks again for your suggestions! I'm going to dig further and let folks know when I figure this detail out.

thanks
-john