Previously, when there was no more work left to do, recover_worker
would not trigger recovery and would instead rely on the gpu going to
sleep and then resuming when more work is submitted.

recover_worker first increments the fence of the hung ring, so if only
a single job was submitted to that ring and it caused the hang,
msm_gpu_active() reports the gpu as idle and the worker bails out early
without recovering.

There is no guarantee that the gpu will suspend and resume before more
work is submitted, and if the gpu is in a hung state it will stay in
that state and likely trigger another timeout.

Just stop checking and always recover the gpu.
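
To illustrate the failure mode, here is a small userspace toy model
(not driver code; toy_gpu, toy_gpu_active and toy_recover are made-up
names standing in for msm_gpu_active() and the recovery path):
fast-forwarding the fence before the activity check makes a single
hung job look like an idle gpu, so the old path returns without ever
resetting the hardware.

#include <stdbool.h>
#include <stdio.h>

struct toy_gpu {
	int queued;	/* jobs still outstanding on the ring */
	bool hung;	/* hardware stuck in a faulted state */
};

/* stand-in for msm_gpu_active(): "is there outstanding work?" */
static bool toy_gpu_active(const struct toy_gpu *gpu)
{
	return gpu->queued > 0;
}

static void toy_recover(struct toy_gpu *gpu, bool check_active)
{
	/* the hung ring's fence is fast-forwarded first, so the hung
	 * job no longer counts as outstanding work */
	gpu->queued = 0;

	/* old behaviour: bail out if the gpu now looks idle */
	if (check_active && !toy_gpu_active(gpu))
		return;

	gpu->hung = false;	/* the actual hardware reset */
}

int main(void)
{
	struct toy_gpu a = { .queued = 1, .hung = true };	/* pre-patch  */
	struct toy_gpu b = { .queued = 1, .hung = true };	/* post-patch */

	toy_recover(&a, true);
	toy_recover(&b, false);

	printf("pre-patch:  hung=%d\n", a.hung);	/* 1: still hung */
	printf("post-patch: hung=%d\n", b.hung);	/* 0: recovered  */
	return 0;
}

The pre-patch path leaves hung=1: nothing has reset the hardware, and
the next submit will simply time out again.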
Signed-off-by: Anna Maniscalco <anna.maniscalco2000@gmail.com>
Cc: stable@vger.kernel.org
---
drivers/gpu/drm/msm/msm_gpu.c | 42 ++++++++++++++++++++----------------------
1 file changed, 20 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 995549d0bbbc..ea3e79670f75 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -547,32 +547,30 @@ static void recover_worker(struct kthread_work *work)
 		msm_update_fence(ring->fctx, fence);
 	}
 
-	if (msm_gpu_active(gpu)) {
-		/* retire completed submits, plus the one that hung: */
-		retire_submits(gpu);
+	/* retire completed submits, plus the one that hung: */
+	retire_submits(gpu);
 
-		gpu->funcs->recover(gpu);
+	gpu->funcs->recover(gpu);
 
-		/*
-		 * Replay all remaining submits starting with highest priority
-		 * ring
-		 */
-		for (i = 0; i < gpu->nr_rings; i++) {
-			struct msm_ringbuffer *ring = gpu->rb[i];
-			unsigned long flags;
+	/*
+	 * Replay all remaining submits starting with highest priority
+	 * ring
+	 */
+	for (i = 0; i < gpu->nr_rings; i++) {
+		struct msm_ringbuffer *ring = gpu->rb[i];
+		unsigned long flags;
 
-			spin_lock_irqsave(&ring->submit_lock, flags);
-			list_for_each_entry(submit, &ring->submits, node) {
-				/*
-				 * If the submit uses an unusable vm make sure
-				 * we don't actually run it
-				 */
-				if (to_msm_vm(submit->vm)->unusable)
-					submit->nr_cmds = 0;
-				gpu->funcs->submit(gpu, submit);
-			}
-			spin_unlock_irqrestore(&ring->submit_lock, flags);
+		spin_lock_irqsave(&ring->submit_lock, flags);
+		list_for_each_entry(submit, &ring->submits, node) {
+			/*
+			 * If the submit uses an unusable vm make sure
+			 * we don't actually run it
+			 */
+			if (to_msm_vm(submit->vm)->unusable)
+				submit->nr_cmds = 0;
+			gpu->funcs->submit(gpu, submit);
 		}
+		spin_unlock_irqrestore(&ring->submit_lock, flags);
 	}
 
 	pm_runtime_put(&gpu->pdev->dev);
---
base-commit: 50c4a49f7292b33b454ea1a16c4f77d6965405dc
change-id: 20260210-recovery_suspend_fix-77a65932fa4c
Best regards,
--
Anna Maniscalco <anna.maniscalco2000@gmail.com>