From: Philipp Stanner
To: Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Matthew Brost, Philipp Stanner, Christian König,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	Sumit Semwal, Tvrtko Ursulin, Pierre-Eric Pelloux-Prayer
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org
Subject: [PATCH 1/6] drm/sched: Avoid memory leaks with cancel_job() callback
Date: Tue, 1 Jul 2025 15:21:39 +0200
Message-ID: <20250701132142.76899-4-phasta@kernel.org>
In-Reply-To: <20250701132142.76899-3-phasta@kernel.org>

Since its inception, the GPU scheduler has been able to leak memory if
the driver calls drm_sched_fini() while there are still jobs in flight.

The simplest way to solve this in a backwards-compatible manner is to
add a new callback, drm_sched_backend_ops.cancel_job(), which instructs
the driver to signal the hardware fence associated with the job.
Afterwards, the scheduler can safely use the established free_job()
callback for freeing the job.

Implement the new backend_ops callback cancel_job().
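For drivers, the contract looks roughly like the following sketch (the
foo_* types and fields are hypothetical, not part of this patch): signal
the job's hardware fence with -ECANCELED and leave the freeing to
free_job(), which the scheduler still invokes afterwards:

static void foo_sched_cancel_job(struct drm_sched_job *sched_job)
{
	/* to_foo_job() stands for the driver's container_of() helper. */
	struct foo_job *job = to_foo_job(sched_job);
	unsigned long flags;

	spin_lock_irqsave(&job->fence_lock, flags);
	if (!dma_fence_is_signaled_locked(&job->hw_fence)) {
		/* Mark the fence as canceled, then signal it. */
		dma_fence_set_error(&job->hw_fence, -ECANCELED);
		dma_fence_signal_locked(&job->hw_fence);
	}
	spin_unlock_irqrestore(&job->fence_lock, flags);

	/* Do not free the job here; free_job() will still be called. */
}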
Suggested-by: Tvrtko Ursulin
Link: https://lore.kernel.org/dri-devel/20250418113211.69956-1-tvrtko.ursulin@igalia.com/
Signed-off-by: Philipp Stanner
Acked-by: Tvrtko Ursulin
Reviewed-by: Maíra Canal
---
 drivers/gpu/drm/scheduler/sched_main.c | 34 ++++++++++++++++----------
 include/drm/gpu_scheduler.h            | 18 ++++++++++++++
 2 files changed, 39 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index c63543132f9d..1239954f5f7c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1353,6 +1353,18 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 }
 EXPORT_SYMBOL(drm_sched_init);
 
+static void drm_sched_cancel_remaining_jobs(struct drm_gpu_scheduler *sched)
+{
+	struct drm_sched_job *job, *tmp;
+
+	/* All other accessors are stopped. No locking necessary. */
+	list_for_each_entry_safe_reverse(job, tmp, &sched->pending_list, list) {
+		sched->ops->cancel_job(job);
+		list_del(&job->list);
+		sched->ops->free_job(job);
+	}
+}
+
 /**
  * drm_sched_fini - Destroy a gpu scheduler
  *
@@ -1360,19 +1372,11 @@ EXPORT_SYMBOL(drm_sched_init);
  *
  * Tears down and cleans up the scheduler.
  *
- * This stops submission of new jobs to the hardware through
- * drm_sched_backend_ops.run_job(). Consequently, drm_sched_backend_ops.free_job()
- * will not be called for all jobs still in drm_gpu_scheduler.pending_list.
- * There is no solution for this currently. Thus, it is up to the driver to make
- * sure that:
- *
- * a) drm_sched_fini() is only called after for all submitted jobs
- *    drm_sched_backend_ops.free_job() has been called or that
- * b) the jobs for which drm_sched_backend_ops.free_job() has not been called
- *    after drm_sched_fini() ran are freed manually.
- *
- * FIXME: Take care of the above problem and prevent this function from leaking
- * the jobs in drm_gpu_scheduler.pending_list under any circumstances.
+ * This stops submission of new jobs to the hardware through &struct
+ * drm_sched_backend_ops.run_job. If &struct drm_sched_backend_ops.cancel_job
+ * is implemented, all jobs will be canceled through it and afterwards cleaned
+ * up through &struct drm_sched_backend_ops.free_job. If cancel_job is not
+ * implemented, memory could leak.
  */
 void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
@@ -1402,6 +1406,10 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
 	/* Confirm no work left behind accessing device structures */
 	cancel_delayed_work_sync(&sched->work_tdr);
 
+	/* Avoid memory leaks if supported by the driver. */
+	if (sched->ops->cancel_job)
+		drm_sched_cancel_remaining_jobs(sched);
+
 	if (sched->own_submit_wq)
 		destroy_workqueue(sched->submit_wq);
 	sched->ready = false;
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index e62a7214e052..190844370f48 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -512,6 +512,24 @@ struct drm_sched_backend_ops {
 	 * and it's time to clean it up.
 	 */
 	void (*free_job)(struct drm_sched_job *sched_job);
+
+	/**
+	 * @cancel_job: Used by the scheduler to guarantee remaining jobs' fences
+	 * get signaled in drm_sched_fini().
+	 *
+	 * Used by the scheduler to cancel all jobs that have not been executed
+	 * with &struct drm_sched_backend_ops.run_job by the time
+	 * drm_sched_fini() gets invoked.
+	 *
+	 * Drivers need to signal the passed job's hardware fence with an
+	 * appropriate error code (e.g., -ECANCELED) in this callback. They
+	 * must not free the job.
+	 *
+	 * The scheduler will only call this callback once it has stopped calling
+	 * all other callbacks forever, with the exception of &struct
+	 * drm_sched_backend_ops.free_job.
+	 */
+	void (*cancel_job)(struct drm_sched_job *sched_job);
 };
 
 /**
-- 
2.49.0

From: Philipp Stanner
Subject: [PATCH 2/6] drm/sched/tests: Port to cancel_job()
Date: Tue, 1 Jul 2025 15:21:40 +0200
Message-ID: <20250701132142.76899-5-phasta@kernel.org>
In-Reply-To: <20250701132142.76899-3-phasta@kernel.org>

The GPU scheduler now supports a new callback, cancel_job(), which lets
the scheduler cancel all jobs that might not yet be freed when
drm_sched_fini() runs. Using this callback significantly simplifies the
mock scheduler teardown code.

Implement the cancel_job() callback and adjust the code where necessary.
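With this in place, the mock scheduler's teardown collapses into the
scheduler core's cleanup loop from the previous patch. Roughly, as a
call-chain sketch (not literal code):

	drm_mock_sched_fini(sched)
	    drm_sched_fini(&sched->base)
	        drm_sched_cancel_remaining_jobs(sched)
	            sched->ops->cancel_job(job)   /* mock_sched_cancel_job() */
	            sched->ops->free_job(job)     /* mock_sched_free_job() */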
Signed-off-by: Philipp Stanner
---
 .../gpu/drm/scheduler/tests/mock_scheduler.c | 66 +++++++------------
 1 file changed, 23 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
index 49d067fecd67..2d3169d95200 100644
--- a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
+++ b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
@@ -63,7 +63,7 @@ static void drm_mock_sched_job_complete(struct drm_mock_sched_job *job)
 	lockdep_assert_held(&sched->lock);
 
 	job->flags |= DRM_MOCK_SCHED_JOB_DONE;
-	list_move_tail(&job->link, &sched->done_list);
+	list_del(&job->link);
 	dma_fence_signal_locked(&job->hw_fence);
 	complete(&job->done);
 }
@@ -236,26 +236,39 @@ mock_sched_timedout_job(struct drm_sched_job *sched_job)
 
 static void mock_sched_free_job(struct drm_sched_job *sched_job)
 {
-	struct drm_mock_scheduler *sched =
-		drm_sched_to_mock_sched(sched_job->sched);
 	struct drm_mock_sched_job *job = drm_sched_job_to_mock_job(sched_job);
-	unsigned long flags;
 
-	/* Remove from the scheduler done list. */
-	spin_lock_irqsave(&sched->lock, flags);
-	list_del(&job->link);
-	spin_unlock_irqrestore(&sched->lock, flags);
 	dma_fence_put(&job->hw_fence);
-
 	drm_sched_job_cleanup(sched_job);
 
 	/* Mock job itself is freed by the kunit framework. */
 }
 
+static void mock_sched_cancel_job(struct drm_sched_job *sched_job)
+{
+	struct drm_mock_scheduler *sched = drm_sched_to_mock_sched(sched_job->sched);
+	struct drm_mock_sched_job *job = drm_sched_job_to_mock_job(sched_job);
+	unsigned long flags;
+
+	hrtimer_cancel(&job->timer);
+
+	spin_lock_irqsave(&sched->lock, flags);
+	if (!dma_fence_is_signaled_locked(&job->hw_fence)) {
+		list_del(&job->link);
+		dma_fence_set_error(&job->hw_fence, -ECANCELED);
+		dma_fence_signal_locked(&job->hw_fence);
+	}
+	spin_unlock_irqrestore(&sched->lock, flags);
+
+	/* The GPU Scheduler will call drm_sched_backend_ops.free_job(), still.
+	 * Mock job itself is freed by the kunit framework. */
+}
+
 static const struct drm_sched_backend_ops drm_mock_scheduler_ops = {
 	.run_job = mock_sched_run_job,
 	.timedout_job = mock_sched_timedout_job,
-	.free_job = mock_sched_free_job
+	.free_job = mock_sched_free_job,
+	.cancel_job = mock_sched_cancel_job,
 };
 
 /**
@@ -289,7 +302,6 @@ struct drm_mock_scheduler *drm_mock_sched_new(struct kunit *test, long timeout)
 	sched->hw_timeline.context = dma_fence_context_alloc(1);
 	atomic_set(&sched->hw_timeline.next_seqno, 0);
 	INIT_LIST_HEAD(&sched->job_list);
-	INIT_LIST_HEAD(&sched->done_list);
 	spin_lock_init(&sched->lock);
 
 	return sched;
@@ -304,38 +316,6 @@ struct drm_mock_scheduler *drm_mock_sched_new(struct kunit *test, long timeout)
  */
 void drm_mock_sched_fini(struct drm_mock_scheduler *sched)
 {
-	struct drm_mock_sched_job *job, *next;
-	unsigned long flags;
-	LIST_HEAD(list);
-
-	drm_sched_wqueue_stop(&sched->base);
-
-	/* Force complete all unfinished jobs. */
-	spin_lock_irqsave(&sched->lock, flags);
-	list_for_each_entry_safe(job, next, &sched->job_list, link)
-		list_move_tail(&job->link, &list);
-	spin_unlock_irqrestore(&sched->lock, flags);
-
-	list_for_each_entry(job, &list, link)
-		hrtimer_cancel(&job->timer);
-
-	spin_lock_irqsave(&sched->lock, flags);
-	list_for_each_entry_safe(job, next, &list, link)
-		drm_mock_sched_job_complete(job);
-	spin_unlock_irqrestore(&sched->lock, flags);
-
-	/*
-	 * Free completed jobs and jobs not yet processed by the DRM scheduler
-	 * free worker.
-	 */
-	spin_lock_irqsave(&sched->lock, flags);
-	list_for_each_entry_safe(job, next, &sched->done_list, link)
-		list_move_tail(&job->link, &list);
-	spin_unlock_irqrestore(&sched->lock, flags);
-
-	list_for_each_entry_safe(job, next, &list, link)
-		mock_sched_free_job(&job->base);
-
 	drm_sched_fini(&sched->base);
 }
 
-- 
2.49.0

From: Philipp Stanner
Subject: [PATCH 3/6] drm/sched: Warn if pending list is not empty
Date: Tue, 1 Jul 2025 15:21:41 +0200
Message-ID: <20250701132142.76899-6-phasta@kernel.org>
In-Reply-To: <20250701132142.76899-3-phasta@kernel.org>
drm_sched_fini() can leak jobs under certain circumstances. Warn if
that happens.

Signed-off-by: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 1239954f5f7c..dadf1a22ddf6 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1415,6 +1415,9 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
 	sched->ready = false;
 	kfree(sched->sched_rq);
 	sched->sched_rq = NULL;
+
+	if (!list_empty(&sched->pending_list))
+		dev_err(sched->dev, "Tearing down scheduler while jobs are pending!\n");
 }
 EXPORT_SYMBOL(drm_sched_fini);
 
-- 
2.49.0

From: Philipp Stanner
Subject: [PATCH 4/6] drm/nouveau: Make fence container helper usable driver-wide
Date: Tue, 1 Jul 2025 15:21:42 +0200
Message-ID: <20250701132142.76899-7-phasta@kernel.org>
In-Reply-To: <20250701132142.76899-3-phasta@kernel.org>
In order to implement a new DRM GPU scheduler callback in Nouveau, a
helper for obtaining a nouveau_fence from a dma_fence is necessary.
Such a helper already exists inside nouveau_fence.c, called
from_fence(). Make that helper available to other C files with a more
precise name.

Signed-off-by: Philipp Stanner
---
 drivers/gpu/drm/nouveau/nouveau_fence.c | 20 +++++++-------------
 drivers/gpu/drm/nouveau/nouveau_fence.h |  6 ++++++
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
index d5654e26d5bc..869d4335c0f4 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
@@ -38,12 +38,6 @@
 static const struct dma_fence_ops nouveau_fence_ops_uevent;
 static const struct dma_fence_ops nouveau_fence_ops_legacy;
 
-static inline struct nouveau_fence *
-from_fence(struct dma_fence *fence)
-{
-	return container_of(fence, struct nouveau_fence, base);
-}
-
 static inline struct nouveau_fence_chan *
 nouveau_fctx(struct nouveau_fence *fence)
 {
@@ -77,7 +71,7 @@ nouveau_local_fence(struct dma_fence *fence, struct nouveau_drm *drm)
 	    fence->ops != &nouveau_fence_ops_uevent)
 		return NULL;
 
-	return from_fence(fence);
+	return to_nouveau_fence(fence);
 }
 
 void
@@ -268,7 +262,7 @@ nouveau_fence_done(struct nouveau_fence *fence)
 static long
 nouveau_fence_wait_legacy(struct dma_fence *f, bool intr, long wait)
 {
-	struct nouveau_fence *fence = from_fence(f);
+	struct nouveau_fence *fence = to_nouveau_fence(f);
 	unsigned long sleep_time = NSEC_PER_MSEC / 1000;
 	unsigned long t = jiffies, timeout = t + wait;
 
@@ -448,7 +442,7 @@ static const char *nouveau_fence_get_get_driver_name(struct dma_fence *fence)
 
 static const char *nouveau_fence_get_timeline_name(struct dma_fence *f)
 {
-	struct nouveau_fence *fence = from_fence(f);
+	struct nouveau_fence *fence = to_nouveau_fence(f);
 	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
 
 	return !fctx->dead ? fctx->name : "dead channel";
@@ -462,7 +456,7 @@ static const char *nouveau_fence_get_timeline_name(struct dma_fence *f)
  */
 static bool nouveau_fence_is_signaled(struct dma_fence *f)
 {
-	struct nouveau_fence *fence = from_fence(f);
+	struct nouveau_fence *fence = to_nouveau_fence(f);
 	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
 	struct nouveau_channel *chan;
 	bool ret = false;
@@ -478,7 +472,7 @@ static bool nouveau_fence_is_signaled(struct dma_fence *f)
 
 static bool nouveau_fence_no_signaling(struct dma_fence *f)
 {
-	struct nouveau_fence *fence = from_fence(f);
+	struct nouveau_fence *fence = to_nouveau_fence(f);
 
 	/*
 	 * caller should have a reference on the fence,
@@ -503,7 +497,7 @@ static bool nouveau_fence_no_signaling(struct dma_fence *f)
 
 static void nouveau_fence_release(struct dma_fence *f)
 {
-	struct nouveau_fence *fence = from_fence(f);
+	struct nouveau_fence *fence = to_nouveau_fence(f);
 	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
 
 	kref_put(&fctx->fence_ref, nouveau_fence_context_put);
@@ -521,7 +515,7 @@ static const struct dma_fence_ops nouveau_fence_ops_legacy = {
 
 static bool nouveau_fence_enable_signaling(struct dma_fence *f)
 {
-	struct nouveau_fence *fence = from_fence(f);
+	struct nouveau_fence *fence = to_nouveau_fence(f);
 	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
 	bool ret;
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.h b/drivers/gpu/drm/nouveau/nouveau_fence.h
index 6a983dd9f7b9..183dd43ecfff 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.h
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.h
@@ -17,6 +17,12 @@ struct nouveau_fence {
 	unsigned long timeout;
 };
 
+static inline struct nouveau_fence *
+to_nouveau_fence(struct dma_fence *fence)
+{
+	return container_of(fence, struct nouveau_fence, base);
+}
+
 int nouveau_fence_create(struct nouveau_fence **, struct nouveau_channel *);
 int nouveau_fence_new(struct nouveau_fence **, struct nouveau_channel *);
 void nouveau_fence_unref(struct nouveau_fence **);
-- 
2.49.0
From: Philipp Stanner
Subject: [PATCH 5/6] drm/nouveau: Add new callback for scheduler teardown
Date: Tue, 1 Jul 2025 15:21:43 +0200
Message-ID: <20250701132142.76899-8-phasta@kernel.org>
In-Reply-To: <20250701132142.76899-3-phasta@kernel.org>

There is a new callback for always tearing the scheduler down in a
leak-free, deadlock-free manner. Port Nouveau as its first user by
providing the scheduler with a callback that ensures the fence context
gets killed in drm_sched_fini().

Signed-off-by: Philipp Stanner
---
 drivers/gpu/drm/nouveau/nouveau_fence.c | 15 +++++++++++++++
 drivers/gpu/drm/nouveau/nouveau_fence.h |  1 +
 drivers/gpu/drm/nouveau/nouveau_sched.c | 15 ++++++++++++++-
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
index 869d4335c0f4..9f345a008717 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
@@ -240,6 +240,21 @@ nouveau_fence_emit(struct nouveau_fence *fence)
 	return ret;
 }
 
+void
+nouveau_fence_cancel(struct nouveau_fence *fence)
+{
+	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+	unsigned long flags;
+
+	spin_lock_irqsave(&fctx->lock, flags);
+	if (!dma_fence_is_signaled_locked(&fence->base)) {
+		dma_fence_set_error(&fence->base, -ECANCELED);
+		if (nouveau_fence_signal(fence))
+			nvif_event_block(&fctx->event);
+	}
+	spin_unlock_irqrestore(&fctx->lock, flags);
+}
+
 bool
 nouveau_fence_done(struct nouveau_fence *fence)
 {
diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.h b/drivers/gpu/drm/nouveau/nouveau_fence.h
index 183dd43ecfff..9957a919bd38 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.h
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.h
@@ -29,6 +29,7 @@ void nouveau_fence_unref(struct nouveau_fence **);
 
 int nouveau_fence_emit(struct nouveau_fence *);
 bool nouveau_fence_done(struct nouveau_fence *);
+void nouveau_fence_cancel(struct nouveau_fence *fence);
 int nouveau_fence_wait(struct nouveau_fence *, bool lazy, bool intr);
 int nouveau_fence_sync(struct nouveau_bo *, struct nouveau_channel *, bool exclusive, bool intr);
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c
index 460a5fb02412..2ec62059c351 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
@@ -11,6 +11,7 @@
 #include "nouveau_exec.h"
 #include "nouveau_abi16.h"
 #include "nouveau_sched.h"
+#include "nouveau_chan.h"
 
 #define NOUVEAU_SCHED_JOB_TIMEOUT_MS		10000
 
@@ -393,10 +394,23 @@ nouveau_sched_free_job(struct drm_sched_job *sched_job)
 	nouveau_job_fini(job);
 }
 
+static void
+nouveau_sched_cancel_job(struct drm_sched_job *sched_job)
+{
+	struct nouveau_fence *fence;
+	struct nouveau_job *job;
+
+	job = to_nouveau_job(sched_job);
+	fence = to_nouveau_fence(job->done_fence);
+
+	nouveau_fence_cancel(fence);
+}
+
 static const struct drm_sched_backend_ops nouveau_sched_ops = {
 	.run_job = nouveau_sched_run_job,
 	.timedout_job = nouveau_sched_timedout_job,
 	.free_job = nouveau_sched_free_job,
+	.cancel_job = nouveau_sched_cancel_job,
 };
 
 static int
@@ -482,7 +496,6 @@ nouveau_sched_create(struct nouveau_sched **psched, struct nouveau_drm *drm,
 	return 0;
 }
 
-
 static void
 nouveau_sched_fini(struct nouveau_sched *sched)
 {
-- 
2.49.0

From: Philipp Stanner
Subject: [PATCH 6/6] drm/nouveau: Remove waitqueue for sched teardown
Date: Tue, 1 Jul 2025 15:21:44 +0200
Message-ID: <20250701132142.76899-9-phasta@kernel.org>
In-Reply-To: <20250701132142.76899-3-phasta@kernel.org>
struct nouveau_sched contains a waitqueue needed to prevent
drm_sched_fini() from being called while there are still jobs pending.
Until now, doing so would have caused memory leaks.

With the new memleak-free mode of operation switched on in
drm_sched_fini() by providing the callback nouveau_sched_cancel_job(),
the waitqueue is not necessary anymore.

Remove the waitqueue.

Signed-off-by: Philipp Stanner
---
 drivers/gpu/drm/nouveau/nouveau_sched.c | 20 +++++++-------------
 drivers/gpu/drm/nouveau/nouveau_sched.h |  9 +++------
 drivers/gpu/drm/nouveau/nouveau_uvmm.c  |  8 ++++----
 3 files changed, 14 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c
index 2ec62059c351..7d9c3418e76b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
@@ -122,11 +122,9 @@ nouveau_job_done(struct nouveau_job *job)
 {
 	struct nouveau_sched *sched = job->sched;
 
-	spin_lock(&sched->job.list.lock);
+	spin_lock(&sched->job_list.lock);
 	list_del(&job->entry);
-	spin_unlock(&sched->job.list.lock);
-
-	wake_up(&sched->job.wq);
+	spin_unlock(&sched->job_list.lock);
 }
 
 void
@@ -307,9 +305,9 @@ nouveau_job_submit(struct nouveau_job *job)
 	}
 
 	/* Submit was successful; add the job to the schedulers job list. */
-	spin_lock(&sched->job.list.lock);
-	list_add(&job->entry, &sched->job.list.head);
-	spin_unlock(&sched->job.list.lock);
+	spin_lock(&sched->job_list.lock);
+	list_add(&job->entry, &sched->job_list.head);
+	spin_unlock(&sched->job_list.lock);
 
 	drm_sched_job_arm(&job->base);
 	job->done_fence = dma_fence_get(&job->base.s_fence->finished);
@@ -460,9 +458,8 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
 		goto fail_sched;
 
 	mutex_init(&sched->mutex);
-	spin_lock_init(&sched->job.list.lock);
-	INIT_LIST_HEAD(&sched->job.list.head);
-	init_waitqueue_head(&sched->job.wq);
+	spin_lock_init(&sched->job_list.lock);
+	INIT_LIST_HEAD(&sched->job_list.head);
 
 	return 0;
 
@@ -502,9 +499,6 @@ nouveau_sched_fini(struct nouveau_sched *sched)
 	struct drm_gpu_scheduler *drm_sched = &sched->base;
 	struct drm_sched_entity *entity = &sched->entity;
 
-	rmb(); /* for list_empty to work without lock */
-	wait_event(sched->job.wq, list_empty(&sched->job.list.head));
-
 	drm_sched_entity_fini(entity);
 	drm_sched_fini(drm_sched);
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.h b/drivers/gpu/drm/nouveau/nouveau_sched.h
index 20cd1da8db73..b98c3f0bef30 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.h
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.h
@@ -103,12 +103,9 @@ struct nouveau_sched {
 	struct mutex mutex;
 
 	struct {
-		struct {
-			struct list_head head;
-			spinlock_t lock;
-		} list;
-		struct wait_queue_head wq;
-	} job;
+		struct list_head head;
+		spinlock_t lock;
+	} job_list;
 };
 
 int nouveau_sched_create(struct nouveau_sched **psched, struct nouveau_drm *drm,
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 48f105239f42..ddfc46bc1b3e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1019,8 +1019,8 @@ bind_validate_map_sparse(struct nouveau_job *job, u64 addr, u64 range)
 	u64 end = addr + range;
 
 again:
-	spin_lock(&sched->job.list.lock);
-	list_for_each_entry(__job, &sched->job.list.head, entry) {
+	spin_lock(&sched->job_list.lock);
+	list_for_each_entry(__job, &sched->job_list.head, entry) {
 		struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(__job);
 
 		list_for_each_op(op, &bind_job->ops) {
@@ -1030,7 +1030,7 @@ bind_validate_map_sparse(struct nouveau_job *job, u64 addr, u64 range)
 
 			if (!(end <= op_addr || addr >= op_end)) {
 				nouveau_uvmm_bind_job_get(bind_job);
-				spin_unlock(&sched->job.list.lock);
+				spin_unlock(&sched->job_list.lock);
 				wait_for_completion(&bind_job->complete);
 				nouveau_uvmm_bind_job_put(bind_job);
 				goto again;
@@ -1038,7 +1038,7 @@ bind_validate_map_sparse(struct nouveau_job *job, u64 addr, u64 range)
 		}
 	}
-	spin_unlock(&sched->job.list.lock);
+	spin_unlock(&sched->job_list.lock);
 }
 
 static int
-- 
2.49.0
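A note on what the series guarantees as a whole: with the waitqueue
gone, teardown correctness rests on the dma_fence contract alone, since
cancel_job() ensures every pending job's hardware fence signals before
drm_sched_fini() returns. Any waiter therefore unblocks and can detect
cancellation through the fence status. A minimal sketch (f stands for
any struct dma_fence * taken from a submitted job; illustrative, not
part of this series):

	long ret;

	/* Wait at most one second; a return value > 0 means the fence signaled. */
	ret = dma_fence_wait_timeout(f, false, msecs_to_jiffies(1000));
	if (ret > 0 && dma_fence_get_status(f) == -ECANCELED) {
		/* The job was canceled, e.g. by drm_sched_fini() via cancel_job(). */
	}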