From: Ketil Johnsen <ketil.johnsen@arm.com>
To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
    Grant Likely, Heiko Stuebner
Cc: Ketil Johnsen, dri-devel@lists.freedesktop.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2] drm/panthor: Evict groups before VM termination
Date: Fri, 19 Dec 2025 10:35:44 +0100
Message-ID: <20251219093546.1227697-1-ketil.johnsen@arm.com>

Ensure all related groups are evicted and suspended before VM
destruction takes place.

This fixes an issue where panthor_vm_destroy() destroys and unmaps the
heap context while there are still on-slot groups using it. The FW
writes out to the heap context when a CSG (group) is suspended, so a
premature unmap of the heap context will cause a GPU page fault. This
page fault is quite harmless and does not affect the continued
operation of the GPU.

Fixes: 647810ec2476 ("drm/panthor: Add the MMU/VM logical block")
Reviewed-by: Boris Brezillon
Co-developed-by: Boris Brezillon
Signed-off-by: Ketil Johnsen
Reviewed-by: Liviu Dudau
Reviewed-by: Steven Price
---
Changes in v2:
- Removed check for ptdev->scheduler
- R-b from Boris
- Link to v1: https://lore.kernel.org/all/20251218162644.828495-1-ketil.johnsen@arm.com/
---
 drivers/gpu/drm/panthor/panthor_mmu.c   |  4 ++++
 drivers/gpu/drm/panthor/panthor_sched.c | 14 ++++++++++++++
 drivers/gpu/drm/panthor/panthor_sched.h |  1 +
 3 files changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 74230f7199121..0e4b301a9c70e 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1537,6 +1537,10 @@ static void panthor_vm_destroy(struct panthor_vm *vm)
 
 	vm->destroyed = true;
 
+	/* Tell scheduler to stop all GPU work related to this VM */
+	if (refcount_read(&vm->as.active_cnt) > 0)
+		panthor_sched_prepare_for_vm_destruction(vm->ptdev);
+
 	mutex_lock(&vm->heaps.lock);
 	panthor_heap_pool_destroy(vm->heaps.pool);
 	vm->heaps.pool = NULL;
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index f680edcd40aad..a40ac94e5e989 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -2930,6 +2930,20 @@ void panthor_sched_report_mmu_fault(struct panthor_device *ptdev)
 	sched_queue_delayed_work(ptdev->scheduler, tick, 0);
 }
 
+void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev)
+{
+	/* FW can write out internal state, like the heap context, during CSG
+	 * suspend. It is therefore important that the scheduler has fully
+	 * evicted any pending and related groups before VM destruction can
+	 * safely continue. Failure to do so can lead to GPU page faults.
+	 * A controlled termination of a Panthor instance involves destroying
+	 * the group(s) before the VM. This means any relevant group eviction
+	 * has already been initiated by this point, and we just need to
+	 * ensure that any pending tick_work() has been completed.
+	 */
+	flush_work(&ptdev->scheduler->tick_work.work);
+}
+
 void panthor_sched_resume(struct panthor_device *ptdev)
 {
 	/* Force a tick to re-evaluate after a resume. */
diff --git a/drivers/gpu/drm/panthor/panthor_sched.h b/drivers/gpu/drm/panthor/panthor_sched.h
index f4a475aa34c0a..9a8692de8aded 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.h
+++ b/drivers/gpu/drm/panthor/panthor_sched.h
@@ -50,6 +50,7 @@ void panthor_sched_suspend(struct panthor_device *ptdev);
 void panthor_sched_resume(struct panthor_device *ptdev);
 
 void panthor_sched_report_mmu_fault(struct panthor_device *ptdev);
+void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev);
 void panthor_sched_report_fw_events(struct panthor_device *ptdev, u32 events);
 
 void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile);
-- 
2.43.0