From: Ketil Johnsen
To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Grant Likely, Heiko Stuebner
Cc: Ketil Johnsen, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] drm/panthor: Evict groups before VM termination
Date: Thu, 18 Dec 2025 17:26:42 +0100
Message-ID: <20251218162644.828495-1-ketil.johnsen@arm.com>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Ensure all related groups are evicted and suspended before VM
destruction takes place.

This fixes an issue where panthor_vm_destroy() destroys and unmaps the
heap context while there are still on-slot groups using it. The FW
writes out to the heap context when a CSG (group) is suspended, so a
premature unmap of the heap context causes a GPU page fault. This page
fault is quite harmless and does not affect the continued operation of
the GPU.
Fixes: 647810ec2476 ("drm/panthor: Add the MMU/VM logical block")
Co-developed-by: Boris Brezillon
Signed-off-by: Ketil Johnsen
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/panthor/panthor_mmu.c   |  4 ++++
 drivers/gpu/drm/panthor/panthor_sched.c | 16 ++++++++++++++++
 drivers/gpu/drm/panthor/panthor_sched.h |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 74230f7199121..0e4b301a9c70e 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1537,6 +1537,10 @@ static void panthor_vm_destroy(struct panthor_vm *vm)
 
 	vm->destroyed = true;
 
+	/* Tell scheduler to stop all GPU work related to this VM */
+	if (refcount_read(&vm->as.active_cnt) > 0)
+		panthor_sched_prepare_for_vm_destruction(vm->ptdev);
+
 	mutex_lock(&vm->heaps.lock);
 	panthor_heap_pool_destroy(vm->heaps.pool);
 	vm->heaps.pool = NULL;
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index f680edcd40aad..fbbaab9b25efb 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -2930,6 +2930,22 @@ void panthor_sched_report_mmu_fault(struct panthor_device *ptdev)
 	sched_queue_delayed_work(ptdev->scheduler, tick, 0);
 }
 
+void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev)
+{
+	/* FW can write out internal state, like the heap context, during CSG
+	 * suspend. It is therefore important that the scheduler has fully
+	 * evicted any pending and related groups before VM destruction can
+	 * safely continue. Failure to do so can lead to GPU page faults.
+	 * A controlled termination of a Panthor instance involves destroying
+	 * the group(s) before the VM. This means any relevant group eviction
+	 * has already been initiated by this point, and we just need to
+	 * ensure that any pending tick_work() has been completed.
+	 */
+	if (ptdev->scheduler) {
+		flush_work(&ptdev->scheduler->tick_work.work);
+	}
+}
+
 void panthor_sched_resume(struct panthor_device *ptdev)
 {
 	/* Force a tick to re-evaluate after a resume. */
diff --git a/drivers/gpu/drm/panthor/panthor_sched.h b/drivers/gpu/drm/panthor/panthor_sched.h
index f4a475aa34c0a..9a8692de8aded 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.h
+++ b/drivers/gpu/drm/panthor/panthor_sched.h
@@ -50,6 +50,7 @@ void panthor_sched_suspend(struct panthor_device *ptdev);
 void panthor_sched_resume(struct panthor_device *ptdev);
 
 void panthor_sched_report_mmu_fault(struct panthor_device *ptdev);
+void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev);
 void panthor_sched_report_fw_events(struct panthor_device *ptdev, u32 events);
 
 void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile);
-- 
2.43.0
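
[Editor's note] For readers less familiar with the teardown ordering the
commit message describes, below is a minimal userspace model of the race
this patch closes. It is a sketch, not driver code: fake_heap_ctx,
tick_worker() and vm_destroy() are invented stand-ins for the FW-visible
heap context, the scheduler's tick_work() and panthor_vm_destroy(), and
pthread_join() plays the role of the flush_work() call added by the
patch.

/*
 * Userspace model of "writer still in flight during teardown".
 * Build: cc -pthread model.c -o model
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the heap context the FW writes to on CSG suspend. */
struct fake_heap_ctx {
	char suspend_record[64];
};

static struct fake_heap_ctx *heap_ctx;

/* Models tick_work(): suspends the group and writes out its state. */
static void *tick_worker(void *arg)
{
	(void)arg;
	/* If heap_ctx were already unmapped at this point, this write
	 * would land on an unmapped address -> the GPU page fault. */
	strcpy(heap_ctx->suspend_record, "CSG suspended");
	return NULL;
}

/* Models panthor_vm_destroy() with the fix applied: wait for any
 * pending "tick work" before tearing down the heap context. */
static void vm_destroy(pthread_t *tick)
{
	pthread_join(*tick, NULL);	/* the flush_work() analogue */
	free(heap_ctx);			/* safe: no writer left */
	heap_ctx = NULL;
}

int main(void)
{
	pthread_t tick;

	heap_ctx = calloc(1, sizeof(*heap_ctx));
	if (!heap_ctx)
		return 1;

	pthread_create(&tick, NULL, tick_worker, NULL);

	/* Without the join inside vm_destroy(), freeing heap_ctx here
	 * would race with the in-flight worker, just as unmapping the
	 * heap context raced with the FW's suspend write-out. */
	vm_destroy(&tick);

	puts("teardown completed without touching freed memory");
	return 0;
}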