This is v3 of [1]. This patch works around the inability to perform partial unmaps of a VM
region backed by huge pages. Since those are now disallowed, the patch makes sure unmaps are
done at backing-page granularity, and then restores the regions left untouched by the VM_BIND
unmap operation.
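For illustration, here is a minimal sketch of the interval computation involved (not the
actual panthor code; ALIGN()/ALIGN_DOWN() are the usual kernel helpers, and the function and
parameter names here are made up):

static void split_partial_unmap(u64 start, u64 range, u64 granule,
                                u64 *unmap_start, u64 *unmap_range,
                                u64 *head_range, u64 *tail_range)
{
        u64 end = start + range;
        u64 aligned_start = ALIGN_DOWN(start, granule);
        u64 aligned_end = ALIGN(end, granule);

        /* Unmap whole granules covering the requested span. */
        *unmap_start = aligned_start;
        *unmap_range = aligned_end - aligned_start;

        /* The pieces the unmap request did not touch get remapped. */
        *head_range = start - aligned_start;    /* before the hole */
        *tail_range = aligned_end - end;        /* after the hole  */
}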
A patch series with IGT tests that validate this functionality can be found at [2].
Changelog:
v3:
- Reworked the address logic so that the prev and next gpuva ops' VAs are used in the
calculations instead of those of the original unmap VMA (illustrated in the sketch after
this changelog).
- Got rid of the return struct from get_map_unmap_intervals(); panthor_vm_map_pages()
arguments are now derived from the gpuvas' respective GEM object offsets.
- Use folio_size() instead of folio_order(), since the latter expresses page sizes from the
CPU MMU's perspective rather than the GPU's.
v2:
- Fixed a bug caused by confusing the semantics of the prev and next gpuva ops' boundaries
with those of the original VMA object.
- Coalesced all unmap operations into a single one.
- Refactored and simplified code.
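To make the v3 address logic above concrete, here is a hedged sketch (not the patch itself):
the prev/next fields come from the DRM GPUVM remap op, while the panthor_vm_map_pages() call
signature is simplified for illustration.

static void remap_preserved_regions(const struct drm_gpuva_op_remap *remap)
{
        const struct drm_gpuva_op_map *prev = remap->prev;
        const struct drm_gpuva_op_map *next = remap->next;

        /* Remap the region preserved before the unmapped hole. */
        if (prev)
                panthor_vm_map_pages(prev->va.addr, prev->gem.offset,
                                     prev->va.range);

        /* Remap the region preserved after the unmapped hole. */
        if (next)
                panthor_vm_map_pages(next->va.addr, next->gem.offset,
                                     next->va.range);
}

folio_size() fits this logic because it yields a byte count that can be compared against GPU
page sizes directly, whereas folio_order() is relative to the CPU's PAGE_SIZE.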
[1] https://lore.kernel.org/dri-devel/20251127035021.624045-1-adrian.larumbe@collabora.com/
[2] https://lore.kernel.org/igt-dev/20251213190205.2435793-1-adrian.larumbe@collabora.com/T/#t
Adrián Larumbe (1):
drm/panthor: Support partial unmaps of huge pages
drivers/gpu/drm/panthor/panthor_mmu.c | 66 +++++++++++++++++++++++++++
1 file changed, 66 insertions(+)
--
2.51.2