Am Samstag, dem 26.10.2024 um 04:43 +0800 schrieb Sui Jingfeng:
> Etnaviv assumes that the GPU page size is 4KiB. However, when running
> softpin-capable GPUs on a kernel configured with a larger CPU page
> size, the userspace-allocated GPUVA ranges collide and cannot be
> inserted into the specified address holes exactly.
>
>
> For example, when running glmark2-drm:
>
> [kernel space debug log]
>
> etnaviv 0000:03:00.0: Insert bo failed, va: 0xfd38b000, size: 0x4000
> etnaviv 0000:03:00.0: Insert bo failed, va: 0xfd38a000, size: 0x4000
>
> [user space debug log]
>
> bo->va = 0xfd38c000, bo->size=0x100000
> bo->va = 0xfd38b000, bo->size=0x1000 <-- Insert IOVA fails here.
> bo->va = 0xfd38a000, bo->size=0x1000
> bo->va = 0xfd389000, bo->size=0x1000
>
>
> The root cause is that a kernel-side BO occupies a larger GPUVA range
> than userspace assumes.
>
> To solve this problem, we first track the GPU-visible size of a GEM
> buffer object, then map and unmap the GEM BO exactly according to that
> GPUVA size. This ensures that the GPU VA range is fully mapped and
> unmapped, not more and not less.
>
Thanks, series applied to etnaviv/next
> v2:
> - Aligned to the GPU page size (Lucas)
>
> v1:
> - No GPUVA range wasting (Lucas)
> Link: https://lore.kernel.org/dri-devel/20241004194207.1013744-1-sui.jingfeng@linux.dev/
>
> v0:
> Link: https://lore.kernel.org/dri-devel/20240930221706.399139-1-sui.jingfeng@linux.dev/
>
> Sui Jingfeng (2):
> drm/etnaviv: Record GPU visible size of GEM BO separately
> drm/etnaviv: Map and unmap GPUVA range with respect to the GPUVA size
>
> drivers/gpu/drm/etnaviv/etnaviv_gem.c | 11 ++++----
> drivers/gpu/drm/etnaviv/etnaviv_gem.h | 5 ++++
> drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 36 +++++++++------------------
> 3 files changed, 22 insertions(+), 30 deletions(-)
>