From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Rob Clark, Sean Paul, Konrad Dybcio, Dmitry Baryshkov,
	Abhinav Kumar, Jessica Zhang, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 17/42] drm/msm: Add mmu support for non-zero offset
Date: Sun, 29 Jun 2025 13:13:00 -0700
Message-ID: <20250629201530.25775-18-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Only needs to be supported for iopgtables mmu, the other cases are
either only used for kernel managed mappings (where offset is always
zero) or devices which do not support sparse bindings.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpummu.c |  5 ++++-
 drivers/gpu/drm/msm/msm_gem.c            |  4 ++--
 drivers/gpu/drm/msm/msm_gem.h            |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c        | 13 +++++++------
 drivers/gpu/drm/msm/msm_iommu.c          | 22 ++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_mmu.h            |  2 +-
 6 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
index 4280f71e472a..0407c9bc8c1b 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
@@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu)
 }
 
 static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu);
 	unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE;
 	struct sg_dma_page_iter dma_iter;
 	unsigned prot_bits = 0;
 
+	WARN_ON(off != 0);
+
 	if (prot & IOMMU_WRITE)
 		prot_bits |= 1;
 	if (prot & IOMMU_READ)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 20d5e4b4d057..5c71a4be0dfa 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -453,7 +453,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
 	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
-		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
+		vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end);
 	} else {
 		GEM_WARN_ON(vma->va.addr < range_start);
 		GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
@@ -491,7 +491,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
+	return msm_gem_vma_map(vma, prot, msm_obj->sgt);
 }
 
 void msm_gem_unpin_locked(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index a18872ab1393..0e7b17b2093b 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -110,9 +110,9 @@ struct msm_gem_vma {
 
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end);
+		u64 offset, u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct drm_gpuva *vma);
-int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index df8eb910ca31..ef0efd87e4a6 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma)
 
 /* Map and pin vma: */
 int
-msm_gem_vma_map(struct drm_gpuva *vma, int prot,
-		struct sg_table *sgt, int size)
+msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
@@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
-
+	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
+				  vma->gem.offset, vma->va.range,
+				  prot);
 	if (ret) {
 		msm_vma->mapped = false;
 	}
@@ -93,7 +93,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 /* Create a new vma and allocate an iova for it */
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end)
+		u64 offset, u64 range_start, u64 range_end)
 {
 	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
 	struct drm_gpuvm_bo *vm_bo;
@@ -107,6 +107,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
+		BUG_ON(offset != 0);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						obj->size, PAGE_SIZE, 0,
 						range_start, range_end, 0);
@@ -120,7 +121,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 
 	GEM_WARN_ON((range_end - range_start) > obj->size);
 
-	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
+	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
 
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 739ce2c283a4..3c2eb59bfd49 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 }
 
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
 	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
@@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
 
+		if (!len)
+			break;
+
+		if (size <= off) {
+			off -= size;
+			continue;
+		}
+
+		phys += off;
+		size -= off;
+		size = min_t(size_t, size, len);
+		off = 0;
+
 		while (size) {
 			size_t pgsize, count, mapped = 0;
 			int ret;
@@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 			phys += mapped;
 			addr += mapped;
 			size -= mapped;
+			len -= mapped;
 
 			if (ret) {
 				msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
@@ -388,11 +403,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu)
 }
 
 static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	size_t ret;
 
+	WARN_ON(off != 0);
+
 	/* The arm-smmu driver expects the addresses to be sign extended */
 	if (iova & BIT_ULL(48))
 		iova |= GENMASK_ULL(63, 49);
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index 0c694907140d..9d61999f4d42 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -12,7 +12,7 @@ struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
-			size_t len, int prot);
+			size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*set_stall)(struct msm_mmu *mmu, bool enable);
-- 
2.50.0
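
The msm_iommu_pagetable_map() hunk above is where the new off/len pair
actually takes effect: the scatterlist walk skips whole entries that fall
inside the offset, trims the first usable entry, and stops once len bytes
have been mapped. Below is a rough standalone userspace sketch of that walk,
not part of the patch; "struct segment" and map_range() are made-up names
standing in for struct sg_table entries and the io-pgtable map operation.

/*
 * Illustrative sketch of the offset-skipping scatterlist walk.
 * Build with: cc -std=c99 -o map_range map_range.c
 */
#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

struct segment {
	uint64_t phys;	/* physical address of this segment */
	size_t len;	/* length of this segment in bytes */
};

/* Skip the first 'off' bytes of the segment list, map 'len' bytes at 'iova'. */
static void map_range(const struct segment *segs, int nsegs,
		      uint64_t iova, size_t off, size_t len)
{
	uint64_t addr = iova;

	for (int i = 0; i < nsegs; i++) {
		uint64_t phys = segs[i].phys;
		size_t size = segs[i].len;

		if (!len)
			break;

		/* Segment lies entirely inside the offset: skip it. */
		if (size <= off) {
			off -= size;
			continue;
		}

		/* First usable segment: start 'off' bytes into it. */
		phys += off;
		size -= off;
		if (size > len)
			size = len;
		off = 0;

		printf("map iova=0x%" PRIx64 " -> phys=0x%" PRIx64 " size=%zu\n",
		       addr, phys, size);

		addr += size;
		len -= size;
	}
}

int main(void)
{
	/* A buffer backed by two 16 KiB physical segments. */
	const struct segment segs[] = {
		{ .phys = 0x80000000, .len = 0x4000 },
		{ .phys = 0x90000000, .len = 0x4000 },
	};

	/* Map 8 KiB starting 12 KiB into the buffer: crosses a segment boundary. */
	map_range(segs, 2, 0x100000, 0x3000, 0x2000);
	return 0;
}

With these numbers the sketch prints two chunks, the tail 4 KiB of the first
segment followed by the leading 4 KiB of the second, which is the same split
the kernel loop then feeds page by page into the io-pgtable map call.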