From: Rob Clark <robin.clark@oss.qualcomm.com>
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Rob Clark, Sean Paul,
	Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
	David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 17/42] drm/msm: Add mmu support for non-zero offset
Date: Wed, 25 Jun 2025 11:47:10 -0700
Message-ID: <20250625184918.124608-18-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Only needs to be supported for iopgtables mmu, the other cases are
either only used for kernel managed mappings (where offset is always
zero) or devices which do not support sparse bindings.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a2xx_gpummu.c |  5 ++++-
 drivers/gpu/drm/msm/msm_gem.c            |  4 ++--
 drivers/gpu/drm/msm/msm_gem.h            |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c        | 13 +++++++------
 drivers/gpu/drm/msm/msm_iommu.c          | 22 ++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_mmu.h            |  2 +-
 6 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
index 39641551eeb6..6124336af2ec 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
@@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu)
 }
 
 static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu);
 	unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE;
 	struct sg_dma_page_iter dma_iter;
 	unsigned prot_bits = 0;
 
+	WARN_ON(off != 0);
+
 	if (prot & IOMMU_WRITE)
 		prot_bits |= 1;
 	if (prot & IOMMU_READ)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 09c40a7e04ac..194a15802a5f 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -457,7 +457,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
 	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
-		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
+		vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end);
 	} else {
 		GEM_WARN_ON(vma->va.addr < range_start);
 		GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
@@ -499,7 +499,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
+	return msm_gem_vma_map(vma, prot, msm_obj->sgt);
 }
 
 void msm_gem_unpin_locked(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 278ec34c31fc..2dd9a7f585f4 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -110,9 +110,9 @@ struct msm_gem_vma {
 
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end);
+		u64 offset, u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct drm_gpuva *vma);
-int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index df8eb910ca31..ef0efd87e4a6 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma)
 
 /* Map and pin vma: */
 int
-msm_gem_vma_map(struct drm_gpuva *vma, int prot,
-		struct sg_table *sgt, int size)
+msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
@@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
-
+	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
+				  vma->gem.offset, vma->va.range,
+				  prot);
 	if (ret) {
 		msm_vma->mapped = false;
 	}
@@ -93,7 +93,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 /* Create a new vma and allocate an iova for it */
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end)
+		u64 offset, u64 range_start, u64 range_end)
 {
 	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
 	struct drm_gpuvm_bo *vm_bo;
@@ -107,6 +107,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
+		BUG_ON(offset != 0);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						obj->size, PAGE_SIZE, 0,
 						range_start, range_end, 0);
@@ -120,7 +121,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 
 	GEM_WARN_ON((range_end - range_start) > obj->size);
 
-	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
+	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
 
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index e70088a91283..2fd48e66bc98 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 }
 
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
 	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
@@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
 
+		if (!len)
+			break;
+
+		if (size <= off) {
+			off -= size;
+			continue;
+		}
+
+		phys += off;
+		size -= off;
+		size = min_t(size_t, size, len);
+		off = 0;
+
 		while (size) {
 			size_t pgsize, count, mapped = 0;
 			int ret;
@@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 			phys += mapped;
 			addr += mapped;
 			size -= mapped;
+			len -= mapped;
 
 			if (ret) {
 				msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
@@ -400,11 +415,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu)
 }
 
 static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	size_t ret;
 
+	WARN_ON(off != 0);
+
 	/* The arm-smmu driver expects the addresses to be sign extended */
 	if (iova & BIT_ULL(48))
 		iova |= GENMASK_ULL(63, 49);
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c33247e459d6..c874852b7331 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -12,7 +12,7 @@ struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
-			size_t len, int prot);
+			size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*resume_translation)(struct msm_mmu *mmu);
-- 
2.49.0
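
The core of this change is the (off, len) windowing added to
msm_iommu_pagetable_map(): scatterlist entries that lie entirely before
`off` are skipped, the first in-window entry is trimmed at the front,
and mapping stops once `len` bytes have been consumed. Below is a
minimal standalone sketch of that walk; the names (map_window, seg) are
illustrative userspace C, not the kernel's sg_table/io-pgtable API.

/*
 * Sketch of the (off, len) windowing over a list of segment sizes,
 * which stand in for sg->length.  "Mapping" just prints what the
 * kernel code would hand to the io-pgtable ops.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

static void map_window(const size_t *seg, size_t nsegs,
                       uint64_t iova, size_t off, size_t len)
{
	uint64_t addr = iova;

	for (size_t i = 0; i < nsegs; i++) {
		size_t size = seg[i];

		/* Nothing left in the requested window. */
		if (!len)
			break;

		/* Whole segment lies before the window: consume the offset. */
		if (size <= off) {
			off -= size;
			continue;
		}

		/* Trim the front of the first in-window segment ... */
		size -= off;
		/* ... and clamp the tail to what remains of the window. */
		if (size > len)
			size = len;
		off = 0;

		printf("map %zu bytes of segment %zu at 0x%llx\n",
		       size, i, (unsigned long long)addr);
		addr += size;
		len -= size;
	}
}

int main(void)
{
	const size_t segs[] = { 4096, 8192, 4096 };

	/* Window starting 6144 bytes into the buffer, 6144 bytes long. */
	map_window(segs, 3, 0x10000, 6144, 6144);
	return 0;
}

Here map_window(segs, 3, 0x10000, 6144, 6144) skips the first 4 KiB
segment plus 2 KiB of the second and maps the remaining 6 KiB window,
mirroring how the patch lets a VMA bind a sub-range of a buffer, which
is what sparse bindings need.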