From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Sean Paul,
    Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
    Marijn Suijten, David Airlie, Simona Vetter, Arnd Bergmann,
    Krzysztof Kozlowski, Eugene Lepshy, Haoxiang Li,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 15/42] drm/msm: Use drm_gpuvm types more
Date: Sun, 29 Jun 2025 13:12:58 -0700
Message-ID: <20250629201530.25775-16-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Most of the driver code doesn't need to reach into msm-specific fields, so
just use the drm_gpuvm/drm_gpuva types directly.  This should hopefully
improve commonality with other drivers and make the code easier to
understand.
Signed-off-by: Rob Clark Signed-off-by: Rob Clark Tested-by: Antonino Maniscalco Reviewed-by: Antonino Maniscalco --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 11 +-- drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 19 +++-- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 4 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 6 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 11 +-- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 11 +-- drivers/gpu/drm/msm/dsi/dsi_host.c | 6 +- drivers/gpu/drm/msm/msm_drv.h | 4 +- drivers/gpu/drm/msm/msm_fb.c | 4 +- drivers/gpu/drm/msm/msm_gem.c | 94 +++++++++++------------ drivers/gpu/drm/msm/msm_gem.h | 59 +++++++------- drivers/gpu/drm/msm/msm_gem_submit.c | 8 +- drivers/gpu/drm/msm/msm_gem_vma.c | 70 +++++++---------- drivers/gpu/drm/msm/msm_gpu.c | 18 +++-- drivers/gpu/drm/msm/msm_gpu.h | 10 +-- drivers/gpu/drm/msm/msm_kms.c | 6 +- drivers/gpu/drm/msm/msm_kms.h | 2 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 23 files changed, 175 insertions(+), 189 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/ad= reno/a2xx_gpu.c index 889480aa13ba..ec38db45d8a3 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu) uint32_t *ptr, len; int i, ret; =20 - a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error); + a2xx_gpummu_params(to_msm_vm(gpu->vm)->mmu, &pt_base, &tran_error); =20 DBG("%s", gpu->name); =20 @@ -466,11 +466,11 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struc= t msm_gpu *gpu) return state; } =20 -static struct msm_gem_vm * +static struct drm_gpuvm * a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct msm_mmu *mmu =3D a2xx_gpummu_new(&pdev->dev, gpu); - 
struct msm_gem_vm *vm; + struct drm_gpuvm *vm; =20 vm =3D msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, tr= ue); =20 diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/ad= reno/a5xx_gpu.c index 04138a06724b..ee927d8cc0dc 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -1786,7 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } =20 - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a5xx_fault_handler); =20 /* Set up the preemption specific bits and pieces for each ringbuffer */ a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/ad= reno/a6xx_gmu.c index 77d9ff9632d1..28e6705c6da6 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1259,6 +1259,8 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu) =20 static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) { + struct msm_mmu *mmu =3D to_msm_vm(gmu->vm)->mmu; + msm_gem_kernel_put(gmu->hfi.obj, gmu->vm); msm_gem_kernel_put(gmu->debug.obj, gmu->vm); msm_gem_kernel_put(gmu->icache.obj, gmu->vm); @@ -1266,8 +1268,8 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) msm_gem_kernel_put(gmu->dummy.obj, gmu->vm); msm_gem_kernel_put(gmu->log.obj, gmu->vm); =20 - gmu->vm->mmu->funcs->detach(gmu->vm->mmu); - msm_gem_vm_put(gmu->vm); + mmu->funcs->detach(mmu); + drm_gpuvm_put(gmu->vm); } =20 static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo = *bo, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/ad= reno/a6xx_gmu.h index fc288dfe889f..d1ce11131ba6 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -62,7 +62,7 @@ struct a6xx_gmu { /* For serializing communication with the GMU: */ struct mutex lock; =20 - struct msm_gem_vm *vm; + struct drm_gpuvm 
*vm; =20 void __iomem *mmio; void __iomem *rscc; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/ad= reno/a6xx_gpu.c index 262129cb4415..0b78888c58af 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gp= u, if (ctx->seqno =3D=3D ring->cur_ctx_seqno) return; =20 - if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid)) return; =20 if (adreno_gpu->info->family >=3D ADRENO_7XX_GEN1) { @@ -2256,7 +2256,7 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, st= ruct dev_pm_opp *opp, mutex_unlock(&a6xx_gpu->gmu.lock); } =20 -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu =3D to_adreno_gpu(gpu); @@ -2274,12 +2274,12 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform= _device *pdev) return adreno_iommu_create_vm(gpu, pdev, quirks); } =20 -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_private_vm(struct msm_gpu *gpu) { struct msm_mmu *mmu; =20 - mmu =3D msm_iommu_pagetable_create(gpu->vm->mmu); + mmu =3D msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu); =20 if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -2559,7 +2559,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) =20 adreno_gpu->uche_trap_base =3D 0x1fffffffff000ull; =20 - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a6xx_fault_handler); =20 a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/ms= m/adreno/a6xx_preempt.c index f6194a57f794..9e7f2e5fb2b9 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c @@ -377,7 
+377,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, =20 struct a7xx_cp_smmu_info *smmu_info_ptr =3D ptr; =20 - msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid); + msm_iommu_pagetable_params(to_msm_vm(gpu->vm)->mmu, &ttbr, &asid); =20 smmu_info_ptr->magic =3D GEN7_CP_SMMU_INFO_MAGIC; smmu_info_ptr->ttbr0 =3D ttbr; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/= adreno/adreno_gpu.c index 46199a6d0e41..676fc078d545 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 p= asid) return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid); } =20 -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { return adreno_iommu_create_vm(gpu, pdev, 0); } =20 -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks) { struct iommu_domain_geometry *geometry; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; u64 start, size; =20 mmu =3D msm_iommu_gpu_new(&pdev->dev, gpu, quirks); @@ -275,9 +275,11 @@ void adreno_check_and_reenable_stall(struct adreno_gpu= *adreno_gpu) if (!priv->stall_enabled && ktime_after(ktime_get(), priv->stall_reenable_time) && !READ_ONCE(gpu->crashstate)) { + struct msm_mmu *mmu =3D to_msm_vm(gpu->vm)->mmu; + priv->stall_enabled =3D true; =20 - gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); + mmu->funcs->set_stall(mmu, true); } spin_unlock_irqrestore(&priv->fault_stall_lock, flags); } @@ -292,6 +294,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned = long iova, int flags, u32 scratch[4]) { struct msm_drm_private *priv =3D gpu->dev->dev_private; + struct msm_mmu *mmu =3D to_msm_vm(gpu->vm)->mmu; const char *type =3D "UNKNOWN"; bool do_devcoredump =3D info && (info->fsr & ARM_SMMU_FSR_SS) && !READ_ONCE(gpu->crashstate); @@ 
-305,7 +308,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned = long iova, int flags, if (priv->stall_enabled) { priv->stall_enabled =3D false; =20 - gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); + mmu->funcs->set_stall(mmu, false); } =20 priv->stall_reenable_time =3D ktime_add_ms(ktime_get(), 500); @@ -405,7 +408,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_co= ntext *ctx, return 0; case MSM_PARAM_FAULTS: if (ctx->vm) - *value =3D gpu->global_faults + ctx->vm->faults; + *value =3D gpu->global_faults + to_msm_vm(ctx->vm)->faults; else *value =3D gpu->global_faults; return 0; @@ -415,12 +418,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_= context *ctx, case MSM_PARAM_VA_START: if (ctx->vm =3D=3D gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value =3D ctx->vm->base.mm_start; + *value =3D ctx->vm->mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm =3D=3D gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value =3D ctx->vm->base.mm_range; + *value =3D ctx->vm->mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value =3D adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/= adreno/adreno_gpu.h index b1761f990aa1..8650bbd8698e 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -622,11 +622,11 @@ void adreno_show_object(struct drm_printer *p, void *= *ptr, int len, * Common helper function to initialize the default address space for arm-= smmu * attached targets */ -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev); =20 -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/= disp/dpu1/dpu_kms.c index 2c5687a188b6..f7d0f39bcc5b 100644 --- 
a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1098,17 +1098,17 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dp= u_kms) if (!dpu_kms->base.vm) return; =20 - mmu =3D dpu_kms->base.vm->mmu; + mmu =3D to_msm_vm(dpu_kms->base.vm)->mmu; =20 mmu->funcs->detach(mmu); - msm_gem_vm_put(dpu_kms->base.vm); + drm_gpuvm_put(dpu_kms->base.vm); =20 dpu_kms->base.vm =3D NULL; } =20 static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; =20 vm =3D msm_kms_init_vm(dpu_kms->dev); if (IS_ERR(vm)) diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm= /disp/mdp4/mdp4_kms.c index a867c684c6d6..9acde91ad6c3 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -122,15 +122,16 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms =3D to_mdp4_kms(to_mdp_kms(kms)); struct device *dev =3D mdp4_kms->dev->dev; - struct msm_gem_vm *vm =3D kms->vm; =20 if (mdp4_kms->blank_cursor_iova) msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); =20 - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu =3D to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } =20 if (mdp4_kms->rpm_enabled) @@ -398,7 +399,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms =3D to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms =3D NULL; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int ret; u32 major, minor; unsigned long max_clk; diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm= /disp/mdp5/mdp5_kms.c index 9dca0385a42d..b6e6bd1f95ee 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,12 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static 
void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms =3D to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_vm *vm =3D kms->vm; =20 - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu =3D to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } =20 mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +501,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms =3D priv->kms; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int i, ret; =20 ret =3D mdp5_init(to_platform_device(dev->dev), dev); diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/d= si_host.c index 16335ebd21e4..2d1699b7dc93 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { =20 /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; =20 /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,7 +1146,7 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host= , int size) uint64_t iova; u8 *data; =20 - msm_host->vm =3D msm_gem_vm_get(priv->kms->vm); + msm_host->vm =3D drm_gpuvm_get(priv->kms->vm); =20 data =3D msm_gem_kernel_new(dev, size, MSM_BO_WC, msm_host->vm, @@ -1194,7 +1194,7 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) =20 if (msm_host->tx_gem_obj) { msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); - msm_gem_vm_put(msm_host->vm); + drm_gpuvm_put(msm_host->vm); msm_host->tx_gem_obj =3D NULL; msm_host->vm =3D NULL; } diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index eb009bd193e3..0fe3c9a24baa 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,8 +48,6 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_vm; -struct msm_gem_vma; struct msm_disp_state; =20 #define 
MAX_CRTCS 8 @@ -253,7 +251,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); =20 -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); +struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); =20 int msm_ioctl_gem_submit(struct drm_device *dev, void *data, diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 3b17d83f6673..8ae2f326ec54 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -78,7 +78,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb,= struct seq_file *m) int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb) { struct msm_drm_private *priv =3D fb->dev->dev_private; - struct msm_gem_vm *vm =3D priv->kms->vm; + struct drm_gpuvm *vm =3D priv->kms->vm; struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); int ret, i, n =3D fb->format->num_planes; =20 @@ -102,7 +102,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,= bool needs_dirtyfb) void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirty= fb) { struct msm_drm_private *priv =3D fb->dev->dev_private; - struct msm_gem_vm *vm =3D priv->kms->vm; + struct drm_gpuvm *vm =3D priv->kms->vm; struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); int i, n =3D fb->format->num_planes; =20 diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 82293806219a..763bafcff4cc 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -45,20 +45,20 @@ static int msm_gem_open(struct drm_gem_object *obj, str= uct drm_file *file) =20 static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *= vm, bool close); =20 -static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm) +static void detach_vm(struct drm_gem_object *obj, struct 
drm_gpuvm *vm) { msm_gem_assert_locked(obj); - drm_gpuvm_resv_assert_held(&vm->base); + drm_gpuvm_resv_assert_held(vm); =20 - struct drm_gpuvm_bo *vm_bo =3D drm_gpuvm_bo_find(&vm->base, obj); + struct drm_gpuvm_bo *vm_bo =3D drm_gpuvm_bo_find(vm, obj); if (vm_bo) { struct drm_gpuva *vma; =20 drm_gpuvm_bo_for_each_va (vma, vm_bo) { - if (vma->vm !=3D &vm->base) + if (vma->vm !=3D vm) continue; - msm_gem_vma_purge(to_msm_vma(vma)); - msm_gem_vma_close(to_msm_vma(vma)); + msm_gem_vma_purge(vma); + msm_gem_vma_close(vma); break; } =20 @@ -89,7 +89,7 @@ static void msm_gem_close(struct drm_gem_object *obj, str= uct drm_file *file) msecs_to_jiffies(1000)); =20 msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); - put_iova_spaces(obj, &ctx->vm->base, true); + put_iova_spaces(obj, ctx->vm, true); detach_vm(obj, ctx->vm); drm_exec_fini(&exec); /* drop locks */ } @@ -386,8 +386,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } =20 -static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { struct drm_gpuvm_bo *vm_bo; =20 @@ -397,13 +397,13 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_= object *obj, struct drm_gpuva *vma; =20 drm_gpuvm_bo_for_each_va (vma, vm_bo) { - if (vma->vm =3D=3D &vm->base) { + if (vma->vm =3D=3D vm) { /* lookup_vma() should only be used in paths * with at most one vma per vm */ GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); =20 - return to_msm_vma(vma); + return vma; } } } @@ -433,22 +433,20 @@ put_iova_spaces(struct drm_gem_object *obj, struct dr= m_gpuvm *vm, bool close) drm_gpuvm_bo_get(vm_bo); =20 drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); - - msm_gem_vma_purge(msm_vma); + msm_gem_vma_purge(vma); if (close) - msm_gem_vma_close(msm_vma); + msm_gem_vma_close(vma); } =20 drm_gpuvm_bo_put(vm_bo); } } =20 -static struct msm_gem_vma 
*get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, - u64 range_start, u64 range_end) +static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm, u64 range_start, + u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; =20 msm_gem_assert_locked(obj); =20 @@ -457,14 +455,14 @@ static struct msm_gem_vma *get_vma_locked(struct drm_= gem_object *obj, if (!vma) { vma =3D msm_gem_vma_new(vm, obj, range_start, range_end); } else { - GEM_WARN_ON(vma->base.va.addr < range_start); - GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); + GEM_WARN_ON(vma->va.addr < range_start); + GEM_WARN_ON((vma->va.addr + obj->size) > range_end); } =20 return vma; } =20 -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma = *vma) +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *v= ma) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct page **pages; @@ -517,17 +515,17 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) update_lru_active(obj); } =20 -struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { return get_vma_locked(obj, vm, 0, U64_MAX); } =20 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; int ret; =20 msm_gem_assert_locked(obj); @@ -538,7 +536,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem= _object *obj, =20 ret =3D msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova =3D vma->base.va.addr; + *iova =3D vma->va.addr; pin_obj_locked(obj); } =20 @@ -550,8 +548,8 @@ static int get_and_pin_iova_range_locked(struct drm_gem= _object *obj, * limits iova to specified range (in pages) */ int 
msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { struct drm_exec exec; int ret; @@ -564,8 +562,8 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_objec= t *obj, } =20 /* get iova and pin it. Should have a matching put */ -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm = *vm, + uint64_t *iova) { return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } @@ -574,10 +572,10 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *o= bj, * Get an iova but don't pin it. Doesn't need a put because iovas are curr= ently * valid for the life of the object */ -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; struct drm_exec exec; int ret =3D 0; =20 @@ -586,7 +584,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); } else { - *iova =3D vma->base.va.addr; + *iova =3D vma->va.addr; } drm_exec_fini(&exec); /* drop locks */ =20 @@ -594,9 +592,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } =20 static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct drm_gpuvm *vm) { - struct msm_gem_vma *vma =3D lookup_vma(obj, vm); + struct drm_gpuva *vma =3D lookup_vma(obj, vm); =20 if (!vma) return 0; @@ -615,7 +613,7 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. 
*/ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova) + struct drm_gpuvm *vm, uint64_t iova) { struct drm_exec exec; int ret =3D 0; @@ -624,11 +622,11 @@ int msm_gem_set_iova(struct drm_gem_object *obj, if (!iova) { ret =3D clear_iova(obj, vm); } else { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; vma =3D get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->base.va.addr !=3D iova)) { + } else if (GEM_WARN_ON(vma->va.addr !=3D iova)) { clear_iova(obj, vm); ret =3D -EBUSY; } @@ -643,10 +641,9 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * purged until something else (shrinker, mm_notifier, destroy, etc) decid= es * to get rid of it */ -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; struct drm_exec exec; =20 msm_gem_lock_vm_and_obj(&exec, obj, vm); @@ -1276,9 +1273,9 @@ struct drm_gem_object *msm_gem_import(struct drm_devi= ce *dev, return ERR_PTR(ret); } =20 -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova) +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t f= lags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova) { void *vaddr; struct drm_gem_object *obj =3D msm_gem_new(dev, size, flags); @@ -1311,8 +1308,7 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint= 32_t size, =20 } =20 -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm) +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm) { if (IS_ERR_OR_NULL(bo)) return; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 33885a08cdd7..892e4132fa72 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h 
@@ -79,12 +79,7 @@ struct msm_gem_vm {
 };
 #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
 
-struct msm_gem_vm *
-msm_gem_vm_get(struct msm_gem_vm *vm);
-
-void msm_gem_vm_put(struct msm_gem_vm *vm);
-
-struct msm_gem_vm *
+struct drm_gpuvm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed);
 
@@ -113,12 +108,12 @@ struct msm_gem_vma {
 };
 #define to_msm_vma(x) container_of(x, struct msm_gem_vma, base)
 
-struct msm_gem_vma *
-msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+struct drm_gpuva *
+msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct msm_gem_vma *vma);
-int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size);
-void msm_gem_vma_close(struct msm_gem_vma *vma);
+void msm_gem_vma_purge(struct drm_gpuva *vma);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
 	struct drm_gem_object base;
@@ -163,22 +158,21 @@ struct msm_gem_object {
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
 void msm_gem_unpin_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_active(struct drm_gem_object *obj);
-struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm);
-int msm_gem_get_iova(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm, uint64_t *iova);
-int msm_gem_set_iova(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm, uint64_t iova);
+struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+					 struct drm_gpuvm *vm);
+int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		     uint64_t *iova);
+int msm_gem_set_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		     uint64_t iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm, uint64_t *iova,
-		u64 range_start, u64 range_end);
-int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm, uint64_t *iova);
-void msm_gem_unpin_iova(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm);
+				   struct drm_gpuvm *vm, uint64_t *iova,
+				   u64 range_start, u64 range_end);
+int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+			     uint64_t *iova);
+void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm);
 void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
 struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
@@ -199,11 +193,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 		uint32_t size, uint32_t flags, uint32_t *handle, char *name);
 struct drm_gem_object *msm_gem_new(struct drm_device *dev,
 		uint32_t size, uint32_t flags);
-void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-		uint32_t flags, struct msm_gem_vm *vm,
-		struct drm_gem_object **bo, uint64_t *iova);
-void msm_gem_kernel_put(struct drm_gem_object *bo,
-		struct msm_gem_vm *vm);
+void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags,
+			 struct drm_gpuvm *vm, struct drm_gem_object **bo,
+			 uint64_t *iova);
+void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm);
 struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 		struct dma_buf *dmabuf, struct sg_table *sgt);
 __printf(2, 3)
@@ -254,14 +247,14 @@ msm_gem_unlock(struct drm_gem_object *obj)
 static inline int
 msm_gem_lock_vm_and_obj(struct drm_exec *exec,
 			struct drm_gem_object *obj,
-			struct msm_gem_vm *vm)
+			struct drm_gpuvm *vm)
 {
 	int ret = 0;
 
 	drm_exec_init(exec, 0, 2);
 	drm_exec_until_all_locked (exec) {
-		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base));
-		if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base)))
+		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(vm));
+		if (!ret && (obj->resv != drm_gpuvm_resv(vm)))
 			ret = drm_exec_lock_obj(exec, obj);
 		drm_exec_retry_on_contention(exec);
 		if (GEM_WARN_ON(ret))
@@ -328,7 +321,7 @@ struct msm_gem_submit {
 	struct kref ref;
 	struct drm_device *dev;
 	struct msm_gpu *gpu;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	struct list_head node;   /* node in ring submit list */
 	struct drm_exec exec;
 	uint32_t seqno;         /* Sequence number of the submit on the ring */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index bd8e465e8049..d8ff6aeb04ab 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -264,7 +264,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit)
 
 	drm_exec_until_all_locked (&submit->exec) {
 		ret = drm_exec_lock_obj(&submit->exec,
-					drm_gpuvm_resv_obj(&submit->vm->base));
+					drm_gpuvm_resv_obj(submit->vm));
 		drm_exec_retry_on_contention(&submit->exec);
 		if (ret)
 			goto error;
@@ -315,7 +315,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 
 	for (i = 0; i < submit->nr_bos; i++) {
 		struct drm_gem_object *obj = submit->bos[i].obj;
-		struct msm_gem_vma *vma;
+		struct drm_gpuva *vma;
 
 		/* if locking succeeded, pin bo: */
 		vma = msm_gem_get_vma_locked(obj, submit->vm);
@@ -328,8 +328,8 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		if (ret)
 			break;
 
-		submit->bos[i].vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
-		submit->bos[i].iova = vma->base.va.addr;
+		submit->bos[i].vm_bo = drm_gpuvm_bo_get(vma->vm_bo);
+		submit->bos[i].iova = vma->va.addr;
 	}
 
 	/*
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index ccb20897a2b0..df8eb910ca31 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -20,52 +20,38 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 	kfree(vm);
 }
 
-
-void msm_gem_vm_put(struct msm_gem_vm *vm)
-{
-	if (vm)
-		drm_gpuvm_put(&vm->base);
-}
-
-struct msm_gem_vm *
-msm_gem_vm_get(struct msm_gem_vm *vm)
-{
-	if (!IS_ERR_OR_NULL(vm))
-		drm_gpuvm_get(&vm->base);
-
-	return vm;
-}
-
 /* Actually unmap memory for the vma */
-void msm_gem_vma_purge(struct msm_gem_vma *vma)
+void msm_gem_vma_purge(struct drm_gpuva *vma)
 {
-	struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
-	unsigned size = vma->base.va.range;
+	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+	unsigned size = vma->va.range;
 
 	/* Don't do anything if the memory isn't mapped */
-	if (!vma->mapped)
+	if (!msm_vma->mapped)
 		return;
 
-	vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size);
+	vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size);
 
-	vma->mapped = false;
+	msm_vma->mapped = false;
 }
 
 /* Map and pin vma: */
 int
-msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
+msm_gem_vma_map(struct drm_gpuva *vma, int prot,
 		struct sg_table *sgt, int size)
 {
-	struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
+	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 	int ret;
 
-	if (GEM_WARN_ON(!vma->base.va.addr))
+	if (GEM_WARN_ON(!vma->va.addr))
 		return -EINVAL;
 
-	if (vma->mapped)
+	if (msm_vma->mapped)
 		return 0;
 
-	vma->mapped = true;
+	msm_vma->mapped = true;
 
 	/*
 	 * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
@@ -76,38 +62,40 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot);
+	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
 
 	if (ret) {
-		vma->mapped = false;
+		msm_vma->mapped = false;
 	}
 
 	return ret;
 }
 
 /* Close an iova.  Warn if it is still in use */
-void msm_gem_vma_close(struct msm_gem_vma *vma)
+void msm_gem_vma_close(struct drm_gpuva *vma)
 {
-	struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
+	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 
-	GEM_WARN_ON(vma->mapped);
+	GEM_WARN_ON(msm_vma->mapped);
 
 	drm_gpuvm_resv_assert_held(&vm->base);
 
-	if (vma->base.va.addr)
-		drm_mm_remove_node(&vma->node);
+	if (vma->va.addr && vm->managed)
+		drm_mm_remove_node(&msm_vma->node);
 
-	drm_gpuva_remove(&vma->base);
-	drm_gpuva_unlink(&vma->base);
+	drm_gpuva_remove(vma);
+	drm_gpuva_unlink(vma);
 
 	kfree(vma);
 }
 
 /* Create a new vma and allocate an iova for it */
-struct msm_gem_vma *
-msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+struct drm_gpuva *
+msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 		u64 range_start, u64 range_end)
 {
+	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
 	struct drm_gpuvm_bo *vm_bo;
 	struct msm_gem_vma *vma;
 	int ret;
@@ -149,7 +137,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	drm_gpuva_link(&vma->base, vm_bo);
 	GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
 
-	return vma;
+	return &vma->base;
 
 err_va_remove:
 	drm_gpuva_remove(&vma->base);
@@ -179,7 +167,7 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = {
  * handles virtual address allocation, and both async and sync operations
  * are supported.
  */
-struct msm_gem_vm *
+struct drm_gpuvm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed)
 {
@@ -215,7 +203,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 
 	drm_mm_init(&vm->mm, va_start, va_size);
 
-	return vm;
+	return &vm->base;
 
 err_free_vm:
 	kfree(vm);
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 47268aae7d54..fc4d6c9049b0 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -285,7 +285,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 
 	if (state->fault_info.ttbr0) {
 		struct msm_gpu_fault_info *info = &state->fault_info;
-		struct msm_mmu *mmu = submit->vm->mmu;
+		struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
 
 		msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
 					   &info->asid);
@@ -390,7 +390,7 @@ static void recover_worker(struct kthread_work *work)
 	/* Increment the fault counts */
 	submit->queue->faults++;
 	if (submit->vm)
-		submit->vm->faults++;
+		to_msm_vm(submit->vm)->faults++;
 
 	get_comm_cmdline(submit, &comm, &cmd);
 
@@ -828,10 +828,11 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
 }
 
 /* Return a new address space for a msm_drm_private instance */
-struct msm_gem_vm *
+struct drm_gpuvm *
 msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
 {
-	struct msm_gem_vm *vm = NULL;
+	struct drm_gpuvm *vm = NULL;
+
 	if (!gpu)
 		return NULL;
 
@@ -842,11 +843,11 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
 	if (gpu->funcs->create_private_vm) {
 		vm = gpu->funcs->create_private_vm(gpu);
 		if (!IS_ERR(vm))
-			vm->pid = get_pid(task_pid(task));
+			to_msm_vm(vm)->pid = get_pid(task_pid(task));
 	}
 
 	if (IS_ERR_OR_NULL(vm))
-		vm = msm_gem_vm_get(gpu->vm);
+		vm = drm_gpuvm_get(gpu->vm);
 
 	return vm;
 }
@@ -1014,8 +1015,9 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
 	msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
 
 	if (!IS_ERR_OR_NULL(gpu->vm)) {
-		gpu->vm->mmu->funcs->detach(gpu->vm->mmu);
-		msm_gem_vm_put(gpu->vm);
+		struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
+		mmu->funcs->detach(mmu);
+		drm_gpuvm_put(gpu->vm);
 	}
 
 	if (gpu->worker) {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 9d69dcad6612..231577656fae 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -78,8 +78,8 @@ struct msm_gpu_funcs {
 	/* note: gpu_set_freq() can assume that we have been pm_resumed */
 	void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 			     bool suspended);
-	struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
-	struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu);
+	struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
+	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 
 	/**
@@ -234,7 +234,7 @@ struct msm_gpu {
 	void __iomem *mmio;
 	int irq;
 
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	/* Power Control: */
 	struct regulator *gpu_reg, *gpu_cx;
@@ -357,7 +357,7 @@ struct msm_context {
 	int queueid;
 
 	/** @vm: the per-process GPU address-space */
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	/** @kref: the reference count */
 	struct kref ref;
@@ -667,7 +667,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs,
 		const char *name, struct msm_gpu_config *config);
 
-struct msm_gem_vm *
+struct drm_gpuvm *
 msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
 
 void msm_gpu_cleanup(struct msm_gpu *gpu);
diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
index 6458bd82a0cd..e82b8569a468 100644
--- a/drivers/gpu/drm/msm/msm_kms.c
+++ b/drivers/gpu/drm/msm/msm_kms.c
@@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void
 	return -ENOSYS;
 }
 
-struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
+struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev)
 {
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	struct msm_mmu *mmu;
 	struct device *mdp_dev = dev->dev;
 	struct device *mdss_dev = mdp_dev->parent;
@@ -212,7 +212,7 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
 		return vm;
 	}
 
-	msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler);
+	msm_mmu_set_fault_handler(to_msm_vm(vm)->mmu, kms, msm_kms_fault_handler);
 
 	return vm;
 }
diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
index f45996a03e15..7cdb2eb67700 100644
--- a/drivers/gpu/drm/msm/msm_kms.h
+++ b/drivers/gpu/drm/msm/msm_kms.h
@@ -139,7 +139,7 @@ struct msm_kms {
 	atomic_t fault_snapshot_capture;
 
 	/* mapper-id used to request GEM buffer mapped for scanout: */
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	/* disp snapshot support */
 	struct kthread_worker *dump_worker;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 6298233c3568..8ced49c7557b 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref)
 		kfree(ctx->entities[i]);
 	}
 
-	msm_gem_vm_put(ctx->vm);
+	drm_gpuvm_put(ctx->vm);
 	kfree(ctx->comm);
 	kfree(ctx->cmdline);
 	kfree(ctx);
-- 
2.50.0