From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Dmitry Baryshkov, Rob Clark, Sean Paul, Konrad Dybcio,
	Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Marijn Suijten,
	David Airlie, Simona Vetter, Arnd Bergmann, Jun Nie,
	Krzysztof Kozlowski, Eugene Lepshy, Haoxiang Li,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v8 06/42] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
Date: Sun, 29 Jun 2025 07:03:09 -0700
Message-ID: <20250629140537.30850-7-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629140537.30850-1-robin.clark@oss.qualcomm.com>
References: <20250629140537.30850-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
From: Rob Clark

Re-aligning naming to better match drm_gpuvm terminology will make
things less confusing at the end of the drm_gpuvm conversion. This is
just rename churn, no functional change.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c        | 18 ++--
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c        |  4 +-
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c        |  4 +-
 drivers/gpu/drm/msm/adreno/a5xx_debugfs.c    |  4 +-
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c        | 22 ++---
 drivers/gpu/drm/msm/adreno/a5xx_power.c      |  2 +-
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c    | 10 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c        | 26 +++---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h        |  2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c        | 45 +++++----
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c  |  6 +-
 drivers/gpu/drm/msm/adreno/a6xx_preempt.c    | 10 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c      | 46 ++++-----
 drivers/gpu/drm/msm/adreno/adreno_gpu.h      | 18 ++--
 .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c  | 14 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c  | 18 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h  |  2 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c      | 18 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c    | 14 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h    |  4 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c    |  6 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c     | 24 ++---
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c   | 12 +--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c    |  4 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c     | 18 ++--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c   | 12 +--
 drivers/gpu/drm/msm/dsi/dsi_host.c           | 14 +--
 drivers/gpu/drm/msm/msm_drv.c                |  8 +-
 drivers/gpu/drm/msm/msm_drv.h                | 10 +-
 drivers/gpu/drm/msm/msm_fb.c                 | 10 +-
 drivers/gpu/drm/msm/msm_fbdev.c              |  2 +-
 drivers/gpu/drm/msm/msm_gem.c                | 74 +++++++--------
 drivers/gpu/drm/msm/msm_gem.h                | 34 +++----
 drivers/gpu/drm/msm/msm_gem_submit.c         |  6 +-
 drivers/gpu/drm/msm/msm_gem_vma.c            | 93 +++++++++----------
 drivers/gpu/drm/msm/msm_gpu.c                | 46 ++++-----
 drivers/gpu/drm/msm/msm_gpu.h                | 16 ++--
 drivers/gpu/drm/msm/msm_kms.c                | 16 ++--
 drivers/gpu/drm/msm/msm_kms.h                |  2 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c         |  4 +-
 drivers/gpu/drm/msm/msm_submitqueue.c        |  2 +-
 41 files changed, 348 insertions(+), 352 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 379a3d346c30..5eb063ed0b46 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
 	uint32_t *ptr, len;
 	int i, ret;
 
-	a2xx_gpummu_params(gpu->aspace->mmu, &pt_base, &tran_error);
+	a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error);
 
 	DBG("%s", gpu->name);
 
@@ -466,19 +466,19 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu)
 	return state;
 }
 
-static struct msm_gem_address_space *
-a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
-	aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M,
+	vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
 		0xfff * SZ_64K);
 
-	if (IS_ERR(aspace) && !IS_ERR(mmu))
+	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
 
-	return aspace;
+	return vm;
 }
 
 static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -504,7 +504,7 @@ static const struct adreno_gpu_funcs funcs = {
 #endif
 		.gpu_state_get = a2xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = a2xx_create_address_space,
+		.create_vm = a2xx_create_vm,
 		.get_rptr = a2xx_get_rptr,
 	},
 };
@@ -551,7 +551,7 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
 	else
 		adreno_gpu->registers = a220_registers;
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		dev_err(dev->dev, "No memory protection without MMU\n");
 		if (!allow_vram_carveout) {
 			ret = -ENXIO;
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index b6df115bb567..434e6ededf83 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -526,7 +526,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a3xx_gpu_busy,
 		.gpu_state_get = a3xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a3xx_get_rptr,
 	},
 };
@@ -581,7 +581,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index f1b18a6663f7..2c75debcfd84 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -645,7 +645,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a4xx_gpu_busy,
 		.gpu_state_get = a4xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a4xx_get_rptr,
 	},
 	.get_timestamp = a4xx_get_timestamp,
@@ -695,7 +695,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
index 169b8fe688f8..625a4e787d8f 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
@@ -116,13 +116,13 @@ reset_set(void *data, u64 val)
 	adreno_gpu->fw[ADRENO_FW_PFP] = NULL;
 
 	if (a5xx_gpu->pm4_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pm4_bo);
 		a5xx_gpu->pm4_bo = NULL;
 	}
 
 	if (a5xx_gpu->pfp_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pfp_bo);
 		a5xx_gpu->pfp_bo = NULL;
 	}
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 60aef0796236..dc31bc0afca4 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -622,7 +622,7 @@ static int a5xx_ucode_load(struct msm_gpu *gpu)
 		a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
 			sizeof(u32) * gpu->nr_rings,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a5xx_gpu->shadow_bo,
+			gpu->vm, &a5xx_gpu->shadow_bo,
 			&a5xx_gpu->shadow_iova);
 
 		if (IS_ERR(a5xx_gpu->shadow))
@@ -1042,22 +1042,22 @@ static void a5xx_destroy(struct msm_gpu *gpu)
 	a5xx_preempt_fini(gpu);
 
 	if (a5xx_gpu->pm4_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pm4_bo);
 	}
 
 	if (a5xx_gpu->pfp_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pfp_bo);
 	}
 
 	if (a5xx_gpu->gpmu_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->gpmu_bo);
 	}
 
 	if (a5xx_gpu->shadow_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->shadow_bo);
 	}
 
@@ -1457,7 +1457,7 @@ static int a5xx_crashdumper_init(struct msm_gpu *gpu,
 		struct a5xx_crashdumper *dumper)
 {
 	dumper->ptr = msm_gem_kernel_new(gpu->dev,
-		SZ_1M, MSM_BO_WC, gpu->aspace,
+		SZ_1M, MSM_BO_WC, gpu->vm,
 		&dumper->bo, &dumper->iova);
 
 	if (!IS_ERR(dumper->ptr))
@@ -1557,7 +1557,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
 
 	if (a5xx_crashdumper_run(gpu, &dumper)) {
 		kfree(a5xx_state->hlsqregs);
-		msm_gem_kernel_put(dumper.bo, gpu->aspace);
+		msm_gem_kernel_put(dumper.bo, gpu->vm);
 		return;
 	}
 
@@ -1565,7 +1565,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
 	memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K),
 		count * sizeof(u32));
 
-	msm_gem_kernel_put(dumper.bo, gpu->aspace);
+	msm_gem_kernel_put(dumper.bo, gpu->vm);
 }
 
 static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu)
@@ -1713,7 +1713,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a5xx_gpu_busy,
 		.gpu_state_get = a5xx_gpu_state_get,
 		.gpu_state_put = a5xx_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a5xx_get_rptr,
 	},
 	.get_timestamp = a5xx_get_timestamp,
@@ -1786,8 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	if (gpu->aspace)
-		msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, a5xx_fault_handler);
+	if (gpu->vm)
+		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
index 6b91e0bd1514..d6da7351cfbb 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
@@ -363,7 +363,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
 	bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
 
 	ptr = msm_gem_kernel_new(drm, bosize,
-		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace,
+		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm,
 		&a5xx_gpu->gpmu_bo, &a5xx_gpu->gpmu_iova);
 	if (IS_ERR(ptr))
 		return;
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index b5f9d40687d5..e4924b5e1c48 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -255,7 +255,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
 
 	ptr = msm_gem_kernel_new(gpu->dev,
 		A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE,
-		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -263,9 +263,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
 	/* The buffer to store counters needs to be unprivileged */
 	counters = msm_gem_kernel_new(gpu->dev,
 		A5XX_PREEMPT_COUNTER_SIZE,
-		MSM_BO_WC, gpu->aspace, &counters_bo, &counters_iova);
+		MSM_BO_WC, gpu->vm, &counters_bo, &counters_iova);
 	if (IS_ERR(counters)) {
-		msm_gem_kernel_put(bo, gpu->aspace);
+		msm_gem_kernel_put(bo, gpu->vm);
 		return PTR_ERR(counters);
 	}
 
@@ -296,8 +296,8 @@ void a5xx_preempt_fini(struct msm_gpu *gpu)
 	int i;
 
 	for (i = 0; i < gpu->nr_rings; i++) {
-		msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace);
-		msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->aspace);
+		msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->vm);
+		msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->vm);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 38c0f8ef85c3..848acc382b7d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1259,15 +1259,15 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 
 static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
 {
-	msm_gem_kernel_put(gmu->hfi.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->debug.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->icache.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->dcache.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->dummy.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->log.obj, gmu->aspace);
-
-	gmu->aspace->mmu->funcs->detach(gmu->aspace->mmu);
-	msm_gem_address_space_put(gmu->aspace);
+	msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->debug.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->icache.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->dcache.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->dummy.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->log.obj, gmu->vm);
+
+	gmu->vm->mmu->funcs->detach(gmu->vm->mmu);
+	msm_gem_vm_put(gmu->vm);
 }
 
 static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
@@ -1296,7 +1296,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
 	if (IS_ERR(bo->obj))
 		return PTR_ERR(bo->obj);
 
-	ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova,
+	ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->vm, &bo->iova,
 		range_start, range_end);
 	if (ret) {
 		drm_gem_object_put(bo->obj);
@@ -1321,9 +1321,9 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
 	if (IS_ERR(mmu))
 		return PTR_ERR(mmu);
 
-	gmu->aspace = msm_gem_address_space_create(mmu, "gmu", 0x0, 0x80000000);
-	if (IS_ERR(gmu->aspace))
-		return PTR_ERR(gmu->aspace);
+	gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
+	if (IS_ERR(gmu->vm))
+		return PTR_ERR(gmu->vm);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index b2d4489b4024..fc288dfe889f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -62,7 +62,7 @@ struct a6xx_gmu {
 	/* For serializing communication with the GMU: */
 	struct mutex lock;
 
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	void __iomem *mmio;
 	void __iomem *rscc;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index a8e6f62b6873..5078152eb8d3 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
@@ -970,7 +970,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 
 		msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw");
 		if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) {
-			msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+			msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
 			drm_gem_object_put(a6xx_gpu->sqe_bo);
 
 			a6xx_gpu->sqe_bo = NULL;
@@ -987,7 +987,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 		a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
 			sizeof(u32) * gpu->nr_rings,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a6xx_gpu->shadow_bo,
+			gpu->vm, &a6xx_gpu->shadow_bo,
 			&a6xx_gpu->shadow_iova);
 
 		if (IS_ERR(a6xx_gpu->shadow))
@@ -998,7 +998,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 
 	a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
+			gpu->vm, &a6xx_gpu->pwrup_reglist_bo,
 			&a6xx_gpu->pwrup_reglist_iova);
 
 	if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
@@ -2211,12 +2211,12 @@ static void a6xx_destroy(struct msm_gpu *gpu)
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
 
 	if (a6xx_gpu->sqe_bo) {
-		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
 		drm_gem_object_put(a6xx_gpu->sqe_bo);
 	}
 
 	if (a6xx_gpu->shadow_bo) {
-		msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->vm);
 		drm_gem_object_put(a6xx_gpu->shadow_bo);
 	}
 
@@ -2256,8 +2256,8 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 	mutex_unlock(&a6xx_gpu->gmu.lock);
 }
 
-static struct msm_gem_address_space *
-a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
@@ -2271,22 +2271,22 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
 	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
 		quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
 
-	return adreno_iommu_create_address_space(gpu, pdev, quirks);
+	return adreno_iommu_create_vm(gpu, pdev, quirks);
 }
 
-static struct msm_gem_address_space *
-a6xx_create_private_address_space(struct msm_gpu *gpu)
+static struct msm_gem_vm *
+a6xx_create_private_vm(struct msm_gpu *gpu)
 {
 	struct msm_mmu *mmu;
 
-	mmu = msm_iommu_pagetable_create(gpu->aspace->mmu);
+	mmu = msm_iommu_pagetable_create(gpu->vm->mmu);
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
 
-	return msm_gem_address_space_create(mmu,
+	return msm_gem_vm_create(mmu,
 		"gpu", ADRENO_VM_START,
-		adreno_private_address_space_size(gpu));
+		adreno_private_vm_size(gpu));
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -2403,8 +2403,8 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2432,8 +2432,8 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2463,8 +2463,8 @@ static const struct adreno_gpu_funcs funcs_a7xx = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2560,9 +2560,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	if (gpu->aspace)
-		msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu,
-					  a6xx_fault_handler);
+	if (gpu->vm)
+		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 341a72a67401..ff06bb75b76d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -132,7 +132,7 @@ static int a6xx_crashdumper_init(struct msm_gpu *gpu,
 		struct a6xx_crashdumper *dumper)
 {
 	dumper->ptr = msm_gem_kernel_new(gpu->dev,
-		SZ_1M, MSM_BO_WC, gpu->aspace,
+		SZ_1M, MSM_BO_WC, gpu->vm,
 		&dumper->bo, &dumper->iova);
 
 	if (!IS_ERR(dumper->ptr))
@@ -1619,7 +1619,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		a7xx_get_clusters(gpu, a6xx_state, dumper);
 		a7xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
 
-		msm_gem_kernel_put(dumper->bo, gpu->aspace);
+		msm_gem_kernel_put(dumper->bo, gpu->vm);
 	}
 
 	a7xx_get_post_crashdumper_registers(gpu, a6xx_state);
@@ -1631,7 +1631,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		a6xx_get_clusters(gpu, a6xx_state, dumper);
 		a6xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
 
-		msm_gem_kernel_put(dumper->bo, gpu->aspace);
+		msm_gem_kernel_put(dumper->bo, gpu->vm);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index 3b17fd2dba89..f6194a57f794 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -344,7 +344,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	ptr = msm_gem_kernel_new(gpu->dev,
 		PREEMPT_RECORD_SIZE(adreno_gpu),
-		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -362,7 +362,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 	ptr = msm_gem_kernel_new(gpu->dev,
 		PREEMPT_SMMU_INFO_SIZE,
 		MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
-		gpu->aspace, &bo, &iova);
+		gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -377,7 +377,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	struct a7xx_cp_smmu_info *smmu_info_ptr = ptr;
 
-	msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
+	msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid);
 
 	smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
 	smmu_info_ptr->ttbr0 = ttbr;
@@ -405,7 +405,7 @@ void a6xx_preempt_fini(struct msm_gpu *gpu)
 	int i;
 
 	for (i = 0; i < gpu->nr_rings; i++)
-		msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
+		msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->vm);
 }
 
 void a6xx_preempt_init(struct msm_gpu *gpu)
@@ -431,7 +431,7 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
 	a6xx_gpu->preempt_postamble_ptr  = msm_gem_kernel_new(gpu->dev,
 			PAGE_SIZE,
 			MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
-			gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
+			gpu->vm, &a6xx_gpu->preempt_postamble_bo,
 			&a6xx_gpu->preempt_postamble_iova);
 
 	preempt_prepare_postamble(a6xx_gpu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 5f4de4c25b97..be723fe4de2b 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid)
 	return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid);
 }
 
-struct msm_gem_address_space *
-adreno_create_address_space(struct msm_gpu *gpu,
-			    struct platform_device *pdev)
+struct msm_gem_vm *
+adreno_create_vm(struct msm_gpu *gpu,
+		 struct platform_device *pdev)
 {
-	return adreno_iommu_create_address_space(gpu, pdev, 0);
+	return adreno_iommu_create_vm(gpu, pdev, 0);
 }
 
-struct msm_gem_address_space *
-adreno_iommu_create_address_space(struct msm_gpu *gpu,
-				  struct platform_device *pdev,
-				  unsigned long quirks)
+struct msm_gem_vm *
+adreno_iommu_create_vm(struct msm_gpu *gpu,
+		       struct platform_device *pdev,
+		       unsigned long quirks)
 {
 	struct iommu_domain_geometry *geometry;
 	struct msm_mmu *mmu;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
@@ -224,16 +224,15 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu,
 	start = max_t(u64, SZ_16M, geometry->aperture_start);
 	size = geometry->aperture_end - start + 1;
 
-	aspace = msm_gem_address_space_create(mmu, "gpu",
-		start & GENMASK_ULL(48, 0), size);
+	vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
 
-	if (IS_ERR(aspace) && !IS_ERR(mmu))
+	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
 
-	return aspace;
+	return vm;
 }
 
-u64 adreno_private_address_space_size(struct msm_gpu *gpu)
+u64 adreno_private_vm_size(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev);
@@ -275,7 +274,7 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu)
 	    !READ_ONCE(gpu->crashstate)) {
 		priv->stall_enabled = true;
 
-		gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true);
+		gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true);
 	}
 	spin_unlock_irqrestore(&priv->fault_stall_lock, flags);
 }
@@ -303,8 +302,9 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	if (priv->stall_enabled) {
 		priv->stall_enabled = false;
 
-		gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false);
+		gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false);
 	}
+
 	priv->stall_reenable_time = ktime_add_ms(ktime_get(), 500);
 	spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags);
 
@@ -401,8 +401,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = 0;
 		return 0;
 	case MSM_PARAM_FAULTS:
-		if (ctx->aspace)
-			*value = gpu->global_faults + ctx->aspace->faults;
+		if (ctx->vm)
+			*value = gpu->global_faults + ctx->vm->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -410,14 +410,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = gpu->suspend_count;
 		return 0;
 	case MSM_PARAM_VA_START:
-		if (ctx->aspace == gpu->aspace)
+		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->aspace->va_start;
+		*value = ctx->vm->va_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
-		if (ctx->aspace == gpu->aspace)
+		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->aspace->va_size;
+		*value = ctx->vm->va_size;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
@@ -607,7 +607,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu,
 	void *ptr;
 
 	ptr = msm_gem_kernel_new(gpu->dev, fw->size - 4,
-		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, &bo, iova);
+		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &bo, iova);
 
 	if (IS_ERR(ptr))
 		return ERR_CAST(ptr);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index a4abafca7782..4fa4b11442ba 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -580,7 +580,7 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 
 /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */
 #define ADRENO_VM_START 0x100000000ULL
-u64 adreno_private_address_space_size(struct msm_gpu *gpu);
+u64 adreno_private_vm_size(struct msm_gpu *gpu);
 int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
 int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
@@ -623,14 +623,14 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len,
 * Common helper function to initialize the default address space for arm-smmu
 * attached targets
 */
-struct msm_gem_address_space *
-adreno_create_address_space(struct msm_gpu *gpu,
-			    struct platform_device *pdev);
-
-struct msm_gem_address_space *
-adreno_iommu_create_address_space(struct msm_gpu *gpu,
-				  struct platform_device *pdev,
-				  unsigned long quirks);
+struct msm_gem_vm *
+adreno_create_vm(struct msm_gpu *gpu,
+		 struct platform_device *pdev);
+
+struct msm_gem_vm *
+adreno_iommu_create_vm(struct msm_gpu *gpu,
+		       struct platform_device *pdev,
+		       unsigned long quirks);
 
 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
			 struct adreno_smmu_fault_info *info, const char *block,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index 849fea580a4c..32e208ee946d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -566,7 +566,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 		struct drm_writeback_job *job)
 {
 	const struct msm_format *format;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct dpu_hw_wb_cfg *wb_cfg;
 	int ret;
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
@@ -576,13 +576,13 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 
 	wb_enc->wb_job = job;
 	wb_enc->wb_conn = job->connector;
-	aspace = phys_enc->dpu_kms->base.aspace;
+	vm = phys_enc->dpu_kms->base.vm;
 
 	wb_cfg = &wb_enc->wb_cfg;
 
 	memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg));
 
-	ret = msm_framebuffer_prepare(job->fb, aspace, false);
+	ret = msm_framebuffer_prepare(job->fb, vm, false);
 	if (ret) {
 		DPU_ERROR("prep fb failed, %d\n", ret);
 		return;
@@ -596,7 +596,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 		return;
 	}
 
-	dpu_format_populate_addrs(aspace, job->fb, &wb_cfg->dest);
+	dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest);
 
 	wb_cfg->dest.width = job->fb->width;
 	wb_cfg->dest.height = job->fb->height;
@@ -619,14 +619,14 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc
 		struct drm_writeback_job *job)
 {
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	if (!job->fb)
 		return;
 
-	aspace = phys_enc->dpu_kms->base.aspace;
+	vm = phys_enc->dpu_kms->base.vm;
 
-	msm_framebuffer_cleanup(job->fb, aspace, false);
+	msm_framebuffer_cleanup(job->fb, vm, false);
 	wb_enc->wb_job = NULL;
 	wb_enc->wb_conn = NULL;
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
index 59c9427da7dd..d115b79af771 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
@@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes(
 	return
_dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } =20 -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *= aspace, +static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -282,7 +282,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_= gem_address_space *aspace uint32_t base_addr =3D 0; bool meta; =20 - base_addr =3D msm_framebuffer_iova(fb, aspace, 0); + base_addr =3D msm_framebuffer_iova(fb, vm, 0); =20 fmt =3D msm_framebuffer_format(fb); meta =3D MSM_FORMAT_IS_UBWC(fmt); @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_= gem_address_space *aspace } } =20 -static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space= *aspace, +static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -363,17 +363,17 @@ static void _dpu_format_populate_addrs_linear(struct = msm_gem_address_space *aspa =20 /* Populate addresses for simple formats here */ for (i =3D 0; i < layout->num_planes; ++i) - layout->plane_addr[i] =3D msm_framebuffer_iova(fb, aspace, i); -} + layout->plane_addr[i] =3D msm_framebuffer_iova(fb, vm, i); + } =20 /** * dpu_format_populate_addrs - populate buffer addresses based on * mmu, fb, and format found in the fb - * @aspace: address space pointer + * @vm: address space pointer * @fb: framebuffer pointer * @layout: format layout structure to populate */ -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -384,7 +384,7 @@ void dpu_format_populate_addrs(struct msm_gem_address_s= pace *aspace, /* Populate the addresses given the fb */ if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt)) - _dpu_format_populate_addrs_ubwc(aspace, fb, layout); + _dpu_format_populate_addrs_ubwc(vm, fb, 
layout); else - _dpu_format_populate_addrs_linear(aspace, fb, layout); + _dpu_format_populate_addrs_linear(vm, fb, layout); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/= msm/disp/dpu1/dpu_formats.h index c6145d43aa3f..989f3e13c497 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 = *supported_formats, return false; } =20 -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); =20 diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/= disp/dpu1/dpu_kms.c index 1fd82b6747e9..2c5687a188b6 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1095,26 +1095,26 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dp= u_kms) { struct msm_mmu *mmu; =20 - if (!dpu_kms->base.aspace) + if (!dpu_kms->base.vm) return; =20 - mmu =3D dpu_kms->base.aspace->mmu; + mmu =3D dpu_kms->base.vm->mmu; =20 mmu->funcs->detach(mmu); - msm_gem_address_space_put(dpu_kms->base.aspace); + msm_gem_vm_put(dpu_kms->base.vm); =20 - dpu_kms->base.aspace =3D NULL; + dpu_kms->base.vm =3D NULL; } =20 static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 - aspace =3D msm_kms_init_aspace(dpu_kms->dev); - if (IS_ERR(aspace)) - return PTR_ERR(aspace); + vm =3D msm_kms_init_vm(dpu_kms->dev); + if (IS_ERR(vm)) + return PTR_ERR(vm); =20 - dpu_kms->base.aspace =3D aspace; + dpu_kms->base.vm =3D vm; =20 return 0; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/ms= m/disp/dpu1/dpu_plane.c index 421138bc3cb7..6d47f43f52f7 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c @@ -71,7 +71,7 @@ static const 
uint32_t qcom_compressed_supported_formats[]= =3D { =20 /* * struct dpu_plane - local dpu plane structure - * @aspace: address space pointer + * @vm: address space pointer * @csc_ptr: Points to dpu_csc_cfg structure to use for current * @catalog: Points to dpu catalog structure * @revalidate: force revalidation of all the plane properties @@ -654,8 +654,8 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, =20 DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id); =20 - /* cache aspace */ - pstate->aspace =3D kms->base.aspace; + /* cache vm */ + pstate->vm =3D kms->base.vm; =20 /* * TODO: Need to sort out the msm_framebuffer_prepare() call below so @@ -664,9 +664,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, */ drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - if (pstate->aspace) { + if (pstate->vm) { ret =3D msm_framebuffer_prepare(new_state->fb, - pstate->aspace, pstate->needs_dirtyfb); + pstate->vm, pstate->needs_dirtyfb); if (ret) { DPU_ERROR("failed to prepare framebuffer\n"); return ret; @@ -689,7 +689,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plan= e, =20 DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id); =20 - msm_framebuffer_cleanup(old_state->fb, old_pstate->aspace, + msm_framebuffer_cleanup(old_state->fb, old_pstate->vm, old_pstate->needs_dirtyfb); } =20 @@ -1457,7 +1457,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_p= lane *plane, pstate->needs_qos_remap |=3D (is_rt_pipe !=3D pdpu->is_rt_pipe); pdpu->is_rt_pipe =3D is_rt_pipe; =20 - dpu_format_populate_addrs(pstate->aspace, new_state->fb, &pstate->layout); + dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout); =20 DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT ", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src), diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/ms= m/disp/dpu1/dpu_plane.h index acd5725175cd..3578f52048a5 100644 --- 
a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -17,7 +17,7 @@ /** * struct dpu_plane_state: Define dpu extension of drm plane state object * @base: base drm plane state object - * @aspace: pointer to address space for input/output buffers + * @vm: pointer to address space for input/output buffers * @pipe: software pipe description * @r_pipe: software pipe description of the second pipe * @pipe_cfg: software pipe configuration @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/ms= m/disp/mdp4/mdp4_crtc.c index b8610aa806ea..0133c0c01a0b 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -120,7 +120,7 @@ static void unref_cursor_worker(struct drm_flip_work *w= ork, void *val) struct mdp4_kms *mdp4_kms =3D get_kms(&mdp4_crtc->base); struct msm_kms *kms =3D &mdp4_kms->base.base; =20 - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } =20 @@ -369,7 +369,7 @@ static void update_cursor(struct drm_crtc *crtc) if (next_bo) { /* take a obj ref + iova ref when we start scanning out: */ drm_gem_object_get(next_bo); - msm_gem_get_and_pin_iova(next_bo, kms->aspace, &iova); + msm_gem_get_and_pin_iova(next_bo, kms->vm, &iova); =20 /* enable cursor: */ mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma), @@ -427,7 +427,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc, } =20 if (cursor_bo) { - ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &iova); + ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &iova); if (ret) goto fail; } else { diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm= /disp/mdp4/mdp4_kms.c index 7e942c1337b3..5cb4a4bae2a6 
100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -122,15 +122,15 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms =3D to_mdp4_kms(to_mdp_kms(kms)); struct device *dev =3D mdp4_kms->dev->dev; - struct msm_gem_address_space *aspace =3D kms->aspace; + struct msm_gem_vm *vm =3D kms->vm; =20 if (mdp4_kms->blank_cursor_iova) - msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->aspace); + msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); =20 - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } =20 if (mdp4_kms->rpm_enabled) @@ -398,7 +398,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms =3D to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms =3D NULL; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int ret; u32 major, minor; unsigned long max_clk; @@ -467,19 +467,19 @@ static int mdp4_kms_init(struct drm_device *dev) } else if (!mmu) { DRM_DEV_INFO(dev->dev, "no iommu, fallback to phys " "contig buffers for scanout\n"); - aspace =3D NULL; + vm =3D NULL; } else { - aspace =3D msm_gem_address_space_create(mmu, + vm =3D msm_gem_vm_create(mmu, "mdp4", 0x1000, 0x100000000 - 0x1000); =20 - if (IS_ERR(aspace)) { + if (IS_ERR(vm)) { if (!IS_ERR(mmu)) mmu->funcs->destroy(mmu); - ret =3D PTR_ERR(aspace); + ret =3D PTR_ERR(vm); goto fail; } =20 - kms->aspace =3D aspace; + kms->vm =3D vm; } =20 ret =3D modeset_init(mdp4_kms); @@ -496,7 +496,7 @@ static int mdp4_kms_init(struct drm_device *dev) goto fail; } =20 - ret =3D msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->aspace, + ret =3D msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->vm, &mdp4_kms->blank_cursor_iova); if (ret) { DRM_DEV_ERROR(dev->dev, "could not pin blank-cursor bo: %d\n", ret); diff 
--git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/m= sm/disp/mdp4/mdp4_plane.c index 3fefb2088008..7743be6167f8 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c @@ -87,7 +87,7 @@ static int mdp4_plane_prepare_fb(struct drm_plane *plane, =20 drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - return msm_framebuffer_prepare(new_state->fb, kms->aspace, false); + return msm_framebuffer_prepare(new_state->fb, kms->vm, false); } =20 static void mdp4_plane_cleanup_fb(struct drm_plane *plane, @@ -102,7 +102,7 @@ static void mdp4_plane_cleanup_fb(struct drm_plane *pla= ne, return; =20 DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, false); + msm_framebuffer_cleanup(fb, kms->vm, false); } =20 =20 @@ -153,13 +153,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *= plane, MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); =20 mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } =20 static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms, diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/ms= m/disp/mdp5/mdp5_crtc.c index 0f653e62b4a0..298861f373b0 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -169,7 +169,7 @@ static void unref_cursor_worker(struct drm_flip_work *w= ork, void *val) struct mdp5_kms *mdp5_kms =3D get_kms(&mdp5_crtc->base); struct msm_kms *kms =3D 
&mdp5_kms->base.base; =20 - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } =20 @@ -993,7 +993,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (!cursor_bo) return -ENOENT; =20 - ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, + ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &mdp5_crtc->cursor.iova); if (ret) { drm_gem_object_put(cursor_bo); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm= /disp/mdp5/mdp5_kms.c index 3fcca7a3d82e..9dca0385a42d 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,11 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms =3D to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_address_space *aspace =3D kms->aspace; + struct msm_gem_vm *vm =3D kms->vm; =20 - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } =20 mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +500,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms =3D priv->kms; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int i, ret; =20 ret =3D mdp5_init(to_platform_device(dev->dev), dev); @@ -534,13 +534,13 @@ static int mdp5_kms_init(struct drm_device *dev) } mdelay(16); =20 - aspace =3D msm_kms_init_aspace(mdp5_kms->dev); - if (IS_ERR(aspace)) { - ret =3D PTR_ERR(aspace); + vm =3D msm_kms_init_vm(mdp5_kms->dev); + if (IS_ERR(vm)) { + ret =3D PTR_ERR(vm); goto fail; } =20 - kms->aspace =3D aspace; + kms->vm =3D vm; =20 pm_runtime_put_sync(&pdev->dev); =20 diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/m= sm/disp/mdp5/mdp5_plane.c index bb1601921938..9f68a4747203 100644 --- 
a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c @@ -144,7 +144,7 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plan= e, =20 drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - return msm_framebuffer_prepare(new_state->fb, kms->aspace, needs_dirtyfb); + return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb); } =20 static void mdp5_plane_cleanup_fb(struct drm_plane *plane, @@ -159,7 +159,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *pla= ne, return; =20 DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, needed_dirtyfb); + msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb); } =20 static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_= state, @@ -478,13 +478,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_= kms, MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); =20 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } =20 /* Note: mdp5_plane->pipe_lock must be locked */ diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/d= si_host.c index 4d75529c0e85..16335ebd21e4 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { =20 /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,10 +1146,10 @@ int 
dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_ho= st, int size) uint64_t iova; u8 *data; =20 - msm_host->aspace =3D msm_gem_address_space_get(priv->kms->aspace); + msm_host->vm =3D msm_gem_vm_get(priv->kms->vm); =20 data =3D msm_gem_kernel_new(dev, size, MSM_BO_WC, - msm_host->aspace, + msm_host->vm, &msm_host->tx_gem_obj, &iova); =20 if (IS_ERR(data)) { @@ -1193,10 +1193,10 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) return; =20 if (msm_host->tx_gem_obj) { - msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->aspace); - msm_gem_address_space_put(msm_host->aspace); + msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); + msm_gem_vm_put(msm_host->vm); msm_host->tx_gem_obj =3D NULL; - msm_host->aspace =3D NULL; + msm_host->vm =3D NULL; } =20 if (msm_host->tx_buf) @@ -1327,7 +1327,7 @@ int dsi_dma_base_get_6g(struct msm_dsi_host *msm_host= , uint64_t *dma_base) return -EINVAL; =20 return msm_gem_get_and_pin_iova(msm_host->tx_gem_obj, - priv->kms->aspace, dma_base); + priv->kms->vm, dma_base); } =20 int dsi_dma_base_get_v2(struct msm_dsi_host *msm_host, uint64_t *dma_base) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 324ee2089b34..49c868e33d70 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -349,7 +349,7 @@ static int context_init(struct drm_device *dev, struct = drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); =20 - ctx->aspace =3D msm_gpu_create_private_address_space(priv->gpu, current); + ctx->vm =3D msm_gpu_create_private_vm(priv->gpu, current); file->driver_priv =3D ctx; =20 ctx->seqno =3D atomic_inc_return(&ident); @@ -527,7 +527,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *d= ev, * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, ctx->aspace, iova); + return msm_gem_get_iova(obj, ctx->vm, iova); } =20 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, @@ -541,13 
+541,13 @@ static int msm_ioctl_gem_info_set_iova(struct drm_dev= ice *dev, return -EINVAL; =20 /* Only supported if per-process address space is supported: */ - if (priv->gpu->aspace =3D=3D ctx->aspace) + if (priv->gpu->vm =3D=3D ctx->vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); =20 if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; =20 - return msm_gem_set_iova(obj, ctx->aspace, iova); + return msm_gem_set_iova(obj, ctx->vm, iova); } =20 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index c8afb1ea6040..8aa3412c6e36 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,7 +48,7 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_address_space; +struct msm_gem_vm; struct msm_gem_vma; struct msm_disp_state; =20 @@ -264,7 +264,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); =20 -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev); +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); =20 int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -286,11 +286,11 @@ int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); =20 int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needs_dirtyfb); + struct msm_gem_vm *vm, bool needs_dirtyfb); void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needed_dirtyfb); + struct msm_gem_vm *vm, bool needed_dirtyfb); uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane); + struct msm_gem_vm *vm, int 
plane); struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int = plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb= ); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 09268e416843..6df318b73534 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -76,7 +76,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb,= struct seq_file *m) /* prepare/pin all the fb's bo's for scanout. */ int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needs_dirtyfb) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); @@ -88,7 +88,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, atomic_inc(&msm_fb->prepare_count); =20 for (i =3D 0; i < n; i++) { - ret =3D msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]); + ret =3D msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]); drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n", fb->base.id, i, msm_fb->iova[i], ret); if (ret) @@ -99,7 +99,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, } =20 void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needed_dirtyfb) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); @@ -109,14 +109,14 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *= fb, refcount_dec(&msm_fb->dirtyfb); =20 for (i =3D 0; i < n; i++) - msm_gem_unpin_iova(fb->obj[i], aspace); + msm_gem_unpin_iova(fb->obj[i], vm); =20 if (!atomic_dec_return(&msm_fb->prepare_count)) memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } =20 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane) + struct msm_gem_vm *vm, int plane) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); return 
msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbde= v.c index c62249b1ab3d..b5969374d53f 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -122,7 +122,7 @@ int msm_fbdev_driver_fbdev_probe(struct drm_fb_helper *= helper, * in panic (ie. lock-safe, etc) we could avoid pinning the * buffer now: */ - ret =3D msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr); + ret =3D msm_gem_get_and_pin_iova(bo, priv->kms->vm, &paddr); if (ret) { DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret); goto fail; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 259919b0e887..5e6c88b85fd3 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -398,14 +398,14 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *o= bj) } =20 static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct msm_gem_vma *vma; =20 msm_gem_assert_locked(obj); =20 - vma =3D msm_gem_vma_new(aspace); + vma =3D msm_gem_vma_new(vm); if (!vma) return ERR_PTR(-ENOMEM); =20 @@ -415,7 +415,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_objec= t *obj, } =20 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct msm_gem_vma *vma; @@ -423,7 +423,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_ob= ject *obj, msm_gem_assert_locked(obj); =20 list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace =3D=3D aspace) + if (vma->vm =3D=3D vm) return vma; } =20 @@ -454,7 +454,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) msm_gem_assert_locked(obj); =20 list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace) { + if (vma->vm) { 
msm_gem_vma_purge(vma); if (close) msm_gem_vma_close(vma); @@ -477,19 +477,19 @@ put_iova_vmas(struct drm_gem_object *obj) } =20 static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; =20 msm_gem_assert_locked(obj); =20 - vma =3D lookup_vma(obj, aspace); + vma =3D lookup_vma(obj, vm); =20 if (!vma) { int ret; =20 - vma =3D add_vma(obj, aspace); + vma =3D add_vma(obj, vm); if (IS_ERR(vma)) return vma; =20 @@ -561,13 +561,13 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) } =20 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - return get_vma_locked(obj, aspace, 0, U64_MAX); + return get_vma_locked(obj, vm, 0, U64_MAX); } =20 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; @@ -575,7 +575,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem= _object *obj, =20 msm_gem_assert_locked(obj); =20 - vma =3D get_vma_locked(obj, aspace, range_start, range_end); + vma =3D get_vma_locked(obj, vm, range_start, range_end); if (IS_ERR(vma)) return PTR_ERR(vma); =20 @@ -593,13 +593,13 @@ static int get_and_pin_iova_range_locked(struct drm_g= em_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { int ret; =20 msm_gem_lock(obj); - ret =3D get_and_pin_iova_range_locked(obj, aspace, iova, range_start, ran= ge_end); + ret =3D get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_e= nd); msm_gem_unlock(obj); =20 return ret; @@ -607,9 +607,9 @@ int 
msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
 
 /* get iova and pin it. Should have a matching put */
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova)
+		struct msm_gem_vm *vm, uint64_t *iova)
 {
-	return msm_gem_get_and_pin_iova_range(obj, aspace, iova, 0, U64_MAX);
+	return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX);
 }
 
 /*
@@ -617,13 +617,13 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
  * valid for the life of the object
  */
 int msm_gem_get_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova)
+		struct msm_gem_vm *vm, uint64_t *iova)
 {
 	struct msm_gem_vma *vma;
 	int ret = 0;
 
 	msm_gem_lock(obj);
-	vma = get_vma_locked(obj, aspace, 0, U64_MAX);
+	vma = get_vma_locked(obj, vm, 0, U64_MAX);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 	} else {
@@ -635,9 +635,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 }
 
 static int clear_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
-	struct msm_gem_vma *vma = lookup_vma(obj, aspace);
+	struct msm_gem_vma *vma = lookup_vma(obj, vm);
 
 	if (!vma)
 		return 0;
@@ -657,20 +657,20 @@ static int clear_iova(struct drm_gem_object *obj,
  * Setting an iova of zero will clear the vma.
  */
 int msm_gem_set_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t iova)
+		struct msm_gem_vm *vm, uint64_t iova)
 {
 	int ret = 0;
 
 	msm_gem_lock(obj);
 	if (!iova) {
-		ret = clear_iova(obj, aspace);
+		ret = clear_iova(obj, vm);
 	} else {
 		struct msm_gem_vma *vma;
-		vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
+		vma = get_vma_locked(obj, vm, iova, iova + obj->size);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
 		} else if (GEM_WARN_ON(vma->iova != iova)) {
-			clear_iova(obj, aspace);
+			clear_iova(obj, vm);
 			ret = -EBUSY;
 		}
 	}
@@ -685,12 +685,12 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
  * to get rid of it
  */
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
 	struct msm_gem_vma *vma;
 
 	msm_gem_lock(obj);
-	vma = lookup_vma(obj, aspace);
+	vma = lookup_vma(obj, vm);
 	if (!GEM_WARN_ON(!vma)) {
 		msm_gem_unpin_locked(obj);
 	}
@@ -1008,23 +1008,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 
 	list_for_each_entry(vma, &msm_obj->vmas, list) {
 		const char *name, *comm;
-		if (vma->aspace) {
-			struct msm_gem_address_space *aspace = vma->aspace;
+		if (vma->vm) {
+			struct msm_gem_vm *vm = vma->vm;
 			struct task_struct *task =
-				get_pid_task(aspace->pid, PIDTYPE_PID);
+				get_pid_task(vm->pid, PIDTYPE_PID);
 			if (task) {
 				comm = kstrdup(task->comm, GFP_KERNEL);
 				put_task_struct(task);
 			} else {
 				comm = NULL;
 			}
-			name = aspace->name;
+			name = vm->name;
 		} else {
 			name = comm = NULL;
 		}
-		seq_printf(m, " [%s%s%s: aspace=%p, %08llx,%s]",
+		seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]",
 				name, comm ? ":" : "", comm ? comm : "",
-				vma->aspace, vma->iova,
+				vma->vm, vma->iova,
 				vma->mapped ? "mapped" : "unmapped");
 		kfree(comm);
 	}
@@ -1349,7 +1349,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 }
 
 void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-		uint32_t flags, struct msm_gem_address_space *aspace,
+		uint32_t flags, struct msm_gem_vm *vm,
 		struct drm_gem_object **bo, uint64_t *iova)
 {
 	void *vaddr;
@@ -1360,14 +1360,14 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
 		return ERR_CAST(obj);
 
 	if (iova) {
-		ret = msm_gem_get_and_pin_iova(obj, aspace, iova);
+		ret = msm_gem_get_and_pin_iova(obj, vm, iova);
 		if (ret)
 			goto err;
 	}
 
 	vaddr = msm_gem_get_vaddr(obj);
 	if (IS_ERR(vaddr)) {
-		msm_gem_unpin_iova(obj, aspace);
+		msm_gem_unpin_iova(obj, vm);
 		ret = PTR_ERR(vaddr);
 		goto err;
 	}
@@ -1384,13 +1384,13 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
 }
 
 void msm_gem_kernel_put(struct drm_gem_object *bo,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
 	if (IS_ERR_OR_NULL(bo))
 		return;
 
 	msm_gem_put_vaddr(bo);
-	msm_gem_unpin_iova(bo, aspace);
+	msm_gem_unpin_iova(bo, vm);
 	drm_gem_object_put(bo);
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index ba5c4ff76292..64ea3ed213c1 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -22,7 +22,7 @@
 #define MSM_BO_STOLEN	0x10000000	/* try to use stolen/splash memory */
 #define MSM_BO_MAP_PRIV	0x20000000	/* use IOMMU_PRIV when mapping */
 
-struct msm_gem_address_space {
+struct msm_gem_vm {
 	const char *name;
 	/* NOTE: mm managed at the page level, size is in # of pages
 	 * and position mm_node->start is in # of pages:
@@ -47,13 +47,13 @@ struct msm_gem_address_space {
 	uint64_t va_size;
 };
 
-struct msm_gem_address_space *
-msm_gem_address_space_get(struct msm_gem_address_space *aspace);
+struct msm_gem_vm *
+msm_gem_vm_get(struct msm_gem_vm *vm);
 
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace);
+void msm_gem_vm_put(struct msm_gem_vm *vm);
 
-struct msm_gem_address_space *
-msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
+struct msm_gem_vm *
+msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
 		u64 va_start, u64 size);
 
 struct msm_fence_context;
@@ -61,12 +61,12 @@ struct msm_fence_context;
 struct msm_gem_vma {
 	struct drm_mm_node node;
 	uint64_t iova;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct list_head list;	/* node in msm_gem_object::vmas */
 	bool mapped;
 };
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace);
+struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm);
 int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
 		u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct msm_gem_vma *vma);
@@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj,
 		struct msm_gem_vma *vma);
 void msm_gem_unpin_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_active(struct drm_gem_object *obj);
 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 int msm_gem_get_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova);
+		struct msm_gem_vm *vm, uint64_t *iova);
 int msm_gem_set_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t iova);
+		struct msm_gem_vm *vm, uint64_t iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova,
+		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end);
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova);
+		struct msm_gem_vm *vm, uint64_t *iova);
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
 struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
@@ -160,10 +160,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 struct drm_gem_object *msm_gem_new(struct drm_device *dev,
 		uint32_t size, uint32_t flags);
 void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-		uint32_t flags, struct msm_gem_address_space *aspace,
+		uint32_t flags, struct msm_gem_vm *vm,
 		struct drm_gem_object **bo, uint64_t *iova);
 void msm_gem_kernel_put(struct drm_gem_object *bo,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 		struct dma_buf *dmabuf, struct sg_table *sgt);
 __printf(2, 3)
@@ -257,7 +257,7 @@ struct msm_gem_submit {
 	struct kref ref;
 	struct drm_device *dev;
 	struct msm_gpu *gpu;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct list_head node;	/* node in ring submit list */
 	struct drm_exec exec;
 	uint32_t seqno;		/* Sequence number of the submit on the ring */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 3aabf7f1da6d..a59816b6b6de 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->aspace = queue->ctx->aspace;
+	submit->vm = queue->ctx->vm;
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;
@@ -311,7 +311,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		struct msm_gem_vma *vma;
 
 		/* if locking succeeded, pin bo: */
-		vma = msm_gem_get_vma_locked(obj, submit->aspace);
+		vma = msm_gem_get_vma_locked(obj, submit->vm);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
 			break;
@@ -669,7 +669,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (unlikely(!ctx->aspace) && !capable(CAP_SYS_RAWIO)) {
+	if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) {
 		DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n");
 		return -EPERM;
 	}
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 11e842dda73c..9419692f0cc8 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -10,45 +10,44 @@
 #include "msm_mmu.h"
 
 static void
-msm_gem_address_space_destroy(struct kref *kref)
+msm_gem_vm_destroy(struct kref *kref)
 {
-	struct msm_gem_address_space *aspace = container_of(kref,
-			struct msm_gem_address_space, kref);
-
-	drm_mm_takedown(&aspace->mm);
-	if (aspace->mmu)
-		aspace->mmu->funcs->destroy(aspace->mmu);
-	put_pid(aspace->pid);
-	kfree(aspace);
+	struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref);
+
+	drm_mm_takedown(&vm->mm);
+	if (vm->mmu)
+		vm->mmu->funcs->destroy(vm->mmu);
+	put_pid(vm->pid);
+	kfree(vm);
 }
 
 
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace)
+void msm_gem_vm_put(struct msm_gem_vm *vm)
 {
-	if (aspace)
-		kref_put(&aspace->kref, msm_gem_address_space_destroy);
+	if (vm)
+		kref_put(&vm->kref, msm_gem_vm_destroy);
 }
 
-struct msm_gem_address_space *
-msm_gem_address_space_get(struct msm_gem_address_space *aspace)
+struct msm_gem_vm *
+msm_gem_vm_get(struct msm_gem_vm *vm)
 {
-	if (!IS_ERR_OR_NULL(aspace))
-		kref_get(&aspace->kref);
+	if (!IS_ERR_OR_NULL(vm))
+		kref_get(&vm->kref);
 
-	return aspace;
+	return vm;
 }
 
 /* Actually unmap memory for the vma */
 void msm_gem_vma_purge(struct msm_gem_vma *vma)
 {
-	struct msm_gem_address_space *aspace = vma->aspace;
+	struct msm_gem_vm *vm = vma->vm;
 	unsigned size = vma->node.size;
 
 	/* Don't do anything if the memory isn't mapped */
 	if (!vma->mapped)
 		return;
 
-	aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size);
+	vm->mmu->funcs->unmap(vm->mmu, vma->iova, size);
 
 	vma->mapped = false;
 }
@@ -58,7 +57,7 @@ int
 msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 		struct sg_table *sgt, int size)
 {
-	struct msm_gem_address_space *aspace = vma->aspace;
+	struct msm_gem_vm *vm = vma->vm;
 	int ret;
 
 	if (GEM_WARN_ON(!vma->iova))
@@ -69,7 +68,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 
 	vma->mapped = true;
 
-	if (!aspace)
+	if (!vm)
 		return 0;
 
 	/*
@@ -81,7 +80,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
	 * for the pgtable in map/unmap ops.
 	 */
-	ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot);
+	ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot);
 
 	if (ret) {
 		vma->mapped = false;
@@ -93,21 +92,21 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 /* Close an iova. Warn if it is still in use */
 void msm_gem_vma_close(struct msm_gem_vma *vma)
 {
-	struct msm_gem_address_space *aspace = vma->aspace;
+	struct msm_gem_vm *vm = vma->vm;
 
 	GEM_WARN_ON(vma->mapped);
 
-	spin_lock(&aspace->lock);
+	spin_lock(&vm->lock);
 	if (vma->iova)
 		drm_mm_remove_node(&vma->node);
-	spin_unlock(&aspace->lock);
+	spin_unlock(&vm->lock);
 
 	vma->iova = 0;
 
-	msm_gem_address_space_put(aspace);
+	msm_gem_vm_put(vm);
 }
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
+struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm)
 {
 	struct msm_gem_vma *vma;
 
@@ -115,7 +114,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
 	if (!vma)
 		return NULL;
 
-	vma->aspace = aspace;
+	vma->vm = vm;
 
 	return vma;
 }
@@ -124,20 +123,20 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
 int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
 		u64 range_start, u64 range_end)
 {
-	struct msm_gem_address_space *aspace = vma->aspace;
+	struct msm_gem_vm *vm = vma->vm;
 	int ret;
 
-	if (GEM_WARN_ON(!aspace))
+	if (GEM_WARN_ON(!vm))
 		return -EINVAL;
 
 	if (GEM_WARN_ON(vma->iova))
 		return -EBUSY;
 
-	spin_lock(&aspace->lock);
-	ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node,
+	spin_lock(&vm->lock);
+	ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 					  size, PAGE_SIZE, 0,
 					  range_start, range_end, 0);
-	spin_unlock(&aspace->lock);
+	spin_unlock(&vm->lock);
 
 	if (ret)
 		return ret;
@@ -145,33 +144,33 @@ int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
 	vma->iova = vma->node.start;
 	vma->mapped = false;
 
-	kref_get(&aspace->kref);
+	kref_get(&vm->kref);
 
 	return 0;
 }
 
-struct msm_gem_address_space *
-msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
+struct msm_gem_vm *
+msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
 		u64 va_start, u64 size)
 {
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
 
-	aspace = kzalloc(sizeof(*aspace), GFP_KERNEL);
-	if (!aspace)
+	vm = kzalloc(sizeof(*vm), GFP_KERNEL);
+	if (!vm)
 		return ERR_PTR(-ENOMEM);
 
-	spin_lock_init(&aspace->lock);
-	aspace->name = name;
-	aspace->mmu = mmu;
-	aspace->va_start = va_start;
-	aspace->va_size = size;
+	spin_lock_init(&vm->lock);
+	vm->name = name;
+	vm->mmu = mmu;
+	vm->va_start = va_start;
+	vm->va_size = size;
 
-	drm_mm_init(&aspace->mm, va_start, size);
+	drm_mm_init(&vm->mm, va_start, size);
 
-	kref_init(&aspace->kref);
+	kref_init(&vm->kref);
 
-	return aspace;
+	return vm;
 }
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index a8280b579832..3400a6ca8fd8 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -285,7 +285,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 
 	if (state->fault_info.ttbr0) {
 		struct msm_gpu_fault_info *info = &state->fault_info;
-		struct msm_mmu *mmu = submit->aspace->mmu;
+		struct msm_mmu *mmu = submit->vm->mmu;
 
 		msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
 					   &info->asid);
@@ -389,8 +389,8 @@ static void recover_worker(struct kthread_work *work)
 
 	/* Increment the fault counts */
 	submit->queue->faults++;
-	if (submit->aspace)
-		submit->aspace->faults++;
+	if (submit->vm)
+		submit->vm->faults++;
 
 	get_comm_cmdline(submit, &comm, &cmd);
 
@@ -828,10 +828,10 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
 }
 
 /* Return a new address space for a msm_drm_private instance */
-struct msm_gem_address_space *
-msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task)
+struct msm_gem_vm *
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
 {
-	struct msm_gem_address_space *aspace = NULL;
+	struct msm_gem_vm *vm = NULL;
 	if (!gpu)
 		return NULL;
 
@@ -839,16 +839,16 @@ msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *ta
 	 * If the target doesn't support private address spaces then return
 	 * the global one
 	 */
-	if (gpu->funcs->create_private_address_space) {
-		aspace = gpu->funcs->create_private_address_space(gpu);
-		if (!IS_ERR(aspace))
-			aspace->pid = get_pid(task_pid(task));
+	if (gpu->funcs->create_private_vm) {
+		vm = gpu->funcs->create_private_vm(gpu);
+		if (!IS_ERR(vm))
+			vm->pid = get_pid(task_pid(task));
 	}
 
-	if (IS_ERR_OR_NULL(aspace))
-		aspace = msm_gem_address_space_get(gpu->aspace);
+	if (IS_ERR_OR_NULL(vm))
+		vm = msm_gem_vm_get(gpu->vm);
 
-	return aspace;
+	return vm;
 }
 
 int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
@@ -943,18 +943,18 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 	msm_devfreq_init(gpu);
 
 
-	gpu->aspace = gpu->funcs->create_address_space(gpu, pdev);
+	gpu->vm = gpu->funcs->create_vm(gpu, pdev);
 
-	if (gpu->aspace == NULL)
+	if (gpu->vm == NULL)
 		DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name);
-	else if (IS_ERR(gpu->aspace)) {
-		ret = PTR_ERR(gpu->aspace);
+	else if (IS_ERR(gpu->vm)) {
+		ret = PTR_ERR(gpu->vm);
 		goto fail;
 	}
 
 	memptrs = msm_gem_kernel_new(drm,
		sizeof(struct msm_rbmemptrs) * nr_rings,
-		check_apriv(gpu, MSM_BO_WC), gpu->aspace, &gpu->memptrs_bo,
+		check_apriv(gpu, MSM_BO_WC), gpu->vm, &gpu->memptrs_bo,
 		&memptrs_iova);
 
 	if (IS_ERR(memptrs)) {
@@ -998,7 +998,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		gpu->rb[i] = NULL;
 	}
 
-	msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace);
+	msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
 
 	platform_set_drvdata(pdev, NULL);
 	return ret;
@@ -1015,11 +1015,11 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
 		gpu->rb[i] = NULL;
 	}
 
-	msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace);
+	msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
 
-	if (!IS_ERR_OR_NULL(gpu->aspace)) {
-		gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu);
-		msm_gem_address_space_put(gpu->aspace);
+	if (!IS_ERR_OR_NULL(gpu->vm)) {
+		gpu->vm->mmu->funcs->detach(gpu->vm->mmu);
+		msm_gem_vm_put(gpu->vm);
 	}
 
 	if (gpu->worker) {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d30a1eedfda6..9d69dcad6612 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -78,10 +78,8 @@ struct msm_gpu_funcs {
 	/* note: gpu_set_freq() can assume that we have been pm_resumed */
 	void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 			     bool suspended);
-	struct msm_gem_address_space *(*create_address_space)
-		(struct msm_gpu *gpu, struct platform_device *pdev);
-	struct msm_gem_address_space *(*create_private_address_space)
-		(struct msm_gpu *gpu);
+	struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
+	struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 
 	/**
@@ -236,7 +234,7 @@ struct msm_gpu {
 	void __iomem *mmio;
 	int irq;
 
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	/* Power Control: */
 	struct regulator *gpu_reg, *gpu_cx;
@@ -358,8 +356,8 @@ struct msm_context {
 	 */
 	int queueid;
 
-	/** @aspace: the per-process GPU address-space */
-	struct msm_gem_address_space *aspace;
+	/** @vm: the per-process GPU address-space */
+	struct msm_gem_vm *vm;
 
 	/** @kref: the reference count */
 	struct kref ref;
@@ -669,8 +667,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs,
 		const char *name, struct msm_gpu_config *config);
 
-struct msm_gem_address_space *
-msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task);
+struct msm_gem_vm *
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
 
 void msm_gpu_cleanup(struct msm_gpu *gpu);
 
diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
index 35d5397e73b4..88504c4b842f 100644
--- a/drivers/gpu/drm/msm/msm_kms.c
+++ b/drivers/gpu/drm/msm/msm_kms.c
@@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void
 	return -ENOSYS;
 }
 
-struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev)
+struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
 {
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct msm_mmu *mmu;
 	struct device *mdp_dev = dev->dev;
 	struct device *mdss_dev = mdp_dev->parent;
@@ -204,17 +204,17 @@ struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev)
 		return NULL;
 	}
 
-	aspace = msm_gem_address_space_create(mmu, "mdp_kms",
+	vm = msm_gem_vm_create(mmu, "mdp_kms",
 		0x1000, 0x100000000 - 0x1000);
-	if (IS_ERR(aspace)) {
-		dev_err(mdp_dev, "aspace create, error %pe\n", aspace);
+	if (IS_ERR(vm)) {
+		dev_err(mdp_dev, "vm create, error %pe\n", vm);
 		mmu->funcs->destroy(mmu);
-		return aspace;
+		return vm;
 	}
 
-	msm_mmu_set_fault_handler(aspace->mmu, kms, msm_kms_fault_handler);
+	msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler);
 
-	return aspace;
+	return vm;
 }
 
 void msm_drm_kms_uninit(struct device *dev)
diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
index 43b58d052ee6..f45996a03e15 100644
--- a/drivers/gpu/drm/msm/msm_kms.h
+++ b/drivers/gpu/drm/msm/msm_kms.h
@@ -139,7 +139,7 @@ struct msm_kms {
 	atomic_t fault_snapshot_capture;
 
 	/* mapper-id used to request GEM buffer mapped for scanout: */
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	/* disp snapshot support */
 	struct kthread_worker *dump_worker;
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index 89dce15eed3b..552b8da9e5f7 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -84,7 +84,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
 
 	ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ,
 		check_apriv(gpu, MSM_BO_WC | MSM_BO_GPU_READONLY),
-		gpu->aspace, &ring->bo, &ring->iova);
+		gpu->vm, &ring->bo, &ring->iova);
 
 	if (IS_ERR(ring->start)) {
 		ret = PTR_ERR(ring->start);
@@ -131,7 +131,7 @@ void msm_ringbuffer_destroy(struct msm_ringbuffer *ring)
 
 	msm_fence_context_free(ring->fctx);
 
-	msm_gem_kernel_put(ring->bo, ring->gpu->aspace);
+	msm_gem_kernel_put(ring->bo, ring->gpu->vm);
 
 	kfree(ring);
 }
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 1acc0fe36353..6298233c3568 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref)
 		kfree(ctx->entities[i]);
 	}
 
-	msm_gem_address_space_put(ctx->aspace);
+	msm_gem_vm_put(ctx->vm);
 	kfree(ctx->comm);
 	kfree(ctx->cmdline);
 	kfree(ctx);
-- 
2.50.0