From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Sean Paul, Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar,
	Jessica Zhang, Marijn Suijten, David Airlie, Simona Vetter,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v8 07/42] drm/msm: Remove vram carveout support
Date: Sun, 29 Jun 2025 07:03:10 -0700
Message-ID: <20250629140537.30850-8-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250629140537.30850-1-robin.clark@oss.qualcomm.com>
References: <20250629140537.30850-1-robin.clark@oss.qualcomm.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0

From: Rob Clark

It is standing in the way of drm_gpuvm / VM_BIND support. Not to mention it
is frequently broken and rarely tested, and I think it is only needed for a
ten-year-old, not-quite-upstream SoC (msm8974). Maybe we can add support
back in later, but I'm doubtful.

Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c      |   8 --
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c      |  15 ---
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c      |  15 ---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c      |   3 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c      |   3 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c |   4 -
 drivers/gpu/drm/msm/adreno/adreno_gpu.c    |   4 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.h    |   1 -
 drivers/gpu/drm/msm/msm_drv.c              | 117 +-----------------
 drivers/gpu/drm/msm/msm_drv.h              |  11 --
 drivers/gpu/drm/msm/msm_gem.c              | 131 ++-------------------
 drivers/gpu/drm/msm/msm_gem.h              |   5 -
 drivers/gpu/drm/msm/msm_gem_submit.c       |   5 -
 drivers/gpu/drm/msm/msm_gpu.c              |   6 +-
 14 files changed, 19 insertions(+), 309 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 5eb063ed0b46..095bae92e3e8 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -551,14 +551,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
 	else
 		adreno_gpu->registers = a220_registers;
 
-	if (!gpu->vm) {
-		dev_err(dev->dev, "No memory protection without MMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	return gpu;
 
 fail:
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index 434e6ededf83..a956cd79195e 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -581,21 +581,6 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	if (!gpu->vm) {
-		/* TODO we think it is possible to configure the GPU to
-		 * restrict access to VRAM carveout. But the required
-		 * registers are unknown. For now just bail out and
-		 * limp along with just modesetting. If it turns out
-		 * to not be possible to restrict access, then we must
-		 * implement a cmdstream validator.
-		 */
-		DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
 	if (IS_ERR(icc_path)) {
 		ret = PTR_ERR(icc_path);
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index 2c75debcfd84..83f6329accba 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -695,21 +695,6 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
 
-	if (!gpu->vm) {
-		/* TODO we think it is possible to configure the GPU to
-		 * restrict access to VRAM carveout. But the required
-		 * registers are unknown. For now just bail out and
-		 * limp along with just modesetting. If it turns out
-		 * to not be possible to restrict access, then we must
-		 * implement a cmdstream validator.
-		 */
-		DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
 	if (IS_ERR(icc_path)) {
 		ret = PTR_ERR(icc_path);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index dc31bc0afca4..04138a06724b 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1786,8 +1786,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	if (gpu->vm)
-		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
+	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 5078152eb8d3..7b3be2b46cc4 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2560,8 +2560,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	if (gpu->vm)
-		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
+	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
index 16e7ac444efd..27dbbb302081 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
@@ -16,10 +16,6 @@ bool snapshot_debugbus = false;
 MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)");
 module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600);
 
-bool allow_vram_carveout = false;
-MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU");
-module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600);
-
 int enable_preemption = -1;
 MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on, 0=disable, -1=auto (default))");
 module_param(enable_preemption, int, 0600);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index be723fe4de2b..0f71c39696a5 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -209,7 +209,9 @@ adreno_iommu_create_vm(struct msm_gpu *gpu,
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
-	if (IS_ERR_OR_NULL(mmu))
+	if (!mmu)
+		return ERR_PTR(-ENODEV);
+	else if (IS_ERR_OR_NULL(mmu))
 		return ERR_CAST(mmu);
 
 	geometry = msm_iommu_get_geometry(mmu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 4fa4b11442ba..b1761f990aa1 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -18,7 +18,6 @@
 #include "adreno_pm4.xml.h"
 
 extern bool snapshot_debugbus;
-extern bool allow_vram_carveout;
 
 enum {
 	ADRENO_FW_PM4 = 0,
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 49c868e33d70..c314fd470d69 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -46,12 +46,6 @@
 #define MSM_VERSION_MINOR	12
 #define MSM_VERSION_PATCHLEVEL	0
 
-static void msm_deinit_vram(struct drm_device *ddev);
-
-static char *vram = "16m";
-MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
-module_param(vram, charp, 0);
-
 bool dumpstate;
 MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors");
 module_param(dumpstate, bool, 0600);
@@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev)
 	if (priv->kms)
 		msm_drm_kms_uninit(dev);
 
-	msm_deinit_vram(ddev);
-
 	component_unbind_all(dev, ddev);
 
 	ddev->dev_private = NULL;
@@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev)
 	return 0;
 }
 
-bool msm_use_mmu(struct drm_device *dev)
-{
-	struct msm_drm_private *priv = dev->dev_private;
-
-	/*
-	 * a2xx comes with its own MMU
-	 * On other platforms IOMMU can be declared specified either for the
-	 * MDP/DPU device or for its parent, MDSS device.
-	 */
-	return priv->is_a2xx ||
-		device_iommu_mapped(dev->dev) ||
-		device_iommu_mapped(dev->dev->parent);
-}
-
-static int msm_init_vram(struct drm_device *dev)
-{
-	struct msm_drm_private *priv = dev->dev_private;
-	struct device_node *node;
-	unsigned long size = 0;
-	int ret = 0;
-
-	/* In the device-tree world, we could have a 'memory-region'
-	 * phandle, which gives us a link to our "vram". Allocating
-	 * is all nicely abstracted behind the dma api, but we need
-	 * to know the entire size to allocate it all in one go. There
-	 * are two cases:
-	 *  1) device with no IOMMU, in which case we need exclusive
-	 *     access to a VRAM carveout big enough for all gpu
-	 *     buffers
-	 *  2) device with IOMMU, but where the bootloader puts up
-	 *     a splash screen. In this case, the VRAM carveout
-	 *     need only be large enough for fbdev fb. But we need
-	 *     exclusive access to the buffer to avoid the kernel
-	 *     using those pages for other purposes (which appears
-	 *     as corruption on screen before we have a chance to
-	 *     load and do initial modeset)
-	 */
-
-	node = of_parse_phandle(dev->dev->of_node, "memory-region", 0);
-	if (node) {
-		struct resource r;
-		ret = of_address_to_resource(node, 0, &r);
-		of_node_put(node);
-		if (ret)
-			return ret;
-		size = r.end - r.start + 1;
-		DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);
-
-		/* if we have no IOMMU, then we need to use carveout allocator.
-		 * Grab the entire DMA chunk carved out in early startup in
-		 * mach-msm:
-		 */
-	} else if (!msm_use_mmu(dev)) {
-		DRM_INFO("using %s VRAM carveout\n", vram);
-		size = memparse(vram, NULL);
-	}
-
-	if (size) {
-		unsigned long attrs = 0;
-		void *p;
-
-		priv->vram.size = size;
-
-		drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1);
-		spin_lock_init(&priv->vram.lock);
-
-		attrs |= DMA_ATTR_NO_KERNEL_MAPPING;
-		attrs |= DMA_ATTR_WRITE_COMBINE;
-
-		/* note that for no-kernel-mapping, the vaddr returned
-		 * is bogus, but non-null if allocation succeeded:
-		 */
-		p = dma_alloc_attrs(dev->dev, size,
-				&priv->vram.paddr, GFP_KERNEL, attrs);
-		if (!p) {
-			DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n");
-			priv->vram.paddr = 0;
-			return -ENOMEM;
-		}
-
-		DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n",
-				(uint32_t)priv->vram.paddr,
-				(uint32_t)(priv->vram.paddr + size));
-	}
-
-	return ret;
-}
-
-static void msm_deinit_vram(struct drm_device *ddev)
-{
-	struct msm_drm_private *priv = ddev->dev_private;
-	unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING;
-
-	if (!priv->vram.paddr)
-		return;
-
-	drm_mm_takedown(&priv->vram.mm);
-	dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr,
-			attrs);
-}
-
 static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 {
 	struct msm_drm_private *priv = dev_get_drvdata(dev);
@@ -260,16 +151,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 		goto err_destroy_wq;
 	}
 
-	ret = msm_init_vram(ddev);
-	if (ret)
-		goto err_destroy_wq;
-
 	dma_set_max_seg_size(dev, UINT_MAX);
 
 	/* Bind all our sub-components: */
 	ret = component_bind_all(dev, ddev);
 	if (ret)
-		goto err_deinit_vram;
+		goto err_destroy_wq;
 
 	ret = msm_gem_shrinker_init(ddev);
 	if (ret)
@@ -306,8 +193,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 
 	return ret;
 
-err_deinit_vram:
-	msm_deinit_vram(ddev);
 err_destroy_wq:
 	destroy_workqueue(priv->wq);
 err_put_dev:
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 8aa3412c6e36..761e7e221ad9 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -183,17 +183,6 @@ struct msm_drm_private {
 
 	struct msm_drm_thread event_thread[MAX_CRTCS];
 
-	/* VRAM carveout, used when no IOMMU: */
-	struct {
-		unsigned long size;
-		dma_addr_t paddr;
-		/* NOTE: mm managed at the page level, size is in # of pages
-		 * and position mm_node->start is in # of pages:
-		 */
-		struct drm_mm mm;
-		spinlock_t lock; /* Protects drm_mm node allocation/removal */
-	} vram;
-
 	struct notifier_block vmap_notifier;
 	struct shrinker *shrinker;
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5e6c88b85fd3..b83790cc08df 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -17,24 +17,8 @@
 #include
 
 #include "msm_drv.h"
-#include "msm_fence.h"
 #include "msm_gem.h"
 #include "msm_gpu.h"
-#include "msm_mmu.h"
-
-static dma_addr_t physaddr(struct drm_gem_object *obj)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_drm_private *priv = obj->dev->dev_private;
-	return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) +
-			priv->vram.paddr;
-}
-
-static bool use_pages(struct drm_gem_object *obj)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	return !msm_obj->vram_node;
-}
 
 static void update_device_mem(struct msm_drm_private *priv, ssize_t size)
 {
@@ -135,36 +119,6 @@ static void update_lru(struct drm_gem_object *obj)
 	mutex_unlock(&priv->lru.lock);
 }
 
-/* allocate pages from VRAM carveout, used when no IOMMU: */
-static struct page **get_pages_vram(struct drm_gem_object *obj, int npages)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_drm_private *priv = obj->dev->dev_private;
-	dma_addr_t paddr;
-	struct page **p;
-	int ret, i;
-
-	p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!p)
-		return ERR_PTR(-ENOMEM);
-
-	spin_lock(&priv->vram.lock);
-	ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages);
-	spin_unlock(&priv->vram.lock);
-	if (ret) {
-		kvfree(p);
-		return ERR_PTR(ret);
-	}
-
-	paddr = physaddr(obj);
-	for (i = 0; i < npages; i++) {
-		p[i] = pfn_to_page(__phys_to_pfn(paddr));
-		paddr += PAGE_SIZE;
-	}
-
-	return p;
-}
-
 static struct page **get_pages(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
@@ -176,10 +130,7 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		struct page **p;
 		int npages = obj->size >> PAGE_SHIFT;
 
-		if (use_pages(obj))
-			p = drm_gem_get_pages(obj);
-		else
-			p = get_pages_vram(obj, npages);
+		p = drm_gem_get_pages(obj);
 
 		if (IS_ERR(p)) {
 			DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n",
@@ -212,18 +163,6 @@ static struct page **get_pages(struct drm_gem_object *obj)
 	return msm_obj->pages;
 }
 
-static void put_pages_vram(struct drm_gem_object *obj)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_drm_private *priv = obj->dev->dev_private;
-
-	spin_lock(&priv->vram.lock);
-	drm_mm_remove_node(msm_obj->vram_node);
-	spin_unlock(&priv->vram.lock);
-
-	kvfree(msm_obj->pages);
-}
-
 static void put_pages(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
@@ -244,10 +183,7 @@ static void put_pages(struct drm_gem_object *obj)
 
 		update_device_mem(obj->dev->dev_private, -obj->size);
 
-		if (use_pages(obj))
-			drm_gem_put_pages(obj, msm_obj->pages, true, false);
-		else
-			put_pages_vram(obj);
+		drm_gem_put_pages(obj, msm_obj->pages, true, false);
 
 		msm_obj->pages = NULL;
 		update_lru(obj);
@@ -1207,19 +1143,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_gem_object *msm_obj;
 	struct drm_gem_object *obj = NULL;
-	bool use_vram = false;
 	int ret;
 
 	size = PAGE_ALIGN(size);
 
-	if (!msm_use_mmu(dev))
-		use_vram = true;
-	else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size)
-		use_vram = true;
-
-	if (GEM_WARN_ON(use_vram && !priv->vram.size))
-		return ERR_PTR(-EINVAL);
-
 	/* Disallow zero sized objects as they make the underlying
 	 * infrastructure grumpy
 	 */
@@ -1232,44 +1159,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32
 
 	msm_obj = to_msm_bo(obj);
 
-	if (use_vram) {
-		struct msm_gem_vma *vma;
-		struct page **pages;
-
-		drm_gem_private_object_init(dev, obj, size);
-
-		msm_gem_lock(obj);
-
-		vma = add_vma(obj, NULL);
-		msm_gem_unlock(obj);
-		if (IS_ERR(vma)) {
-			ret = PTR_ERR(vma);
-			goto fail;
-		}
-
-		to_msm_bo(obj)->vram_node = &vma->node;
-
-		msm_gem_lock(obj);
-		pages = get_pages(obj);
-		msm_gem_unlock(obj);
-		if (IS_ERR(pages)) {
-			ret = PTR_ERR(pages);
-			goto fail;
-		}
-
-		vma->iova = physaddr(obj);
-	} else {
-		ret = drm_gem_object_init(dev, obj, size);
-		if (ret)
-			goto fail;
-		/*
-		 * Our buffers are kept pinned, so allocating them from the
-		 * MOVABLE zone is a really bad idea, and conflicts with CMA.
-		 * See comments above new_inode() why this is required _and_
-		 * expected if you're going to pin these pages.
-		 */
-		mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
-	}
+	ret = drm_gem_object_init(dev, obj, size);
+	if (ret)
+		goto fail;
+	/*
+	 * Our buffers are kept pinned, so allocating them from the
+	 * MOVABLE zone is a really bad idea, and conflicts with CMA.
+	 * See comments above new_inode() why this is required _and_
+	 * expected if you're going to pin these pages.
+	 */
+	mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
 
 	drm_gem_lru_move_tail(&priv->lru.unbacked, obj);
 
@@ -1297,12 +1196,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 	uint32_t size;
 	int ret, npages;
 
-	/* if we don't have IOMMU, don't bother pretending we can import: */
-	if (!msm_use_mmu(dev)) {
-		DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n");
-		return ERR_PTR(-EINVAL);
-	}
-
 	size = PAGE_ALIGN(dmabuf->size);
 
 	ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 64ea3ed213c1..e47e187ecd00 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -102,11 +102,6 @@ struct msm_gem_object {
 
 	struct list_head vmas;    /* list of msm_gem_vma */
 
-	/* For physically contiguous buffers. Used when we don't have
-	 * an IOMMU. Also used for stolen/splashscreen buffer.
-	 */
-	struct drm_mm_node *vram_node;
-
 	char name[32]; /* Identifier to print for the debugfs files */
 
 	/* userspace metadata backchannel */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index a59816b6b6de..c184b1a1f522 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -669,11 +669,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) {
-		DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n");
-		return -EPERM;
-	}
-
 	/* for now, we just have 3d pipe.. eventually this would need to
 	 * be more clever to dispatch to appropriate gpu module:
 	 */
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 3400a6ca8fd8..47268aae7d54 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -942,12 +942,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 
 	msm_devfreq_init(gpu);
 
-	gpu->vm = gpu->funcs->create_vm(gpu, pdev);
-
-	if (gpu->vm == NULL)
-		DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name);
-	else if (IS_ERR(gpu->vm)) {
+	if (IS_ERR(gpu->vm)) {
 		ret = PTR_ERR(gpu->vm);
 		goto fail;
 	}
-- 
2.50.0