From nobody Thu Oct 2 10:57:10 2025
From: Dmitry Baryshkov
Date: Thu, 18 Sep 2025 06:50:22 +0300
Subject: [PATCH v5 1/5] drm/msm: correct separate_gpu_kms description
Message-Id: <20250918-msm-gpu-split-v5-1-44486f44d27d@oss.qualcomm.com>
References: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>
In-Reply-To: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>
To: Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal,
    Christian König, Konrad Dybcio
Cc: linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org,
    freedreno@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
X-Mailer: b4 0.14.2

While applying commit 217ed15bd399 ("drm/msm: enable separate binding
of GPU and display devices"), the module param was renamed from
separate_gpu_drm to separate_gpu_kms. However, the param name inside
MODULE_PARM_DESC() wasn't updated to reflect the new name. Update
MODULE_PARM_DESC() to use the current name for the module param.

Fixes: 217ed15bd399 ("drm/msm: enable separate binding of GPU and display devices")
Signed-off-by: Dmitry Baryshkov
---
 drivers/gpu/drm/msm/msm_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 7e977fec4100792394dccf59097a01c2b2556608..06ab78e1a2c583352c08a62e6cf250bacde9b75b 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -55,7 +55,7 @@ MODULE_PARM_DESC(modeset, "Use kernel modesetting [KMS] (1=on (default), 0=disab
 module_param(modeset, bool, 0600);
 
 static bool separate_gpu_kms;
-MODULE_PARM_DESC(separate_gpu_drm, "Use separate DRM device for the GPU (0=single DRM device for both GPU and display (default), 1=two DRM devices)");
+MODULE_PARM_DESC(separate_gpu_kms, "Use separate DRM device for the GPU (0=single DRM device for both GPU and display (default), 1=two DRM devices)");
 module_param(separate_gpu_kms, bool, 0400);
 
 DECLARE_FAULT_ATTR(fail_gem_alloc);
--
2.47.3
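
A note on the class of bug fixed above: MODULE_PARM_DESC() is keyed by a
bare name token, not by the C variable, so nothing at build time warns when
the two drift apart after a rename. A minimal sketch of the pairing
(hypothetical out-of-tree module, not part of this series):

  #include <linux/module.h>
  #include <linux/moduleparam.h>

  static bool separate_gpu_kms;

  /*
   * module_param() registers the parameter under the variable's own name;
   * MODULE_PARM_DESC() emits a "parm=<name>:<description>" modinfo record
   * keyed by its first argument.  The two names must match, or
   * "modinfo -p" lists the description under a parameter that does not
   * exist.
   */
  module_param(separate_gpu_kms, bool, 0400);
  MODULE_PARM_DESC(separate_gpu_kms, "Use a separate DRM device for the GPU");

  MODULE_LICENSE("GPL");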

From nobody Thu Oct 2 10:57:10 2025
From: Dmitry Baryshkov
Date: Thu, 18 Sep 2025 06:50:23 +0300
Subject: [PATCH v5 2/5] drm/msm: split VM_BIND from the rest of GEM VMA code
Message-Id: <20250918-msm-gpu-split-v5-2-44486f44d27d@oss.qualcomm.com>
References: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>
In-Reply-To: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>
To: Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal,
    Christian König, Konrad Dybcio
Cc: linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org,
    freedreno@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
X-Mailer: b4 0.14.2

In preparation for disabling GPU functionality, split the VM_BIND-related
functions (which are used only for the GPU) from the rest of the GEM VMA
implementation.

Signed-off-by: Dmitry Baryshkov
---
 drivers/gpu/drm/msm/Makefile          |    1 +
 drivers/gpu/drm/msm/msm_gem_vm_bind.c | 1116 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_vma.c     | 1177 +--------------------------------
 drivers/gpu/drm/msm/msm_gem_vma.h     |  105 +++
 4 files changed, 1225 insertions(+), 1174 deletions(-)

diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 0c0dfb25f01b193b10946fae20138caf32cf0ed2..d7876c154b0aa2cb0164c4b1fb7900b1a42db46b 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -115,6 +115,7 @@ msm-y += \
 	msm_gem_shrinker.o \
 	msm_gem_submit.o \
 	msm_gem_vma.o \
+	msm_gem_vm_bind.o \
 	msm_gpu.o \
 	msm_gpu_devfreq.o \
 	msm_io_utils.o \
diff --git a/drivers/gpu/drm/msm/msm_gem_vm_bind.c b/drivers/gpu/drm/msm/msm_gem_vm_bind.c
new file mode 100644
index 0000000000000000000000000000000000000000..683a5307a609ae7f5c366b4e0ddcdd98039ddea1
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_gem_vm_bind.c
@@ -0,0 +1,1116 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2016 Red Hat
+ * Author: Rob Clark
+ */
+
+#include
+#include
+
+#include
+#include
+
+#include "msm_drv.h"
+#include "msm_gem.h"
+#include "msm_gem_vma.h"
+#include "msm_gpu.h"
+#include "msm_mmu.h"
+#include "msm_syncobj.h"
+
+/**
+ * struct msm_vma_op - A MAP or UNMAP operation
+ */
+struct msm_vm_op {
+	/** @op: The operation type */
+	enum {
+		MSM_VM_OP_MAP = 1,
+		MSM_VM_OP_UNMAP,
+	} op;
+	union {
+		/** @map: Parameters used if op == MSM_VMA_OP_MAP */
+		struct msm_vm_map_op map;
+		/** @unmap: Parameters used if op == MSM_VMA_OP_UNMAP */
+		struct msm_vm_unmap_op unmap;
+	};
+	/** @node: list head in msm_vm_bind_job::vm_ops */
+	struct list_head node;
+
+	/**
+	 * @obj: backing object for pages to be mapped/unmapped
+	 *
+	 * Async unmap ops, in particular, must hold a reference to the
+	 * original GEM object backing the mapping that will be unmapped.
+	 * But the same can be required in the map path, for example if
+	 * there is not a corresponding unmap op, such as process exit.
+	 *
+	 * This ensures that the pages backing the mapping are not freed
+	 * before the mapping is torn down.
+	 */
+	struct drm_gem_object *obj;
+};
+
+/**
+ * struct msm_vm_bind_job - Tracking for a VM_BIND ioctl
+ *
+ * A table of userspace requested VM updates (MSM_VM_BIND_OP_UNMAP/MAP/MAP_NULL)
+ * gets applied to the vm, generating a list of VM ops (MSM_VM_OP_MAP/UNMAP)
+ * which are applied to the pgtables asynchronously.  For example a userspace
+ * requested MSM_VM_BIND_OP_MAP could end up generating both an MSM_VM_OP_UNMAP
+ * to unmap an existing mapping, and a MSM_VM_OP_MAP to apply the new mapping.
+ */
+struct msm_vm_bind_job {
+	/** @base: base class for drm_sched jobs */
+	struct drm_sched_job base;
+	/** @fence: The fence that is signaled when job completes */
+	struct dma_fence *fence;
+	/** @vm: The VM being operated on */
+	struct drm_gpuvm *vm;
+	/** @queue: The queue that the job runs on */
+	struct msm_gpu_submitqueue *queue;
+	/** @prealloc: Tracking for pre-allocated MMU pgtable pages */
+	struct msm_mmu_prealloc prealloc;
+	/** @vm_ops: a list of struct msm_vm_op */
+	struct list_head vm_ops;
+	/** @bos_pinned: are the GEM objects being bound pinned? */
+	bool bos_pinned;
+	/** @nr_ops: the number of userspace requested ops */
+	unsigned int nr_ops;
+	/**
+	 * @ops: the userspace requested ops
+	 *
+	 * The userspace requested ops are copied/parsed and validated
+	 * before we start applying the updates to try to do as much up-
+	 * front error checking as possible, to avoid the VM being in an
+	 * undefined state due to partially executed VM_BIND.
+	 *
+	 * This table also serves to hold a reference to the backing GEM
+	 * objects.
+	 */
+	struct msm_vm_bind_op {
+		uint32_t op;
+		uint32_t flags;
+		union {
+			struct drm_gem_object *obj;
+			uint32_t handle;
+		};
+		uint64_t obj_offset;
+		uint64_t iova;
+		uint64_t range;
+	} ops[];
+};
+
+#define job_foreach_bo(_obj, _job) \
+	for (unsigned int i = 0; i < (_job)->nr_ops; i++) \
+		if (((_obj) = (_job)->ops[i].obj))
+
+static inline struct msm_vm_bind_job *to_msm_vm_bind_job(struct drm_sched_job *job)
+{
+	return container_of(job, struct msm_vm_bind_job, base);
+}
+
+struct op_arg {
+	unsigned int flags;
+	struct msm_vm_bind_job *job;
+	const struct msm_vm_bind_op *op;
+	bool kept;
+};
+
+static void
+vm_op_enqueue(struct op_arg *arg, struct msm_vm_op _op)
+{
+	struct msm_vm_op *op = kmalloc(sizeof(*op), GFP_KERNEL);
+	*op = _op;
+	list_add_tail(&op->node, &arg->job->vm_ops);
+
+	if (op->obj)
+		drm_gem_object_get(op->obj);
+}
+
+static struct drm_gpuva *
+vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op)
+{
+	return msm_gem_vma_new(arg->job->vm, op->gem.obj, op->gem.offset,
+			       op->va.addr, op->va.addr + op->va.range);
+}
+
+int msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *_arg)
+{
+	struct op_arg *arg = _arg;
+	struct drm_gem_object *obj = op->map.gem.obj;
+	struct drm_gpuva *vma;
+	struct sg_table *sgt;
+	unsigned int prot;
+
+	if (arg->kept)
+		return 0;
+
+	vma = vma_from_op(arg, &op->map);
+	if (WARN_ON(IS_ERR(vma)))
+		return PTR_ERR(vma);
+
+	vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj,
+	       vma->va.addr, vma->va.range);
+
+	vma->flags = ((struct op_arg *)arg)->flags;
+
+	if (obj) {
+		sgt = to_msm_bo(obj)->sgt;
+		prot = msm_gem_prot(obj);
+	} else {
+		sgt = NULL;
+		prot = IOMMU_READ | IOMMU_WRITE;
+	}
+
+	vm_op_enqueue(arg, (struct msm_vm_op){
+		.op = MSM_VM_OP_MAP,
+		.map = {
+			.sgt = sgt,
+			.iova = vma->va.addr,
+			.range = vma->va.range,
+			.offset = vma->gem.offset,
+			.prot = prot,
+			.queue_id = arg->job->queue->id,
+		},
+		.obj = vma->gem.obj,
+	});
+
+	to_msm_vma(vma)->mapped = true;
+
+	return 0;
+}
+
+int msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg)
+{
+	struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job;
+	struct drm_gpuvm *vm = job->vm;
+	struct drm_gpuva *orig_vma = op->remap.unmap->va;
+	struct drm_gpuva *prev_vma = NULL, *next_vma = NULL;
+	struct drm_gpuvm_bo *vm_bo = orig_vma->vm_bo;
+	bool mapped = to_msm_vma(orig_vma)->mapped;
+	unsigned int flags;
+
+	vm_dbg("orig_vma: %p:%p:%p: %016llx %016llx", vm, orig_vma,
+	       orig_vma->gem.obj, orig_vma->va.addr, orig_vma->va.range);
+
+	if (mapped) {
+		uint64_t unmap_start, unmap_range;
+
+		drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
+
+		vm_op_enqueue(arg, (struct msm_vm_op){
+			.op = MSM_VM_OP_UNMAP,
+			.unmap = {
+				.iova = unmap_start,
+				.range = unmap_range,
+				.queue_id = job->queue->id,
+			},
+			.obj = orig_vma->gem.obj,
+		});
+
+		/*
+		 * Part of this GEM obj is still mapped, but we're going to kill the
+		 * existing VMA and replace it with one or two new ones (ie. two if
+		 * the unmapped range is in the middle of the existing (unmap) VMA).
+		 * So just set the state to unmapped:
+		 */
+		to_msm_vma(orig_vma)->mapped = false;
+	}
+
+	/*
+	 * Hold a ref to the vm_bo between the msm_gem_vma_close() and the
+	 * creation of the new prev/next vma's, in case the vm_bo is tracked
+	 * in the VM's evict list:
+	 */
+	if (vm_bo)
+		drm_gpuvm_bo_get(vm_bo);
+
+	/*
+	 * The prev_vma and/or next_vma are replacing the unmapped vma, and
+	 * therefore should preserve it's flags:
+	 */
+	flags = orig_vma->flags;
+
+	msm_gem_vma_close(orig_vma);
+
+	if (op->remap.prev) {
+		prev_vma = vma_from_op(arg, op->remap.prev);
+		if (WARN_ON(IS_ERR(prev_vma)))
+			return PTR_ERR(prev_vma);
+
+		vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma,
+		       prev_vma->va.addr, prev_vma->va.range);
+		to_msm_vma(prev_vma)->mapped = mapped;
+		prev_vma->flags = flags;
+	}
+
+	if (op->remap.next) {
+		next_vma = vma_from_op(arg, op->remap.next);
+		if (WARN_ON(IS_ERR(next_vma)))
+			return PTR_ERR(next_vma);
+
+		vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma,
+		       next_vma->va.addr, next_vma->va.range);
+		to_msm_vma(next_vma)->mapped = mapped;
+		next_vma->flags = flags;
+	}
+
+	if (!mapped)
+		drm_gpuvm_bo_evict(vm_bo, true);
+
+	/* Drop the previous ref: */
+	drm_gpuvm_bo_put(vm_bo);
+
+	return 0;
+}
+
+int msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *_arg)
+{
+	struct op_arg *arg = _arg;
+	struct drm_gpuva *vma = op->unmap.va;
+	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+
+	vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj,
+	       vma->va.addr, vma->va.range);
+
+	/*
+	 * Detect in-place remap.  Turnip does this to change the vma flags,
+	 * in particular MSM_VMA_DUMP.  In this case we want to avoid actually
+	 * touching the page tables, as that would require synchronization
+	 * against SUBMIT jobs running on the GPU.
+	 */
+	if (op->unmap.keep &&
+	    (arg->op->op == MSM_VM_BIND_OP_MAP) &&
+	    (vma->gem.obj == arg->op->obj) &&
+	    (vma->gem.offset == arg->op->obj_offset) &&
+	    (vma->va.addr == arg->op->iova) &&
+	    (vma->va.range == arg->op->range)) {
+		/* We are only expecting a single in-place unmap+map cb pair: */
+		WARN_ON(arg->kept);
+
+		/* Leave the existing VMA in place, but signal that to the map cb: */
+		arg->kept = true;
+
+		/* Only flags are changing, so update that in-place: */
+		unsigned int orig_flags = vma->flags & (DRM_GPUVA_USERBITS - 1);
+
+		vma->flags = orig_flags | arg->flags;
+
+		return 0;
+	}
+
+	if (!msm_vma->mapped)
+		goto out_close;
+
+	vm_op_enqueue(arg, (struct msm_vm_op){
+		.op = MSM_VM_OP_UNMAP,
+		.unmap = {
+			.iova = vma->va.addr,
+			.range = vma->va.range,
+			.queue_id = arg->job->queue->id,
+		},
+		.obj = vma->gem.obj,
+	});
+
+	msm_vma->mapped = false;
+
+out_close:
+	msm_gem_vma_close(vma);
+
+	return 0;
+}
+
+static struct dma_fence *
+msm_vma_job_run(struct drm_sched_job *_job)
+{
+	struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job);
+	struct msm_gem_vm *vm = to_msm_vm(job->vm);
+	struct drm_gem_object *obj;
+	int ret = vm->unusable ? -EINVAL : 0;
+
+	vm_dbg("");
+
+	mutex_lock(&vm->mmu_lock);
+	vm->mmu->prealloc = &job->prealloc;
+
+	while (!list_empty(&job->vm_ops)) {
+		struct msm_vm_op *op =
+			list_first_entry(&job->vm_ops, struct msm_vm_op, node);
+
+		switch (op->op) {
+		case MSM_VM_OP_MAP:
+			/*
+			 * On error, stop trying to map new things.. but we
+			 * still want to process the unmaps (or in particular,
+			 * the drm_gem_object_put()s)
+			 */
+			if (!ret)
+				ret = vm_map_op(vm, &op->map);
+			break;
+		case MSM_VM_OP_UNMAP:
+			vm_unmap_op(vm, &op->unmap);
+			break;
+		}
+		drm_gem_object_put(op->obj);
+		list_del(&op->node);
+		kfree(op);
+	}
+
+	vm->mmu->prealloc = NULL;
+	mutex_unlock(&vm->mmu_lock);
+
+	/*
+	 * We failed to perform at least _some_ of the pgtable updates, so
+	 * now the VM is in an undefined state.  Game over!
+	 */
+	if (ret)
+		msm_gem_vm_unusable(job->vm);
+
+	job_foreach_bo(obj, job) {
+		msm_gem_lock(obj);
+		msm_gem_unpin_locked(obj);
+		msm_gem_unlock(obj);
+	}
+
+	/* VM_BIND ops are synchronous, so no fence to wait on: */
+	return NULL;
+}
+
+static void
+msm_vma_job_free(struct drm_sched_job *_job)
+{
+	struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job);
+	struct msm_gem_vm *vm = to_msm_vm(job->vm);
+	struct drm_gem_object *obj;
+
+	vm->mmu->funcs->prealloc_cleanup(vm->mmu, &job->prealloc);
+
+	atomic_sub(job->prealloc.count, &vm->prealloc_throttle.in_flight);
+
+	drm_sched_job_cleanup(_job);
+
+	job_foreach_bo(obj, job)
+		drm_gem_object_put(obj);
+
+	msm_submitqueue_put(job->queue);
+	dma_fence_put(job->fence);
+
+	/* In error paths, we could have unexecuted ops: */
+	while (!list_empty(&job->vm_ops)) {
+		struct msm_vm_op *op =
+			list_first_entry(&job->vm_ops, struct msm_vm_op, node);
+
+		list_del(&op->node);
+		kfree(op);
+	}
+
+	wake_up(&vm->prealloc_throttle.wait);
+
+	kfree(job);
+}
+
+static const struct drm_sched_backend_ops msm_vm_bind_ops = {
+	.run_job = msm_vma_job_run,
+	.free_job = msm_vma_job_free
+};
+
+int msm_gem_vm_sched_init(struct msm_gem_vm *vm, struct drm_device *drm)
+{
+	struct drm_sched_init_args args = {
+		.ops = &msm_vm_bind_ops,
+		.num_rqs = 1,
+		.credit_limit = 1,
+		.timeout = MAX_SCHEDULE_TIMEOUT,
+		.name = "msm-vm-bind",
+		.dev = drm->dev,
+	};
+	int ret;
+
+	ret = drm_sched_init(&vm->sched, &args);
+	if (ret)
+		return ret;
+
+	init_waitqueue_head(&vm->prealloc_throttle.wait);
+
+	return 0;
+}
+
+void msm_gem_vm_sched_fini(struct msm_gem_vm *vm)
+{
+	/* Kill the scheduler now, so we aren't racing with it for cleanup: */
+	drm_sched_stop(&vm->sched, NULL);
+	drm_sched_fini(&vm->sched);
+}
+
+static struct msm_vm_bind_job *
+vm_bind_job_create(struct drm_device *dev, struct drm_file *file,
+		   struct msm_gpu_submitqueue *queue, uint32_t nr_ops)
+{
+	struct msm_vm_bind_job *job;
+	uint64_t sz;
+	int ret;
+
+	sz = struct_size(job, ops, nr_ops);
+
+	if (sz > SIZE_MAX)
+		return ERR_PTR(-ENOMEM);
+
+	job = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN);
+	if (!job)
+		return ERR_PTR(-ENOMEM);
+
+	ret = drm_sched_job_init(&job->base, queue->entity, 1, queue,
+				 file->client_id);
+	if (ret) {
+		kfree(job);
+		return ERR_PTR(ret);
+	}
+
+	job->vm = msm_context_vm(dev, queue->ctx);
+	job->queue = queue;
+	INIT_LIST_HEAD(&job->vm_ops);
+
+	return job;
+}
+
+static bool invalid_alignment(uint64_t addr)
+{
+	/*
+	 * Technically this is about GPU alignment, not CPU alignment.  But
+	 * I've not seen any qcom SoC where the SMMU does not support the
+	 * CPU's smallest page size.
+	 */
+	return !PAGE_ALIGNED(addr);
+}
+
+static int
+lookup_op(struct msm_vm_bind_job *job, const struct drm_msm_vm_bind_op *op)
+{
+	struct drm_device *dev = job->vm->drm;
+	int i = job->nr_ops++;
+	int ret = 0;
+
+	job->ops[i].op = op->op;
+	job->ops[i].handle = op->handle;
+	job->ops[i].obj_offset = op->obj_offset;
+	job->ops[i].iova = op->iova;
+	job->ops[i].range = op->range;
+	job->ops[i].flags = op->flags;
+
+	if (op->flags & ~MSM_VM_BIND_OP_FLAGS)
+		ret = UERR(EINVAL, dev, "invalid flags: %x\n", op->flags);
+
+	if (invalid_alignment(op->iova))
+		ret = UERR(EINVAL, dev, "invalid address: %016llx\n", op->iova);
+
+	if (invalid_alignment(op->obj_offset))
+		ret = UERR(EINVAL, dev, "invalid bo_offset: %016llx\n", op->obj_offset);
+
+	if (invalid_alignment(op->range))
+		ret = UERR(EINVAL, dev, "invalid range: %016llx\n", op->range);
+
+	if (!drm_gpuvm_range_valid(job->vm, op->iova, op->range))
+		ret = UERR(EINVAL, dev, "invalid range: %016llx, %016llx\n", op->iova, op->range);
+
+	/*
+	 * MAP must specify a valid handle.  But the handle MBZ for
+	 * UNMAP or MAP_NULL.
+	 */
+	if (op->op == MSM_VM_BIND_OP_MAP) {
+		if (!op->handle)
+			ret = UERR(EINVAL, dev, "invalid handle\n");
+	} else if (op->handle) {
+		ret = UERR(EINVAL, dev, "handle must be zero\n");
+	}
+
+	switch (op->op) {
+	case MSM_VM_BIND_OP_MAP:
+	case MSM_VM_BIND_OP_MAP_NULL:
+	case MSM_VM_BIND_OP_UNMAP:
+		break;
+	default:
+		ret = UERR(EINVAL, dev, "invalid op: %u\n", op->op);
+		break;
+	}
+
+	return ret;
+}
+
+/*
+ * ioctl parsing, parameter validation, and GEM handle lookup
+ */
+static int
+vm_bind_job_lookup_ops(struct msm_vm_bind_job *job, struct drm_msm_vm_bind *args,
+		       struct drm_file *file, int *nr_bos)
+{
+	struct drm_device *dev = job->vm->drm;
+	int ret = 0;
+	int cnt = 0;
+	int i = -1;
+
+	if (args->nr_ops == 1) {
+		/* Single op case, the op is inlined: */
+		ret = lookup_op(job, &args->op);
+	} else {
+		for (unsigned int i = 0; i < args->nr_ops; i++) {
+			struct drm_msm_vm_bind_op op;
+			void __user *userptr =
+				u64_to_user_ptr(args->ops + (i * sizeof(op)));
+
+			/* make sure we don't have garbage flags, in case we hit
+			 * error path before flags is initialized:
+			 */
+			job->ops[i].flags = 0;
+
+			if (copy_from_user(&op, userptr, sizeof(op))) {
+				ret = -EFAULT;
+				break;
+			}
+
+			ret = lookup_op(job, &op);
+			if (ret)
+				break;
+		}
+	}
+
+	if (ret) {
+		job->nr_ops = 0;
+		goto out;
+	}
+
+	spin_lock(&file->table_lock);
+
+	for (i = 0; i < args->nr_ops; i++) {
+		struct msm_vm_bind_op *op = &job->ops[i];
+		struct drm_gem_object *obj;
+
+		if (!op->handle) {
+			op->obj = NULL;
+			continue;
+		}
+
+		/*
+		 * normally use drm_gem_object_lookup(), but for bulk lookup
+		 * all under single table_lock just hit object_idr directly:
+		 */
+		obj = idr_find(&file->object_idr, op->handle);
+		if (!obj) {
+			ret = UERR(EINVAL, dev, "invalid handle %u at index %u\n", op->handle, i);
+			goto out_unlock;
+		}
+
+		drm_gem_object_get(obj);
+
+		op->obj = obj;
+		cnt++;
+
+		if ((op->range + op->obj_offset) > obj->size) {
+			ret = UERR(EINVAL, dev, "invalid range: %016llx + %016llx > %016zx\n",
+				   op->range, op->obj_offset, obj->size);
+			goto out_unlock;
+		}
+	}
+
+	*nr_bos = cnt;
+
+out_unlock:
+	spin_unlock(&file->table_lock);
+
+	if (ret) {
+		for (; i >= 0; i--) {
+			struct msm_vm_bind_op *op = &job->ops[i];
+
+			if (!op->obj)
+				continue;
+
+			drm_gem_object_put(op->obj);
+			op->obj = NULL;
+		}
+	}
+out:
+	return ret;
+}
+
+static void
+prealloc_count(struct msm_vm_bind_job *job,
+	       struct msm_vm_bind_op *first,
+	       struct msm_vm_bind_op *last)
+{
+	struct msm_mmu *mmu = to_msm_vm(job->vm)->mmu;
+
+	if (!first)
+		return;
+
+	uint64_t start_iova = first->iova;
+	uint64_t end_iova = last->iova + last->range;
+
+	mmu->funcs->prealloc_count(mmu, &job->prealloc, start_iova, end_iova - start_iova);
+}
+
+static bool
+ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next)
+{
+	/*
+	 * Last level pte covers 2MB.. so we should merge two ops, from
+	 * the PoV of figuring out how much pgtable pages to pre-allocate
+	 * if they land in the same 2MB range:
+	 */
+	uint64_t pte_mask = ~(SZ_2M - 1);
+
+	return ((first->iova + first->range) & pte_mask) == (next->iova & pte_mask);
+}
+
+/*
+ * Determine the amount of memory to prealloc for pgtables.  For sparse images,
+ * in particular, userspace plays some tricks with the order of page mappings
+ * to get the desired swizzle pattern, resulting in a large # of tiny MAP ops.
+ * So detect when multiple MAP operations are physically contiguous, and count
+ * them as a single mapping.  Otherwise the prealloc_count() will not realize
+ * they can share pagetable pages and vastly overcount.
+ */
+static int
+vm_bind_prealloc_count(struct msm_vm_bind_job *job)
+{
+	struct msm_vm_bind_op *first = NULL, *last = NULL;
+	struct msm_gem_vm *vm = to_msm_vm(job->vm);
+	int ret;
+
+	for (int i = 0; i < job->nr_ops; i++) {
+		struct msm_vm_bind_op *op = &job->ops[i];
+
+		/* We only care about MAP/MAP_NULL: */
+		if (op->op == MSM_VM_BIND_OP_UNMAP)
+			continue;
+
+		/*
+		 * If op is contiguous with last in the current range, then
+		 * it becomes the new last in the range and we continue
+		 * looping:
+		 */
+		if (last && ops_are_same_pte(last, op)) {
+			last = op;
+			continue;
+		}
+
+		/*
+		 * If op is not contiguous with the current range, flush
+		 * the current range and start anew:
+		 */
+		prealloc_count(job, first, last);
+		first = last = op;
+	}
+
+	/* Flush the remaining range: */
+	prealloc_count(job, first, last);
+
+	/*
+	 * Now that we know the needed amount to pre-alloc, throttle on pending
+	 * VM_BIND jobs if we already have too much pre-alloc memory in flight
+	 */
+	ret = wait_event_interruptible(
+			vm->prealloc_throttle.wait,
+			atomic_read(&vm->prealloc_throttle.in_flight) <= 1024);
+	if (ret)
+		return ret;
+
+	atomic_add(job->prealloc.count, &vm->prealloc_throttle.in_flight);
+
+	return 0;
+}
+
+/*
+ * Lock VM and GEM objects
+ */
+static int
+vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec)
+{
+	int ret;
+
+	/* Lock VM and objects: */
+	drm_exec_until_all_locked(exec) {
+		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(job->vm));
+		drm_exec_retry_on_contention(exec);
+		if (ret)
+			return ret;
+
+		for (unsigned int i = 0; i < job->nr_ops; i++) {
+			const struct msm_vm_bind_op *op = &job->ops[i];
+
+			switch (op->op) {
+			case MSM_VM_BIND_OP_UNMAP:
+				ret = drm_gpuvm_sm_unmap_exec_lock(job->vm, exec,
+								   op->iova,
+								   op->obj_offset);
+				break;
+			case MSM_VM_BIND_OP_MAP:
+			case MSM_VM_BIND_OP_MAP_NULL: {
+				struct drm_gpuvm_map_req map_req = {
+					.map.va.addr = op->iova,
+					.map.va.range = op->range,
+					.map.gem.obj = op->obj,
+					.map.gem.offset = op->obj_offset,
+				};
+
+				ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, &map_req);
+				break;
+			}
+			default:
+				/*
+				 * lookup_op() should have already thrown an error for
+				 * invalid ops
+				 */
+				WARN_ON("unreachable");
+			}
+
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Pin GEM objects, ensuring that we have backing pages.  Pinning will move
+ * the object to the pinned LRU so that the shrinker knows to first consider
+ * other objects for evicting.
+ */
+static int
+vm_bind_job_pin_objects(struct msm_vm_bind_job *job)
+{
+	struct drm_gem_object *obj;
+
+	/*
+	 * First loop, before holding the LRU lock, avoids holding the
+	 * LRU lock while calling msm_gem_pin_vma_locked (which could
+	 * trigger get_pages())
+	 */
+	job_foreach_bo(obj, job) {
+		struct page **pages;
+
+		pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED);
+		if (IS_ERR(pages))
+			return PTR_ERR(pages);
+	}
+
+	struct msm_drm_private *priv = job->vm->drm->dev_private;
+
+	/*
+	 * A second loop while holding the LRU lock (a) avoids acquiring/dropping
+	 * the LRU lock for each individual bo, while (b) avoiding holding the
+	 * LRU lock while calling msm_gem_pin_vma_locked() (which could trigger
+	 * get_pages() which could trigger reclaim.. and if we held the LRU lock
+	 * could trigger deadlock with the shrinker).
+	 */
+	mutex_lock(&priv->lru.lock);
+	job_foreach_bo(obj, job)
+		msm_gem_pin_obj_locked(obj);
+	mutex_unlock(&priv->lru.lock);
+
+	job->bos_pinned = true;
+
+	return 0;
+}
+
+/*
+ * Unpin GEM objects.  Normally this is done after the bind job is run.
+ */
+static void
+vm_bind_job_unpin_objects(struct msm_vm_bind_job *job)
+{
+	struct drm_gem_object *obj;
+
+	if (!job->bos_pinned)
+		return;
+
+	job_foreach_bo(obj, job)
+		msm_gem_unpin_locked(obj);
+
+	job->bos_pinned = false;
+}
+
+/*
+ * Pre-allocate pgtable memory, and translate the VM bind requests into a
+ * sequence of pgtable updates to be applied asynchronously.
+ */
+static int
+vm_bind_job_prepare(struct msm_vm_bind_job *job)
+{
+	struct msm_gem_vm *vm = to_msm_vm(job->vm);
+	struct msm_mmu *mmu = vm->mmu;
+	int ret;
+
+	ret = mmu->funcs->prealloc_allocate(mmu, &job->prealloc);
+	if (ret)
+		return ret;
+
+	for (unsigned int i = 0; i < job->nr_ops; i++) {
+		const struct msm_vm_bind_op *op = &job->ops[i];
+		struct op_arg arg = {
+			.job = job,
+			.op = op,
+		};
+
+		switch (op->op) {
+		case MSM_VM_BIND_OP_UNMAP:
+			ret = drm_gpuvm_sm_unmap(job->vm, &arg, op->iova,
+						 op->range);
+			break;
+		case MSM_VM_BIND_OP_MAP:
+			if (op->flags & MSM_VM_BIND_OP_DUMP)
+				arg.flags |= MSM_VMA_DUMP;
+			fallthrough;
+		case MSM_VM_BIND_OP_MAP_NULL: {
+			struct drm_gpuvm_map_req map_req = {
+				.map.va.addr = op->iova,
+				.map.va.range = op->range,
+				.map.gem.obj = op->obj,
+				.map.gem.offset = op->obj_offset,
+			};
+
+			ret = drm_gpuvm_sm_map(job->vm, &arg, &map_req);
+			break;
+		}
+		default:
+			/*
+			 * lookup_op() should have already thrown an error for
+			 * invalid ops
+			 */
+			BUG_ON("unreachable");
+		}
+
+		if (ret) {
+			/*
+			 * If we've already started modifying the vm, we can't
+			 * adequetly describe to userspace the intermediate
+			 * state the vm is in.  So throw up our hands!
+			 */
+			if (i > 0)
+				msm_gem_vm_unusable(job->vm);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Attach fences to the GEM objects being bound.  This will signify to
+ * the shrinker that they are busy even after dropping the locks (ie.
+ * drm_exec_fini())
+ */
+static void
+vm_bind_job_attach_fences(struct msm_vm_bind_job *job)
+{
+	for (unsigned int i = 0; i < job->nr_ops; i++) {
+		struct drm_gem_object *obj = job->ops[i].obj;
+
+		if (!obj)
+			continue;
+
+		dma_resv_add_fence(obj->resv, job->fence,
+				   DMA_RESV_USAGE_KERNEL);
+	}
+}
+
+int
+msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	struct drm_msm_vm_bind *args = data;
+	struct msm_context *ctx = file->driver_priv;
+	struct msm_vm_bind_job *job = NULL;
+	struct msm_gpu *gpu = priv->gpu;
+	struct msm_gpu_submitqueue *queue;
+	struct msm_syncobj_post_dep *post_deps = NULL;
+	struct drm_syncobj **syncobjs_to_reset = NULL;
+	struct sync_file *sync_file = NULL;
+	struct dma_fence *fence;
+	int out_fence_fd = -1;
+	int ret, nr_bos = 0;
+	unsigned int i;
+
+	if (!gpu)
+		return -ENXIO;
+
+	/*
+	 * Maybe we could allow just UNMAP ops?  OTOH userspace should just
+	 * immediately close the device file and all will be torn down.
+	 */
+	if (to_msm_vm(ctx->vm)->unusable)
+		return UERR(EPIPE, dev, "context is unusable");
+
+	/*
+	 * Technically, you cannot create a VM_BIND submitqueue in the first
+	 * place, if you haven't opted in to VM_BIND context.  But it is
+	 * cleaner / less confusing, to check this case directly.
+	 */
+	if (!msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "context does not support vmbind");
+
+	if (args->flags & ~MSM_VM_BIND_FLAGS)
+		return UERR(EINVAL, dev, "invalid flags");
+
+	queue = msm_submitqueue_get(ctx, args->queue_id);
+	if (!queue)
+		return -ENOENT;
+
+	if (!(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) {
+		ret = UERR(EINVAL, dev, "Invalid queue type");
+		goto out_post_unlock;
+	}
+
+	if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) {
+		out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
+		if (out_fence_fd < 0) {
+			ret = out_fence_fd;
+			goto out_post_unlock;
+		}
+	}
+
+	job = vm_bind_job_create(dev, file, queue, args->nr_ops);
+	if (IS_ERR(job)) {
+		ret = PTR_ERR(job);
+		goto out_post_unlock;
+	}
+
+	ret = mutex_lock_interruptible(&queue->lock);
+	if (ret)
+		goto out_post_unlock;
+
+	if (args->flags & MSM_VM_BIND_FENCE_FD_IN) {
+		struct dma_fence *in_fence;
+
+		in_fence = sync_file_get_fence(args->fence_fd);
+
+		if (!in_fence) {
+			ret = UERR(EINVAL, dev, "invalid in-fence");
+			goto out_unlock;
+		}
+
+		ret = drm_sched_job_add_dependency(&job->base, in_fence);
+		if (ret)
+			goto out_unlock;
+	}
+
+	if (args->in_syncobjs > 0) {
+		syncobjs_to_reset = msm_syncobj_parse_deps(dev, &job->base,
+							   file, args->in_syncobjs,
+							   args->nr_in_syncobjs,
+							   args->syncobj_stride);
+		if (IS_ERR(syncobjs_to_reset)) {
+			ret = PTR_ERR(syncobjs_to_reset);
+			goto out_unlock;
+		}
+	}
+
+	if (args->out_syncobjs > 0) {
+		post_deps = msm_syncobj_parse_post_deps(dev, file,
+							args->out_syncobjs,
+							args->nr_out_syncobjs,
+							args->syncobj_stride);
+		if (IS_ERR(post_deps)) {
+			ret = PTR_ERR(post_deps);
+			goto out_unlock;
+		}
+	}
+
+	ret = vm_bind_job_lookup_ops(job, args, file, &nr_bos);
+	if (ret)
+		goto out_unlock;
+
+	ret = vm_bind_prealloc_count(job);
+	if (ret)
+		goto out_unlock;
+
+	struct drm_exec exec;
+	unsigned int flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
+
+	drm_exec_init(&exec, flags, nr_bos + 1);
+
+	ret = vm_bind_job_lock_objects(job, &exec);
+	if (ret)
+		goto out;
+
+	ret = vm_bind_job_pin_objects(job);
+	if (ret)
+		goto out;
+
+	ret = vm_bind_job_prepare(job);
+	if (ret)
+		goto out;
+
+	drm_sched_job_arm(&job->base);
+
+	job->fence = dma_fence_get(&job->base.s_fence->finished);
+
+	if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) {
+		sync_file = sync_file_create(job->fence);
+		if (!sync_file)
+			ret = -ENOMEM;
+	}
+
+	if (ret)
+		goto out;
+
+	vm_bind_job_attach_fences(job);
+
+	/*
+	 * The job can be free'd (and fence unref'd) at any point after
+	 * drm_sched_entity_push_job(), so we need to hold our own ref
+	 */
+	fence = dma_fence_get(job->fence);
+
+	drm_sched_entity_push_job(&job->base);
+
+	msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs);
+	msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, fence);
+
+	dma_fence_put(fence);
+
+out:
+	if (ret)
+		vm_bind_job_unpin_objects(job);
+
+	drm_exec_fini(&exec);
+out_unlock:
+	mutex_unlock(&queue->lock);
+out_post_unlock:
+	if (ret) {
+		if (out_fence_fd >= 0)
+			put_unused_fd(out_fence_fd);
+		if (sync_file)
+			fput(sync_file->file);
+	} else if (sync_file) {
+		fd_install(out_fence_fd, sync_file->file);
+		args->fence_fd = out_fence_fd;
+	}
+
+	if (!IS_ERR_OR_NULL(job)) {
+		if (ret)
+			msm_vma_job_free(&job->base);
+	} else {
+		/*
+		 * If the submit hasn't yet taken ownership of the queue
+		 * then we need to drop the reference ourself:
+		 */
+		msm_submitqueue_put(queue);
+	}
+
+	if (!IS_ERR_OR_NULL(post_deps)) {
+		for (i = 0; i < args->nr_out_syncobjs; ++i) {
+			kfree(post_deps[i].chain);
+			drm_syncobj_put(post_deps[i].syncobj);
+		}
+		kfree(post_deps);
+	}
+
+	if (!IS_ERR_OR_NULL(syncobjs_to_reset)) {
+		for (i = 0; i < args->nr_in_syncobjs; ++i) {
+			if (syncobjs_to_reset[i])
+				drm_syncobj_put(syncobjs_to_reset[i]);
+		}
+		kfree(syncobjs_to_reset);
+	}
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 8316af1723c227f919594446c3721e1a948cbc9e..3f44d1d973137d99aa1a3d9e26739c34e1acc534 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -11,150 +11,15 @@
 
 #include "msm_drv.h"
 #include "msm_gem.h"
+#include "msm_gem_vma.h"
 #include "msm_gpu.h"
 #include "msm_mmu.h"
 #include "msm_syncobj.h"
 
-#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
-
 static uint vm_log_shift = 0;
 MODULE_PARM_DESC(vm_log_shift, "Length of VM op log");
 module_param_named(vm_log_shift, vm_log_shift, uint, 0600);
 
-/**
- * struct msm_vm_map_op - create new pgtable mapping
- */
-struct msm_vm_map_op {
-	/** @iova: start address for mapping */
-	uint64_t iova;
-	/** @range: size of the region to map */
-	uint64_t range;
-	/** @offset: offset into @sgt to map */
-	uint64_t offset;
-	/** @sgt: pages to map, or NULL for a PRR mapping */
-	struct sg_table *sgt;
-	/** @prot: the mapping protection flags */
-	int prot;
-
-	/**
-	 * @queue_id: The id of the submitqueue the operation is performed
-	 * on, or zero for (in particular) UNMAP ops triggered outside of
-	 * a submitqueue (ie. process cleanup)
-	 */
-	int queue_id;
-};
-
-/**
- * struct msm_vm_unmap_op - unmap a range of pages from pgtable
- */
-struct msm_vm_unmap_op {
-	/** @iova: start address for unmap */
-	uint64_t iova;
-	/** @range: size of region to unmap */
-	uint64_t range;
-
-	/** @reason: The reason for the unmap */
-	const char *reason;
-
-	/**
-	 * @queue_id: The id of the submitqueue the operation is performed
-	 * on, or zero for (in particular) UNMAP ops triggered outside of
-	 * a submitqueue (ie. process cleanup)
-	 */
-	int queue_id;
-};
-
-/**
- * struct msm_vma_op - A MAP or UNMAP operation
- */
-struct msm_vm_op {
-	/** @op: The operation type */
-	enum {
-		MSM_VM_OP_MAP = 1,
-		MSM_VM_OP_UNMAP,
-	} op;
-	union {
-		/** @map: Parameters used if op == MSM_VMA_OP_MAP */
-		struct msm_vm_map_op map;
-		/** @unmap: Parameters used if op == MSM_VMA_OP_UNMAP */
-		struct msm_vm_unmap_op unmap;
-	};
-	/** @node: list head in msm_vm_bind_job::vm_ops */
-	struct list_head node;
-
-	/**
-	 * @obj: backing object for pages to be mapped/unmapped
-	 *
-	 * Async unmap ops, in particular, must hold a reference to the
-	 * original GEM object backing the mapping that will be unmapped.
-	 * But the same can be required in the map path, for example if
-	 * there is not a corresponding unmap op, such as process exit.
-	 *
-	 * This ensures that the pages backing the mapping are not freed
-	 * before the mapping is torn down.
-	 */
-	struct drm_gem_object *obj;
-};
-
-/**
- * struct msm_vm_bind_job - Tracking for a VM_BIND ioctl
- *
- * A table of userspace requested VM updates (MSM_VM_BIND_OP_UNMAP/MAP/MAP_NULL)
- * gets applied to the vm, generating a list of VM ops (MSM_VM_OP_MAP/UNMAP)
- * which are applied to the pgtables asynchronously.  For example a userspace
- * requested MSM_VM_BIND_OP_MAP could end up generating both an MSM_VM_OP_UNMAP
- * to unmap an existing mapping, and a MSM_VM_OP_MAP to apply the new mapping.
- */
-struct msm_vm_bind_job {
-	/** @base: base class for drm_sched jobs */
-	struct drm_sched_job base;
-	/** @vm: The VM being operated on */
-	struct drm_gpuvm *vm;
-	/** @fence: The fence that is signaled when job completes */
-	struct dma_fence *fence;
-	/** @queue: The queue that the job runs on */
-	struct msm_gpu_submitqueue *queue;
-	/** @prealloc: Tracking for pre-allocated MMU pgtable pages */
-	struct msm_mmu_prealloc prealloc;
-	/** @vm_ops: a list of struct msm_vm_op */
-	struct list_head vm_ops;
-	/** @bos_pinned: are the GEM objects being bound pinned? */
-	bool bos_pinned;
-	/** @nr_ops: the number of userspace requested ops */
-	unsigned int nr_ops;
-	/**
-	 * @ops: the userspace requested ops
-	 *
-	 * The userspace requested ops are copied/parsed and validated
-	 * before we start applying the updates to try to do as much up-
-	 * front error checking as possible, to avoid the VM being in an
-	 * undefined state due to partially executed VM_BIND.
-	 *
-	 * This table also serves to hold a reference to the backing GEM
-	 * objects.
-	 */
-	struct msm_vm_bind_op {
-		uint32_t op;
-		uint32_t flags;
-		union {
-			struct drm_gem_object *obj;
-			uint32_t handle;
-		};
-		uint64_t obj_offset;
-		uint64_t iova;
-		uint64_t range;
-	} ops[];
-};
-
-#define job_foreach_bo(obj, _job) \
-	for (unsigned i = 0; i < (_job)->nr_ops; i++) \
-		if ((obj = (_job)->ops[i].obj))
-
-static inline struct msm_vm_bind_job *to_msm_vm_bind_job(struct drm_sched_job *job)
-{
-	return container_of(job, struct msm_vm_bind_job, base);
-}
-
 static void
 msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 {
@@ -221,49 +86,6 @@ msm_gem_vm_unusable(struct drm_gpuvm *gpuvm)
 	mutex_unlock(&vm->mmu_lock);
 }
 
-static void
-vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int queue_id)
-{
-	int idx;
-
-	if (!vm->managed)
-		lockdep_assert_held(&vm->mmu_lock);
-
-	vm_dbg("%s:%p:%d: %016llx %016llx", op, vm, queue_id, iova, iova + range);
-
-	if (!vm->log)
-		return;
-
-	idx = vm->log_idx;
-	vm->log[idx].op = op;
-	vm->log[idx].iova = iova;
-	vm->log[idx].range = range;
-	vm->log[idx].queue_id = queue_id;
-	vm->log_idx = (vm->log_idx + 1) & ((1 << vm->log_shift) - 1);
-}
-
-static void
-vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
-{
-	const char *reason = op->reason;
-
-	if (!reason)
-		reason = "unmap";
-
-	vm_log(vm, reason, op->iova, op->range, op->queue_id);
-
-	vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
-}
-
-static int
-vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
-{
-	vm_log(vm, "map", op->iova, op->range, op->queue_id);
-
-	return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset,
-				   op->range, op->prot);
-}
-
 /* Actually unmap memory for the vma */
 void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason)
 {
@@ -455,219 +277,6 @@ msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
 	return 0;
 }
 
-struct op_arg {
-	unsigned flags;
-	struct msm_vm_bind_job *job;
-	const struct msm_vm_bind_op *op;
-	bool kept;
-};
-
-static void
-vm_op_enqueue(struct op_arg *arg, struct msm_vm_op _op)
-{
-	struct msm_vm_op *op = kmalloc(sizeof(*op), GFP_KERNEL);
-	*op = _op;
-	list_add_tail(&op->node, &arg->job->vm_ops);
-
-	if (op->obj)
-		drm_gem_object_get(op->obj);
-}
-
-static struct drm_gpuva *
-vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op)
-{
-	return msm_gem_vma_new(arg->job->vm, op->gem.obj, op->gem.offset,
-			       op->va.addr, op->va.addr + op->va.range);
-}
-
-static int
-msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *_arg)
-{
-	struct op_arg *arg = _arg;
-	struct msm_vm_bind_job *job = arg->job;
-	struct drm_gem_object *obj = op->map.gem.obj;
-	struct drm_gpuva *vma;
-	struct sg_table *sgt;
-	unsigned prot;
-
-	if (arg->kept)
-		return 0;
-
-	vma = vma_from_op(arg, &op->map);
-	if (WARN_ON(IS_ERR(vma)))
-		return PTR_ERR(vma);
-
-	vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj,
-	       vma->va.addr, vma->va.range);
-
-	vma->flags = ((struct op_arg *)arg)->flags;
-
-	if (obj) {
-		sgt = to_msm_bo(obj)->sgt;
-		prot = msm_gem_prot(obj);
-	} else {
-		sgt = NULL;
-		prot = IOMMU_READ | IOMMU_WRITE;
-	}
-
-	vm_op_enqueue(arg, (struct msm_vm_op){
-		.op = MSM_VM_OP_MAP,
-		.map = {
-			.sgt = sgt,
-			.iova = vma->va.addr,
-			.range = vma->va.range,
-			.offset = vma->gem.offset,
-			.prot = prot,
-			.queue_id = job->queue->id,
-		},
-		.obj = vma->gem.obj,
-	});
-
-	to_msm_vma(vma)->mapped = true;
-
-	return 0;
-}
-
-static int
-msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg)
-{
-	struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job;
-	struct drm_gpuvm *vm = job->vm;
-	struct drm_gpuva *orig_vma = op->remap.unmap->va;
-	struct drm_gpuva *prev_vma = NULL, *next_vma = NULL;
-	struct drm_gpuvm_bo *vm_bo = orig_vma->vm_bo;
-	bool mapped = to_msm_vma(orig_vma)->mapped;
-	unsigned flags;
-
-	vm_dbg("orig_vma: %p:%p:%p: %016llx %016llx", vm, orig_vma,
-	       orig_vma->gem.obj, orig_vma->va.addr, orig_vma->va.range);
-
-	if (mapped) {
-		uint64_t unmap_start, unmap_range;
-
-		drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
-
-		vm_op_enqueue(arg, (struct msm_vm_op){
-			.op = MSM_VM_OP_UNMAP,
-			.unmap = {
-				.iova = unmap_start,
-				.range = unmap_range,
-				.queue_id = job->queue->id,
-			},
-			.obj = orig_vma->gem.obj,
-		});
-
-		/*
-		 * Part of this GEM obj is still mapped, but we're going to kill the
-		 * existing VMA and replace it with one or two new ones (ie. two if
-		 * the unmapped range is in the middle of the existing (unmap) VMA).
-		 * So just set the state to unmapped:
-		 */
-		to_msm_vma(orig_vma)->mapped = false;
-	}
-
-	/*
-	 * Hold a ref to the vm_bo between the msm_gem_vma_close() and the
-	 * creation of the new prev/next vma's, in case the vm_bo is tracked
-	 * in the VM's evict list:
-	 */
-	if (vm_bo)
-		drm_gpuvm_bo_get(vm_bo);
-
-	/*
-	 * The prev_vma and/or next_vma are replacing the unmapped vma, and
-	 * therefore should preserve it's flags:
-	 */
-	flags = orig_vma->flags;
-
-	msm_gem_vma_close(orig_vma);
-
-	if (op->remap.prev) {
-		prev_vma = vma_from_op(arg, op->remap.prev);
-		if (WARN_ON(IS_ERR(prev_vma)))
-			return PTR_ERR(prev_vma);
-
-		vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.addr, prev_vma->va.range);
-		to_msm_vma(prev_vma)->mapped = mapped;
-		prev_vma->flags = flags;
-	}
-
-	if (op->remap.next) {
-		next_vma = vma_from_op(arg, op->remap.next);
-		if (WARN_ON(IS_ERR(next_vma)))
-			return PTR_ERR(next_vma);
-
-		vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.addr, next_vma->va.range);
-		to_msm_vma(next_vma)->mapped = mapped;
-		next_vma->flags = flags;
-	}
-
-	if (!mapped)
-		drm_gpuvm_bo_evict(vm_bo, true);
-
-	/* Drop the previous ref: */
-	drm_gpuvm_bo_put(vm_bo);
-
-	return 0;
-}
-
-static int
-msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *_arg)
-{
-	struct op_arg *arg = _arg;
-	struct msm_vm_bind_job *job = arg->job;
-	struct drm_gpuva *vma = op->unmap.va;
-	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-
-	vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj,
-	       vma->va.addr, vma->va.range);
-
-	/*
-	 * Detect in-place remap.  Turnip does this to change the vma flags,
-	 * in particular MSM_VMA_DUMP.  In this case we want to avoid actually
-	 * touching the page tables, as that would require synchronization
-	 * against SUBMIT jobs running on the GPU.
-	 */
-	if (op->unmap.keep &&
-	    (arg->op->op == MSM_VM_BIND_OP_MAP) &&
-	    (vma->gem.obj == arg->op->obj) &&
-	    (vma->gem.offset == arg->op->obj_offset) &&
-	    (vma->va.addr == arg->op->iova) &&
-	    (vma->va.range == arg->op->range)) {
-		/* We are only expecting a single in-place unmap+map cb pair: */
-		WARN_ON(arg->kept);
-
-		/* Leave the existing VMA in place, but signal that to the map cb: */
-		arg->kept = true;
-
-		/* Only flags are changing, so update that in-place: */
-		unsigned orig_flags = vma->flags & (DRM_GPUVA_USERBITS - 1);
-		vma->flags = orig_flags | arg->flags;
-
-		return 0;
-	}
-
-	if (!msm_vma->mapped)
-		goto out_close;
-
-	vm_op_enqueue(arg, (struct msm_vm_op){
-		.op = MSM_VM_OP_UNMAP,
-		.unmap = {
-			.iova = vma->va.addr,
-			.range = vma->va.range,
-			.queue_id = job->queue->id,
-		},
-		.obj = vma->gem.obj,
-	});
-
-	msm_vma->mapped = false;
-
-out_close:
-	msm_gem_vma_close(vma);
-
-	return 0;
-}
-
 static const struct drm_gpuvm_ops msm_gpuvm_ops = {
 	.vm_free = msm_gem_vm_free,
 	.vm_bo_validate = msm_gem_vm_bo_validate,
@@ -676,99 +285,6 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = {
 	.sm_step_unmap = msm_gem_vm_sm_step_unmap,
 };
 
-static struct dma_fence *
-msm_vma_job_run(struct drm_sched_job *_job)
-{
-	struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job);
-	struct msm_gem_vm *vm = to_msm_vm(job->vm);
-	struct drm_gem_object *obj;
-	int ret = vm->unusable ? -EINVAL : 0;
-
-	vm_dbg("");
-
-	mutex_lock(&vm->mmu_lock);
-	vm->mmu->prealloc = &job->prealloc;
-
-	while (!list_empty(&job->vm_ops)) {
-		struct msm_vm_op *op =
-			list_first_entry(&job->vm_ops, struct msm_vm_op, node);
-
-		switch (op->op) {
-		case MSM_VM_OP_MAP:
-			/*
-			 * On error, stop trying to map new things.. but we
-			 * still want to process the unmaps (or in particular,
-			 * the drm_gem_object_put()s)
-			 */
-			if (!ret)
-				ret = vm_map_op(vm, &op->map);
-			break;
-		case MSM_VM_OP_UNMAP:
-			vm_unmap_op(vm, &op->unmap);
-			break;
-		}
-		drm_gem_object_put(op->obj);
-		list_del(&op->node);
-		kfree(op);
-	}
-
-	vm->mmu->prealloc = NULL;
-	mutex_unlock(&vm->mmu_lock);
-
-	/*
-	 * We failed to perform at least _some_ of the pgtable updates, so
-	 * now the VM is in an undefined state.  Game over!
-	 */
-	if (ret)
-		msm_gem_vm_unusable(job->vm);
-
-	job_foreach_bo (obj, job) {
-		msm_gem_lock(obj);
-		msm_gem_unpin_locked(obj);
-		msm_gem_unlock(obj);
-	}
-
-	/* VM_BIND ops are synchronous, so no fence to wait on: */
-	return NULL;
-}
-
-static void
-msm_vma_job_free(struct drm_sched_job *_job)
-{
-	struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job);
-	struct msm_gem_vm *vm = to_msm_vm(job->vm);
-	struct drm_gem_object *obj;
-
-	vm->mmu->funcs->prealloc_cleanup(vm->mmu, &job->prealloc);
-
-	atomic_sub(job->prealloc.count, &vm->prealloc_throttle.in_flight);
-
-	drm_sched_job_cleanup(_job);
-
-	job_foreach_bo (obj, job)
-		drm_gem_object_put(obj);
-
-	msm_submitqueue_put(job->queue);
-	dma_fence_put(job->fence);
-
-	/* In error paths, we could have unexecuted ops: */
-	while (!list_empty(&job->vm_ops)) {
-		struct msm_vm_op *op =
-			list_first_entry(&job->vm_ops, struct msm_vm_op, node);
-		list_del(&op->node);
-		kfree(op);
-	}
-
-	wake_up(&vm->prealloc_throttle.wait);
-
-	kfree(job);
-}
-
-static const struct drm_sched_backend_ops msm_vm_bind_ops = {
-	.run_job = msm_vma_job_run,
-	.free_job = msm_vma_job_free
-};
-
 /**
  * msm_gem_vm_create() - Create and initialize a &msm_gem_vm
  * @drm: the drm device
@@ -811,20 +327,9 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 	}
 
 	if (!managed) {
-		struct drm_sched_init_args args = {
-			.ops = &msm_vm_bind_ops,
-			.num_rqs = 1,
-			.credit_limit = 1,
-			.timeout = MAX_SCHEDULE_TIMEOUT,
-			.name = "msm-vm-bind",
-			.dev = drm->dev,
-		};
-
-		ret = drm_sched_init(&vm->sched, &args);
+		ret = msm_gem_vm_sched_init(vm, drm);
 		if (ret)
 			goto err_free_dummy;
-
-		init_waitqueue_head(&vm->prealloc_throttle.wait);
 	}
 
 	drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
@@ -889,9 +394,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm)
 	if (vm->last_fence)
 		dma_fence_wait(vm->last_fence, false);
 
-	/* Kill the scheduler now, so we aren't racing with it for cleanup: */
-	drm_sched_stop(&vm->sched, NULL);
-	drm_sched_fini(&vm->sched);
+	msm_gem_vm_sched_fini(vm);
 
 	/* Tear down any remaining mappings: */
 	drm_exec_init(&exec, 0, 2);
@@ -924,677 +427,3 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm)
 	}
 	drm_exec_fini(&exec);
 }
-
-
-static struct msm_vm_bind_job *
-vm_bind_job_create(struct drm_device *dev, struct drm_file *file,
-		   struct msm_gpu_submitqueue *queue, uint32_t nr_ops)
-{
-	struct msm_vm_bind_job *job;
-	uint64_t sz;
-	int ret;
-
-	sz = struct_size(job, ops, nr_ops);
-
-	if (sz > SIZE_MAX)
-		return ERR_PTR(-ENOMEM);
-
-	job = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN);
-	if (!job)
-		return ERR_PTR(-ENOMEM);
-
-	ret = drm_sched_job_init(&job->base, queue->entity, 1, queue,
-				 file->client_id);
-	if (ret) {
-		kfree(job);
-		return ERR_PTR(ret);
-	}
-
-	job->vm = msm_context_vm(dev, queue->ctx);
-	job->queue = queue;
-	INIT_LIST_HEAD(&job->vm_ops);
-
-	return job;
-}
-
-static bool invalid_alignment(uint64_t addr)
-{
-	/*
-	 * Technically this is about GPU alignment, not CPU alignment.  But
-	 * I've not seen any qcom SoC where the SMMU does not support the
-	 * CPU's smallest page size.
- */ - return !PAGE_ALIGNED(addr); -} - -static int -lookup_op(struct msm_vm_bind_job *job, const struct drm_msm_vm_bind_op *op) -{ - struct drm_device *dev =3D job->vm->drm; - int i =3D job->nr_ops++; - int ret =3D 0; - - job->ops[i].op =3D op->op; - job->ops[i].handle =3D op->handle; - job->ops[i].obj_offset =3D op->obj_offset; - job->ops[i].iova =3D op->iova; - job->ops[i].range =3D op->range; - job->ops[i].flags =3D op->flags; - - if (op->flags & ~MSM_VM_BIND_OP_FLAGS) - ret =3D UERR(EINVAL, dev, "invalid flags: %x\n", op->flags); - - if (invalid_alignment(op->iova)) - ret =3D UERR(EINVAL, dev, "invalid address: %016llx\n", op->iova); - - if (invalid_alignment(op->obj_offset)) - ret =3D UERR(EINVAL, dev, "invalid bo_offset: %016llx\n", op->obj_offset= ); - - if (invalid_alignment(op->range)) - ret =3D UERR(EINVAL, dev, "invalid range: %016llx\n", op->range); - - if (!drm_gpuvm_range_valid(job->vm, op->iova, op->range)) - ret =3D UERR(EINVAL, dev, "invalid range: %016llx, %016llx\n", op->iova,= op->range); - - /* - * MAP must specify a valid handle. But the handle MBZ for - * UNMAP or MAP_NULL. - */ - if (op->op =3D=3D MSM_VM_BIND_OP_MAP) { - if (!op->handle) - ret =3D UERR(EINVAL, dev, "invalid handle\n"); - } else if (op->handle) { - ret =3D UERR(EINVAL, dev, "handle must be zero\n"); - } - - switch (op->op) { - case MSM_VM_BIND_OP_MAP: - case MSM_VM_BIND_OP_MAP_NULL: - case MSM_VM_BIND_OP_UNMAP: - break; - default: - ret =3D UERR(EINVAL, dev, "invalid op: %u\n", op->op); - break; - } - - return ret; -} - -/* - * ioctl parsing, parameter validation, and GEM handle lookup - */ -static int -vm_bind_job_lookup_ops(struct msm_vm_bind_job *job, struct drm_msm_vm_bind= *args, - struct drm_file *file, int *nr_bos) -{ - struct drm_device *dev =3D job->vm->drm; - int ret =3D 0; - int cnt =3D 0; - int i =3D -1; - - if (args->nr_ops =3D=3D 1) { - /* Single op case, the op is inlined: */ - ret =3D lookup_op(job, &args->op); - } else { - for (unsigned i =3D 0; i < args->nr_ops; i++) { - struct drm_msm_vm_bind_op op; - void __user *userptr =3D - u64_to_user_ptr(args->ops + (i * sizeof(op))); - - /* make sure we don't have garbage flags, in case we hit - * error path before flags is initialized: - */ - job->ops[i].flags =3D 0; - - if (copy_from_user(&op, userptr, sizeof(op))) { - ret =3D -EFAULT; - break; - } - - ret =3D lookup_op(job, &op); - if (ret) - break; - } - } - - if (ret) { - job->nr_ops =3D 0; - goto out; - } - - spin_lock(&file->table_lock); - - for (i =3D 0; i < args->nr_ops; i++) { - struct msm_vm_bind_op *op =3D &job->ops[i]; - struct drm_gem_object *obj; - - if (!op->handle) { - op->obj =3D NULL; - continue; - } - - /* - * normally use drm_gem_object_lookup(), but for bulk lookup - * all under single table_lock just hit object_idr directly: - */ - obj =3D idr_find(&file->object_idr, op->handle); - if (!obj) { - ret =3D UERR(EINVAL, dev, "invalid handle %u at index %u\n", op->handle= , i); - goto out_unlock; - } - - drm_gem_object_get(obj); - - op->obj =3D obj; - cnt++; - - if ((op->range + op->obj_offset) > obj->size) { - ret =3D UERR(EINVAL, dev, "invalid range: %016llx + %016llx > %016zx\n", - op->range, op->obj_offset, obj->size); - goto out_unlock; - } - } - - *nr_bos =3D cnt; - -out_unlock: - spin_unlock(&file->table_lock); - - if (ret) { - for (; i >=3D 0; i--) { - struct msm_vm_bind_op *op =3D &job->ops[i]; - - if (!op->obj) - continue; - - drm_gem_object_put(op->obj); - op->obj =3D NULL; - } - } -out: - return ret; -} - -static void -prealloc_count(struct msm_vm_bind_job 
*job, - struct msm_vm_bind_op *first, - struct msm_vm_bind_op *last) -{ - struct msm_mmu *mmu =3D to_msm_vm(job->vm)->mmu; - - if (!first) - return; - - uint64_t start_iova =3D first->iova; - uint64_t end_iova =3D last->iova + last->range; - - mmu->funcs->prealloc_count(mmu, &job->prealloc, start_iova, end_iova - st= art_iova); -} - -static bool -ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next) -{ - /* - * Last level pte covers 2MB.. so we should merge two ops, from - * the PoV of figuring out how much pgtable pages to pre-allocate - * if they land in the same 2MB range: - */ - uint64_t pte_mask =3D ~(SZ_2M - 1); - return ((first->iova + first->range) & pte_mask) =3D=3D (next->iova & pte= _mask); -} - -/* - * Determine the amount of memory to prealloc for pgtables. For sparse im= ages, - * in particular, userspace plays some tricks with the order of page mappi= ngs - * to get the desired swizzle pattern, resulting in a large # of tiny MAP = ops. - * So detect when multiple MAP operations are physically contiguous, and c= ount - * them as a single mapping. Otherwise the prealloc_count() will not real= ize - * they can share pagetable pages and vastly overcount. - */ -static int -vm_bind_prealloc_count(struct msm_vm_bind_job *job) -{ - struct msm_vm_bind_op *first =3D NULL, *last =3D NULL; - struct msm_gem_vm *vm =3D to_msm_vm(job->vm); - int ret; - - for (int i =3D 0; i < job->nr_ops; i++) { - struct msm_vm_bind_op *op =3D &job->ops[i]; - - /* We only care about MAP/MAP_NULL: */ - if (op->op =3D=3D MSM_VM_BIND_OP_UNMAP) - continue; - - /* - * If op is contiguous with last in the current range, then - * it becomes the new last in the range and we continue - * looping: - */ - if (last && ops_are_same_pte(last, op)) { - last =3D op; - continue; - } - - /* - * If op is not contiguous with the current range, flush - * the current range and start anew: - */ - prealloc_count(job, first, last); - first =3D last =3D op; - } - - /* Flush the remaining range: */ - prealloc_count(job, first, last); - - /* - * Now that we know the needed amount to pre-alloc, throttle on pending - * VM_BIND jobs if we already have too much pre-alloc memory in flight - */ - ret =3D wait_event_interruptible( - vm->prealloc_throttle.wait, - atomic_read(&vm->prealloc_throttle.in_flight) <=3D 1024); - if (ret) - return ret; - - atomic_add(job->prealloc.count, &vm->prealloc_throttle.in_flight); - - return 0; -} - -/* - * Lock VM and GEM objects - */ -static int -vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exe= c) -{ - int ret; - - /* Lock VM and objects: */ - drm_exec_until_all_locked (exec) { - ret =3D drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(job->vm)); - drm_exec_retry_on_contention(exec); - if (ret) - return ret; - - for (unsigned i =3D 0; i < job->nr_ops; i++) { - const struct msm_vm_bind_op *op =3D &job->ops[i]; - - switch (op->op) { - case MSM_VM_BIND_OP_UNMAP: - ret =3D drm_gpuvm_sm_unmap_exec_lock(job->vm, exec, - op->iova, - op->obj_offset); - break; - case MSM_VM_BIND_OP_MAP: - case MSM_VM_BIND_OP_MAP_NULL: { - struct drm_gpuvm_map_req map_req =3D { - .map.va.addr =3D op->iova, - .map.va.range =3D op->range, - .map.gem.obj =3D op->obj, - .map.gem.offset =3D op->obj_offset, - }; - - ret =3D drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, &map_req); - break; - } - default: - /* - * lookup_op() should have already thrown an error for - * invalid ops - */ - WARN_ON("unreachable"); - } - - drm_exec_retry_on_contention(exec); - if (ret) - return ret; - } - } - - return 
0; -} - -/* - * Pin GEM objects, ensuring that we have backing pages. Pinning will move - * the object to the pinned LRU so that the shrinker knows to first consid= er - * other objects for evicting. - */ -static int -vm_bind_job_pin_objects(struct msm_vm_bind_job *job) -{ - struct drm_gem_object *obj; - - /* - * First loop, before holding the LRU lock, avoids holding the - * LRU lock while calling msm_gem_pin_vma_locked (which could - * trigger get_pages()) - */ - job_foreach_bo (obj, job) { - struct page **pages; - - pages =3D msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); - if (IS_ERR(pages)) - return PTR_ERR(pages); - } - - struct msm_drm_private *priv =3D job->vm->drm->dev_private; - - /* - * A second loop while holding the LRU lock (a) avoids acquiring/dropping - * the LRU lock for each individual bo, while (b) avoiding holding the - * LRU lock while calling msm_gem_pin_vma_locked() (which could trigger - * get_pages() which could trigger reclaim.. and if we held the LRU lock - * could trigger deadlock with the shrinker). - */ - mutex_lock(&priv->lru.lock); - job_foreach_bo (obj, job) - msm_gem_pin_obj_locked(obj); - mutex_unlock(&priv->lru.lock); - - job->bos_pinned =3D true; - - return 0; -} - -/* - * Unpin GEM objects. Normally this is done after the bind job is run. - */ -static void -vm_bind_job_unpin_objects(struct msm_vm_bind_job *job) -{ - struct drm_gem_object *obj; - - if (!job->bos_pinned) - return; - - job_foreach_bo (obj, job) - msm_gem_unpin_locked(obj); - - job->bos_pinned =3D false; -} - -/* - * Pre-allocate pgtable memory, and translate the VM bind requests into a - * sequence of pgtable updates to be applied asynchronously. - */ -static int -vm_bind_job_prepare(struct msm_vm_bind_job *job) -{ - struct msm_gem_vm *vm =3D to_msm_vm(job->vm); - struct msm_mmu *mmu =3D vm->mmu; - int ret; - - ret =3D mmu->funcs->prealloc_allocate(mmu, &job->prealloc); - if (ret) - return ret; - - for (unsigned i =3D 0; i < job->nr_ops; i++) { - const struct msm_vm_bind_op *op =3D &job->ops[i]; - struct op_arg arg =3D { - .job =3D job, - .op =3D op, - }; - - switch (op->op) { - case MSM_VM_BIND_OP_UNMAP: - ret =3D drm_gpuvm_sm_unmap(job->vm, &arg, op->iova, - op->range); - break; - case MSM_VM_BIND_OP_MAP: - if (op->flags & MSM_VM_BIND_OP_DUMP) - arg.flags |=3D MSM_VMA_DUMP; - fallthrough; - case MSM_VM_BIND_OP_MAP_NULL: { - struct drm_gpuvm_map_req map_req =3D { - .map.va.addr =3D op->iova, - .map.va.range =3D op->range, - .map.gem.obj =3D op->obj, - .map.gem.offset =3D op->obj_offset, - }; - - ret =3D drm_gpuvm_sm_map(job->vm, &arg, &map_req); - break; - } - default: - /* - * lookup_op() should have already thrown an error for - * invalid ops - */ - BUG_ON("unreachable"); - } - - if (ret) { - /* - * If we've already started modifying the vm, we can't - * adequetly describe to userspace the intermediate - * state the vm is in. So throw up our hands! - */ - if (i > 0) - msm_gem_vm_unusable(job->vm); - return ret; - } - } - - return 0; -} - -/* - * Attach fences to the GEM objects being bound. This will signify to - * the shrinker that they are busy even after dropping the locks (ie. 
- * drm_exec_fini()) - */ -static void -vm_bind_job_attach_fences(struct msm_vm_bind_job *job) -{ - for (unsigned i =3D 0; i < job->nr_ops; i++) { - struct drm_gem_object *obj =3D job->ops[i].obj; - - if (!obj) - continue; - - dma_resv_add_fence(obj->resv, job->fence, - DMA_RESV_USAGE_KERNEL); - } -} - -int -msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *fil= e) -{ - struct msm_drm_private *priv =3D dev->dev_private; - struct drm_msm_vm_bind *args =3D data; - struct msm_context *ctx =3D file->driver_priv; - struct msm_vm_bind_job *job =3D NULL; - struct msm_gpu *gpu =3D priv->gpu; - struct msm_gpu_submitqueue *queue; - struct msm_syncobj_post_dep *post_deps =3D NULL; - struct drm_syncobj **syncobjs_to_reset =3D NULL; - struct sync_file *sync_file =3D NULL; - struct dma_fence *fence; - int out_fence_fd =3D -1; - int ret, nr_bos =3D 0; - unsigned i; - - if (!gpu) - return -ENXIO; - - /* - * Maybe we could allow just UNMAP ops? OTOH userspace should just - * immediately close the device file and all will be torn down. - */ - if (to_msm_vm(ctx->vm)->unusable) - return UERR(EPIPE, dev, "context is unusable"); - - /* - * Technically, you cannot create a VM_BIND submitqueue in the first - * place, if you haven't opted in to VM_BIND context. But it is - * cleaner / less confusing, to check this case directly. - */ - if (!msm_context_is_vmbind(ctx)) - return UERR(EINVAL, dev, "context does not support vmbind"); - - if (args->flags & ~MSM_VM_BIND_FLAGS) - return UERR(EINVAL, dev, "invalid flags"); - - queue =3D msm_submitqueue_get(ctx, args->queue_id); - if (!queue) - return -ENOENT; - - if (!(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) { - ret =3D UERR(EINVAL, dev, "Invalid queue type"); - goto out_post_unlock; - } - - if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { - out_fence_fd =3D get_unused_fd_flags(O_CLOEXEC); - if (out_fence_fd < 0) { - ret =3D out_fence_fd; - goto out_post_unlock; - } - } - - job =3D vm_bind_job_create(dev, file, queue, args->nr_ops); - if (IS_ERR(job)) { - ret =3D PTR_ERR(job); - goto out_post_unlock; - } - - ret =3D mutex_lock_interruptible(&queue->lock); - if (ret) - goto out_post_unlock; - - if (args->flags & MSM_VM_BIND_FENCE_FD_IN) { - struct dma_fence *in_fence; - - in_fence =3D sync_file_get_fence(args->fence_fd); - - if (!in_fence) { - ret =3D UERR(EINVAL, dev, "invalid in-fence"); - goto out_unlock; - } - - ret =3D drm_sched_job_add_dependency(&job->base, in_fence); - if (ret) - goto out_unlock; - } - - if (args->in_syncobjs > 0) { - syncobjs_to_reset =3D msm_syncobj_parse_deps(dev, &job->base, - file, args->in_syncobjs, - args->nr_in_syncobjs, - args->syncobj_stride); - if (IS_ERR(syncobjs_to_reset)) { - ret =3D PTR_ERR(syncobjs_to_reset); - goto out_unlock; - } - } - - if (args->out_syncobjs > 0) { - post_deps =3D msm_syncobj_parse_post_deps(dev, file, - args->out_syncobjs, - args->nr_out_syncobjs, - args->syncobj_stride); - if (IS_ERR(post_deps)) { - ret =3D PTR_ERR(post_deps); - goto out_unlock; - } - } - - ret =3D vm_bind_job_lookup_ops(job, args, file, &nr_bos); - if (ret) - goto out_unlock; - - ret =3D vm_bind_prealloc_count(job); - if (ret) - goto out_unlock; - - struct drm_exec exec; - unsigned flags =3D DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WA= IT; - drm_exec_init(&exec, flags, nr_bos + 1); - - ret =3D vm_bind_job_lock_objects(job, &exec); - if (ret) - goto out; - - ret =3D vm_bind_job_pin_objects(job); - if (ret) - goto out; - - ret =3D vm_bind_job_prepare(job); - if (ret) - goto out; - - drm_sched_job_arm(&job->base); - 
- job->fence =3D dma_fence_get(&job->base.s_fence->finished); - - if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { - sync_file =3D sync_file_create(job->fence); - if (!sync_file) - ret =3D -ENOMEM; - } - - if (ret) - goto out; - - vm_bind_job_attach_fences(job); - - /* - * The job can be free'd (and fence unref'd) at any point after - * drm_sched_entity_push_job(), so we need to hold our own ref - */ - fence =3D dma_fence_get(job->fence); - - drm_sched_entity_push_job(&job->base); - - msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs); - msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, fence); - - dma_fence_put(fence); - -out: - if (ret) - vm_bind_job_unpin_objects(job); - - drm_exec_fini(&exec); -out_unlock: - mutex_unlock(&queue->lock); -out_post_unlock: - if (ret) { - if (out_fence_fd >=3D 0) - put_unused_fd(out_fence_fd); - if (sync_file) - fput(sync_file->file); - } else if (sync_file) { - fd_install(out_fence_fd, sync_file->file); - args->fence_fd =3D out_fence_fd; - } - - if (!IS_ERR_OR_NULL(job)) { - if (ret) - msm_vma_job_free(&job->base); - } else { - /* - * If the submit hasn't yet taken ownership of the queue - * then we need to drop the reference ourself: - */ - msm_submitqueue_put(queue); - } - - if (!IS_ERR_OR_NULL(post_deps)) { - for (i =3D 0; i < args->nr_out_syncobjs; ++i) { - kfree(post_deps[i].chain); - drm_syncobj_put(post_deps[i].syncobj); - } - kfree(post_deps); - } - - if (!IS_ERR_OR_NULL(syncobjs_to_reset)) { - for (i =3D 0; i < args->nr_in_syncobjs; ++i) { - if (syncobjs_to_reset[i]) - drm_syncobj_put(syncobjs_to_reset[i]); - } - kfree(syncobjs_to_reset); - } - - return ret; -} diff --git a/drivers/gpu/drm/msm/msm_gem_vma.h b/drivers/gpu/drm/msm/msm_ge= m_vma.h new file mode 100644 index 0000000000000000000000000000000000000000..f702f81529e72b86bffb4960408= f1912bc65851a --- /dev/null +++ b/drivers/gpu/drm/msm/msm_gem_vma.h @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2016 Red Hat + * Author: Rob Clark + */ + +#ifndef _MSM_GEM_VMA_H_ +#define _MSM_GEM_VMA_H_ + +#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##= __VA_ARGS__) + +/** + * struct msm_vm_map_op - create new pgtable mapping + */ +struct msm_vm_map_op { + /** @iova: start address for mapping */ + uint64_t iova; + /** @range: size of the region to map */ + uint64_t range; + /** @offset: offset into @sgt to map */ + uint64_t offset; + /** @sgt: pages to map, or NULL for a PRR mapping */ + struct sg_table *sgt; + /** @prot: the mapping protection flags */ + int prot; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. process cleanup) + */ + int queue_id; +}; + +/** + * struct msm_vm_unmap_op - unmap a range of pages from pgtable + */ +struct msm_vm_unmap_op { + /** @iova: start address for unmap */ + uint64_t iova; + /** @range: size of region to unmap */ + uint64_t range; + + /** @reason: The reason for the unmap */ + const char *reason; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. 
process cleanup)
+ */
+ int queue_id;
+};
+
+static void
+vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int queue_id)
+{
+ int idx;
+
+ if (!vm->managed)
+ lockdep_assert_held(&vm->mmu_lock);
+
+ vm_dbg("%s:%p:%d: %016llx %016llx", op, vm, queue_id, iova, iova + range);
+
+ if (!vm->log)
+ return;
+
+ idx = vm->log_idx;
+ vm->log[idx].op = op;
+ vm->log[idx].iova = iova;
+ vm->log[idx].range = range;
+ vm->log[idx].queue_id = queue_id;
+ vm->log_idx = (vm->log_idx + 1) & ((1 << vm->log_shift) - 1);
+}
+
+static void
+vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
+{
+ const char *reason = op->reason;
+
+ if (!reason)
+ reason = "unmap";
+
+ vm_log(vm, reason, op->iova, op->range, op->queue_id);
+
+ vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
+}
+
+static int
+vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
+{
+ vm_log(vm, "map", op->iova, op->range, op->queue_id);
+
+ return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset,
+ op->range, op->prot);
+}
+
+int msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *_arg);
+int msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg);
+int msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *_arg);
+
+int msm_gem_vm_sched_init(struct msm_gem_vm *vm, struct drm_device *drm);
+void msm_gem_vm_sched_fini(struct msm_gem_vm *vm);
+
+#endif
-- 
2.47.3
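An aside on the log indexing in vm_log() above: because the log holds 1 << log_shift entries, i.e. a power-of-two count, the wrap-around is a single AND with (size - 1) rather than a modulo. A minimal standalone sketch of the same scheme follows; the names and the size are illustrative, not the driver's.

#include <stdint.h>

#define LOG_SHIFT 4			/* 1 << 4 == 16 entries; any power of two works */
#define LOG_SIZE (1u << LOG_SHIFT)

struct log_entry {
	const char *op;
	uint64_t iova;
	uint64_t range;
};

static struct log_entry ring[LOG_SIZE];
static unsigned int ring_idx;

/* Overwrites the oldest entry once the ring is full. */
static void ring_push(const char *op, uint64_t iova, uint64_t range)
{
	ring[ring_idx].op = op;
	ring[ring_idx].iova = iova;
	ring[ring_idx].range = range;
	/* (i + 1) & (size - 1) == (i + 1) % size when size is a power of two */
	ring_idx = (ring_idx + 1) & (LOG_SIZE - 1);
}

A dump of the history would start at ring_idx, which is the oldest slot once the ring has wrapped, and walk LOG_SIZE entries forward.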
From nobody Thu Oct 2 10:57:10 2025
From: Dmitry Baryshkov
Date: Thu, 18 Sep 2025 06:50:24 +0300
Subject: [PATCH v5 3/5] drm/msm: split away IOCTLs implementation
Message-Id: <20250918-msm-gpu-split-v5-3-44486f44d27d@oss.qualcomm.com>
References: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>
In-Reply-To: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>
To: Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal, Christian König, Konrad Dybcio
Cc: linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
X-Mailer: b4 0.14.2

The IOCTL interface is only used for interfacing with the GPU parts of
the driver. In preparation for disabling GPU functionality, split the
MSM IOCTLs into a separate source file.
Signed-off-by: Dmitry Baryshkov --- drivers/gpu/drm/msm/Makefile | 1 + drivers/gpu/drm/msm/msm_drv.c | 489 +-----------------------------------= ---- drivers/gpu/drm/msm/msm_ioctl.c | 484 ++++++++++++++++++++++++++++++++++++= +++ drivers/gpu/drm/msm/msm_ioctl.h | 37 +++ 4 files changed, 523 insertions(+), 488 deletions(-) diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index d7876c154b0aa2cb0164c4b1fb7900b1a42db46b..0ac977a6ed01d91111d706995f3= 41ced29f5ca8d 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -119,6 +119,7 @@ msm-y +=3D \ msm_gpu.o \ msm_gpu_devfreq.o \ msm_io_utils.o \ + msm_ioctl.o \ msm_iommu.o \ msm_perf.o \ msm_rd.o \ diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 06ab78e1a2c583352c08a62e6cf250bacde9b75b..ba984cc71d1d3aa341e0f4532b7= 093adcd25d3b0 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -8,8 +8,6 @@ #include #include #include -#include -#include =20 #include #include @@ -18,8 +16,8 @@ =20 #include "msm_drv.h" #include "msm_debugfs.h" -#include "msm_gem.h" #include "msm_gpu.h" +#include "msm_ioctl.h" #include "msm_kms.h" =20 /* @@ -296,491 +294,6 @@ static void msm_postclose(struct drm_device *dev, str= uct drm_file *file) context_close(ctx); } =20 -/* - * DRM ioctls: - */ - -static int msm_ioctl_get_param(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct msm_drm_private *priv =3D dev->dev_private; - struct drm_msm_param *args =3D data; - struct msm_gpu *gpu; - - /* for now, we just have 3d pipe.. eventually this would need to - * be more clever to dispatch to appropriate gpu module: - */ - if ((args->pipe !=3D MSM_PIPE_3D0) || (args->pad !=3D 0)) - return -EINVAL; - - gpu =3D priv->gpu; - - if (!gpu) - return -ENXIO; - - return gpu->funcs->get_param(gpu, file->driver_priv, - args->param, &args->value, &args->len); -} - -static int msm_ioctl_set_param(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct msm_drm_private *priv =3D dev->dev_private; - struct drm_msm_param *args =3D data; - struct msm_gpu *gpu; - - if ((args->pipe !=3D MSM_PIPE_3D0) || (args->pad !=3D 0)) - return -EINVAL; - - gpu =3D priv->gpu; - - if (!gpu) - return -ENXIO; - - return gpu->funcs->set_param(gpu, file->driver_priv, - args->param, args->value, args->len); -} - -static int msm_ioctl_gem_new(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct drm_msm_gem_new *args =3D data; - uint32_t flags =3D args->flags; - - if (args->flags & ~MSM_BO_FLAGS) { - DRM_ERROR("invalid flags: %08x\n", args->flags); - return -EINVAL; - } - - /* - * Uncached CPU mappings are deprecated, as of: - * - * 9ef364432db4 ("drm/msm: deprecate MSM_BO_UNCACHED (map as writecombine= instead)") - * - * So promote them to WC. 
- */ - if (flags & MSM_BO_UNCACHED) { - flags &=3D ~MSM_BO_CACHED; - flags |=3D MSM_BO_WC; - } - - if (should_fail(&fail_gem_alloc, args->size)) - return -ENOMEM; - - return msm_gem_new_handle(dev, file, args->size, - args->flags, &args->handle, NULL); -} - -static inline ktime_t to_ktime(struct drm_msm_timespec timeout) -{ - return ktime_set(timeout.tv_sec, timeout.tv_nsec); -} - -static int msm_ioctl_gem_cpu_prep(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct drm_msm_gem_cpu_prep *args =3D data; - struct drm_gem_object *obj; - ktime_t timeout =3D to_ktime(args->timeout); - int ret; - - if (args->op & ~MSM_PREP_FLAGS) { - DRM_ERROR("invalid op: %08x\n", args->op); - return -EINVAL; - } - - obj =3D drm_gem_object_lookup(file, args->handle); - if (!obj) - return -ENOENT; - - ret =3D msm_gem_cpu_prep(obj, args->op, &timeout); - - drm_gem_object_put(obj); - - return ret; -} - -static int msm_ioctl_gem_cpu_fini(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct drm_msm_gem_cpu_fini *args =3D data; - struct drm_gem_object *obj; - int ret; - - obj =3D drm_gem_object_lookup(file, args->handle); - if (!obj) - return -ENOENT; - - ret =3D msm_gem_cpu_fini(obj); - - drm_gem_object_put(obj); - - return ret; -} - -static int msm_ioctl_gem_info_iova(struct drm_device *dev, - struct drm_file *file, struct drm_gem_object *obj, - uint64_t *iova) -{ - struct msm_drm_private *priv =3D dev->dev_private; - struct msm_context *ctx =3D file->driver_priv; - - if (!priv->gpu) - return -EINVAL; - - if (msm_context_is_vmbind(ctx)) - return UERR(EINVAL, dev, "VM_BIND is enabled"); - - if (should_fail(&fail_gem_iova, obj->size)) - return -ENOMEM; - - /* - * Don't pin the memory here - just get an address so that userspace can - * be productive - */ - return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova); -} - -static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, - struct drm_file *file, struct drm_gem_object *obj, - uint64_t iova) -{ - struct msm_drm_private *priv =3D dev->dev_private; - struct msm_context *ctx =3D file->driver_priv; - struct drm_gpuvm *vm =3D msm_context_vm(dev, ctx); - - if (!priv->gpu) - return -EINVAL; - - if (msm_context_is_vmbind(ctx)) - return UERR(EINVAL, dev, "VM_BIND is enabled"); - - /* Only supported if per-process address space is supported: */ - if (priv->gpu->vm =3D=3D vm) - return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); - - if (should_fail(&fail_gem_iova, obj->size)) - return -ENOMEM; - - return msm_gem_set_iova(obj, vm, iova); -} - -static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, - __user void *metadata, - u32 metadata_size) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - void *new_metadata; - void *buf; - int ret; - - /* Impose a moderate upper bound on metadata size: */ - if (metadata_size > 128) { - return -EOVERFLOW; - } - - /* Use a temporary buf to keep copy_from_user() outside of gem obj lock: = */ - buf =3D memdup_user(metadata, metadata_size); - if (IS_ERR(buf)) - return PTR_ERR(buf); - - ret =3D msm_gem_lock_interruptible(obj); - if (ret) - goto out; - - new_metadata =3D - krealloc(msm_obj->metadata, metadata_size, GFP_KERNEL); - if (!new_metadata) { - ret =3D -ENOMEM; - goto out; - } - - msm_obj->metadata =3D new_metadata; - msm_obj->metadata_size =3D metadata_size; - memcpy(msm_obj->metadata, buf, metadata_size); - - msm_gem_unlock(obj); - -out: - kfree(buf); - - return ret; -} - -static int msm_ioctl_gem_info_get_metadata(struct drm_gem_object *obj, - __user 
void *metadata, - u32 *metadata_size) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - void *buf; - int ret, len; - - if (!metadata) { - /* - * Querying the size is inherently racey, but - * EXT_external_objects expects the app to confirm - * via device and driver UUIDs that the exporter and - * importer versions match. All we can do from the - * kernel side is check the length under obj lock - * when userspace tries to retrieve the metadata - */ - *metadata_size =3D msm_obj->metadata_size; - return 0; - } - - ret =3D msm_gem_lock_interruptible(obj); - if (ret) - return ret; - - /* Avoid copy_to_user() under gem obj lock: */ - len =3D msm_obj->metadata_size; - buf =3D kmemdup(msm_obj->metadata, len, GFP_KERNEL); - - msm_gem_unlock(obj); - - if (*metadata_size < len) { - ret =3D -ETOOSMALL; - } else if (copy_to_user(metadata, buf, len)) { - ret =3D -EFAULT; - } else { - *metadata_size =3D len; - } - - kfree(buf); - - return 0; -} - -static int msm_ioctl_gem_info(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct drm_msm_gem_info *args =3D data; - struct drm_gem_object *obj; - struct msm_gem_object *msm_obj; - int i, ret =3D 0; - - if (args->pad) - return -EINVAL; - - switch (args->info) { - case MSM_INFO_GET_OFFSET: - case MSM_INFO_GET_IOVA: - case MSM_INFO_SET_IOVA: - case MSM_INFO_GET_FLAGS: - /* value returned as immediate, not pointer, so len=3D=3D0: */ - if (args->len) - return -EINVAL; - break; - case MSM_INFO_SET_NAME: - case MSM_INFO_GET_NAME: - case MSM_INFO_SET_METADATA: - case MSM_INFO_GET_METADATA: - break; - default: - return -EINVAL; - } - - obj =3D drm_gem_object_lookup(file, args->handle); - if (!obj) - return -ENOENT; - - msm_obj =3D to_msm_bo(obj); - - switch (args->info) { - case MSM_INFO_GET_OFFSET: - args->value =3D msm_gem_mmap_offset(obj); - break; - case MSM_INFO_GET_IOVA: - ret =3D msm_ioctl_gem_info_iova(dev, file, obj, &args->value); - break; - case MSM_INFO_SET_IOVA: - ret =3D msm_ioctl_gem_info_set_iova(dev, file, obj, args->value); - break; - case MSM_INFO_GET_FLAGS: - if (drm_gem_is_imported(obj)) { - ret =3D -EINVAL; - break; - } - /* Hide internal kernel-only flags: */ - args->value =3D to_msm_bo(obj)->flags & MSM_BO_FLAGS; - ret =3D 0; - break; - case MSM_INFO_SET_NAME: - /* length check should leave room for terminating null: */ - if (args->len >=3D sizeof(msm_obj->name)) { - ret =3D -EINVAL; - break; - } - if (copy_from_user(msm_obj->name, u64_to_user_ptr(args->value), - args->len)) { - msm_obj->name[0] =3D '\0'; - ret =3D -EFAULT; - break; - } - msm_obj->name[args->len] =3D '\0'; - for (i =3D 0; i < args->len; i++) { - if (!isprint(msm_obj->name[i])) { - msm_obj->name[i] =3D '\0'; - break; - } - } - break; - case MSM_INFO_GET_NAME: - if (args->value && (args->len < strlen(msm_obj->name))) { - ret =3D -ETOOSMALL; - break; - } - args->len =3D strlen(msm_obj->name); - if (args->value) { - if (copy_to_user(u64_to_user_ptr(args->value), - msm_obj->name, args->len)) - ret =3D -EFAULT; - } - break; - case MSM_INFO_SET_METADATA: - ret =3D msm_ioctl_gem_info_set_metadata( - obj, u64_to_user_ptr(args->value), args->len); - break; - case MSM_INFO_GET_METADATA: - ret =3D msm_ioctl_gem_info_get_metadata( - obj, u64_to_user_ptr(args->value), &args->len); - break; - } - - drm_gem_object_put(obj); - - return ret; -} - -static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id, - ktime_t timeout, uint32_t flags) -{ - struct dma_fence *fence; - int ret; - - if (fence_after(fence_id, queue->last_fence)) { - 
DRM_ERROR_RATELIMITED("waiting on invalid fence: %u (of %u)\n", - fence_id, queue->last_fence); - return -EINVAL; - } - - /* - * Map submitqueue scoped "seqno" (which is actually an idr key) - * back to underlying dma-fence - * - * The fence is removed from the fence_idr when the submit is - * retired, so if the fence is not found it means there is nothing - * to wait for - */ - spin_lock(&queue->idr_lock); - fence =3D idr_find(&queue->fence_idr, fence_id); - if (fence) - fence =3D dma_fence_get_rcu(fence); - spin_unlock(&queue->idr_lock); - - if (!fence) - return 0; - - if (flags & MSM_WAIT_FENCE_BOOST) - dma_fence_set_deadline(fence, ktime_get()); - - ret =3D dma_fence_wait_timeout(fence, true, timeout_to_jiffies(&timeout)); - if (ret =3D=3D 0) { - ret =3D -ETIMEDOUT; - } else if (ret !=3D -ERESTARTSYS) { - ret =3D 0; - } - - dma_fence_put(fence); - - return ret; -} - -static int msm_ioctl_wait_fence(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct msm_drm_private *priv =3D dev->dev_private; - struct drm_msm_wait_fence *args =3D data; - struct msm_gpu_submitqueue *queue; - int ret; - - if (args->flags & ~MSM_WAIT_FENCE_FLAGS) { - DRM_ERROR("invalid flags: %08x\n", args->flags); - return -EINVAL; - } - - if (!priv->gpu) - return 0; - - queue =3D msm_submitqueue_get(file->driver_priv, args->queueid); - if (!queue) - return -ENOENT; - - ret =3D wait_fence(queue, args->fence, to_ktime(args->timeout), args->fla= gs); - - msm_submitqueue_put(queue); - - return ret; -} - -static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct drm_msm_gem_madvise *args =3D data; - struct drm_gem_object *obj; - int ret; - - switch (args->madv) { - case MSM_MADV_DONTNEED: - case MSM_MADV_WILLNEED: - break; - default: - return -EINVAL; - } - - obj =3D drm_gem_object_lookup(file, args->handle); - if (!obj) { - return -ENOENT; - } - - ret =3D msm_gem_madvise(obj, args->madv); - if (ret >=3D 0) { - args->retained =3D ret; - ret =3D 0; - } - - drm_gem_object_put(obj); - - return ret; -} - - -static int msm_ioctl_submitqueue_new(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct drm_msm_submitqueue *args =3D data; - - if (args->flags & ~MSM_SUBMITQUEUE_FLAGS) - return -EINVAL; - - return msm_submitqueue_create(dev, file->driver_priv, args->prio, - args->flags, &args->id); -} - -static int msm_ioctl_submitqueue_query(struct drm_device *dev, void *data, - struct drm_file *file) -{ - return msm_submitqueue_query(dev, file->driver_priv, data); -} - -static int msm_ioctl_submitqueue_close(struct drm_device *dev, void *data, - struct drm_file *file) -{ - u32 id =3D *(u32 *) data; - - return msm_submitqueue_remove(file->driver_priv, id); -} - static const struct drm_ioctl_desc msm_ioctls[] =3D { DRM_IOCTL_DEF_DRV(MSM_GET_PARAM, msm_ioctl_get_param, DRM_RENDER_AL= LOW), DRM_IOCTL_DEF_DRV(MSM_SET_PARAM, msm_ioctl_set_param, DRM_RENDER_AL= LOW), diff --git a/drivers/gpu/drm/msm/msm_ioctl.c b/drivers/gpu/drm/msm/msm_ioct= l.c new file mode 100644 index 0000000000000000000000000000000000000000..837be6849684fa72887cb7d7094= 89d54e01c1a5c --- /dev/null +++ b/drivers/gpu/drm/msm/msm_ioctl.c @@ -0,0 +1,484 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2016-2018, 2020-2021 The Linux Foundation. All rights res= erved. 
+ * Copyright (C) 2013 Red Hat + * Author: Rob Clark + */ + +#include +#include +#include +#include + +#include +#include +#include + +#include "msm_drv.h" +#include "msm_gpu.h" +#include "msm_ioctl.h" + +/* + * DRM ioctls: + */ + +static inline ktime_t to_ktime(struct drm_msm_timespec timeout) +{ + return ktime_set(timeout.tv_sec, timeout.tv_nsec); +} + +int msm_ioctl_get_param(struct drm_device *dev, void *data, struct drm_fil= e *file) +{ + struct msm_drm_private *priv =3D dev->dev_private; + struct drm_msm_param *args =3D data; + struct msm_gpu *gpu; + + /* for now, we just have 3d pipe.. eventually this would need to + * be more clever to dispatch to appropriate gpu module: + */ + if ((args->pipe !=3D MSM_PIPE_3D0) || (args->pad !=3D 0)) + return -EINVAL; + + gpu =3D priv->gpu; + + if (!gpu) + return -ENXIO; + + return gpu->funcs->get_param(gpu, file->driver_priv, + args->param, &args->value, &args->len); +} + +int msm_ioctl_set_param(struct drm_device *dev, void *data, struct drm_fil= e *file) +{ + struct msm_drm_private *priv =3D dev->dev_private; + struct drm_msm_param *args =3D data; + struct msm_gpu *gpu; + + if ((args->pipe !=3D MSM_PIPE_3D0) || (args->pad !=3D 0)) + return -EINVAL; + + gpu =3D priv->gpu; + + if (!gpu) + return -ENXIO; + + return gpu->funcs->set_param(gpu, file->driver_priv, + args->param, args->value, args->len); +} + +int msm_ioctl_gem_new(struct drm_device *dev, void *data, struct drm_file = *file) +{ + struct drm_msm_gem_new *args =3D data; + uint32_t flags =3D args->flags; + + if (args->flags & ~MSM_BO_FLAGS) { + DRM_ERROR("invalid flags: %08x\n", args->flags); + return -EINVAL; + } + + /* + * Uncached CPU mappings are deprecated, as of: + * + * 9ef364432db4 ("drm/msm: deprecate MSM_BO_UNCACHED (map as writecombine= instead)") + * + * So promote them to WC. 
+ */ + if (flags & MSM_BO_UNCACHED) { + flags &=3D ~MSM_BO_CACHED; + flags |=3D MSM_BO_WC; + } + + if (should_fail(&fail_gem_alloc, args->size)) + return -ENOMEM; + + return msm_gem_new_handle(dev, file, args->size, + args->flags, &args->handle, NULL); +} + +int msm_ioctl_gem_cpu_prep(struct drm_device *dev, void *data, struct drm_= file *file) +{ + struct drm_msm_gem_cpu_prep *args =3D data; + struct drm_gem_object *obj; + ktime_t timeout =3D to_ktime(args->timeout); + int ret; + + if (args->op & ~MSM_PREP_FLAGS) { + DRM_ERROR("invalid op: %08x\n", args->op); + return -EINVAL; + } + + obj =3D drm_gem_object_lookup(file, args->handle); + if (!obj) + return -ENOENT; + + ret =3D msm_gem_cpu_prep(obj, args->op, &timeout); + + drm_gem_object_put(obj); + + return ret; +} + +int msm_ioctl_gem_cpu_fini(struct drm_device *dev, void *data, struct drm_= file *file) +{ + struct drm_msm_gem_cpu_fini *args =3D data; + struct drm_gem_object *obj; + int ret; + + obj =3D drm_gem_object_lookup(file, args->handle); + if (!obj) + return -ENOENT; + + ret =3D msm_gem_cpu_fini(obj); + + drm_gem_object_put(obj); + + return ret; +} + +int msm_ioctl_gem_info_iova(struct drm_device *dev, struct drm_file *file, + struct drm_gem_object *obj, uint64_t *iova) +{ + struct msm_drm_private *priv =3D dev->dev_private; + struct msm_context *ctx =3D file->driver_priv; + + if (!priv->gpu) + return -EINVAL; + + if (msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "VM_BIND is enabled"); + + if (should_fail(&fail_gem_iova, obj->size)) + return -ENOMEM; + + /* + * Don't pin the memory here - just get an address so that userspace can + * be productive + */ + return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova); +} + +int msm_ioctl_gem_info_set_iova(struct drm_device *dev, struct drm_file *f= ile, + struct drm_gem_object *obj, uint64_t iova) +{ + struct msm_drm_private *priv =3D dev->dev_private; + struct msm_context *ctx =3D file->driver_priv; + struct drm_gpuvm *vm =3D msm_context_vm(dev, ctx); + + if (!priv->gpu) + return -EINVAL; + + if (msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "VM_BIND is enabled"); + + /* Only supported if per-process address space is supported: */ + if (priv->gpu->vm =3D=3D vm) + return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); + + if (should_fail(&fail_gem_iova, obj->size)) + return -ENOMEM; + + return msm_gem_set_iova(obj, vm, iova); +} + +int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, + __user void *metadata, u32 metadata_size) +{ + struct msm_gem_object *msm_obj =3D to_msm_bo(obj); + void *new_metadata; + void *buf; + int ret; + + /* Impose a moderate upper bound on metadata size: */ + if (metadata_size > 128) + return -EOVERFLOW; + + /* Use a temporary buf to keep copy_from_user() outside of gem obj lock: = */ + buf =3D memdup_user(metadata, metadata_size); + if (IS_ERR(buf)) + return PTR_ERR(buf); + + ret =3D msm_gem_lock_interruptible(obj); + if (ret) + goto out; + + new_metadata =3D + krealloc(msm_obj->metadata, metadata_size, GFP_KERNEL); + if (!new_metadata) { + ret =3D -ENOMEM; + goto out; + } + + msm_obj->metadata =3D new_metadata; + msm_obj->metadata_size =3D metadata_size; + memcpy(msm_obj->metadata, buf, metadata_size); + + msm_gem_unlock(obj); + +out: + kfree(buf); + + return ret; +} + +int msm_ioctl_gem_info_get_metadata(struct drm_gem_object *obj, + __user void *metadata, u32 *metadata_size) +{ + struct msm_gem_object *msm_obj =3D to_msm_bo(obj); + void *buf; + int ret, len; + + if (!metadata) { + /* + * Querying the size is 
inherently racey, but
+		 * EXT_external_objects expects the app to confirm
+		 * via device and driver UUIDs that the exporter and
+		 * importer versions match. All we can do from the
+		 * kernel side is check the length under obj lock
+		 * when userspace tries to retrieve the metadata
+		 */
+		*metadata_size = msm_obj->metadata_size;
+		return 0;
+	}
+
+	ret = msm_gem_lock_interruptible(obj);
+	if (ret)
+		return ret;
+
+	/* Avoid copy_to_user() under gem obj lock: */
+	len = msm_obj->metadata_size;
+	buf = kmemdup(msm_obj->metadata, len, GFP_KERNEL);
+
+	msm_gem_unlock(obj);
+
+	if (*metadata_size < len)
+		ret = -ETOOSMALL;
+	else if (copy_to_user(metadata, buf, len))
+		ret = -EFAULT;
+	else
+		*metadata_size = len;
+
+	kfree(buf);
+
+	return 0;
+}
+
+int msm_ioctl_gem_info(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_msm_gem_info *args = data;
+	struct drm_gem_object *obj;
+	struct msm_gem_object *msm_obj;
+	int i, ret = 0;
+
+	if (args->pad)
+		return -EINVAL;
+
+	switch (args->info) {
+	case MSM_INFO_GET_OFFSET:
+	case MSM_INFO_GET_IOVA:
+	case MSM_INFO_SET_IOVA:
+	case MSM_INFO_GET_FLAGS:
+		/* value returned as immediate, not pointer, so len==0: */
+		if (args->len)
+			return -EINVAL;
+		break;
+	case MSM_INFO_SET_NAME:
+	case MSM_INFO_GET_NAME:
+	case MSM_INFO_SET_METADATA:
+	case MSM_INFO_GET_METADATA:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	obj = drm_gem_object_lookup(file, args->handle);
+	if (!obj)
+		return -ENOENT;
+
+	msm_obj = to_msm_bo(obj);
+
+	switch (args->info) {
+	case MSM_INFO_GET_OFFSET:
+		args->value = msm_gem_mmap_offset(obj);
+		break;
+	case MSM_INFO_GET_IOVA:
+		ret = msm_ioctl_gem_info_iova(dev, file, obj, &args->value);
+		break;
+	case MSM_INFO_SET_IOVA:
+		ret = msm_ioctl_gem_info_set_iova(dev, file, obj, args->value);
+		break;
+	case MSM_INFO_GET_FLAGS:
+		if (drm_gem_is_imported(obj)) {
+			ret = -EINVAL;
+			break;
+		}
+		/* Hide internal kernel-only flags: */
+		args->value = to_msm_bo(obj)->flags & MSM_BO_FLAGS;
+		ret = 0;
+		break;
+	case MSM_INFO_SET_NAME:
+		/* length check should leave room for terminating null: */
+		if (args->len >= sizeof(msm_obj->name)) {
+			ret = -EINVAL;
+			break;
+		}
+		if (copy_from_user(msm_obj->name, u64_to_user_ptr(args->value),
+				   args->len)) {
+			msm_obj->name[0] = '\0';
+			ret = -EFAULT;
+			break;
+		}
+		msm_obj->name[args->len] = '\0';
+		for (i = 0; i < args->len; i++) {
+			if (!isprint(msm_obj->name[i])) {
+				msm_obj->name[i] = '\0';
+				break;
+			}
+		}
+		break;
+	case MSM_INFO_GET_NAME:
+		if (args->value && (args->len < strlen(msm_obj->name))) {
+			ret = -ETOOSMALL;
+			break;
+		}
+		args->len = strlen(msm_obj->name);
+		if (args->value) {
+			if (copy_to_user(u64_to_user_ptr(args->value),
+					 msm_obj->name, args->len))
+				ret = -EFAULT;
+		}
+		break;
+	case MSM_INFO_SET_METADATA:
+		ret = msm_ioctl_gem_info_set_metadata(
+			obj, u64_to_user_ptr(args->value), args->len);
+		break;
+	case MSM_INFO_GET_METADATA:
+		ret = msm_ioctl_gem_info_get_metadata(
+			obj, u64_to_user_ptr(args->value), &args->len);
+		break;
+	}
+
+	drm_gem_object_put(obj);
+
+	return ret;
+}
+
+static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id,
+		      ktime_t timeout, uint32_t flags)
+{
+	struct dma_fence *fence;
+	int ret;
+
+	if (fence_after(fence_id, queue->last_fence)) {
+		DRM_ERROR_RATELIMITED("waiting on invalid fence: %u (of %u)\n",
+				      fence_id, queue->last_fence);
+		return -EINVAL;
+	}
+
+	/*
+	 * Map submitqueue scoped "seqno" (which is actually an idr key)
+	 * back to underlying dma-fence
+	 *
+	 * The fence is removed from the fence_idr when the submit is
+	 * retired, so if the fence is not found it means there is nothing
+	 * to wait for
+	 */
+	spin_lock(&queue->idr_lock);
+	fence = idr_find(&queue->fence_idr, fence_id);
+	if (fence)
+		fence = dma_fence_get_rcu(fence);
+	spin_unlock(&queue->idr_lock);
+
+	if (!fence)
+		return 0;
+
+	if (flags & MSM_WAIT_FENCE_BOOST)
+		dma_fence_set_deadline(fence, ktime_get());
+
+	ret = dma_fence_wait_timeout(fence, true, timeout_to_jiffies(&timeout));
+	if (ret == 0)
+		ret = -ETIMEDOUT;
+	else if (ret != -ERESTARTSYS)
+		ret = 0;
+
+	dma_fence_put(fence);
+
+	return ret;
+}
+
+int msm_ioctl_wait_fence(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	struct drm_msm_wait_fence *args = data;
+	struct msm_gpu_submitqueue *queue;
+	int ret;
+
+	if (args->flags & ~MSM_WAIT_FENCE_FLAGS) {
+		DRM_ERROR("invalid flags: %08x\n", args->flags);
+		return -EINVAL;
+	}
+
+	if (!priv->gpu)
+		return 0;
+
+	queue = msm_submitqueue_get(file->driver_priv, args->queueid);
+	if (!queue)
+		return -ENOENT;
+
+	ret = wait_fence(queue, args->fence, to_ktime(args->timeout), args->flags);
+
+	msm_submitqueue_put(queue);
+
+	return ret;
+}
+
+int msm_ioctl_gem_madvise(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_msm_gem_madvise *args = data;
+	struct drm_gem_object *obj;
+	int ret;
+
+	switch (args->madv) {
+	case MSM_MADV_DONTNEED:
+	case MSM_MADV_WILLNEED:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	obj = drm_gem_object_lookup(file, args->handle);
+	if (!obj)
+		return -ENOENT;
+
+	ret = msm_gem_madvise(obj, args->madv);
+	if (ret >= 0) {
+		args->retained = ret;
+		ret = 0;
+	}
+
+	drm_gem_object_put(obj);
+
+	return ret;
+}
+
+int msm_ioctl_submitqueue_new(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_msm_submitqueue *args = data;
+
+	if (args->flags & ~MSM_SUBMITQUEUE_FLAGS)
+		return -EINVAL;
+
+	return msm_submitqueue_create(dev, file->driver_priv, args->prio,
+				      args->flags, &args->id);
+}
+
+int msm_ioctl_submitqueue_query(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	return msm_submitqueue_query(dev, file->driver_priv, data);
+}
+
+int msm_ioctl_submitqueue_close(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	u32 id = *(u32 *) data;
+
+	return msm_submitqueue_remove(file->driver_priv, id);
+}
diff --git a/drivers/gpu/drm/msm/msm_ioctl.h b/drivers/gpu/drm/msm/msm_ioctl.h
new file mode 100644
index 0000000000000000000000000000000000000000..5711476a00df4773b12020a37bfb3ceb964c19ee
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_ioctl.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark
+ */
+
+#ifndef __MSM_IOCTLS_H__
+#define __MSM_IOCTLS_H__
+
+#include
+
+struct drm_device;
+struct drm_file;
+struct drm_gem_object;
+
+int msm_ioctl_get_param(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_set_param(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_gem_new(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_gem_cpu_prep(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_gem_cpu_fini(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_gem_info_iova(struct drm_device *dev, struct drm_file *file,
+			    struct drm_gem_object *obj, uint64_t *iova);
+int msm_ioctl_gem_info_set_iova(struct drm_device *dev, struct drm_file *file,
+				struct drm_gem_object *obj, uint64_t iova);
+int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
+				    __user void *metadata, u32 metadata_size);
+int msm_ioctl_gem_info_get_metadata(struct drm_gem_object *obj,
+				    __user void *metadata, u32 *metadata_size);
+int msm_ioctl_gem_info(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_wait_fence(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_gem_madvise(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_submitqueue_new(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_submitqueue_query(struct drm_device *dev, void *data, struct drm_file *file);
+int msm_ioctl_submitqueue_close(struct drm_device *dev, void *data, struct drm_file *file);
+
+#endif
-- 
2.47.3
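For context on the wait-fence path moved into msm_ioctl.c above: userspace
drives it through DRM_IOCTL_MSM_WAIT_FENCE. A minimal sketch of a caller,
assuming the uapi structures from include/uapi/drm/msm_drm.h and libdrm's
drmIoctl() (the uapi header's install location varies; error handling
trimmed):

	#include <stdint.h>
	#include <time.h>
	#include <xf86drm.h>
	#include "msm_drm.h"	/* uapi header; path is installation-dependent */

	/*
	 * Wait up to one second for a submitqueue-scoped fence id.
	 * MSM_WAIT_FENCE_BOOST asks the kernel for an immediate dma-fence
	 * deadline, which maps to dma_fence_set_deadline() in wait_fence()
	 * above.  The timeout is an absolute CLOCK_MONOTONIC time.
	 */
	static int wait_submit_fence(int fd, uint32_t queueid, uint32_t fence_id)
	{
		struct timespec now;
		struct drm_msm_wait_fence req = {
			.fence = fence_id,
			.flags = MSM_WAIT_FENCE_BOOST,
			.queueid = queueid,
		};

		clock_gettime(CLOCK_MONOTONIC, &now);
		req.timeout.tv_sec = now.tv_sec + 1;	/* absolute deadline */
		req.timeout.tv_nsec = now.tv_nsec;

		return drmIoctl(fd, DRM_IOCTL_MSM_WAIT_FENCE, &req);
	}

A zero return covers both a signalled fence and an already-retired one,
matching the idr_find() miss case handled in wait_fence() above.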
From nobody Thu Oct 2 10:57:10 2025
From: Dmitry Baryshkov
Date: Thu, 18 Sep 2025 06:50:25 +0300
Subject: [PATCH v5 4/5] drm/msm: split debugfs implementation
Message-Id: <20250918-msm-gpu-split-v5-4-44486f44d27d@oss.qualcomm.com>
References: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>
In-Reply-To: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>

In preparation for making the GPU supporting code optional, split the
debugfs code into three main pieces: GEM (always enabled), KMS (only
enabled if the KMS driver parts are enabled) and GPU (currently always
enabled, will become
optional later). Signed-off-by: Dmitry Baryshkov --- drivers/gpu/drm/msm/Makefile | 4 +- drivers/gpu/drm/msm/msm_debugfs.c | 420 ------------------------------= ---- drivers/gpu/drm/msm/msm_debugfs.h | 14 -- drivers/gpu/drm/msm/msm_drv.c | 21 +- drivers/gpu/drm/msm/msm_drv.h | 4 +- drivers/gpu/drm/msm/msm_gem.h | 8 + drivers/gpu/drm/msm/msm_gem_debugfs.c | 96 ++++++++ drivers/gpu/drm/msm/msm_gpu_debugfs.c | 213 +++++++++++++++++ drivers/gpu/drm/msm/msm_kms.h | 8 + drivers/gpu/drm/msm/msm_kms_debugfs.c | 129 +++++++++++ 10 files changed, 476 insertions(+), 441 deletions(-) diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index 0ac977a6ed01d91111d706995f341ced29f5ca8d..a475479fe201cb03937d30ee913= c2e178675384e 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -107,10 +107,10 @@ msm-display-$(CONFIG_DRM_MSM_KMS) +=3D \ disp/msm_disp_snapshot_util.o \ =20 msm-y +=3D \ - msm_debugfs.o \ msm_drv.o \ msm_fence.o \ msm_gem.o \ + msm_gem_debugfs.o \ msm_gem_prime.o \ msm_gem_shrinker.o \ msm_gem_submit.o \ @@ -118,6 +118,7 @@ msm-y +=3D \ msm_gem_vm_bind.o \ msm_gpu.o \ msm_gpu_devfreq.o \ + msm_gpu_debugfs.o \ msm_io_utils.o \ msm_ioctl.o \ msm_iommu.o \ @@ -133,6 +134,7 @@ msm-$(CONFIG_DRM_MSM_KMS) +=3D \ msm_atomic_tracepoints.o \ msm_fb.o \ msm_kms.o \ + msm_kms_debugfs.o \ =20 msm-$(CONFIG_DRM_MSM_KMS_FBDEV) +=3D msm_fbdev.o =20 diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_de= bugfs.c deleted file mode 100644 index 97dc70876442f9aa932677edbed5d26f6095e7ee..000000000000000000000000000= 0000000000000 --- a/drivers/gpu/drm/msm/msm_debugfs.c +++ /dev/null @@ -1,420 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * Copyright (C) 2013-2016 Red Hat - * Author: Rob Clark - */ - -#ifdef CONFIG_DEBUG_FS - -#include -#include - -#include -#include -#include -#include - -#include "msm_drv.h" -#include "msm_gpu.h" -#include "msm_kms.h" -#include "msm_debugfs.h" -#include "disp/msm_disp_snapshot.h" - -/* - * GPU Snapshot: - */ - -struct msm_gpu_show_priv { - struct msm_gpu_state *state; - struct drm_device *dev; -}; - -static int msm_gpu_show(struct seq_file *m, void *arg) -{ - struct drm_printer p =3D drm_seq_file_printer(m); - struct msm_gpu_show_priv *show_priv =3D m->private; - struct msm_drm_private *priv =3D show_priv->dev->dev_private; - struct msm_gpu *gpu =3D priv->gpu; - int ret; - - ret =3D mutex_lock_interruptible(&gpu->lock); - if (ret) - return ret; - - drm_printf(&p, "%s Status:\n", gpu->name); - gpu->funcs->show(gpu, show_priv->state, &p); - - mutex_unlock(&gpu->lock); - - return 0; -} - -static int msm_gpu_release(struct inode *inode, struct file *file) -{ - struct seq_file *m =3D file->private_data; - struct msm_gpu_show_priv *show_priv =3D m->private; - struct msm_drm_private *priv =3D show_priv->dev->dev_private; - struct msm_gpu *gpu =3D priv->gpu; - - mutex_lock(&gpu->lock); - gpu->funcs->gpu_state_put(show_priv->state); - mutex_unlock(&gpu->lock); - - kfree(show_priv); - - return single_release(inode, file); -} - -static int msm_gpu_open(struct inode *inode, struct file *file) -{ - struct drm_device *dev =3D inode->i_private; - struct msm_drm_private *priv =3D dev->dev_private; - struct msm_gpu *gpu =3D priv->gpu; - struct msm_gpu_show_priv *show_priv; - int ret; - - if (!gpu || !gpu->funcs->gpu_state_get) - return -ENODEV; - - show_priv =3D kmalloc(sizeof(*show_priv), GFP_KERNEL); - if (!show_priv) - return -ENOMEM; - - ret =3D mutex_lock_interruptible(&gpu->lock); - if (ret) - goto free_priv; 
- - pm_runtime_get_sync(&gpu->pdev->dev); - msm_gpu_hw_init(gpu); - show_priv->state =3D gpu->funcs->gpu_state_get(gpu); - pm_runtime_put_sync(&gpu->pdev->dev); - - mutex_unlock(&gpu->lock); - - if (IS_ERR(show_priv->state)) { - ret =3D PTR_ERR(show_priv->state); - goto free_priv; - } - - show_priv->dev =3D dev; - - ret =3D single_open(file, msm_gpu_show, show_priv); - if (ret) - goto free_priv; - - return 0; - -free_priv: - kfree(show_priv); - return ret; -} - -static const struct file_operations msm_gpu_fops =3D { - .owner =3D THIS_MODULE, - .open =3D msm_gpu_open, - .read =3D seq_read, - .llseek =3D seq_lseek, - .release =3D msm_gpu_release, -}; - -#ifdef CONFIG_DRM_MSM_KMS -static int msm_fb_show(struct seq_file *m, void *arg) -{ - struct drm_info_node *node =3D m->private; - struct drm_device *dev =3D node->minor->dev; - struct drm_framebuffer *fb, *fbdev_fb =3D NULL; - - if (dev->fb_helper && dev->fb_helper->fb) { - seq_puts(m, "fbcon "); - fbdev_fb =3D dev->fb_helper->fb; - msm_framebuffer_describe(fbdev_fb, m); - } - - mutex_lock(&dev->mode_config.fb_lock); - list_for_each_entry(fb, &dev->mode_config.fb_list, head) { - if (fb =3D=3D fbdev_fb) - continue; - - seq_puts(m, "user "); - msm_framebuffer_describe(fb, m); - } - mutex_unlock(&dev->mode_config.fb_lock); - - return 0; -} - -static struct drm_info_list msm_kms_debugfs_list[] =3D { - { "fb", msm_fb_show }, -}; - -/* - * Display Snapshot: - */ - -static int msm_kms_show(struct seq_file *m, void *arg) -{ - struct drm_printer p =3D drm_seq_file_printer(m); - struct msm_disp_state *state =3D m->private; - - msm_disp_state_print(state, &p); - - return 0; -} - -static int msm_kms_release(struct inode *inode, struct file *file) -{ - struct seq_file *m =3D file->private_data; - struct msm_disp_state *state =3D m->private; - - msm_disp_state_free(state); - - return single_release(inode, file); -} - -static int msm_kms_open(struct inode *inode, struct file *file) -{ - struct drm_device *dev =3D inode->i_private; - struct msm_drm_private *priv =3D dev->dev_private; - struct msm_disp_state *state; - int ret; - - if (!priv->kms) - return -ENODEV; - - ret =3D mutex_lock_interruptible(&priv->kms->dump_mutex); - if (ret) - return ret; - - state =3D msm_disp_snapshot_state_sync(priv->kms); - - mutex_unlock(&priv->kms->dump_mutex); - - if (IS_ERR(state)) { - return PTR_ERR(state); - } - - ret =3D single_open(file, msm_kms_show, state); - if (ret) { - msm_disp_state_free(state); - return ret; - } - - return 0; -} - -static const struct file_operations msm_kms_fops =3D { - .owner =3D THIS_MODULE, - .open =3D msm_kms_open, - .read =3D seq_read, - .llseek =3D seq_lseek, - .release =3D msm_kms_release, -}; - -static void msm_debugfs_kms_init(struct drm_minor *minor) -{ - struct drm_device *dev =3D minor->dev; - struct msm_drm_private *priv =3D dev->dev_private; - - drm_debugfs_create_files(msm_kms_debugfs_list, - ARRAY_SIZE(msm_kms_debugfs_list), - minor->debugfs_root, minor); - debugfs_create_file("kms", 0400, minor->debugfs_root, - dev, &msm_kms_fops); - - if (priv->kms->funcs->debugfs_init) - priv->kms->funcs->debugfs_init(priv->kms, minor); - -} -#else /* ! 
CONFIG_DRM_MSM_KMS */ -static void msm_debugfs_kms_init(struct drm_minor *minor) -{ -} -#endif - -/* - * Other debugfs: - */ - -static unsigned long last_shrink_freed; - -static int -shrink_get(void *data, u64 *val) -{ - *val =3D last_shrink_freed; - - return 0; -} - -static int -shrink_set(void *data, u64 val) -{ - struct drm_device *dev =3D data; - - last_shrink_freed =3D msm_gem_shrinker_shrink(dev, val); - - return 0; -} - -DEFINE_DEBUGFS_ATTRIBUTE(shrink_fops, - shrink_get, shrink_set, - "0x%08llx\n"); - -/* - * Return the number of microseconds to wait until stall-on-fault is - * re-enabled. If 0 then it is already enabled or will be re-enabled on the - * next submit (unless there's a leftover devcoredump). This is useful for - * kernel tests that intentionally produce a fault and check the devcoredu= mp to - * wait until the cooldown period is over. - */ - -static int -stall_reenable_time_get(void *data, u64 *val) -{ - struct msm_drm_private *priv =3D data; - unsigned long irq_flags; - - spin_lock_irqsave(&priv->fault_stall_lock, irq_flags); - - if (priv->stall_enabled) - *val =3D 0; - else - *val =3D max(ktime_us_delta(priv->stall_reenable_time, ktime_get()), 0); - - spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags); - - return 0; -} - -DEFINE_DEBUGFS_ATTRIBUTE(stall_reenable_time_fops, - stall_reenable_time_get, NULL, - "%lld\n"); - -static int msm_gem_show(struct seq_file *m, void *arg) -{ - struct drm_info_node *node =3D m->private; - struct drm_device *dev =3D node->minor->dev; - struct msm_drm_private *priv =3D dev->dev_private; - int ret; - - ret =3D mutex_lock_interruptible(&priv->obj_lock); - if (ret) - return ret; - - msm_gem_describe_objects(&priv->objects, m); - - mutex_unlock(&priv->obj_lock); - - return 0; -} - -static int msm_mm_show(struct seq_file *m, void *arg) -{ - struct drm_info_node *node =3D m->private; - struct drm_device *dev =3D node->minor->dev; - struct drm_printer p =3D drm_seq_file_printer(m); - - drm_mm_print(&dev->vma_offset_manager->vm_addr_space_mm, &p); - - return 0; -} - -static struct drm_info_list msm_debugfs_list[] =3D { - {"gem", msm_gem_show}, - { "mm", msm_mm_show }, -}; - -static int late_init_minor(struct drm_minor *minor) -{ - struct drm_device *dev; - struct msm_drm_private *priv; - int ret; - - if (!minor) - return 0; - - dev =3D minor->dev; - priv =3D dev->dev_private; - - if (!priv->gpu_pdev) - return 0; - - ret =3D msm_rd_debugfs_init(minor); - if (ret) { - DRM_DEV_ERROR(dev->dev, "could not install rd debugfs\n"); - return ret; - } - - ret =3D msm_perf_debugfs_init(minor); - if (ret) { - DRM_DEV_ERROR(dev->dev, "could not install perf debugfs\n"); - return ret; - } - - return 0; -} - -int msm_debugfs_late_init(struct drm_device *dev) -{ - int ret; - ret =3D late_init_minor(dev->primary); - if (ret) - return ret; - ret =3D late_init_minor(dev->render); - return ret; -} - -static void msm_debugfs_gpu_init(struct drm_minor *minor) -{ - struct drm_device *dev =3D minor->dev; - struct msm_drm_private *priv =3D dev->dev_private; - struct dentry *gpu_devfreq; - - debugfs_create_file("gpu", S_IRUSR, minor->debugfs_root, - dev, &msm_gpu_fops); - - debugfs_create_u32("hangcheck_period_ms", 0600, minor->debugfs_root, - &priv->hangcheck_period); - - debugfs_create_bool("disable_err_irq", 0600, minor->debugfs_root, - &priv->disable_err_irq); - - debugfs_create_file("stall_reenable_time_us", 0400, minor->debugfs_root, - priv, &stall_reenable_time_fops); - - gpu_devfreq =3D debugfs_create_dir("devfreq", minor->debugfs_root); - - 
debugfs_create_bool("idle_clamp",0600, gpu_devfreq, - &priv->gpu_clamp_to_idle); - - debugfs_create_u32("upthreshold",0600, gpu_devfreq, - &priv->gpu_devfreq_config.upthreshold); - - debugfs_create_u32("downdifferential",0600, gpu_devfreq, - &priv->gpu_devfreq_config.downdifferential); -} - -void msm_debugfs_init(struct drm_minor *minor) -{ - struct drm_device *dev =3D minor->dev; - struct msm_drm_private *priv =3D dev->dev_private; - - drm_debugfs_create_files(msm_debugfs_list, - ARRAY_SIZE(msm_debugfs_list), - minor->debugfs_root, minor); - - if (priv->gpu_pdev) - msm_debugfs_gpu_init(minor); - - if (priv->kms) - msm_debugfs_kms_init(minor); - - debugfs_create_file("shrink", S_IRWXU, minor->debugfs_root, - dev, &shrink_fops); - - fault_create_debugfs_attr("fail_gem_alloc", minor->debugfs_root, - &fail_gem_alloc); - fault_create_debugfs_attr("fail_gem_iova", minor->debugfs_root, - &fail_gem_iova); -} -#endif - diff --git a/drivers/gpu/drm/msm/msm_debugfs.h b/drivers/gpu/drm/msm/msm_de= bugfs.h deleted file mode 100644 index ef58f66abbb341eccfbfeff9d759141e30ccc937..000000000000000000000000000= 0000000000000 --- a/drivers/gpu/drm/msm/msm_debugfs.h +++ /dev/null @@ -1,14 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -/* - * Copyright (C) 2016 Red Hat - * Author: Rob Clark - */ - -#ifndef __MSM_DEBUGFS_H__ -#define __MSM_DEBUGFS_H__ - -#ifdef CONFIG_DEBUG_FS -void msm_debugfs_init(struct drm_minor *minor); -#endif - -#endif /* __MSM_DEBUGFS_H__ */ diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index ba984cc71d1d3aa341e0f4532b7093adcd25d3b0..28a5da1d1391f6c3cb2bfd17515= 4016f8987b752 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -15,7 +15,6 @@ #include =20 #include "msm_drv.h" -#include "msm_debugfs.h" #include "msm_gpu.h" #include "msm_ioctl.h" #include "msm_kms.h" @@ -64,6 +63,22 @@ bool msm_gpu_no_components(void) return separate_gpu_kms; } =20 +#ifdef CONFIG_DEBUG_FS +static void msm_debugfs_late_init(struct drm_device *dev) +{ + msm_gpu_debugfs_late_init(dev); +} + +static void msm_debugfs_init(struct drm_minor *minor) +{ + msm_gpu_debugfs_init(minor); + + msm_kms_debugfs_init(minor); + + msm_gem_debugfs_init(minor); +} +#endif + static int msm_drm_uninit(struct device *dev, const struct component_ops *= gpu_ops) { struct platform_device *pdev =3D to_platform_device(dev); @@ -171,9 +186,7 @@ static int msm_drm_init(struct device *dev, const struc= t drm_driver *drv, if (ret) goto err_msm_uninit; =20 - ret =3D msm_debugfs_late_init(ddev); - if (ret) - goto err_msm_uninit; + msm_debugfs_late_init(ddev); =20 if (priv->kms_init) msm_drm_kms_post_init(dev); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 6d847d593f1aebdf90e4389ef7ecdf5721d910a5..646ddf2c320ac94ff7b0f5c21da= b60fe777a10bf 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -436,7 +436,8 @@ static inline void msm_mdss_unregister(void) {} =20 #ifdef CONFIG_DEBUG_FS void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file = *m); -int msm_debugfs_late_init(struct drm_device *dev); +void msm_gpu_debugfs_init(struct drm_minor *minor); +void msm_gpu_debugfs_late_init(struct drm_device *dev); int msm_rd_debugfs_init(struct drm_minor *minor); void msm_rd_debugfs_cleanup(struct msm_drm_private *priv); __printf(3, 4) @@ -445,7 +446,6 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct= msm_gem_submit *submit, int msm_perf_debugfs_init(struct drm_minor *minor); void 
msm_perf_debugfs_cleanup(struct msm_drm_private *priv); #else -static inline int msm_debugfs_late_init(struct drm_device *dev) { return 0= ; } __printf(3, 4) static inline void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit, diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index a4cf31853c5008e171c3ad72cde1004c60fe5212..3a0086a883a2c2e57b01a5add17= be852f2877865 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -498,4 +498,12 @@ static inline void msm_gem_submit_put(struct msm_gem_s= ubmit *submit) =20 void msm_submit_retire(struct msm_gem_submit *submit); =20 +#ifdef CONFIG_DEBUG_FS +void msm_gem_debugfs_init(struct drm_minor *minor); +#else +static inline void msm_gem_debugfs_init(struct drm_minor *minor) +{ +} +#endif + #endif /* __MSM_GEM_H__ */ diff --git a/drivers/gpu/drm/msm/msm_gem_debugfs.c b/drivers/gpu/drm/msm/ms= m_gem_debugfs.c new file mode 100644 index 0000000000000000000000000000000000000000..1e7fccb17479d80cb6fae90490f= 53148190a4417 --- /dev/null +++ b/drivers/gpu/drm/msm/msm_gem_debugfs.c @@ -0,0 +1,96 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2013-2016 Red Hat + * Author: Rob Clark + */ + +#ifdef CONFIG_DEBUG_FS + +#include +#include + +#include +#include + +#include "msm_gem.h" + +/* + * Other debugfs: + */ + +static unsigned long last_shrink_freed; + +static int +shrink_get(void *data, u64 *val) +{ + *val =3D last_shrink_freed; + + return 0; +} + +static int +shrink_set(void *data, u64 val) +{ + struct drm_device *dev =3D data; + + last_shrink_freed =3D msm_gem_shrinker_shrink(dev, val); + + return 0; +} + +DEFINE_DEBUGFS_ATTRIBUTE(shrink_fops, + shrink_get, shrink_set, + "0x%08llx\n"); + +static int msm_gem_show(struct seq_file *m, void *arg) +{ + struct drm_info_node *node =3D m->private; + struct drm_device *dev =3D node->minor->dev; + struct msm_drm_private *priv =3D dev->dev_private; + int ret; + + ret =3D mutex_lock_interruptible(&priv->obj_lock); + if (ret) + return ret; + + msm_gem_describe_objects(&priv->objects, m); + + mutex_unlock(&priv->obj_lock); + + return 0; +} + +static int msm_mm_show(struct seq_file *m, void *arg) +{ + struct drm_info_node *node =3D m->private; + struct drm_device *dev =3D node->minor->dev; + struct drm_printer p =3D drm_seq_file_printer(m); + + drm_mm_print(&dev->vma_offset_manager->vm_addr_space_mm, &p); + + return 0; +} + +static struct drm_info_list msm_debugfs_list[] =3D { + {"gem", msm_gem_show}, + { "mm", msm_mm_show }, +}; + +void msm_gem_debugfs_init(struct drm_minor *minor) +{ + struct drm_device *dev =3D minor->dev; + + drm_debugfs_create_files(msm_debugfs_list, + ARRAY_SIZE(msm_debugfs_list), + minor->debugfs_root, minor); + + debugfs_create_file("shrink", 0700, minor->debugfs_root, + dev, &shrink_fops); + + fault_create_debugfs_attr("fail_gem_alloc", minor->debugfs_root, + &fail_gem_alloc); + fault_create_debugfs_attr("fail_gem_iova", minor->debugfs_root, + &fail_gem_iova); +} +#endif + diff --git a/drivers/gpu/drm/msm/msm_gpu_debugfs.c b/drivers/gpu/drm/msm/ms= m_gpu_debugfs.c new file mode 100644 index 0000000000000000000000000000000000000000..7a070160ddac711a1c731a4fb7f= b099b8dfcdc01 --- /dev/null +++ b/drivers/gpu/drm/msm/msm_gpu_debugfs.c @@ -0,0 +1,213 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2013-2016 Red Hat + * Author: Rob Clark + */ + +#ifdef CONFIG_DEBUG_FS + +#include +#include + +#include +#include +#include +#include + +#include "msm_drv.h" +#include "msm_gpu.h" +#include 
"msm_kms.h" +#include "disp/msm_disp_snapshot.h" + +/* + * GPU Snapshot: + */ + +struct msm_gpu_show_priv { + struct msm_gpu_state *state; + struct drm_device *dev; +}; + +static int msm_gpu_show(struct seq_file *m, void *arg) +{ + struct drm_printer p =3D drm_seq_file_printer(m); + struct msm_gpu_show_priv *show_priv =3D m->private; + struct msm_drm_private *priv =3D show_priv->dev->dev_private; + struct msm_gpu *gpu =3D priv->gpu; + int ret; + + ret =3D mutex_lock_interruptible(&gpu->lock); + if (ret) + return ret; + + drm_printf(&p, "%s Status:\n", gpu->name); + gpu->funcs->show(gpu, show_priv->state, &p); + + mutex_unlock(&gpu->lock); + + return 0; +} + +static int msm_gpu_release(struct inode *inode, struct file *file) +{ + struct seq_file *m =3D file->private_data; + struct msm_gpu_show_priv *show_priv =3D m->private; + struct msm_drm_private *priv =3D show_priv->dev->dev_private; + struct msm_gpu *gpu =3D priv->gpu; + + mutex_lock(&gpu->lock); + gpu->funcs->gpu_state_put(show_priv->state); + mutex_unlock(&gpu->lock); + + kfree(show_priv); + + return single_release(inode, file); +} + +static int msm_gpu_open(struct inode *inode, struct file *file) +{ + struct drm_device *dev =3D inode->i_private; + struct msm_drm_private *priv =3D dev->dev_private; + struct msm_gpu *gpu =3D priv->gpu; + struct msm_gpu_show_priv *show_priv; + int ret; + + if (!gpu || !gpu->funcs->gpu_state_get) + return -ENODEV; + + show_priv =3D kmalloc(sizeof(*show_priv), GFP_KERNEL); + if (!show_priv) + return -ENOMEM; + + ret =3D mutex_lock_interruptible(&gpu->lock); + if (ret) + goto free_priv; + + pm_runtime_get_sync(&gpu->pdev->dev); + msm_gpu_hw_init(gpu); + show_priv->state =3D gpu->funcs->gpu_state_get(gpu); + pm_runtime_put_sync(&gpu->pdev->dev); + + mutex_unlock(&gpu->lock); + + if (IS_ERR(show_priv->state)) { + ret =3D PTR_ERR(show_priv->state); + goto free_priv; + } + + show_priv->dev =3D dev; + + ret =3D single_open(file, msm_gpu_show, show_priv); + if (ret) + goto free_priv; + + return 0; + +free_priv: + kfree(show_priv); + return ret; +} + +static const struct file_operations msm_gpu_fops =3D { + .owner =3D THIS_MODULE, + .open =3D msm_gpu_open, + .read =3D seq_read, + .llseek =3D seq_lseek, + .release =3D msm_gpu_release, +}; + +/* + * Return the number of microseconds to wait until stall-on-fault is + * re-enabled. If 0 then it is already enabled or will be re-enabled on the + * next submit (unless there's a leftover devcoredump). This is useful for + * kernel tests that intentionally produce a fault and check the devcoredu= mp to + * wait until the cooldown period is over. 
+ */ + +static int +stall_reenable_time_get(void *data, u64 *val) +{ + struct msm_drm_private *priv =3D data; + unsigned long irq_flags; + + spin_lock_irqsave(&priv->fault_stall_lock, irq_flags); + + if (priv->stall_enabled) + *val =3D 0; + else + *val =3D max(ktime_us_delta(priv->stall_reenable_time, ktime_get()), 0); + + spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags); + + return 0; +} + +DEFINE_DEBUGFS_ATTRIBUTE(stall_reenable_time_fops, + stall_reenable_time_get, NULL, + "%lld\n"); + +void msm_gpu_debugfs_init(struct drm_minor *minor) +{ + struct drm_device *dev =3D minor->dev; + struct msm_drm_private *priv =3D dev->dev_private; + struct dentry *gpu_devfreq; + + if (!priv->gpu_pdev) + return; + + debugfs_create_file("gpu", 0400, minor->debugfs_root, + dev, &msm_gpu_fops); + + debugfs_create_u32("hangcheck_period_ms", 0600, minor->debugfs_root, + &priv->hangcheck_period); + + debugfs_create_bool("disable_err_irq", 0600, minor->debugfs_root, + &priv->disable_err_irq); + + debugfs_create_file("stall_reenable_time_us", 0400, minor->debugfs_root, + priv, &stall_reenable_time_fops); + + gpu_devfreq =3D debugfs_create_dir("devfreq", minor->debugfs_root); + + debugfs_create_bool("idle_clamp", 0600, gpu_devfreq, + &priv->gpu_clamp_to_idle); + + debugfs_create_u32("upthreshold", 0600, gpu_devfreq, + &priv->gpu_devfreq_config.upthreshold); + + debugfs_create_u32("downdifferential", 0600, gpu_devfreq, + &priv->gpu_devfreq_config.downdifferential); +} + +static void late_init_minor(struct drm_minor *minor) +{ + int ret; + + if (!minor) + return; + + ret =3D msm_rd_debugfs_init(minor); + if (ret) { + drm_err(minor->dev, "could not install rd debugfs\n"); + return; + } + + ret =3D msm_perf_debugfs_init(minor); + if (ret) { + drm_err(minor->dev, "could not install perf debugfs\n"); + return; + } +} + +void msm_gpu_debugfs_late_init(struct drm_device *dev) +{ + struct msm_drm_private *priv =3D dev->dev_private; + + if (!priv->gpu_pdev) + return; + + late_init_minor(dev->primary); + + late_init_minor(dev->render); +} +#endif diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index 8a7be7b854deea9b763ec45df275fab77d806e44..ce7d73e57ee7e23272ef23a06ae= 5bc3d35b5bf98 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -240,4 +240,12 @@ static inline void msm_drm_kms_uninit(struct device *d= ev) =20 #endif =20 +#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_DRM_MSM_KMS) +void msm_kms_debugfs_init(struct drm_minor *minor); +#else +static inline void msm_kms_debugfs_init(struct drm_minor *minor) +{ +} +#endif + #endif /* __MSM_KMS_H__ */ diff --git a/drivers/gpu/drm/msm/msm_kms_debugfs.c b/drivers/gpu/drm/msm/ms= m_kms_debugfs.c new file mode 100644 index 0000000000000000000000000000000000000000..58975ee220b6f2b2dc581b864b2= f22e6c12e7583 --- /dev/null +++ b/drivers/gpu/drm/msm/msm_kms_debugfs.c @@ -0,0 +1,129 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2013-2016 Red Hat + * Author: Rob Clark + */ + +#ifdef CONFIG_DEBUG_FS + +#include + +#include +#include +#include +#include + +#include "msm_drv.h" +#include "msm_kms.h" +#include "disp/msm_disp_snapshot.h" + +static int msm_fb_show(struct seq_file *m, void *arg) +{ + struct drm_info_node *node =3D m->private; + struct drm_device *dev =3D node->minor->dev; + struct drm_framebuffer *fb, *fbdev_fb =3D NULL; + + if (dev->fb_helper && dev->fb_helper->fb) { + seq_puts(m, "fbcon "); + fbdev_fb =3D dev->fb_helper->fb; + msm_framebuffer_describe(fbdev_fb, m); + } + + 
mutex_lock(&dev->mode_config.fb_lock);
+	list_for_each_entry(fb, &dev->mode_config.fb_list, head) {
+		if (fb == fbdev_fb)
+			continue;
+
+		seq_puts(m, "user ");
+		msm_framebuffer_describe(fb, m);
+	}
+	mutex_unlock(&dev->mode_config.fb_lock);
+
+	return 0;
+}
+
+static struct drm_info_list msm_kms_debugfs_list[] = {
+	{ "fb", msm_fb_show },
+};
+
+/*
+ * Display Snapshot:
+ */
+
+static int msm_kms_show(struct seq_file *m, void *arg)
+{
+	struct drm_printer p = drm_seq_file_printer(m);
+	struct msm_disp_state *state = m->private;
+
+	msm_disp_state_print(state, &p);
+
+	return 0;
+}
+
+static int msm_kms_release(struct inode *inode, struct file *file)
+{
+	struct seq_file *m = file->private_data;
+	struct msm_disp_state *state = m->private;
+
+	msm_disp_state_free(state);
+
+	return single_release(inode, file);
+}
+
+static int msm_kms_open(struct inode *inode, struct file *file)
+{
+	struct drm_device *dev = inode->i_private;
+	struct msm_drm_private *priv = dev->dev_private;
+	struct msm_disp_state *state;
+	int ret;
+
+	if (!priv->kms)
+		return -ENODEV;
+
+	ret = mutex_lock_interruptible(&priv->kms->dump_mutex);
+	if (ret)
+		return ret;
+
+	state = msm_disp_snapshot_state_sync(priv->kms);
+
+	mutex_unlock(&priv->kms->dump_mutex);
+
+	if (IS_ERR(state))
+		return PTR_ERR(state);
+
+	ret = single_open(file, msm_kms_show, state);
+	if (ret) {
+		msm_disp_state_free(state);
+		return ret;
+	}
+
+	return 0;
+}
+
+static const struct file_operations msm_kms_fops = {
+	.owner = THIS_MODULE,
+	.open = msm_kms_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = msm_kms_release,
+};
+
+void msm_kms_debugfs_init(struct drm_minor *minor)
+{
+	struct drm_device *dev = minor->dev;
+	struct msm_drm_private *priv = dev->dev_private;
+
+	if (!priv->kms)
+		return;
+
+	drm_debugfs_create_files(msm_kms_debugfs_list,
+				 ARRAY_SIZE(msm_kms_debugfs_list),
+				 minor->debugfs_root, minor);
+	debugfs_create_file("kms", 0400, minor->debugfs_root,
+			    dev, &msm_kms_fops);
+
+	if (priv->kms->funcs->debugfs_init)
+		priv->kms->funcs->debugfs_init(priv->kms, minor);
+}
+#endif
-- 
2.47.3
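The stall_reenable_time_us file that this patch moves into
msm_gpu_debugfs.c reports how many microseconds remain until
stall-on-fault is re-armed. A sketch of the polling loop a
fault-injection test might use (the dri/0 debugfs path is an assumption;
a real test would locate the correct minor):

	#include <stdio.h>
	#include <unistd.h>

	/*
	 * Poll "stall_reenable_time_us" until the stall-on-fault cooldown
	 * described in the comment above has elapsed, sleeping for the
	 * reported remainder on each iteration.
	 */
	static void wait_for_stall_reenable(void)
	{
		const char *path =
			"/sys/kernel/debug/dri/0/stall_reenable_time_us";
		long long us;

		do {
			FILE *f = fopen(path, "r");

			us = 0;
			if (f) {
				if (fscanf(f, "%lld", &us) != 1)
					us = 0;	/* unreadable: assume re-armed */
				fclose(f);
			}
			if (us > 0)
				usleep(us);
		} while (us > 0);
	}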
From nobody Thu Oct 2 10:57:10 2025
From: Dmitry Baryshkov
Date: Thu, 18 Sep 2025 06:50:26 +0300
Subject: [PATCH v5 5/5] drm/msm: make it possible to disable GPU support
Message-Id: <20250918-msm-gpu-split-v5-5-44486f44d27d@oss.qualcomm.com>
References: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>
In-Reply-To: <20250918-msm-gpu-split-v5-0-44486f44d27d@oss.qualcomm.com>

Some of the platforms don't have an onboard GPU or don't provide
support for the GPU in the drm/msm driver. Make it possible to disable
the GPU part of the driver and build the KMS-only part.
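For illustration, userspace can detect a KMS-only build by the absence
of the render ioctls. A hedged sketch using the existing msm uapi (the
exact errno for an unregistered ioctl is left to the DRM core, so only
success is treated as meaningful here):

	#include <stdint.h>
	#include <xf86drm.h>
	#include "msm_drm.h"	/* uapi header; path is installation-dependent */

	/*
	 * With DRM_MSM_ADRENO disabled the driver registers the KMS-only
	 * drm_driver, which carries no GPU ioctls, so a GET_PARAM query
	 * that any GPU-enabled build answers will simply fail.
	 */
	static int msm_has_gpu(int fd)
	{
		struct drm_msm_param req = {
			.pipe = MSM_PIPE_3D0,
			.param = MSM_PARAM_GPU_ID,
		};

		return drmIoctl(fd, DRM_IOCTL_MSM_GET_PARAM, &req) == 0;
	}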
Signed-off-by: Dmitry Baryshkov --- drivers/gpu/drm/msm/Kconfig | 27 +++++-- drivers/gpu/drm/msm/Makefile | 15 ++-- drivers/gpu/drm/msm/msm_drv.c | 133 ++++++++++++++----------------= ---- drivers/gpu/drm/msm/msm_drv.h | 16 ---- drivers/gpu/drm/msm/msm_gem.h | 2 + drivers/gpu/drm/msm/msm_gem_vma.h | 14 ++++ drivers/gpu/drm/msm/msm_gpu.c | 45 ++++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 111 +++++++++++++++++++++++----- drivers/gpu/drm/msm/msm_submitqueue.c | 12 +-- 9 files changed, 240 insertions(+), 135 deletions(-) diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig index 250246f81ea94f01a016e8938f08e1aa4ce02442..f833aa2e6263ea5509d77cac42f= 94c7fe34e6ece 100644 --- a/drivers/gpu/drm/msm/Kconfig +++ b/drivers/gpu/drm/msm/Kconfig @@ -13,33 +13,43 @@ config DRM_MSM depends on QCOM_COMMAND_DB || QCOM_COMMAND_DB=3Dn depends on PM select IOMMU_IO_PGTABLE - select QCOM_MDT_LOADER if ARCH_QCOM select REGULATOR - select DRM_EXEC select DRM_GPUVM - select DRM_SCHED select SHMEM select TMPFS - select QCOM_SCM select QCOM_UBWC_CONFIG select WANT_DEV_COREDUMP select SND_SOC_HDMI_CODEC if SND_SOC - select SYNC_FILE select PM_OPP - select NVMEM select PM_GENERIC_DOMAINS select TRACE_GPU_MEM help DRM/KMS driver for MSM/snapdragon. =20 +config DRM_MSM_ADRENO + bool "Qualcomm Adreno GPU support" + default y + depends on DRM_MSM + select DRM_EXEC + select DRM_SCHED + select NVMEM + select QCOM_MDT_LOADER if ARCH_QCOM + select QCOM_SCM if ARCH_QCOM + select SYNC_FILE + help + Enable support for the GPU present on most of Qualcomm Snapdragon + platforms. Without this option the driver will only support the + unaccelerated display output. + If you are unsure, say Y. + config DRM_MSM_GPU_STATE bool - depends on DRM_MSM && (DEBUG_FS || DEV_COREDUMP) + depends on DRM_MSM_ADRENO && (DEBUG_FS || DEV_COREDUMP) default y =20 config DRM_MSM_GPU_SUDO bool "Enable SUDO flag on submits" - depends on DRM_MSM && EXPERT + depends on DRM_MSM_ADRENO && EXPERT default n help Enable userspace that has CAP_SYS_RAWIO to submit GPU commands @@ -189,6 +199,7 @@ config DRM_MSM_HDMI default y select DRM_DISPLAY_HDMI_HELPER select DRM_DISPLAY_HDMI_STATE_HELPER + select QCOM_SCM help Compile in support for the HDMI output MSM DRM driver. It can be a primary or a secondary display on device. 
Note that this is used diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index a475479fe201cb03937d30ee913c2e178675384e..ffa0767601fc8b2bc8f60506f0a= ac6f08a41f3c5 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -108,26 +108,29 @@ msm-display-$(CONFIG_DRM_MSM_KMS) +=3D \ =20 msm-y +=3D \ msm_drv.o \ - msm_fence.o \ msm_gem.o \ msm_gem_debugfs.o \ msm_gem_prime.o \ msm_gem_shrinker.o \ - msm_gem_submit.o \ msm_gem_vma.o \ + msm_io_utils.o \ + msm_iommu.o \ + msm_gpu_tracepoints.o \ + +msm-$(CONFIG_DRM_MSM_ADRENO) +=3D \ + msm_fence.o \ + msm_gem_submit.o \ msm_gem_vm_bind.o \ msm_gpu.o \ + msm_gpu_debugfs.o \ msm_gpu_devfreq.o \ msm_gpu_debugfs.o \ - msm_io_utils.o \ msm_ioctl.o \ - msm_iommu.o \ msm_perf.o \ msm_rd.o \ msm_ringbuffer.o \ msm_submitqueue.o \ msm_syncobj.o \ - msm_gpu_tracepoints.o \ =20 msm-$(CONFIG_DRM_MSM_KMS) +=3D \ msm_atomic.o \ @@ -163,7 +166,7 @@ msm-display-$(CONFIG_DRM_MSM_DSI_14NM_PHY) +=3D dsi/phy= /dsi_phy_14nm.o msm-display-$(CONFIG_DRM_MSM_DSI_10NM_PHY) +=3D dsi/phy/dsi_phy_10nm.o msm-display-$(CONFIG_DRM_MSM_DSI_7NM_PHY) +=3D dsi/phy/dsi_phy_7nm.o =20 -msm-y +=3D $(adreno-y) +msm-$(CONFIG_DRM_MSM_ADRENO) +=3D $(adreno-y) msm-$(CONFIG_DRM_MSM_KMS) +=3D $(msm-display-y) =20 obj-$(CONFIG_DRM_MSM) +=3D msm.o diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 28a5da1d1391f6c3cb2bfd175154016f8987b752..f7fb80b6c6d333149eaef17407c= fc06d2f1abf3f 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -51,7 +51,11 @@ static bool modeset =3D true; MODULE_PARM_DESC(modeset, "Use kernel modesetting [KMS] (1=3Don (default),= 0=3Ddisable)"); module_param(modeset, bool, 0600); =20 +#ifndef CONFIG_DRM_MSM_ADRENO +static bool separate_gpu_kms =3D true; +#else static bool separate_gpu_kms; +#endif MODULE_PARM_DESC(separate_gpu_kms, "Use separate DRM device for the GPU (0= =3Dsingle DRM device for both GPU and display (default), 1=3Dtwo DRM device= s)"); module_param(separate_gpu_kms, bool, 0400); =20 @@ -204,53 +208,20 @@ static int msm_drm_init(struct device *dev, const str= uct drm_driver *drv, return ret; } =20 -/* - * DRM operations: - */ - -static void load_gpu(struct drm_device *dev) +void __msm_context_destroy(struct kref *kref) { - static DEFINE_MUTEX(init_lock); - struct msm_drm_private *priv =3D dev->dev_private; + struct msm_context *ctx =3D container_of(kref, struct msm_context, ref); =20 - mutex_lock(&init_lock); + msm_submitqueue_fini(ctx); =20 - if (!priv->gpu) - priv->gpu =3D adreno_load_gpu(dev); + drm_gpuvm_put(ctx->vm); =20 - mutex_unlock(&init_lock); -} - -/** - * msm_context_vm - lazily create the context's VM - * - * @dev: the drm device - * @ctx: the context - * - * The VM is lazily created, so that userspace has a chance to opt-in to h= aving - * a userspace managed VM before the VM is created. - * - * Note that this does not return a reference to the VM. Once the VM is c= reated, - * it exists for the lifetime of the context. 
- */ -struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_contex= t *ctx) -{ - static DEFINE_MUTEX(init_lock); - struct msm_drm_private *priv =3D dev->dev_private; - - /* Once ctx->vm is created it is valid for the lifetime of the context: */ - if (ctx->vm) - return ctx->vm; - - mutex_lock(&init_lock); - if (!ctx->vm) { - ctx->vm =3D msm_gpu_create_private_vm( - priv->gpu, current, !ctx->userspace_managed_vm); - - } - mutex_unlock(&init_lock); +#ifdef CONFIG_DRM_MSM_ADRENO + kfree(ctx->comm); + kfree(ctx->cmdline); +#endif =20 - return ctx->vm; + kfree(ctx); } =20 static int context_init(struct drm_device *dev, struct drm_file *file) @@ -262,9 +233,6 @@ static int context_init(struct drm_device *dev, struct = drm_file *file) if (!ctx) return -ENOMEM; =20 - INIT_LIST_HEAD(&ctx->submitqueues); - rwlock_init(&ctx->queuelock); - kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); =20 @@ -280,7 +248,7 @@ static int msm_open(struct drm_device *dev, struct drm_= file *file) /* For now, load gpu on open.. to avoid the requirement of having * firmware in the initrd. */ - load_gpu(dev); + msm_gpu_load(dev); =20 return context_init(dev, file); } @@ -307,31 +275,13 @@ static void msm_postclose(struct drm_device *dev, str= uct drm_file *file) context_close(ctx); } =20 -static const struct drm_ioctl_desc msm_ioctls[] =3D { - DRM_IOCTL_DEF_DRV(MSM_GET_PARAM, msm_ioctl_get_param, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_SET_PARAM, msm_ioctl_set_param, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_GEM_NEW, msm_ioctl_gem_new, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_GEM_INFO, msm_ioctl_gem_info, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_PREP, msm_ioctl_gem_cpu_prep, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_FINI, msm_ioctl_gem_cpu_fini, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_GEM_SUBMIT, msm_ioctl_gem_submit, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_WAIT_FENCE, msm_ioctl_wait_fence, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_GEM_MADVISE, msm_ioctl_gem_madvise, DRM_RENDER_AL= LOW), - DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_NEW, msm_ioctl_submitqueue_new, DRM= _RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_CLOSE, msm_ioctl_submitqueue_close, DRM= _RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM= _RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(MSM_VM_BIND, msm_ioctl_vm_bind, DRM_RENDER_AL= LOW), -}; - static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file) { struct drm_device *dev =3D file->minor->dev; struct msm_drm_private *priv =3D dev->dev_private; =20 - if (!priv->gpu) - return; - - msm_gpu_show_fdinfo(priv->gpu, file->driver_priv, p); + if (priv->gpu) + msm_gpu_show_fdinfo(priv->gpu, file->driver_priv, p); =20 drm_show_memory_stats(p, file); } @@ -357,6 +307,23 @@ static const struct file_operations fops =3D { DRIVER_MODESET | \ 0 ) =20 +#ifdef CONFIG_DRM_MSM_ADRENO +static const struct drm_ioctl_desc msm_ioctls[] =3D { + DRM_IOCTL_DEF_DRV(MSM_GET_PARAM, msm_ioctl_get_param, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_SET_PARAM, msm_ioctl_set_param, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_GEM_NEW, msm_ioctl_gem_new, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_GEM_INFO, msm_ioctl_gem_info, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_PREP, msm_ioctl_gem_cpu_prep, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_FINI, msm_ioctl_gem_cpu_fini, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_GEM_SUBMIT, msm_ioctl_gem_submit, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_WAIT_FENCE, 
msm_ioctl_wait_fence, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_GEM_MADVISE, msm_ioctl_gem_madvise, DRM_RENDER_AL= LOW), + DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_NEW, msm_ioctl_submitqueue_new, DRM= _RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_CLOSE, msm_ioctl_submitqueue_close, DRM= _RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM= _RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(MSM_VM_BIND, msm_ioctl_vm_bind, DRM_RENDER_AL= LOW), +}; + static const struct drm_driver msm_driver =3D { .driver_features =3D DRIVER_FEATURES_GPU | DRIVER_FEATURES_KMS, .open =3D msm_open, @@ -380,39 +347,40 @@ static const struct drm_driver msm_driver =3D { .patchlevel =3D MSM_VERSION_PATCHLEVEL, }; =20 -static const struct drm_driver msm_kms_driver =3D { - .driver_features =3D DRIVER_FEATURES_KMS, +static const struct drm_driver msm_gpu_driver =3D { + .driver_features =3D DRIVER_FEATURES_GPU, .open =3D msm_open, .postclose =3D msm_postclose, - .dumb_create =3D msm_gem_dumb_create, - .dumb_map_offset =3D msm_gem_dumb_map_offset, .gem_prime_import_sg_table =3D msm_gem_prime_import_sg_table, #ifdef CONFIG_DEBUG_FS .debugfs_init =3D msm_debugfs_init, #endif - MSM_FBDEV_DRIVER_OPS, .show_fdinfo =3D msm_show_fdinfo, + .ioctls =3D msm_ioctls, + .num_ioctls =3D ARRAY_SIZE(msm_ioctls), .fops =3D &fops, - .name =3D "msm-kms", + .name =3D "msm", .desc =3D "MSM Snapdragon DRM", .major =3D MSM_VERSION_MAJOR, .minor =3D MSM_VERSION_MINOR, .patchlevel =3D MSM_VERSION_PATCHLEVEL, }; +#endif =20 -static const struct drm_driver msm_gpu_driver =3D { - .driver_features =3D DRIVER_FEATURES_GPU, +static const struct drm_driver msm_kms_driver =3D { + .driver_features =3D DRIVER_FEATURES_KMS, .open =3D msm_open, .postclose =3D msm_postclose, + .dumb_create =3D msm_gem_dumb_create, + .dumb_map_offset =3D msm_gem_dumb_map_offset, .gem_prime_import_sg_table =3D msm_gem_prime_import_sg_table, #ifdef CONFIG_DEBUG_FS .debugfs_init =3D msm_debugfs_init, #endif + MSM_FBDEV_DRIVER_OPS, .show_fdinfo =3D msm_show_fdinfo, - .ioctls =3D msm_ioctls, - .num_ioctls =3D ARRAY_SIZE(msm_ioctls), .fops =3D &fops, - .name =3D "msm", + .name =3D "msm-kms", .desc =3D "MSM Snapdragon DRM", .major =3D MSM_VERSION_MAJOR, .minor =3D MSM_VERSION_MINOR, @@ -511,6 +479,7 @@ bool msm_disp_drv_should_bind(struct device *dev, bool = dpu_driver) } #endif =20 +#ifdef CONFIG_DRM_MSM_ADRENO /* * We don't know what's the best binding to link the gpu with the drm devi= ce. 
* Fow now, we just hunt for all the possible gpus that we support, and ad= d them @@ -549,6 +518,12 @@ static int msm_drm_bind(struct device *dev) &msm_driver, NULL); } +#else +static int msm_drm_bind(struct device *dev) +{ + return msm_drm_init(dev, &msm_kms_driver, NULL); +} +#endif =20 static void msm_drm_unbind(struct device *dev) { @@ -583,11 +558,13 @@ int msm_drv_probe(struct device *master_dev, return ret; } =20 +#ifdef CONFIG_DRM_MSM_ADRENO if (!msm_gpu_no_components()) { ret =3D add_gpu_components(master_dev, &match); if (ret) return ret; } +#endif =20 /* on all devices that I am aware of, iommu's which can map * any address the cpu can see are used: @@ -603,6 +580,7 @@ int msm_drv_probe(struct device *master_dev, return 0; } =20 +#ifdef CONFIG_DRM_MSM_ADRENO int msm_gpu_probe(struct platform_device *pdev, const struct component_ops *ops) { @@ -630,6 +608,7 @@ void msm_gpu_remove(struct platform_device *pdev, { msm_drm_uninit(&pdev->dev, ops); } +#endif =20 static int __init msm_drm_register(void) { diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 646ddf2c320ac94ff7b0f5c21dab60fe777a10bf..dd77e26895fb493ce7318158143= 4fb42885a089e 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -436,22 +436,6 @@ static inline void msm_mdss_unregister(void) {} =20 #ifdef CONFIG_DEBUG_FS void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file = *m); -void msm_gpu_debugfs_init(struct drm_minor *minor); -void msm_gpu_debugfs_late_init(struct drm_device *dev); -int msm_rd_debugfs_init(struct drm_minor *minor); -void msm_rd_debugfs_cleanup(struct msm_drm_private *priv); -__printf(3, 4) -void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *su= bmit, - const char *fmt, ...); -int msm_perf_debugfs_init(struct drm_minor *minor); -void msm_perf_debugfs_cleanup(struct msm_drm_private *priv); -#else -__printf(3, 4) -static inline void msm_rd_dump_submit(struct msm_rd_state *rd, - struct msm_gem_submit *submit, - const char *fmt, ...) {} -static inline void msm_rd_debugfs_cleanup(struct msm_drm_private *priv) {} -static inline void msm_perf_debugfs_cleanup(struct msm_drm_private *priv) = {} #endif =20 struct clk *msm_clk_get(struct platform_device *pdev, const char *name); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 3a0086a883a2c2e57b01a5add17be852f2877865..088a84dbc564066310c6ef9d907= 7b802c73babb9 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -68,6 +68,7 @@ struct msm_gem_vm { /** @base: Inherit from drm_gpuvm. */ struct drm_gpuvm base; =20 +#ifdef CONFIG_DRM_MSM_ADRENO /** * @sched: Scheduler used for asynchronous VM_BIND request. 
 	 *
@@ -94,6 +95,7 @@ struct msm_gem_vm {
 	 */
 		atomic_t in_flight;
 	} prealloc_throttle;
+#endif
 
 	/**
 	 * @mm: Memory management for kernel managed VA allocations
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.h b/drivers/gpu/drm/msm/msm_gem_vma.h
index f702f81529e72b86bffb4960408f1912bc65851a..0cf92b111c17bfc1a7d3db10e4395face1afaa83 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.h
+++ b/drivers/gpu/drm/msm/msm_gem_vma.h
@@ -95,11 +95,25 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
 			   op->range, op->prot);
 }
 
+#ifdef CONFIG_DRM_MSM_ADRENO
 int msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *_arg);
 int msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg);
 int msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *_arg);
 
 int msm_gem_vm_sched_init(struct msm_gem_vm *vm, struct drm_device *drm);
 void msm_gem_vm_sched_fini(struct msm_gem_vm *vm);
+#else
+
+#define msm_gem_vm_sm_step_map NULL
+#define msm_gem_vm_sm_step_remap NULL
+#define msm_gem_vm_sm_step_unmap NULL
+
+static inline int msm_gem_vm_sched_init(struct msm_gem_vm *vm, struct drm_device *drm)
+{
+	return -EINVAL;
+}
+
+static inline void msm_gem_vm_sched_fini(struct msm_gem_vm *vm) {}
+#endif
 
 #endif
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 17759abc46d7d7af4117b1d71f1d5fba6ba0b61c..9ac6f04e95a61143dc6372fde165d45a306a495c 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -1146,3 +1146,48 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
 
 	platform_set_drvdata(gpu->pdev, NULL);
 }
+
+void msm_gpu_load(struct drm_device *dev)
+{
+	static DEFINE_MUTEX(init_lock);
+	struct msm_drm_private *priv = dev->dev_private;
+
+	mutex_lock(&init_lock);
+
+	if (!priv->gpu)
+		priv->gpu = adreno_load_gpu(dev);
+
+	mutex_unlock(&init_lock);
+}
+
+/**
+ * msm_context_vm - lazily create the context's VM
+ *
+ * @dev: the drm device
+ * @ctx: the context
+ *
+ * The VM is lazily created, so that userspace has a chance to opt-in to having
+ * a userspace managed VM before the VM is created.
+ *
+ * Note that this does not return a reference to the VM. Once the VM is created,
+ * it exists for the lifetime of the context.
+ */
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
+{
+	static DEFINE_MUTEX(init_lock);
+	struct msm_drm_private *priv = dev->dev_private;
+
+	/* Once ctx->vm is created it is valid for the lifetime of the context: */
+	if (ctx->vm)
+		return ctx->vm;
+
+	mutex_lock(&init_lock);
+	if (!ctx->vm) {
+		ctx->vm = msm_gpu_create_private_vm(
+			priv->gpu, current, !ctx->userspace_managed_vm);
+
+	}
+	mutex_unlock(&init_lock);
+
+	return ctx->vm;
+}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index a597f2bee30b6370ecc3639bfe1072c85993e789..def2edadbface07d26c6e7c6add0d08352b8d748 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -345,20 +345,6 @@ struct msm_gpu_perfcntr {
  * struct msm_context - per-drm_file context
  */
 struct msm_context {
-	/** @queuelock: synchronizes access to submitqueues list */
-	rwlock_t queuelock;
-
-	/** @submitqueues: list of &msm_gpu_submitqueue created by userspace */
-	struct list_head submitqueues;
-
-	/**
-	 * @queueid:
-	 *
-	 * Counter incremented each time a submitqueue is created, used to
-	 * assign &msm_gpu_submitqueue.id
-	 */
-	int queueid;
-
 	/**
 	 * @closed: The device file associated with this context has been closed.
 	 *
@@ -394,6 +380,20 @@ struct msm_context {
 	 * pointer to the previous context.
 	 */
 	int seqno;
+#ifdef CONFIG_DRM_MSM_ADRENO
+	/** @queuelock: synchronizes access to submitqueues list */
+	rwlock_t queuelock;
+
+	/** @submitqueues: list of &msm_gpu_submitqueue created by userspace */
+	struct list_head submitqueues;
+
+	/**
+	 * @queueid:
+	 *
+	 * Counter incremented each time a submitqueue is created, used to
+	 * assign &msm_gpu_submitqueue.id
+	 */
+	int queueid;
 
 	/**
 	 * @sysprof:
@@ -455,6 +455,7 @@ struct msm_context {
 	 * level.
 	 */
 	struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
+#endif
 
 	/**
 	 * @ctx_mem:
@@ -613,6 +614,7 @@ struct msm_gpu_state {
 	struct msm_gpu_state_bo *bos;
 };
 
+#ifdef CONFIG_DRM_MSM_ADRENO
 static inline void gpu_write(struct msm_gpu *gpu, u32 reg, u32 data)
 {
 	trace_msm_gpu_regaccess(reg);
@@ -673,6 +675,7 @@ void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p);
 
 int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx);
+void msm_submitqueue_fini(struct msm_context *ctx);
 struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id);
 int msm_submitqueue_create(struct drm_device *drm,
@@ -688,6 +691,44 @@ void msm_submitqueue_destroy(struct kref *kref);
 int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof);
 void __msm_context_destroy(struct kref *kref);
 
+static inline void msm_submitqueue_put(struct msm_gpu_submitqueue *queue)
+{
+	if (queue)
+		kref_put(&queue->ref, msm_submitqueue_destroy);
+}
+
+int msm_context_set_sysprof(struct msm_context *ctx,
+			    struct msm_gpu *gpu, int sysprof);
+#else
+static inline void msm_gpu_show_fdinfo(struct msm_gpu *gpu,
+				       struct msm_context *ctx,
+				       struct drm_printer *p)
+{
+}
+
+static inline int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx)
+{
+	return -ENXIO;
+}
+
+static inline void msm_submitqueue_fini(struct msm_context *ctx)
+{
+}
+
+static inline void msm_submitqueue_close(struct msm_context *ctx)
+{
+}
+
+static inline int msm_context_set_sysprof(struct msm_context *ctx,
+					  struct msm_gpu *gpu,
+					  int sysprof)
+{
+	return 0;
+}
+#endif
+
+void __msm_context_destroy(struct kref *kref);
+
 static inline void msm_context_put(struct msm_context *ctx)
 {
 	kref_put(&ctx->ref, __msm_context_destroy);
@@ -700,6 +741,7 @@ static inline struct msm_context *msm_context_get(
 	return ctx;
 }
 
+#ifdef CONFIG_DRM_MSM_ADRENO
 void msm_devfreq_init(struct msm_gpu *gpu);
 void msm_devfreq_cleanup(struct msm_gpu *gpu);
 void msm_devfreq_resume(struct msm_gpu *gpu);
@@ -726,6 +768,7 @@ struct drm_gpuvm *
 msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
 			  bool kernel_managed);
 
+void msm_gpu_load(struct drm_device *dev);
 void msm_gpu_cleanup(struct msm_gpu *gpu);
 
 struct msm_gpu *adreno_load_gpu(struct drm_device *dev);
@@ -733,12 +776,6 @@ bool adreno_has_gpu(struct device_node *node);
 void __init adreno_register(void);
 void __exit adreno_unregister(void);
 
-static inline void msm_submitqueue_put(struct msm_gpu_submitqueue *queue)
-{
-	if (queue)
-		kref_put(&queue->ref, msm_submitqueue_destroy);
-}
-
 static inline struct msm_gpu_state *msm_gpu_crashstate_get(struct msm_gpu *gpu)
 {
 	struct msm_gpu_state *state = NULL;
@@ -776,5 +813,39 @@ void msm_gpu_fault_crashstate_capture(struct msm_gpu *gpu, struct msm_gpu_fault_
 #define check_apriv(gpu, flags) \
 	(((gpu)->hw_apriv ? MSM_BO_MAP_PRIV : 0) | (flags))
 
+#else /* ! CONFIG_DRM_MSM_ADRENO */
+static inline void __init adreno_register(void)
+{
+}
+
+static inline void __exit adreno_unregister(void)
+{
+}
+
+static inline void msm_gpu_load(struct drm_device *dev)
+{
+}
+#endif /* ! CONFIG_DRM_MSM_ADRENO */
+
+#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_DRM_MSM_ADRENO)
+void msm_gpu_debugfs_init(struct drm_minor *minor);
+void msm_gpu_debugfs_late_init(struct drm_device *dev);
+int msm_rd_debugfs_init(struct drm_minor *minor);
+void msm_rd_debugfs_cleanup(struct msm_drm_private *priv);
+__printf(3, 4)
+void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
+		const char *fmt, ...);
+int msm_perf_debugfs_init(struct drm_minor *minor);
+void msm_perf_debugfs_cleanup(struct msm_drm_private *priv);
+#else
+static inline void msm_gpu_debugfs_init(struct drm_minor *minor) {}
+static inline void msm_gpu_debugfs_late_init(struct drm_device *dev) {}
+__printf(3, 4)
+static inline void msm_rd_dump_submit(struct msm_rd_state *rd,
+				      struct msm_gem_submit *submit,
+				      const char *fmt, ...) {}
+static inline void msm_rd_debugfs_cleanup(struct msm_drm_private *priv) {}
+static inline void msm_perf_debugfs_cleanup(struct msm_drm_private *priv) {}
+#endif
 
 #endif /* __MSM_GPU_H__ */
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index d53dfad16bde7d5ae7b1e48f221696d525a10965..aa8fe0ccd80b4942bc78195a40ff80aaac9459e2 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -49,10 +49,8 @@ int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sy
 	return 0;
 }
 
-void __msm_context_destroy(struct kref *kref)
+void msm_submitqueue_fini(struct msm_context *ctx)
 {
-	struct msm_context *ctx = container_of(kref,
-			struct msm_context, ref);
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
@@ -62,11 +60,6 @@ void __msm_context_destroy(struct kref *kref)
 		drm_sched_entity_destroy(ctx->entities[i]);
 		kfree(ctx->entities[i]);
 	}
-
-	drm_gpuvm_put(ctx->vm);
-	kfree(ctx->comm);
-	kfree(ctx->cmdline);
-	kfree(ctx);
 }
 
 void msm_submitqueue_destroy(struct kref *kref)
@@ -264,6 +257,9 @@ int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx)
 	struct msm_drm_private *priv = drm->dev_private;
 	int default_prio, max_priority;
 
+	INIT_LIST_HEAD(&ctx->submitqueues);
+	rwlock_init(&ctx->queuelock);
+
 	if (!priv->gpu)
 		return -ENODEV;
 
-- 
2.47.3
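
For readers following the refactoring outside of a kernel tree, two recurring patterns in this patch are worth spelling out. First, GPU-only entry points gain static-inline stubs for !CONFIG_DRM_MSM_ADRENO builds, so KMS-only configurations compile without #ifdefs at every call site. Second, msm_gpu_load() and msm_context_vm() rely on mutex-protected lazy initialization: an unlocked fast path followed by a re-check under a function-local static lock, so the object is created at most once even with racing callers (and, for the VM, only after userspace has had a chance to opt in to a userspace-managed VM). Below is a minimal userspace sketch of both patterns; every name in it (CONFIG_FEATURE_GPU, ctx_get_vm, gpu_create_vm) is a hypothetical stand-in, not a symbol from this driver.

/*
 * Standalone sketch, not kernel code: hypothetical names throughout.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Pattern 1: compile-time stubbing. With the optional "GPU" half
 * configured out, callers link against a static-inline stub instead
 * of sprouting #ifdefs at each call site.
 */
#define CONFIG_FEATURE_GPU 1	/* set to 0 for the "KMS-only" flavour */

#if CONFIG_FEATURE_GPU
static void *gpu_create_vm(void)
{
	return malloc(1);	/* stands in for the real VM constructor */
}
#else
static inline void *gpu_create_vm(void) { return NULL; }
#endif

struct ctx {
	void *vm;
};

/*
 * Pattern 2: mutex-protected lazy creation, the same shape as
 * msm_context_vm() above: unlocked fast path, then re-check under a
 * local static lock so two racing callers cannot both create the VM.
 */
static void *ctx_get_vm(struct ctx *ctx)
{
	static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;

	if (ctx->vm)			/* fast path: already created */
		return ctx->vm;

	pthread_mutex_lock(&init_lock);
	if (!ctx->vm)			/* re-check: we may have lost the race */
		ctx->vm = gpu_create_vm();
	pthread_mutex_unlock(&init_lock);

	return ctx->vm;
}

int main(void)
{
	struct ctx ctx = { 0 };

	printf("vm = %p\n", ctx_get_vm(&ctx));	/* created on first use */
	printf("vm = %p\n", ctx_get_vm(&ctx));	/* same pointer afterwards */
	free(ctx.vm);
	return 0;
}

As in the kernel code, the sketch's unlocked fast path assumes a pointer-sized store is effectively atomic; the lock only serializes creation.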