From nobody Wed Oct 8 10:02:27 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
    Danilo Krummrich, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 01/42] drm/gpuvm: Fix doc comments
Date: Sun, 29 Jun 2025 13:12:44 -0700
Message-ID: <20250629201530.25775-2-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

Correctly summarize drm_gpuvm_sm_map/unmap, and fix the parameter order
and names.  Just something I noticed in passing.

v2: Don't rename the arg names in prototypes to match function
    declarations [Danilo]

Signed-off-by: Rob Clark
Acked-by: Danilo Krummrich
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/drm_gpuvm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f9eb56f24bef..0ca717130541 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2299,13 +2299,13 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 }
 
 /**
- * drm_gpuvm_sm_map() - creates the &drm_gpuva_op split/merge steps
+ * drm_gpuvm_sm_map() - calls the &drm_gpuva_op split/merge steps
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
+ * @priv: pointer to a driver private data structure
  * @req_addr: the start address of the new mapping
  * @req_range: the range of the new mapping
  * @req_obj: the &drm_gem_object to map
  * @req_offset: the offset within the &drm_gem_object
- * @priv: pointer to a driver private data structure
  *
  * This function iterates the given range of the GPU VA space.
  * It utilizes the &drm_gpuvm_ops to call back into the driver providing
  * the split and merge steps.
@@ -2349,7 +2349,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
 
 /**
- * drm_gpuvm_sm_unmap() - creates the &drm_gpuva_ops to split on unmap
+ * drm_gpuvm_sm_unmap() - calls the &drm_gpuva_ops to split on unmap
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
  * @priv: pointer to a driver private data structure
  * @req_addr: the start address of the range to unmap
-- 
2.50.0

From nobody Wed Oct 8 10:02:27 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
    Danilo Krummrich, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 02/42] drm/gpuvm: Add locking helpers
Date: Sun, 29 Jun 2025 13:12:45 -0700
Message-ID: <20250629201530.25775-3-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

For UNMAP/REMAP steps we may need to lock objects that are not
explicitly listed in the VM_BIND ioctl in order to
tear-down unmapped VAs.  These helpers handle locking/preparing the
needed objects.

Note that these functions do not strictly require the VM changes to be
applied before the next drm_gpuvm_sm_map_lock()/_unmap_lock() call.  In
the case that VM changes from an earlier drm_gpuvm_sm_map()/_unmap()
call result in a differing sequence of steps when the VM changes are
actually applied, it will be the same set of GEM objects involved, so
the locking is still correct.

v2: Rename to drm_gpuvm_sm_*_exec_locked() [Danilo]
v3: Expand comments to show expected usage, and explain how the usage
    is safe in the case of overlapping driver VM_BIND ops.

Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
Acked-by: Danilo Krummrich
---
 drivers/gpu/drm/drm_gpuvm.c | 126 ++++++++++++++++++++++++++++++++++++
 include/drm/drm_gpuvm.h     |   8 +++
 2 files changed, 134 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 0ca717130541..a811471b888e 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2390,6 +2390,132 @@ drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap);
 
+static int
+drm_gpuva_sm_step_lock(struct drm_gpuva_op *op, void *priv)
+{
+	struct drm_exec *exec = priv;
+
+	switch (op->op) {
+	case DRM_GPUVA_OP_REMAP:
+		if (op->remap.unmap->va->gem.obj)
+			return drm_exec_lock_obj(exec, op->remap.unmap->va->gem.obj);
+		return 0;
+	case DRM_GPUVA_OP_UNMAP:
+		if (op->unmap.va->gem.obj)
+			return drm_exec_lock_obj(exec, op->unmap.va->gem.obj);
+		return 0;
+	default:
+		return 0;
+	}
+}
+
+static const struct drm_gpuvm_ops lock_ops = {
+	.sm_step_map = drm_gpuva_sm_step_lock,
+	.sm_step_remap = drm_gpuva_sm_step_lock,
+	.sm_step_unmap = drm_gpuva_sm_step_lock,
+};
+
+/**
+ * drm_gpuvm_sm_map_exec_lock() - locks the objects touched by a drm_gpuvm_sm_map()
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
+ * @exec: the &drm_exec locking context
+ * @num_fences: for newly mapped objects, the # of fences to reserve
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ * @req_obj: the &drm_gem_object to map
+ * @req_offset: the offset within the &drm_gem_object
+ *
+ * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
+ * remapped, and locks+prepares (drm_exec_prepare_object()) objects that
+ * will be newly mapped.
+ *
+ * The expected usage is:
+ *
+ *	vm_bind {
+ *		struct drm_exec exec;
+ *
+ *		// IGNORE_DUPLICATES is required, INTERRUPTIBLE_WAIT is recommended:
+ *		drm_exec_init(&exec, IGNORE_DUPLICATES | INTERRUPTIBLE_WAIT, 0);
+ *
+ *		drm_exec_until_all_locked (&exec) {
+ *			for_each_vm_bind_operation {
+ *				switch (op->op) {
+ *				case DRIVER_OP_UNMAP:
+ *					ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->addr, op->range);
+ *					break;
+ *				case DRIVER_OP_MAP:
+ *					ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
+ *									 op->addr, op->range,
+ *									 obj, op->obj_offset);
+ *					break;
+ *				}
+ *
+ *				drm_exec_retry_on_contention(&exec);
+ *				if (ret)
+ *					return ret;
+ *			}
+ *		}
+ *	}
+ *
+ * This enables all locking to be performed before the driver begins modifying
+ * the VM.  This is safe to do in the case of overlapping DRIVER_VM_BIND_OPs,
+ * where an earlier op can alter the sequence of steps generated for a later
+ * op, because the later altered step will involve the same GEM object(s)
+ * already seen in the earlier locking step.  For example:
+ *
+ * 1) An earlier driver DRIVER_OP_UNMAP op removes the need for a
+ *    DRM_GPUVA_OP_REMAP/UNMAP step.  This is safe because we've already
+ *    locked the GEM object in the earlier DRIVER_OP_UNMAP op.
+ *
+ * 2) An earlier DRIVER_OP_MAP op overlaps with a later DRIVER_OP_MAP/UNMAP
+ *    op, introducing a DRM_GPUVA_OP_REMAP/UNMAP that wouldn't have been
+ *    required without the earlier DRIVER_OP_MAP.  This is safe because we've
+ *    already locked the GEM object in the earlier DRIVER_OP_MAP step.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
+			   struct drm_exec *exec, unsigned int num_fences,
+			   u64 req_addr, u64 req_range,
+			   struct drm_gem_object *req_obj, u64 req_offset)
+{
+	if (req_obj) {
+		int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
+		if (ret)
+			return ret;
+	}
+
+	return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
+				  req_addr, req_range,
+				  req_obj, req_offset);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_exec_lock);
+
+/**
+ * drm_gpuvm_sm_unmap_exec_lock() - locks the objects touched by drm_gpuvm_sm_unmap()
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
+ * @exec: the &drm_exec locking context
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ *
+ * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
+ * remapped by drm_gpuvm_sm_unmap().
+ *
+ * See drm_gpuvm_sm_map_exec_lock() for expected usage.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
+			     u64 req_addr, u64 req_range)
+{
+	return __drm_gpuvm_sm_unmap(gpuvm, &lock_ops, exec,
+				    req_addr, req_range);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap_exec_lock);
+
 static struct drm_gpuva_op *
 gpuva_op_alloc(struct drm_gpuvm *gpuvm)
 {
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 2a9629377633..274532facfd6 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1211,6 +1211,14 @@ int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
 int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
 		       u64 addr, u64 range);
 
+int drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
+			       struct drm_exec *exec, unsigned int num_fences,
+			       u64 req_addr, u64 req_range,
+			       struct drm_gem_object *obj, u64 offset);
+
+int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
+				 u64 req_addr, u64 req_range);
+
 void drm_gpuva_map(struct drm_gpuvm *gpuvm,
 		   struct drm_gpuva *va,
 		   struct drm_gpuva_op_map *op);
-- 
2.50.0

From nobody Wed Oct 8 10:02:27 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
    Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Simona Vetter, Dmitry Baryshkov, Abhinav Kumar,
    Jessica Zhang, Sean Paul, Marijn Suijten,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 03/42] drm/gem: Add ww_acquire_ctx support to drm_gem_lru_scan()
Date: Sun, 29 Jun 2025 13:12:46 -0700
Message-ID: <20250629201530.25775-4-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

If the callback is going to have to attempt to grab more locks, it is
useful to have a ww_acquire_ctx to avoid locking order problems.

Why not use the drm_exec helper instead?  Mainly because (a) where
ww_acquire_init() is called is awkward, and (b) we don't really need to
retry after backoff, we can just move on to the next object.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/drm_gem.c              | 14 +++++++++++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 24 +++++++++++++-----------
 include/drm/drm_gem.h                  | 10 ++++++----
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1e659d2660f7..95158cd7145e 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1460,12 +1460,14 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
  * @nr_to_scan: The number of pages to try to reclaim
  * @remaining: The number of pages left to reclaim, should be initialized by caller
  * @shrink: Callback to try to shrink/reclaim the object.
+ * @ticket: Optional ww_acquire_ctx context to use for locking */ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned int nr_to_scan, unsigned long *remaining, - bool (*shrink)(struct drm_gem_object *obj)) + bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticke= t), + struct ww_acquire_ctx *ticket) { struct drm_gem_lru still_in_lru; struct drm_gem_object *obj; @@ -1498,17 +1500,20 @@ drm_gem_lru_scan(struct drm_gem_lru *lru, */ mutex_unlock(lru->lock); =20 + if (ticket) + ww_acquire_init(ticket, &reservation_ww_class); + /* * Note that this still needs to be trylock, since we can * hit shrinker in response to trying to get backing pages * for this obj (ie. while it's lock is already held) */ - if (!dma_resv_trylock(obj->resv)) { + if (!ww_mutex_trylock(&obj->resv->lock, ticket)) { *remaining +=3D obj->size >> PAGE_SHIFT; goto tail; } =20 - if (shrink(obj)) { + if (shrink(obj, ticket)) { freed +=3D obj->size >> PAGE_SHIFT; =20 /* @@ -1522,6 +1527,9 @@ drm_gem_lru_scan(struct drm_gem_lru *lru, =20 dma_resv_unlock(obj->resv); =20 + if (ticket) + ww_acquire_fini(ticket); + tail: drm_gem_object_put(obj); mutex_lock(lru->lock); diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/m= sm_gem_shrinker.c index 07ca4ddfe4e3..de185fc34084 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -44,7 +44,7 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct = shrink_control *sc) } =20 static bool -purge(struct drm_gem_object *obj) +purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) { if (!is_purgeable(to_msm_bo(obj))) return false; @@ -58,7 +58,7 @@ purge(struct drm_gem_object *obj) } =20 static bool -evict(struct drm_gem_object *obj) +evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) { if (is_unevictable(to_msm_bo(obj))) return false; @@ -79,21 +79,21 @@ wait_for_idle(struct drm_gem_object *obj) } =20 static bool -active_purge(struct 
drm_gem_object *obj)
+active_purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;

-	return purge(obj);
+	return purge(obj, ticket);
 }

 static bool
-active_evict(struct drm_gem_object *obj)
+active_evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;

-	return evict(obj);
+	return evict(obj, ticket);
 }

 static unsigned long
@@ -102,7 +102,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	struct msm_drm_private *priv = shrinker->private_data;
 	struct {
 		struct drm_gem_lru *lru;
-		bool (*shrink)(struct drm_gem_object *obj);
+		bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket);
 		bool cond;
 		unsigned long freed;
 		unsigned long remaining;
@@ -122,8 +122,9 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 			continue;
 		stages[i].freed =
 			drm_gem_lru_scan(stages[i].lru, nr,
-					 &stages[i].remaining,
-					 stages[i].shrink);
+					 &stages[i].remaining,
+					 stages[i].shrink,
+					 NULL);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;
@@ -164,7 +165,7 @@ msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan)
 static const int vmap_shrink_limit = 15;

 static bool
-vmap_shrink(struct drm_gem_object *obj)
+vmap_shrink(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_vunmapable(to_msm_bo(obj)))
 		return false;
@@ -192,7 +193,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 		unmapped += drm_gem_lru_scan(lrus[idx],
 				vmap_shrink_limit - unmapped,
 				&remaining,
-				vmap_shrink);
+				vmap_shrink,
+				NULL);
 	}

 	*(unsigned long *)ptr += unmapped;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index a3133a08267c..02b5e9402e32 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -559,10 +559,12 @@ void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
 void drm_gem_lru_remove(struct drm_gem_object *obj);
 void drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj);
 void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
-unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
-			       unsigned int nr_to_scan,
-			       unsigned long *remaining,
-			       bool (*shrink)(struct drm_gem_object *obj));
+unsigned long
+drm_gem_lru_scan(struct drm_gem_lru *lru,
+		 unsigned int nr_to_scan,
+		 unsigned long *remaining,
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket);

 int drm_gem_evict_locked(struct drm_gem_object *obj);

-- 
2.50.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Dmitry Baryshkov, Rob Clark, Sean Paul, Konrad Dybcio, Dmitry Baryshkov,
	Abhinav Kumar, Jessica Zhang, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 04/42] drm/msm: Rename msm_file_private -> msm_context
Date: Sun, 29 Jun 2025 13:12:47 -0700
Message-ID: <20250629201530.25775-5-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
From: Rob Clark

This is a more descriptive name.

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  6 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.h |  4 +--
 drivers/gpu/drm/msm/msm_drv.c           | 14 ++++-----
 drivers/gpu/drm/msm/msm_gem.c           |  2 +-
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.c           |  4 +--
 drivers/gpu/drm/msm/msm_gpu.h           | 39 ++++++++++++-------------
 drivers/gpu/drm/msm/msm_submitqueue.c   | 27 +++++++++--------
 9 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 491fde0083a2..a8e6f62b6873 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -111,7 +111,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 		struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 86bff915c3e7..5f4de4c25b97 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -351,7 +351,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }

-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
 	struct adreno_gpu *adreno_gpu =
to_adreno_gpu(gpu);
@@ -439,7 +439,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	}
 }

-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len)
 {
 	struct drm_device *drm = gpu->dev;
@@ -485,7 +485,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	case MSM_PARAM_SYSPROF:
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
-		return msm_file_private_set_sysprof(ctx, gpu, value);
+		return msm_context_set_sysprof(ctx, gpu, value);
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index bc063594a359..a4abafca7782 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -581,9 +581,9 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */
 #define ADRENO_VM_START 0x100000000ULL
 u64 adreno_private_address_space_size(struct msm_gpu *gpu);
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len);
 const struct firmware *adreno_request_fw(struct adreno_gpu *adreno_gpu,
 					 const char *fwname);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index d007687c2446..324ee2089b34 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -337,7 +337,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident =
ATOMIC_INIT(0);
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;

 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -367,23 +367,23 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 	return context_init(dev, file);
 }

-static void context_close(struct msm_file_private *ctx)
+static void context_close(struct msm_context *ctx)
 {
 	msm_submitqueue_close(ctx);
-	msm_file_private_put(ctx);
+	msm_context_put(ctx);
 }

 static void msm_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	/*
 	 * It is not possible to set sysprof param to non-zero if gpu
 	 * is not initialized:
 	 */
 	if (priv->gpu)
-		msm_file_private_set_sysprof(ctx, priv->gpu, 0);
+		msm_context_set_sysprof(ctx, priv->gpu, 0);

 	context_close(ctx);
 }
@@ -515,7 +515,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 		uint64_t *iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	if (!priv->gpu)
 		return -EINVAL;
@@ -535,7 +535,7 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 		uint64_t iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	if (!priv->gpu)
 		return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 2995e80fec3b..259919b0e887 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -44,7 +44,7 @@ static void update_device_mem(struct msm_drm_private *priv, ssize_t size)

 static void update_ctx_mem(struct drm_file *file, ssize_t size)
 {
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx =
file->driver_priv;
 	uint64_t ctx_mem = atomic64_add_return(size, &ctx->ctx_mem);

 	rcu_read_lock(); /* Locks file->pid! */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index d4f71bb54e84..3aabf7f1da6d 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -651,7 +651,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct drm_msm_gem_submit *args = data;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	struct msm_gem_submit *submit = NULL;
 	struct msm_gpu *gpu = priv->gpu;
 	struct msm_gpu_submitqueue *queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 3947f7ba1421..a8280b579832 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -148,7 +148,7 @@ int msm_gpu_pm_suspend(struct msm_gpu *gpu)
 	return 0;
 }

-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p)
 {
 	drm_printf(p, "drm-engine-gpu:\t%llu ns\n", ctx->elapsed_ns);
@@ -342,7 +342,7 @@ static void retire_submits(struct msm_gpu *gpu);

 static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
 {
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct task_struct *task;

 	WARN_ON(!mutex_is_locked(&submit->gpu->lock));
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 5bf7cd985b9c..937b7cdddadd 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -22,7 +22,7 @@ struct msm_gem_submit;
 struct msm_gpu_perfcntr;
 struct msm_gpu_state;
-struct msm_file_private;
+struct msm_context;

 struct msm_gpu_config {
	const char *ioname;
@@ -44,9 +44,9 @@ struct msm_gpu_config {
  * +
z180_gpu
  */
 struct msm_gpu_funcs {
-	int (*get_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*get_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t *value, uint32_t *len);
-	int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*set_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t value, uint32_t len);
 	int (*hw_init)(struct msm_gpu *gpu);

@@ -341,7 +341,7 @@ struct msm_gpu_perfcntr {
 #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH)

 /**
- * struct msm_file_private - per-drm_file context
+ * struct msm_context - per-drm_file context
  *
  * @queuelock: synchronizes access to submitqueues list
  * @submitqueues: list of &msm_gpu_submitqueue created by userspace
 * @queueid: counter incremented each time a submitqueue is created,
 *           used to assign &msm_gpu_submitqueue.id
 * @aspace: the per-process GPU address-space
@@ -351,7 +351,7 @@ struct msm_gpu_perfcntr {
  * @ref: reference count
  * @seqno: unique per process seqno
  */
-struct msm_file_private {
+struct msm_context {
 	rwlock_t queuelock;
 	struct list_head submitqueues;
 	int queueid;
@@ -506,7 +506,7 @@ struct msm_gpu_submitqueue {
 	u32 ring_nr;
 	int faults;
 	uint32_t last_fence;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;
 	struct list_head node;
 	struct idr fence_idr;
 	struct spinlock idr_lock;
@@ -602,33 +602,32 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val)
 int msm_gpu_pm_suspend(struct msm_gpu *gpu);
 int msm_gpu_pm_resume(struct msm_gpu *gpu);

-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p);

-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx);
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id);
 int msm_submitqueue_create(struct drm_device *drm,
-		struct
msm_file_private *ctx,
+		struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id);
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args);
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
-void msm_submitqueue_close(struct msm_file_private *ctx);
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id);
+void msm_submitqueue_close(struct msm_context *ctx);

 void msm_submitqueue_destroy(struct kref *kref);

-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-				 struct msm_gpu *gpu, int sysprof);
-void __msm_file_private_destroy(struct kref *kref);
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof);
+void __msm_context_destroy(struct kref *kref);

-static inline void msm_file_private_put(struct msm_file_private *ctx)
+static inline void msm_context_put(struct msm_context *ctx)
 {
-	kref_put(&ctx->ref, __msm_file_private_destroy);
+	kref_put(&ctx->ref, __msm_context_destroy);
 }

-static inline struct msm_file_private *msm_file_private_get(
-		struct msm_file_private *ctx)
+static inline struct msm_context *msm_context_get(
+		struct msm_context *ctx)
 {
 	kref_get(&ctx->ref);
 	return ctx;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 7fed1de63b5d..1acc0fe36353 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -7,8 +7,7 @@

 #include "msm_gpu.h"

-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-				 struct msm_gpu *gpu, int sysprof)
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof)
 {
 	/*
 	 * Since pm_runtime and sysprof_active are both refcounts, we
@@ -46,10 +45,10 @@ int msm_file_private_set_sysprof(struct msm_file_private *ctx,
 	return 0;
 }

-void __msm_file_private_destroy(struct kref *kref)
+void __msm_context_destroy(struct kref *kref)
 {
-	struct msm_file_private *ctx = container_of(kref,
-		struct msm_file_private, ref);
+	struct msm_context *ctx = container_of(kref,
+		struct msm_context, ref);
 	int i;

 	for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
@@ -73,12 +72,12 @@ void msm_submitqueue_destroy(struct kref *kref)

 	idr_destroy(&queue->fence_idr);

-	msm_file_private_put(queue->ctx);
+	msm_context_put(queue->ctx);

 	kfree(queue);
 }

-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id)
 {
 	struct msm_gpu_submitqueue *entry;
@@ -101,7 +100,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
 	return NULL;
 }

-void msm_submitqueue_close(struct msm_file_private *ctx)
+void msm_submitqueue_close(struct msm_context *ctx)
 {
 	struct msm_gpu_submitqueue *entry, *tmp;

@@ -119,7 +118,7 @@ void msm_submitqueue_close(struct msm_file_private *ctx)
 }

 static struct drm_sched_entity *
-get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
+get_sched_entity(struct msm_context *ctx, struct msm_ringbuffer *ring,
 		 unsigned ring_nr, enum drm_sched_priority sched_prio)
 {
 	static DEFINE_MUTEX(entity_lock);
@@ -155,7 +154,7 @@ get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
 	return ctx->entities[idx];
 }

-int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id)
 {
 	struct msm_drm_private *priv = drm->dev_private;
@@ -200,7 +199,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,

 	write_lock(&ctx->queuelock);

-	queue->ctx = msm_file_private_get(ctx);
+	queue->ctx = msm_context_get(ctx);
 	queue->id = ctx->queueid++;

 	if (id)
@@ -221,7 +220,7 @@ int msm_submitqueue_create(struct
drm_device *drm, struct msm_file_private *ctx,
 * Create the default submit-queue (id==0), used for backwards compatibility
 * for userspace that pre-dates the introduction of submitqueues.
 */
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx)
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx)
 {
 	struct msm_drm_private *priv = drm->dev_private;
 	int default_prio, max_priority;
@@ -261,7 +260,7 @@ static int msm_submitqueue_query_faults(struct msm_gpu_submitqueue *queue,
 	return ret ? -EFAULT : 0;
 }

-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args)
 {
 	struct msm_gpu_submitqueue *queue;
@@ -282,7 +281,7 @@ int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
 	return ret;
 }

-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id)
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id)
 {
 	struct msm_gpu_submitqueue *entry;

-- 
2.50.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Dmitry Baryshkov, Rob Clark, Sean Paul, Konrad Dybcio, Dmitry Baryshkov,
	Abhinav Kumar, Jessica Zhang, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 05/42] drm/msm: Improve msm_context comments
Date: Sun, 29 Jun 2025 13:12:48 -0700
Message-ID: <20250629201530.25775-6-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
From: Rob Clark

Just some tidying up.

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 937b7cdddadd..d30a1eedfda6 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -342,25 +342,39 @@ struct msm_gpu_perfcntr {

 /**
  * struct msm_context - per-drm_file context
- *
- * @queuelock: synchronizes access to submitqueues list
- * @submitqueues: list of &msm_gpu_submitqueue created by userspace
- * @queueid: counter incremented each time a submitqueue is created,
- *           used to assign &msm_gpu_submitqueue.id
- * @aspace: the per-process GPU address-space
- * @ref: reference count
- * @seqno: unique per process seqno
  */
 struct msm_context {
+	/** @queuelock: synchronizes access to submitqueues list */
 	rwlock_t queuelock;
+
+	/** @submitqueues: list of &msm_gpu_submitqueue created by userspace */
 	struct list_head submitqueues;
+
+	/**
+	 * @queueid:
+	 *
+	 * Counter incremented each time a submitqueue is created, used to
+	 * assign &msm_gpu_submitqueue.id
+	 */
 	int queueid;
+
+	/** @aspace: the per-process GPU address-space */
 	struct msm_gem_address_space *aspace;
+
+	/** @kref: the reference count */
 	struct kref ref;
+
+	/**
+	 * @seqno:
+	 *
+	 * A unique per-process sequence number. Used to detect context
+	 * switches, without relying on keeping a, potentially dangling,
+	 * pointer to the previous context.
+	 */
 	int seqno;
 
 	/**
-	 * sysprof:
+	 * @sysprof:
 	 *
 	 * The value of MSM_PARAM_SYSPROF set by userspace. This is
 	 * intended to be used by system profiling tools like Mesa's
@@ -378,21 +392,21 @@ struct msm_context {
 	int sysprof;
 
 	/**
-	 * comm: Overridden task comm, see MSM_PARAM_COMM
+	 * @comm: Overridden task comm, see MSM_PARAM_COMM
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *comm;
 
 	/**
-	 * cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
+	 * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *cmdline;
 
 	/**
-	 * elapsed:
+	 * @elapsed:
 	 *
 	 * The total (cumulative) elapsed time GPU was busy with rendering
 	 * from this context in ns.
@@ -400,7 +414,7 @@ struct msm_context {
 	uint64_t elapsed_ns;
 
 	/**
-	 * cycles:
+	 * @cycles:
 	 *
 	 * The total (cumulative) GPU cycles elapsed attributed to this
 	 * context.
@@ -408,7 +422,7 @@ struct msm_context {
 	uint64_t cycles;
 
 	/**
-	 * entities:
+	 * @entities:
 	 *
 	 * Table of per-priority-level sched entities used by submitqueues
 	 * associated with this &drm_file. Because some userspace apps
@@ -421,7 +435,7 @@ struct msm_context {
 	struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
 
 	/**
-	 * ctx_mem:
+	 * @ctx_mem:
 	 *
 	 * Total amount of memory of GEM buffers with handles attached for
 	 * this context.
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Dmitry Baryshkov, Rob Clark, Sean Paul, Konrad Dybcio,
	Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Marijn Suijten,
	David Airlie, Simona Vetter, Arnd Bergmann, Jun Nie,
	Krzysztof Kozlowski, Eugene Lepshy, Haoxiang Li,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 06/42] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
Date: Sun, 29 Jun 2025 13:12:49 -0700
Message-ID: <20250629201530.25775-7-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Re-aligning naming to better match drm_gpuvm terminology will make
things less confusing at the end of the drm_gpuvm conversion. This is
just rename churn, no functional change.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c         | 18 ++--
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c         |  4 +-
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c         |  4 +-
 drivers/gpu/drm/msm/adreno/a5xx_debugfs.c     |  4 +-
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c         | 22 ++---
 drivers/gpu/drm/msm/adreno/a5xx_power.c       |  2 +-
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c     | 10 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c         | 26 +++---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h         |  2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c         | 45 +++++----
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c   |  6 +-
 drivers/gpu/drm/msm/adreno/a6xx_preempt.c     | 10 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c       | 46 ++++-----
 drivers/gpu/drm/msm/adreno/adreno_gpu.h       | 18 ++--
 .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c   | 14 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c   | 18 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h   |  2 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c       | 18 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c     | 14 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h     |  4 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c     |  6 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c      | 24 ++---
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c    | 12 +--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c     |  4 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c      | 18 ++--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c    | 12 +--
 drivers/gpu/drm/msm/dsi/dsi_host.c            | 14 +--
 drivers/gpu/drm/msm/msm_drv.c                 |  8 +-
 drivers/gpu/drm/msm/msm_drv.h                 | 10 +-
 drivers/gpu/drm/msm/msm_fb.c                  | 10 +-
 drivers/gpu/drm/msm/msm_fbdev.c               |  2 +-
 drivers/gpu/drm/msm/msm_gem.c                 | 74 +++++++--------
 drivers/gpu/drm/msm/msm_gem.h                 | 34 +++----
 drivers/gpu/drm/msm/msm_gem_submit.c          |  6 +-
 drivers/gpu/drm/msm/msm_gem_vma.c             | 93 +++++++++----------
 drivers/gpu/drm/msm/msm_gpu.c                 | 46 ++++-----
 drivers/gpu/drm/msm/msm_gpu.h                 | 16 ++--
 drivers/gpu/drm/msm/msm_kms.c                 | 16 ++--
 drivers/gpu/drm/msm/msm_kms.h                 |  2 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c          |  4 +-
 drivers/gpu/drm/msm/msm_submitqueue.c         |  2 +-
 41 files changed, 348 insertions(+), 352 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 379a3d346c30..5eb063ed0b46 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
 	uint32_t *ptr, len;
 	int i, ret;
 
-	a2xx_gpummu_params(gpu->aspace->mmu, &pt_base, &tran_error);
+	a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error);
 
 	DBG("%s", gpu->name);
 
@@ -466,19 +466,19 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu)
 	return state;
 }
 
-static struct msm_gem_address_space *
-a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
-	aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M,
+	vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
 		0xfff * SZ_64K);
 
-	if (IS_ERR(aspace) && !IS_ERR(mmu))
+	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
 
-	return aspace;
+	return vm;
 }
 
 static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -504,7 +504,7 @@ static const struct adreno_gpu_funcs funcs = {
 #endif
 		.gpu_state_get = a2xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = a2xx_create_address_space,
+		.create_vm = a2xx_create_vm,
 		.get_rptr = a2xx_get_rptr,
 	},
 };
@@ -551,7 +551,7 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
 	else
 		adreno_gpu->registers = a220_registers;
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		dev_err(dev->dev, "No memory protection without MMU\n");
 		if (!allow_vram_carveout) {
 			ret = -ENXIO;
diff --git
a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index b6df115bb567..434e6ededf83 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -526,7 +526,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a3xx_gpu_busy,
 		.gpu_state_get = a3xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a3xx_get_rptr,
 	},
 };
@@ -581,7 +581,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index f1b18a6663f7..2c75debcfd84 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -645,7 +645,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a4xx_gpu_busy,
 		.gpu_state_get = a4xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a4xx_get_rptr,
 	},
 	.get_timestamp = a4xx_get_timestamp,
@@ -695,7 +695,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown.
For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
index 169b8fe688f8..625a4e787d8f 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
@@ -116,13 +116,13 @@ reset_set(void *data, u64 val)
 	adreno_gpu->fw[ADRENO_FW_PFP] = NULL;
 
 	if (a5xx_gpu->pm4_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pm4_bo);
 		a5xx_gpu->pm4_bo = NULL;
 	}
 
 	if (a5xx_gpu->pfp_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pfp_bo);
 		a5xx_gpu->pfp_bo = NULL;
 	}
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 60aef0796236..dc31bc0afca4 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -622,7 +622,7 @@ static int a5xx_ucode_load(struct msm_gpu *gpu)
 		a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
 			sizeof(u32) * gpu->nr_rings,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a5xx_gpu->shadow_bo,
+			gpu->vm, &a5xx_gpu->shadow_bo,
 			&a5xx_gpu->shadow_iova);
 
 		if (IS_ERR(a5xx_gpu->shadow))
@@ -1042,22 +1042,22 @@ static void a5xx_destroy(struct msm_gpu *gpu)
 	a5xx_preempt_fini(gpu);
 
 	if (a5xx_gpu->pm4_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pm4_bo);
 	}
 
 	if (a5xx_gpu->pfp_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pfp_bo);
 	}
 
 	if (a5xx_gpu->gpmu_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->gpmu_bo);
 	}
 
 	if (a5xx_gpu->shadow_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->shadow_bo);
 	}
 
@@ -1457,7 +1457,7 @@ static int a5xx_crashdumper_init(struct msm_gpu *gpu,
 		struct a5xx_crashdumper *dumper)
 {
 	dumper->ptr = msm_gem_kernel_new(gpu->dev,
-		SZ_1M, MSM_BO_WC, gpu->aspace,
+		SZ_1M, MSM_BO_WC, gpu->vm,
 		&dumper->bo, &dumper->iova);
 
 	if (!IS_ERR(dumper->ptr))
@@ -1557,7 +1557,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
 
 	if (a5xx_crashdumper_run(gpu, &dumper)) {
 		kfree(a5xx_state->hlsqregs);
-		msm_gem_kernel_put(dumper.bo, gpu->aspace);
+		msm_gem_kernel_put(dumper.bo, gpu->vm);
 		return;
 	}
 
@@ -1565,7 +1565,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
 	memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K),
 		count * sizeof(u32));
 
-	msm_gem_kernel_put(dumper.bo, gpu->aspace);
+	msm_gem_kernel_put(dumper.bo, gpu->vm);
 }
 
 static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu)
@@ -1713,7 +1713,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a5xx_gpu_busy,
 		.gpu_state_get = a5xx_gpu_state_get,
 		.gpu_state_put = a5xx_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a5xx_get_rptr,
 	},
 	.get_timestamp = a5xx_get_timestamp,
@@ -1786,8 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	if (gpu->aspace)
-		msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, a5xx_fault_handler);
+	if (gpu->vm)
+		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
index 6b91e0bd1514..d6da7351cfbb 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
@@ -363,7 +363,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
 	bosize =
(cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
 
 	ptr = msm_gem_kernel_new(drm, bosize,
-		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace,
+		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm,
 		&a5xx_gpu->gpmu_bo, &a5xx_gpu->gpmu_iova);
 	if (IS_ERR(ptr))
 		return;
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index b5f9d40687d5..e4924b5e1c48 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -255,7 +255,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
 
 	ptr = msm_gem_kernel_new(gpu->dev,
 		A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE,
-		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -263,9 +263,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
 	/* The buffer to store counters needs to be unprivileged */
 	counters = msm_gem_kernel_new(gpu->dev,
 		A5XX_PREEMPT_COUNTER_SIZE,
-		MSM_BO_WC, gpu->aspace, &counters_bo, &counters_iova);
+		MSM_BO_WC, gpu->vm, &counters_bo, &counters_iova);
 	if (IS_ERR(counters)) {
-		msm_gem_kernel_put(bo, gpu->aspace);
+		msm_gem_kernel_put(bo, gpu->vm);
 		return PTR_ERR(counters);
 	}
 
@@ -296,8 +296,8 @@ void a5xx_preempt_fini(struct msm_gpu *gpu)
 	int i;
 
 	for (i = 0; i < gpu->nr_rings; i++) {
-		msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace);
-		msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->aspace);
+		msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->vm);
+		msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->vm);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 38c0f8ef85c3..848acc382b7d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1259,15 +1259,15 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 
 static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
 {
-	msm_gem_kernel_put(gmu->hfi.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->debug.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->icache.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->dcache.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->dummy.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->log.obj, gmu->aspace);
-
-	gmu->aspace->mmu->funcs->detach(gmu->aspace->mmu);
-	msm_gem_address_space_put(gmu->aspace);
+	msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->debug.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->icache.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->dcache.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->dummy.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->log.obj, gmu->vm);
+
+	gmu->vm->mmu->funcs->detach(gmu->vm->mmu);
+	msm_gem_vm_put(gmu->vm);
 }
 
 static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
@@ -1296,7 +1296,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
 	if (IS_ERR(bo->obj))
 		return PTR_ERR(bo->obj);
 
-	ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova,
+	ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->vm, &bo->iova,
 		range_start, range_end);
 	if (ret) {
 		drm_gem_object_put(bo->obj);
@@ -1321,9 +1321,9 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
 	if (IS_ERR(mmu))
 		return PTR_ERR(mmu);
 
-	gmu->aspace = msm_gem_address_space_create(mmu, "gmu", 0x0, 0x80000000);
-	if (IS_ERR(gmu->aspace))
-		return PTR_ERR(gmu->aspace);
+	gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
+	if (IS_ERR(gmu->vm))
+		return PTR_ERR(gmu->vm);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index b2d4489b4024..fc288dfe889f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -62,7 +62,7 @@ struct a6xx_gmu {
 	/* For serializing communication with the GMU: */
 	struct mutex lock;
 
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	void __iomem *mmio;
 	void __iomem *rscc;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index a8e6f62b6873..5078152eb8d3 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
@@ -970,7 +970,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 
 	msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw");
 	if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) {
-		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
 		drm_gem_object_put(a6xx_gpu->sqe_bo);
 
 		a6xx_gpu->sqe_bo = NULL;
@@ -987,7 +987,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 		a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
 			sizeof(u32) * gpu->nr_rings,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a6xx_gpu->shadow_bo,
+			gpu->vm, &a6xx_gpu->shadow_bo,
 			&a6xx_gpu->shadow_iova);
 
 		if (IS_ERR(a6xx_gpu->shadow))
@@ -998,7 +998,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 
 	a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
 						MSM_BO_WC  | MSM_BO_MAP_PRIV,
-						gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
+						gpu->vm, &a6xx_gpu->pwrup_reglist_bo,
 						&a6xx_gpu->pwrup_reglist_iova);
 
 	if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
@@ -2211,12 +2211,12 @@ static void a6xx_destroy(struct msm_gpu *gpu)
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
 
 	if (a6xx_gpu->sqe_bo) {
-		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
 		drm_gem_object_put(a6xx_gpu->sqe_bo);
 	}
 
 	if (a6xx_gpu->shadow_bo) {
-		msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->shadow_bo,
gpu->vm);
 		drm_gem_object_put(a6xx_gpu->shadow_bo);
 	}
 
@@ -2256,8 +2256,8 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 	mutex_unlock(&a6xx_gpu->gmu.lock);
 }
 
-static struct msm_gem_address_space *
-a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
@@ -2271,22 +2271,22 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
 	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
 		quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
 
-	return adreno_iommu_create_address_space(gpu, pdev, quirks);
+	return adreno_iommu_create_vm(gpu, pdev, quirks);
 }
 
-static struct msm_gem_address_space *
-a6xx_create_private_address_space(struct msm_gpu *gpu)
+static struct msm_gem_vm *
+a6xx_create_private_vm(struct msm_gpu *gpu)
 {
 	struct msm_mmu *mmu;
 
-	mmu = msm_iommu_pagetable_create(gpu->aspace->mmu);
+	mmu = msm_iommu_pagetable_create(gpu->vm->mmu);
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
 
-	return msm_gem_address_space_create(mmu,
+	return msm_gem_vm_create(mmu,
 		"gpu", ADRENO_VM_START,
-		adreno_private_address_space_size(gpu));
+		adreno_private_vm_size(gpu));
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -2403,8 +2403,8 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2432,8 +2432,8 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2463,8 +2463,8 @@ static const struct adreno_gpu_funcs funcs_a7xx = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2560,9 +2560,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	if (gpu->aspace)
-		msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu,
-					  a6xx_fault_handler);
+	if (gpu->vm)
+		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 341a72a67401..ff06bb75b76d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -132,7 +132,7 @@ static int a6xx_crashdumper_init(struct msm_gpu *gpu,
 		struct a6xx_crashdumper *dumper)
 {
 	dumper->ptr = msm_gem_kernel_new(gpu->dev,
-		SZ_1M, MSM_BO_WC, gpu->aspace,
+		SZ_1M, MSM_BO_WC, gpu->vm,
 		&dumper->bo, &dumper->iova);
 
 	if (!IS_ERR(dumper->ptr))
@@ -1619,7 +1619,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		a7xx_get_clusters(gpu, a6xx_state, dumper);
 		a7xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
 
-		msm_gem_kernel_put(dumper->bo, gpu->aspace);
+		msm_gem_kernel_put(dumper->bo, gpu->vm);
 	}
 
 	a7xx_get_post_crashdumper_registers(gpu, a6xx_state);
@@ -1631,7 +1631,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		a6xx_get_clusters(gpu, a6xx_state, dumper);
 		a6xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
 
-		msm_gem_kernel_put(dumper->bo, gpu->aspace);
+		msm_gem_kernel_put(dumper->bo, gpu->vm);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index 3b17fd2dba89..f6194a57f794 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -344,7 +344,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	ptr = msm_gem_kernel_new(gpu->dev,
 		PREEMPT_RECORD_SIZE(adreno_gpu),
-		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -362,7 +362,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 	ptr = msm_gem_kernel_new(gpu->dev,
 		PREEMPT_SMMU_INFO_SIZE,
 		MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
-		gpu->aspace, &bo, &iova);
+		gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -377,7 +377,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	struct a7xx_cp_smmu_info *smmu_info_ptr = ptr;
 
-	msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
+	msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid);
 
 	smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
 	smmu_info_ptr->ttbr0 = ttbr;
@@ -405,7 +405,7 @@ void a6xx_preempt_fini(struct msm_gpu *gpu)
 	int i;
 
 	for (i = 0; i < gpu->nr_rings; i++)
-		msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
+		msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->vm);
 }
 
 void a6xx_preempt_init(struct msm_gpu *gpu)
@@ -431,7 +431,7 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
 	a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
 			PAGE_SIZE,
 			MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
-			gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
+			gpu->vm, &a6xx_gpu->preempt_postamble_bo,
 			&a6xx_gpu->preempt_postamble_iova);
 
 	preempt_prepare_postamble(a6xx_gpu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 5f4de4c25b97..be723fe4de2b 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid)
 	return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid);
 }
 
-struct msm_gem_address_space *
-adreno_create_address_space(struct msm_gpu *gpu,
-			    struct platform_device *pdev)
+struct msm_gem_vm *
+adreno_create_vm(struct msm_gpu *gpu,
+		 struct platform_device *pdev)
 {
-	return adreno_iommu_create_address_space(gpu, pdev, 0);
+	return adreno_iommu_create_vm(gpu, pdev, 0);
 }
 
-struct msm_gem_address_space *
-adreno_iommu_create_address_space(struct msm_gpu *gpu,
-				  struct platform_device *pdev,
-				  unsigned long quirks)
+struct msm_gem_vm *
+adreno_iommu_create_vm(struct msm_gpu *gpu,
+		       struct platform_device *pdev,
+		       unsigned long quirks)
 {
 	struct iommu_domain_geometry *geometry;
 	struct msm_mmu *mmu;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
@@ -224,16 +224,15 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu,
 	start = max_t(u64, SZ_16M, geometry->aperture_start);
 	size = geometry->aperture_end - start + 1;
 
-	aspace = msm_gem_address_space_create(mmu, "gpu",
-		start & GENMASK_ULL(48, 0), size);
+	vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
 
-	if (IS_ERR(aspace) && !IS_ERR(mmu))
+	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
 
-	return aspace;
+	return vm;
 }
 
-u64 adreno_private_address_space_size(struct msm_gpu *gpu)
+u64 adreno_private_vm_size(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct
adreno_smmu_priv *adreno_smmu =3D dev_get_drvdata(&gpu->pdev->dev); @@ -275,7 +274,7 @@ void adreno_check_and_reenable_stall(struct adreno_gpu = *adreno_gpu) !READ_ONCE(gpu->crashstate)) { priv->stall_enabled =3D true; =20 - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true); + gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); } spin_unlock_irqrestore(&priv->fault_stall_lock, flags); } @@ -303,8 +302,9 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned = long iova, int flags, if (priv->stall_enabled) { priv->stall_enabled =3D false; =20 - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false); + gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); } + priv->stall_reenable_time =3D ktime_add_ms(ktime_get(), 500); spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags); =20 @@ -401,8 +401,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_co= ntext *ctx, *value =3D 0; return 0; case MSM_PARAM_FAULTS: - if (ctx->aspace) - *value =3D gpu->global_faults + ctx->aspace->faults; + if (ctx->vm) + *value =3D gpu->global_faults + ctx->vm->faults; else *value =3D gpu->global_faults; return 0; @@ -410,14 +410,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_= context *ctx, *value =3D gpu->suspend_count; return 0; case MSM_PARAM_VA_START: - if (ctx->aspace =3D=3D gpu->aspace) + if (ctx->vm =3D=3D gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value =3D ctx->aspace->va_start; + *value =3D ctx->vm->va_start; return 0; case MSM_PARAM_VA_SIZE: - if (ctx->aspace =3D=3D gpu->aspace) + if (ctx->vm =3D=3D gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value =3D ctx->aspace->va_size; + *value =3D ctx->vm->va_size; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value =3D adreno_gpu->ubwc_config.highest_bank_bit; @@ -607,7 +607,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_g= pu *gpu, void *ptr; =20 ptr =3D msm_gem_kernel_new(gpu->dev, fw->size - 4, - MSM_BO_WC | MSM_BO_GPU_READONLY, 
gpu->aspace, &bo, iova); + MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &bo, iova); =20 if (IS_ERR(ptr)) return ERR_CAST(ptr); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/= adreno/adreno_gpu.h index a4abafca7782..4fa4b11442ba 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -580,7 +580,7 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu) =20 /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */ #define ADRENO_VM_START 0x100000000ULL -u64 adreno_private_address_space_size(struct msm_gpu *gpu); +u64 adreno_private_vm_size(struct msm_gpu *gpu); int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t *value, uint32_t *len); int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx, @@ -623,14 +623,14 @@ void adreno_show_object(struct drm_printer *p, void *= *ptr, int len, * Common helper function to initialize the default address space for arm-= smmu * attached targets */ -struct msm_gem_address_space * -adreno_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev); - -struct msm_gem_address_space * -adreno_iommu_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev, - unsigned long quirks); +struct msm_gem_vm * +adreno_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev); + +struct msm_gem_vm * +adreno_iommu_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev, + unsigned long quirks); =20 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flag= s, struct adreno_smmu_fault_info *info, const char *block, diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/= gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 849fea580a4c..32e208ee946d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -566,7 +566,7 @@ static void 
dpu_encoder_phys_wb_prepare_wb_job(struct d= pu_encoder_phys *phys_enc struct drm_writeback_job *job) { const struct msm_format *format; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_hw_wb_cfg *wb_cfg; int ret; struct dpu_encoder_phys_wb *wb_enc =3D to_dpu_encoder_phys_wb(phys_enc); @@ -576,13 +576,13 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct= dpu_encoder_phys *phys_enc =20 wb_enc->wb_job =3D job; wb_enc->wb_conn =3D job->connector; - aspace =3D phys_enc->dpu_kms->base.aspace; + vm =3D phys_enc->dpu_kms->base.vm; =20 wb_cfg =3D &wb_enc->wb_cfg; =20 memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg)); =20 - ret =3D msm_framebuffer_prepare(job->fb, aspace, false); + ret =3D msm_framebuffer_prepare(job->fb, vm, false); if (ret) { DPU_ERROR("prep fb failed, %d\n", ret); return; @@ -596,7 +596,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct d= pu_encoder_phys *phys_enc return; } =20 - dpu_format_populate_addrs(aspace, job->fb, &wb_cfg->dest); + dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest); =20 wb_cfg->dest.width =3D job->fb->width; wb_cfg->dest.height =3D job->fb->height; @@ -619,14 +619,14 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct= dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { struct dpu_encoder_phys_wb *wb_enc =3D to_dpu_encoder_phys_wb(phys_enc); - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 if (!job->fb) return; =20 - aspace =3D phys_enc->dpu_kms->base.aspace; + vm =3D phys_enc->dpu_kms->base.vm; =20 - msm_framebuffer_cleanup(job->fb, aspace, false); + msm_framebuffer_cleanup(job->fb, vm, false); wb_enc->wb_job =3D NULL; wb_enc->wb_conn =3D NULL; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/= msm/disp/dpu1/dpu_formats.c index 59c9427da7dd..d115b79af771 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c @@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes( return 
_dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } =20 -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *= aspace, +static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -282,7 +282,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_= gem_address_space *aspace uint32_t base_addr =3D 0; bool meta; =20 - base_addr =3D msm_framebuffer_iova(fb, aspace, 0); + base_addr =3D msm_framebuffer_iova(fb, vm, 0); =20 fmt =3D msm_framebuffer_format(fb); meta =3D MSM_FORMAT_IS_UBWC(fmt); @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_= gem_address_space *aspace } } =20 -static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space= *aspace, +static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -363,17 +363,17 @@ static void _dpu_format_populate_addrs_linear(struct = msm_gem_address_space *aspa =20 /* Populate addresses for simple formats here */ for (i =3D 0; i < layout->num_planes; ++i) - layout->plane_addr[i] =3D msm_framebuffer_iova(fb, aspace, i); -} + layout->plane_addr[i] =3D msm_framebuffer_iova(fb, vm, i); + } =20 /** * dpu_format_populate_addrs - populate buffer addresses based on * mmu, fb, and format found in the fb - * @aspace: address space pointer + * @vm: address space pointer * @fb: framebuffer pointer * @layout: format layout structure to populate */ -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -384,7 +384,7 @@ void dpu_format_populate_addrs(struct msm_gem_address_s= pace *aspace, /* Populate the addresses given the fb */ if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt)) - _dpu_format_populate_addrs_ubwc(aspace, fb, layout); + _dpu_format_populate_addrs_ubwc(vm, fb, 
layout); else - _dpu_format_populate_addrs_linear(aspace, fb, layout); + _dpu_format_populate_addrs_linear(vm, fb, layout); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/= msm/disp/dpu1/dpu_formats.h index c6145d43aa3f..989f3e13c497 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 = *supported_formats, return false; } =20 -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); =20 diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/= disp/dpu1/dpu_kms.c index 1fd82b6747e9..2c5687a188b6 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1095,26 +1095,26 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dp= u_kms) { struct msm_mmu *mmu; =20 - if (!dpu_kms->base.aspace) + if (!dpu_kms->base.vm) return; =20 - mmu =3D dpu_kms->base.aspace->mmu; + mmu =3D dpu_kms->base.vm->mmu; =20 mmu->funcs->detach(mmu); - msm_gem_address_space_put(dpu_kms->base.aspace); + msm_gem_vm_put(dpu_kms->base.vm); =20 - dpu_kms->base.aspace =3D NULL; + dpu_kms->base.vm =3D NULL; } =20 static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 - aspace =3D msm_kms_init_aspace(dpu_kms->dev); - if (IS_ERR(aspace)) - return PTR_ERR(aspace); + vm =3D msm_kms_init_vm(dpu_kms->dev); + if (IS_ERR(vm)) + return PTR_ERR(vm); =20 - dpu_kms->base.aspace =3D aspace; + dpu_kms->base.vm =3D vm; =20 return 0; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/ms= m/disp/dpu1/dpu_plane.c index 421138bc3cb7..6d47f43f52f7 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c @@ -71,7 +71,7 @@ static const 
uint32_t qcom_compressed_supported_formats[]= =3D { =20 /* * struct dpu_plane - local dpu plane structure - * @aspace: address space pointer + * @vm: address space pointer * @csc_ptr: Points to dpu_csc_cfg structure to use for current * @catalog: Points to dpu catalog structure * @revalidate: force revalidation of all the plane properties @@ -654,8 +654,8 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, =20 DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id); =20 - /* cache aspace */ - pstate->aspace =3D kms->base.aspace; + /* cache vm */ + pstate->vm =3D kms->base.vm; =20 /* * TODO: Need to sort out the msm_framebuffer_prepare() call below so @@ -664,9 +664,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, */ drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - if (pstate->aspace) { + if (pstate->vm) { ret =3D msm_framebuffer_prepare(new_state->fb, - pstate->aspace, pstate->needs_dirtyfb); + pstate->vm, pstate->needs_dirtyfb); if (ret) { DPU_ERROR("failed to prepare framebuffer\n"); return ret; @@ -689,7 +689,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plan= e, =20 DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id); =20 - msm_framebuffer_cleanup(old_state->fb, old_pstate->aspace, + msm_framebuffer_cleanup(old_state->fb, old_pstate->vm, old_pstate->needs_dirtyfb); } =20 @@ -1457,7 +1457,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_p= lane *plane, pstate->needs_qos_remap |=3D (is_rt_pipe !=3D pdpu->is_rt_pipe); pdpu->is_rt_pipe =3D is_rt_pipe; =20 - dpu_format_populate_addrs(pstate->aspace, new_state->fb, &pstate->layout); + dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout); =20 DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT ", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src), diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/ms= m/disp/dpu1/dpu_plane.h index acd5725175cd..3578f52048a5 100644 --- 
a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -17,7 +17,7 @@ /** * struct dpu_plane_state: Define dpu extension of drm plane state object * @base: base drm plane state object - * @aspace: pointer to address space for input/output buffers + * @vm: pointer to address space for input/output buffers * @pipe: software pipe description * @r_pipe: software pipe description of the second pipe * @pipe_cfg: software pipe configuration @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/ms= m/disp/mdp4/mdp4_crtc.c index b8610aa806ea..0133c0c01a0b 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -120,7 +120,7 @@ static void unref_cursor_worker(struct drm_flip_work *w= ork, void *val) struct mdp4_kms *mdp4_kms =3D get_kms(&mdp4_crtc->base); struct msm_kms *kms =3D &mdp4_kms->base.base; =20 - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } =20 @@ -369,7 +369,7 @@ static void update_cursor(struct drm_crtc *crtc) if (next_bo) { /* take a obj ref + iova ref when we start scanning out: */ drm_gem_object_get(next_bo); - msm_gem_get_and_pin_iova(next_bo, kms->aspace, &iova); + msm_gem_get_and_pin_iova(next_bo, kms->vm, &iova); =20 /* enable cursor: */ mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma), @@ -427,7 +427,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc, } =20 if (cursor_bo) { - ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &iova); + ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &iova); if (ret) goto fail; } else { diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm= /disp/mdp4/mdp4_kms.c index 7e942c1337b3..5cb4a4bae2a6 
100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -122,15 +122,15 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms =3D to_mdp4_kms(to_mdp_kms(kms)); struct device *dev =3D mdp4_kms->dev->dev; - struct msm_gem_address_space *aspace =3D kms->aspace; + struct msm_gem_vm *vm =3D kms->vm; =20 if (mdp4_kms->blank_cursor_iova) - msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->aspace); + msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); =20 - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } =20 if (mdp4_kms->rpm_enabled) @@ -398,7 +398,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms =3D to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms =3D NULL; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int ret; u32 major, minor; unsigned long max_clk; @@ -467,19 +467,19 @@ static int mdp4_kms_init(struct drm_device *dev) } else if (!mmu) { DRM_DEV_INFO(dev->dev, "no iommu, fallback to phys " "contig buffers for scanout\n"); - aspace =3D NULL; + vm =3D NULL; } else { - aspace =3D msm_gem_address_space_create(mmu, + vm =3D msm_gem_vm_create(mmu, "mdp4", 0x1000, 0x100000000 - 0x1000); =20 - if (IS_ERR(aspace)) { + if (IS_ERR(vm)) { if (!IS_ERR(mmu)) mmu->funcs->destroy(mmu); - ret =3D PTR_ERR(aspace); + ret =3D PTR_ERR(vm); goto fail; } =20 - kms->aspace =3D aspace; + kms->vm =3D vm; } =20 ret =3D modeset_init(mdp4_kms); @@ -496,7 +496,7 @@ static int mdp4_kms_init(struct drm_device *dev) goto fail; } =20 - ret =3D msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->aspace, + ret =3D msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->vm, &mdp4_kms->blank_cursor_iova); if (ret) { DRM_DEV_ERROR(dev->dev, "could not pin blank-cursor bo: %d\n", ret); diff 
--git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/m= sm/disp/mdp4/mdp4_plane.c index 3fefb2088008..7743be6167f8 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c @@ -87,7 +87,7 @@ static int mdp4_plane_prepare_fb(struct drm_plane *plane, =20 drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - return msm_framebuffer_prepare(new_state->fb, kms->aspace, false); + return msm_framebuffer_prepare(new_state->fb, kms->vm, false); } =20 static void mdp4_plane_cleanup_fb(struct drm_plane *plane, @@ -102,7 +102,7 @@ static void mdp4_plane_cleanup_fb(struct drm_plane *pla= ne, return; =20 DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, false); + msm_framebuffer_cleanup(fb, kms->vm, false); } =20 =20 @@ -153,13 +153,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *= plane, MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); =20 mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } =20 static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms, diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/ms= m/disp/mdp5/mdp5_crtc.c index 0f653e62b4a0..298861f373b0 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -169,7 +169,7 @@ static void unref_cursor_worker(struct drm_flip_work *w= ork, void *val) struct mdp5_kms *mdp5_kms =3D get_kms(&mdp5_crtc->base); struct msm_kms *kms =3D 
&mdp5_kms->base.base; =20 - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } =20 @@ -993,7 +993,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (!cursor_bo) return -ENOENT; =20 - ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, + ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &mdp5_crtc->cursor.iova); if (ret) { drm_gem_object_put(cursor_bo); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm= /disp/mdp5/mdp5_kms.c index 3fcca7a3d82e..9dca0385a42d 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,11 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms =3D to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_address_space *aspace =3D kms->aspace; + struct msm_gem_vm *vm =3D kms->vm; =20 - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } =20 mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +500,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms =3D priv->kms; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int i, ret; =20 ret =3D mdp5_init(to_platform_device(dev->dev), dev); @@ -534,13 +534,13 @@ static int mdp5_kms_init(struct drm_device *dev) } mdelay(16); =20 - aspace =3D msm_kms_init_aspace(mdp5_kms->dev); - if (IS_ERR(aspace)) { - ret =3D PTR_ERR(aspace); + vm =3D msm_kms_init_vm(mdp5_kms->dev); + if (IS_ERR(vm)) { + ret =3D PTR_ERR(vm); goto fail; } =20 - kms->aspace =3D aspace; + kms->vm =3D vm; =20 pm_runtime_put_sync(&pdev->dev); =20 diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/m= sm/disp/mdp5/mdp5_plane.c index bb1601921938..9f68a4747203 100644 --- 
a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c @@ -144,7 +144,7 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plan= e, =20 drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - return msm_framebuffer_prepare(new_state->fb, kms->aspace, needs_dirtyfb); + return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb); } =20 static void mdp5_plane_cleanup_fb(struct drm_plane *plane, @@ -159,7 +159,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *pla= ne, return; =20 DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, needed_dirtyfb); + msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb); } =20 static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_= state, @@ -478,13 +478,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_= kms, MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); =20 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } =20 /* Note: mdp5_plane->pipe_lock must be locked */ diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/d= si_host.c index 4d75529c0e85..16335ebd21e4 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { =20 /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,10 +1146,10 @@ int 
dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_ho= st, int size) uint64_t iova; u8 *data; =20 - msm_host->aspace =3D msm_gem_address_space_get(priv->kms->aspace); + msm_host->vm =3D msm_gem_vm_get(priv->kms->vm); =20 data =3D msm_gem_kernel_new(dev, size, MSM_BO_WC, - msm_host->aspace, + msm_host->vm, &msm_host->tx_gem_obj, &iova); =20 if (IS_ERR(data)) { @@ -1193,10 +1193,10 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) return; =20 if (msm_host->tx_gem_obj) { - msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->aspace); - msm_gem_address_space_put(msm_host->aspace); + msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); + msm_gem_vm_put(msm_host->vm); msm_host->tx_gem_obj =3D NULL; - msm_host->aspace =3D NULL; + msm_host->vm =3D NULL; } =20 if (msm_host->tx_buf) @@ -1327,7 +1327,7 @@ int dsi_dma_base_get_6g(struct msm_dsi_host *msm_host= , uint64_t *dma_base) return -EINVAL; =20 return msm_gem_get_and_pin_iova(msm_host->tx_gem_obj, - priv->kms->aspace, dma_base); + priv->kms->vm, dma_base); } =20 int dsi_dma_base_get_v2(struct msm_dsi_host *msm_host, uint64_t *dma_base) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 324ee2089b34..49c868e33d70 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -349,7 +349,7 @@ static int context_init(struct drm_device *dev, struct = drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); =20 - ctx->aspace =3D msm_gpu_create_private_address_space(priv->gpu, current); + ctx->vm =3D msm_gpu_create_private_vm(priv->gpu, current); file->driver_priv =3D ctx; =20 ctx->seqno =3D atomic_inc_return(&ident); @@ -527,7 +527,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *d= ev, * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, ctx->aspace, iova); + return msm_gem_get_iova(obj, ctx->vm, iova); } =20 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, @@ -541,13 
+541,13 @@ static int msm_ioctl_gem_info_set_iova(struct drm_dev= ice *dev, return -EINVAL; =20 /* Only supported if per-process address space is supported: */ - if (priv->gpu->aspace =3D=3D ctx->aspace) + if (priv->gpu->vm =3D=3D ctx->vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); =20 if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; =20 - return msm_gem_set_iova(obj, ctx->aspace, iova); + return msm_gem_set_iova(obj, ctx->vm, iova); } =20 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index c8afb1ea6040..8aa3412c6e36 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,7 +48,7 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_address_space; +struct msm_gem_vm; struct msm_gem_vma; struct msm_disp_state; =20 @@ -264,7 +264,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); =20 -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev); +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); =20 int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -286,11 +286,11 @@ int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); =20 int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needs_dirtyfb); + struct msm_gem_vm *vm, bool needs_dirtyfb); void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needed_dirtyfb); + struct msm_gem_vm *vm, bool needed_dirtyfb); uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane); + struct msm_gem_vm *vm, int 
plane); struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int = plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb= ); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 09268e416843..6df318b73534 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -76,7 +76,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb,= struct seq_file *m) /* prepare/pin all the fb's bo's for scanout. */ int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needs_dirtyfb) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); @@ -88,7 +88,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, atomic_inc(&msm_fb->prepare_count); =20 for (i =3D 0; i < n; i++) { - ret =3D msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]); + ret =3D msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]); drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n", fb->base.id, i, msm_fb->iova[i], ret); if (ret) @@ -99,7 +99,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, } =20 void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needed_dirtyfb) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); @@ -109,14 +109,14 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *= fb, refcount_dec(&msm_fb->dirtyfb); =20 for (i =3D 0; i < n; i++) - msm_gem_unpin_iova(fb->obj[i], aspace); + msm_gem_unpin_iova(fb->obj[i], vm); =20 if (!atomic_dec_return(&msm_fb->prepare_count)) memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } =20 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane) + struct msm_gem_vm *vm, int plane) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); return 
msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbde= v.c index c62249b1ab3d..b5969374d53f 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -122,7 +122,7 @@ int msm_fbdev_driver_fbdev_probe(struct drm_fb_helper *= helper, * in panic (ie. lock-safe, etc) we could avoid pinning the * buffer now: */ - ret =3D msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr); + ret =3D msm_gem_get_and_pin_iova(bo, priv->kms->vm, &paddr); if (ret) { DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret); goto fail; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 259919b0e887..5e6c88b85fd3 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -398,14 +398,14 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *o= bj) } =20 static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct msm_gem_vma *vma; =20 msm_gem_assert_locked(obj); =20 - vma =3D msm_gem_vma_new(aspace); + vma =3D msm_gem_vma_new(vm); if (!vma) return ERR_PTR(-ENOMEM); =20 @@ -415,7 +415,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_objec= t *obj, } =20 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct msm_gem_vma *vma; @@ -423,7 +423,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_ob= ject *obj, msm_gem_assert_locked(obj); =20 list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace =3D=3D aspace) + if (vma->vm =3D=3D vm) return vma; } =20 @@ -454,7 +454,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) msm_gem_assert_locked(obj); =20 list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace) { + if (vma->vm) { 
 		msm_gem_vma_purge(vma);
 		if (close)
 			msm_gem_vma_close(vma);
@@ -477,19 +477,19 @@ put_iova_vmas(struct drm_gem_object *obj)
 }
 
 static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace,
+		struct msm_gem_vm *vm,
 		u64 range_start, u64 range_end)
 {
 	struct msm_gem_vma *vma;
 
 	msm_gem_assert_locked(obj);
 
-	vma = lookup_vma(obj, aspace);
+	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
 		int ret;
 
-		vma = add_vma(obj, aspace);
+		vma = add_vma(obj, vm);
 		if (IS_ERR(vma))
 			return vma;
 
@@ -561,13 +561,13 @@ void msm_gem_unpin_active(struct drm_gem_object *obj)
 }
 
 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
-	return get_vma_locked(obj, aspace, 0, U64_MAX);
+	return get_vma_locked(obj, vm, 0, U64_MAX);
 }
 
 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova,
+		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end)
 {
 	struct msm_gem_vma *vma;
@@ -575,7 +575,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 
 	msm_gem_assert_locked(obj);
 
-	vma = get_vma_locked(obj, aspace, range_start, range_end);
+	vma = get_vma_locked(obj, vm, range_start, range_end);
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
 
@@ -593,13 +593,13 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
  * limits iova to specified range (in pages)
  */
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova,
+		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end)
 {
 	int ret;
 
 	msm_gem_lock(obj);
-	ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end);
+	ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end);
 	msm_gem_unlock(obj);
 
 	return ret;
@@ -607,9 +607,9 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
 
 /* get iova and pin it. Should have a matching put */
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova)
+		struct msm_gem_vm *vm, uint64_t *iova)
 {
-	return msm_gem_get_and_pin_iova_range(obj, aspace, iova, 0, U64_MAX);
+	return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX);
 }
 
 /*
@@ -617,13 +617,13 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
 * valid for the life of the object
 */
 int msm_gem_get_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova)
+		struct msm_gem_vm *vm, uint64_t *iova)
 {
 	struct msm_gem_vma *vma;
 	int ret = 0;
 
 	msm_gem_lock(obj);
-	vma = get_vma_locked(obj, aspace, 0, U64_MAX);
+	vma = get_vma_locked(obj, vm, 0, U64_MAX);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 	} else {
@@ -635,9 +635,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 }
 
 static int clear_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
-	struct msm_gem_vma *vma = lookup_vma(obj, aspace);
+	struct msm_gem_vma *vma = lookup_vma(obj, vm);
 
 	if (!vma)
 		return 0;
@@ -657,20 +657,20 @@ static int clear_iova(struct drm_gem_object *obj,
 * Setting an iova of zero will clear the vma.
 */
 int msm_gem_set_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t iova)
+		struct msm_gem_vm *vm, uint64_t iova)
 {
 	int ret = 0;
 
 	msm_gem_lock(obj);
 	if (!iova) {
-		ret = clear_iova(obj, aspace);
+		ret = clear_iova(obj, vm);
 	} else {
 		struct msm_gem_vma *vma;
-		vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
+		vma = get_vma_locked(obj, vm, iova, iova + obj->size);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
 		} else if (GEM_WARN_ON(vma->iova != iova)) {
-			clear_iova(obj, aspace);
+			clear_iova(obj, vm);
 			ret = -EBUSY;
 		}
 	}
@@ -685,12 +685,12 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 * to get rid of it
 */
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
 	struct msm_gem_vma *vma;
 
 	msm_gem_lock(obj);
-	vma = lookup_vma(obj, aspace);
+	vma = lookup_vma(obj, vm);
 	if (!GEM_WARN_ON(!vma)) {
 		msm_gem_unpin_locked(obj);
 	}
@@ -1008,23 +1008,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 
 	list_for_each_entry(vma, &msm_obj->vmas, list) {
 		const char *name, *comm;
-		if (vma->aspace) {
-			struct msm_gem_address_space *aspace = vma->aspace;
+		if (vma->vm) {
+			struct msm_gem_vm *vm = vma->vm;
 			struct task_struct *task =
-				get_pid_task(aspace->pid, PIDTYPE_PID);
+				get_pid_task(vm->pid, PIDTYPE_PID);
 			if (task) {
 				comm = kstrdup(task->comm, GFP_KERNEL);
 				put_task_struct(task);
 			} else {
 				comm = NULL;
 			}
-			name = aspace->name;
+			name = vm->name;
 		} else {
 			name = comm = NULL;
 		}
-		seq_printf(m, " [%s%s%s: aspace=%p, %08llx,%s]",
+		seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]",
 			name, comm ? ":" : "", comm ? comm : "",
-			vma->aspace, vma->iova,
+			vma->vm, vma->iova,
 			vma->mapped ? "mapped" : "unmapped");
 		kfree(comm);
 	}
@@ -1349,7 +1349,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 }
 
 void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-		uint32_t flags, struct msm_gem_address_space *aspace,
+		uint32_t flags, struct msm_gem_vm *vm,
 		struct drm_gem_object **bo, uint64_t *iova)
 {
 	void *vaddr;
@@ -1360,14 +1360,14 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
 		return ERR_CAST(obj);
 
 	if (iova) {
-		ret = msm_gem_get_and_pin_iova(obj, aspace, iova);
+		ret = msm_gem_get_and_pin_iova(obj, vm, iova);
 		if (ret)
 			goto err;
 	}
 
 	vaddr = msm_gem_get_vaddr(obj);
 	if (IS_ERR(vaddr)) {
-		msm_gem_unpin_iova(obj, aspace);
+		msm_gem_unpin_iova(obj, vm);
 		ret = PTR_ERR(vaddr);
 		goto err;
 	}
@@ -1384,13 +1384,13 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
 }
 
 void msm_gem_kernel_put(struct drm_gem_object *bo,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
 	if (IS_ERR_OR_NULL(bo))
 		return;
 
 	msm_gem_put_vaddr(bo);
-	msm_gem_unpin_iova(bo, aspace);
+	msm_gem_unpin_iova(bo, vm);
 	drm_gem_object_put(bo);
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index ba5c4ff76292..64ea3ed213c1 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -22,7 +22,7 @@
 #define MSM_BO_STOLEN        0x10000000 /* try to use stolen/splash memory */
 #define MSM_BO_MAP_PRIV      0x20000000 /* use IOMMU_PRIV when mapping */
 
-struct msm_gem_address_space {
+struct msm_gem_vm {
 	const char *name;
 	/* NOTE: mm managed at the page level, size is in # of pages
 	 * and position mm_node->start is in # of pages:
@@ -47,13 +47,13 @@ struct msm_gem_address_space {
 	uint64_t va_size;
 };
 
-struct msm_gem_address_space *
-msm_gem_address_space_get(struct msm_gem_address_space *aspace);
+struct msm_gem_vm *
+msm_gem_vm_get(struct msm_gem_vm *vm);
 
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace);
+void msm_gem_vm_put(struct msm_gem_vm *vm);
 
-struct msm_gem_address_space *
-msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
+struct msm_gem_vm *
+msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
 		u64 va_start, u64 size);
 
 struct msm_fence_context;
@@ -61,12 +61,12 @@ struct msm_fence_context;
 struct msm_gem_vma {
 	struct drm_mm_node node;
 	uint64_t iova;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct list_head list;    /* node in msm_gem_object::vmas */
 	bool mapped;
 };
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace);
+struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm);
 int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
 		u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct msm_gem_vma *vma);
@@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
 void msm_gem_unpin_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_active(struct drm_gem_object *obj);
 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 int msm_gem_get_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova);
+		struct msm_gem_vm *vm, uint64_t *iova);
 int msm_gem_set_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t iova);
+		struct msm_gem_vm *vm, uint64_t iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova,
+		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end);
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova);
+		struct msm_gem_vm *vm, uint64_t *iova);
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
 struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
@@ -160,10 +160,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags);
 void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-		uint32_t flags, struct msm_gem_address_space *aspace,
+		uint32_t flags, struct msm_gem_vm *vm,
 		struct drm_gem_object **bo, uint64_t *iova);
 void msm_gem_kernel_put(struct drm_gem_object *bo,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 		struct dma_buf *dmabuf, struct sg_table *sgt);
 __printf(2, 3)
@@ -257,7 +257,7 @@ struct msm_gem_submit {
 	struct kref ref;
 	struct drm_device *dev;
 	struct msm_gpu *gpu;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct list_head node;   /* node in ring submit list */
 	struct drm_exec exec;
 	uint32_t seqno;         /* Sequence number of the submit on the ring */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 3aabf7f1da6d..a59816b6b6de 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->aspace = queue->ctx->aspace;
+	submit->vm = queue->ctx->vm;
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;
@@ -311,7 +311,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		struct msm_gem_vma *vma;
 
 		/* if locking succeeded, pin bo: */
-		vma = msm_gem_get_vma_locked(obj, submit->aspace);
+		vma = msm_gem_get_vma_locked(obj, submit->vm);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
 			break;
@@ -669,7 +669,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (unlikely(!ctx->aspace) && !capable(CAP_SYS_RAWIO)) {
+	if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) {
 		DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n");
 		return -EPERM;
 	}
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 11e842dda73c..9419692f0cc8 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -10,45 +10,44 @@
 #include "msm_mmu.h"
 
 static void
-msm_gem_address_space_destroy(struct kref *kref)
+msm_gem_vm_destroy(struct kref *kref)
 {
-	struct msm_gem_address_space *aspace = container_of(kref,
-			struct msm_gem_address_space, kref);
-
-	drm_mm_takedown(&aspace->mm);
-	if (aspace->mmu)
-		aspace->mmu->funcs->destroy(aspace->mmu);
-	put_pid(aspace->pid);
-	kfree(aspace);
+	struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref);
+
+	drm_mm_takedown(&vm->mm);
+	if (vm->mmu)
+		vm->mmu->funcs->destroy(vm->mmu);
+	put_pid(vm->pid);
+	kfree(vm);
 }
 
 
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace)
+void msm_gem_vm_put(struct msm_gem_vm *vm)
 {
-	if (aspace)
-		kref_put(&aspace->kref, msm_gem_address_space_destroy);
+	if (vm)
+		kref_put(&vm->kref, msm_gem_vm_destroy);
 }
 
-struct msm_gem_address_space *
-msm_gem_address_space_get(struct msm_gem_address_space *aspace)
+struct msm_gem_vm *
+msm_gem_vm_get(struct msm_gem_vm *vm)
 {
-	if (!IS_ERR_OR_NULL(aspace))
-		kref_get(&aspace->kref);
+	if (!IS_ERR_OR_NULL(vm))
+		kref_get(&vm->kref);
 
-	return aspace;
+	return vm;
 }
 
 /* Actually unmap memory for the vma */
 void msm_gem_vma_purge(struct msm_gem_vma *vma)
 {
-	struct msm_gem_address_space *aspace = vma->aspace;
+	struct msm_gem_vm *vm = vma->vm;
 	unsigned size = vma->node.size;
 
 	/* Don't do anything if the memory isn't mapped */
 	if (!vma->mapped)
 		return;
 
-	aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size);
+	vm->mmu->funcs->unmap(vm->mmu, vma->iova, size);
 
 	vma->mapped = false;
 }
@@ -58,7 +57,7 @@ int
 msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 		struct sg_table *sgt, int size)
 {
-	struct msm_gem_address_space *aspace = vma->aspace;
+	struct msm_gem_vm *vm = vma->vm;
 	int ret;
 
 	if (GEM_WARN_ON(!vma->iova))
@@ -69,7 +68,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 
 	vma->mapped = true;
 
-	if (!aspace)
+	if (!vm)
 		return 0;
 
 	/*
@@ -81,7 +80,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot);
+	ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot);
 
 	if (ret) {
 		vma->mapped = false;
@@ -93,21 +92,21 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
 /* Close an iova.  Warn if it is still in use */
 void msm_gem_vma_close(struct msm_gem_vma *vma)
 {
-	struct msm_gem_address_space *aspace = vma->aspace;
+	struct msm_gem_vm *vm = vma->vm;
 
 	GEM_WARN_ON(vma->mapped);
 
-	spin_lock(&aspace->lock);
+	spin_lock(&vm->lock);
 	if (vma->iova)
 		drm_mm_remove_node(&vma->node);
-	spin_unlock(&aspace->lock);
+	spin_unlock(&vm->lock);
 
 	vma->iova = 0;
 
-	msm_gem_address_space_put(aspace);
+	msm_gem_vm_put(vm);
 }
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
+struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm)
 {
 	struct msm_gem_vma *vma;
 
@@ -115,7 +114,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
 	if (!vma)
 		return NULL;
 
-	vma->aspace = aspace;
+	vma->vm = vm;
 
 	return vma;
 }
@@ -124,20 +123,20 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
 int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
 		u64 range_start, u64 range_end)
 {
-	struct msm_gem_address_space *aspace = vma->aspace;
+	struct msm_gem_vm *vm = vma->vm;
 	int ret;
 
-	if (GEM_WARN_ON(!aspace))
+	if (GEM_WARN_ON(!vm))
 		return -EINVAL;
 
 	if (GEM_WARN_ON(vma->iova))
 		return -EBUSY;
 
-	spin_lock(&aspace->lock);
-	ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node,
+	spin_lock(&vm->lock);
+	ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 					  size, PAGE_SIZE, 0,
 					  range_start, range_end, 0);
-	spin_unlock(&aspace->lock);
+	spin_unlock(&vm->lock);
 
 	if (ret)
 		return ret;
@@ -145,33 +144,33 @@ int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
 	vma->iova = vma->node.start;
 	vma->mapped = false;
 
-	kref_get(&aspace->kref);
+	kref_get(&vm->kref);
 
 	return 0;
 }
 
-struct msm_gem_address_space *
-msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
+struct msm_gem_vm *
+msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
 		u64 va_start, u64 size)
 {
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
 
-	aspace = kzalloc(sizeof(*aspace), GFP_KERNEL);
-	if (!aspace)
+	vm = kzalloc(sizeof(*vm), GFP_KERNEL);
+	if (!vm)
 		return ERR_PTR(-ENOMEM);
 
-	spin_lock_init(&aspace->lock);
-	aspace->name = name;
-	aspace->mmu = mmu;
-	aspace->va_start = va_start;
-	aspace->va_size = size;
+	spin_lock_init(&vm->lock);
+	vm->name = name;
+	vm->mmu = mmu;
+	vm->va_start = va_start;
+	vm->va_size = size;
 
-	drm_mm_init(&aspace->mm, va_start, size);
+	drm_mm_init(&vm->mm, va_start, size);
 
-	kref_init(&aspace->kref);
+	kref_init(&vm->kref);
 
-	return aspace;
+	return vm;
 }
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index a8280b579832..3400a6ca8fd8 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -285,7 +285,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 
 	if (state->fault_info.ttbr0) {
 		struct msm_gpu_fault_info *info = &state->fault_info;
-		struct msm_mmu *mmu = submit->aspace->mmu;
+		struct msm_mmu *mmu = submit->vm->mmu;
 
 		msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
 					   &info->asid);
@@ -389,8 +389,8 @@ static void recover_worker(struct kthread_work *work)
 
 		/* Increment the fault counts */
 		submit->queue->faults++;
-		if (submit->aspace)
-			submit->aspace->faults++;
+		if (submit->vm)
+			submit->vm->faults++;
 
 		get_comm_cmdline(submit, &comm, &cmd);
 
@@ -828,10 +828,10 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
 }
 
 /* Return a new address space for a msm_drm_private instance */
-struct msm_gem_address_space *
-msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task)
+struct msm_gem_vm *
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
 {
-	struct msm_gem_address_space *aspace = NULL;
+	struct msm_gem_vm *vm = NULL;
 	if (!gpu)
 		return NULL;
 
@@ -839,16 +839,16 @@ msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *ta
 	 * If the target doesn't support private address spaces then return
 	 * the global one
 	 */
-	if (gpu->funcs->create_private_address_space) {
-		aspace = gpu->funcs->create_private_address_space(gpu);
-		if (!IS_ERR(aspace))
-			aspace->pid = get_pid(task_pid(task));
+	if (gpu->funcs->create_private_vm) {
+		vm = gpu->funcs->create_private_vm(gpu);
+		if (!IS_ERR(vm))
+			vm->pid = get_pid(task_pid(task));
 	}
 
-	if (IS_ERR_OR_NULL(aspace))
-		aspace = msm_gem_address_space_get(gpu->aspace);
+	if (IS_ERR_OR_NULL(vm))
+		vm = msm_gem_vm_get(gpu->vm);
 
-	return aspace;
+	return vm;
 }
 
 int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
@@ -943,18 +943,18 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 	msm_devfreq_init(gpu);
 
 
-	gpu->aspace = gpu->funcs->create_address_space(gpu, pdev);
+	gpu->vm = gpu->funcs->create_vm(gpu, pdev);
 
-	if (gpu->aspace == NULL)
+	if (gpu->vm == NULL)
 		DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name);
-	else if (IS_ERR(gpu->aspace)) {
-		ret = PTR_ERR(gpu->aspace);
+	else if (IS_ERR(gpu->vm)) {
+		ret = PTR_ERR(gpu->vm);
 		goto fail;
 	}
 
 	memptrs = msm_gem_kernel_new(drm,
 		sizeof(struct msm_rbmemptrs) * nr_rings,
-		check_apriv(gpu, MSM_BO_WC), gpu->aspace, &gpu->memptrs_bo,
+		check_apriv(gpu, MSM_BO_WC), gpu->vm, &gpu->memptrs_bo,
 		&memptrs_iova);
 
 	if (IS_ERR(memptrs)) {
@@ -998,7 +998,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		gpu->rb[i] = NULL;
 	}
 
-	msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace);
+	msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
 
 	platform_set_drvdata(pdev, NULL);
 	return ret;
@@ -1015,11 +1015,11 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
 		gpu->rb[i] = NULL;
 	}
 
-	msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace);
+	msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
 
-	if (!IS_ERR_OR_NULL(gpu->aspace)) {
-		gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu);
-		msm_gem_address_space_put(gpu->aspace);
+	if (!IS_ERR_OR_NULL(gpu->vm)) {
+		gpu->vm->mmu->funcs->detach(gpu->vm->mmu);
+		msm_gem_vm_put(gpu->vm);
 	}
 
 	if (gpu->worker) {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d30a1eedfda6..9d69dcad6612 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -78,10 +78,8 @@ struct msm_gpu_funcs {
 	/* note: gpu_set_freq() can assume that we have been pm_resumed */
 	void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
			     bool suspended);
-	struct msm_gem_address_space *(*create_address_space)
-		(struct msm_gpu *gpu, struct platform_device *pdev);
-	struct msm_gem_address_space *(*create_private_address_space)
-		(struct msm_gpu *gpu);
+	struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
+	struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 
 	/**
@@ -236,7 +234,7 @@ struct msm_gpu {
 	void __iomem *mmio;
 	int irq;
 
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	/* Power Control: */
 	struct regulator *gpu_reg, *gpu_cx;
@@ -358,8 +356,8 @@ struct msm_context {
 	 */
 	int queueid;
 
-	/** @aspace: the per-process GPU address-space */
-	struct msm_gem_address_space *aspace;
+	/** @vm: the per-process GPU address-space */
+	struct msm_gem_vm *vm;
 
 	/** @kref: the reference count */
 	struct kref ref;
@@ -669,8 +667,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs,
 		const char *name, struct msm_gpu_config *config);
 
-struct msm_gem_address_space *
-msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task);
+struct msm_gem_vm *
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
 
 void msm_gpu_cleanup(struct msm_gpu *gpu);
 
diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
index 35d5397e73b4..88504c4b842f 100644
--- a/drivers/gpu/drm/msm/msm_kms.c
+++ b/drivers/gpu/drm/msm/msm_kms.c
@@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void
 	return -ENOSYS;
 }
 
-struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev)
+struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
 {
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct msm_mmu *mmu;
 	struct device *mdp_dev = dev->dev;
 	struct device *mdss_dev = mdp_dev->parent;
@@ -204,17 +204,17 @@ struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev)
 		return NULL;
 	}
 
-	aspace = msm_gem_address_space_create(mmu, "mdp_kms",
+	vm = msm_gem_vm_create(mmu, "mdp_kms",
 		0x1000, 0x100000000 - 0x1000);
-	if (IS_ERR(aspace)) {
-		dev_err(mdp_dev, "aspace create, error %pe\n", aspace);
+	if (IS_ERR(vm)) {
+		dev_err(mdp_dev, "vm create, error %pe\n", vm);
 		mmu->funcs->destroy(mmu);
-		return aspace;
+		return vm;
 	}
 
-	msm_mmu_set_fault_handler(aspace->mmu, kms, msm_kms_fault_handler);
+	msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler);
 
-	return aspace;
+	return vm;
 }
 
 void msm_drm_kms_uninit(struct device *dev)
diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
index 43b58d052ee6..f45996a03e15 100644
--- a/drivers/gpu/drm/msm/msm_kms.h
+++ b/drivers/gpu/drm/msm/msm_kms.h
@@ -139,7 +139,7 @@ struct msm_kms {
 	atomic_t fault_snapshot_capture;
 
 	/* mapper-id used to request GEM buffer mapped for scanout: */
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	/* disp snapshot support */
 	struct kthread_worker *dump_worker;
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index 89dce15eed3b..552b8da9e5f7 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -84,7 +84,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
 
 	ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ,
 		check_apriv(gpu, MSM_BO_WC | MSM_BO_GPU_READONLY),
-		gpu->aspace, &ring->bo, &ring->iova);
+		gpu->vm, &ring->bo, &ring->iova);
 
 	if (IS_ERR(ring->start)) {
 		ret = PTR_ERR(ring->start);
@@ -131,7 +131,7 @@ void msm_ringbuffer_destroy(struct msm_ringbuffer *ring)
 
 	msm_fence_context_free(ring->fctx);
 
-	msm_gem_kernel_put(ring->bo, ring->gpu->aspace);
+	msm_gem_kernel_put(ring->bo, ring->gpu->vm);
 
 	kfree(ring);
 }
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 1acc0fe36353..6298233c3568 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref)
 		kfree(ctx->entities[i]);
 	}
 
-	msm_gem_address_space_put(ctx->aspace);
+	msm_gem_vm_put(ctx->vm);
 	kfree(ctx->comm);
 	kfree(ctx->cmdline);
 	kfree(ctx);
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Sean Paul, Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar,
	Jessica Zhang, Marijn Suijten, David Airlie, Simona Vetter,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 07/42] drm/msm: Remove vram carveout support
Date: Sun, 29 Jun 2025 13:12:50 -0700
Message-ID: <20250629201530.25775-8-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

It is standing in the way of drm_gpuvm / VM_BIND support.  Not to
mention frequently broken and rarely tested.  And I think only needed
for a 10yr old not quite upstream SoC (msm8974).  Maybe we can add
support back in later, but I'm doubtful.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c      |   8 --
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c      |  15 ---
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c      |  15 ---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c      |   3 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c      |   3 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c |   4 -
 drivers/gpu/drm/msm/adreno/adreno_gpu.c    |   4 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.h    |   1 -
 drivers/gpu/drm/msm/msm_drv.c              | 117 +-----------------
 drivers/gpu/drm/msm/msm_drv.h              |  11 --
 drivers/gpu/drm/msm/msm_gem.c              | 131 ++-------------------
 drivers/gpu/drm/msm/msm_gem.h              |   5 -
 drivers/gpu/drm/msm/msm_gem_submit.c       |   5 -
 drivers/gpu/drm/msm/msm_gpu.c              |   6 +-
 14 files changed, 19 insertions(+), 309 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 5eb063ed0b46..095bae92e3e8 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -551,14 +551,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
 	else
 		adreno_gpu->registers = a220_registers;
 
-	if (!gpu->vm) {
-		dev_err(dev->dev, "No memory protection without MMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	return gpu;
 
 fail:
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index 434e6ededf83..a956cd79195e 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -581,21 +581,6 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	if (!gpu->vm) {
-		/* TODO we think it is possible to configure the GPU to
-		 * restrict access to VRAM carveout.  But the required
-		 * registers are unknown.  For now just bail out and
-		 * limp along with just modesetting.  If it turns out
-		 * to not be possible to restrict access, then we must
-		 * implement a cmdstream validator.
-		 */
-		DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
 	if (IS_ERR(icc_path)) {
 		ret = PTR_ERR(icc_path);
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index 2c75debcfd84..83f6329accba 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -695,21 +695,6 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
 
-	if (!gpu->vm) {
-		/* TODO we think it is possible to configure the GPU to
-		 * restrict access to VRAM carveout.  But the required
-		 * registers are unknown.  For now just bail out and
-		 * limp along with just modesetting.  If it turns out
-		 * to not be possible to restrict access, then we must
-		 * implement a cmdstream validator.
-		 */
-		DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
 	if (IS_ERR(icc_path)) {
 		ret = PTR_ERR(icc_path);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index dc31bc0afca4..04138a06724b 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1786,8 +1786,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	if (gpu->vm)
-		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
+	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 5078152eb8d3..7b3be2b46cc4 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2560,8 +2560,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	if (gpu->vm)
-		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
+	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
index 16e7ac444efd..27dbbb302081 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
@@ -16,10 +16,6 @@ bool snapshot_debugbus = false;
 MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)");
 module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600);
 
-bool allow_vram_carveout = false;
-MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU");
-module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600);
-
 int enable_preemption = -1;
 MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on, 0=disable, -1=auto (default))");
 module_param(enable_preemption, int, 0600);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index be723fe4de2b..0f71c39696a5 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -209,7 +209,9 @@ adreno_iommu_create_vm(struct msm_gpu *gpu,
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
-	if (IS_ERR_OR_NULL(mmu))
+	if (!mmu)
+		return ERR_PTR(-ENODEV);
+	else if (IS_ERR_OR_NULL(mmu))
 		return ERR_CAST(mmu);
 
 	geometry = msm_iommu_get_geometry(mmu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 4fa4b11442ba..b1761f990aa1 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -18,7 +18,6 @@
 #include "adreno_pm4.xml.h"
 
 extern bool snapshot_debugbus;
-extern bool allow_vram_carveout;
 
 enum {
 	ADRENO_FW_PM4 = 0,
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 49c868e33d70..c314fd470d69 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -46,12 +46,6 @@
 #define MSM_VERSION_MINOR	12
 #define MSM_VERSION_PATCHLEVEL	0
 
-static void msm_deinit_vram(struct drm_device *ddev);
-
-static char *vram = "16m";
-MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
-module_param(vram, charp, 0);
-
 bool dumpstate;
 MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors");
 module_param(dumpstate, bool, 0600);
@@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev)
 	if (priv->kms)
 		msm_drm_kms_uninit(dev);
 
-	msm_deinit_vram(ddev);
-
 	component_unbind_all(dev, ddev);
 
 	ddev->dev_private = NULL;
@@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev)
 	return 0;
 }
 
-bool msm_use_mmu(struct drm_device *dev)
-{
-	struct msm_drm_private *priv = dev->dev_private;
-
-	/*
-	 * a2xx comes with its own MMU
-	 * On other platforms IOMMU can be declared specified either for the
-	 * MDP/DPU device or for its parent, MDSS device.
-	 */
-	return priv->is_a2xx ||
-		device_iommu_mapped(dev->dev) ||
-		device_iommu_mapped(dev->dev->parent);
-}
-
-static int msm_init_vram(struct drm_device *dev)
-{
-	struct msm_drm_private *priv = dev->dev_private;
-	struct device_node *node;
-	unsigned long size = 0;
-	int ret = 0;
-
-	/* In the device-tree world, we could have a 'memory-region'
-	 * phandle, which gives us a link to our "vram".  Allocating
-	 * is all nicely abstracted behind the dma api, but we need
-	 * to know the entire size to allocate it all in one go.
There - * are two cases: - * 1) device with no IOMMU, in which case we need exclusive - * access to a VRAM carveout big enough for all gpu - * buffers - * 2) device with IOMMU, but where the bootloader puts up - * a splash screen. In this case, the VRAM carveout - * need only be large enough for fbdev fb. But we need - * exclusive access to the buffer to avoid the kernel - * using those pages for other purposes (which appears - * as corruption on screen before we have a chance to - * load and do initial modeset) - */ - - node =3D of_parse_phandle(dev->dev->of_node, "memory-region", 0); - if (node) { - struct resource r; - ret =3D of_address_to_resource(node, 0, &r); - of_node_put(node); - if (ret) - return ret; - size =3D r.end - r.start + 1; - DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start); - - /* if we have no IOMMU, then we need to use carveout allocator. - * Grab the entire DMA chunk carved out in early startup in - * mach-msm: - */ - } else if (!msm_use_mmu(dev)) { - DRM_INFO("using %s VRAM carveout\n", vram); - size =3D memparse(vram, NULL); - } - - if (size) { - unsigned long attrs =3D 0; - void *p; - - priv->vram.size =3D size; - - drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1); - spin_lock_init(&priv->vram.lock); - - attrs |=3D DMA_ATTR_NO_KERNEL_MAPPING; - attrs |=3D DMA_ATTR_WRITE_COMBINE; - - /* note that for no-kernel-mapping, the vaddr returned - * is bogus, but non-null if allocation succeeded: - */ - p =3D dma_alloc_attrs(dev->dev, size, - &priv->vram.paddr, GFP_KERNEL, attrs); - if (!p) { - DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n"); - priv->vram.paddr =3D 0; - return -ENOMEM; - } - - DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n", - (uint32_t)priv->vram.paddr, - (uint32_t)(priv->vram.paddr + size)); - } - - return ret; -} - -static void msm_deinit_vram(struct drm_device *ddev) -{ - struct msm_drm_private *priv =3D ddev->dev_private; - unsigned long attrs =3D DMA_ATTR_NO_KERNEL_MAPPING; - - if (!priv->vram.paddr) - 
return; - - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, - attrs); -} - static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv =3D dev_get_drvdata(dev); @@ -260,16 +151,12 @@ static int msm_drm_init(struct device *dev, const str= uct drm_driver *drv) goto err_destroy_wq; } =20 - ret =3D msm_init_vram(ddev); - if (ret) - goto err_destroy_wq; - dma_set_max_seg_size(dev, UINT_MAX); =20 /* Bind all our sub-components: */ ret =3D component_bind_all(dev, ddev); if (ret) - goto err_deinit_vram; + goto err_destroy_wq; =20 ret =3D msm_gem_shrinker_init(ddev); if (ret) @@ -306,8 +193,6 @@ static int msm_drm_init(struct device *dev, const struc= t drm_driver *drv) =20 return ret; =20 -err_deinit_vram: - msm_deinit_vram(ddev); err_destroy_wq: destroy_workqueue(priv->wq); err_put_dev: diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 8aa3412c6e36..761e7e221ad9 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -183,17 +183,6 @@ struct msm_drm_private { =20 struct msm_drm_thread event_thread[MAX_CRTCS]; =20 - /* VRAM carveout, used when no IOMMU: */ - struct { - unsigned long size; - dma_addr_t paddr; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: - */ - struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ - } vram; - struct notifier_block vmap_notifier; struct shrinker *shrinker; =20 diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 5e6c88b85fd3..b83790cc08df 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,24 +17,8 @@ #include =20 #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_gpu.h" -#include "msm_mmu.h" - -static dma_addr_t physaddr(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); 
- struct msm_drm_private *priv =3D obj->dev->dev_private; - return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) + - priv->vram.paddr; -} - -static bool use_pages(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - return !msm_obj->vram_node; -} =20 static void update_device_mem(struct msm_drm_private *priv, ssize_t size) { @@ -135,36 +119,6 @@ static void update_lru(struct drm_gem_object *obj) mutex_unlock(&priv->lru.lock); } =20 -/* allocate pages from VRAM carveout, used when no IOMMU: */ -static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - struct msm_drm_private *priv =3D obj->dev->dev_private; - dma_addr_t paddr; - struct page **p; - int ret, i; - - p =3D kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!p) - return ERR_PTR(-ENOMEM); - - spin_lock(&priv->vram.lock); - ret =3D drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages); - spin_unlock(&priv->vram.lock); - if (ret) { - kvfree(p); - return ERR_PTR(ret); - } - - paddr =3D physaddr(obj); - for (i =3D 0; i < npages; i++) { - p[i] =3D pfn_to_page(__phys_to_pfn(paddr)); - paddr +=3D PAGE_SIZE; - } - - return p; -} - static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); @@ -176,10 +130,7 @@ static struct page **get_pages(struct drm_gem_object *= obj) struct page **p; int npages =3D obj->size >> PAGE_SHIFT; =20 - if (use_pages(obj)) - p =3D drm_gem_get_pages(obj); - else - p =3D get_pages_vram(obj, npages); + p =3D drm_gem_get_pages(obj); =20 if (IS_ERR(p)) { DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n", @@ -212,18 +163,6 @@ static struct page **get_pages(struct drm_gem_object *= obj) return msm_obj->pages; } =20 -static void put_pages_vram(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - struct msm_drm_private *priv =3D obj->dev->dev_private; - - 
spin_lock(&priv->vram.lock); - drm_mm_remove_node(msm_obj->vram_node); - spin_unlock(&priv->vram.lock); - - kvfree(msm_obj->pages); -} - static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); @@ -244,10 +183,7 @@ static void put_pages(struct drm_gem_object *obj) =20 update_device_mem(obj->dev->dev_private, -obj->size); =20 - if (use_pages(obj)) - drm_gem_put_pages(obj, msm_obj->pages, true, false); - else - put_pages_vram(obj); + drm_gem_put_pages(obj, msm_obj->pages, true, false); =20 msm_obj->pages =3D NULL; update_lru(obj); @@ -1207,19 +1143,10 @@ struct drm_gem_object *msm_gem_new(struct drm_devic= e *dev, uint32_t size, uint32 struct msm_drm_private *priv =3D dev->dev_private; struct msm_gem_object *msm_obj; struct drm_gem_object *obj =3D NULL; - bool use_vram =3D false; int ret; =20 size =3D PAGE_ALIGN(size); =20 - if (!msm_use_mmu(dev)) - use_vram =3D true; - else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size) - use_vram =3D true; - - if (GEM_WARN_ON(use_vram && !priv->vram.size)) - return ERR_PTR(-EINVAL); - /* Disallow zero sized objects as they make the underlying * infrastructure grumpy */ @@ -1232,44 +1159,16 @@ struct drm_gem_object *msm_gem_new(struct drm_devic= e *dev, uint32_t size, uint32 =20 msm_obj =3D to_msm_bo(obj); =20 - if (use_vram) { - struct msm_gem_vma *vma; - struct page **pages; - - drm_gem_private_object_init(dev, obj, size); - - msm_gem_lock(obj); - - vma =3D add_vma(obj, NULL); - msm_gem_unlock(obj); - if (IS_ERR(vma)) { - ret =3D PTR_ERR(vma); - goto fail; - } - - to_msm_bo(obj)->vram_node =3D &vma->node; - - msm_gem_lock(obj); - pages =3D get_pages(obj); - msm_gem_unlock(obj); - if (IS_ERR(pages)) { - ret =3D PTR_ERR(pages); - goto fail; - } - - vma->iova =3D physaddr(obj); - } else { - ret =3D drm_gem_object_init(dev, obj, size); - if (ret) - goto fail; - /* - * Our buffers are kept pinned, so allocating them from the - * MOVABLE zone is a really bad idea, and 
conflicts with CMA. - * See comments above new_inode() why this is required _and_ - * expected if you're going to pin these pages. - */ - mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); - } + ret =3D drm_gem_object_init(dev, obj, size); + if (ret) + goto fail; + /* + * Our buffers are kept pinned, so allocating them from the + * MOVABLE zone is a really bad idea, and conflicts with CMA. + * See comments above new_inode() why this is required _and_ + * expected if you're going to pin these pages. + */ + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); =20 drm_gem_lru_move_tail(&priv->lru.unbacked, obj); =20 @@ -1297,12 +1196,6 @@ struct drm_gem_object *msm_gem_import(struct drm_dev= ice *dev, uint32_t size; int ret, npages; =20 - /* if we don't have IOMMU, don't bother pretending we can import: */ - if (!msm_use_mmu(dev)) { - DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n"); - return ERR_PTR(-EINVAL); - } - size =3D PAGE_ALIGN(dmabuf->size); =20 ret =3D msm_gem_new_impl(dev, size, MSM_BO_WC, &obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 64ea3ed213c1..e47e187ecd00 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -102,11 +102,6 @@ struct msm_gem_object { =20 struct list_head vmas; /* list of msm_gem_vma */ =20 - /* For physically contiguous buffers. Used when we don't have - * an IOMMU. Also used for stolen/splashscreen buffer. 
- */ - struct drm_mm_node *vram_node; - char name[32]; /* Identifier to print for the debugfs files */ =20 /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm= _gem_submit.c index a59816b6b6de..c184b1a1f522 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -669,11 +669,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void = *data, if (args->pad) return -EINVAL; =20 - if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { - DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); - return -EPERM; - } - /* for now, we just have 3d pipe.. eventually this would need to * be more clever to dispatch to appropriate gpu module: */ diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 3400a6ca8fd8..47268aae7d54 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -942,12 +942,8 @@ int msm_gpu_init(struct drm_device *drm, struct platfo= rm_device *pdev, =20 msm_devfreq_init(gpu); =20 - gpu->vm =3D gpu->funcs->create_vm(gpu, pdev); - - if (gpu->vm =3D=3D NULL) - DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", nam= e); - else if (IS_ERR(gpu->vm)) { + if (IS_ERR(gpu->vm)) { ret =3D PTR_ERR(gpu->vm); goto fail; } --=20 2.50.0 From nobody Wed Oct 8 10:02:28 2025 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6503E238C3D for ; Sun, 29 Jun 2025 20:16:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.168.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751228179; cv=none; 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 08/42] drm/msm: Collapse vma allocation and initialization
Date: Sun, 29 Jun 2025 13:12:51 -0700
Message-ID: <20250629201530.25775-9-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Now that we've dropped vram carveout support, we can collapse vma
allocation and initialization.  This better matches how things work
with drm_gpuvm.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c     | 30 +++----------------------
 drivers/gpu/drm/msm/msm_gem.h     |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c | 36 +++++++++++++------------------
 3 files changed, 20 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b83790cc08df..9fa830209b1e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -333,23 +333,6 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
 	return offset;
 }
 
-static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma;
-
-	msm_gem_assert_locked(obj);
-
-	vma = msm_gem_vma_new(vm);
-	if (!vma)
-		return ERR_PTR(-ENOMEM);
-
-	list_add_tail(&vma->list, &msm_obj->vmas);
-
-	return vma;
-}
-
 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm)
 {
@@ -416,6 +399,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm,
 		u64 range_start, u64 range_end)
 {
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
 	msm_gem_assert_locked(obj);
@@ -423,18 +407,10 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
-		int ret;
-
-		vma = add_vma(obj, vm);
+		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
 		if (IS_ERR(vma))
 			return vma;
-
-		ret = msm_gem_vma_init(vma, obj->size,
-			range_start, range_end);
-		if (ret) {
-			del_vma(vma);
-			return ERR_PTR(ret);
-		}
+		list_add_tail(&vma->list, &msm_obj->vmas);
 	} else {
 		GEM_WARN_ON(vma->iova < range_start);
 		GEM_WARN_ON((vma->iova + obj->size) > range_end);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index e47e187ecd00..cf1e86252219 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -66,8 +66,8 @@ struct msm_gem_vma {
 	bool mapped;
 };
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm);
-int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
+struct msm_gem_vma *
+msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 		u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct msm_gem_vma *vma);
 int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 9419692f0cc8..6d18364f321c 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -106,47 +106,41 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 	msm_gem_vm_put(vm);
 }
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm)
+/* Create a new vma and allocate an iova for it */
+struct msm_gem_vma *
+msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+		u64 range_start, u64 range_end)
 {
 	struct msm_gem_vma *vma;
+	int ret;
 
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	vma->vm = vm;
 
-	return vma;
-}
-
-/* Initialize a new vma and allocate an iova for it */
-int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
-		u64 range_start, u64 range_end)
-{
-	struct msm_gem_vm *vm = vma->vm;
-	int ret;
-
-	if (GEM_WARN_ON(!vm))
-		return -EINVAL;
-
-	if (GEM_WARN_ON(vma->iova))
-		return -EBUSY;
-
 	spin_lock(&vm->lock);
 	ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
-					  size, PAGE_SIZE, 0,
+					  obj->size, PAGE_SIZE, 0,
 					  range_start, range_end, 0);
 	spin_unlock(&vm->lock);
 
 	if (ret)
-		return ret;
+		goto err_free_vma;
 
 	vma->iova = vma->node.start;
 	vma->mapped = false;
 
+	INIT_LIST_HEAD(&vma->list);
+
 	kref_get(&vm->kref);
 
-	return 0;
+	return vma;
+
+err_free_vma:
+	kfree(vma);
+	return ERR_PTR(ret);
 }
 
 struct msm_gem_vm *
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 09/42] drm/msm: Collapse vma close and delete
Date: Sun, 29 Jun 2025 13:12:52 -0700
Message-ID: <20250629201530.25775-10-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

This fits better with drm_gpuvm/drm_gpuva.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c     | 16 +++-------------
 drivers/gpu/drm/msm/msm_gem_vma.c |  2 ++
 2 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 9fa830209b1e..7b0430628834 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -349,15 +349,6 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 	return NULL;
 }
 
-static void del_vma(struct msm_gem_vma *vma)
-{
-	if (!vma)
-		return;
-
-	list_del(&vma->list);
-	kfree(vma);
-}
-
 /*
  * If close is true, this also closes the VMA (releasing the allocated
  * iova range) in addition to removing the iommu mapping. In the eviction
@@ -368,11 +359,11 @@ static void
 put_iova_spaces(struct drm_gem_object *obj, bool close)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma;
+	struct msm_gem_vma *vma, *tmp;
 
 	msm_gem_assert_locked(obj);
 
-	list_for_each_entry(vma, &msm_obj->vmas, list) {
+	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
 		if (vma->vm) {
 			msm_gem_vma_purge(vma);
 			if (close)
@@ -391,7 +382,7 @@ put_iova_vmas(struct drm_gem_object *obj)
 	msm_gem_assert_locked(obj);
 
 	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
-		del_vma(vma);
+		msm_gem_vma_close(vma);
 	}
 }
 
@@ -556,7 +547,6 @@ static int clear_iova(struct drm_gem_object *obj,
 
 	msm_gem_vma_purge(vma);
 	msm_gem_vma_close(vma);
-	del_vma(vma);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 6d18364f321c..ca29e81d79d2 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -102,8 +102,10 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 	spin_unlock(&vm->lock);
 
 	vma->iova = 0;
+	list_del(&vma->list);
 
 	msm_gem_vm_put(vm);
+	kfree(vma);
 }
 
 /* Create a new vma and allocate an
iova for it */
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 10/42] drm/msm: Don't close VMAs on purge
Date: Sun, 29 Jun 2025 13:12:53 -0700
Message-ID: <20250629201530.25775-11-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Previously we'd also tear down the VMA, making the address space
available again.  But with the drm_gpuvm conversion, this would require
holding the locks of all VMs the GEM object is mapped in, which is
problematic for the shrinker.

Instead just let the VMA hang around until the GEM object is freed.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 7b0430628834..a20ae783f244 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -755,7 +755,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	GEM_WARN_ON(!is_purgeable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, true);
+	put_iova_spaces(obj, false);
 
 	msm_gem_vunmap(obj);
 
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Jun Nie, linux-kernel@vger.kernel.org
(open list)
Subject: [PATCH v9 11/42] drm/msm: Stop passing vm to msm_framebuffer
Date: Sun, 29 Jun 2025 13:12:54 -0700
Message-ID: <20250629201530.25775-12-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

The fb
only deals with kms->vm, so make that explicit.  This will let us start
refcounting the number of times the fb is pinned, so that we only unpin
the vma after the last user of the fb is done.  Having a single
reference count really only works if there is only a single vm.

Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 11 +++-------
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 +++++++----------
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h |  3 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c   | 20 ++++++-------------
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h   |  2 --
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c  | 18 ++++++-----------
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c  | 18 ++++++-----------
 drivers/gpu/drm/msm/msm_drv.h               |  9 +++------
 drivers/gpu/drm/msm/msm_fb.c                | 15 +++++++-------
 9 files changed, 39 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index 32e208ee946d..9a54da1c9e3c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -566,7 +566,6 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 		struct drm_writeback_job *job)
 {
 	const struct msm_format *format;
-	struct msm_gem_vm *vm;
 	struct dpu_hw_wb_cfg *wb_cfg;
 	int ret;
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
@@ -576,13 +575,12 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 
 	wb_enc->wb_job = job;
 	wb_enc->wb_conn = job->connector;
-	vm = phys_enc->dpu_kms->base.vm;
 
 	wb_cfg = &wb_enc->wb_cfg;
 
 	memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg));
 
-	ret = msm_framebuffer_prepare(job->fb, vm, false);
+	ret = msm_framebuffer_prepare(job->fb, false);
 	if (ret) {
 		DPU_ERROR("prep fb failed, %d\n", ret);
 		return;
@@ -596,7 +594,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 		return;
 	}
 
-	dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest);
+	dpu_format_populate_addrs(job->fb, &wb_cfg->dest);
 
 	wb_cfg->dest.width = job->fb->width;
 	wb_cfg->dest.height = job->fb->height;
@@ -619,14 +617,11 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc
 		struct drm_writeback_job *job)
 {
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
-	struct msm_gem_vm *vm;
 
 	if (!job->fb)
 		return;
 
-	vm = phys_enc->dpu_kms->base.vm;
-
-	msm_framebuffer_cleanup(job->fb, vm, false);
+	msm_framebuffer_cleanup(job->fb, false);
 	wb_enc->wb_job = NULL;
 	wb_enc->wb_conn = NULL;
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
index d115b79af771..b0d585c5315c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
@@ -274,15 +274,14 @@ int dpu_format_populate_plane_sizes(
 	return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout);
 }
 
-static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
-		struct drm_framebuffer *fb,
+static void _dpu_format_populate_addrs_ubwc(struct drm_framebuffer *fb,
 		struct dpu_hw_fmt_layout *layout)
 {
 	const struct msm_format *fmt;
 	uint32_t base_addr = 0;
 	bool meta;
 
-	base_addr = msm_framebuffer_iova(fb, vm, 0);
+	base_addr = msm_framebuffer_iova(fb, 0);
 
 	fmt = msm_framebuffer_format(fb);
 	meta = MSM_FORMAT_IS_UBWC(fmt);
@@ -355,26 +354,23 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
 	}
 }
 
-static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm,
-		struct drm_framebuffer *fb,
+static void _dpu_format_populate_addrs_linear(struct drm_framebuffer *fb,
 		struct dpu_hw_fmt_layout *layout)
 {
 	unsigned int i;
 
 	/* Populate addresses for simple formats here */
 	for (i = 0; i < layout->num_planes; ++i)
-		layout->plane_addr[i] = msm_framebuffer_iova(fb, vm, i);
+		layout->plane_addr[i] = msm_framebuffer_iova(fb, i);
 }
 
 /**
  * dpu_format_populate_addrs - populate buffer addresses based on
  *                             mmu, fb, and format found in the fb
- * @vm: address space pointer
  * @fb: framebuffer pointer
  * @layout: format layout structure to populate
  */
-void dpu_format_populate_addrs(struct msm_gem_vm *vm,
-		struct drm_framebuffer *fb,
+void dpu_format_populate_addrs(struct drm_framebuffer *fb,
 		struct dpu_hw_fmt_layout *layout)
 {
 	const struct msm_format *fmt;
@@ -384,7 +380,7 @@ void dpu_format_populate_addrs(struct msm_gem_vm *vm,
 
 	/* Populate the addresses given the fb */
 	if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt))
-		_dpu_format_populate_addrs_ubwc(vm, fb, layout);
+		_dpu_format_populate_addrs_ubwc(fb, layout);
 	else
-		_dpu_format_populate_addrs_linear(vm, fb, layout);
+		_dpu_format_populate_addrs_linear(fb, layout);
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
index 989f3e13c497..dc03f522e616 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
@@ -31,8 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats,
 	return false;
 }
 
-void dpu_format_populate_addrs(struct msm_gem_vm *vm,
-		struct drm_framebuffer *fb,
+void dpu_format_populate_addrs(struct drm_framebuffer *fb,
 		struct dpu_hw_fmt_layout *layout);
 
 int dpu_format_populate_plane_sizes(
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 6d47f43f52f7..07f0461223c3 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -646,7 +646,6 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	struct drm_framebuffer *fb = new_state->fb;
 	struct dpu_plane *pdpu = to_dpu_plane(plane);
 	struct dpu_plane_state *pstate = to_dpu_plane_state(new_state);
-	struct dpu_kms *kms = _dpu_plane_get_kms(&pdpu->base);
 	int ret;
 
 	if (!new_state->fb)
@@ -654,9 +653,6 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id);
 
-	/* cache vm */
-	pstate->vm = kms->base.vm;
-
 	/*
 	 * TODO: Need to sort out the msm_framebuffer_prepare() call below so
 	 * we can use msm_atomic_prepare_fb() instead of doing the
@@ -664,13 +660,10 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	 */
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	if (pstate->vm) {
-		ret = msm_framebuffer_prepare(new_state->fb,
-				pstate->vm, pstate->needs_dirtyfb);
-		if (ret) {
-			DPU_ERROR("failed to prepare framebuffer\n");
-			return ret;
-		}
+	ret = msm_framebuffer_prepare(new_state->fb, pstate->needs_dirtyfb);
+	if (ret) {
+		DPU_ERROR("failed to prepare framebuffer\n");
+		return ret;
 	}
 
 	return 0;
@@ -689,8 +682,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plane,
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id);
 
-	msm_framebuffer_cleanup(old_state->fb, old_pstate->vm,
-			old_pstate->needs_dirtyfb);
+	msm_framebuffer_cleanup(old_state->fb, old_pstate->needs_dirtyfb);
 }
 
 static int dpu_plane_check_inline_rotation(struct dpu_plane *pdpu,
@@ -1457,7 +1449,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane,
 	pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe);
 	pdpu->is_rt_pipe = is_rt_pipe;
 
-	dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout);
+	dpu_format_populate_addrs(new_state->fb, &pstate->layout);
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT
 			", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
index 3578f52048a5..a3a6e9028333 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
@@ -17,7 +17,6 @@
 /**
  * struct dpu_plane_state: Define dpu extension of drm plane state object
  * @base:	base drm plane state object
- * @vm:		pointer to address space for input/output buffers
  * @pipe:	software pipe description
  * @r_pipe:	software pipe description of the second pipe
  * @pipe_cfg:	software pipe configuration
@@ -34,7 +33,6 @@
  */
 struct dpu_plane_state {
 	struct drm_plane_state base;
-	struct msm_gem_vm *vm;
 	struct dpu_sw_pipe pipe;
 	struct dpu_sw_pipe r_pipe;
 	struct dpu_sw_pipe_cfg pipe_cfg;
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
index 7743be6167f8..098c3b5ff2b2 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
@@ -79,30 +79,25 @@ static const struct drm_plane_funcs mdp4_plane_funcs = {
 static int mdp4_plane_prepare_fb(struct drm_plane *plane,
 		struct drm_plane_state *new_state)
 {
-	struct msm_drm_private *priv = plane->dev->dev_private;
-	struct msm_kms *kms = priv->kms;
-
 	if (!new_state->fb)
 		return 0;
 
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	return msm_framebuffer_prepare(new_state->fb, kms->vm, false);
+	return msm_framebuffer_prepare(new_state->fb, false);
 }
 
 static void mdp4_plane_cleanup_fb(struct drm_plane *plane,
 		struct drm_plane_state *old_state)
 {
 	struct mdp4_plane *mdp4_plane = to_mdp4_plane(plane);
-	struct mdp4_kms *mdp4_kms = get_kms(plane);
-	struct msm_kms *kms = &mdp4_kms->base.base;
 	struct drm_framebuffer *fb = old_state->fb;
 
 	if (!fb)
 		return;
 
 	DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id);
-	msm_framebuffer_cleanup(fb, kms->vm, false);
+	msm_framebuffer_cleanup(fb, false);
 }
 
 
@@ -141,7 +136,6 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane,
 {
 	struct mdp4_plane *mdp4_plane = to_mdp4_plane(plane);
 	struct mdp4_kms *mdp4_kms = get_kms(plane);
-	struct msm_kms *kms = &mdp4_kms->base.base;
 	enum mdp4_pipe pipe = mdp4_plane->pipe;
 
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRC_STRIDE_A(pipe),
@@ -153,13 +147,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane,
 			MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
 
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 0));
+			msm_framebuffer_iova(fb, 0));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 1));
+			msm_framebuffer_iova(fb, 1));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 2));
+			msm_framebuffer_iova(fb, 2));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 3));
+			msm_framebuffer_iova(fb, 3));
 }
 
 static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms,
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
index 9f68a4747203..7c790406d533 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
@@ -135,8 +135,6 @@ static const struct drm_plane_funcs mdp5_plane_funcs = {
 static int mdp5_plane_prepare_fb(struct drm_plane *plane,
 		struct drm_plane_state *new_state)
 {
-	struct msm_drm_private *priv = plane->dev->dev_private;
-	struct msm_kms *kms = priv->kms;
 	bool needs_dirtyfb = to_mdp5_plane_state(new_state)->needs_dirtyfb;
 
 	if (!new_state->fb)
@@ -144,14 +142,12 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plane,
 
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb);
+	return msm_framebuffer_prepare(new_state->fb, needs_dirtyfb);
 }
 
 static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
 		struct drm_plane_state *old_state)
 {
-	struct mdp5_kms *mdp5_kms = get_kms(plane);
-	struct msm_kms *kms = &mdp5_kms->base.base;
 	struct drm_framebuffer *fb = old_state->fb;
 	bool needed_dirtyfb = to_mdp5_plane_state(old_state)->needs_dirtyfb;
 
@@ -159,7 +155,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
 		return;
 
 	DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id);
-	msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb);
+	msm_framebuffer_cleanup(fb, needed_dirtyfb);
 }
 
 static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state,
@@ -467,8 +463,6 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms,
 		enum mdp5_pipe pipe,
 		struct drm_framebuffer *fb)
 {
-	struct msm_kms *kms = &mdp5_kms->base.base;
-
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_STRIDE_A(pipe),
 			MDP5_PIPE_SRC_STRIDE_A_P0(fb->pitches[0]) |
 			MDP5_PIPE_SRC_STRIDE_A_P1(fb->pitches[1]));
@@ -478,13 +472,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms,
 			MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
 
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 0));
+			msm_framebuffer_iova(fb, 0));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 1));
+			msm_framebuffer_iova(fb, 1));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 2));
+			msm_framebuffer_iova(fb, 2));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 3));
+			msm_framebuffer_iova(fb, 3));
 }
 
 /* Note: mdp5_plane->pipe_lock must be locked */
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 761e7e221ad9..eb009bd193e3 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -274,12 +274,9 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 int msm_gem_prime_pin(struct drm_gem_object *obj);
 void msm_gem_prime_unpin(struct drm_gem_object *obj);
 
-int msm_framebuffer_prepare(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, bool needs_dirtyfb);
-void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, bool needed_dirtyfb);
-uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, int plane);
+int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb);
+void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb);
+uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane);
 struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane);
 const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb);
 struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 6df318b73534..8a3b88130f4d 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -75,10 +75,10 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 
 /* prepare/pin all the fb's bo's for scanout.
 */
-int msm_framebuffer_prepare(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm,
-		bool needs_dirtyfb)
+int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 {
+	struct msm_drm_private *priv = fb->dev->dev_private;
+	struct msm_gem_vm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int ret, i, n = fb->format->num_planes;
 
@@ -98,10 +98,10 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
 	return 0;
 }
 
-void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm,
-		bool needed_dirtyfb)
+void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 {
+	struct msm_drm_private *priv = fb->dev->dev_private;
+	struct msm_gem_vm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int i, n = fb->format->num_planes;
 
@@ -115,8 +115,7 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
 	memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 }
 
-uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, int plane)
+uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	return msm_fb->iova[plane] + fb->offsets[plane];
-- 
2.50.0
a=rsa-sha256; c=relaxed/relaxed; d=qualcomm.com; h= cc:content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=qcppdkim1; bh=gBDfAmEkonF ZxrTQANE1o507a0jHlpQ+o9S1Q9lveGU=; b=e4u/sYh26Nu+yUfdzM8Vm2zWilv XJBsX7EHwcKoHwP8YGOgaSdTUZZ2vzfbSEGoboK/KdGO1Z8343Ihc4Wjw4yxmsMD cZN5TwIeKJDgr0pv6b7dayED+I9nWDQdu6/S9NOZn9VMO8Ml90KzkDNGvglE7QnR z88U9I+Xw6H42rEGwHzV4THbVlu78CBK1d1BYeqXKxZoE1Tg/ODFQL6Lf5RXqDAW 28tic4tDS5qdoKNkmzTFgGaBXpmEe4tgjFbS/JxU1Gd8Y3qYVjjyxWnlgN6Xktsk Dugrte7Vxjnrtf+lfibvnFa7rd/VL8o0/Nn/LIIX+TLVXX1C2ImXfjo3vNQ== Received: from mail-pg1-f198.google.com (mail-pg1-f198.google.com [209.85.215.198]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 47j7qm2mse-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Sun, 29 Jun 2025 20:16:29 +0000 (GMT) Received: by mail-pg1-f198.google.com with SMTP id 41be03b00d2f7-b115fb801bcso4626588a12.3 for ; Sun, 29 Jun 2025 13:16:29 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1751228188; x=1751832988; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=gBDfAmEkonFZxrTQANE1o507a0jHlpQ+o9S1Q9lveGU=; b=sjdZN2SyWeGgjwJevREJbwd0bv95ek7dN8pcPPAGxeK5Yf1ShvIvzMK2+/cJY1WBqg sFsQC7v9b2CYc/6s76aLl+0TlForJgaBxOpiIk6BogYuDl5nujCEcrEd1tKNKJzNN9gZ uoFfdWH1t3dEmvDioBlbriP9J7+oNbdi17qdNX1LzYuUhXU38vXfNZLC45K2Xldtxl9Q 4mNh+TxAWA+FRx0ZYEa4pxMfcXxkInBwy5Y0oubXRNVG7mR80BOvyYIF3T9LkjjhN101 Gz7v8ki6w3qB7x2G2T7098jE4onZBKki9KKFzAeczZozGWkLpWvZbnUJjBRK6aIKq3xO x/wQ== X-Forwarded-Encrypted: i=1; AJvYcCX/ldno0ZHbUfzOm83fp7Xl80l4HtojvSLJZnETAZ681szn6Qf0oiUFwgEnFoajF6TUI42n9Rz8Ii4mop4=@vger.kernel.org X-Gm-Message-State: AOJu0Ywg8FciHNg08ip4DI8e/cZ1/7B0JvsvooRzvL7n+1nuB8kkbBYl kHYjuhDzoZSWmeACGWm1rMk+GsEow9aRRzFCUwKFpnyt7eA0NnsEkIdnuXCO5SYl1qXKHn7xQzR 
3b9bWqQHxrojGB2QPsSPn1/W2ivGttSopu07wyWdZoWKeSmgJDKz4m67EydcAe52uUCU= X-Gm-Gg: ASbGncuMJTfk2JQ6z0jGFxlt7YHq7W76lS5bTlnNmDPnTn8gswaXKvXVt5LOU48xAqM Ofxneg5qM0aYxIbwQub7CZNkkn0eZZaguSII6eQ+xXavBxzRWUo4Bz3vQ1lamdX4FwW807w7Qcx 8LrmwgO8RoLgNFnCBNP4nrKTivXjsjbNB2QdLM1bWmeN1blTGeuCidsFx+YlNrkIpT8Tb77IC5z LA2yV+y9LEuwSAny/Xz2uupr7rhab2qktNm9biDRME6QANmaA6pb1DRRVjH9gm4b9Vvwo/FAGwG 1GBTL9hNDdt/hSPAQdPBXr3rYFwVNHqk6g== X-Received: by 2002:a05:6a20:6a28:b0:21f:53e4:1930 with SMTP id adf61e73a8af0-220a181b968mr17871079637.22.1751228188449; Sun, 29 Jun 2025 13:16:28 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGxxbBSzM9ICK5hml2CqQZ7X32XpGFPJlqTsRIpEN5BqVeVXDnuT+nk46z1uPRQyZhJCmaLVA== X-Received: by 2002:a05:6a20:6a28:b0:21f:53e4:1930 with SMTP id adf61e73a8af0-220a181b968mr17871048637.22.1751228188064; Sun, 29 Jun 2025 13:16:28 -0700 (PDT) Received: from localhost ([2601:1c0:5000:d5c:5b3e:de60:4fda:e7b1]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-74af541d233sm7603622b3a.61.2025.06.29.13.16.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 29 Jun 2025 13:16:27 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott , Antonino Maniscalco , Danilo Krummrich , Rob Clark , Dmitry Baryshkov , Abhinav Kumar , Jessica Zhang , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v9 12/42] drm/msm: Refcount framebuffer pins Date: Sun, 29 Jun 2025 13:12:55 -0700 Message-ID: <20250629201530.25775-13-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Authority-Analysis: v=2.4 cv=C4TpyRP+ c=1 sm=1 tr=0 ts=68619f1d 
cx=c_pps a=Qgeoaf8Lrialg5Z894R3/Q==:117 a=xqWC_Br6kY4A:10 a=6IFa9wvqVegA:10 a=EUspDBNiAAAA:8 a=pGLkceISAAAA:8 a=MhmIxDhvR8qEtQvFyXAA:9 a=x9snwWr2DeNwDh03kgHS:22 X-Proofpoint-ORIG-GUID: PEvRSSzeQRp5tquWRJluJXwdGjxgcH-6 X-Proofpoint-Spam-Details-Enc: AW1haW4tMjUwNjI5MDE3MiBTYWx0ZWRfXyP37o0AgYBG7 igzYb5OiCre3jB3HFuZB103tGNdb6i8Gb8H5pZRCjDNLMjK4ue1n7+iVmqYKikD4ofgqUSPxVv1 aDtuurBNo5qR8RQ4EYoDJ98Oq0qnRauTByaWeLn6nahElzn5790bael6xVE6XzFF84ocREtVxXX A/6iAcUMltKTYeFX8Pb4vZ945kxLkDr9YHOMOT2HNwyvxDgBddgXIsgp4uG2cQTmf2SQvD2qmuJ Te671FwSJy7/R1NWSoxBemi3+kPTbJGeevg8Kwq1cm/gjKLsoYvcRw8brRqE1Wq350GTrKVhXOZ qO5+5tJmXQk4YMj7CCEuitoqAp/nWr/U7vz8odGAMwpuGDSHHLf5QfZY+PwD3YsCBdH7XDuPrLB cLF8AY1R1yAILt0/mGZHy54coMtlwYbRQr3sbPvb5cnIX6MIHivwjX6Ep180sQTqAROuMoF1 X-Proofpoint-GUID: PEvRSSzeQRp5tquWRJluJXwdGjxgcH-6 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.1.7,FMLib:17.12.80.40 definitions=2025-06-27_05,2025-06-27_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 phishscore=0 suspectscore=0 bulkscore=0 lowpriorityscore=0 clxscore=1015 priorityscore=1501 spamscore=0 mlxscore=0 mlxlogscore=999 adultscore=0 malwarescore=0 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506290172 Content-Type: text/plain; charset="utf-8" We were already keeping a refcount of # of prepares (pins), to clear the iova array. Use that to avoid unpinning the iova until the last cleanup (unpin). This way, when msm_gem_unpin_iova() actually tears down the mapping, we won't have problems if the fb is being scanned out on another display (for example). 
Signed-off-by: Rob Clark Tested-by: Antonino Maniscalco Reviewed-by: Antonino Maniscalco --- drivers/gpu/drm/msm/msm_fb.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 8a3b88130f4d..3b17d83f6673 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -85,7 +85,8 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, b= ool needs_dirtyfb) if (needs_dirtyfb) refcount_inc(&msm_fb->dirtyfb); =20 - atomic_inc(&msm_fb->prepare_count); + if (atomic_inc_return(&msm_fb->prepare_count) > 1) + return 0; =20 for (i =3D 0; i < n; i++) { ret =3D msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]); @@ -108,11 +109,13 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *= fb, bool needed_dirtyfb) if (needed_dirtyfb) refcount_dec(&msm_fb->dirtyfb); =20 + if (atomic_dec_return(&msm_fb->prepare_count)) + return; + + memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); + for (i =3D 0; i < n; i++) msm_gem_unpin_iova(fb->obj[i], vm); - - if (!atomic_dec_return(&msm_fb->prepare_count)) - memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } =20 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane) --=20 2.50.0 From nobody Wed Oct 8 10:02:28 2025 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D41D2245022 for ; Sun, 29 Jun 2025 20:16:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.168.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751228196; cv=none; b=hdBNyz1/veC/N/e4ikI5Rjl9EAvak7m0/rW/OPnwS1IH+ixrk6nfNoEUw/dkzwYCuiHR48IIepLTPhc2Qarxc/wCFF45kf8I+eG+kZ7bvb/WlLfjEjfo4YDdKAkOYHZtmQX4qXyn2XjdvdPS7iHvTX0OcEEqRheqSM9KNYEh1Eg= ARC-Message-Signature: i=1; a=rsa-sha256; 
d=subspace.kernel.org; s=arc-20240116; t=1751228196; c=relaxed/simple; bh=Nv78ArpK7CU+mbti5Ijt6tW7PK+v8aiEO0+p4szLN9c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=GxJuh7uUNKL7L3T1O43oh+de99NN9RtFG6urNOxg8ZAKEBMaihuYtJFh96ReVw/19YIrqV9Hn7EWFVF7PF2Rs8F1SMfULjW/riI1SrbIQyDMRw3B2uwZWNncxoS9tF/QyaLjboJJDmkTsgkjYtoya/WpFCw7Nyi15r1wnpAD2Gs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oss.qualcomm.com; spf=pass smtp.mailfrom=oss.qualcomm.com; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b=Uf5fFYwP; arc=none smtp.client-ip=205.220.168.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oss.qualcomm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=oss.qualcomm.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b="Uf5fFYwP" Received: from pps.filterd (m0279864.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 55TFUhNm009543 for ; Sun, 29 Jun 2025 20:16:33 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qualcomm.com; h= cc:content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=qcppdkim1; bh=HXWQlBzwRs/ YFZcQ0J8q0tRYFCQz7L7Uc+xvyU8KUAI=; b=Uf5fFYwP5Dmmw3iddxncVjSSC1Z Nr7wE2zdf9aY5/rRkdQQB3NPDUUFxJfGky00pQPGKpFKl4vu89amaeIVACOIy0jr 1meBlwrJiB67UqJRZKfQfxZ+BFIjLMhdwUT9GW0fimdMzmF1vxPSjoUu2mb4Grbf vhEBI0L8fcUURFGLi36WFWaUmh5t4meedHFe9zjIoRPmLHyRvYiS1ovhoH4MPrkN 79C3aPuWUxbTvBGzs66fojan9N8jOAFzPaVVe4yv4WOQ5gPC134oV6MZ3eqD+hN5 hZA2FXH7IER309J8oRuq3gmeFeJj32UqFpLHvY6LdhvsrblKAF1CZ0oohJw== Received: from mail-pl1-f199.google.com (mail-pl1-f199.google.com [209.85.214.199]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 47j9jpth71-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Sun, 29 
Jun 2025 20:16:32 +0000 (GMT) Received: by mail-pl1-f199.google.com with SMTP id d9443c01a7336-234906c5e29so17679425ad.0 for ; Sun, 29 Jun 2025 13:16:32 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1751228192; x=1751832992; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=HXWQlBzwRs/YFZcQ0J8q0tRYFCQz7L7Uc+xvyU8KUAI=; b=ohbwoG/MtLAt8WZpQomCiQIIt9d+8844xMv/V+BCZaw1M+TetiYjYEqSNCpVhVNb6G ltRp6ysdcq0CeMnBuZT+UK6Hgaw/HYc5CqTZQMuq0HNEg/OVbI38FxIGaGk6GLK951LN hb5VzUzC6Y+KAhMMCBEJPrcoPCHvc3exw6gtFs5e6B0WOLTGMJbCWnekPWPLNyiY63AI c84/QWlSbKSL8FoHWZhbvA4mAqKs6slOSTMs8+SEd2HFG40Z3ZSR8JUQFfFUzGbwwsv8 QuMLS9J0oijp1A8b7Kf0P7oJIzqMQMaztVHHfgKOknEn48Rtse+cIF9Fobb/YBNHt0I+ JHaA== X-Forwarded-Encrypted: i=1; AJvYcCU+C92wkEUtF1azr7SUvqoMQW7U4mdMXwj5FDsKCMk4W8S11kPIyr1NHHiRwlhxbKLQdpxfZUhh6I3Kobw=@vger.kernel.org X-Gm-Message-State: AOJu0YxpiVK6ktUZ62FyM6dIOwfJhLf1uNu6n0K3i8AhHeH7HwNrJ/P1 bg1Go7ksMNp0DXe7XgkuI5Ng1YzzFMmrGV/DUGaaqqaXVaGMkbX3uMOHFEFi07JYcfTPhDWtMm6 LOkqeKWFjRgGOEsyaj0PEJ0tIJMQnvb3xDb1CsuNE3NmTbwVf1NUaaqXKjqqLmtBG4oM82ohSdg Y= X-Gm-Gg: ASbGncva9657WmhgrmJpaSzKELMwGKnMgjfuiftw8chey8qWuozVaC3uvtymM5S+5h8 eeaocf69xZ5dYW2Kf6BR6K2dP9yhCVZbO7ImnyAcYdW2cVSwhWmM3iiIT+f7gSWjM17UDks40dN B5liUIiaLe7mC8QGHeoFLTkaha4/lx0n4Fy38zcOiR38iw8v0M1wNQfZ/BBVqbO9qz1KxKHq9hf ceiVOnXw9JtU/Dj4PApyZyaqvFe2JMpShA1zZxU0bZrL8AErdQV5HF2m0Sc+w13Ub20vtr/x1uk 2c70pSHz9F+o2CwVvnHaBynP0IkxuCrEOA== X-Received: by 2002:a17:903:19ce:b0:236:6f5f:caa9 with SMTP id d9443c01a7336-23ac4605d7cmr169452695ad.32.1751228191501; Sun, 29 Jun 2025 13:16:31 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHsROYSe4iTpySAFDhw6m9Aja1N6L0BaBc7TfMBG5/pXaEY/kDJWB5Ms2WOp+IKMU2FMUpnuw== X-Received: by 2002:a17:903:19ce:b0:236:6f5f:caa9 with SMTP id d9443c01a7336-23ac4605d7cmr169452335ad.32.1751228190875; Sun, 29 Jun 2025 13:16:30 -0700 (PDT) Received: from localhost 
([2601:1c0:5000:d5c:5b3e:de60:4fda:e7b1]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-23acb3bc73csm63838875ad.193.2025.06.29.13.16.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 29 Jun 2025 13:16:30 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott , Antonino Maniscalco , Danilo Krummrich , Rob Clark , Rob Clark , Dmitry Baryshkov , Abhinav Kumar , Jessica Zhang , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , Konrad Dybcio , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v9 13/42] drm/msm: drm_gpuvm conversion Date: Sun, 29 Jun 2025 13:12:56 -0700 Message-ID: <20250629201530.25775-14-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Proofpoint-Spam-Details-Enc: AW1haW4tMjUwNjI5MDE3MiBTYWx0ZWRfX2jNGJ9IGE2Tr IBy2/A9f66z8ZHTanpl0X1nh17utiAT9+hnuKqzTF1Aj3/8o5az50Q5fTawT9AeQf7avdKsEDz/ KPmnguD87H6T1OiJHiXzdyJcOCQl8I2K/N4ATr2xfpP8pBbXY2i9LTFkBk+PtMs+od5SiL5gMy2 5xJlQ9Dy9Whn6sz2BMblRf4AQT+LhlIXXEVZRIlgRHlSfSvguOWrIs3gKhSWxZRLWqFJo612L1M uxOjhKBJC7jjor7q2VNVrAkRTGxyE7jLWXt9SP74PPEMs80DZVfxHa6ovxuVONBoRbQ6rciqhOQ o82A37R9FwWIREVj93rX51OdE1mytuGupRB1+OjYuK2AS5aqYNeiC9K+jyamyJH8qkU7zGnoVts PL+TNFLRhVcYxs/R6HFr2KgYvU9U8n5/dl7UtrUNmuoAyZNxriEv4KXEsFw6JKiVpy4pmbl4 X-Proofpoint-ORIG-GUID: OPGU2N-C-Cr45DbsPiIarYeVPvcu12Ji X-Authority-Analysis: v=2.4 cv=Tq7mhCXh c=1 sm=1 tr=0 ts=68619f21 cx=c_pps a=JL+w9abYAAE89/QcEU+0QA==:117 a=xqWC_Br6kY4A:10 a=6IFa9wvqVegA:10 a=cm27Pg_UAAAA:8 a=EUspDBNiAAAA:8 a=pGLkceISAAAA:8 a=DWwiU9S3deBeNsXQ0cYA:9 a=hD0YvAvVATwCeCSz:21 a=324X-CrmTo6CU4MGRt3R:22 X-Proofpoint-GUID: 
OPGU2N-C-Cr45DbsPiIarYeVPvcu12Ji X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.1.7,FMLib:17.12.80.40 definitions=2025-06-27_05,2025-06-27_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 lowpriorityscore=0 adultscore=0 priorityscore=1501 impostorscore=0 phishscore=0 mlxscore=0 spamscore=0 bulkscore=0 suspectscore=0 malwarescore=0 mlxlogscore=999 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506290172 Content-Type: text/plain; charset="utf-8" From: Rob Clark Now that we've realigned deletion and allocation, switch over to using drm_gpuvm/drm_gpuva. This allows us to support multiple VMAs per BO per VM, to allow mapping different parts of a single BO at different virtual addresses, which is a key requirement for sparse/VM_BIND. This prepares us for using drm_gpuvm to translate a batch of MAP/ MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/ unmap steps for updating the page tables. Since, unlike our prior vm/vma setup, with drm_gpuvm the vm_bo holds a reference to the GEM object. To prevent reference loops causing us to leak all GEM objects, we implicitly tear down the mapping when the GEM handle is close or when the obj is unpinned. Which means the submit needs to also hold a reference to the vm_bo, to prevent the VMA from being torn down while the submit is in-flight. 
Signed-off-by: Rob Clark Signed-off-by: Rob Clark Tested-by: Antonino Maniscalco Reviewed-by: Antonino Maniscalco --- drivers/gpu/drm/msm/Kconfig | 1 + drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 5 +- drivers/gpu/drm/msm/msm_drv.c | 1 + drivers/gpu/drm/msm/msm_gem.c | 166 +++++++++++++++-------- drivers/gpu/drm/msm/msm_gem.h | 90 +++++++++--- drivers/gpu/drm/msm/msm_gem_submit.c | 7 +- drivers/gpu/drm/msm/msm_gem_vma.c | 140 +++++++++++++------ drivers/gpu/drm/msm/msm_kms.c | 4 +- 12 files changed, 300 insertions(+), 135 deletions(-) diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig index 7f127e2ae442..1523bc3e9540 100644 --- a/drivers/gpu/drm/msm/Kconfig +++ b/drivers/gpu/drm/msm/Kconfig @@ -21,6 +21,7 @@ config DRM_MSM select DRM_DISPLAY_HELPER select DRM_BRIDGE_CONNECTOR select DRM_EXEC + select DRM_GPUVM select DRM_KMS_HELPER select DRM_PANEL select DRM_BRIDGE diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/ad= reno/a2xx_gpu.c index 095bae92e3e8..889480aa13ba 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_dev= ice *pdev) struct msm_mmu *mmu =3D a2xx_gpummu_new(&pdev->dev, gpu); struct msm_gem_vm *vm; =20 - vm =3D msm_gem_vm_create(mmu, "gpu", SZ_16M, - 0xfff * SZ_64K); + vm =3D msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, tr= ue); =20 if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/ad= reno/a6xx_gmu.c index 848acc382b7d..77d9ff9632d1 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1311,7 +1311,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu= , struct 
a6xx_gmu_bo *bo, return 0; } =20 -static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) +static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *= gmu) { struct msm_mmu *mmu; =20 @@ -1321,7 +1321,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) if (IS_ERR(mmu)) return PTR_ERR(mmu); =20 - gmu->vm =3D msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000); + gmu->vm =3D msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true); if (IS_ERR(gmu->vm)) return PTR_ERR(gmu->vm); =20 @@ -1940,7 +1940,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct d= evice_node *node) if (ret) goto err_put_device; =20 - ret =3D a6xx_gmu_memory_probe(gmu); + ret =3D a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu); if (ret) goto err_put_device; =20 diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/ad= reno/a6xx_gpu.c index 7b3be2b46cc4..262129cb4415 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2284,9 +2284,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu) if (IS_ERR(mmu)) return ERR_CAST(mmu); =20 - return msm_gem_vm_create(mmu, - "gpu", ADRENO_VM_START, - adreno_private_vm_size(gpu)); + return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START, + adreno_private_vm_size(gpu), true); } =20 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *= ring) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/= adreno/adreno_gpu.c index 0f71c39696a5..46199a6d0e41 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -226,7 +226,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu, start =3D max_t(u64, SZ_16M, geometry->aperture_start); size =3D geometry->aperture_end - start + 1; =20 - vm =3D msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size); + vm =3D msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0), + size, true); =20 if (IS_ERR(vm) && !IS_ERR(mmu)) 
mmu->funcs->destroy(mmu); @@ -414,12 +415,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_= context *ctx, case MSM_PARAM_VA_START: if (ctx->vm =3D=3D gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value =3D ctx->vm->va_start; + *value =3D ctx->vm->base.mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm =3D=3D gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value =3D ctx->vm->va_size; + *value =3D ctx->vm->base.mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value =3D adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm= /disp/mdp4/mdp4_kms.c index 5cb4a4bae2a6..a867c684c6d6 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -469,8 +469,9 @@ static int mdp4_kms_init(struct drm_device *dev) "contig buffers for scanout\n"); vm =3D NULL; } else { - vm =3D msm_gem_vm_create(mmu, - "mdp4", 0x1000, 0x100000000 - 0x1000); + vm =3D msm_gem_vm_create(dev, mmu, "mdp4", + 0x1000, 0x100000000 - 0x1000, + true); =20 if (IS_ERR(vm)) { if (!IS_ERR(mmu)) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index c314fd470d69..488fdf02aee9 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -780,6 +780,7 @@ static const struct file_operations fops =3D { =20 static const struct drm_driver msm_driver =3D { .driver_features =3D DRIVER_GEM | + DRIVER_GEM_GPUVA | DRIVER_RENDER | DRIVER_ATOMIC | DRIVER_MODESET | diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index a20ae783f244..664fb801c221 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -43,9 +43,53 @@ static int msm_gem_open(struct drm_gem_object *obj, stru= ct drm_file *file) return 0; } =20 +static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *= vm, bool close); + +static void detach_vm(struct drm_gem_object *obj, 
struct msm_gem_vm *vm) +{ + msm_gem_assert_locked(obj); + + struct drm_gpuvm_bo *vm_bo =3D drm_gpuvm_bo_find(&vm->base, obj); + if (vm_bo) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + if (vma->vm !=3D &vm->base) + continue; + msm_gem_vma_purge(to_msm_vma(vma)); + msm_gem_vma_close(to_msm_vma(vma)); + break; + } + + drm_gpuvm_bo_put(vm_bo); + } +} + static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *fil= e) { + struct msm_context *ctx =3D file->driver_priv; + update_ctx_mem(file, -obj->size); + + /* + * If VM isn't created yet, nothing to cleanup. And in fact calling + * put_iova_spaces() with vm=3DNULL would be bad, in that it will tear- + * down the mappings of shared buffers in other contexts. + */ + if (!ctx->vm) + return; + + /* + * TODO we might need to kick this to a queue to avoid blocking + * in CLOSE ioctl + */ + dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, + msecs_to_jiffies(1000)); + + msm_gem_lock(obj); + put_iova_spaces(obj, &ctx->vm->base, true); + detach_vm(obj, ctx->vm); + msm_gem_unlock(obj); } =20 /* @@ -167,6 +211,13 @@ static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); =20 + /* + * Skip gpuvm in the object free path to avoid a WARN_ON() splat. 
+ * See explaination in msm_gem_assert_locked() + */ + if (kref_read(&obj->refcount)) + drm_gpuvm_bo_gem_evict(obj, true); + if (msm_obj->pages) { if (msm_obj->sgt) { /* For non-cached buffers, ensure the new @@ -334,16 +385,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *o= bj) } =20 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct msm_gem_vm *vm) { - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - struct msm_gem_vma *vma; + struct drm_gpuvm_bo *vm_bo; =20 msm_gem_assert_locked(obj); =20 - list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->vm =3D=3D vm) - return vma; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + if (vma->vm =3D=3D &vm->base) { + /* lookup_vma() should only be used in paths + * with at most one vma per vm + */ + GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); + + return to_msm_vma(vma); + } + } } =20 return NULL; @@ -356,33 +416,29 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_= object *obj, * mapping. 
*/ static void -put_iova_spaces(struct drm_gem_object *obj, bool close) +put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool clo= se) { - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + struct drm_gpuvm_bo *vm_bo, *tmp; =20 msm_gem_assert_locked(obj); =20 - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - if (vma->vm) { - msm_gem_vma_purge(vma); - if (close) - msm_gem_vma_close(vma); - } - } -} + drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) { + struct drm_gpuva *vma, *vmatmp; =20 -/* Called with msm_obj locked */ -static void -put_iova_vmas(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + if (vm && vm_bo->vm !=3D vm) + continue; =20 - msm_gem_assert_locked(obj); + drm_gpuvm_bo_get(vm_bo); + + drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); + + msm_gem_vma_purge(msm_vma); + if (close) + msm_gem_vma_close(msm_vma); + } =20 - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - msm_gem_vma_close(vma); + drm_gpuvm_bo_put(vm_bo); } } =20 @@ -390,7 +446,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_ge= m_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct msm_gem_vma *vma; =20 msm_gem_assert_locked(obj); @@ -399,12 +454,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_g= em_object *obj, =20 if (!vma) { vma =3D msm_gem_vma_new(vm, obj, range_start, range_end); - if (IS_ERR(vma)) - return vma; - list_add_tail(&vma->list, &msm_obj->vmas); } else { - GEM_WARN_ON(vma->iova < range_start); - GEM_WARN_ON((vma->iova + obj->size) > range_end); + GEM_WARN_ON(vma->base.va.addr < range_start); + GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); } =20 return vma; @@ -484,7 +536,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem= _object *obj, =20 ret =3D 
msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova =3D vma->iova; + *iova =3D vma->base.va.addr; pin_obj_locked(obj); } =20 @@ -530,7 +582,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); } else { - *iova =3D vma->iova; + *iova =3D vma->base.va.addr; } msm_gem_unlock(obj); =20 @@ -571,7 +623,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, vma =3D get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->iova !=3D iova)) { + } else if (GEM_WARN_ON(vma->base.va.addr !=3D iova)) { clear_iova(obj, vm); ret =3D -EBUSY; } @@ -593,9 +645,10 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, =20 msm_gem_lock(obj); vma =3D lookup_vma(obj, vm); - if (!GEM_WARN_ON(!vma)) { + if (vma) { msm_gem_unpin_locked(obj); } + detach_vm(obj, vm); msm_gem_unlock(obj); } =20 @@ -755,7 +808,7 @@ void msm_gem_purge(struct drm_gem_object *obj) GEM_WARN_ON(!is_purgeable(msm_obj)); =20 /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, false); + put_iova_spaces(obj, NULL, false); =20 msm_gem_vunmap(obj); =20 @@ -763,8 +816,6 @@ void msm_gem_purge(struct drm_gem_object *obj) =20 put_pages(obj); =20 - put_iova_vmas(obj); - mutex_lock(&priv->lru.lock); /* A one-way transition: */ msm_obj->madv =3D __MSM_MADV_PURGED; @@ -795,7 +846,7 @@ void msm_gem_evict(struct drm_gem_object *obj) GEM_WARN_ON(is_unevictable(msm_obj)); =20 /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, false); + put_iova_spaces(obj, NULL, false); =20 drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); =20 @@ -861,7 +912,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struc= t seq_file *m, { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct dma_resv *robj =3D obj->resv; - struct msm_gem_vma *vma; uint64_t off =3D drm_vma_node_start(&obj->vma_node); const char *madv; =20 @@ -904,14 +954,17 @@ void msm_gem_describe(struct drm_gem_object *obj, str= uct seq_file 
*m, =20 seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name); =20 - if (!list_empty(&msm_obj->vmas)) { + if (!list_empty(&obj->gpuva.list)) { + struct drm_gpuvm_bo *vm_bo; =20 seq_puts(m, " vmas:"); =20 - list_for_each_entry(vma, &msm_obj->vmas, list) { - const char *name, *comm; - if (vma->vm) { - struct msm_gem_vm *vm =3D vma->vm; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + const char *name, *comm; + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); struct task_struct *task =3D get_pid_task(vm->pid, PIDTYPE_PID); if (task) { @@ -920,15 +973,14 @@ void msm_gem_describe(struct drm_gem_object *obj, str= uct seq_file *m, } else { comm =3D NULL; } - name =3D vm->name; - } else { - name =3D comm =3D NULL; + name =3D vm->base.name; + + seq_printf(m, " [%s%s%s: vm=3D%p, %08llx, %smapped]", + name, comm ? ":" : "", comm ? comm : "", + vma->vm, vma->va.addr, + to_msm_vma(vma)->mapped ? "" : "un"); + kfree(comm); } - seq_printf(m, " [%s%s%s: vm=3D%p, %08llx,%s]", - name, comm ? ":" : "", comm ? comm : "", - vma->vm, vma->iova, - vma->mapped ? 
"mapped" : "unmapped"); - kfree(comm); } =20 seq_puts(m, "\n"); @@ -974,7 +1026,7 @@ static void msm_gem_free_object(struct drm_gem_object = *obj) list_del(&msm_obj->node); mutex_unlock(&priv->obj_lock); =20 - put_iova_spaces(obj, true); + put_iova_spaces(obj, NULL, true); =20 if (drm_gem_is_imported(obj)) { GEM_WARN_ON(msm_obj->vaddr); @@ -984,13 +1036,10 @@ static void msm_gem_free_object(struct drm_gem_objec= t *obj) */ kvfree(msm_obj->pages); =20 - put_iova_vmas(obj); - drm_prime_gem_destroy(obj, msm_obj->sgt); } else { msm_gem_vunmap(obj); put_pages(obj); - put_iova_vmas(obj); } =20 drm_gem_object_release(obj); @@ -1096,7 +1145,6 @@ static int msm_gem_new_impl(struct drm_device *dev, msm_obj->madv =3D MSM_MADV_WILLNEED; =20 INIT_LIST_HEAD(&msm_obj->node); - INIT_LIST_HEAD(&msm_obj->vmas); =20 *obj =3D &msm_obj->base; (*obj)->funcs =3D &msm_gem_object_funcs; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index cf1e86252219..4112370baf34 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -10,6 +10,7 @@ #include #include #include "drm/drm_exec.h" +#include "drm/drm_gpuvm.h" #include "drm/gpu_scheduler.h" #include "msm_drv.h" =20 @@ -22,30 +23,67 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash mem= ory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping = */ =20 +/** + * struct msm_gem_vm - VM object + * + * A VM object representing a GPU (or display or GMU or ...) virtual addre= ss + * space. + * + * In the case of GPU, if per-process address spaces are supported, the ad= dress + * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU. = TTBR0 + * is used for userspace objects, and is unique per msm_context/drm_file, = while + * TTBR1 is the same for all processes. (The kernel controlled ringbuffer= and + * a few other kernel controlled buffers live in TTBR1.) 
+ * + * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on + * whether userspace supports VM_BIND.  All other VMs are managed by the kernel. + * (Managed by kernel means the kernel is responsible for VA allocation.) + * + * Note that because VM_BIND allows a given BO to be mapped multiple times in + * a VM, and therefore have multiple VMAs in a VM, there is an extra object + * provided by the drm_gpuvm infrastructure: the drm_gpuvm_bo, which is not + * embedded in any larger driver structure.  The GEM object holds a list of + * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma.  A linked vma + * holds a reference to the vm_bo, and drops it when the vma is unlinked. + * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an + * existing vm_bo, or create a new one.  Once the vma is linked, the ref + * to the vm_bo can be dropped (since the vma is holding one). + */ struct msm_gem_vm { - const char *name; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: + /** @base: Inherit from drm_gpuvm. */ + struct drm_gpuvm base; + + /** + * @mm: Memory management for kernel managed VA allocations + * + * Only used for kernel managed VMs, unused for user managed VMs. + * + * Protected by @mm_lock.
*/ struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ + + /** @mm_lock: protects @mm node allocation/removal */ + struct spinlock mm_lock; + + /** @vm_lock: protects gpuvm insert/remove/traverse */ + struct mutex vm_lock; + + /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; - struct kref kref; =20 - /* For address spaces associated with a specific process, this + /** + * @pid: For address spaces associated with a specific process, this * will be non-NULL: */ struct pid *pid; =20 - /* @faults: the number of GPU hangs associated with this address space */ + /** @faults: the number of GPU hangs associated with this address space */ int faults; =20 - /** @va_start: lowest possible address to allocate */ - uint64_t va_start; - - /** @va_size: the size of the address space (in bytes) */ - uint64_t va_size; + /** @managed: is this a kernel managed VM? */ + bool managed; }; +#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) =20 struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm); @@ -53,18 +91,33 @@ msm_gem_vm_get(struct msm_gem_vm *vm); void msm_gem_vm_put(struct msm_gem_vm *vm); =20 struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size); +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char = *name, + u64 va_start, u64 va_size, bool managed); =20 struct msm_fence_context; =20 +#define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0) + +/** + * struct msm_gem_vma - a VMA mapping + * + * Represents a combination of a GEM object plus a VM. + */ struct msm_gem_vma { + /** @base: inherit from drm_gpuva */ + struct drm_gpuva base; + + /** + * @node: mm node for VA allocation + * + * Only used by kernel managed VMs + */ struct drm_mm_node node; - uint64_t iova; - struct msm_gem_vm *vm; - struct list_head list; /* node in msm_gem_object::vmas */ + + /** @mapped: Is this VMA mapped? 
*/ bool mapped; }; +#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) =20 struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, @@ -100,8 +153,6 @@ struct msm_gem_object { struct sg_table *sgt; void *vaddr; =20 - struct list_head vmas; /* list of msm_gem_vma */ - char name[32]; /* Identifier to print for the debugfs files */ =20 /* userspace metadata backchannel */ @@ -292,6 +343,7 @@ struct msm_gem_submit { struct drm_gem_object *obj; uint32_t handle; }; + struct drm_gpuvm_bo *vm_bo; uint64_t iova; } bos[]; }; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm= _gem_submit.c index c184b1a1f522..2de5a07392eb 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -321,7 +321,8 @@ static int submit_pin_objects(struct msm_gem_submit *su= bmit) if (ret) break; =20 - submit->bos[i].iova =3D vma->iova; + submit->bos[i].vm_bo =3D drm_gpuvm_bo_get(vma->base.vm_bo); + submit->bos[i].iova =3D vma->base.va.addr; } =20 /* @@ -474,7 +475,11 @@ void msm_submit_retire(struct msm_gem_submit *submit) =20 for (i =3D 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj =3D submit->bos[i].obj; + struct drm_gpuvm_bo *vm_bo =3D submit->bos[i].vm_bo; =20 + msm_gem_lock(obj); + drm_gpuvm_bo_put(vm_bo); + msm_gem_unlock(obj); drm_gem_object_put(obj); } } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_ge= m_vma.c index ca29e81d79d2..1f4c9b5c2e8f 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -5,14 +5,13 @@ */ =20 #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_mmu.h" =20 static void -msm_gem_vm_destroy(struct kref *kref) +msm_gem_vm_free(struct drm_gpuvm *gpuvm) { - struct msm_gem_vm *vm =3D container_of(kref, struct msm_gem_vm, kref); + struct msm_gem_vm *vm =3D container_of(gpuvm, struct msm_gem_vm, base); =20 drm_mm_takedown(&vm->mm); if (vm->mmu) @@ -25,14 +24,14 @@ 
msm_gem_vm_destroy(struct kref *kref) void msm_gem_vm_put(struct msm_gem_vm *vm) { if (vm) - kref_put(&vm->kref, msm_gem_vm_destroy); + drm_gpuvm_put(&vm->base); } =20 struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm) { if (!IS_ERR_OR_NULL(vm)) - kref_get(&vm->kref); + drm_gpuvm_get(&vm->base); =20 return vm; } @@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm) /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm =3D vma->vm; - unsigned size =3D vma->node.size; + struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); + unsigned size =3D vma->base.va.range; =20 /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; =20 - vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); =20 vma->mapped =3D false; } @@ -57,10 +56,10 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm =3D vma->vm; + struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); int ret; =20 - if (GEM_WARN_ON(!vma->iova)) + if (GEM_WARN_ON(!vma->base.va.addr)) return -EINVAL; =20 if (vma->mapped) @@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, =20 vma->mapped =3D true; =20 - if (!vm) - return 0; - /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret =3D vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); + ret =3D vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); =20 if (ret) { vma->mapped =3D false; @@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. 
Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm =3D vma->vm; + struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); =20 GEM_WARN_ON(vma->mapped); =20 - spin_lock(&vm->lock); - if (vma->iova) + spin_lock(&vm->mm_lock); + if (vma->base.va.addr) drm_mm_remove_node(&vma->node); - spin_unlock(&vm->lock); + spin_unlock(&vm->mm_lock); =20 - vma->iova =3D 0; - list_del(&vma->list); + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + drm_gpuva_unlink(&vma->base); + mutex_unlock(&vm->vm_lock); =20 - msm_gem_vm_put(vm); kfree(vma); } =20 @@ -113,6 +110,7 @@ struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; =20 @@ -120,36 +118,83 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem= _object *obj, if (!vma) return ERR_PTR(-ENOMEM); =20 - vma->vm =3D vm; + if (vm->managed) { + spin_lock(&vm->mm_lock); + ret =3D drm_mm_insert_node_in_range(&vm->mm, &vma->node, + obj->size, PAGE_SIZE, 0, + range_start, range_end, 0); + spin_unlock(&vm->mm_lock); =20 - spin_lock(&vm->lock); - ret =3D drm_mm_insert_node_in_range(&vm->mm, &vma->node, - obj->size, PAGE_SIZE, 0, - range_start, range_end, 0); - spin_unlock(&vm->lock); + if (ret) + goto err_free_vma; =20 - if (ret) - goto err_free_vma; + range_start =3D vma->node.start; + range_end =3D range_start + obj->size; + } =20 - vma->iova =3D vma->node.start; + GEM_WARN_ON((range_end - range_start) > obj->size); + + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); vma->mapped =3D false; =20 - INIT_LIST_HEAD(&vma->list); + mutex_lock(&vm->vm_lock); + ret =3D drm_gpuva_insert(&vm->base, &vma->base); + mutex_unlock(&vm->vm_lock); + if (ret) + goto err_free_range; =20 - kref_get(&vm->kref); + vm_bo =3D drm_gpuvm_bo_obtain(&vm->base, obj); + if (IS_ERR(vm_bo)) { + ret =3D PTR_ERR(vm_bo); + goto err_va_remove; + } + + 
mutex_lock(&vm->vm_lock); + drm_gpuvm_bo_extobj_add(vm_bo); + drm_gpuva_link(&vma->base, vm_bo); + mutex_unlock(&vm->vm_lock); + GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); =20 return vma; =20 +err_va_remove: + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + mutex_unlock(&vm->vm_lock); +err_free_range: + if (vm->managed) + drm_mm_remove_node(&vma->node); err_free_vma: kfree(vma); return ERR_PTR(ret); } =20 +static const struct drm_gpuvm_ops msm_gpuvm_ops =3D { + .vm_free =3D msm_gem_vm_free, +}; + +/** + * msm_gem_vm_create() - Create and initialize a &msm_gem_vm + * @drm: the drm device + * @mmu: the backing MMU objects handling mapping/unmapping + * @name: the name of the VM + * @va_start: the start offset of the VA space + * @va_size: the size of the VA space + * @managed: is it a kernel managed VM? + * + * In a kernel managed VM, the kernel handles address allocation, and only + * synchronous operations are supported. In a user managed VM, userspace + * handles virtual address allocation, and both async and sync operations + * are supported. 
+ */ struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size) +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char = *name, + u64 va_start, u64 va_size, bool managed) { + enum drm_gpuvm_flags flags =3D 0; struct msm_gem_vm *vm; + struct drm_gem_object *dummy_gem; + int ret =3D 0; =20 if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -158,15 +203,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *na= me, if (!vm) return ERR_PTR(-ENOMEM); =20 - spin_lock_init(&vm->lock); - vm->name =3D name; - vm->mmu =3D mmu; - vm->va_start =3D va_start; - vm->va_size =3D size; + dummy_gem =3D drm_gpuvm_resv_object_alloc(drm); + if (!dummy_gem) { + ret =3D -ENOMEM; + goto err_free_vm; + } + + drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem, + va_start, va_size, 0, 0, &msm_gpuvm_ops); + drm_gem_object_put(dummy_gem); + + spin_lock_init(&vm->mm_lock); + mutex_init(&vm->vm_lock); =20 - drm_mm_init(&vm->mm, va_start, size); + vm->mmu =3D mmu; + vm->managed =3D managed; =20 - kref_init(&vm->kref); + drm_mm_init(&vm->mm, va_start, va_size); =20 return vm; + +err_free_vm: + kfree(vm); + return ERR_PTR(ret); + } diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 88504c4b842f..6458bd82a0cd 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -204,8 +204,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *d= ev) return NULL; } =20 - vm =3D msm_gem_vm_create(mmu, "mdp_kms", - 0x1000, 0x100000000 - 0x1000); + vm =3D msm_gem_vm_create(dev, mmu, "mdp_kms", + 0x1000, 0x100000000 - 0x1000, true); if (IS_ERR(vm)) { dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu); --=20 2.50.0 From nobody Wed Oct 8 10:02:28 2025 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) 
From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK) Subject: [PATCH v9 14/42] drm/msm: Convert vm locking Date: Sun, 29 Jun 2025 13:12:57 -0700 Message-ID: <20250629201530.25775-15-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8" From: Rob Clark Convert to using the gpuvm's r_obj for serializing access to the VM.  This way we can use the drm_exec helper for dealing with deadlock detection and backoff.  This will let us deal with upcoming locking order conflicts with the VM_BIND implementation (ie. in some scenarios we need to acquire the obj lock first, for ex. to iterate all the VMs an obj is bound in, and in other scenarios we need to acquire the VM lock first).
Signed-off-by: Rob Clark Signed-off-by: Rob Clark Tested-by: Antonino Maniscalco Reviewed-by: Antonino Maniscalco --- drivers/gpu/drm/msm/msm_gem.c | 41 +++++++++---- drivers/gpu/drm/msm/msm_gem.h | 37 ++++++++++-- drivers/gpu/drm/msm/msm_gem_shrinker.c | 80 +++++++++++++++++++++++--- drivers/gpu/drm/msm/msm_gem_submit.c | 9 ++- drivers/gpu/drm/msm/msm_gem_vma.c | 24 +++----- 5 files changed, 152 insertions(+), 39 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 664fb801c221..82293806219a 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -48,6 +48,7 @@ static void put_iova_spaces(struct drm_gem_object *obj, s= truct drm_gpuvm *vm, bo static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm) { msm_gem_assert_locked(obj); + drm_gpuvm_resv_assert_held(&vm->base); =20 struct drm_gpuvm_bo *vm_bo =3D drm_gpuvm_bo_find(&vm->base, obj); if (vm_bo) { @@ -68,6 +69,7 @@ static void detach_vm(struct drm_gem_object *obj, struct = msm_gem_vm *vm) static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *fil= e) { struct msm_context *ctx =3D file->driver_priv; + struct drm_exec exec; =20 update_ctx_mem(file, -obj->size); =20 @@ -86,10 +88,10 @@ static void msm_gem_close(struct drm_gem_object *obj, s= truct drm_file *file) dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, msecs_to_jiffies(1000)); =20 - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); put_iova_spaces(obj, &ctx->vm->base, true); detach_vm(obj, ctx->vm); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ } =20 /* @@ -551,11 +553,12 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_obj= ect *obj, struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { + struct drm_exec exec; int ret; =20 - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); ret =3D get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_e= nd); - msm_gem_unlock(obj); + 
drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -575,16 +578,17 @@ int msm_gem_get_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t *iova) { struct msm_gem_vma *vma; + struct drm_exec exec; int ret = 0; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); vma = get_vma_locked(obj, vm, 0, U64_MAX); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { *iova = vma->base.va.addr; } - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -613,9 +617,10 @@ static int clear_iova(struct drm_gem_object *obj, int msm_gem_set_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t iova) { + struct drm_exec exec; int ret = 0; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); if (!iova) { ret = clear_iova(obj, vm); } else { @@ -628,7 +633,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, ret = -EBUSY; } } - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -642,14 +647,15 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm) { struct msm_gem_vma *vma; + struct drm_exec exec; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); vma = lookup_vma(obj, vm); if (vma) { msm_gem_unpin_locked(obj); } detach_vm(obj, vm); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ } int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, @@ -1021,12 +1027,27 @@ static void msm_gem_free_object(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; + struct drm_exec exec; mutex_lock(&priv->obj_lock); list_del(&msm_obj->node); mutex_unlock(&priv->obj_lock); + /* + * We need to lock any VMs the object is still attached to, but not + * the object itself (see explanation in msm_gem_assert_locked()), + * so just open-code this special case: + */ + drm_exec_init(&exec,
0, 0); + drm_exec_until_all_locked (&exec) { + struct drm_gpuvm_bo *vm_bo; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm)); + drm_exec_retry_on_contention(&exec); + } + } put_iova_spaces(obj, NULL, true); + drm_exec_fini(&exec); /* drop locks */ if (drm_gem_is_imported(obj)) { GEM_WARN_ON(msm_obj->vaddr); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 4112370baf34..33885a08cdd7 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -62,12 +62,6 @@ struct msm_gem_vm { */ struct drm_mm mm; - /** @mm_lock: protects @mm node allocation/removal */ - struct spinlock mm_lock; - - /** @vm_lock: protects gpuvm insert/remove/traverse */ - struct mutex vm_lock; - /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; @@ -246,6 +240,37 @@ msm_gem_unlock(struct drm_gem_object *obj) dma_resv_unlock(obj->resv); } +/** + * msm_gem_lock_vm_and_obj() - Helper to lock an obj + VM + * @exec: the exec context helper which will be initialized + * @obj: the GEM object to lock + * @vm: the VM to lock + * + * Operations which modify a VM frequently need to lock both the VM and + * the object being mapped/unmapped/etc.  This helper uses drm_exec to + * acquire both locks, dealing with potential deadlock/backoff scenarios + * which arise when multiple locks are involved.
+ */ +static inline int +msm_gem_lock_vm_and_obj(struct drm_exec *exec, + struct drm_gem_object *obj, + struct msm_gem_vm *vm) +{ + int ret = 0; + + drm_exec_init(exec, 0, 2); + drm_exec_until_all_locked (exec) { + ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base)); + if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base))) + ret = drm_exec_lock_obj(exec, obj); + drm_exec_retry_on_contention(exec); + if (GEM_WARN_ON(ret)) + break; + } + + return ret; +} + static inline void msm_gem_assert_locked(struct drm_gem_object *obj) { diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index de185fc34084..5faf6227584a 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -43,6 +43,75 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) return count; } +static bool +with_vm_locks(struct ww_acquire_ctx *ticket, + void (*fn)(struct drm_gem_object *obj), + struct drm_gem_object *obj) +{ + /* + * Track last locked entry for unwinding locks in error and + * success paths + */ + struct drm_gpuvm_bo *vm_bo, *last_locked = NULL; + int ret = 0; + + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm); + + if (resv == obj->resv) + continue; + + ret = dma_resv_lock(resv, ticket); + + /* + * Since we already skip the case when the VM and obj + * share a resv (ie. _NO_SHARE objs), we don't expect + * to hit a double-locking scenario... which the lock + * unwinding cannot really cope with. + */ + WARN_ON(ret == -EALREADY); + + /* + * Don't bother with slow-lock / backoff / retry sequence, + * if we can't get the lock just give up and move on to + * the next object.
+ if (ret) + goto out_unlock; + + /* + * Hold a ref to prevent the vm_bo from being freed + * and removed from the obj's gpuva list, as that + * would result in missing the unlock below + */ + drm_gpuvm_bo_get(vm_bo); + + last_locked = vm_bo; + } + + fn(obj); + +out_unlock: + if (last_locked) { + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm); + + if (resv == obj->resv) + continue; + + dma_resv_unlock(resv); + + /* Drop the ref taken while locking: */ + drm_gpuvm_bo_put(vm_bo); + + if (last_locked == vm_bo) + break; + } + } + + return ret == 0; +} + static bool purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) { @@ -52,9 +121,7 @@ purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) if (msm_gem_active(obj)) return false; - msm_gem_purge(obj); - - return true; + return with_vm_locks(ticket, msm_gem_purge, obj); } static bool @@ -66,9 +133,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) if (msm_gem_active(obj)) return false; - msm_gem_evict(obj); - - return true; + return with_vm_locks(ticket, msm_gem_evict, obj); } static bool @@ -100,6 +165,7 @@ static unsigned long msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = shrinker->private_data; + struct ww_acquire_ctx ticket; struct { struct drm_gem_lru *lru; bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket); @@ -124,7 +190,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) drm_gem_lru_scan(stages[i].lru, nr, &stages[i].remaining, stages[i].shrink, - NULL); + &ticket); nr -= stages[i].freed; freed += stages[i].freed; remaining += stages[i].remaining; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 2de5a07392eb..bd8e465e8049 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++
b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -256,11 +256,18 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 /* This is where we make sure all the bo's are reserved and pin'd: */
 static int submit_lock_objects(struct msm_gem_submit *submit)
 {
+	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
 	int ret;
 
-	drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos);
+	// TODO need to add vm_bind path which locks vm resv + external objs
+	drm_exec_init(&submit->exec, flags, submit->nr_bos);
 
 	drm_exec_until_all_locked (&submit->exec) {
+		ret = drm_exec_lock_obj(&submit->exec,
+					drm_gpuvm_resv_obj(&submit->vm->base));
+		drm_exec_retry_on_contention(&submit->exec);
+		if (ret)
+			goto error;
 		for (unsigned i = 0; i < submit->nr_bos; i++) {
 			struct drm_gem_object *obj = submit->bos[i].obj;
 			ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 1f4c9b5c2e8f..ccb20897a2b0 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -92,15 +92,13 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 
 	GEM_WARN_ON(vma->mapped);
 
-	spin_lock(&vm->mm_lock);
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	if (vma->base.va.addr)
 		drm_mm_remove_node(&vma->node);
-	spin_unlock(&vm->mm_lock);
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
 	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 
 	kfree(vma);
 }
@@ -114,16 +112,16 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	struct msm_gem_vma *vma;
 	int ret;
 
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
-		spin_lock(&vm->mm_lock);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						obj->size, PAGE_SIZE, 0,
 						range_start, range_end, 0);
-		spin_unlock(&vm->mm_lock);
 
 		if (ret)
 			goto err_free_vma;
@@ -137,9 +135,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
 	vma->mapped = false;
 
-	mutex_lock(&vm->vm_lock);
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
-	mutex_unlock(&vm->vm_lock);
 	if (ret)
 		goto err_free_range;
 
@@ -149,18 +145,14 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 		goto err_va_remove;
 	}
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuvm_bo_extobj_add(vm_bo);
 	drm_gpuva_link(&vma->base, vm_bo);
-	mutex_unlock(&vm->vm_lock);
 	GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
 
 	return vma;
 
 err_va_remove:
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 err_free_range:
 	if (vm->managed)
 		drm_mm_remove_node(&vma->node);
@@ -191,6 +183,11 @@ struct msm_gem_vm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed)
 {
+	/*
+	 * We mostly want to use DRM_GPUVM_RESV_PROTECTED, except that
+	 * makes drm_gpuvm_bo_evict() a no-op for extobjs (ie. we lose
+	 * tracking that an extobj is evicted) :facepalm:
+	 */
 	enum drm_gpuvm_flags flags = 0;
 	struct msm_gem_vm *vm;
 	struct drm_gem_object *dummy_gem;
@@ -213,9 +210,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 			va_start, va_size, 0, 0, &msm_gpuvm_ops);
 	drm_gem_object_put(dummy_gem);
 
-	spin_lock_init(&vm->mm_lock);
-	mutex_init(&vm->vm_lock);
-
 	vm->mmu = mmu;
 	vm->managed = managed;
 
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Rob Clark, Sean Paul, Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar,
 Jessica Zhang, Marijn Suijten, David Airlie, Simona Vetter,
 Arnd Bergmann, Krzysztof Kozlowski, Eugene Lepshy, Haoxiang Li,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 15/42] drm/msm: Use drm_gpuvm types more
Date: Sun, 29 Jun 2025 13:12:58 -0700
Message-ID: <20250629201530.25775-16-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Most of the driver code doesn't need to reach into msm-specific
fields, so just use the drm_gpuvm/drm_gpuva
types directly.  This should hopefully improve commonality with
other drivers and make the code easier to understand.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c     |  6 +-
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c     |  3 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c     |  6 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h     |  2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c     | 11 +--
 drivers/gpu/drm/msm/adreno/a6xx_preempt.c |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c   | 19 +++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.h   |  4 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  6 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c  | 11 +--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c  | 11 +--
 drivers/gpu/drm/msm/dsi/dsi_host.c        |  6 +-
 drivers/gpu/drm/msm/msm_drv.h             |  4 +-
 drivers/gpu/drm/msm/msm_fb.c              |  4 +-
 drivers/gpu/drm/msm/msm_gem.c             | 94 +++++++++++------------
 drivers/gpu/drm/msm/msm_gem.h             | 59 +++++++-------
 drivers/gpu/drm/msm/msm_gem_submit.c      |  8 +-
 drivers/gpu/drm/msm/msm_gem_vma.c         | 70 +++++++----------
 drivers/gpu/drm/msm/msm_gpu.c             | 18 +++--
 drivers/gpu/drm/msm/msm_gpu.h             | 10 +--
 drivers/gpu/drm/msm/msm_kms.c             |  6 +-
 drivers/gpu/drm/msm/msm_kms.h             |  2 +-
 drivers/gpu/drm/msm/msm_submitqueue.c     |  2 +-
 23 files changed, 175 insertions(+), 189 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 889480aa13ba..ec38db45d8a3 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
 	uint32_t *ptr, len;
 	int i, ret;
 
-	a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error);
+	a2xx_gpummu_params(to_msm_vm(gpu->vm)->mmu, &pt_base, &tran_error);
 
 	DBG("%s", gpu->name);
 
@@ -466,11 +466,11 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu)
 	return state;
 }
 
-static struct msm_gem_vm *
+static struct drm_gpuvm *
 a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
 
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 04138a06724b..ee927d8cc0dc 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1786,7 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
+	msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu,
+				  a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 77d9ff9632d1..28e6705c6da6 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1259,6 +1259,8 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 
 static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
 {
+	struct msm_mmu *mmu = to_msm_vm(gmu->vm)->mmu;
+
 	msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);
 	msm_gem_kernel_put(gmu->debug.obj, gmu->vm);
 	msm_gem_kernel_put(gmu->icache.obj, gmu->vm);
@@ -1266,8 +1268,8 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
 	msm_gem_kernel_put(gmu->dummy.obj, gmu->vm);
 	msm_gem_kernel_put(gmu->log.obj, gmu->vm);
 
-	gmu->vm->mmu->funcs->detach(gmu->vm->mmu);
-	msm_gem_vm_put(gmu->vm);
+	mmu->funcs->detach(mmu);
+	drm_gpuvm_put(gmu->vm);
 }
 
 static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index fc288dfe889f..d1ce11131ba6 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -62,7 +62,7 @@ struct a6xx_gmu {
 	/* For serializing communication with the GMU: */
 	struct mutex lock;
 
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	void __iomem *mmio;
 	void __iomem *rscc;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 262129cb4415..0b78888c58af 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
@@ -2256,7 +2256,7 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 	mutex_unlock(&a6xx_gpu->gmu.lock);
 }
 
-static struct msm_gem_vm *
+static struct drm_gpuvm *
 a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -2274,12 +2274,12 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 	return adreno_iommu_create_vm(gpu, pdev, quirks);
 }
 
-static struct msm_gem_vm *
+static struct drm_gpuvm *
 a6xx_create_private_vm(struct msm_gpu *gpu)
 {
 	struct msm_mmu *mmu;
 
-	mmu = msm_iommu_pagetable_create(gpu->vm->mmu);
+	mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu);
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
@@ -2559,7 +2559,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
+	msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu,
+				  a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index f6194a57f794..9e7f2e5fb2b9 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -377,7 +377,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	struct a7xx_cp_smmu_info *smmu_info_ptr = ptr;
 
-	msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid);
+	msm_iommu_pagetable_params(to_msm_vm(gpu->vm)->mmu, &ttbr, &asid);
 
 	smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
 	smmu_info_ptr->ttbr0 = ttbr;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 46199a6d0e41..676fc078d545 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid)
 	return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid);
 }
 
-struct msm_gem_vm *
+struct drm_gpuvm *
 adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	return adreno_iommu_create_vm(gpu, pdev, 0);
 }
 
-struct msm_gem_vm *
+struct drm_gpuvm *
 adreno_iommu_create_vm(struct msm_gpu *gpu,
 		       struct platform_device *pdev,
 		       unsigned long quirks)
 {
 	struct iommu_domain_geometry *geometry;
 	struct msm_mmu *mmu;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
@@ -275,9 +275,11 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu)
 	if (!priv->stall_enabled &&
 			ktime_after(ktime_get(), priv->stall_reenable_time) &&
 			!READ_ONCE(gpu->crashstate)) {
+		struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
+
 		priv->stall_enabled = true;
 
-		gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true);
+		mmu->funcs->set_stall(mmu, true);
 	}
 	spin_unlock_irqrestore(&priv->fault_stall_lock, flags);
 }
@@ -292,6 +294,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 			 u32 scratch[4])
 {
 	struct msm_drm_private *priv = gpu->dev->dev_private;
+	struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
 	const char *type = "UNKNOWN";
 	bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) &&
 		!READ_ONCE(gpu->crashstate);
@@ -305,7 +308,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	if (priv->stall_enabled) {
 		priv->stall_enabled = false;
 
-		gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false);
+		mmu->funcs->set_stall(mmu, false);
 	}
 
 	priv->stall_reenable_time = ktime_add_ms(ktime_get(), 500);
@@ -405,7 +408,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		return 0;
 	case MSM_PARAM_FAULTS:
 		if (ctx->vm)
-			*value = gpu->global_faults + ctx->vm->faults;
+			*value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -415,12 +418,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 	case MSM_PARAM_VA_START:
 		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->base.mm_start;
+		*value = ctx->vm->mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
 		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->base.mm_range;
+		*value = ctx->vm->mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index b1761f990aa1..8650bbd8698e 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -622,11 +622,11 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len,
  * Common helper function to initialize the default address space for arm-smmu
  * attached targets
  */
-struct msm_gem_vm *
+struct drm_gpuvm *
 adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev);
 
-struct msm_gem_vm *
+struct drm_gpuvm *
 adreno_iommu_create_vm(struct msm_gpu *gpu,
 		       struct platform_device *pdev,
 		       unsigned long quirks);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 2c5687a188b6..f7d0f39bcc5b 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -1098,17 +1098,17 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms)
 	if (!dpu_kms->base.vm)
 		return;
 
-	mmu = dpu_kms->base.vm->mmu;
+	mmu = to_msm_vm(dpu_kms->base.vm)->mmu;
 
 	mmu->funcs->detach(mmu);
-	msm_gem_vm_put(dpu_kms->base.vm);
+	drm_gpuvm_put(dpu_kms->base.vm);
 
 	dpu_kms->base.vm = NULL;
 }
 
 static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms)
 {
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	vm = msm_kms_init_vm(dpu_kms->dev);
 	if (IS_ERR(vm))
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
index a867c684c6d6..9acde91ad6c3 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
@@ -122,15 +122,16 @@ static void mdp4_destroy(struct msm_kms *kms)
 {
 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
 	struct device *dev = mdp4_kms->dev->dev;
-	struct msm_gem_vm *vm = kms->vm;
 
 	if (mdp4_kms->blank_cursor_iova)
 		msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm);
 	drm_gem_object_put(mdp4_kms->blank_cursor_bo);
 
-	if (vm) {
-		vm->mmu->funcs->detach(vm->mmu);
-		msm_gem_vm_put(vm);
+	if (kms->vm) {
+		struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu;
+
+		mmu->funcs->detach(mmu);
+		drm_gpuvm_put(kms->vm);
 	}
 
 	if (mdp4_kms->rpm_enabled)
@@ -398,7 +399,7 @@ static int mdp4_kms_init(struct drm_device *dev)
 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms));
 	struct msm_kms *kms = NULL;
 	struct msm_mmu *mmu;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	int ret;
 	u32 major, minor;
 	unsigned long max_clk;
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
index 9dca0385a42d..b6e6bd1f95ee 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
@@ -198,11 +198,12 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms);
 static void mdp5_kms_destroy(struct msm_kms *kms)
 {
 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
-	struct msm_gem_vm *vm = kms->vm;
 
-	if (vm) {
-		vm->mmu->funcs->detach(vm->mmu);
-		msm_gem_vm_put(vm);
+	if (kms->vm) {
+		struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu;
+
+		mmu->funcs->detach(mmu);
+		drm_gpuvm_put(kms->vm);
 	}
 
 	mdp_kms_destroy(&mdp5_kms->base);
@@ -500,7 +501,7 @@ static int mdp5_kms_init(struct drm_device *dev)
 	struct mdp5_kms *mdp5_kms;
 	struct mdp5_cfg *config;
 	struct msm_kms *kms = priv->kms;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	int i, ret;
 
 	ret = mdp5_init(to_platform_device(dev->dev), dev);
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 16335ebd21e4..2d1699b7dc93 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -143,7 +143,7 @@ struct msm_dsi_host {
 
 	/* DSI 6G TX buffer*/
 	struct drm_gem_object *tx_gem_obj;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	/* DSI v2 TX buffer */
 	void *tx_buf;
@@ -1146,7 +1146,7 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size)
 	uint64_t iova;
 	u8 *data;
 
-	msm_host->vm = msm_gem_vm_get(priv->kms->vm);
+	msm_host->vm = drm_gpuvm_get(priv->kms->vm);
 
 	data = msm_gem_kernel_new(dev, size, MSM_BO_WC,
 				  msm_host->vm,
@@ -1194,7 +1194,7 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host)
 
 	if (msm_host->tx_gem_obj) {
 		msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm);
-		msm_gem_vm_put(msm_host->vm);
+		drm_gpuvm_put(msm_host->vm);
 		msm_host->tx_gem_obj = NULL;
 		msm_host->vm = NULL;
 	}
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index eb009bd193e3..0fe3c9a24baa 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -48,8 +48,6 @@
 struct msm_rd_state;
 struct msm_perf_state;
 struct msm_gem_submit;
 struct msm_fence_context;
-struct msm_gem_vm;
-struct msm_gem_vma;
 struct msm_disp_state;
 
 #define MAX_CRTCS 8
@@ -253,7 +251,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc);
 int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 
-struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev);
+struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev);
 bool msm_use_mmu(struct drm_device *dev);
 
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 3b17d83f6673..8ae2f326ec54 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -78,7 +78,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 {
 	struct msm_drm_private *priv = fb->dev->dev_private;
-	struct msm_gem_vm *vm = priv->kms->vm;
+	struct drm_gpuvm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int ret, i, n = fb->format->num_planes;
 
@@ -102,7 +102,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 {
 	struct msm_drm_private *priv = fb->dev->dev_private;
-	struct msm_gem_vm *vm = priv->kms->vm;
+	struct drm_gpuvm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int i, n = fb->format->num_planes;
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 82293806219a..763bafcff4cc 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -45,20 +45,20 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 
 static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close);
 
-static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm)
+static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 {
 	msm_gem_assert_locked(obj);
-	drm_gpuvm_resv_assert_held(&vm->base);
+	drm_gpuvm_resv_assert_held(vm);
 
-	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(&vm->base, obj);
+	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(vm, obj);
 	if (vm_bo) {
 		struct drm_gpuva *vma;
 
 		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
-			if (vma->vm != &vm->base)
+			if (vma->vm != vm)
 				continue;
-			msm_gem_vma_purge(to_msm_vma(vma));
-			msm_gem_vma_close(to_msm_vma(vma));
+			msm_gem_vma_purge(vma);
+			msm_gem_vma_close(vma);
 			break;
 		}
 
@@ -89,7 +89,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 			       msecs_to_jiffies(1000));
 
 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
-	put_iova_spaces(obj, &ctx->vm->base, true);
+	put_iova_spaces(obj, ctx->vm, true);
 	detach_vm(obj, ctx->vm);
 	drm_exec_fini(&exec);     /* drop locks */
 }
@@ -386,8 +386,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
 	return offset;
 }
 
-static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
-				      struct msm_gem_vm *vm)
+static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj,
+				    struct drm_gpuvm *vm)
 {
 	struct drm_gpuvm_bo *vm_bo;
 
@@ -397,13 +397,13 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 		struct drm_gpuva *vma;
 
 		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
-			if (vma->vm == &vm->base) {
+			if (vma->vm == vm) {
 				/* lookup_vma() should only be used in paths
 				 * with at most one vma per vm
 				 */
 				GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva));
 
-				return to_msm_vma(vma);
+				return vma;
 			}
 		}
 	}
@@ -433,22 +433,20 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 		drm_gpuvm_bo_get(vm_bo);
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-
-			msm_gem_vma_purge(msm_vma);
+			msm_gem_vma_purge(vma);
 			if (close)
-				msm_gem_vma_close(msm_vma);
+				msm_gem_vma_close(vma);
 		}
 
 		drm_gpuvm_bo_put(vm_bo);
 	}
 }
 
-static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
-					  struct msm_gem_vm *vm,
-					  u64 range_start, u64 range_end)
+static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
+					struct drm_gpuvm *vm, u64 range_start,
+					u64 range_end)
 {
-	struct msm_gem_vma *vma;
+	struct drm_gpuva *vma;
 
 	msm_gem_assert_locked(obj);
 
@@ -457,14 +455,14 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 	if (!vma) {
 		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
 	} else {
-		GEM_WARN_ON(vma->base.va.addr < range_start);
-		GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end);
+		GEM_WARN_ON(vma->va.addr < range_start);
+		GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
 	}
 
 	return vma;
 }
 
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct page **pages;
@@ -517,17 +515,17 @@ void msm_gem_unpin_active(struct drm_gem_object *obj)
 	update_lru_active(obj);
 }
 
-struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-					   struct msm_gem_vm *vm)
+struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+					 struct drm_gpuvm *vm)
 {
 	return get_vma_locked(obj, vm, 0, U64_MAX);
 }
 
 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
-					 struct msm_gem_vm *vm, uint64_t *iova,
-					 u64 range_start, u64 range_end)
+					 struct drm_gpuvm *vm, uint64_t *iova,
+					 u64 range_start, u64 range_end)
 {
-	struct msm_gem_vma *vma;
+	struct drm_gpuva *vma;
 	int ret;
 
 	msm_gem_assert_locked(obj);
@@ -538,7 +536,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 
 	ret = msm_gem_pin_vma_locked(obj, vma);
 	if (!ret) {
-		*iova = vma->base.va.addr;
+		*iova = vma->va.addr;
 		pin_obj_locked(obj);
 	}
 
@@ -550,8 +548,8 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
  * limits iova to specified range (in pages)
  */
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-				   struct msm_gem_vm *vm, uint64_t *iova,
-				   u64 range_start, u64 range_end)
+				   struct drm_gpuvm *vm, uint64_t *iova,
+				   u64 range_start, u64 range_end)
 {
 	struct drm_exec exec;
 	int ret;
@@ -564,8 +562,8 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
 }
 
 /* get iova and pin it. Should have a matching put */
-int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-			     struct msm_gem_vm *vm, uint64_t *iova)
+int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+			     uint64_t *iova)
 {
 	return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX);
 }
@@ -574,10 +572,10 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
  * Get an iova but don't pin it. Doesn't need a put because iovas are currently
  * valid for the life of the object
  */
-int msm_gem_get_iova(struct drm_gem_object *obj,
-		     struct msm_gem_vm *vm, uint64_t *iova)
+int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		     uint64_t *iova)
 {
-	struct msm_gem_vma *vma;
+	struct drm_gpuva *vma;
 	struct drm_exec exec;
 	int ret = 0;
 
@@ -586,7 +584,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 	} else {
-		*iova = vma->base.va.addr;
+		*iova = vma->va.addr;
 	}
 	drm_exec_fini(&exec);     /* drop locks */
 
@@ -594,9 +592,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 }
 
 static int clear_iova(struct drm_gem_object *obj,
-		      struct msm_gem_vm *vm)
+		      struct drm_gpuvm *vm)
 {
-	struct msm_gem_vma *vma = lookup_vma(obj, vm);
+	struct drm_gpuva *vma = lookup_vma(obj, vm);
 
 	if (!vma)
 		return 0;
@@ -615,7 +613,7 @@ static int clear_iova(struct drm_gem_object *obj,
 * Setting an iova of zero will clear the vma.
 */
 int msm_gem_set_iova(struct drm_gem_object *obj,
-		     struct msm_gem_vm *vm, uint64_t iova)
+		     struct drm_gpuvm *vm, uint64_t iova)
 {
 	struct drm_exec exec;
 	int ret = 0;
@@ -624,11 +622,11 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 	if (!iova) {
 		ret = clear_iova(obj, vm);
 	} else {
-		struct msm_gem_vma *vma;
+		struct drm_gpuva *vma;
 		vma = get_vma_locked(obj, vm, iova, iova + obj->size);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
-		} else if (GEM_WARN_ON(vma->base.va.addr != iova)) {
+		} else if (GEM_WARN_ON(vma->va.addr != iova)) {
 			clear_iova(obj, vm);
 			ret = -EBUSY;
 		}
@@ -643,10 +641,9 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 * purged until something else (shrinker, mm_notifier, destroy, etc) decides
 * to get rid of it
 */
-void msm_gem_unpin_iova(struct drm_gem_object *obj,
-			struct msm_gem_vm *vm)
+void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 {
-	struct msm_gem_vma *vma;
+	struct drm_gpuva *vma;
 	struct drm_exec exec;
 
 	msm_gem_lock_vm_and_obj(&exec, obj, vm);
@@ -1276,9 +1273,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 	return ERR_PTR(ret);
 }
 
-void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-			 uint32_t flags, struct msm_gem_vm *vm,
-			 struct drm_gem_object **bo, uint64_t *iova)
+void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags,
+			 struct drm_gpuvm *vm, struct drm_gem_object **bo,
+			 uint64_t *iova)
 {
 	void *vaddr;
 	struct drm_gem_object *obj = msm_gem_new(dev, size, flags);
@@ -1311,8 +1308,7 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
 
 }
 
-void msm_gem_kernel_put(struct drm_gem_object *bo,
-			struct msm_gem_vm *vm)
+void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm)
 {
 	if (IS_ERR_OR_NULL(bo))
 		return;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 33885a08cdd7..892e4132fa72 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -79,12 +79,7 @@ struct msm_gem_vm {
 };
 #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
 
-struct msm_gem_vm *
-msm_gem_vm_get(struct msm_gem_vm *vm);
-
-void msm_gem_vm_put(struct msm_gem_vm *vm);
-
-struct msm_gem_vm *
+struct drm_gpuvm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed);
 
@@ -113,12 +108,12 @@ struct msm_gem_vma {
 };
 #define to_msm_vma(x) container_of(x, struct msm_gem_vma, base)
 
-struct msm_gem_vma *
-msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+struct drm_gpuva *
+msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct msm_gem_vma *vma);
-int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size);
-void msm_gem_vma_close(struct msm_gem_vma *vma);
+void msm_gem_vma_purge(struct drm_gpuva *vma);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
 	struct drm_gem_object base;
@@ -163,22 +158,21 @@ struct msm_gem_object {
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
 void msm_gem_unpin_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_active(struct drm_gem_object *obj);
-struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-					   struct msm_gem_vm *vm);
-int msm_gem_get_iova(struct drm_gem_object *obj,
-		     struct msm_gem_vm *vm, uint64_t *iova);
-int msm_gem_set_iova(struct drm_gem_object *obj,
-		     struct msm_gem_vm *vm, uint64_t iova);
+struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+					 struct drm_gpuvm *vm);
+int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		     uint64_t *iova);
+int msm_gem_set_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		     uint64_t iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-				   struct msm_gem_vm *vm, uint64_t *iova,
-				   u64 range_start, u64 range_end);
-int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-			     struct msm_gem_vm *vm, uint64_t *iova);
-void msm_gem_unpin_iova(struct drm_gem_object *obj,
-			struct msm_gem_vm *vm);
+				   struct drm_gpuvm *vm, uint64_t *iova,
+				   u64 range_start, u64 range_end);
+int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+			     uint64_t *iova);
+void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm);
 void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
 struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
@@ -199,11 +193,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 		uint32_t size, uint32_t flags, uint32_t *handle, char *name);
 struct drm_gem_object *msm_gem_new(struct drm_device *dev,
 		uint32_t size, uint32_t flags);
-void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-			 uint32_t flags, struct msm_gem_vm *vm,
-			 struct drm_gem_object **bo, uint64_t *iova);
-void msm_gem_kernel_put(struct drm_gem_object *bo,
-			struct msm_gem_vm *vm);
+void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags,
+			 struct drm_gpuvm *vm, struct drm_gem_object **bo,
+			 uint64_t *iova);
+void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm);
 struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 		struct dma_buf *dmabuf, struct sg_table *sgt);
 __printf(2, 3)
@@ -254,14 +247,14 @@ msm_gem_unlock(struct drm_gem_object *obj)
 static inline int
 msm_gem_lock_vm_and_obj(struct drm_exec *exec,
 			struct drm_gem_object *obj,
-			struct msm_gem_vm *vm)
+			struct drm_gpuvm *vm)
 {
 	int ret = 0;
 
 	drm_exec_init(exec, 0, 2);
 	drm_exec_until_all_locked (exec) {
-		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base));
-		if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base)))
+		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(vm));
+		if (!ret && (obj->resv != drm_gpuvm_resv(vm)))
 			ret = drm_exec_lock_obj(exec, obj);
 		drm_exec_retry_on_contention(exec);
 		if (GEM_WARN_ON(ret))
@@ -328,7 +321,7 @@ struct msm_gem_submit {
 	struct kref ref;
 	struct drm_device *dev;
 	struct msm_gpu *gpu;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	struct list_head node;   /* node in ring submit list */
 	struct drm_exec exec;
 	uint32_t seqno;		/* Sequence number of the submit on the ring */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index bd8e465e8049..d8ff6aeb04ab 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -264,7 +264,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit)
 
 	drm_exec_until_all_locked (&submit->exec) {
 		ret = drm_exec_lock_obj(&submit->exec,
-					drm_gpuvm_resv_obj(&submit->vm->base));
+					drm_gpuvm_resv_obj(submit->vm));
 		drm_exec_retry_on_contention(&submit->exec);
 		if (ret)
 			goto error;
@@ -315,7 +315,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 
 	for (i = 0; i < submit->nr_bos; i++) {
 		struct drm_gem_object *obj = submit->bos[i].obj;
-		struct msm_gem_vma *vma;
+		struct drm_gpuva *vma;
 
 		/* if locking succeeded, pin bo: */
 		vma = msm_gem_get_vma_locked(obj, submit->vm);
@@ -328,8 +328,8 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		if (ret)
 			break;
 
-		submit->bos[i].vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
-		submit->bos[i].iova = vma->base.va.addr;
+		submit->bos[i].vm_bo = drm_gpuvm_bo_get(vma->vm_bo);
+		submit->bos[i].iova = vma->va.addr;
 	}
 
 	/*
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index ccb20897a2b0..df8eb910ca31 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ 
b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -20,52 +20,38 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } =20 - -void msm_gem_vm_put(struct msm_gem_vm *vm) -{ - if (vm) - drm_gpuvm_put(&vm->base); -} - -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm) -{ - if (!IS_ERR_OR_NULL(vm)) - drm_gpuvm_get(&vm->base); - - return vm; -} - /* Actually unmap memory for the vma */ -void msm_gem_vma_purge(struct msm_gem_vma *vma) +void msm_gem_vma_purge(struct drm_gpuva *vma) { - struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); - unsigned size =3D vma->base.va.range; + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); + unsigned size =3D vma->va.range; =20 /* Don't do anything if the memory isn't mapped */ - if (!vma->mapped) + if (!msm_vma->mapped) return; =20 - vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); + vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); =20 - vma->mapped =3D false; + msm_vma->mapped =3D false; } =20 /* Map and pin vma: */ int -msm_gem_vma_map(struct msm_gem_vma *vma, int prot, +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); int ret; =20 - if (GEM_WARN_ON(!vma->base.va.addr)) + if (GEM_WARN_ON(!vma->va.addr)) return -EINVAL; =20 - if (vma->mapped) + if (msm_vma->mapped) return 0; =20 - vma->mapped =3D true; + msm_vma->mapped =3D true; =20 /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold @@ -76,38 +62,40 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret =3D vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); + ret =3D vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); =20 if (ret) { - vma->mapped =3D false; + msm_vma->mapped =3D false; } =20 return ret; } =20 /* Close an iova. Warn if it is still in use */ -void msm_gem_vma_close(struct msm_gem_vma *vma) +void msm_gem_vma_close(struct drm_gpuva *vma) { - struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); =20 - GEM_WARN_ON(vma->mapped); + GEM_WARN_ON(msm_vma->mapped); =20 drm_gpuvm_resv_assert_held(&vm->base); =20 - if (vma->base.va.addr) - drm_mm_remove_node(&vma->node); + if (vma->va.addr && vm->managed) + drm_mm_remove_node(&msm_vma->node); =20 - drm_gpuva_remove(&vma->base); - drm_gpuva_unlink(&vma->base); + drm_gpuva_remove(vma); + drm_gpuva_unlink(vma); =20 kfree(vma); } =20 /* Create a new vma and allocate an iova for it */ -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct msm_gem_vm *vm =3D to_msm_vm(gpuvm); struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -149,7 +137,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_o= bject *obj, drm_gpuva_link(&vma->base, vm_bo); GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); =20 - return vma; + return &vma->base; =20 err_va_remove: drm_gpuva_remove(&vma->base); @@ -179,7 +167,7 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops =3D { * handles virtual address allocation, and both async and sync operations * are supported. 
*/ -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char = *name, u64 va_start, u64 va_size, bool managed) { @@ -215,7 +203,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mm= u *mmu, const char *name, =20 drm_mm_init(&vm->mm, va_start, va_size); =20 - return vm; + return &vm->base; =20 err_free_vm: kfree(vm); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 47268aae7d54..fc4d6c9049b0 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -285,7 +285,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *= gpu, =20 if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info =3D &state->fault_info; - struct msm_mmu *mmu =3D submit->vm->mmu; + struct msm_mmu *mmu =3D to_msm_vm(submit->vm)->mmu; =20 msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -390,7 +390,7 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; if (submit->vm) - submit->vm->faults++; + to_msm_vm(submit->vm)->faults++; =20 get_comm_cmdline(submit, &comm, &cmd); =20 @@ -828,10 +828,11 @@ static int get_clocks(struct platform_device *pdev, s= truct msm_gpu *gpu) } =20 /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_vm *vm =3D NULL; + struct drm_gpuvm *vm =3D NULL; + if (!gpu) return NULL; =20 @@ -842,11 +843,11 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct= task_struct *task) if (gpu->funcs->create_private_vm) { vm =3D gpu->funcs->create_private_vm(gpu); if (!IS_ERR(vm)) - vm->pid =3D get_pid(task_pid(task)); + to_msm_vm(vm)->pid =3D get_pid(task_pid(task)); } =20 if (IS_ERR_OR_NULL(vm)) - vm =3D msm_gem_vm_get(gpu->vm); + vm =3D drm_gpuvm_get(gpu->vm); =20 return vm; } @@ -1014,8 +1015,9 @@ void msm_gpu_cleanup(struct msm_gpu 
*gpu) msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); =20 if (!IS_ERR_OR_NULL(gpu->vm)) { - gpu->vm->mmu->funcs->detach(gpu->vm->mmu); - msm_gem_vm_put(gpu->vm); + struct msm_mmu *mmu =3D to_msm_vm(gpu->vm)->mmu; + mmu->funcs->detach(mmu); + drm_gpuvm_put(gpu->vm); } =20 if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 9d69dcad6612..231577656fae 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,8 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_devi= ce *pdev); - struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); + struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_devic= e *pdev); + struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); =20 /** @@ -234,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; =20 - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; =20 /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -357,7 +357,7 @@ struct msm_context { int queueid; =20 /** @vm: the per-process GPU address-space */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; =20 /** @kref: the reference count */ struct kref ref; @@ -667,7 +667,7 @@ int msm_gpu_init(struct drm_device *drm, struct platfor= m_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); =20 -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); =20 void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 6458bd82a0cd..e82b8569a468 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 
+176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned lo= ng iova, int flags, void return -ENOSYS; } =20 -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) +struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct msm_mmu *mmu; struct device *mdp_dev =3D dev->dev; struct device *mdss_dev =3D mdp_dev->parent; @@ -212,7 +212,7 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *d= ev) return vm; } =20 - msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(vm)->mmu, kms, msm_kms_fault_handler); =20 return vm; } diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index f45996a03e15..7cdb2eb67700 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; =20 /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; =20 /* disp snapshot support */ struct kthread_worker *dump_worker; diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/ms= m_submitqueue.c index 6298233c3568..8ced49c7557b 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref) kfree(ctx->entities[i]); } =20 - msm_gem_vm_put(ctx->vm); + drm_gpuvm_put(ctx->vm); kfree(ctx->comm); kfree(ctx->cmdline); kfree(ctx); --=20 2.50.0 From nobody Wed Oct 8 10:02:28 2025 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D800024728C for ; Sun, 29 Jun 2025 20:16:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 16/42] drm/msm: Split out helper to get iommu prot flags
Date: Sun, 29 Jun 2025 13:12:59 -0700
Message-ID: <20250629201530.25775-17-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

We'll re-use this in the vm_bind path.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c | 12 ++++++++++--
 drivers/gpu/drm/msm/msm_gem.h |  1 +
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 763bafcff4cc..20d5e4b4d057 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -462,10 +462,9 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
 	return vma;
 }
 
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
+int msm_gem_prot(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct page **pages;
 	int prot = IOMMU_READ;
 
 	if (!(msm_obj->flags & MSM_BO_GPU_READONLY))
@@ -477,6 +476,15 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 	if (msm_obj->flags & MSM_BO_CACHED_COHERENT)
 		prot |= IOMMU_CACHE;
 
+	return prot;
+}
+
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
+{
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+	struct page **pages;
+	int prot = msm_gem_prot(obj);
+
 	msm_gem_assert_locked(obj);
 
 	pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 892e4132fa72..a18872ab1393 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -158,6 +158,7 @@ struct msm_gem_object {
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
+int msm_gem_prot(struct drm_gem_object *obj);
 int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
 void msm_gem_unpin_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_active(struct drm_gem_object *obj);
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Dn6rNLtmwPZaS21Jedvkq0LzDSIrl4pjgU4LG7FVLPE=; b=AvnbMQPFt0VAhCDX1pWWNXKQWeOxtLDR1orV9kywZfLhMfkMKw8dbjuQ6NQc3gH+KK F7FqdF3FxAzUXXRSSpayFsZyA0p08b3iaPEJVurJ3BIclYpFAeD8qYjYz/VnB4nSxI9K l/Mq3XUMXjrcTmiHNFwK6M0h37L4JhLoyGJGj7DfCL/2bB7d/uGMUZOOXueN+uTilLs8 tVvqK/vyrdcTON2DUASqga7LjSi3rlDsh4iI0Y8wG8KNkzzeMkGyLHEDqWrfh+y+aIGb idmo0Tw4/vZcvzZBj2equubmeSr2qem4UYs43dK21+zVbCSUWX+JSmj0JnMtaoF8lFTt 0SjQ== X-Forwarded-Encrypted: i=1; AJvYcCURSbQLhPo1k/dep6n1o3SrVcOtLG11QPhuGlZ3VVuqcZj4mEJL9Yta+1oQVBkqFe4WceTsAgQONoev2n8=@vger.kernel.org X-Gm-Message-State: AOJu0YweDprcyUXVK9F07mInCiWUTAcYV6X8Oo9Zg7aUhmOIEQxYZM0B GkP6pW9W5TdDXqHrD4jhTCLOVGKoSZbXjUs/Xtgq9NreKjIazyvuCw9AOcgDD0zZwZ2Ps6kb29D KUSrJUF0wGLPgavxBiyqRQ8uXRRQ/iWmSAeg9FDnotRFgQibjx6FExTzfl8ahlZyAE18= X-Gm-Gg: ASbGncsj0U22uhJyfZx7iyZL9q3Mw3bUOb8eCeGRJ0o+MJ/O3kOzOH8bE3m03pQpIB+ cUxm2np9KnNxBxAm9TXHDhcfJDxyziA1WUHLOAKQPCwgH89s37q7Tdu1HBD3FM/oJvRO0qVV+bx CkbM1KVeBc+W2X+ihuZHCZZe8OxqI2LYn5sA136nWj20UAF9k0wC3dr5HeSesnYQGRdarF4x3F3 /BVxPZtctHZ0fwM0MIwa2vgfgeOPdT7EbkH/ibiS14BfArMxQ4G4JXEMJh6BV0+dPp00kXlvC6S RXAtHfdUKJqbyeT85eFC+KekCoyFFUuReA== X-Received: by 2002:a05:6a00:2da9:b0:742:a02e:dd8d with SMTP id d2e1a72fcca58-74af6ff20d6mr15577051b3a.20.1751228202279; Sun, 29 Jun 2025 13:16:42 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGUdIAJWlJ4mtDHmpo6q7Rc44psuaTXkc5I16tlu52tuQa9kChYwG4i2nBSS0b7NS46v+ugwQ== X-Received: by 2002:a05:6a00:2da9:b0:742:a02e:dd8d with SMTP id d2e1a72fcca58-74af6ff20d6mr15577021b3a.20.1751228201789; Sun, 29 Jun 2025 13:16:41 -0700 (PDT) Received: from localhost ([2601:1c0:5000:d5c:5b3e:de60:4fda:e7b1]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-74af541b6e5sm6901721b3a.43.2025.06.29.13.16.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 29 Jun 2025 13:16:41 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, 
freedreno@lists.freedesktop.org, Connor Abbott , Antonino Maniscalco , Danilo Krummrich , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Dmitry Baryshkov , Abhinav Kumar , Jessica Zhang , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v9 17/42] drm/msm: Add mmu support for non-zero offset Date: Sun, 29 Jun 2025 13:13:00 -0700 Message-ID: <20250629201530.25775-18-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Proofpoint-Spam-Details-Enc: AW1haW4tMjUwNjI5MDE3MiBTYWx0ZWRfX91YkDIZEADa9 ckFTHDWUu11dJMRHsac7HTfqs1aaFUgkly0P7cjCM0cwN9qZC+Cep18LcxocJ0cyZJ0oPODZLiH +qipmqKFJA1CO4Z5VIT3AoQwDtnyNV0R9sbDzLJHaa1eSZ7+iTAC3rrtdYRcRPP4FAULNLNA5L1 w7bwbmTuOlgJHaNDYzgmSC4pQX/353c4rnVGjc9CbCHB5w/I6yb+BPUjmr71/K8LMOg9HooyPk6 mUC2n4wbe+K4wtOf0duzt2CQefBjyX/sl3QxOYpH5/fNY9aT5EHktCV0D4MG0uPLKNhheqoABkP 5xMUQAwXvxgYp2eqZcYA4EhJjVNAgp8O0703vjXz8/0nijdaOk3Z/OOc++H0LmcWO3zvat35D8l t0AQqBEmqzz4DR9wfPFfd/x6Pz725bIlDJHkdVAeYrvnicKfln2NLnaodwULua0bKShpQHSs X-Proofpoint-ORIG-GUID: Z37Dc8Ps7hTMiNFDSx3YxwUe0ba-t0Sn X-Authority-Analysis: v=2.4 cv=Tq7mhCXh c=1 sm=1 tr=0 ts=68619f2b cx=c_pps a=m5Vt/hrsBiPMCU0y4gIsQw==:117 a=xqWC_Br6kY4A:10 a=6IFa9wvqVegA:10 a=cm27Pg_UAAAA:8 a=EUspDBNiAAAA:8 a=pGLkceISAAAA:8 a=SJjE8ph6EfIcxDFTuEgA:9 a=IoOABgeZipijB_acs4fv:22 X-Proofpoint-GUID: Z37Dc8Ps7hTMiNFDSx3YxwUe0ba-t0Sn X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.1.7,FMLib:17.12.80.40 definitions=2025-06-27_05,2025-06-27_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 lowpriorityscore=0 adultscore=0 priorityscore=1501 impostorscore=0 
phishscore=0 mlxscore=0 spamscore=0 bulkscore=0 suspectscore=0 malwarescore=0 mlxlogscore=999 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506290172 Content-Type: text/plain; charset="utf-8" From: Rob Clark Only needs to be supported for iopgtables mmu, the other cases are either only used for kernel managed mappings (where offset is always zero) or devices which do not support sparse bindings. Signed-off-by: Rob Clark Signed-off-by: Rob Clark Tested-by: Antonino Maniscalco Reviewed-by: Antonino Maniscalco --- drivers/gpu/drm/msm/adreno/a2xx_gpummu.c | 5 ++++- drivers/gpu/drm/msm/msm_gem.c | 4 ++-- drivers/gpu/drm/msm/msm_gem.h | 4 ++-- drivers/gpu/drm/msm/msm_gem_vma.c | 13 +++++++------ drivers/gpu/drm/msm/msm_iommu.c | 22 ++++++++++++++++++++-- drivers/gpu/drm/msm/msm_mmu.h | 2 +- 6 files changed, 36 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm= /adreno/a2xx_gpummu.c index 4280f71e472a..0407c9bc8c1b 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c @@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu) } =20 static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct a2xx_gpummu *gpummu =3D to_a2xx_gpummu(mmu); unsigned idx =3D (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE; struct sg_dma_page_iter dma_iter; unsigned prot_bits =3D 0; =20 + WARN_ON(off !=3D 0); + if (prot & IOMMU_WRITE) prot_bits |=3D 1; if (prot & IOMMU_READ) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 20d5e4b4d057..5c71a4be0dfa 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -453,7 +453,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_= object *obj, vma =3D lookup_vma(obj, vm); =20 
if (!vma) { - vma =3D msm_gem_vma_new(vm, obj, range_start, range_end); + vma =3D msm_gem_vma_new(vm, obj, 0, range_start, range_end); } else { GEM_WARN_ON(vma->va.addr < range_start); GEM_WARN_ON((vma->va.addr + obj->size) > range_end); @@ -491,7 +491,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, = struct drm_gpuva *vma) if (IS_ERR(pages)) return PTR_ERR(pages); =20 - return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size); + return msm_gem_vma_map(vma, prot, msm_obj->sgt); } =20 void msm_gem_unpin_locked(struct drm_gem_object *obj) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index a18872ab1393..0e7b17b2093b 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -110,9 +110,9 @@ struct msm_gem_vma { =20 struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, - u64 range_start, u64 range_end); + u64 offset, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct drm_gpuva *vma); -int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt,= int size); +int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt); void msm_gem_vma_close(struct drm_gpuva *vma); =20 struct msm_gem_object { diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_ge= m_vma.c index df8eb910ca31..ef0efd87e4a6 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma) =20 /* Map and pin vma: */ int -msm_gem_vma_map(struct drm_gpuva *vma, int prot, - struct sg_table *sgt, int size) +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); @@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
-
+	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
+				  vma->gem.offset, vma->va.range,
+				  prot);
 	if (ret) {
 		msm_vma->mapped = false;
 	}
@@ -93,7 +93,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 /* Create a new vma and allocate an iova for it */
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end)
+		u64 offset, u64 range_start, u64 range_end)
 {
 	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
 	struct drm_gpuvm_bo *vm_bo;
@@ -107,6 +107,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
+		BUG_ON(offset != 0);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						  obj->size, PAGE_SIZE, 0,
 						  range_start, range_end, 0);
@@ -120,7 +121,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 
 	GEM_WARN_ON((range_end - range_start) > obj->size);
 
-	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
+	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
 
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 739ce2c283a4..3c2eb59bfd49 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 }
 
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
 	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
@@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
 
+		if (!len)
+			break;
+
+		if (size <= off) {
+			off -= size;
+			continue;
+		}
+
+		phys += off;
+		size -= off;
+		size = min_t(size_t, size, len);
+		off = 0;
+
 		while (size) {
 			size_t pgsize, count, mapped = 0;
 			int ret;
@@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 			phys += mapped;
 			addr += mapped;
 			size -= mapped;
+			len -= mapped;
 
 			if (ret) {
 				msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
@@ -388,11 +403,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu)
 }
 
 static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	size_t ret;
 
+	WARN_ON(off != 0);
+
 	/* The arm-smmu driver expects the addresses to be sign extended */
 	if (iova & BIT_ULL(48))
 		iova |= GENMASK_ULL(63, 49);
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index 0c694907140d..9d61999f4d42 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -12,7 +12,7 @@ struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
-			size_t len, int prot);
+			size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*set_stall)(struct msm_mmu *mmu, bool enable);
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Sean Paul, Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar,
 Jessica Zhang, Marijn Suijten, David Airlie, Simona Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 18/42] drm/msm: Add PRR support
Date: Sun, 29 Jun 2025 13:13:01 -0700
Message-ID: <20250629201530.25775-19-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Add PRR (Partial Resident Region) support. PRR is a bypass address
which makes GPU writes go to /dev/null and reads return zero. This is
used to implement Vulkan sparse residency.

To support PRR/NULL mappings, we allocate a page to reserve a physical
address which we know will not be used as part of a GEM object, and
configure the SMMU to use this address for PRR/NULL mappings.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++
 drivers/gpu/drm/msm/msm_iommu.c         | 62 ++++++++++++++++++++++++-
 include/uapi/drm/msm_drm.h              |  2 +
 3 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 676fc078d545..12bf39c0516c 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -357,6 +357,13 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }
 
+static bool
+adreno_smmu_has_prr(struct msm_gpu *gpu)
+{
+	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev);
+	return adreno_smmu && adreno_smmu->set_prr_addr;
+}
+
 int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		uint32_t param, uint64_t *value, uint32_t *len)
 {
@@ -440,6 +447,9 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 	case MSM_PARAM_UCHE_TRAP_BASE:
 		*value = adreno_gpu->uche_trap_base;
 		return 0;
+	case MSM_PARAM_HAS_PRR:
+		*value = adreno_smmu_has_prr(gpu);
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 3c2eb59bfd49..a0c74ecdb11b 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -13,6 +13,7 @@ struct msm_iommu {
 	struct msm_mmu base;
 	struct iommu_domain *domain;
 	atomic_t pagetables;
+	struct page *prr_page;
 };
 
 #define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -112,6 +113,36 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 	return (size == 0) ? 0 : -EINVAL;
 }
 
+static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot)
+{
+	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+	struct msm_iommu *iommu = to_msm_iommu(pagetable->parent);
+	phys_addr_t phys = page_to_phys(iommu->prr_page);
+	u64 addr = iova;
+
+	while (len) {
+		size_t mapped = 0;
+		size_t size = PAGE_SIZE;
+		int ret;
+
+		ret = ops->map_pages(ops, addr, phys, size, 1, prot, GFP_KERNEL, &mapped);
+
+		/* map_pages could fail after mapping some of the pages,
+		 * so update the counters before error handling.
+		 */
+		addr += mapped;
+		len -= mapped;
+
+		if (ret) {
+			msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		struct sg_table *sgt, size_t off, size_t len,
 		int prot)
@@ -122,6 +153,9 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 	u64 addr = iova;
 	unsigned int i;
 
+	if (!sgt)
+		return msm_iommu_pagetable_map_prr(mmu, iova, len, prot);
+
 	for_each_sgtable_sg(sgt, sg, i) {
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
@@ -177,9 +211,16 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
 	 * If this is the last attached pagetable for the parent,
 	 * disable TTBR0 in the arm-smmu driver
 	 */
-	if (atomic_dec_return(&iommu->pagetables) == 0)
+	if (atomic_dec_return(&iommu->pagetables) == 0) {
 		adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL);
 
+		if (adreno_smmu->set_prr_bit) {
+			adreno_smmu->set_prr_bit(adreno_smmu->cookie, false);
+			__free_page(iommu->prr_page);
+			iommu->prr_page = NULL;
+		}
+	}
+
 	free_io_pgtable_ops(pagetable->pgtbl_ops);
 	kfree(pagetable);
 }
@@ -336,6 +377,25 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
 		kfree(pagetable);
 		return ERR_PTR(ret);
 	}
+
+	BUG_ON(iommu->prr_page);
+	if (adreno_smmu->set_prr_bit) {
+		/*
+		 * We need a zero'd page for two reasons:
+		 *
+		 * 1) Reserve a known physical address to use when
+		 *    mapping NULL / sparsely resident regions
+		 * 2) Read back zero
+		 *
+		 * It appears the hw drops writes to the PRR region
+		 * on the floor, but reads actually return whatever
+		 * is in the PRR page.
+		 */
+		iommu->prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+		adreno_smmu->set_prr_addr(adreno_smmu->cookie,
+					  page_to_phys(iommu->prr_page));
+		adreno_smmu->set_prr_bit(adreno_smmu->cookie, true);
+	}
 }
 
 /* Needed later for TLB flush */
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 2342cb90857e..5bc5e4526ccf 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -91,6 +91,8 @@ struct drm_msm_timespec {
 #define MSM_PARAM_UBWC_SWIZZLE 0x12 /* RO */
 #define MSM_PARAM_MACROTILE_MODE 0x13 /* RO */
 #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */
+/* PRR (Partially Resident Region) is required for sparse residency: */
+#define MSM_PARAM_HAS_PRR 0x15 /* RO */
 
 /* For backwards compat.  The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 19/42] drm/msm: Rename msm_gem_vma_purge() -> _unmap()
Date: Sun, 29 Jun 2025 13:13:02 -0700
Message-ID: <20250629201530.25775-20-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

This is a more descriptive name.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c     | 6 +++---
 drivers/gpu/drm/msm/msm_gem.h     | 2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5c71a4be0dfa..186d160b74de 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -57,7 +57,7 @@ static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 	drm_gpuvm_bo_for_each_va (vma, vm_bo) {
 		if (vma->vm != vm)
 			continue;
-		msm_gem_vma_purge(vma);
+		msm_gem_vma_unmap(vma);
 		msm_gem_vma_close(vma);
 		break;
 	}
@@ -433,7 +433,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 		drm_gpuvm_bo_get(vm_bo);
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			msm_gem_vma_purge(vma);
+			msm_gem_vma_unmap(vma);
 			if (close)
 				msm_gem_vma_close(vma);
 		}
@@ -607,7 +607,7 @@ static int clear_iova(struct drm_gem_object *obj,
 	if (!vma)
 		return 0;
 
-	msm_gem_vma_purge(vma);
+	msm_gem_vma_unmap(vma);
 	msm_gem_vma_close(vma);
 
 	return 0;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 0e7b17b2093b..b5bf21f62f9d 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -111,7 +111,7 @@ struct msm_gem_vma {
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 offset, u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct drm_gpuva *vma);
+void msm_gem_vma_unmap(struct drm_gpuva *vma);
 int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index ef0efd87e4a6..e16a8cafd8be 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -21,7 +21,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 }
 
 /* Actually unmap memory for the vma */
-void msm_gem_vma_purge(struct drm_gpuva *vma)
+void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 20/42] drm/msm: Drop queued submits on lastclose()
Date: Sun, 29 Jun 2025 13:13:03 -0700
Message-ID: <20250629201530.25775-21-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

If we haven't written the submit into the ringbuffer yet, then drop it.
The submit still retires through the normal path, to preserve fence
signalling order, but we can skip the IBs to the userspace cmdstream.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_drv.c        | 1 +
 drivers/gpu/drm/msm/msm_gpu.h        | 8 ++++++++
 drivers/gpu/drm/msm/msm_ringbuffer.c | 6 ++++++
 3 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 488fdf02aee9..c4b0a38276fa 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -254,6 +254,7 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 
 static void context_close(struct msm_context *ctx)
 {
+	ctx->closed = true;
 	msm_submitqueue_close(ctx);
 	msm_context_put(ctx);
 }
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 231577656fae..a35e1c7bbcdd 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -356,6 +356,14 @@ struct msm_context {
 	 */
 	int queueid;
 
+	/**
+	 * @closed: The device file associated with this context has been closed.
+	 *
+	 * Once the device is closed, any submits that have not been written
+	 * to the ring buffer are no-op'd.
+	 */
+	bool closed;
+
 	/** @vm: the per-process GPU address-space */
 	struct drm_gpuvm *vm;
 
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index 552b8da9e5f7..b2f612e5dc79 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -17,6 +17,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 	struct msm_fence_context *fctx = submit->ring->fctx;
 	struct msm_gpu *gpu = submit->gpu;
 	struct msm_drm_private *priv = gpu->dev->dev_private;
+	unsigned nr_cmds = submit->nr_cmds;
 	int i;
 
 	msm_fence_init(submit->hw_fence, fctx);
@@ -36,8 +37,13 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 	/* TODO move submit path over to using a per-ring lock.. */
 	mutex_lock(&gpu->lock);
 
+	if (submit->queue->ctx->closed)
+		submit->nr_cmds = 0;
+
 	msm_gpu_submit(gpu, submit);
 
+	submit->nr_cmds = nr_cmds;
+
 	mutex_unlock(&gpu->lock);
 
 	return dma_fence_get(submit->hw_fence);
-- 
2.50.0

From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
    Sean Paul, Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar,
    Jessica Zhang, Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 21/42] drm/msm: Lazily create context VM
Date: Sun, 29 Jun 2025 13:13:04 -0700
Message-ID: <20250629201530.25775-22-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

In the next commit, a way for userspace to opt in to a userspace-managed
VM is added.  For this to work, we need to defer creation of the VM
until it is needed.
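The create-on-first-use pattern this patch introduces can be sketched as a few lines of user-space C. The names here (fake_ctx, fake_context_vm, etc.) are illustrative stand-ins, not the kernel's msm_context/drm_gpuvm types, and the single-threaded sketch omits the locking a real driver needs.

```c
#include <stddef.h>

/* Illustrative stand-ins; not the real msm_context / drm_gpuvm types. */
struct fake_vm { int id; };
struct fake_ctx { struct fake_vm *vm; };

static int vms_created;  /* counts calls to the "create" path */

static struct fake_vm *fake_create_private_vm(void)
{
	static struct fake_vm the_vm;
	vms_created++;
	return &the_vm;
}

/*
 * Mirrors msm_context_vm(): the VM is created on first use rather than
 * at open() time, leaving a window for userspace to opt in to a
 * userspace-managed VM before any VM exists.  Once created, the same
 * VM is returned for the lifetime of the context.
 */
struct fake_vm *fake_context_vm(struct fake_ctx *ctx)
{
	if (!ctx->vm)
		ctx->vm = fake_create_private_vm();
	return ctx->vm;
}
```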
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  3 ++-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 14 +++++++-----
 drivers/gpu/drm/msm/msm_drv.c           | 29 ++++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.h           |  9 +++++++-
 5 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0b78888c58af..7364b7e9c266 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
 	struct msm_context *ctx = submit->queue->ctx;
+	struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
@@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 12bf39c0516c..2baf381ea401 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -369,6 +369,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct drm_device *drm = gpu->dev;
+	/* Note ctx can be NULL when called from rd_open(): */
+	struct drm_gpuvm *vm = ctx ? msm_context_vm(drm, ctx) : NULL;
 
 	/* No pointer params yet */
 	if (*len != 0)
@@ -414,8 +416,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = 0;
 		return 0;
 	case MSM_PARAM_FAULTS:
-		if (ctx->vm)
-			*value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
+		if (vm)
+			*value = gpu->global_faults + to_msm_vm(vm)->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -423,14 +425,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = gpu->suspend_count;
 		return 0;
 	case MSM_PARAM_VA_START:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_start;
+		*value = vm->mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_range;
+		*value = vm->mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c4b0a38276fa..5cbc2c7b1204 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -218,10 +218,29 @@ static void load_gpu(struct drm_device *dev)
 	mutex_unlock(&init_lock);
 }
 
+/**
+ * msm_context_vm - lazily create the context's VM
+ *
+ * @dev: the drm device
+ * @ctx: the context
+ *
+ * The VM is lazily created, so that userspace has a chance to opt-in to
+ * having a userspace managed VM before the VM is created.
+ *
+ * Note that this does not return a reference to the VM.  Once the VM is
+ * created, it exists for the lifetime of the context.
+ */
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	if (!ctx->vm)
+		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+	return ctx->vm;
+}
+
 static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
-	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -234,7 +253,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 	kref_init(&ctx->ref);
 	msm_submitqueue_init(dev, ctx);
 
-	ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
 	file->driver_priv = ctx;
 
 	ctx->seqno = atomic_inc_return(&ident);
@@ -413,7 +431,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	 * Don't pin the memory here - just get an address so that userspace can
 	 * be productive
 	 */
-	return msm_gem_get_iova(obj, ctx->vm, iova);
+	return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova);
 }
 
 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
@@ -422,18 +440,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx = file->driver_priv;
+	struct drm_gpuvm *vm = msm_context_vm(dev, ctx);
 
 	if (!priv->gpu)
 		return -EINVAL;
 
 	/* Only supported if per-process address space is supported: */
-	if (priv->gpu->vm == ctx->vm)
+	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
 
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
-	return msm_gem_set_iova(obj, ctx->vm, iova);
+	return msm_gem_set_iova(obj, vm, iova);
 }
 
 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index d8ff6aeb04ab..068ca618376c 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->vm = queue->ctx->vm;
+	submit->vm = msm_context_vm(dev, queue->ctx);
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index a35e1c7bbcdd..29662742a7e1 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -364,7 +364,12 @@ struct msm_context {
 	 */
 	bool closed;
 
-	/** @vm: the per-process GPU address-space */
+	/**
+	 * @vm:
+	 *
+	 * The per-process GPU address-space.  Do not access directly, use
+	 * msm_context_vm().
+	 */
 	struct drm_gpuvm *vm;
 
 	/** @kref: the reference count */
@@ -449,6 +454,8 @@ struct msm_context {
 	atomic64_t ctx_mem;
 };
 
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
-- 
2.50.0

From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
    Sean Paul, Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar,
    Jessica Zhang, Marijn Suijten, David Airlie, Simona Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 22/42] drm/msm: Add opt-in for VM_BIND
Date: Sun, 29 Jun 2025 13:13:05 -0700
Message-ID: <20250629201530.25775-23-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.1.7,FMLib:17.12.80.40 definitions=2025-06-27_05,2025-06-27_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 adultscore=0 mlxscore=0 mlxlogscore=999 spamscore=0 suspectscore=0 bulkscore=0 priorityscore=1501 lowpriorityscore=0 phishscore=0 impostorscore=0 malwarescore=0 clxscore=1015 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506290171 Content-Type: text/plain; charset="utf-8" From: Rob Clark Add a SET_PARAM for userspace to request to manage to the VM itself, instead of getting a kernel managed VM. In order to transition to a userspace managed VM, this param must be set before any mappings are created. Signed-off-by: Rob Clark Signed-off-by: Rob Clark Tested-by: Antonino Maniscalco Reviewed-by: Antonino Maniscalco --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 ++-- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 15 +++++++++++++ drivers/gpu/drm/msm/msm_drv.c | 22 +++++++++++++++++-- drivers/gpu/drm/msm/msm_gem.c | 8 +++++++ drivers/gpu/drm/msm/msm_gpu.c | 5 +++-- drivers/gpu/drm/msm/msm_gpu.h | 29 +++++++++++++++++++++++-- include/uapi/drm/msm_drm.h | 24 ++++++++++++++++++++ 7 files changed, 99 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/ad= reno/a6xx_gpu.c index 7364b7e9c266..62b5f294a2aa 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2276,7 +2276,7 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_d= evice *pdev) } =20 static struct drm_gpuvm * -a6xx_create_private_vm(struct msm_gpu *gpu) +a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed) { struct msm_mmu *mmu; =20 @@ -2286,7 +2286,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu) return ERR_CAST(mmu); =20 return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START, - adreno_private_vm_size(gpu), true); + 
adreno_private_vm_size(gpu), kernel_managed); } =20 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *= ring) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/= adreno/adreno_gpu.c index 2baf381ea401..ff25e3dada04 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -504,6 +504,21 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_c= ontext *ctx, if (!capable(CAP_SYS_ADMIN)) return UERR(EPERM, drm, "invalid permissions"); return msm_context_set_sysprof(ctx, gpu, value); + case MSM_PARAM_EN_VM_BIND: + /* We can only support VM_BIND with per-process pgtables: */ + if (ctx->vm =3D=3D gpu->vm) + return UERR(EINVAL, drm, "requires per-process pgtables"); + + /* + * We can only swtich to VM_BIND mode if the VM has not yet + * been created: + */ + if (ctx->vm) + return UERR(EBUSY, drm, "VM already created"); + + ctx->userspace_managed_vm =3D value; + + return 0; default: return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param); } diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 5cbc2c7b1204..c1627cae6ae6 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -232,9 +232,21 @@ static void load_gpu(struct drm_device *dev) */ struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_contex= t *ctx) { + static DEFINE_MUTEX(init_lock); struct msm_drm_private *priv =3D dev->dev_private; - if (!ctx->vm) - ctx->vm =3D msm_gpu_create_private_vm(priv->gpu, current); + + /* Once ctx->vm is created it is valid for the lifetime of the context: */ + if (ctx->vm) + return ctx->vm; + + mutex_lock(&init_lock); + if (!ctx->vm) { + ctx->vm =3D msm_gpu_create_private_vm( + priv->gpu, current, !ctx->userspace_managed_vm); + + } + mutex_unlock(&init_lock); + return ctx->vm; } =20 @@ -424,6 +436,9 @@ static int msm_ioctl_gem_info_iova(struct drm_device *d= ev, if (!priv->gpu) return -EINVAL; =20 + if 
(msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "VM_BIND is enabled"); + if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; =20 @@ -445,6 +460,9 @@ static int msm_ioctl_gem_info_set_iova(struct drm_devic= e *dev, if (!priv->gpu) return -EINVAL; =20 + if (msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "VM_BIND is enabled"); + /* Only supported if per-process address space is supported: */ if (priv->gpu->vm =3D=3D vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 186d160b74de..d16d3012434a 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -81,6 +81,14 @@ static void msm_gem_close(struct drm_gem_object *obj, st= ruct drm_file *file) if (!ctx->vm) return; =20 + /* + * VM_BIND does not depend on implicit teardown of VMAs on handle + * close, but instead on implicit teardown of the VM when the device + * is closed (see msm_gem_vm_close()) + */ + if (msm_context_is_vmbind(ctx)) + return; + /* * TODO we might need to kick this to a queue to avoid blocking * in CLOSE ioctl diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index fc4d6c9049b0..c08c942d85a0 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -829,7 +829,8 @@ static int get_clocks(struct platform_device *pdev, str= uct msm_gpu *gpu) =20 /* Return a new address space for a msm_drm_private instance */ struct drm_gpuvm * -msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task, + bool kernel_managed) { struct drm_gpuvm *vm =3D NULL; =20 @@ -841,7 +842,7 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct t= ask_struct *task) * the global one */ if (gpu->funcs->create_private_vm) { - vm =3D gpu->funcs->create_private_vm(gpu); + vm =3D gpu->funcs->create_private_vm(gpu, kernel_managed); if (!IS_ERR(vm)) 
to_msm_vm(vm)->pid =3D get_pid(task_pid(task)); } diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 29662742a7e1..b38a33a67ee9 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -79,7 +79,7 @@ struct msm_gpu_funcs { void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_devic= e *pdev); - struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu); + struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu, bool kernel_m= anaged); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); =20 /** @@ -364,6 +364,14 @@ struct msm_context { */ bool closed; =20 + /** + * @userspace_managed_vm: + * + * Has userspace opted-in to userspace managed VM (ie. VM_BIND) via + * MSM_PARAM_EN_VM_BIND? + */ + bool userspace_managed_vm; + /** * @vm: * @@ -456,6 +464,22 @@ struct msm_context { =20 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_contex= t *ctx); =20 +/** + * msm_context_is_vm_bind() - has userspace opted in to VM_BIND? + * + * @ctx: the drm_file context + * + * See MSM_PARAM_EN_VM_BIND. If userspace is managing the VM, it can + * do sparse binding including having multiple, potentially partial, + * mappings in the VM. Therefore certain legacy uabi (ie. GET_IOVA, + * SET_IOVA) are rejected because they don't have a sensible meaning. 
+ */ +static inline bool +msm_context_is_vmbind(struct msm_context *ctx) +{ + return ctx->userspace_managed_vm; +} + /** * msm_gpu_convert_priority - Map userspace priority to ring # and sched p= riority * @@ -683,7 +707,8 @@ int msm_gpu_init(struct drm_device *drm, struct platfor= m_device *pdev, const char *name, struct msm_gpu_config *config); =20 struct drm_gpuvm * -msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task, + bool kernel_managed); =20 void msm_gpu_cleanup(struct msm_gpu *gpu); =20 diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 5bc5e4526ccf..b974f5a24dbc 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -93,6 +93,30 @@ struct drm_msm_timespec { #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */ /* PRR (Partially Resident Region) is required for sparse residency: */ #define MSM_PARAM_HAS_PRR 0x15 /* RO */ +/* MSM_PARAM_EN_VM_BIND is set to 1 to enable VM_BIND ops. + * + * With VM_BIND enabled, userspace is required to allocate iova and use the + * VM_BIND ops for map/unmap ioctls. MSM_INFO_SET_IOVA and MSM_INFO_GET_I= OVA + * will be rejected. (The latter does not have a sensible meaning when a = BO + * can have multiple and/or partial mappings.) + * + * With VM_BIND enabled, userspace does not include a submit_bo table in t= he + * SUBMIT ioctl (this will be rejected), the resident set is determined by + * the the VM_BIND ops. + * + * Enabling VM_BIND will fail on devices which do not have per-process pgt= ables. + * And it is not allowed to disable VM_BIND once it has been enabled. + * + * Enabling VM_BIND should be done (attempted) prior to allocating any BOs= or + * submitqueues of type MSM_SUBMITQUEUE_VM_BIND. 
+ *
+ * Relatedly, when VM_BIND mode is enabled, the kernel will not try to recover
+ * from GPU faults or failed async VM_BIND ops, in particular because it is
+ * difficult to communicate to userspace which op failed so that userspace
+ * could rewind and try again.  When the VM is marked unusable, the SUBMIT
+ * ioctl will throw -EPIPE.
+ */
+#define MSM_PARAM_EN_VM_BIND 0x16  /* WO, once */
 
 /* For backwards compat.  The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
-- 
2.50.0
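The opt-in rules spelled out in the MSM_PARAM_EN_VM_BIND comment (enable-only, write-once, requires per-process pgtables, must happen before allocating BOs) can be sketched as a small standalone model. This is illustrative C only, not the kernel implementation; the `struct opt_ctx` type, the helper name, and the exact errno values are assumptions made for the sketch:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Toy stand-in for the per-drm_file context state (hypothetical type). */
struct opt_ctx {
	bool has_per_process_pgtables;
	bool allocated_bos;      /* has this drm_file already created BOs? */
	bool vm_bind_enabled;
};

/* Model of setting MSM_PARAM_EN_VM_BIND, per the documented rules. */
static int set_en_vm_bind(struct opt_ctx *c, unsigned long value)
{
	if (!c->has_per_process_pgtables)
		return -EINVAL;  /* fails on devices without per-process pgtables */
	if (value != 1)
		return -EINVAL;  /* only enabling is allowed; no disable */
	if (c->allocated_bos)
		return -EBUSY;   /* must opt in before allocating any BOs */
	c->vm_bind_enabled = true;
	return 0;
}
```

Once enabled, any later attempt to write a value other than 1 is rejected, which is what makes the param effectively write-once.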
From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
	Sean Paul, Marijn Suijten, David Airlie, Simona Vetter,
	Konrad Dybcio, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 23/42] drm/msm: Mark VM as unusable on GPU hangs
Date: Sun, 29 Jun 2025 13:13:06 -0700
Message-ID: <20250629201530.25775-24-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

If userspace has opted-in to
VM_BIND, then GPU hangs and VM_BIND errors will mark the VM as
unusable.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.h        | 17 +++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_submit.c |  3 +++
 drivers/gpu/drm/msm/msm_gpu.c        | 16 ++++++++++++++--
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index b5bf21f62f9d..f2631a8c62b9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -76,6 +76,23 @@ struct msm_gem_vm {
 
 	/** @managed: is this a kernel managed VM? */
 	bool managed;
+
+	/**
+	 * @unusable: True if the VM has turned unusable because something
+	 * bad happened during an asynchronous request.
+	 *
+	 * We don't try to recover from such failures, because this implies
+	 * informing userspace about the specific operation that failed, and
+	 * hoping the userspace driver can replay things from there.  This all
+	 * sounds very complicated for little gain.
+	 *
+	 * Instead, we should just flag the VM as unusable, and fail any
+	 * further request targeting this VM.
+	 *
+	 * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST
+	 * situation, where the logical device needs to be re-created.
+	 */
+	bool unusable;
 };
 #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
 
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 068ca618376c..9562b6343e13 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -681,6 +681,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
+	if (to_msm_vm(ctx->vm)->unusable)
+		return UERR(EPIPE, dev, "context is unusable");
+
 	/* for now, we just have 3d pipe..
eventually this would need to
 	 * be more clever to dispatch to appropriate gpu module:
 	 */
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index c08c942d85a0..0846f6c5169f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -389,8 +389,20 @@ static void recover_worker(struct kthread_work *work)
 
 	/* Increment the fault counts */
 	submit->queue->faults++;
-	if (submit->vm)
-		to_msm_vm(submit->vm)->faults++;
+	if (submit->vm) {
+		struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+
+		vm->faults++;
+
+		/*
+		 * If userspace has opted-in to VM_BIND (and therefore userspace
+		 * management of the VM), faults mark the VM as unusable.  This
+		 * matches vulkan expectations (vulkan is the main target for
+		 * VM_BIND).
+		 */
+		if (!vm->managed)
+			vm->unusable = true;
+	}
 
 	get_comm_cmdline(submit, &comm, &cmd);
 
-- 
2.50.0
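The hang handling in this patch reduces to a small state machine: a GPU fault on a userspace-managed (VM_BIND) VM flips the VM to unusable, and every subsequent SUBMIT on that context fails with -EPIPE, while legacy kernel-managed VMs keep recovering as before. A standalone sketch of that logic, using toy types rather than the driver's actual structs:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Toy stand-in for struct msm_gem_vm (illustration only). */
struct toy_vm {
	bool managed;       /* kernel managed (legacy) VM? */
	bool unusable;
	unsigned faults;
};

/* Mirrors the recover_worker() change: count the fault, and mark
 * userspace-managed VMs unusable instead of attempting recovery. */
static void on_gpu_fault(struct toy_vm *vm)
{
	vm->faults++;
	if (!vm->managed)
		vm->unusable = true;
}

/* Mirrors the msm_ioctl_gem_submit() check added above. */
static int submit_ioctl(const struct toy_vm *vm)
{
	if (vm->unusable)
		return -EPIPE;  /* vulkan-style "device lost" */
	return 0;
}
```

As the kernel-doc comment notes, userspace is expected to treat -EPIPE roughly like VK_ERROR_DEVICE_LOST and re-create its logical device.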
From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
	Sean Paul, Marijn Suijten, David Airlie, Simona Vetter,
	Konrad Dybcio, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	Sumit Semwal, Christian König,
	linux-kernel@vger.kernel.org (open list),
	linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK),
	linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v9 24/42] drm/msm: Add _NO_SHARE flag
Date: Sun, 29 Jun 2025 13:13:07 -0700
Message-ID: <20250629201530.25775-25-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Buffers that are not shared between contexts can share a single resv
object.  This way drm_gpuvm will not track them as external objects, and
submit-time validation overhead will be O(1) for all N non-shared BOs,
instead of O(N).

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_drv.h       |  1 +
 drivers/gpu/drm/msm/msm_gem.c       | 21 +++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++
 include/uapi/drm/msm_drm.h          | 14 ++++++++++++++
 4 files changed, 51 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 0fe3c9a24baa..9b1ccb2b18f6 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -269,6 +269,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg);
+struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags);
 int msm_gem_prime_pin(struct drm_gem_object *obj);
 void msm_gem_prime_unpin(struct drm_gem_object *obj);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index d16d3012434a..100d159d52e2 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++
b/drivers/gpu/drm/msm/msm_gem.c
@@ -546,6 +546,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 
 	msm_gem_assert_locked(obj);
 
+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return -EINVAL;
+
 	vma = get_vma_locked(obj, vm, range_start, range_end);
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
@@ -1076,6 +1079,14 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 		put_pages(obj);
 	}
 
+	if (msm_obj->flags & MSM_BO_NO_SHARE) {
+		struct drm_gem_object *r_obj =
+			container_of(obj->resv, struct drm_gem_object, _resv);
+
+		/* Drop reference we hold to shared resv obj: */
+		drm_gem_object_put(r_obj);
+	}
+
 	drm_gem_object_release(obj);
 
 	kfree(msm_obj->metadata);
@@ -1108,6 +1119,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 	if (name)
 		msm_gem_object_set_name(obj, "%s", name);
 
+	if (flags & MSM_BO_NO_SHARE) {
+		struct msm_context *ctx = file->driver_priv;
+		struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm);
+
+		drm_gem_object_get(r_obj);
+
+		obj->resv = r_obj->resv;
+	}
+
 	ret = drm_gem_handle_create(file, obj, handle);
 
 	/* drop reference from allocate - handle holds it now */
@@ -1140,6 +1160,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = {
 	.free = msm_gem_free_object,
 	.open = msm_gem_open,
 	.close = msm_gem_close,
+	.export = msm_gem_prime_export,
 	.pin = msm_gem_prime_pin,
 	.unpin = msm_gem_prime_unpin,
 	.get_sg_table = msm_gem_prime_get_sg_table,
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index 2e37913d5a6a..4d93f2daeeaa 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int npages = obj->size >> PAGE_SHIFT;
 
+	if (msm_obj->flags & MSM_BO_NO_SHARE)
+		return ERR_PTR(-EINVAL);
+
 	if (WARN_ON(!msm_obj->pages))
 		/* should have already pinned! */
 		return ERR_PTR(-ENOMEM);
 
@@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 	return msm_gem_import(dev, attach->dmabuf, sg);
 }
 
+
+struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags)
+{
+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return ERR_PTR(-EPERM);
+
+	return drm_gem_prime_export(obj, flags);
+}
+
 int msm_gem_prime_pin(struct drm_gem_object *obj)
 {
 	struct page **pages;
@@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj)
 	if (drm_gem_is_imported(obj))
 		return 0;
 
+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return -EINVAL;
+
 	pages = msm_gem_pin_pages_locked(obj);
 	if (IS_ERR(pages))
 		ret = PTR_ERR(pages);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index b974f5a24dbc..1bccc347945c 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -140,6 +140,19 @@ struct drm_msm_param {
 
 #define MSM_BO_SCANOUT 0x00000001 /* scanout capable */
 #define MSM_BO_GPU_READONLY 0x00000002
+/* Private buffers do not need to be explicitly listed in the SUBMIT
+ * ioctl, unless referenced by a drm_msm_gem_submit_cmd.  Private
+ * buffers may NOT be imported/exported or used for scanout (or any
+ * other situation where buffers can be indefinitely pinned, but
+ * cases other than scanout are all kernel owned BOs which are not
+ * visible to userspace).
+ *
+ * In exchange for those constraints, all private BOs associated with
+ * a single context (drm_file) share a single dma_resv, and if there
+ * has been no eviction since the last submit, there is no per-BO
+ * bookkeeping to do, significantly cutting the SUBMIT overhead.
+ */
+#define MSM_BO_NO_SHARE 0x00000004
 #define MSM_BO_CACHE_MASK 0x000f0000 /* cache modes */
 #define MSM_BO_CACHED 0x00010000
@@ -149,6 +162,7 @@ struct drm_msm_param {
 
 #define MSM_BO_FLAGS         (MSM_BO_SCANOUT | \
                               MSM_BO_GPU_READONLY | \
+                              MSM_BO_NO_SHARE | \
                               MSM_BO_CACHE_MASK)
 
 struct drm_msm_gem_new {
-- 
2.50.0
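The resv-sharing trick behind MSM_BO_NO_SHARE can be modeled standalone: private BOs borrow the single per-context (VM) reservation object, so the drm_gpuvm "is this BO external?" test (BO resv differs from the VM resv) is false for all of them, which is what makes per-submit bookkeeping O(1). The types below are invented stand-ins for illustration, not the driver's structs:

```c
#include <assert.h>
#include <stdbool.h>

#define MSM_BO_NO_SHARE 0x00000004

/* Toy reservation object and context (illustration only). */
struct toy_resv { int refcount; };
struct toy_ctx  { struct toy_resv vm_resv; };
struct toy_bo   {
	unsigned flags;
	struct toy_resv own;    /* used by shareable BOs */
	struct toy_resv *resv;  /* what drm_gpuvm would compare against */
};

static void toy_bo_init(struct toy_bo *bo, struct toy_ctx *ctx, unsigned flags)
{
	bo->flags = flags;
	if (flags & MSM_BO_NO_SHARE) {
		/* private BO: borrow the context's resv, take a reference */
		ctx->vm_resv.refcount++;
		bo->resv = &ctx->vm_resv;
	} else {
		/* shareable BO: carries its own resv */
		bo->own.refcount = 1;
		bo->resv = &bo->own;
	}
}

/* drm_gpuvm tracks a BO as "external" when its resv is not the VM's. */
static bool toy_is_external(const struct toy_bo *bo, const struct toy_ctx *ctx)
{
	return bo->resv != &ctx->vm_resv;
}
```

This also shows why the patch must drop a reference to the resv-owning GEM object on free: each private BO pins the context object that owns the shared resv.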
From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Rob Clark, Sean Paul, Konrad Dybcio, Dmitry Baryshkov,
	Abhinav Kumar, Jessica Zhang, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 25/42] drm/msm: Crashdump prep for sparse mappings
Date: Sun, 29 Jun 2025 13:13:08 -0700
Message-ID: <20250629201530.25775-26-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

In this case, userspace could request dumping partial GEM obj
mappings.  Also drop use of should_dump() helper, which really only
makes sense in the old submit->bos[] table world.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gpu.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 0846f6c5169f..0a9d5ecbef7b 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -219,13 +219,14 @@ static void msm_gpu_devcoredump_free(void *data)
 }
 
 static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
-		struct drm_gem_object *obj, u64 iova, bool full)
+		struct drm_gem_object *obj, u64 iova,
+		bool full, size_t offset, size_t size)
 {
 	struct msm_gpu_state_bo *state_bo = &state->bos[state->nr_bos];
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
 	/* Don't record write only objects */
-	state_bo->size = obj->size;
+	state_bo->size = size;
 	state_bo->flags = msm_obj->flags;
 	state_bo->iova = iova;
 
@@ -236,7 +237,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 	if (full) {
 		void *ptr;
 
-		state_bo->data = kvmalloc(obj->size, GFP_KERNEL);
+		state_bo->data = kvmalloc(size, GFP_KERNEL);
 		if (!state_bo->data)
 			goto out;
 
@@ -249,7 +250,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 			goto out;
 		}
 
-		memcpy(state_bo->data, ptr, obj->size);
+		memcpy(state_bo->data, ptr + offset, size);
 		msm_gem_put_vaddr(obj);
 	}
 out:
@@ -281,6 +282,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 		state->fault_info = *fault_info;
 
 	if (submit) {
+		extern bool rd_full;
 		int i;
 
 		if (state->fault_info.ttbr0) {
@@ -296,9 +298,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 				sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
 
 		for (i = 0; state->bos && i < submit->nr_bos; i++) {
-			msm_gpu_crashstate_get_bo(state, submit->bos[i].obj,
-					submit->bos[i].iova,
-					should_dump(submit, i));
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+						  dump, 0, obj->size);
 		}
 	}
 
-- 
2.50.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
	Sean Paul, Marijn Suijten, David Airlie, Simona Vetter,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 26/42] drm/msm: rd dumping prep for sparse mappings
Date: Sun, 29 Jun 2025 13:13:09 -0700
Message-ID: <20250629201530.25775-27-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Similar to the previous commit, add support for dumping partial mappings.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.h | 10 --------
 drivers/gpu/drm/msm/msm_rd.c  | 38 ++++++++++++++++-------------------
 2 files changed, 17 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index f2631a8c62b9..3a5f81437b5d 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -403,14 +403,4 @@ static inline void msm_gem_submit_put(struct msm_gem_submit *submit)
 
 void msm_submit_retire(struct msm_gem_submit *submit);
 
-/* helper to determine of a buffer in submit should be dumped, used for both
- * devcoredump and debugfs cmdstream dumping:
- */
-static inline bool
-should_dump(struct msm_gem_submit *submit, int idx)
-{
-	extern bool rd_full;
-	return rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
-}
-
 #endif /* __MSM_GEM_H__ */
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index 39138e190cb9..edbcb93410a9 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -308,21 +308,11 @@ void msm_rd_debugfs_cleanup(struct msm_drm_private *priv)
 	priv->hangrd = NULL;
 }
 
-static void snapshot_buf(struct msm_rd_state *rd,
-		struct msm_gem_submit *submit, int idx,
-		uint64_t iova, uint32_t size, bool full)
+static void snapshot_buf(struct msm_rd_state *rd, struct drm_gem_object *obj,
+		uint64_t iova, bool full, size_t offset, size_t size)
 {
-	struct drm_gem_object *obj = submit->bos[idx].obj;
-	unsigned offset = 0;
 	const char *buf;
 
-	if (iova) {
-		offset = iova - submit->bos[idx].iova;
-	} else {
-		iova = submit->bos[idx].iova;
-		size = obj->size;
-	}
-
 	/*
 	 * Always write the GPUADDR header so can get a complete list of all the
 	 * buffers in the cmd
@@ -333,10 +323,6 @@ static void snapshot_buf(struct msm_rd_state *rd,
 	if (!full)
 		return;
 
-	/* But only dump the contents of buffers marked READ */
-	if (!(submit->bos[idx].flags & MSM_SUBMIT_BO_READ))
-		return;
-
 	buf = msm_gem_get_vaddr_active(obj);
 	if (IS_ERR(buf))
 		return;
@@ -352,6 +338,7 @@ static void snapshot_buf(struct msm_rd_state *rd,
 void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 		const char *fmt, ...)
 {
+	extern bool rd_full;
 	struct task_struct *task;
 	char msg[256];
 	int i, n;
@@ -385,16 +372,25 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 
 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
 
-	for (i = 0; i < submit->nr_bos; i++)
-		snapshot_buf(rd, submit, i, 0, 0, should_dump(submit, i));
+	for (i = 0; i < submit->nr_bos; i++) {
+		struct drm_gem_object *obj = submit->bos[i].obj;
+		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+	}
 
 	for (i = 0; i < submit->nr_cmds; i++) {
 		uint32_t szd = submit->cmd[i].size; /* in dwords */
+		int idx = submit->cmd[i].idx;
+		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
 
 		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!should_dump(submit, i)) {
-			snapshot_buf(rd, submit, submit->cmd[i].idx,
-				     submit->cmd[i].iova, szd * 4, true);
+		if (!dump) {
+			struct drm_gem_object *obj = submit->bos[idx].obj;
+			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+
+			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+				     offset, szd * 4);
		}
	}
 
-- 
2.50.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Rob Clark, Sean Paul, Konrad Dybcio, Dmitry Baryshkov,
	Abhinav Kumar, Jessica Zhang, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 27/42] drm/msm: Crashdump support for sparse
Date: Sun, 29 Jun 2025 13:13:10 -0700
Message-ID: <20250629201530.25775-28-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

In this case, we need to iterate the VMAs looking for ones with the
MSM_VMA_DUMP flag set.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gpu.c | 96 ++++++++++++++++++++++++++---------
 1 file changed, 72 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 0a9d5ecbef7b..ccd9ebfc5c7c 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -241,9 +241,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 		if (!state_bo->data)
 			goto out;
 
-		msm_gem_lock(obj);
 		ptr = msm_gem_get_vaddr_active(obj);
-		msm_gem_unlock(obj);
 		if (IS_ERR(ptr)) {
 			kvfree(state_bo->data);
 			state_bo->data = NULL;
@@ -251,12 +249,75 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 		}
 
 		memcpy(state_bo->data, ptr + offset, size);
-		msm_gem_put_vaddr(obj);
+		msm_gem_put_vaddr_locked(obj);
 	}
 out:
 	state->nr_bos++;
 }
 
+static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submit *submit)
+{
+	extern bool rd_full;
+
+	if (!submit)
+		return;
+
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_exec exec;
+		struct drm_gpuva *vma;
+		unsigned cnt = 0;
+
+		drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
+		drm_exec_until_all_locked(&exec) {
+			cnt = 0;
+
+			drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(submit->vm));
+			drm_exec_retry_on_contention(&exec);
+
+			drm_gpuvm_for_each_va (vma, submit->vm) {
+				if (!vma->gem.obj)
+					continue;
+
+				cnt++;
+				drm_exec_lock_obj(&exec, vma->gem.obj);
+				drm_exec_retry_on_contention(&exec);
+			}
+
+		}
+
+		drm_gpuvm_for_each_va (vma, submit->vm)
+			cnt++;
+
+		state->bos = kcalloc(cnt, sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			msm_gpu_crashstate_get_bo(state, vma->gem.obj, vma->va.addr,
+						  dump, vma->gem.offset, vma->va.range);
+		}
+
+		drm_exec_fini(&exec);
+	} else {
+		state->bos = kcalloc(submit->nr_bos,
+			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+		for (int i = 0; state->bos && i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			msm_gem_lock(obj);
+			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+						  dump, 0, obj->size);
+			msm_gem_unlock(obj);
+		}
+	}
+}
+
 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 		struct msm_gem_submit *submit, struct msm_gpu_fault_info *fault_info,
 		char *comm, char *cmd)
@@ -281,30 +342,17 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 	if (fault_info)
 		state->fault_info = *fault_info;
 
-	if (submit) {
-		extern bool rd_full;
-		int i;
-
-		if (state->fault_info.ttbr0) {
-			struct msm_gpu_fault_info *info = &state->fault_info;
-			struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
+	if (submit && state->fault_info.ttbr0) {
+		struct msm_gpu_fault_info *info = &state->fault_info;
+		struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
 
-			msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
-						   &info->asid);
-			msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
-		}
-
-		state->bos = kcalloc(submit->nr_bos,
-			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
-
-		for (i = 0; state->bos && i < submit->nr_bos; i++) {
-			struct drm_gem_object *obj = submit->bos[i].obj;
-			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
-			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
-						  dump, 0, obj->size);
-		}
+		msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
+					   &info->asid);
+		msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
 	}
 
+	crashstate_get_bos(state, submit);
+
 	/* Set the active crash state to be dumped on failure */
 	gpu->crashstate = state;
 
-- 
2.50.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
	Sean Paul, Marijn Suijten, David Airlie, Simona Vetter,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 28/42] drm/msm: rd dumping support for sparse
Date: Sun, 29 Jun 2025 13:13:11 -0700
Message-ID: <20250629201530.25775-29-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

As with devcoredump, we need to iterate the VMAs to figure out what to
dump.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_rd.c | 48 +++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index edbcb93410a9..54493a94dcb7 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -372,25 +372,43 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 
 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
 
-	for (i = 0; i < submit->nr_bos; i++) {
-		struct drm_gem_object *obj = submit->bos[i].obj;
-		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_gpuva *vma;
 
-		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
-	}
+		drm_gpuvm_resv_assert_held(submit->vm);
 
-	for (i = 0; i < submit->nr_cmds; i++) {
-		uint32_t szd = submit->cmd[i].size; /* in dwords */
-		int idx = submit->cmd[i].idx;
-		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			snapshot_buf(rd, vma->gem.obj, vma->va.addr, dump,
+				     vma->gem.offset, vma->va.range);
+		}
+
+	} else {
+		for (i = 0; i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+		}
+
+		for (i = 0; i < submit->nr_cmds; i++) {
+			uint32_t szd = submit->cmd[i].size; /* in dwords */
+			int idx = submit->cmd[i].idx;
+			bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
 
-		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!dump) {
-			struct drm_gem_object *obj = submit->bos[idx].obj;
-			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+			/* snapshot cmdstream bo's (if we haven't already): */
+			if (!dump) {
+				struct drm_gem_object *obj = submit->bos[idx].obj;
+				size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
 
-			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
-				     offset, szd * 4);
+				snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+					     offset, szd * 4);
+			}
 		}
 	}
 
-- 
2.50.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
    Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal,
    Christian König, linux-kernel@vger.kernel.org (open list),
    linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK),
    linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v9 29/42] drm/msm: Extract out syncobj helpers
Date: Sun, 29 Jun 2025 13:13:12 -0700
Message-ID: <20250629201530.25775-30-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

We'll be re-using these for the VM_BIND ioctl. Also, rename a few
things in the uapi header to reflect that syncobj use is not specific
to the submit ioctl.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/Makefile         |   1 +
 drivers/gpu/drm/msm/msm_gem_submit.c | 192 ++------------------------
 drivers/gpu/drm/msm/msm_syncobj.c    | 172 ++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_syncobj.h    |  37 ++++++
 include/uapi/drm/msm_drm.h           |  26 ++--
 5 files changed, 235 insertions(+), 193 deletions(-)
 create mode 100644 drivers/gpu/drm/msm/msm_syncobj.c
 create mode 100644 drivers/gpu/drm/msm/msm_syncobj.h

diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 7a2ada6e2d74..7e81441903a7 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -127,6 +127,7 @@ msm-y += \
 	msm_rd.o \
 	msm_ringbuffer.o \
 	msm_submitqueue.o \
+	msm_syncobj.o \
 	msm_gpu_tracepoints.o \
 
 msm-$(CONFIG_DRM_FBDEV_EMULATION) += msm_fbdev.o

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 9562b6343e13..9f18771a1e88 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -16,6 +16,7 @@
 #include "msm_gpu.h"
 #include "msm_gem.h"
 #include "msm_gpu_trace.h"
+#include "msm_syncobj.h"
 
 /* For userspace errors, use DRM_UT_DRIVER.. so that userspace can enable
  * error msgs for debugging, but we don't spam dmesg by default
@@ -491,173 +492,6 @@ void msm_submit_retire(struct msm_gem_submit *submit)
 	}
 }
 
-struct msm_submit_post_dep {
-	struct drm_syncobj *syncobj;
-	uint64_t point;
-	struct dma_fence_chain *chain;
-};
-
-static struct drm_syncobj **msm_parse_deps(struct msm_gem_submit *submit,
-					   struct drm_file *file,
-					   uint64_t in_syncobjs_addr,
-					   uint32_t nr_in_syncobjs,
-					   size_t syncobj_stride)
-{
-	struct drm_syncobj **syncobjs = NULL;
-	struct drm_msm_gem_submit_syncobj syncobj_desc = {0};
-	int ret = 0;
-	uint32_t i, j;
-
-	syncobjs = kcalloc(nr_in_syncobjs, sizeof(*syncobjs),
-			   GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
-	if (!syncobjs)
-		return ERR_PTR(-ENOMEM);
-
-	for (i = 0; i < nr_in_syncobjs; ++i) {
-		uint64_t address = in_syncobjs_addr + i * syncobj_stride;
-
-		if (copy_from_user(&syncobj_desc,
-				   u64_to_user_ptr(address),
-				   min(syncobj_stride, sizeof(syncobj_desc)))) {
-			ret = -EFAULT;
-			break;
-		}
-
-		if (syncobj_desc.point &&
-		    !drm_core_check_feature(submit->dev, DRIVER_SYNCOBJ_TIMELINE)) {
-			ret = SUBMIT_ERROR(EOPNOTSUPP, submit, "syncobj timeline unsupported");
-			break;
-		}
-
-		if (syncobj_desc.flags & ~MSM_SUBMIT_SYNCOBJ_FLAGS) {
-			ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj flags: %x", syncobj_desc.flags);
-			break;
-		}
-
-		ret = drm_sched_job_add_syncobj_dependency(&submit->base, file,
-							   syncobj_desc.handle, syncobj_desc.point);
-		if (ret)
-			break;
-
-		if (syncobj_desc.flags & MSM_SUBMIT_SYNCOBJ_RESET) {
-			syncobjs[i] =
-				drm_syncobj_find(file, syncobj_desc.handle);
-			if (!syncobjs[i]) {
-				ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj handle: %u", i);
-				break;
-			}
-		}
-	}
-
-	if (ret) {
-		for (j = 0; j <= i; ++j) {
-			if (syncobjs[j])
-				drm_syncobj_put(syncobjs[j]);
-		}
-		kfree(syncobjs);
-		return ERR_PTR(ret);
-	}
-	return syncobjs;
-}
-
-static void msm_reset_syncobjs(struct drm_syncobj **syncobjs,
-			       uint32_t nr_syncobjs)
-{
-	uint32_t i;
-
-	for (i = 0; syncobjs && i < nr_syncobjs; ++i) {
-		if (syncobjs[i])
-			drm_syncobj_replace_fence(syncobjs[i], NULL);
-	}
-}
-
-static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
-						       struct drm_file *file,
-						       uint64_t syncobjs_addr,
-						       uint32_t nr_syncobjs,
-						       size_t syncobj_stride)
-{
-	struct msm_submit_post_dep *post_deps;
-	struct drm_msm_gem_submit_syncobj syncobj_desc = {0};
-	int ret = 0;
-	uint32_t i, j;
-
-	post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps),
-			    GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
-	if (!post_deps)
-		return ERR_PTR(-ENOMEM);
-
-	for (i = 0; i < nr_syncobjs; ++i) {
-		uint64_t address = syncobjs_addr + i * syncobj_stride;
-
-		if (copy_from_user(&syncobj_desc,
-				   u64_to_user_ptr(address),
-				   min(syncobj_stride, sizeof(syncobj_desc)))) {
-			ret = -EFAULT;
-			break;
-		}
-
-		post_deps[i].point = syncobj_desc.point;
-
-		if (syncobj_desc.flags) {
-			ret = UERR(EINVAL, dev, "invalid syncobj flags");
-			break;
-		}
-
-		if (syncobj_desc.point) {
-			if (!drm_core_check_feature(dev,
-						    DRIVER_SYNCOBJ_TIMELINE)) {
-				ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported");
-				break;
-			}
-
-			post_deps[i].chain = dma_fence_chain_alloc();
-			if (!post_deps[i].chain) {
-				ret = -ENOMEM;
-				break;
-			}
-		}
-
-		post_deps[i].syncobj =
-			drm_syncobj_find(file, syncobj_desc.handle);
-		if (!post_deps[i].syncobj) {
-			ret = UERR(EINVAL, dev, "invalid syncobj handle");
-			break;
-		}
-	}
-
-	if (ret) {
-		for (j = 0; j <= i; ++j) {
-			dma_fence_chain_free(post_deps[j].chain);
-			if (post_deps[j].syncobj)
-				drm_syncobj_put(post_deps[j].syncobj);
-		}
-
-		kfree(post_deps);
-		return ERR_PTR(ret);
-	}
-
-	return post_deps;
-}
-
-static void msm_process_post_deps(struct msm_submit_post_dep *post_deps,
-				  uint32_t count, struct dma_fence *fence)
-{
-	uint32_t i;
-
-	for (i = 0; post_deps && i < count; ++i) {
-		if (post_deps[i].chain) {
-			drm_syncobj_add_point(post_deps[i].syncobj,
-					      post_deps[i].chain,
-					      fence, post_deps[i].point);
-			post_deps[i].chain = NULL;
-		} else {
-			drm_syncobj_replace_fence(post_deps[i].syncobj,
-						  fence);
-		}
-	}
-}
-
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		struct drm_file *file)
 {
@@ -668,7 +502,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	struct msm_gpu *gpu = priv->gpu;
 	struct msm_gpu_submitqueue *queue;
 	struct msm_ringbuffer *ring;
-	struct msm_submit_post_dep *post_deps = NULL;
+	struct msm_syncobj_post_dep *post_deps = NULL;
 	struct drm_syncobj **syncobjs_to_reset = NULL;
 	struct sync_file *sync_file = NULL;
 	int out_fence_fd = -1;
@@ -745,10 +579,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	}
 
 	if (args->flags & MSM_SUBMIT_SYNCOBJ_IN) {
-		syncobjs_to_reset = msm_parse_deps(submit, file,
-						   args->in_syncobjs,
-						   args->nr_in_syncobjs,
-						   args->syncobj_stride);
+		syncobjs_to_reset = msm_syncobj_parse_deps(dev, &submit->base,
+							   file, args->in_syncobjs,
+							   args->nr_in_syncobjs,
+							   args->syncobj_stride);
 		if (IS_ERR(syncobjs_to_reset)) {
 			ret = PTR_ERR(syncobjs_to_reset);
 			goto out_unlock;
@@ -756,10 +590,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	}
 
 	if (args->flags & MSM_SUBMIT_SYNCOBJ_OUT) {
-		post_deps = msm_parse_post_deps(dev, file,
-						args->out_syncobjs,
-						args->nr_out_syncobjs,
-						args->syncobj_stride);
+		post_deps = msm_syncobj_parse_post_deps(dev, file,
+							args->out_syncobjs,
+							args->nr_out_syncobjs,
+							args->syncobj_stride);
 		if (IS_ERR(post_deps)) {
 			ret = PTR_ERR(post_deps);
 			goto out_unlock;
@@ -902,10 +736,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	args->fence = submit->fence_id;
 	queue->last_fence = submit->fence_id;
 
-	msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs);
-	msm_process_post_deps(post_deps, args->nr_out_syncobjs,
-			      submit->user_fence);
-
+	msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs);
+	msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs,
+				      submit->user_fence);
 
 out:
 	submit_cleanup(submit, !!ret);

diff --git a/drivers/gpu/drm/msm/msm_syncobj.c b/drivers/gpu/drm/msm/msm_syncobj.c
new file mode 100644
index 000000000000..4baa9f522c54
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_syncobj.c
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Google, Inc */
+
+#include "drm/drm_drv.h"
+
+#include "msm_drv.h"
+#include "msm_syncobj.h"
+
+struct drm_syncobj **
+msm_syncobj_parse_deps(struct drm_device *dev,
+		       struct drm_sched_job *job,
+		       struct drm_file *file,
+		       uint64_t in_syncobjs_addr,
+		       uint32_t nr_in_syncobjs,
+		       size_t syncobj_stride)
+{
+	struct drm_syncobj **syncobjs = NULL;
+	struct drm_msm_syncobj syncobj_desc = {0};
+	int ret = 0;
+	uint32_t i, j;
+
+	syncobjs = kcalloc(nr_in_syncobjs, sizeof(*syncobjs),
+			   GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
+	if (!syncobjs)
+		return ERR_PTR(-ENOMEM);
+
+	for (i = 0; i < nr_in_syncobjs; ++i) {
+		uint64_t address = in_syncobjs_addr + i * syncobj_stride;
+
+		if (copy_from_user(&syncobj_desc,
+				   u64_to_user_ptr(address),
+				   min(syncobj_stride, sizeof(syncobj_desc)))) {
+			ret = -EFAULT;
+			break;
+		}
+
+		if (syncobj_desc.point &&
+		    !drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE)) {
+			ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported");
+			break;
+		}
+
+		if (syncobj_desc.flags & ~MSM_SYNCOBJ_FLAGS) {
+			ret = UERR(EINVAL, dev, "invalid syncobj flags: %x", syncobj_desc.flags);
+			break;
+		}
+
+		ret = drm_sched_job_add_syncobj_dependency(job, file,
+							   syncobj_desc.handle,
+							   syncobj_desc.point);
+		if (ret)
+			break;
+
+		if (syncobj_desc.flags & MSM_SYNCOBJ_RESET) {
+			syncobjs[i] = drm_syncobj_find(file, syncobj_desc.handle);
+			if (!syncobjs[i]) {
+				ret = UERR(EINVAL, dev, "invalid syncobj handle: %u", i);
+				break;
+			}
+		}
+	}
+
+	if (ret) {
+		for (j = 0; j <= i; ++j) {
+			if (syncobjs[j])
+				drm_syncobj_put(syncobjs[j]);
+		}
+		kfree(syncobjs);
+		return ERR_PTR(ret);
+	}
+	return syncobjs;
+}
+
+void
+msm_syncobj_reset(struct drm_syncobj **syncobjs, uint32_t nr_syncobjs)
+{
+	uint32_t i;
+
+	for (i = 0; syncobjs && i < nr_syncobjs; ++i) {
+		if (syncobjs[i])
+			drm_syncobj_replace_fence(syncobjs[i], NULL);
+	}
+}
+
+struct msm_syncobj_post_dep *
+msm_syncobj_parse_post_deps(struct drm_device *dev,
+			    struct drm_file *file,
+			    uint64_t syncobjs_addr,
+			    uint32_t nr_syncobjs,
+			    size_t syncobj_stride)
+{
+	struct msm_syncobj_post_dep *post_deps;
+	struct drm_msm_syncobj syncobj_desc = {0};
+	int ret = 0;
+	uint32_t i, j;
+
+	post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps),
+			    GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
+	if (!post_deps)
+		return ERR_PTR(-ENOMEM);
+
+	for (i = 0; i < nr_syncobjs; ++i) {
+		uint64_t address = syncobjs_addr + i * syncobj_stride;
+
+		if (copy_from_user(&syncobj_desc,
+				   u64_to_user_ptr(address),
+				   min(syncobj_stride, sizeof(syncobj_desc)))) {
+			ret = -EFAULT;
+			break;
+		}
+
+		post_deps[i].point = syncobj_desc.point;
+
+		if (syncobj_desc.flags) {
+			ret = UERR(EINVAL, dev, "invalid syncobj flags");
+			break;
+		}
+
+		if (syncobj_desc.point) {
+			if (!drm_core_check_feature(dev,
+						    DRIVER_SYNCOBJ_TIMELINE)) {
+				ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported");
+				break;
+			}
+
+			post_deps[i].chain = dma_fence_chain_alloc();
+			if (!post_deps[i].chain) {
+				ret = -ENOMEM;
+				break;
+			}
+		}
+
+		post_deps[i].syncobj =
+			drm_syncobj_find(file, syncobj_desc.handle);
+		if (!post_deps[i].syncobj) {
+			ret = UERR(EINVAL, dev, "invalid syncobj handle");
+			break;
+		}
+	}
+
+	if (ret) {
+		for (j = 0; j <= i; ++j) {
+			dma_fence_chain_free(post_deps[j].chain);
+			if (post_deps[j].syncobj)
+				drm_syncobj_put(post_deps[j].syncobj);
+		}
+
+		kfree(post_deps);
+		return ERR_PTR(ret);
+	}
+
+	return post_deps;
+}
+
+void
+msm_syncobj_process_post_deps(struct msm_syncobj_post_dep *post_deps,
+			      uint32_t count, struct dma_fence *fence)
+{
+	uint32_t i;
+
+	for (i = 0; post_deps && i < count; ++i) {
+		if (post_deps[i].chain) {
+			drm_syncobj_add_point(post_deps[i].syncobj,
+					      post_deps[i].chain,
+					      fence, post_deps[i].point);
+			post_deps[i].chain = NULL;
+		} else {
+			drm_syncobj_replace_fence(post_deps[i].syncobj,
+						  fence);
+		}
+	}
+}

diff --git a/drivers/gpu/drm/msm/msm_syncobj.h b/drivers/gpu/drm/msm/msm_syncobj.h
new file mode 100644
index 000000000000..bcaa15d01da0
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_syncobj.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Google, Inc */
+
+#ifndef __MSM_GEM_SYNCOBJ_H__
+#define __MSM_GEM_SYNCOBJ_H__
+
+#include "drm/drm_device.h"
+#include "drm/drm_syncobj.h"
+#include "drm/gpu_scheduler.h"
+
+struct msm_syncobj_post_dep {
+	struct drm_syncobj *syncobj;
+	uint64_t point;
+	struct dma_fence_chain *chain;
+};
+
+struct drm_syncobj **
+msm_syncobj_parse_deps(struct drm_device *dev,
+		       struct drm_sched_job *job,
+		       struct drm_file *file,
+		       uint64_t in_syncobjs_addr,
+		       uint32_t nr_in_syncobjs,
+		       size_t syncobj_stride);
+
+void msm_syncobj_reset(struct drm_syncobj **syncobjs, uint32_t nr_syncobjs);
+
+struct msm_syncobj_post_dep *
+msm_syncobj_parse_post_deps(struct drm_device *dev,
+			    struct drm_file *file,
+			    uint64_t syncobjs_addr,
+			    uint32_t nr_syncobjs,
+			    size_t syncobj_stride);
+
+void msm_syncobj_process_post_deps(struct msm_syncobj_post_dep *post_deps,
+				   uint32_t count, struct dma_fence *fence);
+
+#endif /* __MSM_GEM_SYNCOBJ_H__ */

diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 1bccc347945c..2c2fc4b284d0 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -220,6 +220,17 @@ struct drm_msm_gem_cpu_fini {
  * Cmdstream Submission:
  */
 
+#define MSM_SYNCOBJ_RESET 0x00000001  /* Reset syncobj after wait. */
+#define MSM_SYNCOBJ_FLAGS ( \
+		MSM_SYNCOBJ_RESET | \
+		0)
+
+struct drm_msm_syncobj {
+	__u32 handle;     /* in, syncobj handle. */
+	__u32 flags;      /* in, from MSM_SUBMIT_SYNCOBJ_FLAGS */
+	__u64 point;      /* in, timepoint for timeline syncobjs. */
+};
+
 /* The value written into the cmdstream is logically:
  *
  * ((relocbuf->gpuaddr + reloc_offset) << shift) | or
@@ -309,17 +320,6 @@ struct drm_msm_gem_submit_bo {
 		MSM_SUBMIT_FENCE_SN_IN | \
 		0)
 
-#define MSM_SUBMIT_SYNCOBJ_RESET 0x00000001  /* Reset syncobj after wait. */
-#define MSM_SUBMIT_SYNCOBJ_FLAGS ( \
-		MSM_SUBMIT_SYNCOBJ_RESET | \
-		0)
-
-struct drm_msm_gem_submit_syncobj {
-	__u32 handle;     /* in, syncobj handle. */
-	__u32 flags;      /* in, from MSM_SUBMIT_SYNCOBJ_FLAGS */
-	__u64 point;      /* in, timepoint for timeline syncobjs. */
-};
-
 /* Each cmdstream submit consists of a table of buffers involved, and
  * one or more cmdstream buffers.  This allows for conditional execution
  * (context-restore), and IB buffers needed for per tile/bin draw cmds.
@@ -333,8 +333,8 @@ struct drm_msm_gem_submit {
 	__u64 cmds;           /* in, ptr to array of submit_cmd's */
 	__s32 fence_fd;       /* in/out fence fd (see MSM_SUBMIT_FENCE_FD_IN/OUT) */
 	__u32 queueid;        /* in, submitqueue id */
-	__u64 in_syncobjs;    /* in, ptr to array of drm_msm_gem_submit_syncobj */
-	__u64 out_syncobjs;   /* in, ptr to array of drm_msm_gem_submit_syncobj */
+	__u64 in_syncobjs;    /* in, ptr to array of drm_msm_syncobj */
+	__u64 out_syncobjs;   /* in, ptr to array of drm_msm_syncobj */
 	__u32 nr_in_syncobjs;  /* in, number of entries in in_syncobj */
 	__u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */
 	__u32 syncobj_stride; /* in, stride of syncobj arrays. */
-- 
2.50.0
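A detail worth noting in the helpers above is the stride-tolerant array parsing: each element is read with `min(syncobj_stride, sizeof(syncobj_desc))` bytes, so userspace built against an older, smaller version of the struct still parses correctly, with the unread tail left zeroed by the `{0}` initializer. The following is an illustrative userspace model of that pattern (using `memcpy` in place of `copy_from_user`); `syncobj_desc_model` and `parse_elem` are hypothetical names.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirrors struct drm_msm_syncobj's layout: two u32s then a u64. */
struct syncobj_desc_model {
	uint32_t handle;
	uint32_t flags;
	uint64_t point;
};

/* Read element i of a user-supplied array whose element size is
 * `stride`, copying at most sizeof(*out) bytes and zeroing the rest
 * (as the kernel's `syncobj_desc = {0}` initializer does). */
static void parse_elem(struct syncobj_desc_model *out, const void *base,
		       size_t stride, size_t i)
{
	size_t n = stride < sizeof(*out) ? stride : sizeof(*out);

	memset(out, 0, sizeof(*out));
	memcpy(out, (const char *)base + i * stride, n);
}
```

With `stride == 8` (an element predating the `point` field), `point` simply parses as 0, which the kernel then treats as a non-timeline syncobj.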
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
    Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 30/42] drm/msm: Use DMA_RESV_USAGE_BOOKKEEP/KERNEL
Date: Sun, 29 Jun 2025 13:13:13 -0700
Message-ID: <20250629201530.25775-31-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Any place we wait for a BO to become idle, we should use BOOKKEEP usage
to ensure that it waits for _any_ activity.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c          | 6 +++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 100d159d52e2..b688d397cc47 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -93,8 +93,8 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 	 * TODO we might need to kick this to a queue to avoid blocking
 	 * in CLOSE ioctl
 	 */
-	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
-			      msecs_to_jiffies(1000));
+	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_BOOKKEEP, false,
+			      MAX_SCHEDULE_TIMEOUT);
 
 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
 	put_iova_spaces(obj, ctx->vm, true);
@@ -895,7 +895,7 @@ bool msm_gem_active(struct drm_gem_object *obj)
 	if (to_msm_bo(obj)->pin_count)
 		return true;
 
-	return !dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true));
+	return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_BOOKKEEP);
 }
 
 int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)

diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 5faf6227584a..1039e3c0a47b 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -139,7 +139,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 static bool
 wait_for_idle(struct drm_gem_object *obj)
 {
-	enum dma_resv_usage usage = dma_resv_usage_rw(true);
+	enum dma_resv_usage usage = DMA_RESV_USAGE_BOOKKEEP;
 	return dma_resv_wait_timeout(obj->resv, usage, false, 10) > 0;
 }
 
-- 
2.50.0
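The reason BOOKKEEP is the right usage for idle-waits comes from how `enum dma_resv_usage` is ordered: KERNEL < WRITE < READ < BOOKKEEP, and querying a reservation at usage U also considers every fence at a "lower" usage. Waiting at BOOKKEEP therefore covers all activity, while `dma_resv_usage_rw(true)` (READ) misses bookkeeping-only fences. The following is an illustrative standalone model of that ordering rule, not kernel code; `usage_model` and `wait_covers` are hypothetical names.

```c
#include <assert.h>
#include <stdbool.h>

/* Models the ordering of enum dma_resv_usage: a wait at usage U
 * observes every fence whose usage is <= U in this ordering. */
enum usage_model {
	USAGE_KERNEL,	/* kernel-internal ops, e.g. migration */
	USAGE_WRITE,	/* implicit-sync writes */
	USAGE_READ,	/* implicit-sync reads */
	USAGE_BOOKKEEP,	/* everything else, e.g. async VM ops */
};

static bool wait_covers(enum usage_model wait_usage,
			enum usage_model fence_usage)
{
	return fence_usage <= wait_usage;
}
```

Under this model, a READ-level wait (what `dma_resv_usage_rw(true)` gave the old code) does not cover BOOKKEEP fences, while a BOOKKEEP-level wait covers everything, which is what an idle-wait needs.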
Y4ACDKT0UqlIHXSpPKl+BlRIjOV1CzbT0B8Jmvm46kSk0nWiATgGaaHhheSnvXmFO+MrcXUWGIK v51gtzAwcwfw/qqpl9vQcA+IR1aFf9p6Gg== X-Received: by 2002:a17:903:1c6:b0:236:9726:7264 with SMTP id d9443c01a7336-23ac2d8687emr195661605ad.5.1751228223376; Sun, 29 Jun 2025 13:17:03 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGevPfQSIWojO32Pea5qxonOnZ6zvm7KiG1B4j9tv+GSvLjv4tUmn8e7L2D3YdqcPD003X3EA== X-Received: by 2002:a17:903:1c6:b0:236:9726:7264 with SMTP id d9443c01a7336-23ac2d8687emr195661285ad.5.1751228222930; Sun, 29 Jun 2025 13:17:02 -0700 (PDT) Received: from localhost ([2601:1c0:5000:d5c:5b3e:de60:4fda:e7b1]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-23acb2e21bdsm66541875ad.11.2025.06.29.13.17.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 29 Jun 2025 13:17:02 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott , Antonino Maniscalco , Danilo Krummrich , Rob Clark , Rob Clark , Dmitry Baryshkov , Abhinav Kumar , Jessica Zhang , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , Konrad Dybcio , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Sumit Semwal , =?UTF-8?q?Christian=20K=C3=B6nig?= , linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b) Subject: [PATCH v9 31/42] drm/msm: Add VM_BIND submitqueue Date: Sun, 29 Jun 2025 13:13:14 -0700 Message-ID: <20250629201530.25775-32-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable 
X-Authority-Analysis: v=2.4 cv=H/Pbw/Yi c=1 sm=1 tr=0 ts=68619f41 cx=c_pps a=JL+w9abYAAE89/QcEU+0QA==:117 a=xqWC_Br6kY4A:10 a=6IFa9wvqVegA:10 a=cm27Pg_UAAAA:8 a=EUspDBNiAAAA:8 a=pGLkceISAAAA:8 a=PnHp71_pcMtKiH-pJVMA:9 a=324X-CrmTo6CU4MGRt3R:22 X-Proofpoint-ORIG-GUID: HyYlHjXHByMQdPajriJgrA39uR97E-3u X-Proofpoint-GUID: HyYlHjXHByMQdPajriJgrA39uR97E-3u X-Proofpoint-Spam-Details-Enc: AW1haW4tMjUwNjI5MDE3MiBTYWx0ZWRfX8KfUfR1+Lf6X eyXk6YmWtSAQAewmrxGSPBepYlRM/eQPTr0iImyyOka6A/oD8gnwAe0niesQVXT7Fw3YU3jQHSl YMf0Eh8hNMXmS2d8xzXfQfmpMGzDJWlnuIeLk5lHcDXg+8uzz8bRfJXrAcASCcBNS6YDa80UVkQ GXpOqrm1RBZUJQyPQ962uoaLNsNOFY+kUMQ2wFzEJdWjfj5MFksHeDSgsW9cWRz15yQyCtQeb0m JsC8AVTrene3CevohCkI6RE//jPzrkRe1P2zDXVGE6DJ/NYfCz1wVXp744nVWZ1pXbZuvLL49HD FhUFV0VDXBBkzgXrmHU7xIQwTU4A6X72ZVZJ7zgt91JckE4CCsfen5EZcBeu3iXLhBfR9ioTQME Tdmx1iGy6V09ZsmWQyJW14rGg/jhi/ZRQERDHLzG0AYLGneLzExIYa0vvIrHcrdsbpE1w4Fe X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.1.7,FMLib:17.12.80.40 definitions=2025-06-27_05,2025-06-27_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 priorityscore=1501 clxscore=1015 mlxscore=0 lowpriorityscore=0 spamscore=0 adultscore=0 bulkscore=0 phishscore=0 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506290172 Content-Type: text/plain; charset="utf-8" From: Rob Clark This submitqueue type isn't tied to a hw ringbuffer, but instead executes on the CPU for performing async VM_BIND ops. 
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.h         | 12 +++++
 drivers/gpu/drm/msm/msm_gem_submit.c  | 60 +++++++++++++++++++---
 drivers/gpu/drm/msm/msm_gem_vma.c     | 71 +++++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gpu.h         |  3 ++
 drivers/gpu/drm/msm/msm_submitqueue.c | 67 +++++++++++++++++++------
 include/uapi/drm/msm_drm.h            |  9 +++-
 6 files changed, 197 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 3a5f81437b5d..af637409be39 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -53,6 +53,13 @@ struct msm_gem_vm {
	/** @base: Inherit from drm_gpuvm. */
	struct drm_gpuvm base;

+	/**
+	 * @sched: Scheduler used for asynchronous VM_BIND request.
+	 *
+	 * Unused for kernel managed VMs (where all operations are synchronous).
+	 */
+	struct drm_gpu_scheduler sched;
+
	/**
	 * @mm: Memory management for kernel managed VA allocations
	 *
@@ -71,6 +78,9 @@ struct msm_gem_vm {
	 */
	struct pid *pid;

+	/** @last_fence: Fence for last pending work scheduled on the VM */
+	struct dma_fence *last_fence;
+
	/** @faults: the number of GPU hangs associated with this address space */
	int faults;

@@ -100,6 +110,8 @@ struct drm_gpuvm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
		  u64 va_start, u64 va_size, bool managed);

+void msm_gem_vm_close(struct drm_gpuvm *gpuvm);
+
 struct msm_fence_context;

 #define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 9f18771a1e88..e2174b7d0e40 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -4,6 +4,7 @@
 * Author: Rob Clark
 */

+#include
 #include
 #include
 #include
@@ -258,30 +259,43 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 static int submit_lock_objects(struct msm_gem_submit *submit)
 {
	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
+	struct drm_exec *exec = &submit->exec;
	int ret;

-// TODO need to add vm_bind path which locks vm resv + external objs
	drm_exec_init(&submit->exec, flags, submit->nr_bos);

+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		drm_exec_until_all_locked (&submit->exec) {
+			ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1);
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				return ret;
+
+			ret = drm_gpuvm_prepare_objects(submit->vm, exec, 1);
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				return ret;
+		}
+
+		return 0;
+	}
+
	drm_exec_until_all_locked (&submit->exec) {
		ret = drm_exec_lock_obj(&submit->exec,
					drm_gpuvm_resv_obj(submit->vm));
		drm_exec_retry_on_contention(&submit->exec);
		if (ret)
-			goto error;
+			return ret;
		for (unsigned i = 0; i < submit->nr_bos; i++) {
			struct drm_gem_object *obj = submit->bos[i].obj;
			ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
			drm_exec_retry_on_contention(&submit->exec);
			if (ret)
-				goto error;
+				return ret;
		}
	}

	return 0;
-
-error:
-	return ret;
 }

 static int submit_fence_sync(struct msm_gem_submit *submit)
@@ -367,9 +381,18 @@ static void submit_unpin_objects(struct msm_gem_submit *submit)

 static void submit_attach_object_fences(struct msm_gem_submit *submit)
 {
-	int i;
+	struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+	struct dma_fence *last_fence;
+
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		drm_gpuvm_resv_add_fence(submit->vm, &submit->exec,
+					 submit->user_fence,
+					 DMA_RESV_USAGE_BOOKKEEP,
+					 DMA_RESV_USAGE_BOOKKEEP);
+		return;
+	}

-	for (i = 0; i < submit->nr_bos; i++) {
+	for (unsigned i = 0; i < submit->nr_bos; i++) {
		struct drm_gem_object *obj = submit->bos[i].obj;

		if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE)
@@ -379,6 +402,10 @@ static void submit_attach_object_fences(struct msm_gem_submit *submit)
			dma_resv_add_fence(obj->resv, submit->user_fence,
					   DMA_RESV_USAGE_READ);
	}
+
+	last_fence = vm->last_fence;
+	vm->last_fence = dma_fence_unwrap_merge(submit->user_fence, last_fence);
+	dma_fence_put(last_fence);
 }

 static int submit_bo(struct msm_gem_submit *submit, uint32_t idx,
@@ -537,6 +564,11 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
	if (!queue)
		return -ENOENT;

+	if (queue->flags & MSM_SUBMITQUEUE_VM_BIND) {
+		ret = UERR(EINVAL, dev, "Invalid queue type");
+		goto out_post_unlock;
+	}
+
	ring = gpu->rb[queue->ring_nr];

	if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
@@ -726,6 +758,18 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,

	submit_attach_object_fences(submit);

+	if (msm_context_is_vmbind(ctx)) {
+		/*
+		 * If we are not using VM_BIND, submit_pin_vmas() will validate
+		 * just the BOs attached to the submit.  In that case we don't
+		 * need to validate the _entire_ vm, because userspace tracked
+		 * what BOs are associated with the submit.
+		 */
+		ret = drm_gpuvm_validate(submit->vm, &submit->exec);
+		if (ret)
+			goto out;
+	}
+
	/* The scheduler owns a ref now: */
	msm_gem_submit_get(submit);

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index e16a8cafd8be..cf37abb98235 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -16,6 +16,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
	drm_mm_takedown(&vm->mm);
	if (vm->mmu)
		vm->mmu->funcs->destroy(vm->mmu);
+	dma_fence_put(vm->last_fence);
	put_pid(vm->pid);
	kfree(vm);
 }
@@ -154,6 +155,9 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = {
	.vm_free = msm_gem_vm_free,
 };

+static const struct drm_sched_backend_ops msm_vm_bind_ops = {
+};
+
 /**
 * msm_gem_vm_create() - Create and initialize a &msm_gem_vm
 * @drm: the drm device
@@ -195,6 +199,21 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
		goto err_free_vm;
	}

+	if (!managed) {
+		struct drm_sched_init_args args = {
+			.ops = &msm_vm_bind_ops,
+			.num_rqs = 1,
+			.credit_limit = 1,
+			.timeout = MAX_SCHEDULE_TIMEOUT,
+			.name = "msm-vm-bind",
+			.dev = drm->dev,
+		};
+
+		ret = drm_sched_init(&vm->sched, &args);
+		if (ret)
+			goto err_free_dummy;
+	}
+
	drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
		       va_start, va_size, 0, 0, &msm_gpuvm_ops);
	drm_gem_object_put(dummy_gem);
@@ -206,8 +225,60 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,

	return &vm->base;

+err_free_dummy:
+	drm_gem_object_put(dummy_gem);
+
 err_free_vm:
	kfree(vm);
	return ERR_PTR(ret);

 }
+
+/**
+ * msm_gem_vm_close() - Close a VM
+ * @gpuvm: The VM to close
+ *
+ * Called when the drm device file is closed, to tear down VM related resources
+ * (which will drop refcounts to GEM objects that were still mapped into the
+ * VM at the time).
+ */
+void
+msm_gem_vm_close(struct drm_gpuvm *gpuvm)
+{
+	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
+	struct drm_gpuva *vma, *tmp;
+
+	/*
+	 * For kernel managed VMs, the VMAs are torn down when the handle is
+	 * closed, so nothing more to do.
+	 */
+	if (vm->managed)
+		return;
+
+	if (vm->last_fence)
+		dma_fence_wait(vm->last_fence, false);
+
+	/* Kill the scheduler now, so we aren't racing with it for cleanup: */
+	drm_sched_stop(&vm->sched, NULL);
+	drm_sched_fini(&vm->sched);
+
+	/* Tear down any remaining mappings: */
+	dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL);
+	drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) {
+		struct drm_gem_object *obj = vma->gem.obj;
+
+		if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) {
+			drm_gem_object_get(obj);
+			msm_gem_lock(obj);
+		}
+
+		msm_gem_vma_unmap(vma);
+		msm_gem_vma_close(vma);
+
+		if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) {
+			msm_gem_unlock(obj);
+			drm_gem_object_put(obj);
+		}
+	}
+	dma_resv_unlock(drm_gpuvm_resv(gpuvm));
+}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index b38a33a67ee9..5705e8d4e6b9 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -564,6 +564,9 @@ struct msm_gpu_submitqueue {
	struct mutex lock;
	struct kref ref;
	struct drm_sched_entity *entity;
+
+	/** @_vm_bind_entity: used for @entity pointer for VM_BIND queues */
+	struct drm_sched_entity _vm_bind_entity[0];
 };

 struct msm_gpu_state_bo {
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 8ced49c7557b..8617a82cd6b3 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -72,6 +72,9 @@ void msm_submitqueue_destroy(struct kref *kref)

	idr_destroy(&queue->fence_idr);

+	if (queue->entity == &queue->_vm_bind_entity[0])
+		drm_sched_entity_destroy(queue->entity);
+
	msm_context_put(queue->ctx);

	kfree(queue);
@@ -102,7 +105,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,

 void msm_submitqueue_close(struct msm_context *ctx)
 {
-	struct msm_gpu_submitqueue *entry, *tmp;
+	struct msm_gpu_submitqueue *queue, *tmp;

	if (!ctx)
		return;
@@ -111,10 +114,17 @@ void msm_submitqueue_close(struct msm_context *ctx)
	 * No lock needed in close and there won't
	 * be any more user ioctls coming our way
	 */
-	list_for_each_entry_safe(entry, tmp, &ctx->submitqueues, node) {
-		list_del(&entry->node);
-		msm_submitqueue_put(entry);
+	list_for_each_entry_safe(queue, tmp, &ctx->submitqueues, node) {
+		if (queue->entity == &queue->_vm_bind_entity[0])
+			drm_sched_entity_flush(queue->entity, MAX_WAIT_SCHED_ENTITY_Q_EMPTY);
+		list_del(&queue->node);
+		msm_submitqueue_put(queue);
	}
+
+	if (!ctx->vm)
+		return;
+
+	msm_gem_vm_close(ctx->vm);
 }

 static struct drm_sched_entity *
@@ -160,8 +170,6 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
	struct msm_drm_private *priv = drm->dev_private;
	struct msm_gpu_submitqueue *queue;
	enum drm_sched_priority sched_prio;
-	extern int enable_preemption;
-	bool preemption_supported;
	unsigned ring_nr;
	int ret;

@@ -171,26 +179,53 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
	if (!priv->gpu)
		return -ENODEV;

-	preemption_supported = priv->gpu->nr_rings == 1 && enable_preemption != 0;
+	if (flags & MSM_SUBMITQUEUE_VM_BIND) {
+		unsigned sz;

-	if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported)
-		return -EINVAL;
+		/* Not allowed for kernel managed VMs (ie. kernel allocs VA) */
+		if (!msm_context_is_vmbind(ctx))
+			return -EINVAL;

-	ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio);
-	if (ret)
-		return ret;
+		if (prio)
+			return -EINVAL;
+
+		sz = struct_size(queue, _vm_bind_entity, 1);
+		queue = kzalloc(sz, GFP_KERNEL);
+	} else {
+		extern int enable_preemption;
+		bool preemption_supported =
+			priv->gpu->nr_rings == 1 && enable_preemption != 0;
+
+		if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported)
+			return -EINVAL;

-	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+		ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio);
+		if (ret)
+			return ret;
+
+		queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+	}

	if (!queue)
		return -ENOMEM;

	kref_init(&queue->ref);
	queue->flags = flags;
-	queue->ring_nr = ring_nr;

-	queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
-					 ring_nr, sched_prio);
+	if (flags & MSM_SUBMITQUEUE_VM_BIND) {
+		struct drm_gpu_scheduler *sched = &to_msm_vm(msm_context_vm(drm, ctx))->sched;
+
+		queue->entity = &queue->_vm_bind_entity[0];
+
+		drm_sched_entity_init(queue->entity, DRM_SCHED_PRIORITY_KERNEL,
+				      &sched, 1, NULL);
+	} else {
+		queue->ring_nr = ring_nr;
+
+		queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
+						 ring_nr, sched_prio);
+	}
+
	if (IS_ERR(queue->entity)) {
		ret = PTR_ERR(queue->entity);
		kfree(queue);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 2c2fc4b284d0..6d6cd1219926 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -385,12 +385,19 @@ struct drm_msm_gem_madvise {
 /*
 * Draw queues allow the user to set specific submission parameter. Command
 * submissions specify a specific submitqueue to use.  ID 0 is reserved for
- * backwards compatibility as a "default" submitqueue
+ * backwards compatibility as a "default" submitqueue.
+ *
+ * Because VM_BIND async updates happen on the CPU, they must run on a
+ * virtual queue created with the flag MSM_SUBMITQUEUE_VM_BIND.  If we had
+ * a way to do pgtable updates on the GPU, we could drop this restriction.
 */

 #define MSM_SUBMITQUEUE_ALLOW_PREEMPT	0x00000001
+#define MSM_SUBMITQUEUE_VM_BIND		0x00000002 /* virtual queue for VM_BIND ops */
+
 #define MSM_SUBMITQUEUE_FLAGS		( \
		MSM_SUBMITQUEUE_ALLOW_PREEMPT | \
+		MSM_SUBMITQUEUE_VM_BIND | \
		0)

 /*
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
	Rob Clark, Sean Paul, Konrad Dybcio, Dmitry Baryshkov,
	Abhinav Kumar, Jessica Zhang, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 32/42] drm/msm: Support IO_PGTABLE_QUIRK_NO_WARN_ON
Date: Sun, 29 Jun 2025 13:13:15 -0700
Message-ID: <20250629201530.25775-33-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

With user managed VMs and multiple queues, it is in theory possible to
trigger map/unmap errors.  These will (in a later patch) mark the VM as
unusable.
But we want to tell the io-pgtable helpers not to spam the log.  In
addition, in the unmap path we don't want to bail early from the unmap,
so that we don't leave dangling pages mapped.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c |  2 +-
 drivers/gpu/drm/msm/msm_iommu.c       | 23 ++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_mmu.h         |  2 +-
 3 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 62b5f294a2aa..5e115abe7692 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2280,7 +2280,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
 {
	struct msm_mmu *mmu;

-	mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu);
+	mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu, kernel_managed);

	if (IS_ERR(mmu))
		return ERR_CAST(mmu);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index a0c74ecdb11b..bd67431cb25f 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -94,15 +94,24 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 {
	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+	int ret = 0;

	while (size) {
-		size_t unmapped, pgsize, count;
+		size_t pgsize, count;
+		ssize_t unmapped;

		pgsize = calc_pgsize(pagetable, iova, iova, size, &count);

		unmapped = ops->unmap_pages(ops, iova, pgsize, count, NULL);
-		if (!unmapped)
-			break;
+		if (unmapped <= 0) {
+			ret = -EINVAL;
+			/*
+			 * Continue attempting to unmap the remainder of the
+			 * range, so we don't end up with some dangling
+			 * mapped pages
+			 */
+			unmapped = PAGE_SIZE;
+		}

		iova += unmapped;
		size -= unmapped;
@@ -110,7 +119,7 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,

	iommu_flush_iotlb_all(to_msm_iommu(pagetable->parent)->domain);

-	return (size == 0) ? 0 : -EINVAL;
+	return ret;
 }

 static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot)
@@ -324,7 +333,7 @@ static const struct iommu_flush_ops tlb_ops = {
 static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev,
				 unsigned long iova, int flags, void *arg);

-struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
+struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed)
 {
	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev);
	struct msm_iommu *iommu = to_msm_iommu(parent);
@@ -358,6 +367,10 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
	ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1;
	ttbr0_cfg.tlb = &tlb_ops;

+	if (!kernel_managed) {
+		ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN;
+	}
+
	pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
						    &ttbr0_cfg, pagetable);

diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index 9d61999f4d42..04dce0faaa3a 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -51,7 +51,7 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg,
	mmu->handler = handler;
 }

-struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent);
+struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed);

 int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr,
		int *asid);
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 33/42] drm/msm: Support pgtable preallocation
Date: Sun, 29 Jun 2025 13:13:16 -0700
Message-ID: <20250629201530.25775-34-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Introduce a mechanism to count the worst-case number of pages required
in a VM_BIND op. Note that previously we would have had to somehow
account for allocations in unmap, when splitting a block.
This behavior was removed in commit 33729a5fc0ca ("iommu/io-pgtable-arm:
Remove split on unmap behavior").

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.h   |   1 +
 drivers/gpu/drm/msm/msm_iommu.c | 191 +++++++++++++++++++++++++++++++-
 drivers/gpu/drm/msm/msm_mmu.h   |  34 ++++++
 3 files changed, 225 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index af637409be39..f369a30a247c 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -7,6 +7,7 @@
 #ifndef __MSM_GEM_H__
 #define __MSM_GEM_H__
 
+#include "msm_mmu.h"
 #include
 #include
 #include "drm/drm_exec.h"
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index bd67431cb25f..887c9023f8a2 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -6,6 +6,7 @@
 
 #include
 #include
+#include
 #include "msm_drv.h"
 #include "msm_mmu.h"
 
@@ -14,6 +15,8 @@ struct msm_iommu {
 	struct iommu_domain *domain;
 	atomic_t pagetables;
 	struct page *prr_page;
+
+	struct kmem_cache *pt_cache;
 };
 
 #define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -27,6 +30,9 @@ struct msm_iommu_pagetable {
 	unsigned long pgsize_bitmap;	/* Bitmap of page sizes in use */
 	phys_addr_t ttbr;
 	u32 asid;
+
+	/** @root_page_table: Stores the root page table pointer. */
+	void *root_page_table;
 };
 static struct msm_iommu_pagetable *to_pagetable(struct msm_mmu *mmu)
 {
@@ -282,7 +288,145 @@ msm_iommu_pagetable_walk(struct msm_mmu *mmu, unsigned long iova, uint64_t ptes[
 	return 0;
 }
 
+static void
+msm_iommu_pagetable_prealloc_count(struct msm_mmu *mmu, struct msm_mmu_prealloc *p,
+				   uint64_t iova, size_t len)
+{
+	u64 pt_count;
+
+	/*
+	 * L1, L2 and L3 page tables.
+	 *
+	 * We could optimize L3 allocation by iterating over the sgt and merging
+	 * 2M contiguous blocks, but it's simpler to over-provision and return
+	 * the pages if they're not used.
+	 *
+	 * The first level descriptor (v8 / v7-lpae page table format) encodes
+	 * 30 bits of address. The second level encodes 29. For the 3rd it is
+	 * 39.
+	 *
+	 * https://developer.arm.com/documentation/ddi0406/c/System-Level-Architecture/Virtual-Memory-System-Architecture--VMSA-/Long-descriptor-translation-table-format/Long-descriptor-translation-table-format-descriptors?lang=en#BEIHEFFB
+	 */
+	pt_count = ((ALIGN(iova + len, 1ull << 39) - ALIGN_DOWN(iova, 1ull << 39)) >> 39) +
+		   ((ALIGN(iova + len, 1ull << 30) - ALIGN_DOWN(iova, 1ull << 30)) >> 30) +
+		   ((ALIGN(iova + len, 1ull << 21) - ALIGN_DOWN(iova, 1ull << 21)) >> 21);
+
+	p->count += pt_count;
+}
+
+static struct kmem_cache *
+get_pt_cache(struct msm_mmu *mmu)
+{
+	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+	return to_msm_iommu(pagetable->parent)->pt_cache;
+}
+
+static int
+msm_iommu_pagetable_prealloc_allocate(struct msm_mmu *mmu, struct msm_mmu_prealloc *p)
+{
+	struct kmem_cache *pt_cache = get_pt_cache(mmu);
+	int ret;
+
+	p->pages = kvmalloc_array(p->count, sizeof(p->pages), GFP_KERNEL);
+	if (!p->pages)
+		return -ENOMEM;
+
+	ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, p->count, p->pages);
+	if (ret != p->count) {
+		p->count = ret;
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_prealloc *p)
+{
+	struct kmem_cache *pt_cache = get_pt_cache(mmu);
+	uint32_t remaining_pt_count = p->count - p->ptr;
+
+	kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
+	kvfree(p->pages);
+}
+
+/**
+ * alloc_pt() - Custom page table allocator
+ * @cookie: Cookie passed at page table allocation time.
+ * @size: Size of the page table. This size should be fixed,
+ * and determined at creation time based on the granule size.
+ * @gfp: GFP flags.
+ *
+ * We want a custom allocator so we can use a cache for page table
+ * allocations and amortize the cost of the over-reservation that's
+ * done to allow asynchronous VM operations.
+ *
+ * Return: non-NULL on success, NULL if the allocation failed for any
+ * reason.
+ */
+static void *
+msm_iommu_pagetable_alloc_pt(void *cookie, size_t size, gfp_t gfp)
+{
+	struct msm_iommu_pagetable *pagetable = cookie;
+	struct msm_mmu_prealloc *p = pagetable->base.prealloc;
+	void *page;
+
+	/* Allocation of the root page table happening during init. */
+	if (unlikely(!pagetable->root_page_table)) {
+		struct page *p;
+
+		p = alloc_pages_node(dev_to_node(pagetable->iommu_dev),
+				     gfp | __GFP_ZERO, get_order(size));
+		page = p ? page_address(p) : NULL;
+		pagetable->root_page_table = page;
+		return page;
+	}
+
+	if (WARN_ON(!p) || WARN_ON(p->ptr >= p->count))
+		return NULL;
+
+	page = p->pages[p->ptr++];
+	memset(page, 0, size);
+
+	/*
+	 * Page table entries don't use virtual addresses, which trips out
+	 * kmemleak. kmemleak_alloc_phys() might work, but physical addresses
+	 * are mixed with other fields, and I fear kmemleak won't detect that
+	 * either.
+	 *
+	 * Let's just ignore memory passed to the page-table driver for now.
+	 */
+	kmemleak_ignore(page);
+
+	return page;
+}
+
+
+/**
+ * free_pt() - Custom page table free function
+ * @cookie: Cookie passed at page table allocation time.
+ * @data: Page table to free.
+ * @size: Size of the page table. This size should be fixed,
+ * and determined at creation time based on the granule size.
+ */
+static void
+msm_iommu_pagetable_free_pt(void *cookie, void *data, size_t size)
+{
+	struct msm_iommu_pagetable *pagetable = cookie;
+
+	if (unlikely(pagetable->root_page_table == data)) {
+		free_pages((unsigned long)data, get_order(size));
+		pagetable->root_page_table = NULL;
+		return;
+	}
+
+	kmem_cache_free(get_pt_cache(&pagetable->base), data);
+}
+
 static const struct msm_mmu_funcs pagetable_funcs = {
+	.prealloc_count = msm_iommu_pagetable_prealloc_count,
+	.prealloc_allocate = msm_iommu_pagetable_prealloc_allocate,
+	.prealloc_cleanup = msm_iommu_pagetable_prealloc_cleanup,
 	.map = msm_iommu_pagetable_map,
 	.unmap = msm_iommu_pagetable_unmap,
 	.destroy = msm_iommu_pagetable_destroy,
@@ -333,6 +477,17 @@ static const struct iommu_flush_ops tlb_ops = {
 static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev,
		unsigned long iova, int flags, void *arg);
 
+static size_t get_tblsz(const struct io_pgtable_cfg *cfg)
+{
+	int pg_shift, bits_per_level;
+
+	pg_shift = __ffs(cfg->pgsize_bitmap);
+	/* arm_lpae_iopte is u64: */
+	bits_per_level = pg_shift - ilog2(sizeof(u64));
+
+	return sizeof(u64) << bits_per_level;
+}
+
 struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed)
 {
 	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev);
@@ -369,8 +524,34 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m
 
 	if (!kernel_managed) {
 		ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN;
+
+		/*
+		 * With userspace managed VM (aka VM_BIND), we need to pre-
+		 * allocate pages ahead of time for map/unmap operations,
+		 * handing them to io-pgtable via custom alloc/free ops as
+		 * needed:
+		 */
+		ttbr0_cfg.alloc = msm_iommu_pagetable_alloc_pt;
+		ttbr0_cfg.free = msm_iommu_pagetable_free_pt;
+
+		/*
+		 * Restrict to single page granules. Otherwise we may run
+		 * into a situation where userspace wants to unmap/remap
+		 * only a part of a larger block mapping, which is not
+		 * possible without unmapping the entire block. Which in
+		 * turn could cause faults if the GPU is accessing other
+		 * parts of the block mapping.
+		 *
+		 * Note that prior to commit 33729a5fc0ca ("iommu/io-pgtable-arm:
+		 * Remove split on unmap behavior") this was handled in
+		 * io-pgtable-arm. But this apparently does not work
+		 * correctly on SMMUv3.
+		 */
+		WARN_ON(!(ttbr0_cfg.pgsize_bitmap & PAGE_SIZE));
+		ttbr0_cfg.pgsize_bitmap = PAGE_SIZE;
 	}
 
+	pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
 	pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
 		&ttbr0_cfg, pagetable);
 
@@ -414,7 +595,6 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m
 	/* Needed later for TLB flush */
 	pagetable->parent = parent;
 	pagetable->tlb = ttbr1_cfg->tlb;
-	pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
 	pagetable->pgsize_bitmap = ttbr0_cfg.pgsize_bitmap;
 	pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr;
 
@@ -510,6 +690,7 @@ static void msm_iommu_destroy(struct msm_mmu *mmu)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	iommu_domain_free(iommu->domain);
+	kmem_cache_destroy(iommu->pt_cache);
 	kfree(iommu);
 }
 
@@ -583,6 +764,14 @@ struct msm_mmu *msm_iommu_gpu_new(struct device *dev, struct msm_gpu *gpu, unsig
 		return mmu;
 
 	iommu = to_msm_iommu(mmu);
+	if (adreno_smmu && adreno_smmu->cookie) {
+		const struct io_pgtable_cfg *cfg =
+			adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie);
+		size_t tblsz = get_tblsz(cfg);
+
+		iommu->pt_cache =
+			kmem_cache_create("msm-mmu-pt", tblsz, tblsz, 0, NULL);
+	}
 	iommu_set_fault_handler(iommu->domain, msm_gpu_fault_handler, iommu);
 
 	/* Enable stall on iommu fault: */
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index 04dce0faaa3a..8915662fbd4d 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -9,8 +9,16 @@
 
 #include
 
+struct msm_mmu_prealloc;
+struct msm_mmu;
+struct msm_gpu;
+
 struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
+	void (*prealloc_count)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p,
+			       uint64_t iova, size_t len);
+	int (*prealloc_allocate)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p);
+	void (*prealloc_cleanup)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
 		   size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
@@ -24,12 +32,38 @@ enum msm_mmu_type {
 	MSM_MMU_IOMMU_PAGETABLE,
 };
 
+/**
+ * struct msm_mmu_prealloc - Tracking for pre-allocated pages for MMU updates.
+ */
+struct msm_mmu_prealloc {
+	/** @count: Number of pages reserved. */
+	uint32_t count;
+	/** @ptr: Index of first unused page in @pages */
+	uint32_t ptr;
+	/**
+	 * @pages: Array of pages preallocated for MMU table updates.
+	 *
+	 * After a VM operation, there might be free pages remaining in this
+	 * array (since the amount allocated is a worst-case). These are
+	 * returned to the pt_cache at mmu->prealloc_cleanup().
+	 */
+	void **pages;
+};
+
 struct msm_mmu {
 	const struct msm_mmu_funcs *funcs;
 	struct device *dev;
 	int (*handler)(void *arg, unsigned long iova, int flags, void *data);
 	void *arg;
 	enum msm_mmu_type type;
+
+	/**
+	 * @prealloc: pre-allocated pages for pgtable
+	 *
+	 * Set while a VM_BIND job is running, serialized under
+	 * msm_gem_vm::mmu_lock.
+	 */
+	struct msm_mmu_prealloc *prealloc;
 };
 
 static inline void msm_mmu_init(struct msm_mmu *mmu, struct device *dev,
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 34/42] drm/msm: Split out map/unmap ops
Date: Sun, 29 Jun 2025 13:13:17 -0700
Message-ID: <20250629201530.25775-35-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

With async VM_BIND, pgtable updates are deferred: a list of map/unmap
ops is generated synchronously, but the actual pgtable changes run
later. To support that, split out op handlers and change the existing
non-VM_BIND paths to use them.

Note in particular that the vma itself may already be destroyed/freed
by the time an UNMAP op runs (or even a MAP op, if there is a later
queued UNMAP).
For this reason, the op handlers cannot reference the vma pointer.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem_vma.c | 63 +++++++++++++++++++++++++++----
 1 file changed, 56 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index cf37abb98235..76b79c122182 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -8,6 +8,34 @@
 #include "msm_gem.h"
 #include "msm_mmu.h"
 
+#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
+
+/**
+ * struct msm_vm_map_op - create new pgtable mapping
+ */
+struct msm_vm_map_op {
+	/** @iova: start address for mapping */
+	uint64_t iova;
+	/** @range: size of the region to map */
+	uint64_t range;
+	/** @offset: offset into @sgt to map */
+	uint64_t offset;
+	/** @sgt: pages to map, or NULL for a PRR mapping */
+	struct sg_table *sgt;
+	/** @prot: the mapping protection flags */
+	int prot;
+};
+
+/**
+ * struct msm_vm_unmap_op - unmap a range of pages from pgtable
+ */
+struct msm_vm_unmap_op {
+	/** @iova: start address for unmap */
+	uint64_t iova;
+	/** @range: size of region to unmap */
+	uint64_t range;
+};
+
 static void
 msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 {
@@ -21,18 +49,36 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 	kfree(vm);
 }
 
+static void
+vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
+{
+	vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
+
+	vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
+}
+
+static int
+vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
+{
+	vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
+
+	return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset,
+				   op->range, op->prot);
+}
+
 /* Actually unmap memory for the vma */
 void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
-	unsigned size = vma->va.range;
 
 	/* Don't do anything if the memory isn't mapped */
 	if (!msm_vma->mapped)
 		return;
 
-	vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size);
+	vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){
+			.iova = vma->va.addr,
+			.range = vma->va.range,
+		});
 
 	msm_vma->mapped = false;
 }
@@ -42,7 +88,6 @@ int
 msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 	int ret;
 
 	if (GEM_WARN_ON(!vma->va.addr))
@@ -62,9 +107,13 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
-				  vma->gem.offset, vma->va.range,
-				  prot);
+	ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){
+			.iova = vma->va.addr,
+			.range = vma->va.range,
+			.offset = vma->gem.offset,
+			.sgt = sgt,
+			.prot = prot,
+		});
 	if (ret) {
 		msm_vma->mapped = false;
 	}
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal,
 Christian König, linux-kernel@vger.kernel.org (open list),
 linux-media@vger.kernel.org (open list: DMA BUFFER SHARING FRAMEWORK),
 linaro-mm-sig@lists.linaro.org (moderated list: DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v9 35/42] drm/msm: Add VM_BIND ioctl
Date: Sun, 29 Jun 2025 13:13:18 -0700
Message-ID: <20250629201530.25775-36-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Add a VM_BIND ioctl for binding/unbinding buffers into a VM. This is
only supported if userspace has opted in to MSM_PARAM_EN_VM_BIND.
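[Editor's illustration, not part of the patch: VM_BIND ops rely on the worst-case
page-table accounting introduced earlier in this series (patch 33/42,
msm_iommu_pagetable_prealloc_count). The same arithmetic can be sketched in
Python; `worst_case_pt_pages` is our own name for the helper.]

```python
def worst_case_pt_pages(iova: int, length: int) -> int:
    """Worst-case number of page-table pages a [iova, iova+length) range
    can touch: one table per entry-span crossed at each level, mirroring
    the ALIGN/ALIGN_DOWN arithmetic in the kernel patch."""
    def tables_spanned(shift: int) -> int:
        gran = 1 << shift
        start = (iova // gran) * gran                # ALIGN_DOWN(iova, gran)
        end = -(-(iova + length) // gran) * gran     # ALIGN(iova + length, gran)
        return (end - start) >> shift
    # 4K granule, LPAE long-descriptor format: table-entry spans at
    # bits 39, 30 (1G blocks) and 21 (2M blocks).
    return tables_spanned(39) + tables_spanned(30) + tables_spanned(21)

# A single 4K page needs at most one table at each level:
print(worst_case_pt_pages(0x1000, 0x1000))            # 3
# A range crossing a 2M boundary needs two last-level tables:
print(worst_case_pt_pages((1 << 21) - 0x1000, 0x2000))  # 4
```

Over-counting is deliberate: unused pages are simply returned to the cache by prealloc_cleanup(), which is cheaper than computing an exact count per op.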
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_drv.c        |    1 +
 drivers/gpu/drm/msm/msm_drv.h        |    4 +-
 drivers/gpu/drm/msm/msm_gem.c        |   40 +-
 drivers/gpu/drm/msm/msm_gem.h        |    4 +
 drivers/gpu/drm/msm/msm_gem_submit.c |   22 +-
 drivers/gpu/drm/msm/msm_gem_vma.c    | 1092 +++++++++++++++++++++++++-
 include/uapi/drm/msm_drm.h           |   74 +-
 7 files changed, 1204 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c1627cae6ae6..7881afa3a75a 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -795,6 +795,7 @@ static const struct drm_ioctl_desc msm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_NEW, msm_ioctl_submitqueue_new, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_CLOSE, msm_ioctl_submitqueue_close, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(MSM_VM_BIND, msm_ioctl_vm_bind, DRM_RENDER_ALLOW),
 };
 
 static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file)
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 9b1ccb2b18f6..200c3135bbf9 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -255,7 +255,9 @@ struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev);
 bool msm_use_mmu(struct drm_device *dev);
 
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
-			struct drm_file *file);
+			 struct drm_file *file);
+int msm_ioctl_vm_bind(struct drm_device *dev, void *data,
+		      struct drm_file *file);
 
 #ifdef CONFIG_DEBUG_FS
 unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan);
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b688d397cc47..77fdf53d3e33 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -251,8 +251,7 @@ static void put_pages(struct drm_gem_object *obj)
 	}
 }
 
-static struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj,
-					      unsigned madv)
+struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
@@ -1052,18 +1051,37 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 	/*
 	 * We need to lock any VMs the object is still attached to, but not
 	 * the object itself (see explaination in msm_gem_assert_locked()),
-	 * so just open-code this special case:
+	 * so just open-code this special case.
+	 *
+	 * Note that we skip the dance if we aren't attached to any VM. This
+	 * is load bearing. The driver needs to support two usage models:
+	 *
+	 * 1. Legacy kernel managed VM: Userspace expects the VMA's to be
+	 *    implicitly torn down when the object is freed, the VMA's do
+	 *    not hold a hard reference to the BO.
+	 *
+	 * 2. VM_BIND, userspace managed VM: The VMA holds a reference to the
+	 *    BO. This can be dropped when the VM is closed and its associated
+	 *    VMAs are torn down. (See msm_gem_vm_close()).
+	 *
+	 * In the latter case the last reference to a BO can be dropped while
+	 * we already have the VM locked. It would have already been removed
+	 * from the gpuva list, but lockdep doesn't know that. Or understand
+	 * the differences between the two usage models.
	 */
-	drm_exec_init(&exec, 0, 0);
-	drm_exec_until_all_locked (&exec) {
-		struct drm_gpuvm_bo *vm_bo;
-		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
-			drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm));
-			drm_exec_retry_on_contention(&exec);
+	if (!list_empty(&obj->gpuva.list)) {
+		drm_exec_init(&exec, 0, 0);
+		drm_exec_until_all_locked (&exec) {
+			struct drm_gpuvm_bo *vm_bo;
+			drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+				drm_exec_lock_obj(&exec,
+						  drm_gpuvm_resv_obj(vm_bo->vm));
+				drm_exec_retry_on_contention(&exec);
+			}
 		}
+		put_iova_spaces(obj, NULL, true);
+		drm_exec_fini(&exec); /* drop locks */
 	}
-	put_iova_spaces(obj, NULL, true);
-	drm_exec_fini(&exec); /* drop locks */
 
 	if (drm_gem_is_imported(obj)) {
 		GEM_WARN_ON(msm_obj->vaddr);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index f369a30a247c..ee464e315643 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -73,6 +73,9 @@ struct msm_gem_vm {
 	/** @mmu: The mmu object which manages the pgtables */
 	struct msm_mmu *mmu;
 
+	/** @mmu_lock: Protects access to the mmu */
+	struct mutex mmu_lock;
+
 	/**
 	 * @pid: For address spaces associated with a specific process, this
 	 * will be non-NULL:
@@ -205,6 +208,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
 			     uint64_t *iova);
 void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm);
 void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
+struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv);
 struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
 int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index e2174b7d0e40..283e807c7874 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -193,6 +193,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
 static int submit_lookup_cmds(struct msm_gem_submit *submit,
 		struct drm_msm_gem_submit *args, struct drm_file *file)
 {
+	struct msm_context *ctx = file->driver_priv;
 	unsigned i;
 	size_t sz;
 	int ret = 0;
@@ -224,6 +225,20 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 			goto out;
 		}
 
+		if (msm_context_is_vmbind(ctx)) {
+			if (submit_cmd.nr_relocs) {
+				ret = SUBMIT_ERROR(EINVAL, submit, "nr_relocs must be zero");
+				goto out;
+			}
+
+			if (submit_cmd.submit_idx || submit_cmd.submit_offset) {
+				ret = SUBMIT_ERROR(EINVAL, submit, "submit_idx/offset must be zero");
+				goto out;
+			}
+
+			submit->cmd[i].iova = submit_cmd.iova;
+		}
+
 		submit->cmd[i].type = submit_cmd.type;
 		submit->cmd[i].size = submit_cmd.size / 4;
 		submit->cmd[i].offset = submit_cmd.submit_offset / 4;
@@ -532,6 +547,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	struct msm_syncobj_post_dep *post_deps = NULL;
 	struct drm_syncobj **syncobjs_to_reset = NULL;
 	struct sync_file *sync_file = NULL;
+	unsigned cmds_to_parse;
 	int out_fence_fd = -1;
 	unsigned i;
 	int ret;
@@ -655,7 +671,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (ret)
 		goto out;
 
-	for (i = 0; i < args->nr_cmds; i++) {
+	cmds_to_parse = msm_context_is_vmbind(ctx) ? 0 : args->nr_cmds;
+
+	for (i = 0; i < cmds_to_parse; i++) {
 		struct drm_gem_object *obj;
 		uint64_t iova;
 
@@ -686,7 +704,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		goto out;
 	}
 
-	submit->nr_cmds = i;
+	submit->nr_cmds = args->nr_cmds;
 
 	idr_preload(GFP_KERNEL);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 76b79c122182..6ec92b7771f5 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -4,9 +4,16 @@
  * Author: Rob Clark
  */
 
+#include "drm/drm_file.h"
+#include "drm/msm_drm.h"
+#include "linux/file.h"
+#include "linux/sync_file.h"
+
 #include "msm_drv.h"
 #include "msm_gem.h"
+#include "msm_gpu.h"
 #include "msm_mmu.h"
+#include "msm_syncobj.h"
 
 #define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
 
@@ -36,6 +43,97 @@ struct msm_vm_unmap_op {
 	uint64_t range;
 };
 
+/**
+ * struct msm_vm_op - A MAP or UNMAP operation
+ */
+struct msm_vm_op {
+	/** @op: The operation type */
+	enum {
+		MSM_VM_OP_MAP = 1,
+		MSM_VM_OP_UNMAP,
+	} op;
+	union {
+		/** @map: Parameters used if op == MSM_VM_OP_MAP */
+		struct msm_vm_map_op map;
+		/** @unmap: Parameters used if op == MSM_VM_OP_UNMAP */
+		struct msm_vm_unmap_op unmap;
+	};
+	/** @node: list head in msm_vm_bind_job::vm_ops */
+	struct list_head node;
+
+	/**
+	 * @obj: backing object for pages to be mapped/unmapped
+	 *
+	 * Async unmap ops, in particular, must hold a reference to the
+	 * original GEM object backing the mapping that will be unmapped.
+	 * But the same can be required in the map path, for example if
+	 * there is not a corresponding unmap op, such as process exit.
+	 *
+	 * This ensures that the pages backing the mapping are not freed
+	 * before the mapping is torn down.
+	 */
+	struct drm_gem_object *obj;
+};
+
+/**
+ * struct msm_vm_bind_job - Tracking for a VM_BIND ioctl
+ *
+ * A table of userspace requested VM updates (MSM_VM_BIND_OP_UNMAP/MAP/MAP_NULL)
+ * gets applied to the vm, generating a list of VM ops (MSM_VM_OP_MAP/UNMAP)
+ * which are applied to the pgtables asynchronously.  For example a userspace
+ * requested MSM_VM_BIND_OP_MAP could end up generating both an MSM_VM_OP_UNMAP
+ * to unmap an existing mapping, and a MSM_VM_OP_MAP to apply the new mapping.
+ */
+struct msm_vm_bind_job {
+	/** @base: base class for drm_sched jobs */
+	struct drm_sched_job base;
+	/** @vm: The VM being operated on */
+	struct drm_gpuvm *vm;
+	/** @fence: The fence that is signaled when job completes */
+	struct dma_fence *fence;
+	/** @queue: The queue that the job runs on */
+	struct msm_gpu_submitqueue *queue;
+	/** @prealloc: Tracking for pre-allocated MMU pgtable pages */
+	struct msm_mmu_prealloc prealloc;
+	/** @vm_ops: a list of struct msm_vm_op */
+	struct list_head vm_ops;
+	/** @bos_pinned: are the GEM objects being bound pinned? */
+	bool bos_pinned;
+	/** @nr_ops: the number of userspace requested ops */
+	unsigned int nr_ops;
+	/**
+	 * @ops: the userspace requested ops
+	 *
+	 * The userspace requested ops are copied/parsed and validated
+	 * before we start applying the updates to try to do as much up-
+	 * front error checking as possible, to avoid the VM being in an
+	 * undefined state due to partially executed VM_BIND.
+	 *
+	 * This table also serves to hold a reference to the backing GEM
+	 * objects.
+	 */
+	struct msm_vm_bind_op {
+		uint32_t op;
+		uint32_t flags;
+		union {
+			struct drm_gem_object *obj;
+			uint32_t handle;
+		};
+		uint64_t obj_offset;
+		uint64_t iova;
+		uint64_t range;
+	} ops[];
+};
+
+#define job_foreach_bo(obj, _job) \
+	for (unsigned i = 0; i < (_job)->nr_ops; i++) \
+		if ((obj = (_job)->ops[i].obj))
+
+static inline struct msm_vm_bind_job *to_msm_vm_bind_job(struct drm_sched_job *job)
+{
+	return container_of(job, struct msm_vm_bind_job, base);
+}
+
 static void
 msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 {
@@ -52,6 +150,9 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 static void
 vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
 {
+	if (!vm->managed)
+		lockdep_assert_held(&vm->mmu_lock);
+
 	vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
 
 	vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
@@ -60,6 +161,9 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
 static int
 vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
 {
+	if (!vm->managed)
+		lockdep_assert_held(&vm->mmu_lock);
+
 	vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
 
 	return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset,
@@ -69,17 +173,29 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
 /* Actually unmap memory for the vma */
 void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
+	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 
 	/* Don't do anything if the memory isn't mapped */
 	if (!msm_vma->mapped)
 		return;
 
-	vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){
+	/*
+	 * The mmu_lock is only needed when preallocation is used.
+	 * But in that case we don't need to worry about recursion into
+	 * the shrinker.
+	 */
+	if (!vm->managed)
+		mutex_lock(&vm->mmu_lock);
+
+	vm_unmap_op(vm, &(struct msm_vm_unmap_op){
 		.iova = vma->va.addr,
 		.range = vma->va.range,
 	});
 
+	if (!vm->managed)
+		mutex_unlock(&vm->mmu_lock);
+
 	msm_vma->mapped = false;
 }
 
@@ -87,6 +203,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma)
 int
 msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
+	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	int ret;
 
@@ -98,6 +215,14 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 
 	msm_vma->mapped = true;
 
+	/*
+	 * The mmu_lock is only needed when preallocation is used. But
+	 * in that case we don't need to worry about recursion into
+	 * the shrinker.
+	 */
+	if (!vm->managed)
+		mutex_lock(&vm->mmu_lock);
+
 	/*
 	 * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
 	 * a lock across map/unmap which is also used in the job_run()
@@ -107,16 +232,19 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
	 */
-	ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){
+	ret = vm_map_op(vm, &(struct msm_vm_map_op){
 		.iova = vma->va.addr,
 		.range = vma->va.range,
 		.offset = vma->gem.offset,
 		.sgt = sgt,
 		.prot = prot,
 	});
-	if (ret) {
+
+	if (!vm->managed)
+		mutex_unlock(&vm->mmu_lock);
+
+	if (ret)
 		msm_vma->mapped = false;
-	}
 
 	return ret;
 }
@@ -131,6 +259,9 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 
 	drm_gpuvm_resv_assert_held(&vm->base);
 
+	if (vma->gem.obj)
+		msm_gem_assert_locked(vma->gem.obj);
+
 	if (vma->va.addr && vm->managed)
 		drm_mm_remove_node(&msm_vma->node);
 
@@ -158,6 +289,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 
 	if (vm->managed) {
 		BUG_ON(offset != 0);
+		BUG_ON(!obj);  /* NULL mappings not valid for kernel managed VM */
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						  obj->size, PAGE_SIZE, 0,
 						  range_start, range_end, 0);
@@ -169,7 +301,8 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 		range_end = range_start + obj->size;
 	}
 
-	GEM_WARN_ON((range_end - range_start) > obj->size);
+	if (obj)
+		GEM_WARN_ON((range_end - range_start) > obj->size);
 
 	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
@@ -178,6 +311,9 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 	if (ret)
 		goto err_free_range;
 
+	if (!obj)
+		return &vma->base;
+
 	vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
 	if (IS_ERR(vm_bo)) {
 		ret = PTR_ERR(vm_bo);
@@ -200,11 +336,297 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 	return ERR_PTR(ret);
 }
 
+static int
+msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+{
+	struct drm_gem_object *obj = vm_bo->obj;
+	struct drm_gpuva *vma;
+	int ret;
+
+	vm_dbg("validate: %p", obj);
+
+	msm_gem_assert_locked(obj);
+
+	drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+		ret = msm_gem_pin_vma_locked(obj, vma);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+struct op_arg {
+	unsigned flags;
+	struct msm_vm_bind_job *job;
+};
+
+static void
+vm_op_enqueue(struct op_arg *arg, struct msm_vm_op _op)
+{
+	struct msm_vm_op *op = kmalloc(sizeof(*op), GFP_KERNEL);
+	*op = _op;
+	list_add_tail(&op->node, &arg->job->vm_ops);
+
+	if (op->obj)
+		drm_gem_object_get(op->obj);
+}
+
+static struct drm_gpuva *
+vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op)
+{
+	return msm_gem_vma_new(arg->job->vm, op->gem.obj, op->gem.offset,
+			       op->va.addr, op->va.addr + op->va.range);
+}
+
+static int
+msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg)
+{
+	struct drm_gem_object *obj = op->map.gem.obj;
+	struct drm_gpuva *vma;
+	struct sg_table *sgt;
+	unsigned prot;
+
+	vma = vma_from_op(arg, &op->map);
+	if (WARN_ON(IS_ERR(vma)))
+		return PTR_ERR(vma);
+
+	vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj,
+	       vma->va.addr, vma->va.range);
+
+	vma->flags = ((struct op_arg *)arg)->flags;
+
+	if (obj) {
+		sgt = to_msm_bo(obj)->sgt;
+		prot = msm_gem_prot(obj);
+	} else {
+		sgt = NULL;
+		prot = IOMMU_READ | IOMMU_WRITE;
+	}
+
+	vm_op_enqueue(arg, (struct msm_vm_op){
+		.op = MSM_VM_OP_MAP,
+		.map = {
+			.sgt = sgt,
+			.iova = vma->va.addr,
+			.range = vma->va.range,
+			.offset = vma->gem.offset,
+			.prot = prot,
+		},
+		.obj = vma->gem.obj,
+	});
+
+	to_msm_vma(vma)->mapped = true;
+
+	return 0;
+}
+
+static int
+msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg)
+{
+	struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job;
+	struct drm_gpuvm *vm = job->vm;
+	struct drm_gpuva *orig_vma = op->remap.unmap->va;
+	struct drm_gpuva *prev_vma = NULL, *next_vma = NULL;
+	struct drm_gpuvm_bo *vm_bo = orig_vma->vm_bo;
+	bool mapped = to_msm_vma(orig_vma)->mapped;
+	unsigned flags;
+
+	vm_dbg("orig_vma: %p:%p:%p: %016llx %016llx", vm, orig_vma,
+	       orig_vma->gem.obj, orig_vma->va.addr, orig_vma->va.range);
+
+	if (mapped) {
+		uint64_t unmap_start, unmap_range;
+
+		drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
+
+		vm_op_enqueue(arg, (struct msm_vm_op){
+			.op = MSM_VM_OP_UNMAP,
+			.unmap = {
+				.iova = unmap_start,
+				.range = unmap_range,
+			},
+			.obj = orig_vma->gem.obj,
+		});
+
+		/*
+		 * Part of this GEM obj is still mapped, but we're going to kill the
+		 * existing VMA and replace it with one or two new ones (ie. two if
+		 * the unmapped range is in the middle of the existing (unmap) VMA).
+		 * So just set the state to unmapped:
+		 */
+		to_msm_vma(orig_vma)->mapped = false;
+	}
+
+	/*
+	 * Hold a ref to the vm_bo between the msm_gem_vma_close() and the
+	 * creation of the new prev/next vma's, in case the vm_bo is tracked
+	 * in the VM's evict list:
+	 */
+	if (vm_bo)
+		drm_gpuvm_bo_get(vm_bo);
+
+	/*
+	 * The prev_vma and/or next_vma are replacing the unmapped vma, and
+	 * therefore should preserve its flags:
+	 */
+	flags = orig_vma->flags;
+
+	msm_gem_vma_close(orig_vma);
+
+	if (op->remap.prev) {
+		prev_vma = vma_from_op(arg, op->remap.prev);
+		if (WARN_ON(IS_ERR(prev_vma)))
+			return PTR_ERR(prev_vma);
+
+		vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.addr, prev_vma->va.range);
+		to_msm_vma(prev_vma)->mapped = mapped;
+		prev_vma->flags = flags;
+	}
+
+	if (op->remap.next) {
+		next_vma = vma_from_op(arg, op->remap.next);
+		if (WARN_ON(IS_ERR(next_vma)))
+			return PTR_ERR(next_vma);
+
+		vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.addr, next_vma->va.range);
+		to_msm_vma(next_vma)->mapped = mapped;
+		next_vma->flags = flags;
+	}
+
+	if (!mapped)
+		drm_gpuvm_bo_evict(vm_bo, true);
+
+	/* Drop the previous ref: */
+	drm_gpuvm_bo_put(vm_bo);
+
+	return 0;
+}
+
+static int
+msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg)
+{
+	struct drm_gpuva *vma = op->unmap.va;
+	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+
+	vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj,
+	       vma->va.addr, vma->va.range);
+
+	if (!msm_vma->mapped)
+		goto out_close;
+
+	vm_op_enqueue(arg, (struct msm_vm_op){
+		.op = MSM_VM_OP_UNMAP,
+		.unmap = {
+			.iova = vma->va.addr,
+			.range = vma->va.range,
+		},
+		.obj = vma->gem.obj,
+	});
+
+	msm_vma->mapped = false;
+
+out_close:
+	msm_gem_vma_close(vma);
+
+	return 0;
+}
+
 static const struct drm_gpuvm_ops msm_gpuvm_ops = {
 	.vm_free = msm_gem_vm_free,
+	.vm_bo_validate = msm_gem_vm_bo_validate,
+	.sm_step_map = msm_gem_vm_sm_step_map,
+	.sm_step_remap = msm_gem_vm_sm_step_remap,
+	.sm_step_unmap = msm_gem_vm_sm_step_unmap,
 };
 
+static struct dma_fence *
+msm_vma_job_run(struct drm_sched_job *_job)
+{
+	struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job);
+	struct msm_gem_vm *vm = to_msm_vm(job->vm);
+	struct drm_gem_object *obj;
+	int ret = vm->unusable ? -EINVAL : 0;
+
+	vm_dbg("");
+
+	mutex_lock(&vm->mmu_lock);
+	vm->mmu->prealloc = &job->prealloc;
+
+	while (!list_empty(&job->vm_ops)) {
+		struct msm_vm_op *op =
+			list_first_entry(&job->vm_ops, struct msm_vm_op, node);
+
+		switch (op->op) {
+		case MSM_VM_OP_MAP:
+			/*
+			 * On error, stop trying to map new things.. but we
+			 * still want to process the unmaps (or in particular,
+			 * the drm_gem_object_put()s)
+			 */
+			if (!ret)
+				ret = vm_map_op(vm, &op->map);
+			break;
+		case MSM_VM_OP_UNMAP:
+			vm_unmap_op(vm, &op->unmap);
+			break;
+		}
+		drm_gem_object_put(op->obj);
+		list_del(&op->node);
+		kfree(op);
+	}
+
+	vm->mmu->prealloc = NULL;
+	mutex_unlock(&vm->mmu_lock);
+
+	/*
+	 * We failed to perform at least _some_ of the pgtable updates, so
+	 * now the VM is in an undefined state.  Game over!
+	 */
+	if (ret)
+		vm->unusable = true;
+
+	job_foreach_bo (obj, job) {
+		msm_gem_lock(obj);
+		msm_gem_unpin_locked(obj);
+		msm_gem_unlock(obj);
+	}
+
+	/* VM_BIND ops are synchronous, so no fence to wait on: */
+	return NULL;
+}
+
+static void
+msm_vma_job_free(struct drm_sched_job *_job)
+{
+	struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job);
+	struct msm_gem_vm *vm = to_msm_vm(job->vm);
+	struct drm_gem_object *obj;
+
+	vm->mmu->funcs->prealloc_cleanup(vm->mmu, &job->prealloc);
+
+	drm_sched_job_cleanup(_job);
+
+	job_foreach_bo (obj, job)
+		drm_gem_object_put(obj);
+
+	msm_submitqueue_put(job->queue);
+	dma_fence_put(job->fence);
+
+	/* In error paths, we could have unexecuted ops: */
+	while (!list_empty(&job->vm_ops)) {
+		struct msm_vm_op *op =
+			list_first_entry(&job->vm_ops, struct msm_vm_op, node);
+		list_del(&op->node);
+		kfree(op);
+	}
+
+	kfree(job);
+}
+
 static const struct drm_sched_backend_ops msm_vm_bind_ops = {
+	.run_job = msm_vma_job_run,
+	.free_job = msm_vma_job_free
 };
 
 /**
@@ -268,6 +690,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 	drm_gem_object_put(dummy_gem);
 
 	vm->mmu = mmu;
+	mutex_init(&vm->mmu_lock);
 	vm->managed = managed;
 
 	drm_mm_init(&vm->mm, va_start, va_size);
@@ -280,7 +703,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 err_free_vm:
 	kfree(vm);
 	return ERR_PTR(ret);
-
 }
 
 /**
@@ -296,6 +718,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm)
 {
 	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
 	struct drm_gpuva *vma, *tmp;
+	struct drm_exec exec;
 
 	/*
 	 * For kernel managed VMs, the VMAs are torn down when the handle is
@@ -312,22 +735,655 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm)
 	drm_sched_fini(&vm->sched);
 
 	/* Tear down any remaining mappings: */
-	dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL);
-	drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) {
-		struct drm_gem_object *obj = vma->gem.obj;
+	drm_exec_init(&exec, 0, 2);
+	drm_exec_until_all_locked (&exec) {
+		drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(gpuvm));
+		drm_exec_retry_on_contention(&exec);
 
-		if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) {
-			drm_gem_object_get(obj);
-			msm_gem_lock(obj);
+		drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) {
+			struct drm_gem_object *obj = vma->gem.obj;
+
+			/*
+			 * MSM_BO_NO_SHARE objects share the same resv as the
+			 * VM, in which case the obj is already locked:
+			 */
+			if (obj && (obj->resv == drm_gpuvm_resv(gpuvm)))
+				obj = NULL;
+
+			if (obj) {
+				drm_exec_lock_obj(&exec, obj);
+				drm_exec_retry_on_contention(&exec);
+			}
+
+			msm_gem_vma_unmap(vma);
+			msm_gem_vma_close(vma);
+
+			if (obj) {
+				drm_exec_unlock_obj(&exec, obj);
+			}
 		}
+	}
+	drm_exec_fini(&exec);
+}
+
+static struct msm_vm_bind_job *
+vm_bind_job_create(struct drm_device *dev, struct msm_gpu *gpu,
+		   struct msm_gpu_submitqueue *queue, uint32_t nr_ops)
+{
+	struct msm_vm_bind_job *job;
+	uint64_t sz;
+	int ret;
+
+	sz = struct_size(job, ops, nr_ops);
+
+	if (sz > SIZE_MAX)
+		return ERR_PTR(-ENOMEM);
+
+	job = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN);
+	if (!job)
+		return ERR_PTR(-ENOMEM);
+
+	ret = drm_sched_job_init(&job->base, queue->entity, 1, queue);
+	if (ret) {
+		kfree(job);
+		return ERR_PTR(ret);
+	}
 
-		msm_gem_vma_unmap(vma);
-		msm_gem_vma_close(vma);
+	job->vm = msm_context_vm(dev, queue->ctx);
+	job->queue = queue;
+	INIT_LIST_HEAD(&job->vm_ops);
 
-		if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) {
-			msm_gem_unlock(obj);
-			drm_gem_object_put(obj);
+	return job;
+}
+
+static bool invalid_alignment(uint64_t addr)
+{
+	/*
+	 * Technically this is about GPU alignment, not CPU alignment. But
+	 * I've not seen any qcom SoC where the SMMU does not support the
+	 * CPU's smallest page size.
+	 */
+	return !PAGE_ALIGNED(addr);
+}
+
+static int
+lookup_op(struct msm_vm_bind_job *job, const struct drm_msm_vm_bind_op *op)
+{
+	struct drm_device *dev = job->vm->drm;
+	int i = job->nr_ops++;
+	int ret = 0;
+
+	job->ops[i].op = op->op;
+	job->ops[i].handle = op->handle;
+	job->ops[i].obj_offset = op->obj_offset;
+	job->ops[i].iova = op->iova;
+	job->ops[i].range = op->range;
+	job->ops[i].flags = op->flags;
+
+	if (op->flags & ~MSM_VM_BIND_OP_FLAGS)
+		ret = UERR(EINVAL, dev, "invalid flags: %x\n", op->flags);
+
+	if (invalid_alignment(op->iova))
+		ret = UERR(EINVAL, dev, "invalid address: %016llx\n", op->iova);
+
+	if (invalid_alignment(op->obj_offset))
+		ret = UERR(EINVAL, dev, "invalid bo_offset: %016llx\n", op->obj_offset);
+
+	if (invalid_alignment(op->range))
+		ret = UERR(EINVAL, dev, "invalid range: %016llx\n", op->range);
+
+	if (!drm_gpuvm_range_valid(job->vm, op->iova, op->range))
+		ret = UERR(EINVAL, dev, "invalid range: %016llx, %016llx\n", op->iova, op->range);
+
+	/*
+	 * MAP must specify a valid handle.  But the handle MBZ for
+	 * UNMAP or MAP_NULL.
+	 */
+	if (op->op == MSM_VM_BIND_OP_MAP) {
+		if (!op->handle)
+			ret = UERR(EINVAL, dev, "invalid handle\n");
+	} else if (op->handle) {
+		ret = UERR(EINVAL, dev, "handle must be zero\n");
+	}
+
+	switch (op->op) {
+	case MSM_VM_BIND_OP_MAP:
+	case MSM_VM_BIND_OP_MAP_NULL:
+	case MSM_VM_BIND_OP_UNMAP:
+		break;
+	default:
+		ret = UERR(EINVAL, dev, "invalid op: %u\n", op->op);
+		break;
+	}
+
+	return ret;
+}
+
+/*
+ * ioctl parsing, parameter validation, and GEM handle lookup
+ */
+static int
+vm_bind_job_lookup_ops(struct msm_vm_bind_job *job, struct drm_msm_vm_bind *args,
+		       struct drm_file *file, int *nr_bos)
+{
+	struct drm_device *dev = job->vm->drm;
+	int ret = 0;
+	int cnt = 0;
+
+	if (args->nr_ops == 1) {
+		/* Single op case, the op is inlined: */
+		ret = lookup_op(job, &args->op);
+	} else {
+		for (unsigned i = 0; i < args->nr_ops; i++) {
+			struct drm_msm_vm_bind_op op;
+			void __user *userptr =
+				u64_to_user_ptr(args->ops + (i * sizeof(op)));
+
+			/* make sure we don't have garbage flags, in case we hit
+			 * error path before flags is initialized:
+			 */
+			job->ops[i].flags = 0;
+
+			if (copy_from_user(&op, userptr, sizeof(op))) {
+				ret = -EFAULT;
+				break;
+			}
+
+			ret = lookup_op(job, &op);
+			if (ret)
+				break;
+		}
+	}
+
+	if (ret) {
+		job->nr_ops = 0;
+		goto out;
+	}
+
+	spin_lock(&file->table_lock);
+
+	for (unsigned i = 0; i < args->nr_ops; i++) {
+		struct drm_gem_object *obj;
+
+		if (!job->ops[i].handle) {
+			job->ops[i].obj = NULL;
+			continue;
+		}
+
+		/*
+		 * normally use drm_gem_object_lookup(), but for bulk lookup
+		 * all under single table_lock just hit object_idr directly:
+		 */
+		obj = idr_find(&file->object_idr, job->ops[i].handle);
+		if (!obj) {
+			ret = UERR(EINVAL, dev, "invalid handle %u at index %u\n", job->ops[i].handle, i);
+			goto out_unlock;
+		}
+
+		drm_gem_object_get(obj);
+
+		job->ops[i].obj = obj;
+		cnt++;
+	}
+
+	*nr_bos = cnt;
+
+out_unlock:
+	spin_unlock(&file->table_lock);
+
+out:
+	return ret;
+}
+
+static void
+prealloc_count(struct msm_vm_bind_job *job,
+	       struct msm_vm_bind_op *first,
+	       struct msm_vm_bind_op *last)
+{
+	struct msm_mmu *mmu = to_msm_vm(job->vm)->mmu;
+
+	if (!first)
+		return;
+
+	uint64_t start_iova = first->iova;
+	uint64_t end_iova = last->iova + last->range;
+
+	mmu->funcs->prealloc_count(mmu, &job->prealloc, start_iova, end_iova - start_iova);
+}
+
+static bool
+ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next)
+{
+	/*
+	 * Last level pte covers 2MB.. so we should merge two ops, from
+	 * the PoV of figuring out how much pgtable pages to pre-allocate
+	 * if they land in the same 2MB range:
+	 */
+	uint64_t pte_mask = ~(SZ_2M - 1);
+	return ((first->iova + first->range) & pte_mask) == (next->iova & pte_mask);
+}
+
+/*
+ * Determine the amount of memory to prealloc for pgtables.  For sparse images,
+ * in particular, userspace plays some tricks with the order of page mappings
+ * to get the desired swizzle pattern, resulting in a large # of tiny MAP ops.
+ * So detect when multiple MAP operations are physically contiguous, and count
+ * them as a single mapping.  Otherwise the prealloc_count() will not realize
+ * they can share pagetable pages and vastly overcount.
+ */
+static void
+vm_bind_prealloc_count(struct msm_vm_bind_job *job)
+{
+	struct msm_vm_bind_op *first = NULL, *last = NULL;
+
+	for (int i = 0; i < job->nr_ops; i++) {
+		struct msm_vm_bind_op *op = &job->ops[i];
+
+		/* We only care about MAP/MAP_NULL: */
+		if (op->op == MSM_VM_BIND_OP_UNMAP)
+			continue;
+
+		/*
+		 * If op is contiguous with last in the current range, then
+		 * it becomes the new last in the range and we continue
+		 * looping:
+		 */
+		if (last && ops_are_same_pte(last, op)) {
+			last = op;
+			continue;
+		}
+
+		/*
+		 * If op is not contiguous with the current range, flush
+		 * the current range and start anew:
+		 */
+		prealloc_count(job, first, last);
+		first = last = op;
+	}
+
+	/* Flush the remaining range: */
+	prealloc_count(job, first, last);
+}
+
+/*
+ * Lock VM and GEM objects
+ */
+static int
+vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec)
+{
+	int ret;
+
+	/* Lock VM and objects: */
+	drm_exec_until_all_locked (exec) {
+		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(job->vm));
+		drm_exec_retry_on_contention(exec);
+		if (ret)
+			return ret;
+
+		for (unsigned i = 0; i < job->nr_ops; i++) {
+			const struct msm_vm_bind_op *op = &job->ops[i];
+
+			switch (op->op) {
+			case MSM_VM_BIND_OP_UNMAP:
+				ret = drm_gpuvm_sm_unmap_exec_lock(job->vm, exec,
+								   op->iova,
+								   op->obj_offset);
+				break;
+			case MSM_VM_BIND_OP_MAP:
+			case MSM_VM_BIND_OP_MAP_NULL:
+				ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1,
+								 op->iova, op->range,
+								 op->obj, op->obj_offset);
+				break;
+			default:
+				/*
+				 * lookup_op() should have already thrown an error for
+				 * invalid ops
+				 */
+				WARN_ON("unreachable");
+			}
+
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Pin GEM objects, ensuring that we have backing pages.  Pinning will move
+ * the object to the pinned LRU so that the shrinker knows to first consider
+ * other objects for evicting.
+ */
+static int
+vm_bind_job_pin_objects(struct msm_vm_bind_job *job)
+{
+	struct drm_gem_object *obj;
+
+	/*
+	 * First loop, before holding the LRU lock, avoids holding the
+	 * LRU lock while calling msm_gem_pin_vma_locked (which could
+	 * trigger get_pages())
+	 */
+	job_foreach_bo (obj, job) {
+		struct page **pages;
+
+		pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED);
+		if (IS_ERR(pages))
+			return PTR_ERR(pages);
+	}
+
+	struct msm_drm_private *priv = job->vm->drm->dev_private;
+
+	/*
+	 * A second loop while holding the LRU lock (a) avoids acquiring/dropping
+	 * the LRU lock for each individual bo, while (b) avoiding holding the
+	 * LRU lock while calling msm_gem_pin_vma_locked() (which could trigger
+	 * get_pages() which could trigger reclaim.. and if we held the LRU lock
+	 * could trigger deadlock with the shrinker).
+	 */
+	mutex_lock(&priv->lru.lock);
+	job_foreach_bo (obj, job)
+		msm_gem_pin_obj_locked(obj);
+	mutex_unlock(&priv->lru.lock);
+
+	job->bos_pinned = true;
+
+	return 0;
+}
+
+/*
+ * Unpin GEM objects.  Normally this is done after the bind job is run.
+ */
+static void
+vm_bind_job_unpin_objects(struct msm_vm_bind_job *job)
+{
+	struct drm_gem_object *obj;
+
+	if (!job->bos_pinned)
+		return;
+
+	job_foreach_bo (obj, job)
+		msm_gem_unpin_locked(obj);
+
+	job->bos_pinned = false;
+}
+
+/*
+ * Pre-allocate pgtable memory, and translate the VM bind requests into a
+ * sequence of pgtable updates to be applied asynchronously.
+ */
+static int
+vm_bind_job_prepare(struct msm_vm_bind_job *job)
+{
+	struct msm_gem_vm *vm = to_msm_vm(job->vm);
+	struct msm_mmu *mmu = vm->mmu;
+	int ret;
+
+	ret = mmu->funcs->prealloc_allocate(mmu, &job->prealloc);
+	if (ret)
+		return ret;
+
+	for (unsigned i = 0; i < job->nr_ops; i++) {
+		const struct msm_vm_bind_op *op = &job->ops[i];
+		struct op_arg arg = {
+			.job = job,
+		};
+
+		switch (op->op) {
+		case MSM_VM_BIND_OP_UNMAP:
+			ret = drm_gpuvm_sm_unmap(job->vm, &arg, op->iova,
+						 op->range);
+			break;
+		case MSM_VM_BIND_OP_MAP:
+			if (op->flags & MSM_VM_BIND_OP_DUMP)
+				arg.flags |= MSM_VMA_DUMP;
+			fallthrough;
+		case MSM_VM_BIND_OP_MAP_NULL:
+			ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova,
+					       op->range, op->obj, op->obj_offset);
+			break;
+		default:
+			/*
+			 * lookup_op() should have already thrown an error for
+			 * invalid ops
+			 */
+			BUG_ON("unreachable");
+		}
+
+		if (ret) {
+			/*
+			 * If we've already started modifying the vm, we can't
+			 * adequately describe to userspace the intermediate
+			 * state the vm is in.  So throw up our hands!
+			 */
+			if (i > 0)
+				vm->unusable = true;
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Attach fences to the GEM objects being bound.  This will signify to
+ * the shrinker that they are busy even after dropping the locks (ie.
+ * drm_exec_fini())
+ */
+static void
+vm_bind_job_attach_fences(struct msm_vm_bind_job *job)
+{
+	for (unsigned i = 0; i < job->nr_ops; i++) {
+		struct drm_gem_object *obj = job->ops[i].obj;
+
+		if (!obj)
+			continue;
+
+		dma_resv_add_fence(obj->resv, job->fence,
+				   DMA_RESV_USAGE_KERNEL);
+	}
+}
+
+int
+msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	struct drm_msm_vm_bind *args = data;
+	struct msm_context *ctx = file->driver_priv;
+	struct msm_vm_bind_job *job = NULL;
+	struct msm_gpu *gpu = priv->gpu;
+	struct msm_gpu_submitqueue *queue;
+	struct msm_syncobj_post_dep *post_deps = NULL;
+	struct drm_syncobj **syncobjs_to_reset = NULL;
+	struct sync_file *sync_file = NULL;
+	struct dma_fence *fence;
+	int out_fence_fd = -1;
+	int ret, nr_bos = 0;
+	unsigned i;
+
+	if (!gpu)
+		return -ENXIO;
+
+	/*
+	 * Maybe we could allow just UNMAP ops?  OTOH userspace should just
+	 * immediately close the device file and all will be torn down.
+	 */
+	if (to_msm_vm(ctx->vm)->unusable)
+		return UERR(EPIPE, dev, "context is unusable");
+
+	/*
+	 * Technically, you cannot create a VM_BIND submitqueue in the first
+	 * place, if you haven't opted in to VM_BIND context.  But it is
+	 * cleaner / less confusing, to check this case directly.
+ */
+	if (!msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "context does not support vmbind");
+
+	if (args->flags & ~MSM_VM_BIND_FLAGS)
+		return UERR(EINVAL, dev, "invalid flags");
+
+	queue = msm_submitqueue_get(ctx, args->queue_id);
+	if (!queue)
+		return -ENOENT;
+
+	if (!(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) {
+		ret = UERR(EINVAL, dev, "Invalid queue type");
+		goto out_post_unlock;
+	}
+
+	if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) {
+		out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
+		if (out_fence_fd < 0) {
+			ret = out_fence_fd;
+			goto out_post_unlock;
 		}
 	}
-	dma_resv_unlock(drm_gpuvm_resv(gpuvm));
+
+	job = vm_bind_job_create(dev, gpu, queue, args->nr_ops);
+	if (IS_ERR(job)) {
+		ret = PTR_ERR(job);
+		goto out_post_unlock;
+	}
+
+	ret = mutex_lock_interruptible(&queue->lock);
+	if (ret)
+		goto out_post_unlock;
+
+	if (args->flags & MSM_VM_BIND_FENCE_FD_IN) {
+		struct dma_fence *in_fence;
+
+		in_fence = sync_file_get_fence(args->fence_fd);
+
+		if (!in_fence) {
+			ret = UERR(EINVAL, dev, "invalid in-fence");
+			goto out_unlock;
+		}
+
+		ret = drm_sched_job_add_dependency(&job->base, in_fence);
+		if (ret)
+			goto out_unlock;
+	}
+
+	if (args->in_syncobjs > 0) {
+		syncobjs_to_reset = msm_syncobj_parse_deps(dev, &job->base,
+							   file, args->in_syncobjs,
+							   args->nr_in_syncobjs,
+							   args->syncobj_stride);
+		if (IS_ERR(syncobjs_to_reset)) {
+			ret = PTR_ERR(syncobjs_to_reset);
+			goto out_unlock;
+		}
+	}
+
+	if (args->out_syncobjs > 0) {
+		post_deps = msm_syncobj_parse_post_deps(dev, file,
+							args->out_syncobjs,
+							args->nr_out_syncobjs,
+							args->syncobj_stride);
+		if (IS_ERR(post_deps)) {
+			ret = PTR_ERR(post_deps);
+			goto out_unlock;
+		}
+	}
+
+	ret = vm_bind_job_lookup_ops(job, args, file, &nr_bos);
+	if (ret)
+		goto out_unlock;
+
+	vm_bind_prealloc_count(job);
+
+	struct drm_exec exec;
+	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
+	drm_exec_init(&exec, flags, nr_bos + 1);
+
+	ret =
vm_bind_job_lock_objects(job, &exec);
+	if (ret)
+		goto out;
+
+	ret = vm_bind_job_pin_objects(job);
+	if (ret)
+		goto out;
+
+	ret = vm_bind_job_prepare(job);
+	if (ret)
+		goto out;
+
+	drm_sched_job_arm(&job->base);
+
+	job->fence = dma_fence_get(&job->base.s_fence->finished);
+
+	if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) {
+		sync_file = sync_file_create(job->fence);
+		if (!sync_file) {
+			ret = -ENOMEM;
+		} else {
+			fd_install(out_fence_fd, sync_file->file);
+			args->fence_fd = out_fence_fd;
+		}
+	}
+
+	if (ret)
+		goto out;
+
+	vm_bind_job_attach_fences(job);
+
+	/*
+	 * The job can be free'd (and fence unref'd) at any point after
+	 * drm_sched_entity_push_job(), so we need to hold our own ref
+	 */
+	fence = dma_fence_get(job->fence);
+
+	drm_sched_entity_push_job(&job->base);
+
+	msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs);
+	msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, fence);
+
+	dma_fence_put(fence);
+
+out:
+	if (ret)
+		vm_bind_job_unpin_objects(job);
+
+	drm_exec_fini(&exec);
+out_unlock:
+	mutex_unlock(&queue->lock);
+out_post_unlock:
+	if (ret && (out_fence_fd >= 0)) {
+		put_unused_fd(out_fence_fd);
+		if (sync_file)
+			fput(sync_file->file);
+	}
+
+	if (!IS_ERR_OR_NULL(job)) {
+		if (ret)
+			msm_vma_job_free(&job->base);
+	} else {
+		/*
+		 * If the submit hasn't yet taken ownership of the queue
+		 * then we need to drop the reference ourself:
+		 */
+		msm_submitqueue_put(queue);
+	}
+
+	if (!IS_ERR_OR_NULL(post_deps)) {
+		for (i = 0; i < args->nr_out_syncobjs; ++i) {
+			kfree(post_deps[i].chain);
+			drm_syncobj_put(post_deps[i].syncobj);
+		}
+		kfree(post_deps);
+	}
+
+	if (!IS_ERR_OR_NULL(syncobjs_to_reset)) {
+		for (i = 0; i < args->nr_in_syncobjs; ++i) {
+			if (syncobjs_to_reset[i])
+				drm_syncobj_put(syncobjs_to_reset[i]);
+		}
+		kfree(syncobjs_to_reset);
+	}
+
+	return ret;
 }
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 6d6cd1219926..5c67294edc95 100644
---
a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -272,7 +272,10 @@ struct drm_msm_gem_submit_cmd {
 	__u32 size;           /* in, cmdstream size */
 	__u32 pad;
 	__u32 nr_relocs;      /* in, number of submit_reloc's */
-	__u64 relocs;         /* in, ptr to array of submit_reloc's */
+	union {
+		__u64 relocs; /* in, ptr to array of submit_reloc's */
+		__u64 iova;   /* cmdstream address (for VM_BIND contexts) */
+	};
 };
 
 /* Each buffer referenced elsewhere in the cmdstream submit (ie. the
@@ -339,7 +342,74 @@ struct drm_msm_gem_submit {
 	__u32 nr_out_syncobjs;  /* in, number of entries in out_syncobj. */
 	__u32 syncobj_stride;   /* in, stride of syncobj arrays. */
 	__u32 pad;              /* in, reserved for future use, always 0. */
+};
+
+#define MSM_VM_BIND_OP_UNMAP	0
+#define MSM_VM_BIND_OP_MAP	1
+#define MSM_VM_BIND_OP_MAP_NULL	2
+
+#define MSM_VM_BIND_OP_DUMP	1
+#define MSM_VM_BIND_OP_FLAGS ( \
+		MSM_VM_BIND_OP_DUMP | \
+		0)
 
+/**
+ * struct drm_msm_vm_bind_op - bind/unbind op to run
+ */
+struct drm_msm_vm_bind_op {
+	/** @op: one of MSM_VM_BIND_OP_x */
+	__u32 op;
+	/** @handle: GEM object handle, MBZ for UNMAP or MAP_NULL */
+	__u32 handle;
+	/** @obj_offset: Offset into GEM object, MBZ for UNMAP or MAP_NULL */
+	__u64 obj_offset;
+	/** @iova: Address to operate on */
+	__u64 iova;
+	/** @range: Number of bytes to map/unmap */
+	__u64 range;
+	/** @flags: Bitmask of MSM_VM_BIND_OP_FLAG_x */
+	__u32 flags;
+	/** @pad: MBZ */
+	__u32 pad;
+};
+
+#define MSM_VM_BIND_FENCE_FD_IN		0x00000001
+#define MSM_VM_BIND_FENCE_FD_OUT	0x00000002
+#define MSM_VM_BIND_FLAGS ( \
+		MSM_VM_BIND_FENCE_FD_IN | \
+		MSM_VM_BIND_FENCE_FD_OUT | \
+		0)
+
+/**
+ * struct drm_msm_vm_bind - Input of &DRM_IOCTL_MSM_VM_BIND
+ */
+struct drm_msm_vm_bind {
+	/** @flags: in, bitmask of MSM_VM_BIND_x */
+	__u32 flags;
+	/** @nr_ops: the number of bind ops in this ioctl */
+	__u32 nr_ops;
+	/** @fence_fd: in/out fence fd (see MSM_VM_BIND_FENCE_FD_IN/OUT) */
+	__s32 fence_fd;
+	/** @queue_id: in, submitqueue id */
+	__u32
queue_id;
+	/** @in_syncobjs: in, ptr to array of drm_msm_gem_syncobj */
+	__u64 in_syncobjs;
+	/** @out_syncobjs: in, ptr to array of drm_msm_gem_syncobj */
+	__u64 out_syncobjs;
+	/** @nr_in_syncobjs: in, number of entries in in_syncobj */
+	__u32 nr_in_syncobjs;
+	/** @nr_out_syncobjs: in, number of entries in out_syncobj */
+	__u32 nr_out_syncobjs;
+	/** @syncobj_stride: in, stride of syncobj arrays */
+	__u32 syncobj_stride;
+	/** @op_stride: sizeof each struct drm_msm_vm_bind_op in @ops */
+	__u32 op_stride;
+	union {
+		/** @op: used if nr_ops == 1 */
+		struct drm_msm_vm_bind_op op;
+		/** @ops: userptr to array of drm_msm_vm_bind_op if nr_ops > 1 */
+		__u64 ops;
+	};
 };
 
 #define MSM_WAIT_FENCE_BOOST	0x00000001
@@ -435,6 +505,7 @@ struct drm_msm_submitqueue_query {
 #define DRM_MSM_SUBMITQUEUE_NEW        0x0A
 #define DRM_MSM_SUBMITQUEUE_CLOSE      0x0B
 #define DRM_MSM_SUBMITQUEUE_QUERY      0x0C
+#define DRM_MSM_VM_BIND                0x0D
 
 #define DRM_IOCTL_MSM_GET_PARAM        DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GET_PARAM, struct drm_msm_param)
 #define DRM_IOCTL_MSM_SET_PARAM        DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SET_PARAM, struct drm_msm_param)
@@ -448,6 +519,7 @@ struct drm_msm_submitqueue_query {
 #define DRM_IOCTL_MSM_SUBMITQUEUE_NEW   DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_NEW, struct drm_msm_submitqueue)
 #define DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_CLOSE, __u32)
 #define DRM_IOCTL_MSM_SUBMITQUEUE_QUERY DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_QUERY, struct drm_msm_submitqueue_query)
+#define DRM_IOCTL_MSM_VM_BIND           DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_VM_BIND, struct drm_msm_vm_bind)
 
 #if defined(__cplusplus)
 }
-- 
2.50.0

From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
 Sean Paul, Konrad Dybcio, Dmitry Baryshkov, Abhinav Kumar,
 Jessica Zhang, Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 36/42] drm/msm: Add VM logging for VM_BIND updates
Date: Sun, 29 Jun 2025 13:13:19 -0700
Message-ID: <20250629201530.25775-37-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

When userspace opts in to VM_BIND, the submit no longer holds references
keeping the VMA alive.  This makes it difficult to distinguish between
UMD/KMD/app bugs.  So add a debug option for logging the most recent VM
updates and capturing these in GPU devcoredumps.

The submitqueue id is also captured; a value of zero means the operation
did not go via a submitqueue (ie. it comes from msm_gem_vm_close()
tearing down the remaining mappings when the device file is closed).
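The log described in the commit message is a fixed-size, power-of-two ring buffer indexed with a bitmask, where `log_idx` always points at the next slot to overwrite (and therefore at the oldest entry once the log has wrapped). The scheme can be sketched standalone; all names here (`vm_log`, `log_op`, `LOG_SHIFT`) are illustrative, not the kernel's:

```c
#include <stdint.h>

#define LOG_SHIFT 3              /* hypothetical: an 8-entry log */
#define LOG_LEN   (1u << LOG_SHIFT)
#define LOG_MASK  (LOG_LEN - 1)

struct log_entry {
    const char *op;              /* "map"/"unmap"; NULL if never written */
    uint64_t iova;
    uint64_t range;
    int queue_id;                /* 0 == not via a submitqueue */
};

struct vm_log {
    struct log_entry e[LOG_LEN]; /* zero-initialized before first use */
    uint32_t idx;                /* next slot to overwrite == oldest entry */
};

static void log_op(struct vm_log *l, const char *op, uint64_t iova,
                   uint64_t range, int queue_id)
{
    l->e[l->idx] = (struct log_entry){ op, iova, range, queue_id };
    l->idx = (l->idx + 1) & LOG_MASK;  /* power-of-two wrap, no division */
}

/* Valid entry count: the full length once wrapped, else idx entries. */
static uint32_t log_count(const struct vm_log *l)
{
    return l->e[l->idx].op ? LOG_LEN : l->idx;
}

/* Index of the oldest valid entry, for dumping in chronological order. */
static uint32_t log_first(const struct vm_log *l)
{
    return l->e[l->idx].op ? l->idx : 0;
}
```

A dumper then walks `log_count()` entries starting at `log_first()`, masking each index, which is the same traversal the devcoredump path performs under `mmu_lock`.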
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  11 +++
 drivers/gpu/drm/msm/msm_gem.h           |  24 +++++
 drivers/gpu/drm/msm/msm_gem_vma.c       | 124 ++++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_gpu.c           |  52 +++++++++-
 drivers/gpu/drm/msm/msm_gpu.h           |   4 +
 5 files changed, 202 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index ff25e3dada04..53cbfa5a507b 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -833,6 +833,7 @@ void adreno_gpu_state_destroy(struct msm_gpu_state *state)
 	for (i = 0; state->bos && i < state->nr_bos; i++)
 		kvfree(state->bos[i].data);
 
+	kfree(state->vm_logs);
 	kfree(state->bos);
 	kfree(state->comm);
 	kfree(state->cmd);
@@ -973,6 +974,16 @@ void adreno_show(struct msm_gpu *gpu, struct msm_gpu_state *state,
 			   info->ptes[0], info->ptes[1], info->ptes[2], info->ptes[3]);
 	}
 
+	if (state->vm_logs) {
+		drm_puts(p, "vm-log:\n");
+		for (i = 0; i < state->nr_vm_logs; i++) {
+			struct msm_gem_vm_log_entry *e = &state->vm_logs[i];
+			drm_printf(p, "  - %s:%d: 0x%016llx-0x%016llx\n",
+				   e->op, e->queue_id, e->iova,
+				   e->iova + e->range);
+		}
+	}
+
 	drm_printf(p, "rbbm-status: 0x%08x\n", state->rbbm_status);
 
 	drm_puts(p, "ringbuffer:\n");
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index ee464e315643..062d1b5477d6 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -24,6 +24,20 @@
 #define MSM_BO_STOLEN        0x10000000 /* try to use stolen/splash memory */
 #define MSM_BO_MAP_PRIV      0x20000000 /* use IOMMU_PRIV when mapping */
 
+/**
+ * struct msm_gem_vm_log_entry - An entry in the VM log
+ *
+ * For userspace managed VMs, a log of recent VM updates is tracked and
+ * captured in GPU devcore dumps, to aid debugging issues caused by (for
+ * example) incorrectly
synchronized VM updates
+ */
+struct msm_gem_vm_log_entry {
+	const char *op;
+	uint64_t iova;
+	uint64_t range;
+	int queue_id;
+};
+
 /**
  * struct msm_gem_vm - VM object
  *
@@ -85,6 +99,15 @@ struct msm_gem_vm {
 	/** @last_fence: Fence for last pending work scheduled on the VM */
 	struct dma_fence *last_fence;
 
+	/** @log: A log of recent VM updates */
+	struct msm_gem_vm_log_entry *log;
+
+	/** @log_shift: length of @log is (1 << @log_shift) */
+	uint32_t log_shift;
+
+	/** @log_idx: index of next @log entry to write */
+	uint32_t log_idx;
+
 	/** @faults: the number of GPU hangs associated with this address space */
 	int faults;
 
@@ -115,6 +138,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed);
 
 void msm_gem_vm_close(struct drm_gpuvm *gpuvm);
+void msm_gem_vm_unusable(struct drm_gpuvm *gpuvm);
 
 struct msm_fence_context;
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 6ec92b7771f5..9564e40c542f 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -17,6 +17,10 @@
 
 #define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
 
+static uint vm_log_shift = 0;
+MODULE_PARM_DESC(vm_log_shift, "Length of VM op log");
+module_param_named(vm_log_shift, vm_log_shift, uint, 0600);
+
 /**
  * struct msm_vm_map_op - create new pgtable mapping
  */
@@ -31,6 +35,13 @@ struct msm_vm_map_op {
 	struct sg_table *sgt;
 	/** @prot: the mapping protection flags */
 	int prot;
+
+	/**
+	 * @queue_id: The id of the submitqueue the operation is performed
+	 * on, or zero for (in particular) UNMAP ops triggered outside of
+	 * a submitqueue (ie.
process cleanup)
+	 */
+	int queue_id;
 };
 
 /**
@@ -41,6 +52,13 @@ struct msm_vm_unmap_op {
 	uint64_t iova;
 	/** @range: size of region to unmap */
 	uint64_t range;
+
+	/**
+	 * @queue_id: The id of the submitqueue the operation is performed
+	 * on, or zero for (in particular) UNMAP ops triggered outside of
+	 * a submitqueue (ie. process cleanup)
+	 */
+	int queue_id;
 };
 
 /**
@@ -144,16 +162,87 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 	vm->mmu->funcs->destroy(vm->mmu);
 	dma_fence_put(vm->last_fence);
 	put_pid(vm->pid);
+	kfree(vm->log);
 	kfree(vm);
 }
 
+/**
+ * msm_gem_vm_unusable() - Mark a VM as unusable
+ * @vm: the VM to mark unusable
+ */
+void
+msm_gem_vm_unusable(struct drm_gpuvm *gpuvm)
+{
+	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
+	uint32_t vm_log_len = (1 << vm->log_shift);
+	uint32_t vm_log_mask = vm_log_len - 1;
+	uint32_t nr_vm_logs;
+	int first;
+
+	vm->unusable = true;
+
+	/* Bail if no log, or empty log: */
+	if (!vm->log || !vm->log[0].op)
+		return;
+
+	mutex_lock(&vm->mmu_lock);
+
+	/*
+	 * log_idx is the next entry to overwrite, meaning it is the oldest, or
+	 * first, entry (other than the special case handled below where the
+	 * log hasn't wrapped around yet)
+	 */
+	first = vm->log_idx;
+
+	if (!vm->log[first].op) {
+		/*
+		 * If the next log entry has not been written yet, then only
+		 * entries 0 to idx-1 are valid (ie.
we haven't wrapped around
+		 * yet)
+		 */
+		nr_vm_logs = first;
+		first = 0;
+	} else {
+		nr_vm_logs = vm_log_len;
+	}
+
+	pr_err("vm-log:\n");
+	for (int i = 0; i < nr_vm_logs; i++) {
+		int idx = (i + first) & vm_log_mask;
+		struct msm_gem_vm_log_entry *e = &vm->log[idx];
+		pr_err("  - %s:%d: 0x%016llx-0x%016llx\n",
+		       e->op, e->queue_id, e->iova,
+		       e->iova + e->range);
+	}
+
+	mutex_unlock(&vm->mmu_lock);
+}
+
 static void
-vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
+vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int queue_id)
 {
+	int idx;
+
 	if (!vm->managed)
 		lockdep_assert_held(&vm->mmu_lock);
 
-	vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
+	vm_dbg("%s:%p:%d: %016llx %016llx", op, vm, queue_id, iova, iova + range);
+
+	if (!vm->log)
+		return;
+
+	idx = vm->log_idx;
+	vm->log[idx].op = op;
+	vm->log[idx].iova = iova;
+	vm->log[idx].range = range;
+	vm->log[idx].queue_id = queue_id;
+	vm->log_idx = (vm->log_idx + 1) & ((1 << vm->log_shift) - 1);
+}
+
+static void
+vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
+{
+	vm_log(vm, "unmap", op->iova, op->range, op->queue_id);
 
 	vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
 }
@@ -161,10 +250,7 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
 static int
 vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
 {
-	if (!vm->managed)
-		lockdep_assert_held(&vm->mmu_lock);
-
-	vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
+	vm_log(vm, "map", op->iova, op->range, op->queue_id);
 
 	return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset,
 				   op->range, op->prot);
@@ -382,6 +468,7 @@ vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op)
 static int
 msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg)
 {
+	struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job;
 	struct drm_gem_object *obj = op->map.gem.obj;
 	struct drm_gpuva *vma;
 	struct sg_table *sgt;
@@ -412,6 +499,7 @@ msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg)
 			.range = vma->va.range,
 			.offset = vma->gem.offset,
 			.prot = prot,
+			.queue_id = job->queue->id,
 		},
 		.obj = vma->gem.obj,
 	});
@@ -445,6 +533,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg)
 		.unmap = {
 			.iova = unmap_start,
 			.range = unmap_range,
+			.queue_id = job->queue->id,
 		},
 		.obj = orig_vma->gem.obj,
 	});
@@ -506,6 +595,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg)
 static int
 msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg)
 {
+	struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job;
 	struct drm_gpuva *vma = op->unmap.va;
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 
@@ -520,6 +610,7 @@ msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg)
 		.unmap = {
 			.iova = vma->va.addr,
 			.range = vma->va.range,
+			.queue_id = job->queue->id,
 		},
 		.obj = vma->gem.obj,
 	});
@@ -584,7 +675,7 @@ msm_vma_job_run(struct drm_sched_job *_job)
 	 * now the VM is in an undefined state.  Game over!
 	 */
 	if (ret)
-		vm->unusable = true;
+		msm_gem_vm_unusable(job->vm);
 
 	job_foreach_bo (obj, job) {
 		msm_gem_lock(obj);
@@ -695,6 +786,23 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 
 	drm_mm_init(&vm->mm, va_start, va_size);
 
+	/*
+	 * We don't really need vm log for kernel managed VMs, as the kernel
+	 * is responsible for ensuring that GEM objs are mapped if they are
+	 * used by a submit.  Furthermore we piggyback on mmu_lock to serialize
+	 * access to the log.
+	 *
+	 * Limit the max log_shift to 8 to prevent userspace from asking us
+	 * for an unreasonable log size.
+	 */
+	if (!managed)
+		vm->log_shift = MIN(vm_log_shift, 8);
+
+	if (vm->log_shift) {
+		vm->log = kmalloc_array(1 << vm->log_shift, sizeof(vm->log[0]),
+					GFP_KERNEL | __GFP_ZERO);
+	}
+
 	return &vm->base;
 
 err_free_dummy:
@@ -1161,7 +1269,7 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job)
 			 * state the vm is in.  So throw up our hands!
 			 */
 			if (i > 0)
-				vm->unusable = true;
+				msm_gem_vm_unusable(job->vm);
 			return ret;
 		}
 	}
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index ccd9ebfc5c7c..c317b25a8162 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -259,9 +259,6 @@ static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submit *submit)
 {
 	extern bool rd_full;
 
-	if (!submit)
-		return;
-
 	if (msm_context_is_vmbind(submit->queue->ctx)) {
 		struct drm_exec exec;
 		struct drm_gpuva *vma;
@@ -318,6 +315,48 @@ static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submit *submit)
 	}
 }
 
+static void crashstate_get_vm_logs(struct msm_gpu_state *state, struct msm_gem_vm *vm)
+{
+	uint32_t vm_log_len = (1 << vm->log_shift);
+	uint32_t vm_log_mask = vm_log_len - 1;
+	int first;
+
+	/* Bail if no log, or empty log: */
+	if (!vm->log || !vm->log[0].op)
+		return;
+
+	mutex_lock(&vm->mmu_lock);
+
+	/*
+	 * log_idx is the next entry to overwrite, meaning it is the oldest, or
+	 * first, entry (other than the special case handled below where the
+	 * log hasn't wrapped around yet)
+	 */
+	first = vm->log_idx;
+
+	if (!vm->log[first].op) {
+		/*
+		 * If the next log entry has not been written yet, then only
+		 * entries 0 to idx-1 are valid (ie.
we haven't wrapped around
+		 * yet)
+		 */
+		state->nr_vm_logs = first;
+		first = 0;
+	} else {
+		state->nr_vm_logs = vm_log_len;
+	}
+
+	state->vm_logs = kmalloc_array(
+		state->nr_vm_logs, sizeof(vm->log[0]), GFP_KERNEL);
+	for (int i = 0; i < state->nr_vm_logs; i++) {
+		int idx = (i + first) & vm_log_mask;
+
+		state->vm_logs[i] = vm->log[idx];
+	}
+
+	mutex_unlock(&vm->mmu_lock);
+}
+
 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 		struct msm_gem_submit *submit, struct msm_gpu_fault_info *fault_info,
 		char *comm, char *cmd)
@@ -351,7 +390,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 		msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
 	}
 
-	crashstate_get_bos(state, submit);
+	if (submit) {
+		crashstate_get_vm_logs(state, to_msm_vm(submit->vm));
+		crashstate_get_bos(state, submit);
+	}
 
 	/* Set the active crash state to be dumped on failure */
 	gpu->crashstate = state;
@@ -452,7 +494,7 @@ static void recover_worker(struct kthread_work *work)
 	 * VM_BIND)
 	 */
 	if (!vm->managed)
-		vm->unusable = true;
+		msm_gem_vm_unusable(submit->vm);
 
 	get_comm_cmdline(submit, &comm, &cmd);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 5705e8d4e6b9..b2a96544f92a 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -20,6 +20,7 @@
 #include "msm_gem.h"
 
 struct msm_gem_submit;
+struct msm_gem_vm_log_entry;
 struct msm_gpu_perfcntr;
 struct msm_gpu_state;
 struct msm_context;
@@ -603,6 +604,9 @@ struct msm_gpu_state {
 
 	struct msm_gpu_fault_info fault_info;
 
+	int nr_vm_logs;
+	struct msm_gem_vm_log_entry *vm_logs;
+
 	int nr_bos;
 	struct msm_gpu_state_bo *bos;
 };
-- 
2.50.0
F2FA5263F27 for ; Sun, 29 Jun 2025 20:17:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751228237; cv=none; b=Rqi5DEODvucyeDH+GosOeowp3Wiijr2S0R4H9QMnYCDTLI8RYzpkQj4Ul9BcINLIOkVrkeiyT6vafgpZSM4wjGbSgqHW9zt5jDgwluasdLe1mF3zpwj3PhTyJN3bRQ2BdMXibw/GYLoW/v4EnpQdkSzdYqEhmp91WV/sxGoblKk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751228237; c=relaxed/simple; bh=Ovqye2wwked0Ur0NdxtBqJxZ9D15juMbG5BYQBw2MHA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cuLp3gaPD3FTDsIpYsQY/bqRMskidUrMsKTp6ZSdA+3gr9R2mNex7Mdze2aY4SsWD1AAMDIxHJXGd7huhNVag8p5DAA0/sxybfJvE/z2tmsxYxN4cVVKZ8R/RuKVwSWQJ7BsmIY9ulxsLZIp8aoL8t7AfcNK+0PjQWDumpNHnZQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oss.qualcomm.com; spf=pass smtp.mailfrom=oss.qualcomm.com; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b=MPWB+XVG; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oss.qualcomm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=oss.qualcomm.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b="MPWB+XVG" Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 55TFpnc1008879 for ; Sun, 29 Jun 2025 20:17:14 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qualcomm.com; h= cc:content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=qcppdkim1; bh=I4qWRLNE0J0 R1zX7la3wdwz6YdoD5A69AfGQZd1gOIs=; b=MPWB+XVG5OtwHb7sUuW6DPsSGRq Smd2SKfP2PMen0zmPgM6MwS1DTjzCI2zL+SiprO5JvTL+W+Zg1CLpH2tziPsEb42 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Rob Clark,
 Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 37/42] drm/msm: Add VMA unmap reason
Date: Sun, 29 Jun 2025 13:13:20 -0700
Message-ID: <20250629201530.25775-38-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Make the VM log a bit more useful by providing a reason for the unmap (ie.
closing VM vs evict/purge, etc)

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c     | 20 +++++++++++---------
 drivers/gpu/drm/msm/msm_gem.h     |  2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c | 15 ++++++++++++---
 3 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 77fdf53d3e33..e3ccda777ef3 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -43,7 +43,8 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 	return 0;
 }
 
-static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close);
+static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+			    bool close, const char *reason);
 
 static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 {
@@ -57,7 +58,7 @@ static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 	drm_gpuvm_bo_for_each_va (vma, vm_bo) {
 		if (vma->vm != vm)
 			continue;
-		msm_gem_vma_unmap(vma);
+		msm_gem_vma_unmap(vma, "detach");
 		msm_gem_vma_close(vma);
 		break;
 	}
@@ -97,7 +98,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 			      MAX_SCHEDULE_TIMEOUT);
 
 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
-	put_iova_spaces(obj, ctx->vm, true);
+	put_iova_spaces(obj, ctx->vm, true, "close");
 	detach_vm(obj, ctx->vm);
 	drm_exec_fini(&exec); /* drop locks */
 }
@@ -425,7 +426,8 @@ static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj,
  * mapping.
  */
 static void
-put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
+put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		bool close, const char *reason)
 {
 	struct drm_gpuvm_bo *vm_bo, *tmp;
 
@@ -440,7 +442,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 		drm_gpuvm_bo_get(vm_bo);
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			msm_gem_vma_unmap(vma);
+			msm_gem_vma_unmap(vma, reason);
 			if (close)
 				msm_gem_vma_close(vma);
 		}
@@ -617,7 +619,7 @@ static int clear_iova(struct drm_gem_object *obj,
 	if (!vma)
 		return 0;
 
-	msm_gem_vma_unmap(vma);
+	msm_gem_vma_unmap(vma, NULL);
 	msm_gem_vma_close(vma);
 
 	return 0;
@@ -829,7 +831,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	GEM_WARN_ON(!is_purgeable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, NULL, false);
+	put_iova_spaces(obj, NULL, false, "purge");
 
 	msm_gem_vunmap(obj);
 
@@ -867,7 +869,7 @@ void msm_gem_evict(struct drm_gem_object *obj)
 	GEM_WARN_ON(is_unevictable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, NULL, false);
+	put_iova_spaces(obj, NULL, false, "evict");
 
 	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
 
@@ -1079,7 +1081,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 			drm_exec_retry_on_contention(&exec);
 		}
 	}
-	put_iova_spaces(obj, NULL, true);
+	put_iova_spaces(obj, NULL, true, "free");
 	drm_exec_fini(&exec); /* drop locks */
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 062d1b5477d6..ce5e90ba935b 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -168,7 +168,7 @@ struct msm_gem_vma {
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 offset, u64 range_start, u64 range_end);
-void msm_gem_vma_unmap(struct drm_gpuva *vma);
+void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason);
 int
msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 9564e40c542f..63f4d078e1a2 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -53,6 +53,9 @@ struct msm_vm_unmap_op {
 	/** @range: size of region to unmap */
 	uint64_t range;
 
+	/** @reason: The reason for the unmap */
+	const char *reason;
+
 	/**
 	 * @queue_id: The id of the submitqueue the operation is performed
 	 * on, or zero for (in particular) UNMAP ops triggered outside of
@@ -242,7 +245,12 @@ vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int
 static void
 vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
 {
-	vm_log(vm, "unmap", op->iova, op->range, op->queue_id);
+	const char *reason = op->reason;
+
+	if (!reason)
+		reason = "unmap";
+
+	vm_log(vm, reason, op->iova, op->range, op->queue_id);
 
 	vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
 }
@@ -257,7 +265,7 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
 }
 
 /* Actually unmap memory for the vma */
-void msm_gem_vma_unmap(struct drm_gpuva *vma)
+void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason)
 {
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
@@ -277,6 +285,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma)
 	vm_unmap_op(vm, &(struct msm_vm_unmap_op){
 		.iova = vma->va.addr,
 		.range = vma->va.range,
+		.reason = reason,
 	});
 
 	if (!vm->managed)
@@ -863,7 +872,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm)
 		drm_exec_retry_on_contention(&exec);
 	}
 
-	msm_gem_vma_unmap(vma);
+	msm_gem_vma_unmap(vma, "close");
 	msm_gem_vma_close(vma);
 
 	if (obj) {
-- 
2.50.0
From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Rob Clark,
 Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 38/42] drm/msm: Add mmu prealloc tracepoint
Date: Sun, 29 Jun 2025 13:13:21 -0700
Message-ID: <20250629201530.25775-39-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

So we can monitor how many pages are getting preallocated vs how many
get used.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gpu_trace.h | 14 ++++++++++++++
 drivers/gpu/drm/msm/msm_iommu.c     |  4 ++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
index 7f863282db0d..781bbe5540bd 100644
--- a/drivers/gpu/drm/msm/msm_gpu_trace.h
+++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
@@ -205,6 +205,20 @@ TRACE_EVENT(msm_gpu_preemption_irq,
 		TP_printk("preempted to %u", __entry->ring_id)
 );
 
+TRACE_EVENT(msm_mmu_prealloc_cleanup,
+		TP_PROTO(u32 count, u32 remaining),
+		TP_ARGS(count, remaining),
+		TP_STRUCT__entry(
+			__field(u32, count)
+			__field(u32, remaining)
+		),
+		TP_fast_assign(
+			__entry->count = count;
+			__entry->remaining = remaining;
+		),
+		TP_printk("count=%u, remaining=%u", __entry->count, __entry->remaining)
+);
+
 #endif
 
 #undef TRACE_INCLUDE_PATH
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 887c9023f8a2..55c29f49b788 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include "msm_drv.h"
+#include "msm_gpu_trace.h"
 #include "msm_mmu.h"
 
 struct msm_iommu {
@@ -346,6 +347,9 @@ msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_prealloc
 	struct kmem_cache *pt_cache = get_pt_cache(mmu);
 	uint32_t remaining_pt_count = p->count - p->ptr;
 
+	if (p->count > 0)
+		trace_msm_mmu_prealloc_cleanup(p->count, remaining_pt_count);
+
 	kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
 	kvfree(p->pages);
 }
-- 
2.50.0
From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Rob Clark,
 Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 39/42] drm/msm: use trylock for debugfs
Date: Sun, 29 Jun 2025 13:13:22 -0700
Message-ID: <20250629201530.25775-40-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

This resolves a potential deadlock vs msm_gem_vm_close().  Otherwise for
_NO_SHARE buffers msm_gem_describe() could be trying to acquire the
shared vm resv, while already holding priv->obj_lock.  But _vm_close()
might drop the last reference to a GEM obj while already holding the vm
resv, and msm_gem_free_object() needs to grab priv->obj_lock, a locking
inversion.

OTOH this is only for debugfs and it isn't critical if we undercount by
skipping a locked obj.  So just use trylock() and move along if we can't
get the lock.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c | 3 ++-
 drivers/gpu/drm/msm/msm_gem.h | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index e3ccda777ef3..3e87d27dfcb6 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -938,7 +938,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 	uint64_t off = drm_vma_node_start(&obj->vma_node);
 	const char *madv;
 
-	msm_gem_lock(obj);
+	if (!msm_gem_trylock(obj))
+		return;
 
 	stats->all.count++;
 	stats->all.size += obj->size;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index ce5e90ba935b..1ce97f8a30bb 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -280,6 +280,12 @@ msm_gem_lock(struct drm_gem_object *obj)
 	dma_resv_lock(obj->resv, NULL);
 }
 
+static inline bool __must_check
+msm_gem_trylock(struct drm_gem_object *obj)
+{
+	return dma_resv_trylock(obj->resv);
+}
+
 static inline int
 msm_gem_lock_interruptible(struct drm_gem_object *obj)
 {
-- 
2.50.0
From nobody Wed Oct 8 10:02:28 2025
smtp.gmail.com with ESMTPSA id 41be03b00d2f7-b34e3013d7csm6301398a12.11.2025.06.29.13.17.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 29 Jun 2025 13:17:15 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott , Antonino Maniscalco , Danilo Krummrich , Rob Clark , Rob Clark , Dmitry Baryshkov , Abhinav Kumar , Jessica Zhang , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v9 40/42] drm/msm: Bump UAPI version Date: Sun, 29 Jun 2025 13:13:23 -0700 Message-ID: <20250629201530.25775-41-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Authority-Analysis: v=2.4 cv=H/Pbw/Yi c=1 sm=1 tr=0 ts=68619f4d cx=c_pps a=MTSHoo12Qbhz2p7MsH1ifg==:117 a=xqWC_Br6kY4A:10 a=6IFa9wvqVegA:10 a=cm27Pg_UAAAA:8 a=EUspDBNiAAAA:8 a=pGLkceISAAAA:8 a=KgEaFMypzpKrXJt10QQA:9 a=GvdueXVYPmCkWapjIL-Q:22 X-Proofpoint-ORIG-GUID: n3dHngJk5DAY35Dw5jQ1EPFn62gAczJb X-Proofpoint-GUID: n3dHngJk5DAY35Dw5jQ1EPFn62gAczJb X-Proofpoint-Spam-Details-Enc: AW1haW4tMjUwNjI5MDE3MiBTYWx0ZWRfX9RJ99WR7rmXF +hsLQk6l4SJDy/wJ7gh5RQSTW5UF+SrLikuozIyeyaU/66D7wbRHvuWngal/B4krauqMOwvJnwG +iUMJNDfRw6wFA6u9QAfsvR9dP0LkwUCmOWtoJ3Cu8r/nGHZv4g/TMGkxysDBIHs5YFBWww/ToV 8RhOOyKe2XwG5UxklslpQOkCyRAniMOo8f9OFEkWwrdiH9Dkw8BxxB4BhfOQwcaK9vteofCfpbx xBYtPKUNsdsIv3FOqgUQtzdYCSH2wkUbyUJ+5gZ80+EtLtZctVYkVaXWLmt1dULARYYNbjQuZhb 5GdhdUPGyxpf67MoJHlHWZLSRf+OQh3EF61I8RH7JkvL3STlaLdeWgQkMfFqJDprDd8CN+kXfy+ Tr01xP+48fGIByXFP+UXhVxYXWHIXWy9mgSyo4zHrtUS9jiSvMp6b0rS8pyz27ycsEwF0GqA X-Proofpoint-Virus-Version: vendor=baseguard 
From: Rob Clark

Bump version to signal to userspace that VM_BIND is supported.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 7881afa3a75a..9b1f1c1a41d4 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -41,9 +41,10 @@
  * - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT
  * - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST)
  * - 1.12.0 - Add MSM_INFO_SET_METADATA and MSM_INFO_GET_METADATA
+ * - 1.13.0 - Add VM_BIND
  */
 #define MSM_VERSION_MAJOR	1
-#define MSM_VERSION_MINOR	12
+#define MSM_VERSION_MINOR	13
 #define MSM_VERSION_PATCHLEVEL	0
 
 bool dumpstate;
-- 
2.50.0
From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v9 41/42] drm/msm: Defer VMA unmap for fb unpins
Date: Sun, 29 Jun 2025 13:13:24 -0700
Message-ID: <20250629201530.25775-42-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
With the conversion to drm_gpuvm, we lost the lazy VMA cleanup, which means that fb cleanup/unpin when pageflipping to new scanout buffers immediately unmaps the scanout buffer. This is costly (with tlbinv, it can take 4-6ms for a 1080p scanout buffer, and more at higher resolutions)!

To avoid this, introduce a vma_ref, which is incremented whenever userspace has a GEM handle or dma-buf fd. When unpinning, if the vm is the kms->vm we defer tearing down the VMA until the vma_ref drops to zero. If the buffer is still part of a flip-chain then userspace will be holding some sort of reference to the BO, either via a GEM handle and/or dma-buf fd. So this avoids unmapping the VMA when there is a strong possibility that it will be needed again.
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_drv.c       |  1 +
 drivers/gpu/drm/msm/msm_drv.h       |  1 +
 drivers/gpu/drm/msm/msm_fb.c        |  5 ++-
 drivers/gpu/drm/msm/msm_gem.c       | 60 ++++++++++++++++++-----------
 drivers/gpu/drm/msm/msm_gem.h       | 28 ++++++++++++++
 drivers/gpu/drm/msm/msm_gem_prime.c | 54 +++++++++++++++++++++++++-
 6 files changed, 123 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 9b1f1c1a41d4..0597ff6da317 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -830,6 +830,7 @@ static const struct drm_driver msm_driver = {
 	.postclose = msm_postclose,
 	.dumb_create = msm_gem_dumb_create,
 	.dumb_map_offset = msm_gem_dumb_map_offset,
+	.gem_prime_import = msm_gem_prime_import,
 	.gem_prime_import_sg_table = msm_gem_prime_import_sg_table,
 #ifdef CONFIG_DEBUG_FS
 	.debugfs_init = msm_debugfs_init,
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 200c3135bbf9..2b49c4b800ee 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -269,6 +269,7 @@ void msm_gem_shrinker_cleanup(struct drm_device *dev);
 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
+struct drm_gem_object *msm_gem_prime_import(struct drm_device *dev, struct dma_buf *buf);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg);
 struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags);
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 8ae2f326ec54..bc7c2bb8f01e 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -89,6 +89,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 		return 0;
 
 	for (i = 0; i < n; i++) {
+		msm_gem_vma_get(fb->obj[i]);
 		ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]);
 		drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n",
 			      fb->base.id, i, msm_fb->iova[i], ret);
@@ -114,8 +115,10 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 
 	memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 
-	for (i = 0; i < n; i++)
+	for (i = 0; i < n; i++) {
 		msm_gem_unpin_iova(fb->obj[i], vm);
+		msm_gem_vma_put(fb->obj[i]);
+	}
 }
 
 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 3e87d27dfcb6..33d3354c6102 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -19,6 +19,7 @@
 #include "msm_drv.h"
 #include "msm_gem.h"
 #include "msm_gpu.h"
+#include "msm_kms.h"
 
 static void update_device_mem(struct msm_drm_private *priv, ssize_t size)
 {
@@ -39,6 +40,7 @@ static void update_ctx_mem(struct drm_file *file, ssize_t size)
 
 static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 {
+	msm_gem_vma_get(obj);
 	update_ctx_mem(file, obj->size);
 	return 0;
 }
@@ -46,33 +48,13 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm,
 			    bool close, const char *reason);
 
-static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
-{
-	msm_gem_assert_locked(obj);
-	drm_gpuvm_resv_assert_held(vm);
-
-	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(vm, obj);
-	if (vm_bo) {
-		struct drm_gpuva *vma;
-
-		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
-			if (vma->vm != vm)
-				continue;
-			msm_gem_vma_unmap(vma, "detach");
-			msm_gem_vma_close(vma);
-			break;
-		}
-
-		drm_gpuvm_bo_put(vm_bo);
-	}
-}
-
 static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 {
 	struct msm_context *ctx = file->driver_priv;
 	struct drm_exec exec;
 
 	update_ctx_mem(file, -obj->size);
+	msm_gem_vma_put(obj);
 
 	/*
 	 * If VM isn't created yet, nothing to cleanup.  And in fact calling
@@ -99,7 +81,31 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 
 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
 	put_iova_spaces(obj, ctx->vm, true, "close");
-	detach_vm(obj, ctx->vm);
+	drm_exec_fini(&exec); /* drop locks */
+}
+
+/*
+ * Get/put for kms->vm VMA
+ */
+
+void msm_gem_vma_get(struct drm_gem_object *obj)
+{
+	atomic_inc(&to_msm_bo(obj)->vma_ref);
+}
+
+void msm_gem_vma_put(struct drm_gem_object *obj)
+{
+	struct msm_drm_private *priv = obj->dev->dev_private;
+	struct drm_exec exec;
+
+	if (atomic_dec_return(&to_msm_bo(obj)->vma_ref))
+		return;
+
+	if (!priv->kms)
+		return;
+
+	msm_gem_lock_vm_and_obj(&exec, obj, priv->kms->vm);
+	put_iova_spaces(obj, priv->kms->vm, true, "vma_put");
 	drm_exec_fini(&exec); /* drop locks */
 }
 
@@ -656,6 +662,13 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 	return ret;
 }
 
+static bool is_kms_vm(struct drm_gpuvm *vm)
+{
+	struct msm_drm_private *priv = vm->drm->dev_private;
+
+	return priv->kms && (priv->kms->vm == vm);
+}
+
 /*
  * Unpin a iova by updating the reference counts. The memory isn't actually
  * purged until something else (shrinker, mm_notifier, destroy, etc) decides
@@ -671,7 +684,8 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 	if (vma) {
 		msm_gem_unpin_locked(obj);
 	}
-	detach_vm(obj, vm);
+	if (!is_kms_vm(vm))
+		put_iova_spaces(obj, vm, true, "close");
 	drm_exec_fini(&exec); /* drop locks */
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 1ce97f8a30bb..5c0c59e4835c 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -211,9 +211,37 @@ struct msm_gem_object {
 	 * Protected by LRU lock.
 	 */
 	int pin_count;
+
+	/**
+	 * @vma_ref: Reference count of VMA users.
+	 *
+	 * With the vm_bo/vma holding a reference to the GEM object, we'd
+	 * otherwise have to actively tear down a VMA when, for example,
+	 * a buffer is unpinned for scanout, vs. the pre-drm_gpuvm approach
+	 * where a VMA did not hold a reference to the BO, but instead was
+	 * implicitly torn down when the BO was freed.
+	 *
+	 * To regain the lazy VMA teardown, we use the @vma_ref.  It is
+	 * incremented for any of the following:
+	 *
+	 * 1) the BO is exported as a dma_buf
+	 * 2) the BO has open userspace handle
+	 *
+	 * All of those conditions will hold an reference to the BO,
+	 * preventing it from being freed.  So lazily keeping around the
+	 * VMA will not prevent the BO from being freed.  (Or rather, the
+	 * reference loop is harmless in this case.)
+	 *
+	 * When the @vma_ref drops to zero, then kms->vm VMA will be
+	 * torn down.
+	 */
+	atomic_t vma_ref;
 };
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
+void msm_gem_vma_get(struct drm_gem_object *obj);
+void msm_gem_vma_put(struct drm_gem_object *obj);
+
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
 int msm_gem_prot(struct drm_gem_object *obj);
 int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index 4d93f2daeeaa..c0a33ac839cb 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -6,6 +6,7 @@
 
 #include <linux/dma-buf.h>
 
+#include <…>
 #include <drm/drm_prime.h>
 
 #include "msm_drv.h"
@@ -42,19 +43,68 @@ void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 	msm_gem_put_vaddr_locked(obj);
 }
 
+static void msm_gem_dmabuf_release(struct dma_buf *dma_buf)
+{
+	struct drm_gem_object *obj = dma_buf->priv;
+
+	msm_gem_vma_put(obj);
+	drm_gem_dmabuf_release(dma_buf);
+}
+
+static const struct dma_buf_ops msm_gem_prime_dmabuf_ops = {
+	.attach = drm_gem_map_attach,
+	.detach = drm_gem_map_detach,
+	.map_dma_buf = drm_gem_map_dma_buf,
+	.unmap_dma_buf = drm_gem_unmap_dma_buf,
+	.release = msm_gem_dmabuf_release,
+	.mmap = drm_gem_dmabuf_mmap,
+	.vmap = drm_gem_dmabuf_vmap,
+	.vunmap = drm_gem_dmabuf_vunmap,
+};
+
+struct drm_gem_object *msm_gem_prime_import(struct drm_device *dev,
+					    struct dma_buf *buf)
+{
+	if (buf->ops == &msm_gem_prime_dmabuf_ops) {
+		struct drm_gem_object *obj = buf->priv;
+		if (obj->dev == dev) {
+			/*
+			 * Importing dmabuf exported from our own gem increases
+			 * refcount on gem itself instead of f_count of dmabuf.
+			 */
+			drm_gem_object_get(obj);
+			return obj;
+		}
+	}
+
+	return drm_gem_prime_import(dev, buf);
+}
+
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg)
 {
 	return msm_gem_import(dev, attach->dmabuf, sg);
 }
 
-
 struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags)
 {
 	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
 		return ERR_PTR(-EPERM);
 
-	return drm_gem_prime_export(obj, flags);
+	msm_gem_vma_get(obj);
+
+	struct drm_device *dev = obj->dev;
+	struct dma_buf_export_info exp_info = {
+		.exp_name = KBUILD_MODNAME, /* white lie for debug */
+		.owner = dev->driver->fops->owner,
+		.ops = &msm_gem_prime_dmabuf_ops,
+		.size = obj->size,
+		.flags = flags,
+		.priv = obj,
+		.resv = obj->resv,
+	};
+
+	return drm_gem_dmabuf_export(dev, &exp_info);
 }
 
 int msm_gem_prime_pin(struct drm_gem_object *obj)
-- 
2.50.0
From nobody Wed Oct 8 10:02:28 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v9 42/42] drm/msm: Add VM_BIND throttling
Date: Sun, 29 Jun 2025 13:13:25 -0700
Message-ID: <20250629201530.25775-43-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
A large number of (unsorted or separate) small (<2MB) mappings can result in a lot of, probably unnecessary, prealloc pages. E.g. a single 4k page-size mapping will pre-allocate 3 pages (for levels 2-4) of pagetable, which can chew up a large amount of unneeded memory. So add a mechanism to put an upper bound on the number of pre-alloc pages in flight.

Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.h     | 20 ++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_vma.c | 28 ++++++++++++++++++++++++++--
 2 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 5c0c59e4835c..88239da1cd72 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -75,6 +75,26 @@ struct msm_gem_vm {
 	 */
 	struct drm_gpu_scheduler sched;
 
+	/**
+	 * @prealloc_throttle: Used to throttle VM_BIND ops if too much pre-
+	 * allocated memory is in flight.
+	 *
+	 * Because we have to pre-allocate pgtable pages for the worst case
+	 * (ie. new mappings do not share any PTEs with existing mappings)
+	 * we could end up consuming a lot of resources transiently.  The
+	 * prealloc_throttle puts an upper bound on that.
+	 */
+	struct {
+		/** @wait: Notified when preallocated resources are released */
+		wait_queue_head_t wait;
+
+		/**
+		 * @in_flight: The # of preallocated pgtable pages in-flight
+		 * for queued VM_BIND jobs.
+		 */
+		atomic_t in_flight;
+	} prealloc_throttle;
+
 	/**
 	 * @mm: Memory management for kernel managed VA allocations
 	 *
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 63f4d078e1a2..49f460e4302e 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -705,6 +705,8 @@ msm_vma_job_free(struct drm_sched_job *_job)
 
 	vm->mmu->funcs->prealloc_cleanup(vm->mmu, &job->prealloc);
 
+	atomic_sub(job->prealloc.count, &vm->prealloc_throttle.in_flight);
+
 	drm_sched_job_cleanup(_job);
 
 	job_foreach_bo (obj, job)
@@ -721,6 +723,8 @@ msm_vma_job_free(struct drm_sched_job *_job)
 		kfree(op);
 	}
 
+	wake_up(&vm->prealloc_throttle.wait);
+
 	kfree(job);
 }
 
@@ -783,6 +787,8 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		ret = drm_sched_init(&vm->sched, &args);
 		if (ret)
 			goto err_free_dummy;
+
+		init_waitqueue_head(&vm->prealloc_throttle.wait);
 	}
 
 	drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
@@ -1089,10 +1095,12 @@ ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next)
  * them as a single mapping.  Otherwise the prealloc_count() will not realize
  * they can share pagetable pages and vastly overcount.
  */
-static void
+static int
 vm_bind_prealloc_count(struct msm_vm_bind_job *job)
 {
 	struct msm_vm_bind_op *first = NULL, *last = NULL;
+	struct msm_gem_vm *vm = to_msm_vm(job->vm);
+	int ret;
 
 	for (int i = 0; i < job->nr_ops; i++) {
 		struct msm_vm_bind_op *op = &job->ops[i];
@@ -1121,6 +1129,20 @@ vm_bind_prealloc_count(struct msm_vm_bind_job *job)
 
 	/* Flush the remaining range: */
 	prealloc_count(job, first, last);
+
+	/*
+	 * Now that we know the needed amount to pre-alloc, throttle on pending
+	 * VM_BIND jobs if we already have too much pre-alloc memory in flight
+	 */
+	ret = wait_event_interruptible(
+			vm->prealloc_throttle.wait,
+			atomic_read(&vm->prealloc_throttle.in_flight) <= 1024);
+	if (ret)
+		return ret;
+
+	atomic_add(job->prealloc.count, &vm->prealloc_throttle.in_flight);
+
+	return 0;
 }
 
 /*
@@ -1411,7 +1433,9 @@ msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *file)
 	if (ret)
 		goto out_unlock;
 
-	vm_bind_prealloc_count(job);
+	ret = vm_bind_prealloc_count(job);
+	if (ret)
+		goto out_unlock;
 
 	struct drm_exec exec;
 	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
-- 
2.50.0