From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Rob Clark, Danilo Krummrich,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
    Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 01/42] drm/gpuvm: Fix doc comments
Date: Wed, 25 Jun 2025 11:46:54 -0700
Message-ID: <20250625184918.124608-2-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
Correctly summarize drm_gpuvm_sm_map/unmap, and fix the parameter order
and names.  Just something I noticed in passing.
v2: Don't rename the arg names in prototypes to match function
    declarations [Danilo]

Signed-off-by: Rob Clark
Acked-by: Danilo Krummrich
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/drm_gpuvm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f9eb56f24bef..0ca717130541 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2299,13 +2299,13 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 }
 
 /**
- * drm_gpuvm_sm_map() - creates the &drm_gpuva_op split/merge steps
+ * drm_gpuvm_sm_map() - calls the &drm_gpuva_op split/merge steps
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
+ * @priv: pointer to a driver private data structure
  * @req_addr: the start address of the new mapping
  * @req_range: the range of the new mapping
  * @req_obj: the &drm_gem_object to map
  * @req_offset: the offset within the &drm_gem_object
- * @priv: pointer to a driver private data structure
  *
  * This function iterates the given range of the GPU VA space. It utilizes the
  * &drm_gpuvm_ops to call back into the driver providing the split and merge
@@ -2349,7 +2349,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
 
 /**
- * drm_gpuvm_sm_unmap() - creates the &drm_gpuva_ops to split on unmap
+ * drm_gpuvm_sm_unmap() - calls the &drm_gpuva_ops to split on unmap
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
  * @priv: pointer to a driver private data structure
  * @req_addr: the start address of the range to unmap
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Rob Clark, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 02/42] drm/gpuvm: Add locking helpers
Date: Wed, 25 Jun 2025 11:46:55 -0700
Message-ID: <20250625184918.124608-3-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

For UNMAP/REMAP steps we may need to lock objects that are not
explicitly listed in the VM_BIND ioctl in order to tear down unmapped
VAs.  These helpers handle locking/preparing the needed objects.
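Condensed, the driver-side flow these helpers enable might look roughly
like the following (a sketch only, based on the kerneldoc added in this
patch; the per-op iteration and the op field names are hypothetical,
driver-specific placeholders):

```
	struct drm_exec exec;
	int ret;

	/* IGNORE_DUPLICATES is required, since overlapping VM_BIND ops
	 * can visit the same GEM object more than once:
	 */
	drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
	drm_exec_until_all_locked(&exec) {
		for_each_vm_bind_operation(op) {	/* hypothetical */
			ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
							 op->addr, op->range,
							 op->obj, op->obj_offset);
			drm_exec_retry_on_contention(&exec);
			if (ret)
				break;
		}
	}
	/* all GEM objects the map touches are now locked/prepared */
```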
Note that these functions do not strictly require the VM changes to be
applied before the next drm_gpuvm_sm_map_lock()/_unmap_lock() call.  In
the case that VM changes from an earlier drm_gpuvm_sm_map()/_unmap()
call result in a differing sequence of steps when the VM changes are
actually applied, it will be the same set of GEM objects involved, so
the locking is still correct.

v2: Rename to drm_gpuvm_sm_*_exec_locked() [Danilo]
v3: Expand comments to show expected usage, and explain how the usage
    is safe in the case of overlapping driver VM_BIND ops.

Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/drm_gpuvm.c | 126 ++++++++++++++++++++++++++++++++++++
 include/drm/drm_gpuvm.h    |   8 +++
 2 files changed, 134 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 0ca717130541..a811471b888e 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2390,6 +2390,132 @@ drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap);
 
+static int
+drm_gpuva_sm_step_lock(struct drm_gpuva_op *op, void *priv)
+{
+	struct drm_exec *exec = priv;
+
+	switch (op->op) {
+	case DRM_GPUVA_OP_REMAP:
+		if (op->remap.unmap->va->gem.obj)
+			return drm_exec_lock_obj(exec, op->remap.unmap->va->gem.obj);
+		return 0;
+	case DRM_GPUVA_OP_UNMAP:
+		if (op->unmap.va->gem.obj)
+			return drm_exec_lock_obj(exec, op->unmap.va->gem.obj);
+		return 0;
+	default:
+		return 0;
+	}
+}
+
+static const struct drm_gpuvm_ops lock_ops = {
+	.sm_step_map = drm_gpuva_sm_step_lock,
+	.sm_step_remap = drm_gpuva_sm_step_lock,
+	.sm_step_unmap = drm_gpuva_sm_step_lock,
+};
+
+/**
+ * drm_gpuvm_sm_map_exec_lock() - locks the objects touched by a drm_gpuvm_sm_map()
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
+ * @exec: the &drm_exec locking context
+ * @num_fences: for newly mapped objects, the # of fences to reserve
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ * @req_obj: the &drm_gem_object to map
+ * @req_offset: the offset within the &drm_gem_object
+ *
+ * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
+ * remapped, and locks+prepares (drm_exec_prepare_object()) objects that
+ * will be newly mapped.
+ *
+ * The expected usage is:
+ *
+ *	vm_bind {
+ *		struct drm_exec exec;
+ *
+ *		// IGNORE_DUPLICATES is required, INTERRUPTIBLE_WAIT is recommended:
+ *		drm_exec_init(&exec, IGNORE_DUPLICATES | INTERRUPTIBLE_WAIT, 0);
+ *
+ *		drm_exec_until_all_locked (&exec) {
+ *			for_each_vm_bind_operation {
+ *				switch (op->op) {
+ *				case DRIVER_OP_UNMAP:
+ *					ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->addr, op->range);
+ *					break;
+ *				case DRIVER_OP_MAP:
+ *					ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
+ *									 op->addr, op->range,
+ *									 obj, op->obj_offset);
+ *					break;
+ *				}
+ *
+ *				drm_exec_retry_on_contention(&exec);
+ *				if (ret)
+ *					return ret;
+ *			}
+ *		}
+ *	}
+ *
+ * This enables all locking to be performed before the driver begins modifying
+ * the VM.  This is safe to do in the case of overlapping DRIVER_VM_BIND_OPs,
+ * where an earlier op can alter the sequence of steps generated for a later
+ * op, because the later altered step will involve the same GEM object(s)
+ * already seen in the earlier locking step.  For example:
+ *
+ * 1) An earlier driver DRIVER_OP_UNMAP op removes the need for a
+ *    DRM_GPUVA_OP_REMAP/UNMAP step.  This is safe because we've already
+ *    locked the GEM object in the earlier DRIVER_OP_UNMAP op.
+ *
+ * 2) An earlier DRIVER_OP_MAP op overlaps with a later DRIVER_OP_MAP/UNMAP
+ *    op, introducing a DRM_GPUVA_OP_REMAP/UNMAP that wouldn't have been
+ *    required without the earlier DRIVER_OP_MAP.  This is safe because we've
+ *    already locked the GEM object in the earlier DRIVER_OP_MAP step.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
+			   struct drm_exec *exec, unsigned int num_fences,
+			   u64 req_addr, u64 req_range,
+			   struct drm_gem_object *req_obj, u64 req_offset)
+{
+	if (req_obj) {
+		int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
+		if (ret)
+			return ret;
+	}
+
+	return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
+				  req_addr, req_range,
+				  req_obj, req_offset);
+
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_exec_lock);
+
+/**
+ * drm_gpuvm_sm_unmap_exec_lock() - locks the objects touched by drm_gpuvm_sm_unmap()
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
+ * @exec: the &drm_exec locking context
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ *
+ * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
+ * remapped by drm_gpuvm_sm_unmap().
+ *
+ * See drm_gpuvm_sm_map_exec_lock() for expected usage.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
+			     u64 req_addr, u64 req_range)
+{
+	return __drm_gpuvm_sm_unmap(gpuvm, &lock_ops, exec,
+				    req_addr, req_range);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap_exec_lock);
+
 static struct drm_gpuva_op *
 gpuva_op_alloc(struct drm_gpuvm *gpuvm)
 {
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 2a9629377633..274532facfd6 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1211,6 +1211,14 @@ int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
 int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
 		       u64 addr, u64 range);
 
+int drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
+			       struct drm_exec *exec, unsigned int num_fences,
+			       u64 req_addr, u64 req_range,
+			       struct drm_gem_object *obj, u64 offset);
+
+int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
+				 u64 req_addr, u64 req_range);
+
 void drm_gpuva_map(struct drm_gpuvm *gpuvm,
 		   struct drm_gpuva *va,
 		   struct drm_gpuva_op_map *op);
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Rob Clark, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
    Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 03/42] drm/gem: Add ww_acquire_ctx support to drm_gem_lru_scan()
Date: Wed, 25 Jun 2025 11:46:56 -0700
Message-ID: <20250625184918.124608-4-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

If the callback is going to have to attempt to grab more locks, it is
useful to have a ww_acquire_ctx to avoid locking order problems.

Why not use the drm_exec helper instead?  Mainly because (a) where
ww_acquire_init() is called is awkward, and (b) we don't really need to
retry after backoff, we can just move on to the next object.
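With the ticket plumbed through, a shrink callback that has to take
additional reservation locks can pass it along; a rough sketch (not
taken from this patch -- example_evict and the helper it calls are
hypothetical names):

```
static bool
example_evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
{
	/* obj->resv is already held, taken via ww_mutex_trylock(..,
	 * ticket) in drm_gem_lru_scan(); the ticket lets any further
	 * resv locks taken here participate in the same acquire
	 * context, avoiding lock-ordering problems without a full
	 * drm_exec retry loop.
	 */
	return driver_evict_locked(obj, ticket); /* hypothetical */
}
```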
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/drm_gem.c              | 14 +++++++++++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 24 +++++++++++++-----------
 include/drm/drm_gem.h                  | 10 ++++++----
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index c6240bab3fa5..c8f983571c70 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1460,12 +1460,14 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
  * @nr_to_scan: The number of pages to try to reclaim
  * @remaining: The number of pages left to reclaim, should be initialized by caller
  * @shrink: Callback to try to shrink/reclaim the object.
+ * @ticket: Optional ww_acquire_ctx context to use for locking
  */
 unsigned long
 drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 unsigned int nr_to_scan,
 		 unsigned long *remaining,
-		 bool (*shrink)(struct drm_gem_object *obj))
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket)
 {
 	struct drm_gem_lru still_in_lru;
 	struct drm_gem_object *obj;
@@ -1498,17 +1500,20 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 */
 		mutex_unlock(lru->lock);

+		if (ticket)
+			ww_acquire_init(ticket, &reservation_ww_class);
+
 		/*
 		 * Note that this still needs to be trylock, since we can
 		 * hit shrinker in response to trying to get backing pages
 		 * for this obj (ie. while it's lock is already held)
 		 */
-		if (!dma_resv_trylock(obj->resv)) {
+		if (!ww_mutex_trylock(&obj->resv->lock, ticket)) {
 			*remaining += obj->size >> PAGE_SHIFT;
 			goto tail;
 		}

-		if (shrink(obj)) {
+		if (shrink(obj, ticket)) {
 			freed += obj->size >> PAGE_SHIFT;

 			/*
@@ -1522,6 +1527,9 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,

 		dma_resv_unlock(obj->resv);

+		if (ticket)
+			ww_acquire_fini(ticket);
+
 tail:
 		drm_gem_object_put(obj);
 		mutex_lock(lru->lock);
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 07ca4ddfe4e3..de185fc34084 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -44,7 +44,7 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 }

 static bool
-purge(struct drm_gem_object *obj)
+purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_purgeable(to_msm_bo(obj)))
 		return false;
@@ -58,7 +58,7 @@ purge(struct drm_gem_object *obj)
 }

 static bool
-evict(struct drm_gem_object *obj)
+evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (is_unevictable(to_msm_bo(obj)))
 		return false;
@@ -79,21 +79,21 @@ wait_for_idle(struct drm_gem_object *obj)
 }

 static bool
-active_purge(struct drm_gem_object *obj)
+active_purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;

-	return purge(obj);
+	return purge(obj, ticket);
 }

 static bool
-active_evict(struct drm_gem_object *obj)
+active_evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;

-	return evict(obj);
+	return evict(obj, ticket);
 }

 static unsigned long
@@ -102,7 +102,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	struct msm_drm_private *priv = shrinker->private_data;
 	struct {
 		struct drm_gem_lru *lru;
-		bool (*shrink)(struct drm_gem_object *obj);
+		bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket);
 		bool cond;
 		unsigned long freed;
 		unsigned long remaining;
@@ -122,8 +122,9 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 			continue;
 		stages[i].freed =
 			drm_gem_lru_scan(stages[i].lru, nr,
-					&stages[i].remaining,
-					stages[i].shrink);
+					 &stages[i].remaining,
+					 stages[i].shrink,
+					 NULL);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;
@@ -164,7 +165,7 @@ msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan)
 static const int vmap_shrink_limit = 15;

 static bool
-vmap_shrink(struct drm_gem_object *obj)
+vmap_shrink(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_vunmapable(to_msm_bo(obj)))
 		return false;
@@ -192,7 +193,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 		unmapped += drm_gem_lru_scan(lrus[idx],
 					     vmap_shrink_limit - unmapped,
 					     &remaining,
-					     vmap_shrink);
+					     vmap_shrink,
+					     NULL);
 	}

 	*(unsigned long *)ptr += unmapped;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index bcd54020d6ba..b611a9482abf 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -556,10 +556,12 @@ void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
 void drm_gem_lru_remove(struct drm_gem_object *obj);
 void drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj);
 void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
-unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
-			       unsigned int nr_to_scan,
-			       unsigned long *remaining,
-			       bool (*shrink)(struct drm_gem_object *obj));
+unsigned long
+drm_gem_lru_scan(struct drm_gem_lru *lru,
+		 unsigned int nr_to_scan,
+		 unsigned long *remaining,
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket);

 int drm_gem_evict(struct drm_gem_object *obj);

-- 
2.49.0

From nobody Wed Oct 8
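The core of the patch above is a trylock-and-skip scan: an object whose lock is already held is counted into `*remaining` and skipped rather than waited on, because the shrinker can be entered while that same lock is held. A minimal userspace sketch of the pattern, using `pthread_mutex_trylock` as a stand-in for `ww_mutex_trylock()` and a made-up `struct obj` in place of `struct drm_gem_object`:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct drm_gem_object. */
struct obj {
	pthread_mutex_t lock;     /* stands in for obj->resv->lock */
	unsigned long npages;
};

/*
 * Sketch of the drm_gem_lru_scan() loop body: never block on a
 * contended object; account it as "remaining" and move on.
 */
static unsigned long
scan(struct obj **objs, size_t n,
     bool (*shrink)(struct obj *obj),
     unsigned long *remaining)
{
	unsigned long freed = 0;

	for (size_t i = 0; i < n; i++) {
		struct obj *obj = objs[i];

		/* trylock: skip objects whose lock is already held */
		if (pthread_mutex_trylock(&obj->lock) != 0) {
			*remaining += obj->npages;
			continue;
		}
		if (shrink(obj))
			freed += obj->npages;
		else
			*remaining += obj->npages;
		pthread_mutex_unlock(&obj->lock);
	}
	return freed;
}

/* Trivial shrink callback that always succeeds. */
static bool shrink_all(struct obj *obj) { (void)obj; return true; }
```

The design point carried over from the patch: blocking here could deadlock (the shrinker may run in response to an allocation made while the object's own lock is held), so contention is treated as "try again on a later scan" rather than an error.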
17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Rob Clark, Dmitry Baryshkov, Sean Paul, Konrad Dybcio, Abhinav Kumar, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 04/42] drm/msm: Rename msm_file_private -> msm_context
Date: Wed, 25 Jun 2025 11:46:57 -0700
Message-ID: <20250625184918.124608-5-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

This is a more descriptive name.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  6 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.h |  4 +--
 drivers/gpu/drm/msm/msm_drv.c           | 14 ++++-----
 drivers/gpu/drm/msm/msm_gem.c           |  2 +-
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.c           |  4 +--
 drivers/gpu/drm/msm/msm_gpu.h           | 39 ++++++++++++-------------
 drivers/gpu/drm/msm/msm_submitqueue.c   | 27 +++++++++--------
 9 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index fd64af6d0440..620a26638535 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -111,7 +111,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 		struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index d04657b77857..93fe26009511 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -356,7 +356,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }

-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -444,7 +444,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	}
 }

-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len)
 {
 	struct drm_device *drm = gpu->dev;
@@ -490,7 +490,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	case MSM_PARAM_SYSPROF:
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
-		return msm_file_private_set_sysprof(ctx, gpu, value);
+		return msm_context_set_sysprof(ctx, gpu, value);
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 2366a57b280f..fed9516da365 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -603,9 +603,9 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */
 #define ADRENO_VM_START 0x100000000ULL
 u64 adreno_private_address_space_size(struct msm_gpu *gpu);
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len);
 const struct firmware *adreno_request_fw(struct adreno_gpu *adreno_gpu,
 		const char *fwname);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c3588dc9e537..29ca24548c67 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -333,7 +333,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;

 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -363,23 +363,23 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 	return context_init(dev, file);
 }

-static void context_close(struct msm_file_private *ctx)
+static void context_close(struct msm_context *ctx)
 {
 	msm_submitqueue_close(ctx);
-	msm_file_private_put(ctx);
+	msm_context_put(ctx);
 }

 static void msm_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	/*
 	 * It is not possible to set sysprof param to non-zero if gpu
 	 * is not initialized:
 	 */
 	if (priv->gpu)
-		msm_file_private_set_sysprof(ctx, priv->gpu, 0);
+		msm_context_set_sysprof(ctx, priv->gpu, 0);

 	context_close(ctx);
 }
@@ -511,7 +511,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 		uint64_t *iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	if (!priv->gpu)
 		return -EINVAL;
@@ -531,7 +531,7 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 		uint64_t iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	if (!priv->gpu)
 		return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index d2f38e1df510..fdeb6cf7eeb5 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -48,7 +48,7 @@ static void update_device_mem(struct msm_drm_private *priv, ssize_t size)

 static void update_ctx_mem(struct drm_file *file, ssize_t size)
 {
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	uint64_t ctx_mem = atomic64_add_return(size, &ctx->ctx_mem);

 	rcu_read_lock(); /* Locks file->pid! */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index d4f71bb54e84..3aabf7f1da6d 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -651,7 +651,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct drm_msm_gem_submit *args = data;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	struct msm_gem_submit *submit = NULL;
 	struct msm_gpu *gpu = priv->gpu;
 	struct msm_gpu_submitqueue *queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index c380d9d9f5af..d786fcfad62f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -148,7 +148,7 @@ int msm_gpu_pm_suspend(struct msm_gpu *gpu)
 	return 0;
 }

-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p)
 {
 	drm_printf(p, "drm-engine-gpu:\t%llu ns\n", ctx->elapsed_ns);
@@ -339,7 +339,7 @@ static void retire_submits(struct msm_gpu *gpu);

 static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
 {
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct task_struct *task;

 	WARN_ON(!mutex_is_locked(&submit->gpu->lock));
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index e25009150579..957d6fb3469d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -22,7 +22,7 @@
 struct msm_gem_submit;
 struct msm_gpu_perfcntr;
 struct msm_gpu_state;
-struct msm_file_private;
+struct msm_context;

 struct msm_gpu_config {
 	const char *ioname;
@@ -44,9 +44,9 @@ struct msm_gpu_config {
  *    + z180_gpu
  */
 struct msm_gpu_funcs {
-	int (*get_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*get_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t *value, uint32_t *len);
-	int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*set_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t value, uint32_t len);
 	int (*hw_init)(struct msm_gpu *gpu);

@@ -347,7 +347,7 @@ struct msm_gpu_perfcntr {
 #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH)

 /**
- * struct msm_file_private - per-drm_file context
+ * struct msm_context - per-drm_file context
  *
  * @queuelock: synchronizes access to submitqueues list
  * @submitqueues: list of &msm_gpu_submitqueue created by userspace
@@ -357,7 +357,7 @@ struct msm_gpu_perfcntr {
  * @ref: reference count
  * @seqno: unique per process seqno
  */
-struct msm_file_private {
+struct msm_context {
 	rwlock_t queuelock;
 	struct list_head submitqueues;
 	int queueid;
@@ -512,7 +512,7 @@ struct msm_gpu_submitqueue {
 	u32 ring_nr;
 	int faults;
 	uint32_t last_fence;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;
 	struct list_head node;
 	struct idr fence_idr;
 	struct spinlock idr_lock;
@@ -608,33 +608,32 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val)
 int msm_gpu_pm_suspend(struct msm_gpu *gpu);
 int msm_gpu_pm_resume(struct msm_gpu *gpu);

-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p);

-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx);
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id);
 int msm_submitqueue_create(struct drm_device *drm,
-		struct msm_file_private *ctx,
+		struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id);
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args);
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
-void msm_submitqueue_close(struct msm_file_private *ctx);
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id);
+void msm_submitqueue_close(struct msm_context *ctx);

 void msm_submitqueue_destroy(struct kref *kref);

-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-		struct msm_gpu *gpu, int sysprof);
-void __msm_file_private_destroy(struct kref *kref);
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof);
+void __msm_context_destroy(struct kref *kref);

-static inline void msm_file_private_put(struct msm_file_private *ctx)
+static inline void msm_context_put(struct msm_context *ctx)
 {
-	kref_put(&ctx->ref, __msm_file_private_destroy);
+	kref_put(&ctx->ref, __msm_context_destroy);
 }

-static inline struct msm_file_private *msm_file_private_get(
-	struct msm_file_private *ctx)
+static inline struct msm_context *msm_context_get(
+	struct msm_context *ctx)
 {
 	kref_get(&ctx->ref);
 	return ctx;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 7fed1de63b5d..1acc0fe36353 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -7,8 +7,7 @@

 #include "msm_gpu.h"

-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-		struct msm_gpu *gpu, int sysprof)
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof)
 {
 	/*
 	 * Since pm_runtime and sysprof_active are both refcounts, we
@@ -46,10 +45,10 @@ int msm_file_private_set_sysprof(struct msm_file_private *ctx,
 	return 0;
 }

-void __msm_file_private_destroy(struct kref *kref)
+void __msm_context_destroy(struct kref *kref)
 {
-	struct msm_file_private *ctx = container_of(kref,
-		struct msm_file_private, ref);
+	struct msm_context *ctx = container_of(kref,
+		struct msm_context, ref);
 	int i;

 	for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
@@ -73,12 +72,12 @@ void msm_submitqueue_destroy(struct kref *kref)

 	idr_destroy(&queue->fence_idr);

-	msm_file_private_put(queue->ctx);
+	msm_context_put(queue->ctx);

 	kfree(queue);
 }

-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id)
 {
 	struct msm_gpu_submitqueue *entry;
@@ -101,7 +100,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
 	return NULL;
 }

-void msm_submitqueue_close(struct msm_file_private *ctx)
+void msm_submitqueue_close(struct msm_context *ctx)
 {
 	struct msm_gpu_submitqueue *entry, *tmp;

@@ -119,7 +118,7 @@ void msm_submitqueue_close(struct msm_file_private *ctx)
 }

 static struct drm_sched_entity *
-get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
+get_sched_entity(struct msm_context *ctx, struct msm_ringbuffer *ring,
 		 unsigned ring_nr, enum drm_sched_priority sched_prio)
 {
 	static DEFINE_MUTEX(entity_lock);
@@ -155,7 +154,7 @@ get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
 	return ctx->entities[idx];
 }

-int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id)
 {
 	struct msm_drm_private *priv = drm->dev_private;
@@ -200,7 +199,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,

 	write_lock(&ctx->queuelock);

-	queue->ctx = msm_file_private_get(ctx);
+	queue->ctx = msm_context_get(ctx);
 	queue->id = ctx->queueid++;

 	if (id)
@@ -221,7 +220,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
 * Create the default submit-queue (id==0), used for backwards compatibility
 * for userspace that pre-dates the introduction of submitqueues.
 */
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx)
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx)
 {
 	struct msm_drm_private *priv = drm->dev_private;
 	int default_prio, max_priority;
@@ -261,7 +260,7 @@ static int msm_submitqueue_query_faults(struct msm_gpu_submitqueue *queue,
 	return ret ? -EFAULT : 0;
 }

-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args)
 {
 	struct msm_gpu_submitqueue *queue;
@@ -282,7 +281,7 @@ int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
 	return ret;
 }

-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id)
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id)
 {
 	struct msm_gpu_submitqueue *entry;

-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
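The renamed `msm_context_get()`/`msm_context_put()` helpers above wrap the kernel's `kref` API: each submitqueue takes a reference on its context, and the destroy callback runs only when the last reference drops. A minimal userspace sketch of that lifetime pattern, where a plain counter stands in for `struct kref` and all names (`ctx_new`, `ctx_get`, `ctx_put`) are illustrative, not from the kernel:

```c
#include <stdlib.h>

/* Illustrative stand-in for struct msm_context. */
struct ctx {
	int ref;          /* stands in for struct kref */
	int *destroyed;   /* observation hook: set when the ctx is freed */
};

/* Mirrors msm_context_get(): take a reference, return the ctx. */
static struct ctx *ctx_get(struct ctx *c)
{
	c->ref++;
	return c;
}

/* Mirrors __msm_context_destroy(): release the object itself. */
static void ctx_destroy(struct ctx *c)
{
	*c->destroyed = 1;
	free(c);
}

/* Mirrors msm_context_put(): destroy only on the last reference. */
static void ctx_put(struct ctx *c)
{
	if (--c->ref == 0)
		ctx_destroy(c);
}

/* Allocate a context with one initial reference. */
static struct ctx *ctx_new(int *destroyed)
{
	struct ctx *c = calloc(1, sizeof(*c));
	c->ref = 1;
	c->destroyed = destroyed;
	return c;
}
```

This is why `msm_submitqueue_destroy()` in the patch calls `msm_context_put(queue->ctx)`: a queue can outlive the `drm_file` that created it, and the context must stay alive until the last queue is gone.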
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Rob Clark, Dmitry Baryshkov, Sean Paul, Konrad Dybcio, Abhinav Kumar, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 05/42] drm/msm: Improve msm_context comments
Date: Wed, 25 Jun 2025 11:46:58 -0700
Message-ID: <20250625184918.124608-6-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
From: Rob Clark

Just some tidying up.

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 957d6fb3469d..c699ce0c557b 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -348,25 +348,39 @@ struct msm_gpu_perfcntr {
 
 /**
  * struct msm_context - per-drm_file context
- *
- * @queuelock: synchronizes access to submitqueues list
- * @submitqueues: list of &msm_gpu_submitqueue created by userspace
- * @queueid: counter incremented each time a submitqueue is created,
- *    used to assign &msm_gpu_submitqueue.id
- * @aspace: the per-process GPU address-space
- * @ref: reference count
- * @seqno: unique per process seqno
  */
 struct msm_context {
+	/** @queuelock: synchronizes access to submitqueues list */
 	rwlock_t queuelock;
+
+	/** @submitqueues: list of &msm_gpu_submitqueue created by userspace */
 	struct list_head submitqueues;
+
+	/**
+	 * @queueid:
+	 *
+	 * Counter incremented each time a submitqueue is created, used to
+	 * assign &msm_gpu_submitqueue.id
+	 */
 	int queueid;
+
+	/** @aspace: the per-process GPU address-space */
 	struct msm_gem_address_space *aspace;
+
+	/** @ref: the reference count */
 	struct kref ref;
+
+	/**
+	 * @seqno:
+	 *
+	 * A unique per-process sequence number.  Used to detect context
+	 * switches without relying on keeping a potentially dangling
+	 * pointer to the previous context.
+	 */
 	int seqno;
 
 	/**
-	 * sysprof:
+	 * @sysprof:
 	 *
 	 * The value of MSM_PARAM_SYSPROF set by userspace.  This is
 	 * intended to be used by system profiling tools like Mesa's
@@ -384,21 +398,21 @@ struct msm_context {
 	int sysprof;
 
 	/**
-	 * comm: Overridden task comm, see MSM_PARAM_COMM
+	 * @comm: Overridden task comm, see MSM_PARAM_COMM
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *comm;
 
 	/**
-	 * cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
+	 * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *cmdline;
 
 	/**
-	 * elapsed:
+	 * @elapsed:
 	 *
 	 * The total (cumulative) elapsed time GPU was busy with rendering
 	 * from this context in ns.
@@ -406,7 +420,7 @@ struct msm_context {
 	uint64_t elapsed_ns;
 
 	/**
-	 * cycles:
+	 * @cycles:
 	 *
 	 * The total (cumulative) GPU cycles elapsed attributed to this
	 * context.
@@ -414,7 +428,7 @@ struct msm_context {
 	uint64_t cycles;
 
 	/**
-	 * entities:
+	 * @entities:
 	 *
 	 * Table of per-priority-level sched entities used by submitqueues
 	 * associated with this &drm_file.  Because some userspace apps
@@ -427,7 +441,7 @@ struct msm_context {
 	struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
 
 	/**
-	 * ctx_mem:
+	 * @ctx_mem:
 	 *
 	 * Total amount of memory of GEM buffers with handles attached for
 	 * this context.
-- 
2.49.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Rob Clark, Dmitry Baryshkov, Rob Clark, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang, Arnd Bergmann, Jun Nie, Krzysztof Kozlowski, Eugene Lepshy, Haoxiang Li, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 06/42] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
Date: Wed, 25 Jun 2025 11:46:59 -0700
Message-ID: <20250625184918.124608-7-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Re-aligning naming to better match drm_gpuvm terminology will make things
less confusing at the end of the drm_gpuvm conversion.  This is just
rename churn, no functional change.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 18 ++--
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 4 +-
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 4 +-
 drivers/gpu/drm/msm/adreno/a5xx_debugfs.c | 4 +-
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 22 ++---
 drivers/gpu/drm/msm/adreno/a5xx_power.c | 2 +-
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 10 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 26 +++---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 45 +++++----
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 6 +-
 drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 10 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 47 +++++-----
 drivers/gpu/drm/msm/adreno/adreno_gpu.h | 18 ++--
 .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 14 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 18 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 14 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 4 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 6 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 24 ++---
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 12 +--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 4 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 18 ++--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 12 +--
 drivers/gpu/drm/msm/dsi/dsi_host.c | 14 +--
 drivers/gpu/drm/msm/msm_drv.c | 8 +-
 drivers/gpu/drm/msm/msm_drv.h | 10 +-
 drivers/gpu/drm/msm/msm_fb.c | 10 +-
 drivers/gpu/drm/msm/msm_fbdev.c | 2 +-
 drivers/gpu/drm/msm/msm_gem.c | 74 +++++++--------
 drivers/gpu/drm/msm/msm_gem.h | 34 +++----
 drivers/gpu/drm/msm/msm_gem_submit.c | 6 +-
 drivers/gpu/drm/msm/msm_gem_vma.c | 93 +++++++++----------
 drivers/gpu/drm/msm/msm_gpu.c | 48 +++++-----
 drivers/gpu/drm/msm/msm_gpu.h | 16 ++--
 drivers/gpu/drm/msm/msm_kms.c | 16 ++--
 drivers/gpu/drm/msm/msm_kms.h | 2 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c | 4 +-
 drivers/gpu/drm/msm/msm_submitqueue.c | 2 +-
 41 files changed, 349 insertions(+), 354 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 379a3d346c30..5eb063ed0b46 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
 	uint32_t *ptr, len;
 	int i, ret;
 
-	a2xx_gpummu_params(gpu->aspace->mmu, &pt_base, &tran_error);
+	a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error);
 
 	DBG("%s", gpu->name);
 
@@ -466,19 +466,19 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu)
 	return state;
 }
 
-static struct msm_gem_address_space *
-a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
-	aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M,
+	vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
 		0xfff * SZ_64K);
 
-	if (IS_ERR(aspace) && !IS_ERR(mmu))
+	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
 
-	return aspace;
+	return vm;
 }
 
 static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -504,7 +504,7 @@ static const struct adreno_gpu_funcs funcs = {
 #endif
 		.gpu_state_get = a2xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = a2xx_create_address_space,
+		.create_vm = a2xx_create_vm,
 		.get_rptr = a2xx_get_rptr,
 	},
 };
@@ -551,7 +551,7 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
 	else
 		adreno_gpu->registers = a220_registers;
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		dev_err(dev->dev, "No memory protection without MMU\n");
 		if (!allow_vram_carveout) {
 			ret = -ENXIO;
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index b6df115bb567..434e6ededf83 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -526,7 +526,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a3xx_gpu_busy,
 		.gpu_state_get = a3xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a3xx_get_rptr,
 	},
 };
@@ -581,7 +581,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index f1b18a6663f7..2c75debcfd84 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -645,7 +645,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a4xx_gpu_busy,
 		.gpu_state_get = a4xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a4xx_get_rptr,
 	},
 	.get_timestamp = a4xx_get_timestamp,
@@ -695,7 +695,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown.
 		 * For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
index 169b8fe688f8..625a4e787d8f 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
@@ -116,13 +116,13 @@ reset_set(void *data, u64 val)
 	adreno_gpu->fw[ADRENO_FW_PFP] = NULL;
 
 	if (a5xx_gpu->pm4_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pm4_bo);
 		a5xx_gpu->pm4_bo = NULL;
 	}
 
 	if (a5xx_gpu->pfp_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pfp_bo);
 		a5xx_gpu->pfp_bo = NULL;
 	}
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 60aef0796236..dc31bc0afca4 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -622,7 +622,7 @@ static int a5xx_ucode_load(struct msm_gpu *gpu)
 		a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
 			sizeof(u32) * gpu->nr_rings,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a5xx_gpu->shadow_bo,
+			gpu->vm, &a5xx_gpu->shadow_bo,
 			&a5xx_gpu->shadow_iova);
 
 		if (IS_ERR(a5xx_gpu->shadow))
@@ -1042,22 +1042,22 @@ static void a5xx_destroy(struct msm_gpu *gpu)
 	a5xx_preempt_fini(gpu);
 
 	if (a5xx_gpu->pm4_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pm4_bo);
 	}
 
 	if (a5xx_gpu->pfp_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pfp_bo);
 	}
 
 	if (a5xx_gpu->gpmu_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->gpmu_bo);
 	}
 
 	if (a5xx_gpu->shadow_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->shadow_bo);
 	}
 
@@ -1457,7 +1457,7 @@ static int a5xx_crashdumper_init(struct msm_gpu *gpu,
 		struct a5xx_crashdumper *dumper)
 {
 	dumper->ptr = msm_gem_kernel_new(gpu->dev,
-		SZ_1M, MSM_BO_WC, gpu->aspace,
+		SZ_1M, MSM_BO_WC, gpu->vm,
 		&dumper->bo, &dumper->iova);
 
 	if (!IS_ERR(dumper->ptr))
@@ -1557,7 +1557,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
 
 	if (a5xx_crashdumper_run(gpu, &dumper)) {
 		kfree(a5xx_state->hlsqregs);
-		msm_gem_kernel_put(dumper.bo, gpu->aspace);
+		msm_gem_kernel_put(dumper.bo, gpu->vm);
 		return;
 	}
 
@@ -1565,7 +1565,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
 	memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K),
 		count * sizeof(u32));
 
-	msm_gem_kernel_put(dumper.bo, gpu->aspace);
+	msm_gem_kernel_put(dumper.bo, gpu->vm);
 }
 
 static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu)
@@ -1713,7 +1713,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a5xx_gpu_busy,
 		.gpu_state_get = a5xx_gpu_state_get,
 		.gpu_state_put = a5xx_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a5xx_get_rptr,
 	},
 	.get_timestamp = a5xx_get_timestamp,
@@ -1786,8 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	if (gpu->aspace)
-		msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, a5xx_fault_handler);
+	if (gpu->vm)
+		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
index 6b91e0bd1514..d6da7351cfbb 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
@@ -363,7 +363,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
 	bosize =
 (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
 
 	ptr = msm_gem_kernel_new(drm, bosize,
-		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace,
+		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm,
 		&a5xx_gpu->gpmu_bo, &a5xx_gpu->gpmu_iova);
 	if (IS_ERR(ptr))
 		return;
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index 36f72c43eae8..e50221d4e6ee 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -254,7 +254,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
 
 	ptr = msm_gem_kernel_new(gpu->dev,
 		A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE,
-		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -262,9 +262,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
 	/* The buffer to store counters needs to be unprivileged */
 	counters = msm_gem_kernel_new(gpu->dev,
 		A5XX_PREEMPT_COUNTER_SIZE,
-		MSM_BO_WC, gpu->aspace, &counters_bo, &counters_iova);
+		MSM_BO_WC, gpu->vm, &counters_bo, &counters_iova);
 	if (IS_ERR(counters)) {
-		msm_gem_kernel_put(bo, gpu->aspace);
+		msm_gem_kernel_put(bo, gpu->vm);
 		return PTR_ERR(counters);
 	}
 
@@ -295,8 +295,8 @@ void a5xx_preempt_fini(struct msm_gpu *gpu)
 	int i;
 
 	for (i = 0; i < gpu->nr_rings; i++) {
-		msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace);
-		msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->aspace);
+		msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->vm);
+		msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->vm);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 38c0f8ef85c3..848acc382b7d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1259,15 +1259,15 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 
 static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
 {
-	msm_gem_kernel_put(gmu->hfi.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->debug.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->icache.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->dcache.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->dummy.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->log.obj, gmu->aspace);
-
-	gmu->aspace->mmu->funcs->detach(gmu->aspace->mmu);
-	msm_gem_address_space_put(gmu->aspace);
+	msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->debug.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->icache.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->dcache.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->dummy.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->log.obj, gmu->vm);
+
+	gmu->vm->mmu->funcs->detach(gmu->vm->mmu);
+	msm_gem_vm_put(gmu->vm);
 }
 
 static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
@@ -1296,7 +1296,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
 	if (IS_ERR(bo->obj))
 		return PTR_ERR(bo->obj);
 
-	ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova,
+	ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->vm, &bo->iova,
 		range_start, range_end);
 	if (ret) {
 		drm_gem_object_put(bo->obj);
@@ -1321,9 +1321,9 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
 	if (IS_ERR(mmu))
 		return PTR_ERR(mmu);
 
-	gmu->aspace = msm_gem_address_space_create(mmu, "gmu", 0x0, 0x80000000);
-	if (IS_ERR(gmu->aspace))
-		return PTR_ERR(gmu->aspace);
+	gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
+	if (IS_ERR(gmu->vm))
+		return PTR_ERR(gmu->vm);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index 39fb8c774a79..cceda7d9c33a 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -62,7 +62,7 @@ struct a6xx_gmu {
 	/* For serializing communication with the GMU: */
 	struct mutex lock;
 
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	void __iomem
 *mmio;
 	void __iomem *rscc;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 620a26638535..d05c00624f74 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
@@ -957,7 +957,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 
 		msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw");
 		if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) {
-			msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+			msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
 			drm_gem_object_put(a6xx_gpu->sqe_bo);
 
 			a6xx_gpu->sqe_bo = NULL;
@@ -974,7 +974,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 		a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
 			sizeof(u32) * gpu->nr_rings,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a6xx_gpu->shadow_bo,
+			gpu->vm, &a6xx_gpu->shadow_bo,
 			&a6xx_gpu->shadow_iova);
 
 		if (IS_ERR(a6xx_gpu->shadow))
@@ -985,7 +985,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 
 	a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
+			gpu->vm, &a6xx_gpu->pwrup_reglist_bo,
 			&a6xx_gpu->pwrup_reglist_iova);
 
 	if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
@@ -2198,12 +2198,12 @@ static void a6xx_destroy(struct msm_gpu *gpu)
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
 
 	if (a6xx_gpu->sqe_bo) {
-		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
 		drm_gem_object_put(a6xx_gpu->sqe_bo);
 	}
 
 	if (a6xx_gpu->shadow_bo) {
-		msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->shadow_bo,
 gpu->vm);
 		drm_gem_object_put(a6xx_gpu->shadow_bo);
 	}
 
@@ -2243,8 +2243,8 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 	mutex_unlock(&a6xx_gpu->gmu.lock);
 }
 
-static struct msm_gem_address_space *
-a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
@@ -2258,22 +2258,22 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
 	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
 		quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
 
-	return adreno_iommu_create_address_space(gpu, pdev, quirks);
+	return adreno_iommu_create_vm(gpu, pdev, quirks);
 }
 
-static struct msm_gem_address_space *
-a6xx_create_private_address_space(struct msm_gpu *gpu)
+static struct msm_gem_vm *
+a6xx_create_private_vm(struct msm_gpu *gpu)
 {
 	struct msm_mmu *mmu;
 
-	mmu = msm_iommu_pagetable_create(gpu->aspace->mmu);
+	mmu = msm_iommu_pagetable_create(gpu->vm->mmu);
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
 
-	return msm_gem_address_space_create(mmu,
+	return msm_gem_vm_create(mmu,
 		"gpu", ADRENO_VM_START,
-		adreno_private_address_space_size(gpu));
+		adreno_private_vm_size(gpu));
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -2390,8 +2390,8 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2419,8 +2419,8 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = {
 		.gpu_state_get =
 a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2450,8 +2450,8 @@ static const struct adreno_gpu_funcs funcs_a7xx = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2547,9 +2547,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	if (gpu->aspace)
-		msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu,
-					  a6xx_fault_handler);
+	if (gpu->vm)
+		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 341a72a67401..ff06bb75b76d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -132,7 +132,7 @@ static int a6xx_crashdumper_init(struct msm_gpu *gpu,
 		struct a6xx_crashdumper *dumper)
 {
 	dumper->ptr = msm_gem_kernel_new(gpu->dev,
-		SZ_1M, MSM_BO_WC, gpu->aspace,
+		SZ_1M, MSM_BO_WC, gpu->vm,
 		&dumper->bo, &dumper->iova);
 
 	if (!IS_ERR(dumper->ptr))
@@ -1619,7 +1619,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		a7xx_get_clusters(gpu, a6xx_state, dumper);
 		a7xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
 
-		msm_gem_kernel_put(dumper->bo, gpu->aspace);
+		msm_gem_kernel_put(dumper->bo, gpu->vm);
 	}
 
 	a7xx_get_post_crashdumper_registers(gpu, a6xx_state);
@@ -1631,7 +1631,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		a6xx_get_clusters(gpu, a6xx_state, dumper);
 		a6xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
 
-		msm_gem_kernel_put(dumper->bo, gpu->aspace);
+		msm_gem_kernel_put(dumper->bo, gpu->vm);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index 9b5e27d2373c..b14a7c630bd0 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -343,7 +343,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	ptr = msm_gem_kernel_new(gpu->dev,
 		PREEMPT_RECORD_SIZE(adreno_gpu),
-		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -361,7 +361,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 	ptr = msm_gem_kernel_new(gpu->dev,
 		PREEMPT_SMMU_INFO_SIZE,
 		MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
-		gpu->aspace, &bo, &iova);
+		gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	struct a7xx_cp_smmu_info *smmu_info_ptr = ptr;
 
-	msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
+	msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid);
 
 	smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
 	smmu_info_ptr->ttbr0 = ttbr;
@@ -404,7 +404,7 @@ void a6xx_preempt_fini(struct msm_gpu *gpu)
 	int i;
 
 	for (i = 0; i < gpu->nr_rings; i++)
-		msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
+		msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->vm);
 }
 
 void a6xx_preempt_init(struct msm_gpu *gpu)
@@ -430,7 +430,7 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
 	a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
 			PAGE_SIZE,
 			MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
-			gpu->aspace,
 &a6xx_gpu->preempt_postamble_bo,
+			gpu->vm, &a6xx_gpu->preempt_postamble_bo,
 			&a6xx_gpu->preempt_postamble_iova);
 
 	preempt_prepare_postamble(a6xx_gpu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 93fe26009511..b01d9efb8663 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid)
 	return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid);
 }
 
-struct msm_gem_address_space *
-adreno_create_address_space(struct msm_gpu *gpu,
-			    struct platform_device *pdev)
+struct msm_gem_vm *
+adreno_create_vm(struct msm_gpu *gpu,
+		 struct platform_device *pdev)
 {
-	return adreno_iommu_create_address_space(gpu, pdev, 0);
+	return adreno_iommu_create_vm(gpu, pdev, 0);
 }
 
-struct msm_gem_address_space *
-adreno_iommu_create_address_space(struct msm_gpu *gpu,
-				  struct platform_device *pdev,
-				  unsigned long quirks)
+struct msm_gem_vm *
+adreno_iommu_create_vm(struct msm_gpu *gpu,
+		       struct platform_device *pdev,
+		       unsigned long quirks)
 {
 	struct iommu_domain_geometry *geometry;
 	struct msm_mmu *mmu;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
@@ -224,16 +224,15 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu,
 	start = max_t(u64, SZ_16M, geometry->aperture_start);
 	size = geometry->aperture_end - start + 1;
 
-	aspace = msm_gem_address_space_create(mmu, "gpu",
-		start & GENMASK_ULL(48, 0), size);
+	vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
 
-	if (IS_ERR(aspace) && !IS_ERR(mmu))
+	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
 
-	return aspace;
+	return vm;
 }
 
-u64 adreno_private_address_space_size(struct msm_gpu *gpu)
+u64 adreno_private_vm_size(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct
adreno_smmu_priv *adreno_smmu =3D dev_get_drvdata(&gpu->pdev->dev); @@ -274,7 +273,7 @@ void adreno_check_and_reenable_stall(struct adreno_gpu = *adreno_gpu) !READ_ONCE(gpu->crashstate)) { adreno_gpu->stall_enabled =3D true; =20 - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true); + gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); } spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags); } @@ -302,7 +301,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned = long iova, int flags, if (adreno_gpu->stall_enabled) { adreno_gpu->stall_enabled =3D false; =20 - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false); + gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); } adreno_gpu->stall_reenable_time =3D ktime_add_ms(ktime_get(), 500); spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags); @@ -312,7 +311,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned = long iova, int flags, * it now. */ if (!do_devcoredump) { - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); } =20 /* @@ -406,8 +405,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_co= ntext *ctx, *value =3D 0; return 0; case MSM_PARAM_FAULTS: - if (ctx->aspace) - *value =3D gpu->global_faults + ctx->aspace->faults; + if (ctx->vm) + *value =3D gpu->global_faults + ctx->vm->faults; else *value =3D gpu->global_faults; return 0; @@ -415,14 +414,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_= context *ctx, *value =3D gpu->suspend_count; return 0; case MSM_PARAM_VA_START: - if (ctx->aspace =3D=3D gpu->aspace) + if (ctx->vm =3D=3D gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value =3D ctx->aspace->va_start; + *value =3D ctx->vm->va_start; return 0; case MSM_PARAM_VA_SIZE: - if (ctx->aspace =3D=3D gpu->aspace) + if (ctx->vm =3D=3D gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value =3D ctx->aspace->va_size; + *value =3D 
ctx->vm->va_size; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value =3D adreno_gpu->ubwc_config.highest_bank_bit; @@ -612,7 +611,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_g= pu *gpu, void *ptr; =20 ptr =3D msm_gem_kernel_new(gpu->dev, fw->size - 4, - MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, &bo, iova); + MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &bo, iova); =20 if (IS_ERR(ptr)) return ERR_CAST(ptr); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/= adreno/adreno_gpu.h index fed9516da365..258c5c6dde2e 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -602,7 +602,7 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu) =20 /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */ #define ADRENO_VM_START 0x100000000ULL -u64 adreno_private_address_space_size(struct msm_gpu *gpu); +u64 adreno_private_vm_size(struct msm_gpu *gpu); int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t *value, uint32_t *len); int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx, @@ -645,14 +645,14 @@ void adreno_show_object(struct drm_printer *p, void *= *ptr, int len, * Common helper function to initialize the default address space for arm-= smmu * attached targets */ -struct msm_gem_address_space * -adreno_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev); - -struct msm_gem_address_space * -adreno_iommu_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev, - unsigned long quirks); +struct msm_gem_vm * +adreno_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev); + +struct msm_gem_vm * +adreno_iommu_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev, + unsigned long quirks); =20 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flag= s, struct adreno_smmu_fault_info *info, const char *block, diff --git 
a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/= gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 849fea580a4c..32e208ee946d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -566,7 +566,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct d= pu_encoder_phys *phys_enc struct drm_writeback_job *job) { const struct msm_format *format; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_hw_wb_cfg *wb_cfg; int ret; struct dpu_encoder_phys_wb *wb_enc =3D to_dpu_encoder_phys_wb(phys_enc); @@ -576,13 +576,13 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct= dpu_encoder_phys *phys_enc =20 wb_enc->wb_job =3D job; wb_enc->wb_conn =3D job->connector; - aspace =3D phys_enc->dpu_kms->base.aspace; + vm =3D phys_enc->dpu_kms->base.vm; =20 wb_cfg =3D &wb_enc->wb_cfg; =20 memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg)); =20 - ret =3D msm_framebuffer_prepare(job->fb, aspace, false); + ret =3D msm_framebuffer_prepare(job->fb, vm, false); if (ret) { DPU_ERROR("prep fb failed, %d\n", ret); return; @@ -596,7 +596,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct d= pu_encoder_phys *phys_enc return; } =20 - dpu_format_populate_addrs(aspace, job->fb, &wb_cfg->dest); + dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest); =20 wb_cfg->dest.width =3D job->fb->width; wb_cfg->dest.height =3D job->fb->height; @@ -619,14 +619,14 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct= dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { struct dpu_encoder_phys_wb *wb_enc =3D to_dpu_encoder_phys_wb(phys_enc); - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 if (!job->fb) return; =20 - aspace =3D phys_enc->dpu_kms->base.aspace; + vm =3D phys_enc->dpu_kms->base.vm; =20 - msm_framebuffer_cleanup(job->fb, aspace, false); + msm_framebuffer_cleanup(job->fb, vm, false); wb_enc->wb_job =3D NULL; wb_enc->wb_conn =3D NULL; } diff --git 
a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/= msm/disp/dpu1/dpu_formats.c index 59c9427da7dd..d115b79af771 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c @@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes( return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } =20 -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *= aspace, +static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -282,7 +282,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_= gem_address_space *aspace uint32_t base_addr =3D 0; bool meta; =20 - base_addr =3D msm_framebuffer_iova(fb, aspace, 0); + base_addr =3D msm_framebuffer_iova(fb, vm, 0); =20 fmt =3D msm_framebuffer_format(fb); meta =3D MSM_FORMAT_IS_UBWC(fmt); @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_= gem_address_space *aspace } } =20 -static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space= *aspace, +static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -363,17 +363,17 @@ static void _dpu_format_populate_addrs_linear(struct = msm_gem_address_space *aspa =20 /* Populate addresses for simple formats here */ for (i =3D 0; i < layout->num_planes; ++i) - layout->plane_addr[i] =3D msm_framebuffer_iova(fb, aspace, i); -} + layout->plane_addr[i] =3D msm_framebuffer_iova(fb, vm, i); + } =20 /** * dpu_format_populate_addrs - populate buffer addresses based on * mmu, fb, and format found in the fb - * @aspace: address space pointer + * @vm: address space pointer * @fb: framebuffer pointer * @layout: format layout structure to populate */ -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout 
*layout) { @@ -384,7 +384,7 @@ void dpu_format_populate_addrs(struct msm_gem_address_s= pace *aspace, /* Populate the addresses given the fb */ if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt)) - _dpu_format_populate_addrs_ubwc(aspace, fb, layout); + _dpu_format_populate_addrs_ubwc(vm, fb, layout); else - _dpu_format_populate_addrs_linear(aspace, fb, layout); + _dpu_format_populate_addrs_linear(vm, fb, layout); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/= msm/disp/dpu1/dpu_formats.h index c6145d43aa3f..989f3e13c497 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 = *supported_formats, return false; } =20 -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); =20 diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/= disp/dpu1/dpu_kms.c index 3305ad0623ca..bb5db6da636a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1095,26 +1095,26 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dp= u_kms) { struct msm_mmu *mmu; =20 - if (!dpu_kms->base.aspace) + if (!dpu_kms->base.vm) return; =20 - mmu =3D dpu_kms->base.aspace->mmu; + mmu =3D dpu_kms->base.vm->mmu; =20 mmu->funcs->detach(mmu); - msm_gem_address_space_put(dpu_kms->base.aspace); + msm_gem_vm_put(dpu_kms->base.vm); =20 - dpu_kms->base.aspace =3D NULL; + dpu_kms->base.vm =3D NULL; } =20 static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 - aspace =3D msm_kms_init_aspace(dpu_kms->dev); - if (IS_ERR(aspace)) - return PTR_ERR(aspace); + vm =3D msm_kms_init_vm(dpu_kms->dev); + if (IS_ERR(vm)) + return PTR_ERR(vm); =20 - dpu_kms->base.aspace =3D aspace; + dpu_kms->base.vm 
=3D vm; =20 return 0; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/ms= m/disp/dpu1/dpu_plane.c index e03d6091f736..2640ab9e6e90 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c @@ -71,7 +71,7 @@ static const uint32_t qcom_compressed_supported_formats[]= =3D { =20 /* * struct dpu_plane - local dpu plane structure - * @aspace: address space pointer + * @vm: address space pointer * @csc_ptr: Points to dpu_csc_cfg structure to use for current * @catalog: Points to dpu catalog structure * @revalidate: force revalidation of all the plane properties @@ -654,8 +654,8 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, =20 DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id); =20 - /* cache aspace */ - pstate->aspace =3D kms->base.aspace; + /* cache vm */ + pstate->vm =3D kms->base.vm; =20 /* * TODO: Need to sort out the msm_framebuffer_prepare() call below so @@ -664,9 +664,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, */ drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - if (pstate->aspace) { + if (pstate->vm) { ret =3D msm_framebuffer_prepare(new_state->fb, - pstate->aspace, pstate->needs_dirtyfb); + pstate->vm, pstate->needs_dirtyfb); if (ret) { DPU_ERROR("failed to prepare framebuffer\n"); return ret; @@ -689,7 +689,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plan= e, =20 DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id); =20 - msm_framebuffer_cleanup(old_state->fb, old_pstate->aspace, + msm_framebuffer_cleanup(old_state->fb, old_pstate->vm, old_pstate->needs_dirtyfb); } =20 @@ -1353,7 +1353,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_p= lane *plane, pstate->needs_qos_remap |=3D (is_rt_pipe !=3D pdpu->is_rt_pipe); pdpu->is_rt_pipe =3D is_rt_pipe; =20 - dpu_format_populate_addrs(pstate->aspace, new_state->fb, &pstate->layout); + dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout); =20 DPU_DEBUG_PLANE(pdpu, "FB[%u] 
" DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT ", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src), diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/ms= m/disp/dpu1/dpu_plane.h index acd5725175cd..3578f52048a5 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -17,7 +17,7 @@ /** * struct dpu_plane_state: Define dpu extension of drm plane state object * @base: base drm plane state object - * @aspace: pointer to address space for input/output buffers + * @vm: pointer to address space for input/output buffers * @pipe: software pipe description * @r_pipe: software pipe description of the second pipe * @pipe_cfg: software pipe configuration @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/ms= m/disp/mdp4/mdp4_crtc.c index b8610aa806ea..0133c0c01a0b 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -120,7 +120,7 @@ static void unref_cursor_worker(struct drm_flip_work *w= ork, void *val) struct mdp4_kms *mdp4_kms =3D get_kms(&mdp4_crtc->base); struct msm_kms *kms =3D &mdp4_kms->base.base; =20 - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } =20 @@ -369,7 +369,7 @@ static void update_cursor(struct drm_crtc *crtc) if (next_bo) { /* take a obj ref + iova ref when we start scanning out: */ drm_gem_object_get(next_bo); - msm_gem_get_and_pin_iova(next_bo, kms->aspace, &iova); + msm_gem_get_and_pin_iova(next_bo, kms->vm, &iova); =20 /* enable cursor: */ mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma), @@ -427,7 +427,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc, } =20 if (cursor_bo) { - ret =3D 
msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &iova); + ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &iova); if (ret) goto fail; } else { diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm= /disp/mdp4/mdp4_kms.c index c469e66cfc11..94fbc20b2fbd 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -120,15 +120,15 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms =3D to_mdp4_kms(to_mdp_kms(kms)); struct device *dev =3D mdp4_kms->dev->dev; - struct msm_gem_address_space *aspace =3D kms->aspace; + struct msm_gem_vm *vm =3D kms->vm; =20 if (mdp4_kms->blank_cursor_iova) - msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->aspace); + msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); =20 - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } =20 if (mdp4_kms->rpm_enabled) @@ -380,7 +380,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms =3D to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms =3D NULL; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int ret; u32 major, minor; unsigned long max_clk; @@ -449,19 +449,19 @@ static int mdp4_kms_init(struct drm_device *dev) } else if (!mmu) { DRM_DEV_INFO(dev->dev, "no iommu, fallback to phys " "contig buffers for scanout\n"); - aspace =3D NULL; + vm =3D NULL; } else { - aspace =3D msm_gem_address_space_create(mmu, + vm =3D msm_gem_vm_create(mmu, "mdp4", 0x1000, 0x100000000 - 0x1000); =20 - if (IS_ERR(aspace)) { + if (IS_ERR(vm)) { if (!IS_ERR(mmu)) mmu->funcs->destroy(mmu); - ret =3D PTR_ERR(aspace); + ret =3D PTR_ERR(vm); goto fail; } =20 - kms->aspace =3D aspace; + kms->vm =3D vm; } =20 ret =3D modeset_init(mdp4_kms); @@ -478,7 +478,7 @@ static int mdp4_kms_init(struct drm_device 
*dev) goto fail; } =20 - ret =3D msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->aspace, + ret =3D msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->vm, &mdp4_kms->blank_cursor_iova); if (ret) { DRM_DEV_ERROR(dev->dev, "could not pin blank-cursor bo: %d\n", ret); diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/m= sm/disp/mdp4/mdp4_plane.c index 3fefb2088008..7743be6167f8 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c @@ -87,7 +87,7 @@ static int mdp4_plane_prepare_fb(struct drm_plane *plane, =20 drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - return msm_framebuffer_prepare(new_state->fb, kms->aspace, false); + return msm_framebuffer_prepare(new_state->fb, kms->vm, false); } =20 static void mdp4_plane_cleanup_fb(struct drm_plane *plane, @@ -102,7 +102,7 @@ static void mdp4_plane_cleanup_fb(struct drm_plane *pla= ne, return; =20 DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, false); + msm_framebuffer_cleanup(fb, kms->vm, false); } =20 =20 @@ -153,13 +153,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *= plane, MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); =20 mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } =20 static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms, diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/ms= m/disp/mdp5/mdp5_crtc.c index 
0f653e62b4a0..298861f373b0 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -169,7 +169,7 @@ static void unref_cursor_worker(struct drm_flip_work *w= ork, void *val) struct mdp5_kms *mdp5_kms =3D get_kms(&mdp5_crtc->base); struct msm_kms *kms =3D &mdp5_kms->base.base; =20 - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } =20 @@ -993,7 +993,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (!cursor_bo) return -ENOENT; =20 - ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, + ret =3D msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &mdp5_crtc->cursor.iova); if (ret) { drm_gem_object_put(cursor_bo); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm= /disp/mdp5/mdp5_kms.c index 3fcca7a3d82e..9dca0385a42d 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,11 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms =3D to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_address_space *aspace =3D kms->aspace; + struct msm_gem_vm *vm =3D kms->vm; =20 - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } =20 mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +500,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms =3D priv->kms; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int i, ret; =20 ret =3D mdp5_init(to_platform_device(dev->dev), dev); @@ -534,13 +534,13 @@ static int mdp5_kms_init(struct drm_device *dev) } mdelay(16); =20 - aspace =3D msm_kms_init_aspace(mdp5_kms->dev); - if (IS_ERR(aspace)) { - ret =3D PTR_ERR(aspace); + vm =3D msm_kms_init_vm(mdp5_kms->dev); + if (IS_ERR(vm)) { + ret 
=3D PTR_ERR(vm); goto fail; } =20 - kms->aspace =3D aspace; + kms->vm =3D vm; =20 pm_runtime_put_sync(&pdev->dev); =20 diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/m= sm/disp/mdp5/mdp5_plane.c index bb1601921938..9f68a4747203 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c @@ -144,7 +144,7 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plan= e, =20 drm_gem_plane_helper_prepare_fb(plane, new_state); =20 - return msm_framebuffer_prepare(new_state->fb, kms->aspace, needs_dirtyfb); + return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb); } =20 static void mdp5_plane_cleanup_fb(struct drm_plane *plane, @@ -159,7 +159,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *pla= ne, return; =20 DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, needed_dirtyfb); + msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb); } =20 static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_= state, @@ -478,13 +478,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_= kms, MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); =20 mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } =20 /* Note: mdp5_plane->pipe_lock must be locked */ diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/d= si_host.c index 4d75529c0e85..16335ebd21e4 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ 
b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { =20 /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,10 +1146,10 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_ho= st, int size) uint64_t iova; u8 *data; =20 - msm_host->aspace =3D msm_gem_address_space_get(priv->kms->aspace); + msm_host->vm =3D msm_gem_vm_get(priv->kms->vm); =20 data =3D msm_gem_kernel_new(dev, size, MSM_BO_WC, - msm_host->aspace, + msm_host->vm, &msm_host->tx_gem_obj, &iova); =20 if (IS_ERR(data)) { @@ -1193,10 +1193,10 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) return; =20 if (msm_host->tx_gem_obj) { - msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->aspace); - msm_gem_address_space_put(msm_host->aspace); + msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); + msm_gem_vm_put(msm_host->vm); msm_host->tx_gem_obj =3D NULL; - msm_host->aspace =3D NULL; + msm_host->vm =3D NULL; } =20 if (msm_host->tx_buf) @@ -1327,7 +1327,7 @@ int dsi_dma_base_get_6g(struct msm_dsi_host *msm_host= , uint64_t *dma_base) return -EINVAL; =20 return msm_gem_get_and_pin_iova(msm_host->tx_gem_obj, - priv->kms->aspace, dma_base); + priv->kms->vm, dma_base); } =20 int dsi_dma_base_get_v2(struct msm_dsi_host *msm_host, uint64_t *dma_base) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 29ca24548c67..903abf3532e0 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -345,7 +345,7 @@ static int context_init(struct drm_device *dev, struct = drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); =20 - ctx->aspace =3D msm_gpu_create_private_address_space(priv->gpu, current); + ctx->vm =3D msm_gpu_create_private_vm(priv->gpu, current); file->driver_priv =3D ctx; =20 ctx->seqno =3D atomic_inc_return(&ident); @@ -523,7 +523,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *d= 
ev, * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, ctx->aspace, iova); + return msm_gem_get_iova(obj, ctx->vm, iova); } =20 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, @@ -537,13 +537,13 @@ static int msm_ioctl_gem_info_set_iova(struct drm_dev= ice *dev, return -EINVAL; =20 /* Only supported if per-process address space is supported: */ - if (priv->gpu->aspace =3D=3D ctx->aspace) + if (priv->gpu->vm =3D=3D ctx->vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); =20 if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; =20 - return msm_gem_set_iova(obj, ctx->aspace, iova); + return msm_gem_set_iova(obj, ctx->vm, iova); } =20 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index a65077855201..0e675c9a7f83 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,7 +48,7 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_address_space; +struct msm_gem_vm; struct msm_gem_vma; struct msm_disp_state; =20 @@ -241,7 +241,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); =20 -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev); +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); =20 int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -263,11 +263,11 @@ int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); =20 int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needs_dirtyfb); + struct msm_gem_vm *vm, bool needs_dirtyfb); void 
msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needed_dirtyfb); + struct msm_gem_vm *vm, bool needed_dirtyfb); uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane); + struct msm_gem_vm *vm, int plane); struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int = plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb= ); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 09268e416843..6df318b73534 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -76,7 +76,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb,= struct seq_file *m) /* prepare/pin all the fb's bo's for scanout. */ int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needs_dirtyfb) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); @@ -88,7 +88,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, atomic_inc(&msm_fb->prepare_count); =20 for (i =3D 0; i < n; i++) { - ret =3D msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]); + ret =3D msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]); drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n", fb->base.id, i, msm_fb->iova[i], ret); if (ret) @@ -99,7 +99,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, } =20 void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needed_dirtyfb) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); @@ -109,14 +109,14 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *= fb, refcount_dec(&msm_fb->dirtyfb); =20 for (i =3D 0; i < n; i++) - msm_gem_unpin_iova(fb->obj[i], aspace); + msm_gem_unpin_iova(fb->obj[i], vm); =20 if 
(!atomic_dec_return(&msm_fb->prepare_count)) memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } =20 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane) + struct msm_gem_vm *vm, int plane) { struct msm_framebuffer *msm_fb =3D to_msm_framebuffer(fb); return msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbde= v.c index c62249b1ab3d..b5969374d53f 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -122,7 +122,7 @@ int msm_fbdev_driver_fbdev_probe(struct drm_fb_helper *= helper, * in panic (ie. lock-safe, etc) we could avoid pinning the * buffer now: */ - ret =3D msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr); + ret =3D msm_gem_get_and_pin_iova(bo, priv->kms->vm, &paddr); if (ret) { DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret); goto fail; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index fdeb6cf7eeb5..07a30d29248c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -402,14 +402,14 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *o= bj) } =20 static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct msm_gem_vma *vma; =20 msm_gem_assert_locked(obj); =20 - vma =3D msm_gem_vma_new(aspace); + vma =3D msm_gem_vma_new(vm); if (!vma) return ERR_PTR(-ENOMEM); =20 @@ -419,7 +419,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_objec= t *obj, } =20 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct msm_gem_vma *vma; @@ -427,7 +427,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_ob= ject *obj, msm_gem_assert_locked(obj); =20 list_for_each_entry(vma, 
&msm_obj->vmas, list) { - if (vma->aspace =3D=3D aspace) + if (vma->vm =3D=3D vm) return vma; } =20 @@ -458,7 +458,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) msm_gem_assert_locked(obj); =20 list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace) { + if (vma->vm) { msm_gem_vma_purge(vma); if (close) msm_gem_vma_close(vma); @@ -481,19 +481,19 @@ put_iova_vmas(struct drm_gem_object *obj) } =20 static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; =20 msm_gem_assert_locked(obj); =20 - vma =3D lookup_vma(obj, aspace); + vma =3D lookup_vma(obj, vm); =20 if (!vma) { int ret; =20 - vma =3D add_vma(obj, aspace); + vma =3D add_vma(obj, vm); if (IS_ERR(vma)) return vma; =20 @@ -569,13 +569,13 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) } =20 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - return get_vma_locked(obj, aspace, 0, U64_MAX); + return get_vma_locked(obj, vm, 0, U64_MAX); } =20 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; @@ -583,7 +583,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem= _object *obj, =20 msm_gem_assert_locked(obj); =20 - vma =3D get_vma_locked(obj, aspace, range_start, range_end); + vma =3D get_vma_locked(obj, vm, range_start, range_end); if (IS_ERR(vma)) return PTR_ERR(vma); =20 @@ -601,13 +601,13 @@ static int get_and_pin_iova_range_locked(struct drm_g= em_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, 
u64 range_end) { int ret; =20 msm_gem_lock(obj); - ret =3D get_and_pin_iova_range_locked(obj, aspace, iova, range_start, ran= ge_end); + ret =3D get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_e= nd); msm_gem_unlock(obj); =20 return ret; @@ -615,9 +615,9 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_objec= t *obj, =20 /* get iova and pin it. Should have a matching put */ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova) + struct msm_gem_vm *vm, uint64_t *iova) { - return msm_gem_get_and_pin_iova_range(obj, aspace, iova, 0, U64_MAX); + return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } =20 /* @@ -625,13 +625,13 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *o= bj, * valid for the life of the object */ int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova) + struct msm_gem_vm *vm, uint64_t *iova) { struct msm_gem_vma *vma; int ret =3D 0; =20 msm_gem_lock(obj); - vma =3D get_vma_locked(obj, aspace, 0, U64_MAX); + vma =3D get_vma_locked(obj, vm, 0, U64_MAX); if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); } else { @@ -643,9 +643,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } =20 static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - struct msm_gem_vma *vma =3D lookup_vma(obj, aspace); + struct msm_gem_vma *vma =3D lookup_vma(obj, vm); =20 if (!vma) return 0; @@ -665,20 +665,20 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. 
*/ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova) + struct msm_gem_vm *vm, uint64_t iova) { int ret =3D 0; =20 msm_gem_lock(obj); if (!iova) { - ret =3D clear_iova(obj, aspace); + ret =3D clear_iova(obj, vm); } else { struct msm_gem_vma *vma; - vma =3D get_vma_locked(obj, aspace, iova, iova + obj->size); + vma =3D get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); } else if (GEM_WARN_ON(vma->iova !=3D iova)) { - clear_iova(obj, aspace); + clear_iova(obj, vm); ret =3D -EBUSY; } } @@ -693,12 +693,12 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * to get rid of it */ void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_vma *vma; =20 msm_gem_lock(obj); - vma =3D lookup_vma(obj, aspace); + vma =3D lookup_vma(obj, vm); if (!GEM_WARN_ON(!vma)) { msm_gem_unpin_locked(obj); } @@ -1016,23 +1016,23 @@ void msm_gem_describe(struct drm_gem_object *obj, s= truct seq_file *m, =20 list_for_each_entry(vma, &msm_obj->vmas, list) { const char *name, *comm; - if (vma->aspace) { - struct msm_gem_address_space *aspace =3D vma->aspace; + if (vma->vm) { + struct msm_gem_vm *vm =3D vma->vm; struct task_struct *task =3D - get_pid_task(aspace->pid, PIDTYPE_PID); + get_pid_task(vm->pid, PIDTYPE_PID); if (task) { comm =3D kstrdup(task->comm, GFP_KERNEL); put_task_struct(task); } else { comm =3D NULL; } - name =3D aspace->name; + name =3D vm->name; } else { name =3D comm =3D NULL; } - seq_printf(m, " [%s%s%s: aspace=3D%p, %08llx,%s]", + seq_printf(m, " [%s%s%s: vm=3D%p, %08llx,%s]", name, comm ? ":" : "", comm ? comm : "", - vma->aspace, vma->iova, + vma->vm, vma->iova, vma->mapped ? 
"mapped" : "unmapped"); kfree(comm); } @@ -1357,7 +1357,7 @@ struct drm_gem_object *msm_gem_import(struct drm_devi= ce *dev, } =20 void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova) { void *vaddr; @@ -1368,14 +1368,14 @@ void *msm_gem_kernel_new(struct drm_device *dev, ui= nt32_t size, return ERR_CAST(obj); =20 if (iova) { - ret =3D msm_gem_get_and_pin_iova(obj, aspace, iova); + ret =3D msm_gem_get_and_pin_iova(obj, vm, iova); if (ret) goto err; } =20 vaddr =3D msm_gem_get_vaddr(obj); if (IS_ERR(vaddr)) { - msm_gem_unpin_iova(obj, aspace); + msm_gem_unpin_iova(obj, vm); ret =3D PTR_ERR(vaddr); goto err; } @@ -1392,13 +1392,13 @@ void *msm_gem_kernel_new(struct drm_device *dev, ui= nt32_t size, } =20 void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { if (IS_ERR_OR_NULL(bo)) return; =20 msm_gem_put_vaddr(bo); - msm_gem_unpin_iova(bo, aspace); + msm_gem_unpin_iova(bo, vm); drm_gem_object_put(bo); } =20 diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 85f0257e83da..d2f39a371373 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -22,7 +22,7 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash mem= ory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping = */ =20 -struct msm_gem_address_space { +struct msm_gem_vm { const char *name; /* NOTE: mm managed at the page level, size is in # of pages * and position mm_node->start is in # of pages: @@ -47,13 +47,13 @@ struct msm_gem_address_space { uint64_t va_size; }; =20 -struct msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace); +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm); =20 -void msm_gem_address_space_put(struct msm_gem_address_space *aspace); +void 
msm_gem_vm_put(struct msm_gem_vm *vm); =20 -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size); =20 struct msm_fence_context; @@ -61,12 +61,12 @@ struct msm_fence_context; struct msm_gem_vma { struct drm_mm_node node; uint64_t iova; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head list; /* node in msm_gem_object::vmas */ bool mapped; }; =20 -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace); +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); @@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj= , struct msm_gem_vma *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova); + struct msm_gem_vm *vm, uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end); int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); struct page 
**msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); @@ -160,10 +160,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct= drm_file *file, struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova); void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -257,7 +257,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm= _gem_submit.c index 3aabf7f1da6d..a59816b6b6de 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_de= vice *dev, =20 kref_init(&submit->ref); submit->dev =3D dev; - submit->aspace =3D queue->ctx->aspace; + submit->vm =3D queue->ctx->vm; submit->gpu =3D gpu; submit->cmd =3D (void *)&submit->bos[nr_bos]; submit->queue =3D queue; @@ -311,7 +311,7 @@ static int submit_pin_objects(struct msm_gem_submit *su= bmit) struct msm_gem_vma *vma; =20 /* if locking succeeded, pin bo: */ - vma =3D msm_gem_get_vma_locked(obj, submit->aspace); + vma =3D msm_gem_get_vma_locked(obj, submit->vm); if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); break; @@ -669,7 +669,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *= data, if (args->pad) return -EINVAL; 
=20 - if (unlikely(!ctx->aspace) && !capable(CAP_SYS_RAWIO)) { + if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); return -EPERM; } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_ge= m_vma.c index 11e842dda73c..9419692f0cc8 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -10,45 +10,44 @@ #include "msm_mmu.h" =20 static void -msm_gem_address_space_destroy(struct kref *kref) +msm_gem_vm_destroy(struct kref *kref) { - struct msm_gem_address_space *aspace =3D container_of(kref, - struct msm_gem_address_space, kref); - - drm_mm_takedown(&aspace->mm); - if (aspace->mmu) - aspace->mmu->funcs->destroy(aspace->mmu); - put_pid(aspace->pid); - kfree(aspace); + struct msm_gem_vm *vm =3D container_of(kref, struct msm_gem_vm, kref); + + drm_mm_takedown(&vm->mm); + if (vm->mmu) + vm->mmu->funcs->destroy(vm->mmu); + put_pid(vm->pid); + kfree(vm); } =20 =20 -void msm_gem_address_space_put(struct msm_gem_address_space *aspace) +void msm_gem_vm_put(struct msm_gem_vm *vm) { - if (aspace) - kref_put(&aspace->kref, msm_gem_address_space_destroy); + if (vm) + kref_put(&vm->kref, msm_gem_vm_destroy); } =20 -struct msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace) +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm) { - if (!IS_ERR_OR_NULL(aspace)) - kref_get(&aspace->kref); + if (!IS_ERR_OR_NULL(vm)) + kref_get(&vm->kref); =20 - return aspace; + return vm; } =20 /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace =3D vma->aspace; + struct msm_gem_vm *vm =3D vma->vm; unsigned size =3D vma->node.size; =20 /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; =20 - aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); =20 vma->mapped =3D false; } @@ -58,7 
+57,7 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_address_space *aspace =3D vma->aspace; + struct msm_gem_vm *vm =3D vma->vm; int ret; =20 if (GEM_WARN_ON(!vma->iova)) @@ -69,7 +68,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, =20 vma->mapped =3D true; =20 - if (!aspace) + if (!vm) return 0; =20 /* @@ -81,7 +80,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret =3D aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot); + ret =3D vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); =20 if (ret) { vma->mapped =3D false; @@ -93,21 +92,21 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace =3D vma->aspace; + struct msm_gem_vm *vm =3D vma->vm; =20 GEM_WARN_ON(vma->mapped); =20 - spin_lock(&aspace->lock); + spin_lock(&vm->lock); if (vma->iova) drm_mm_remove_node(&vma->node); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); =20 vma->iova =3D 0; =20 - msm_gem_address_space_put(aspace); + msm_gem_vm_put(vm); } =20 -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) { struct msm_gem_vma *vma; =20 @@ -115,7 +114,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_addr= ess_space *aspace) if (!vma) return NULL; =20 - vma->aspace =3D aspace; + vma->vm =3D vm; =20 return vma; } @@ -124,20 +123,20 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_ad= dress_space *aspace) int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end) { - struct msm_gem_address_space *aspace =3D vma->aspace; + struct msm_gem_vm *vm =3D vma->vm; int ret; =20 - if (GEM_WARN_ON(!aspace)) + if (GEM_WARN_ON(!vm)) return -EINVAL; =20 if 
(GEM_WARN_ON(vma->iova)) return -EBUSY; =20 - spin_lock(&aspace->lock); - ret =3D drm_mm_insert_node_in_range(&aspace->mm, &vma->node, + spin_lock(&vm->lock); + ret =3D drm_mm_insert_node_in_range(&vm->mm, &vma->node, size, PAGE_SIZE, 0, range_start, range_end, 0); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); =20 if (ret) return ret; @@ -145,33 +144,33 @@ int msm_gem_vma_init(struct msm_gem_vma *vma, int siz= e, vma->iova =3D vma->node.start; vma->mapped =3D false; =20 - kref_get(&aspace->kref); + kref_get(&vm->kref); =20 return 0; } =20 -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 if (IS_ERR(mmu)) return ERR_CAST(mmu); =20 - aspace =3D kzalloc(sizeof(*aspace), GFP_KERNEL); - if (!aspace) + vm =3D kzalloc(sizeof(*vm), GFP_KERNEL); + if (!vm) return ERR_PTR(-ENOMEM); =20 - spin_lock_init(&aspace->lock); - aspace->name =3D name; - aspace->mmu =3D mmu; - aspace->va_start =3D va_start; - aspace->va_size =3D size; + spin_lock_init(&vm->lock); + vm->name =3D name; + vm->mmu =3D mmu; + vm->va_start =3D va_start; + vm->va_size =3D size; =20 - drm_mm_init(&aspace->mm, va_start, size); + drm_mm_init(&vm->mm, va_start, size); =20 - kref_init(&aspace->kref); + kref_init(&vm->kref); =20 - return aspace; + return vm; } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index d786fcfad62f..0d466a2e9b32 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *= gpu, =20 if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info =3D &state->fault_info; - struct msm_mmu *mmu =3D submit->aspace->mmu; + struct msm_mmu *mmu =3D submit->vm->mmu; =20 msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -386,8 
+386,8 @@ static void recover_worker(struct kthread_work *work) =20 /* Increment the fault counts */ submit->queue->faults++; - if (submit->aspace) - submit->aspace->faults++; + if (submit->vm) + submit->vm->faults++; =20 get_comm_cmdline(submit, &comm, &cmd); =20 @@ -492,7 +492,7 @@ static void fault_worker(struct kthread_work *work) =20 resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); =20 mutex_unlock(&gpu->lock); } @@ -829,10 +829,10 @@ static int get_clocks(struct platform_device *pdev, s= truct msm_gpu *gpu) } =20 /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_stru= ct *task) +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_address_space *aspace =3D NULL; + struct msm_gem_vm *vm =3D NULL; if (!gpu) return NULL; =20 @@ -840,16 +840,16 @@ msm_gpu_create_private_address_space(struct msm_gpu *= gpu, struct task_struct *ta * If the target doesn't support private address spaces then return * the global one */ - if (gpu->funcs->create_private_address_space) { - aspace =3D gpu->funcs->create_private_address_space(gpu); - if (!IS_ERR(aspace)) - aspace->pid =3D get_pid(task_pid(task)); + if (gpu->funcs->create_private_vm) { + vm =3D gpu->funcs->create_private_vm(gpu); + if (!IS_ERR(vm)) + vm->pid =3D get_pid(task_pid(task)); } =20 - if (IS_ERR_OR_NULL(aspace)) - aspace =3D msm_gem_address_space_get(gpu->aspace); + if (IS_ERR_OR_NULL(vm)) + vm =3D msm_gem_vm_get(gpu->vm); =20 - return aspace; + return vm; } =20 int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, @@ -945,18 +945,18 @@ int msm_gpu_init(struct drm_device *drm, struct platf= orm_device *pdev, msm_devfreq_init(gpu); =20 =20 - gpu->aspace =3D 
gpu->funcs->create_address_space(gpu, pdev); + gpu->vm =3D gpu->funcs->create_vm(gpu, pdev); =20 - if (gpu->aspace =3D=3D NULL) + if (gpu->vm =3D=3D NULL) DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", nam= e); - else if (IS_ERR(gpu->aspace)) { - ret =3D PTR_ERR(gpu->aspace); + else if (IS_ERR(gpu->vm)) { + ret =3D PTR_ERR(gpu->vm); goto fail; } =20 memptrs =3D msm_gem_kernel_new(drm, sizeof(struct msm_rbmemptrs) * nr_rings, - check_apriv(gpu, MSM_BO_WC), gpu->aspace, &gpu->memptrs_bo, + check_apriv(gpu, MSM_BO_WC), gpu->vm, &gpu->memptrs_bo, &memptrs_iova); =20 if (IS_ERR(memptrs)) { @@ -1000,7 +1000,7 @@ int msm_gpu_init(struct drm_device *drm, struct platf= orm_device *pdev, gpu->rb[i] =3D NULL; } =20 - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); =20 platform_set_drvdata(pdev, NULL); return ret; @@ -1017,11 +1017,11 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) gpu->rb[i] =3D NULL; } =20 - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); =20 - if (!IS_ERR_OR_NULL(gpu->aspace)) { - gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu); - msm_gem_address_space_put(gpu->aspace); + if (!IS_ERR_OR_NULL(gpu->vm)) { + gpu->vm->mmu->funcs->detach(gpu->vm->mmu); + msm_gem_vm_put(gpu->vm); } =20 if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index c699ce0c557b..1f26ba00f773 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,10 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_address_space *(*create_address_space) - (struct msm_gpu *gpu, struct platform_device *pdev); - struct msm_gem_address_space *(*create_private_address_space) - (struct msm_gpu *gpu); + struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct 
platform_devi= ce *pdev); + struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); =20 /** @@ -236,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; =20 - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -364,8 +362,8 @@ struct msm_context { */ int queueid; =20 - /** @aspace: the per-process GPU address-space */ - struct msm_gem_address_space *aspace; + /** @vm: the per-process GPU address-space */ + struct msm_gem_vm *vm; =20 /** @kref: the reference count */ struct kref ref; @@ -675,8 +673,8 @@ int msm_gpu_init(struct drm_device *drm, struct platfor= m_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); =20 -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_stru= ct *task); +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); =20 void msm_gpu_cleanup(struct msm_gpu *gpu); =20 diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 35d5397e73b4..88504c4b842f 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned lo= ng iova, int flags, void return -ENOSYS; } =20 -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct msm_mmu *mmu; struct device *mdp_dev =3D dev->dev; struct device *mdss_dev =3D mdp_dev->parent; @@ -204,17 +204,17 @@ struct msm_gem_address_space *msm_kms_init_aspace(str= uct drm_device *dev) return NULL; } =20 - aspace =3D msm_gem_address_space_create(mmu, "mdp_kms", + vm =3D msm_gem_vm_create(mmu, "mdp_kms", 0x1000, 0x100000000 - 0x1000); - if 
(IS_ERR(aspace)) { - dev_err(mdp_dev, "aspace create, error %pe\n", aspace); + if (IS_ERR(vm)) { + dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu); - return aspace; + return vm; } =20 - msm_mmu_set_fault_handler(aspace->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); =20 - return aspace; + return vm; } =20 void msm_drm_kms_uninit(struct device *dev) diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index 43b58d052ee6..f45996a03e15 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; =20 /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; =20 /* disp snapshot support */ struct kthread_worker *dump_worker; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm= _ringbuffer.c index c5651c39ac2a..bbf8503f6bb5 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -84,7 +84,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu = *gpu, int id, =20 ring->start =3D msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ, check_apriv(gpu, MSM_BO_WC | MSM_BO_GPU_READONLY), - gpu->aspace, &ring->bo, &ring->iova); + gpu->vm, &ring->bo, &ring->iova); =20 if (IS_ERR(ring->start)) { ret =3D PTR_ERR(ring->start); @@ -131,7 +131,7 @@ void msm_ringbuffer_destroy(struct msm_ringbuffer *ring) =20 msm_fence_context_free(ring->fctx); =20 - msm_gem_kernel_put(ring->bo, ring->gpu->aspace); + msm_gem_kernel_put(ring->bo, ring->gpu->vm); =20 kfree(ring); } diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/ms= m_submitqueue.c index 1acc0fe36353..6298233c3568 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref) 
 			kfree(ctx->entities[i]);
 	}
 
-	msm_gem_address_space_put(ctx->aspace);
+	msm_gem_vm_put(ctx->vm);
 	kfree(ctx->comm);
 	kfree(ctx->cmdline);
 	kfree(ctx);
-- 
2.49.0

From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Rob Clark, Sean Paul,
	Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
	David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 07/42] drm/msm: Remove vram carveout support
Date: Wed, 25 Jun 2025 11:47:00 -0700
Message-ID: <20250625184918.124608-8-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

It is standing in the way of drm_gpuvm / VM_BIND support. Not to mention
frequently broken and rarely tested. And I think only needed for a 10yr
old not quite upstream SoC (msm8974). Maybe we can add support back in
later, but I'm doubtful.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c      |   8 --
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c      |  15 ---
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c      |  15 ---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c      |   3 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c      |   3 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c |   4 -
 drivers/gpu/drm/msm/adreno/adreno_gpu.c    |   4 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.h    |   1 -
 drivers/gpu/drm/msm/msm_drv.c              | 117 +-----------------
 drivers/gpu/drm/msm/msm_drv.h              |  11 --
 drivers/gpu/drm/msm/msm_gem.c              | 131 ++-------------------
 drivers/gpu/drm/msm/msm_gem.h              |   5 -
 drivers/gpu/drm/msm/msm_gem_submit.c       |   5 -
 drivers/gpu/drm/msm/msm_gpu.c              |   6 +-
 14 files changed, 19 insertions(+), 309 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 5eb063ed0b46..095bae92e3e8 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -551,14 +551,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
 	else
 		adreno_gpu->registers = a220_registers;
 
-	if (!gpu->vm) {
-		dev_err(dev->dev, "No memory protection without MMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	return gpu;
 
 fail:
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index 434e6ededf83..a956cd79195e 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -581,21 +581,6 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	if (!gpu->vm) {
-		/* TODO we think it is possible to configure the GPU to
-		 * restrict access to VRAM carveout. But the required
-		 * registers are unknown. For now just bail out and
-		 * limp along with just modesetting. If it turns out
-		 * to not be possible to restrict access, then we must
-		 * implement a cmdstream validator.
-		 */
-		DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
 	if (IS_ERR(icc_path)) {
 		ret = PTR_ERR(icc_path);
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index 2c75debcfd84..83f6329accba 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -695,21 +695,6 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
 
-	if (!gpu->vm) {
-		/* TODO we think it is possible to configure the GPU to
-		 * restrict access to VRAM carveout. But the required
-		 * registers are unknown. For now just bail out and
-		 * limp along with just modesetting. If it turns out
-		 * to not be possible to restrict access, then we must
-		 * implement a cmdstream validator.
-		 */
-		DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
 	if (IS_ERR(icc_path)) {
 		ret = PTR_ERR(icc_path);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index dc31bc0afca4..04138a06724b 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1786,8 +1786,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	if (gpu->vm)
-		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
+	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index d05c00624f74..f4d9cdbc5602 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2547,8 +2547,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	if (gpu->vm)
-		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
+	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
index f4552b8c6767..6b0390c38bff 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
@@ -16,10 +16,6 @@ bool snapshot_debugbus = false;
 MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)");
 module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600);
 
-bool allow_vram_carveout = false;
-MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU");
-module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600);
-
 int enable_preemption = -1;
 MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on, 0=disable, -1=auto (default))");
 module_param(enable_preemption, int, 0600);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index b01d9efb8663..35a99c81f7e0 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -209,7 +209,9 @@ adreno_iommu_create_vm(struct msm_gpu *gpu,
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
-	if (IS_ERR_OR_NULL(mmu))
+	if (!mmu)
+		return ERR_PTR(-ENODEV);
+	else if (IS_ERR_OR_NULL(mmu))
 		return ERR_CAST(mmu);
 
 	geometry = msm_iommu_get_geometry(mmu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 258c5c6dde2e..bbd7e664286e 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -18,7 +18,6 @@
 #include "adreno_pm4.xml.h"
 
 extern bool snapshot_debugbus;
-extern bool allow_vram_carveout;
 
 enum {
 	ADRENO_FW_PM4 = 0,
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 903abf3532e0..978f1d355b42 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -46,12 +46,6 @@
 #define MSM_VERSION_MINOR	12
 #define MSM_VERSION_PATCHLEVEL	0
 
-static void msm_deinit_vram(struct drm_device *ddev);
-
-static char *vram = "16m";
-MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
-module_param(vram, charp, 0);
-
 bool dumpstate;
 MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors");
 module_param(dumpstate, bool, 0600);
@@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev)
 	if (priv->kms)
 		msm_drm_kms_uninit(dev);
 
-	msm_deinit_vram(ddev);
-
 	component_unbind_all(dev, ddev);
 
 	ddev->dev_private = NULL;
@@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev)
 	return 0;
 }
 
-bool msm_use_mmu(struct drm_device *dev)
-{
-	struct msm_drm_private *priv = dev->dev_private;
-
-	/*
-	 * a2xx comes with its own MMU
-	 * On other platforms IOMMU can be declared specified either for the
-	 * MDP/DPU device or for its parent, MDSS device.
-	 */
-	return priv->is_a2xx ||
-		device_iommu_mapped(dev->dev) ||
-		device_iommu_mapped(dev->dev->parent);
-}
-
-static int msm_init_vram(struct drm_device *dev)
-{
-	struct msm_drm_private *priv = dev->dev_private;
-	struct device_node *node;
-	unsigned long size = 0;
-	int ret = 0;
-
-	/* In the device-tree world, we could have a 'memory-region'
-	 * phandle, which gives us a link to our "vram".  Allocating
-	 * is all nicely abstracted behind the dma api, but we need
-	 * to know the entire size to allocate it all in one go.
There - * are two cases: - * 1) device with no IOMMU, in which case we need exclusive - * access to a VRAM carveout big enough for all gpu - * buffers - * 2) device with IOMMU, but where the bootloader puts up - * a splash screen. In this case, the VRAM carveout - * need only be large enough for fbdev fb. But we need - * exclusive access to the buffer to avoid the kernel - * using those pages for other purposes (which appears - * as corruption on screen before we have a chance to - * load and do initial modeset) - */ - - node =3D of_parse_phandle(dev->dev->of_node, "memory-region", 0); - if (node) { - struct resource r; - ret =3D of_address_to_resource(node, 0, &r); - of_node_put(node); - if (ret) - return ret; - size =3D r.end - r.start + 1; - DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start); - - /* if we have no IOMMU, then we need to use carveout allocator. - * Grab the entire DMA chunk carved out in early startup in - * mach-msm: - */ - } else if (!msm_use_mmu(dev)) { - DRM_INFO("using %s VRAM carveout\n", vram); - size =3D memparse(vram, NULL); - } - - if (size) { - unsigned long attrs =3D 0; - void *p; - - priv->vram.size =3D size; - - drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1); - spin_lock_init(&priv->vram.lock); - - attrs |=3D DMA_ATTR_NO_KERNEL_MAPPING; - attrs |=3D DMA_ATTR_WRITE_COMBINE; - - /* note that for no-kernel-mapping, the vaddr returned - * is bogus, but non-null if allocation succeeded: - */ - p =3D dma_alloc_attrs(dev->dev, size, - &priv->vram.paddr, GFP_KERNEL, attrs); - if (!p) { - DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n"); - priv->vram.paddr =3D 0; - return -ENOMEM; - } - - DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n", - (uint32_t)priv->vram.paddr, - (uint32_t)(priv->vram.paddr + size)); - } - - return ret; -} - -static void msm_deinit_vram(struct drm_device *ddev) -{ - struct msm_drm_private *priv =3D ddev->dev_private; - unsigned long attrs =3D DMA_ATTR_NO_KERNEL_MAPPING; - - if (!priv->vram.paddr) - 
return; - - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, - attrs); -} - static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv =3D dev_get_drvdata(dev); @@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const str= uct drm_driver *drv) goto err_destroy_wq; } =20 - ret =3D msm_init_vram(ddev); - if (ret) - goto err_destroy_wq; - dma_set_max_seg_size(dev, UINT_MAX); =20 /* Bind all our sub-components: */ ret =3D component_bind_all(dev, ddev); if (ret) - goto err_deinit_vram; + goto err_destroy_wq; =20 ret =3D msm_gem_shrinker_init(ddev); if (ret) @@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struc= t drm_driver *drv) =20 return ret; =20 -err_deinit_vram: - msm_deinit_vram(ddev); err_destroy_wq: destroy_workqueue(priv->wq); err_put_dev: diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 0e675c9a7f83..ad509403f072 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -183,17 +183,6 @@ struct msm_drm_private { =20 struct msm_drm_thread event_thread[MAX_CRTCS]; =20 - /* VRAM carveout, used when no IOMMU: */ - struct { - unsigned long size; - dma_addr_t paddr; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: - */ - struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ - } vram; - struct notifier_block vmap_notifier; struct shrinker *shrinker; =20 diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 07a30d29248c..621fb4e17a2e 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,24 +17,8 @@ #include =20 #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_gpu.h" -#include "msm_mmu.h" - -static dma_addr_t physaddr(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); 
- struct msm_drm_private *priv =3D obj->dev->dev_private; - return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) + - priv->vram.paddr; -} - -static bool use_pages(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - return !msm_obj->vram_node; -} =20 static int pgprot =3D 0; module_param(pgprot, int, 0600); @@ -139,36 +123,6 @@ static void update_lru(struct drm_gem_object *obj) mutex_unlock(&priv->lru.lock); } =20 -/* allocate pages from VRAM carveout, used when no IOMMU: */ -static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - struct msm_drm_private *priv =3D obj->dev->dev_private; - dma_addr_t paddr; - struct page **p; - int ret, i; - - p =3D kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!p) - return ERR_PTR(-ENOMEM); - - spin_lock(&priv->vram.lock); - ret =3D drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages); - spin_unlock(&priv->vram.lock); - if (ret) { - kvfree(p); - return ERR_PTR(ret); - } - - paddr =3D physaddr(obj); - for (i =3D 0; i < npages; i++) { - p[i] =3D pfn_to_page(__phys_to_pfn(paddr)); - paddr +=3D PAGE_SIZE; - } - - return p; -} - static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); @@ -180,10 +134,7 @@ static struct page **get_pages(struct drm_gem_object *= obj) struct page **p; int npages =3D obj->size >> PAGE_SHIFT; =20 - if (use_pages(obj)) - p =3D drm_gem_get_pages(obj); - else - p =3D get_pages_vram(obj, npages); + p =3D drm_gem_get_pages(obj); =20 if (IS_ERR(p)) { DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n", @@ -216,18 +167,6 @@ static struct page **get_pages(struct drm_gem_object *= obj) return msm_obj->pages; } =20 -static void put_pages_vram(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj =3D to_msm_bo(obj); - struct msm_drm_private *priv =3D obj->dev->dev_private; - - spin_lock(&priv->vram.lock); - 
drm_mm_remove_node(msm_obj->vram_node); - spin_unlock(&priv->vram.lock); - - kvfree(msm_obj->pages); -} - static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); @@ -248,10 +187,7 @@ static void put_pages(struct drm_gem_object *obj) =20 update_device_mem(obj->dev->dev_private, -obj->size); =20 - if (use_pages(obj)) - drm_gem_put_pages(obj, msm_obj->pages, true, false); - else - put_pages_vram(obj); + drm_gem_put_pages(obj, msm_obj->pages, true, false); =20 msm_obj->pages =3D NULL; update_lru(obj); @@ -1215,19 +1151,10 @@ struct drm_gem_object *msm_gem_new(struct drm_devic= e *dev, uint32_t size, uint32 struct msm_drm_private *priv =3D dev->dev_private; struct msm_gem_object *msm_obj; struct drm_gem_object *obj =3D NULL; - bool use_vram =3D false; int ret; =20 size =3D PAGE_ALIGN(size); =20 - if (!msm_use_mmu(dev)) - use_vram =3D true; - else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size) - use_vram =3D true; - - if (GEM_WARN_ON(use_vram && !priv->vram.size)) - return ERR_PTR(-EINVAL); - /* Disallow zero sized objects as they make the underlying * infrastructure grumpy */ @@ -1240,44 +1167,16 @@ struct drm_gem_object *msm_gem_new(struct drm_devic= e *dev, uint32_t size, uint32 =20 msm_obj =3D to_msm_bo(obj); =20 - if (use_vram) { - struct msm_gem_vma *vma; - struct page **pages; - - drm_gem_private_object_init(dev, obj, size); - - msm_gem_lock(obj); - - vma =3D add_vma(obj, NULL); - msm_gem_unlock(obj); - if (IS_ERR(vma)) { - ret =3D PTR_ERR(vma); - goto fail; - } - - to_msm_bo(obj)->vram_node =3D &vma->node; - - msm_gem_lock(obj); - pages =3D get_pages(obj); - msm_gem_unlock(obj); - if (IS_ERR(pages)) { - ret =3D PTR_ERR(pages); - goto fail; - } - - vma->iova =3D physaddr(obj); - } else { - ret =3D drm_gem_object_init(dev, obj, size); - if (ret) - goto fail; - /* - * Our buffers are kept pinned, so allocating them from the - * MOVABLE zone is a really bad idea, and conflicts with CMA. 
- * See comments above new_inode() why this is required _and_ - * expected if you're going to pin these pages. - */ - mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); - } + ret =3D drm_gem_object_init(dev, obj, size); + if (ret) + goto fail; + /* + * Our buffers are kept pinned, so allocating them from the + * MOVABLE zone is a really bad idea, and conflicts with CMA. + * See comments above new_inode() why this is required _and_ + * expected if you're going to pin these pages. + */ + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); =20 drm_gem_lru_move_tail(&priv->lru.unbacked, obj); =20 @@ -1305,12 +1204,6 @@ struct drm_gem_object *msm_gem_import(struct drm_dev= ice *dev, uint32_t size; int ret, npages; =20 - /* if we don't have IOMMU, don't bother pretending we can import: */ - if (!msm_use_mmu(dev)) { - DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n"); - return ERR_PTR(-EINVAL); - } - size =3D PAGE_ALIGN(dmabuf->size); =20 ret =3D msm_gem_new_impl(dev, size, MSM_BO_WC, &obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d2f39a371373..c16b11182831 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -102,11 +102,6 @@ struct msm_gem_object { =20 struct list_head vmas; /* list of msm_gem_vma */ =20 - /* For physically contiguous buffers. Used when we don't have - * an IOMMU. Also used for stolen/splashscreen buffer. 
- */ - struct drm_mm_node *vram_node; - char name[32]; /* Identifier to print for the debugfs files */ =20 /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm= _gem_submit.c index a59816b6b6de..c184b1a1f522 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -669,11 +669,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void = *data, if (args->pad) return -EINVAL; =20 - if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { - DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); - return -EPERM; - } - /* for now, we just have 3d pipe.. eventually this would need to * be more clever to dispatch to appropriate gpu module: */ diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 0d466a2e9b32..b30800f80120 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -944,12 +944,8 @@ int msm_gpu_init(struct drm_device *drm, struct platfo= rm_device *pdev, =20 msm_devfreq_init(gpu); =20 - gpu->vm =3D gpu->funcs->create_vm(gpu, pdev); - - if (gpu->vm =3D=3D NULL) - DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", nam= e); - else if (IS_ERR(gpu->vm)) { + if (IS_ERR(gpu->vm)) { ret =3D PTR_ERR(gpu->vm); goto fail; } --=20 2.49.0 From nobody Wed Oct 8 17:34:32 2025 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AE7292DBF66 for ; Wed, 25 Jun 2025 18:58:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.168.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750877910; cv=none; 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar,
	Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 08/42] drm/msm: Collapse vma allocation and initialization
Date: Wed, 25 Jun 2025 11:47:01 -0700
Message-ID: <20250625184918.124608-9-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
From: Rob Clark

Now that we've dropped vram carveout support, we can collapse vma
allocation and initialization.  This better matches how things work
with drm_gpuvm.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c     | 30 +++----------------------
 drivers/gpu/drm/msm/msm_gem.h     |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c | 36 +++++++++++++------------------
 3 files changed, 20 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 621fb4e17a2e..29247911f048 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -337,23 +337,6 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
 	return offset;
 }
 
-static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma;
-
-	msm_gem_assert_locked(obj);
-
-	vma = msm_gem_vma_new(vm);
-	if (!vma)
-		return ERR_PTR(-ENOMEM);
-
-	list_add_tail(&vma->list, &msm_obj->vmas);
-
-	return vma;
-}
-
 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm)
 {
@@ -420,6 +403,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm, u64 range_start, u64 range_end)
 {
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
 	msm_gem_assert_locked(obj);
@@ -427,18 +411,10 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
-		int ret;
-
-		vma = add_vma(obj, vm);
+		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
 		if (IS_ERR(vma))
 			return vma;
-
-		ret = msm_gem_vma_init(vma, obj->size,
-			range_start, range_end);
-		if (ret) {
-			del_vma(vma);
-			return ERR_PTR(ret);
-		}
+		list_add_tail(&vma->list, &msm_obj->vmas);
 	} else {
 		GEM_WARN_ON(vma->iova < range_start);
 		GEM_WARN_ON((vma->iova + obj->size) > range_end);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index c16b11182831..9bd78642671c 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -66,8 +66,8 @@ struct msm_gem_vma {
 	bool mapped;
 };
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm);
-int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
+struct msm_gem_vma *
+msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 		u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct msm_gem_vma *vma);
 int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 9419692f0cc8..6d18364f321c 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -106,47 +106,41 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 	msm_gem_vm_put(vm);
 }
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm)
+/* Create a new vma and allocate an iova for it */
+struct msm_gem_vma *
+msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+		u64 range_start, u64 range_end)
 {
 	struct msm_gem_vma *vma;
+	int ret;
 
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	vma->vm = vm;
 
-	return vma;
-}
-
-/* Initialize a new vma and allocate an iova for it */
-int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
-		u64 range_start, u64 range_end)
-{
-	struct msm_gem_vm *vm = vma->vm;
-	int ret;
-
-	if (GEM_WARN_ON(!vm))
-		return -EINVAL;
-
-	if (GEM_WARN_ON(vma->iova))
-		return -EBUSY;
-
 	spin_lock(&vm->lock);
 	ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
-					  size, PAGE_SIZE, 0,
+					  obj->size, PAGE_SIZE, 0,
 					  range_start, range_end, 0);
 	spin_unlock(&vm->lock);
 
 	if (ret)
-		return ret;
+		goto err_free_vma;
 
 	vma->iova = vma->node.start;
 	vma->mapped = false;
 
+	INIT_LIST_HEAD(&vma->list);
+
 	kref_get(&vm->kref);
 
-	return 0;
+	return vma;
+
+err_free_vma:
+	kfree(vma);
+	return ERR_PTR(ret);
 }
 
 struct msm_gem_vm *
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar,
	Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 09/42] drm/msm: Collapse vma close and delete
Date: Wed, 25 Jun 2025 11:47:02 -0700
Message-ID: <20250625184918.124608-10-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
From: Rob Clark <robin.clark@oss.qualcomm.com>

This fits better with drm_gpuvm/drm_gpuva.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c     | 16 +++-------------
 drivers/gpu/drm/msm/msm_gem_vma.c |  2 ++
 2 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 29247911f048..4c10eca404e0 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -353,15 +353,6 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 	return NULL;
 }
 
-static void del_vma(struct msm_gem_vma *vma)
-{
-	if (!vma)
-		return;
-
-	list_del(&vma->list);
-	kfree(vma);
-}
-
 /*
  * If close is true, this also closes the VMA (releasing the allocated
  * iova range) in addition to removing the iommu mapping.  In the eviction
@@ -372,11 +363,11 @@ static void
 put_iova_spaces(struct drm_gem_object *obj, bool close)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma;
+	struct msm_gem_vma *vma, *tmp;
 
 	msm_gem_assert_locked(obj);
 
-	list_for_each_entry(vma, &msm_obj->vmas, list) {
+	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
 		if (vma->vm) {
 			msm_gem_vma_purge(vma);
 			if (close)
@@ -395,7 +386,7 @@ put_iova_vmas(struct drm_gem_object *obj)
 	msm_gem_assert_locked(obj);
 
 	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
-		del_vma(vma);
+		msm_gem_vma_close(vma);
 	}
 }
 
@@ -564,7 +555,6 @@ static int clear_iova(struct drm_gem_object *obj,
 
 	msm_gem_vma_purge(vma);
 	msm_gem_vma_close(vma);
-	del_vma(vma);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 6d18364f321c..ca29e81d79d2 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -102,8 +102,10 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 	spin_unlock(&vm->lock);
 
 	vma->iova = 0;
+	list_del(&vma->list);
 
 	msm_gem_vm_put(vm);
+	kfree(vma);
 }
 
 /* Create a new vma and allocate an iova for it */
-- 
2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark <robin.clark@oss.qualcomm.com>
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar,
	Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
	Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 10/42] drm/msm: Don't close VMAs on purge
Date: Wed, 25 Jun 2025 11:47:03 -0700
Message-ID: <20250625184918.124608-11-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
From: Rob Clark <robin.clark@oss.qualcomm.com>

Previously we'd also tear down the VMA, making the address space
available again.  But with the drm_gpuvm conversion, this would require
holding the locks of all VMs the GEM object is mapped in, which is
problematic for the shrinker.  Instead just let the VMA hang around
until the GEM object is freed.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 4c10eca404e0..50b866dcf439 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -763,7 +763,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	GEM_WARN_ON(!is_purgeable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, true);
+	put_iova_spaces(obj, false);
 
 	msm_gem_vunmap(obj);
 
-- 
2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark <robin.clark@oss.qualcomm.com>
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar,
	Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
	Simona Vetter, Jessica Zhang, Jun Nie,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 11/42] drm/msm: Stop passing vm to msm_framebuffer
Date: Wed, 25 Jun 2025 11:47:04 -0700
Message-ID: <20250625184918.124608-12-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

The fb only deals with kms->vm, so make that explicit.  This will let
us start refcounting the number of times the fb is pinned, so that we
only unpin the vma after the last user of the fb is done.  Having a
single reference count really only works if there is only a single vm.
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 11 +++-------
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 +++++++----------
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h |  3 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c   | 20 ++++++-------------
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h   |  2 --
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c  | 18 ++++++-----------
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c  | 18 ++++++-----------
 drivers/gpu/drm/msm/msm_drv.h               |  9 +++------
 drivers/gpu/drm/msm/msm_fb.c                | 15 +++++++-------
 9 files changed, 39 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index 32e208ee946d..9a54da1c9e3c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -566,7 +566,6 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc,
 		struct drm_writeback_job *job)
 {
 	const struct msm_format *format;
-	struct msm_gem_vm *vm;
 	struct dpu_hw_wb_cfg *wb_cfg;
 	int ret;
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
@@ -576,13 +575,12 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc,
 
 	wb_enc->wb_job = job;
 	wb_enc->wb_conn = job->connector;
-	vm = phys_enc->dpu_kms->base.vm;
 
 	wb_cfg = &wb_enc->wb_cfg;
 
 	memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg));
 
-	ret = msm_framebuffer_prepare(job->fb, vm, false);
+	ret = msm_framebuffer_prepare(job->fb, false);
 	if (ret) {
 		DPU_ERROR("prep fb failed, %d\n", ret);
 		return;
@@ -596,7 +594,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc,
 		return;
 	}
 
-	dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest);
+	dpu_format_populate_addrs(job->fb, &wb_cfg->dest);
 
 	wb_cfg->dest.width = job->fb->width;
 	wb_cfg->dest.height = job->fb->height;
@@ -619,14 +617,11 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc,
 		struct drm_writeback_job *job)
 {
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
-	struct msm_gem_vm *vm;
 
 	if (!job->fb)
 		return;
 
-	vm = phys_enc->dpu_kms->base.vm;
-
-	msm_framebuffer_cleanup(job->fb, vm, false);
+	msm_framebuffer_cleanup(job->fb, false);
 	wb_enc->wb_job = NULL;
 	wb_enc->wb_conn = NULL;
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
index d115b79af771..b0d585c5315c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
@@ -274,15 +274,14 @@ int dpu_format_populate_plane_sizes(
 	return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout);
 }
 
-static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
-					    struct drm_framebuffer *fb,
+static void _dpu_format_populate_addrs_ubwc(struct drm_framebuffer *fb,
 					    struct dpu_hw_fmt_layout *layout)
 {
 	const struct msm_format *fmt;
 	uint32_t base_addr = 0;
 	bool meta;
 
-	base_addr = msm_framebuffer_iova(fb, vm, 0);
+	base_addr = msm_framebuffer_iova(fb, 0);
 
 	fmt = msm_framebuffer_format(fb);
 	meta = MSM_FORMAT_IS_UBWC(fmt);
@@ -355,26 +354,23 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
 	}
 }
 
-static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm,
-					      struct drm_framebuffer *fb,
+static void _dpu_format_populate_addrs_linear(struct drm_framebuffer *fb,
 					      struct dpu_hw_fmt_layout *layout)
 {
 	unsigned int i;
 
 	/* Populate addresses for simple formats here */
 	for (i = 0; i < layout->num_planes; ++i)
-		layout->plane_addr[i] = msm_framebuffer_iova(fb, vm, i);
+		layout->plane_addr[i] = msm_framebuffer_iova(fb, i);
 }
 
 /**
  * dpu_format_populate_addrs - populate buffer addresses based on
  *                             mmu, fb, and format found in the fb
- * @vm: address space pointer
  * @fb: framebuffer pointer
  * @layout: format layout structure to populate
  */
-void dpu_format_populate_addrs(struct msm_gem_vm *vm,
-			       struct drm_framebuffer *fb,
+void dpu_format_populate_addrs(struct drm_framebuffer *fb,
 			       struct dpu_hw_fmt_layout *layout)
 {
 	const struct msm_format *fmt;
@@ -384,7 +380,7 @@ void dpu_format_populate_addrs(struct msm_gem_vm *vm,
 
 	/* Populate the addresses given the fb */
 	if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt))
-		_dpu_format_populate_addrs_ubwc(vm, fb, layout);
+		_dpu_format_populate_addrs_ubwc(fb, layout);
 	else
-		_dpu_format_populate_addrs_linear(vm, fb, layout);
+		_dpu_format_populate_addrs_linear(fb, layout);
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
index 989f3e13c497..dc03f522e616 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
@@ -31,8 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats,
 	return false;
 }
 
-void dpu_format_populate_addrs(struct msm_gem_vm *vm,
-			       struct drm_framebuffer *fb,
+void dpu_format_populate_addrs(struct drm_framebuffer *fb,
 			       struct dpu_hw_fmt_layout *layout);
 
 int dpu_format_populate_plane_sizes(
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 2640ab9e6e90..8f5f7cc27215 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -646,7 +646,6 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	struct drm_framebuffer *fb = new_state->fb;
 	struct dpu_plane *pdpu = to_dpu_plane(plane);
 	struct dpu_plane_state *pstate = to_dpu_plane_state(new_state);
-	struct dpu_kms *kms = _dpu_plane_get_kms(&pdpu->base);
 	int ret;
 
 	if (!new_state->fb)
@@ -654,9 +653,6 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id);
 
-	/* cache vm */
-	pstate->vm = kms->base.vm;
-
 	/*
 	 * TODO: Need to sort out the msm_framebuffer_prepare() call below so
 	 * we can use msm_atomic_prepare_fb() instead of doing the
@@ -664,13 +660,10 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	 */
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	if (pstate->vm) {
-		ret = msm_framebuffer_prepare(new_state->fb,
-				pstate->vm, pstate->needs_dirtyfb);
-		if (ret) {
-			DPU_ERROR("failed to prepare framebuffer\n");
-			return ret;
-		}
+	ret = msm_framebuffer_prepare(new_state->fb, pstate->needs_dirtyfb);
+	if (ret) {
+		DPU_ERROR("failed to prepare framebuffer\n");
+		return ret;
 	}
 
 	return 0;
@@ -689,8 +682,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plane,
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id);
 
-	msm_framebuffer_cleanup(old_state->fb, old_pstate->vm,
-				old_pstate->needs_dirtyfb);
+	msm_framebuffer_cleanup(old_state->fb, old_pstate->needs_dirtyfb);
 }
 
 static int dpu_plane_check_inline_rotation(struct dpu_plane *pdpu,
@@ -1353,7 +1345,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane,
 	pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe);
 	pdpu->is_rt_pipe = is_rt_pipe;
 
-	dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout);
+	dpu_format_populate_addrs(new_state->fb, &pstate->layout);
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT
 			", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
index 3578f52048a5..a3a6e9028333 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
@@ -17,7 +17,6 @@
 /**
  * struct dpu_plane_state: Define dpu extension of drm plane state object
  * @base: base drm plane state object
- * @vm: pointer to address space for input/output buffers
  * @pipe: software pipe description
  * @r_pipe: software pipe description of the second pipe
  * @pipe_cfg: software pipe configuration
@@ -34,7 +33,6 @@
  */
 struct dpu_plane_state {
 	struct drm_plane_state base;
-	struct msm_gem_vm *vm;
 	struct dpu_sw_pipe pipe;
 	struct dpu_sw_pipe r_pipe;
 	struct dpu_sw_pipe_cfg pipe_cfg;
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
index 7743be6167f8..098c3b5ff2b2 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
@@ -79,30 +79,25 @@ static const struct drm_plane_funcs mdp4_plane_funcs = {
 static int mdp4_plane_prepare_fb(struct drm_plane *plane,
		struct drm_plane_state *new_state)
 {
-	struct msm_drm_private *priv = plane->dev->dev_private;
-	struct msm_kms *kms = priv->kms;
-
 	if (!new_state->fb)
 		return 0;
 
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	return msm_framebuffer_prepare(new_state->fb, kms->vm, false);
+	return msm_framebuffer_prepare(new_state->fb, false);
 }
 
 static void mdp4_plane_cleanup_fb(struct drm_plane *plane,
		struct drm_plane_state *old_state)
 {
 	struct mdp4_plane *mdp4_plane = to_mdp4_plane(plane);
-	struct mdp4_kms *mdp4_kms = get_kms(plane);
-	struct msm_kms *kms = &mdp4_kms->base.base;
 	struct drm_framebuffer *fb = old_state->fb;
 
 	if (!fb)
 		return;
 
 	DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id);
-	msm_framebuffer_cleanup(fb, kms->vm, false);
+	msm_framebuffer_cleanup(fb, false);
 }
 
 
@@ -141,7 +136,6 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane,
 {
 	struct mdp4_plane *mdp4_plane = to_mdp4_plane(plane);
 	struct mdp4_kms *mdp4_kms = get_kms(plane);
-	struct msm_kms *kms = &mdp4_kms->base.base;
 	enum mdp4_pipe pipe = mdp4_plane->pipe;
 
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRC_STRIDE_A(pipe),
@@ -153,13 +147,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane,
			MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
 
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 0));
+			msm_framebuffer_iova(fb, 0));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 1));
+			msm_framebuffer_iova(fb, 1));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 2));
+			msm_framebuffer_iova(fb, 2));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 3));
+			msm_framebuffer_iova(fb, 3));
 }
 
 static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms,
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
index 9f68a4747203..7c790406d533 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
@@ -135,8 +135,6 @@ static const struct drm_plane_funcs mdp5_plane_funcs = {
 static int mdp5_plane_prepare_fb(struct drm_plane *plane,
		struct drm_plane_state *new_state)
 {
-	struct msm_drm_private *priv = plane->dev->dev_private;
-	struct msm_kms *kms = priv->kms;
 	bool needs_dirtyfb = to_mdp5_plane_state(new_state)->needs_dirtyfb;
 
 	if (!new_state->fb)
@@ -144,14 +142,12 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plane,
 
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb);
+	return msm_framebuffer_prepare(new_state->fb, needs_dirtyfb);
 }
 
 static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
		struct drm_plane_state *old_state)
 {
-	struct mdp5_kms *mdp5_kms = get_kms(plane);
-	struct msm_kms *kms = &mdp5_kms->base.base;
 	struct drm_framebuffer *fb = old_state->fb;
 	bool needed_dirtyfb = to_mdp5_plane_state(old_state)->needs_dirtyfb;
 
@@ -159,7 +155,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
 		return;
 
 	DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id);
-	msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb);
+	msm_framebuffer_cleanup(fb, needed_dirtyfb);
 }
 
 static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state,
@@ -467,8 +463,6 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms,
		enum mdp5_pipe pipe,
		struct drm_framebuffer *fb)
 {
-	struct msm_kms *kms = &mdp5_kms->base.base;
-
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_STRIDE_A(pipe),
			MDP5_PIPE_SRC_STRIDE_A_P0(fb->pitches[0]) |
			MDP5_PIPE_SRC_STRIDE_A_P1(fb->pitches[1]));
@@ -478,13 +472,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms,
			MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
 
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 0));
+			msm_framebuffer_iova(fb, 0));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 1));
+			msm_framebuffer_iova(fb, 1));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 2));
+			msm_framebuffer_iova(fb, 2));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 3));
+			msm_framebuffer_iova(fb, 3));
 }
 
 /* Note: mdp5_plane->pipe_lock must be locked */
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index ad509403f072..e4c57deaa1f9 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -251,12 +251,9 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 int msm_gem_prime_pin(struct drm_gem_object *obj);
 void msm_gem_prime_unpin(struct drm_gem_object *obj);
 
-int msm_framebuffer_prepare(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, bool needs_dirtyfb);
-void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, bool needed_dirtyfb);
-uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, int plane);
+int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb);
+void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb);
+uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane);
 struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane);
 const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb);
 struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 6df318b73534..8a3b88130f4d 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -75,10 +75,10 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 
 /* prepare/pin all the fb's bo's for scanout.
 */
-int msm_framebuffer_prepare(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm,
-		bool needs_dirtyfb)
+int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 {
+	struct msm_drm_private *priv = fb->dev->dev_private;
+	struct msm_gem_vm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int ret, i, n = fb->format->num_planes;
 
@@ -98,10 +98,10 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
 	return 0;
 }
 
-void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm,
-		bool needed_dirtyfb)
+void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 {
+	struct msm_drm_private *priv = fb->dev->dev_private;
+	struct msm_gem_vm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int i, n = fb->format->num_planes;
 
@@ -115,8 +115,7 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
 	memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 }
 
-uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, int plane)
+uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	return msm_fb->iova[plane] + fb->offsets[plane];
-- 
2.49.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 12/42] drm/msm: Refcount framebuffer pins
Date: Wed, 25 Jun 2025 11:47:05 -0700
Message-ID: <20250625184918.124608-13-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

We were already keeping a refcount of # of prepares (pins), to clear
the iova array.  Use that to avoid unpinning the iova until the last
cleanup (unpin).  This way, when msm_gem_unpin_iova() actually tears
down the mapping, we won't have problems if the fb is being scanned
out on another display (for example).
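[Editor's note: the refcounted prepare/cleanup pattern described above can be
sketched in plain user-space C.  This is a minimal illustration only, not the
driver code: the types and names (fake_fb, fb_prepare, fb_cleanup) are
invented, and the real patch uses atomic_inc_return()/atomic_dec_return() and
msm_gem_get_and_pin_iova() instead.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define MAX_PLANES 4

/* Hypothetical stand-in for msm_framebuffer: a per-fb prepare count plus
 * the cached per-plane addresses that the last cleanup must clear. */
struct fake_fb {
	atomic_int prepare_count;  /* # of outstanding prepares (pins) */
	int pin_calls;             /* how many times we actually pinned */
	uint32_t iova[MAX_PLANES]; /* cached per-plane addresses */
	int nplanes;
};

/* Only the first prepare actually pins; later ones just bump the count. */
int fb_prepare(struct fake_fb *fb)
{
	if (atomic_fetch_add(&fb->prepare_count, 1) + 1 > 1)
		return 0;                         /* already pinned */
	fb->pin_calls++;
	for (int i = 0; i < fb->nplanes; i++)
		fb->iova[i] = 0x1000u * (i + 1);  /* pretend-pin */
	return 0;
}

/* Only the last cleanup tears down the mapping and clears the cache. */
void fb_cleanup(struct fake_fb *fb)
{
	if (atomic_fetch_sub(&fb->prepare_count, 1) - 1 > 0)
		return;                           /* still scanned out elsewhere */
	for (int i = 0; i < fb->nplanes; i++)
		fb->iova[i] = 0;                  /* pretend-unpin */
}
```

The point of the reordering in the patch below is visible here: an fb prepared
twice (e.g. scanned out on two displays) survives the first cleanup with its
mapping intact.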
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_fb.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 8a3b88130f4d..3b17d83f6673 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -85,7 +85,8 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 	if (needs_dirtyfb)
 		refcount_inc(&msm_fb->dirtyfb);
 
-	atomic_inc(&msm_fb->prepare_count);
+	if (atomic_inc_return(&msm_fb->prepare_count) > 1)
+		return 0;
 
 	for (i = 0; i < n; i++) {
 		ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]);
@@ -108,11 +109,13 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 	if (needed_dirtyfb)
 		refcount_dec(&msm_fb->dirtyfb);
 
+	if (atomic_dec_return(&msm_fb->prepare_count))
+		return;
+
+	memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
+
 	for (i = 0; i < n; i++)
 		msm_gem_unpin_iova(fb->obj[i], vm);
-
-	if (!atomic_dec_return(&msm_fb->prepare_count))
-		memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 }
 
 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane)
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 13/42] drm/msm: drm_gpuvm conversion
Date: Wed, 25 Jun 2025 11:47:06 -0700
Message-ID: <20250625184918.124608-14-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Now that we've realigned deletion and allocation, switch over to using
drm_gpuvm/drm_gpuva.  This allows us to support multiple VMAs per BO per
VM, to allow mapping different parts of a single BO at different virtual
addresses, which is a key requirement for sparse/VM_BIND.

This prepares us for using drm_gpuvm to translate a batch of MAP/
MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/
unmap steps for updating the page tables.

Unlike our prior vm/vma setup, with drm_gpuvm the vm_bo holds a
reference to the GEM object.  To prevent reference loops causing us to
leak all GEM objects, we implicitly tear down the mapping when the GEM
handle is closed or when the obj is unpinned.  This means the submit
needs to also hold a reference to the vm_bo, to prevent the VMA from
being torn down while the submit is in-flight.
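[Editor's note: the reference-loop problem described above can be sketched
with a tiny refcount model in plain C.  This is an illustration only, not the
driver code: gem_obj, vm_bo, vm_bo_map and handle_close are invented names.
The vm_bo holds a reference on the GEM object (the drm_gpuvm rule), so the
object could never die while mapped; tearing the mapping down at handle close
is what breaks the loop.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of the reference graph. */
struct gem_obj {
	int refcount;
	int freed;
};

struct vm_bo {
	struct gem_obj *obj;  /* vm_bo -> obj reference (keeps obj alive) */
	int mapped;
};

static void obj_get(struct gem_obj *o) { o->refcount++; }

static void obj_put(struct gem_obj *o)
{
	if (--o->refcount == 0)
		o->freed = 1;     /* last reference dropped: object freed */
}

static void vm_bo_map(struct vm_bo *vb, struct gem_obj *o)
{
	vb->obj = o;
	vb->mapped = 1;
	obj_get(o);           /* the vm_bo pins the object's lifetime */
}

/* Handle close implicitly tears down the mapping, dropping the vm_bo's
 * reference before the handle's own reference goes away. */
static void handle_close(struct vm_bo *vb, struct gem_obj *o)
{
	if (vb->mapped) {
		vb->mapped = 0;
		obj_put(o);   /* drop the vm_bo -> obj reference */
	}
	obj_put(o);           /* drop the handle's reference */
}
```

Without the implicit teardown in handle_close(), the vm_bo's reference would
keep refcount above zero forever and the object would leak.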
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/Kconfig              |   1 +
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c    |   3 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c    |   6 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c    |   5 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c  |   7 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c |   5 +-
 drivers/gpu/drm/msm/msm_drv.c            |   1 +
 drivers/gpu/drm/msm/msm_gem.c            | 166 +++++++++++++++--------
 drivers/gpu/drm/msm/msm_gem.h            |  90 +++++++++---
 drivers/gpu/drm/msm/msm_gem_submit.c     |   7 +-
 drivers/gpu/drm/msm/msm_gem_vma.c        | 140 +++++++++++++------
 drivers/gpu/drm/msm/msm_kms.c            |   4 +-
 12 files changed, 300 insertions(+), 135 deletions(-)

diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index 974bc7c0ea76..4af7e896c1d4 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -21,6 +21,7 @@ config DRM_MSM
 	select DRM_DISPLAY_HELPER
 	select DRM_BRIDGE_CONNECTOR
 	select DRM_EXEC
+	select DRM_GPUVM
 	select DRM_KMS_HELPER
 	select DRM_PANEL
 	select DRM_BRIDGE
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 095bae92e3e8..889480aa13ba 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
 	struct msm_gem_vm *vm;
 
-	vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
-		0xfff * SZ_64K);
+	vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
 
 	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 848acc382b7d..77d9ff9632d1 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1311,7 +1311,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
 	return 0;
 }
 
-static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
+static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *gmu)
 {
 	struct msm_mmu *mmu;
 
@@ -1321,7 +1321,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
 	if (IS_ERR(mmu))
 		return PTR_ERR(mmu);
 
-	gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
+	gmu->vm = msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true);
 	if (IS_ERR(gmu->vm))
 		return PTR_ERR(gmu->vm);
 
@@ -1940,7 +1940,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 	if (ret)
 		goto err_put_device;
 
-	ret = a6xx_gmu_memory_probe(gmu);
+	ret = a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu);
 	if (ret)
 		goto err_put_device;
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index f4d9cdbc5602..26d0a863f38c 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2271,9 +2271,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
 
-	return msm_gem_vm_create(mmu,
-		"gpu", ADRENO_VM_START,
-		adreno_private_vm_size(gpu));
+	return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START,
+				 adreno_private_vm_size(gpu), true);
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 35a99c81f7e0..287b032fefe4 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -226,7 +226,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu,
 	start = max_t(u64, SZ_16M, geometry->aperture_start);
 	size = geometry->aperture_end - start + 1;
 
-	vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
+	vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0),
+			       size, true);
 
 	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
@@ -418,12 +419,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 	case MSM_PARAM_VA_START:
 		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->va_start;
+		*value = ctx->vm->base.mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
 		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->va_size;
+		*value = ctx->vm->base.mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
index 94fbc20b2fbd..d5b5628bee24 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
@@ -451,8 +451,9 @@ static int mdp4_kms_init(struct drm_device *dev)
 			"contig buffers for scanout\n");
 		vm = NULL;
 	} else {
-		vm = msm_gem_vm_create(mmu,
-			"mdp4", 0x1000, 0x100000000 - 0x1000);
+		vm = msm_gem_vm_create(dev, mmu, "mdp4",
+				       0x1000, 0x100000000 - 0x1000,
+				       true);
 
 		if (IS_ERR(vm)) {
 			if (!IS_ERR(mmu))
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 978f1d355b42..6ef29bc48bb0 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -776,6 +776,7 @@ static const struct file_operations fops = {
 
 static const struct drm_driver msm_driver = {
 	.driver_features    = DRIVER_GEM |
+			      DRIVER_GEM_GPUVA |
 			      DRIVER_RENDER |
 			      DRIVER_ATOMIC |
 			      DRIVER_MODESET |
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 50b866dcf439..6f99f9356c5c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -47,9 +47,53 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 	return 0;
 }
 
+static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close);
+
+static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm)
+{
+	msm_gem_assert_locked(obj);
+
+	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(&vm->base, obj);
+	if (vm_bo) {
+		struct drm_gpuva *vma;
+
+		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+			if (vma->vm != &vm->base)
+				continue;
+			msm_gem_vma_purge(to_msm_vma(vma));
+			msm_gem_vma_close(to_msm_vma(vma));
+			break;
+		}
+
+		drm_gpuvm_bo_put(vm_bo);
+	}
+}
+
 static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 {
+	struct msm_context *ctx = file->driver_priv;
+
 	update_ctx_mem(file, -obj->size);
+
+	/*
+	 * If VM isn't created yet, nothing to cleanup.  And in fact calling
+	 * put_iova_spaces() with vm=NULL would be bad, in that it will tear-
+	 * down the mappings of shared buffers in other contexts.
+	 */
+	if (!ctx->vm)
+		return;
+
+	/*
+	 * TODO we might need to kick this to a queue to avoid blocking
+	 * in CLOSE ioctl
+	 */
+	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
+			      msecs_to_jiffies(1000));
+
+	msm_gem_lock(obj);
+	put_iova_spaces(obj, &ctx->vm->base, true);
+	detach_vm(obj, ctx->vm);
+	msm_gem_unlock(obj);
 }
 
 /*
@@ -171,6 +215,13 @@ static void put_pages(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
+	/*
+	 * Skip gpuvm in the object free path to avoid a WARN_ON() splat.
+	 * See explanation in msm_gem_assert_locked()
+	 */
+	if (kref_read(&obj->refcount))
+		drm_gpuvm_bo_gem_evict(obj, true);
+
 	if (msm_obj->pages) {
 		if (msm_obj->sgt) {
 			/* For non-cached buffers, ensure the new
@@ -338,16 +389,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
 }
 
 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm)
+				      struct msm_gem_vm *vm)
 {
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma;
+	struct drm_gpuvm_bo *vm_bo;
 
 	msm_gem_assert_locked(obj);
 
-	list_for_each_entry(vma, &msm_obj->vmas, list) {
-		if (vma->vm == vm)
-			return vma;
+	drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+		struct drm_gpuva *vma;
+
+		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+			if (vma->vm == &vm->base) {
+				/* lookup_vma() should only be used in paths
+				 * with at most one vma per vm
+				 */
+				GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva));
+
+				return to_msm_vma(vma);
+			}
+		}
 	}
 
 	return NULL;
@@ -360,33 +420,29 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
  * mapping.
  */
 static void
-put_iova_spaces(struct drm_gem_object *obj, bool close)
+put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 {
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma, *tmp;
+	struct drm_gpuvm_bo *vm_bo, *tmp;
 
 	msm_gem_assert_locked(obj);
 
-	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
-		if (vma->vm) {
-			msm_gem_vma_purge(vma);
-			if (close)
-				msm_gem_vma_close(vma);
-		}
-	}
-}
+	drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) {
+		struct drm_gpuva *vma, *vmatmp;
 
-/* Called with msm_obj locked */
-static void
-put_iova_vmas(struct drm_gem_object *obj)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma, *tmp;
+		if (vm && vm_bo->vm != vm)
+			continue;
 
-	msm_gem_assert_locked(obj);
+		drm_gpuvm_bo_get(vm_bo);
+
+		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
+			struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+
+			msm_gem_vma_purge(msm_vma);
+			if (close)
+				msm_gem_vma_close(msm_vma);
+		}
 
-	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
-		msm_gem_vma_close(vma);
+		drm_gpuvm_bo_put(vm_bo);
 	}
 }
 
@@ -394,7 +450,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm,
 		u64 range_start, u64 range_end)
 {
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
 	msm_gem_assert_locked(obj);
@@ -403,12 +458,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 
 	if (!vma) {
 		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
-		if (IS_ERR(vma))
-			return vma;
-		list_add_tail(&vma->list, &msm_obj->vmas);
 	} else {
-		GEM_WARN_ON(vma->iova < range_start);
-		GEM_WARN_ON((vma->iova + obj->size) > range_end);
+		GEM_WARN_ON(vma->base.va.addr < range_start);
+		GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end);
 	}
 
 	return vma;
@@ -492,7 +544,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 
 	ret = msm_gem_pin_vma_locked(obj, vma);
 	if (!ret) {
-		*iova = vma->iova;
+		*iova = vma->base.va.addr;
 		pin_obj_locked(obj);
 	}
 
@@ -538,7 +590,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 	} else {
-		*iova = vma->iova;
+		*iova = vma->base.va.addr;
 	}
 	msm_gem_unlock(obj);
 
@@ -579,7 +631,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 	vma = get_vma_locked(obj, vm, iova, iova + obj->size);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
-	} else if (GEM_WARN_ON(vma->iova != iova)) {
+	} else if (GEM_WARN_ON(vma->base.va.addr != iova)) {
 		clear_iova(obj, vm);
 		ret = -EBUSY;
 	}
@@ -601,9 +653,10 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj,
 
 	msm_gem_lock(obj);
 	vma = lookup_vma(obj, vm);
-	if (!GEM_WARN_ON(!vma)) {
+	if (vma) {
 		msm_gem_unpin_locked(obj);
 	}
+	detach_vm(obj, vm);
 	msm_gem_unlock(obj);
 }
 
@@ -763,7 +816,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	GEM_WARN_ON(!is_purgeable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, false);
+	put_iova_spaces(obj, NULL, false);
 
 	msm_gem_vunmap(obj);
 
@@ -771,8 +824,6 @@ void msm_gem_purge(struct drm_gem_object *obj)
 
 	put_pages(obj);
 
-	put_iova_vmas(obj);
-
 	mutex_lock(&priv->lru.lock);
 	/* A one-way transition: */
 	msm_obj->madv = __MSM_MADV_PURGED;
@@ -803,7 +854,7 @@ void msm_gem_evict(struct drm_gem_object *obj)
 	GEM_WARN_ON(is_unevictable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, false);
+	put_iova_spaces(obj, NULL, false);
 
 	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
 
@@ -869,7 +920,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct dma_resv *robj = obj->resv;
-	struct msm_gem_vma *vma;
 	uint64_t off = drm_vma_node_start(&obj->vma_node);
 	const char *madv;
 
@@ -912,14 +962,17 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 
 	seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name);
 
-	if (!list_empty(&msm_obj->vmas)) {
+	if (!list_empty(&obj->gpuva.list)) {
+		struct drm_gpuvm_bo *vm_bo;
 
 		seq_puts(m, " vmas:");
 
-		list_for_each_entry(vma, &msm_obj->vmas, list) {
-			const char *name, *comm;
-			if (vma->vm) {
-				struct msm_gem_vm *vm = vma->vm;
+		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+			struct drm_gpuva *vma;
+
+			drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+				const char *name, *comm;
+				struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 				struct task_struct *task =
 					get_pid_task(vm->pid, PIDTYPE_PID);
 				if (task) {
@@ -928,15 +981,14 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 				} else {
 					comm = NULL;
 				}
-				name = vm->name;
-			} else {
-				name = comm = NULL;
+				name = vm->base.name;
+
+				seq_printf(m, " [%s%s%s: vm=%p, %08llx, %smapped]",
+					   name, comm ? ":" : "", comm ? comm : "",
+					   vma->vm, vma->va.addr,
+					   to_msm_vma(vma)->mapped ? "" : "un");
+				kfree(comm);
 			}
-			seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]",
-				name, comm ? ":" : "", comm ? comm : "",
-				vma->vm, vma->iova,
-				vma->mapped ? "mapped" : "unmapped");
-			kfree(comm);
 		}
 
 		seq_puts(m, "\n");
@@ -982,7 +1034,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 	list_del(&msm_obj->node);
 	mutex_unlock(&priv->obj_lock);
 
-	put_iova_spaces(obj, true);
+	put_iova_spaces(obj, NULL, true);
 
 	if (obj->import_attach) {
 		GEM_WARN_ON(msm_obj->vaddr);
@@ -992,13 +1044,10 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 		 */
 		kvfree(msm_obj->pages);
 
-		put_iova_vmas(obj);
-
 		drm_prime_gem_destroy(obj, msm_obj->sgt);
 	} else {
 		msm_gem_vunmap(obj);
 		put_pages(obj);
-		put_iova_vmas(obj);
 	}
 
 	drm_gem_object_release(obj);
@@ -1104,7 +1153,6 @@ static int msm_gem_new_impl(struct drm_device *dev,
 	msm_obj->madv = MSM_MADV_WILLNEED;
 
 	INIT_LIST_HEAD(&msm_obj->node);
-	INIT_LIST_HEAD(&msm_obj->vmas);
 
 	*obj = &msm_obj->base;
 	(*obj)->funcs = &msm_gem_object_funcs;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 9bd78642671c..60769c68d408 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include "drm/drm_exec.h"
+#include "drm/drm_gpuvm.h"
 #include "drm/gpu_scheduler.h"
 #include "msm_drv.h"
 
@@ -22,30 +23,67 @@
 #define MSM_BO_STOLEN        0x10000000 /* try to use stolen/splash memory */
 #define MSM_BO_MAP_PRIV      0x20000000 /* use IOMMU_PRIV when mapping */
 
+/**
+ * struct msm_gem_vm - VM object
+ *
+ * A VM object representing a GPU (or display or GMU or ...) virtual address
+ * space.
+ *
+ * In the case of GPU, if per-process address spaces are supported, the address
+ * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU.  TTBR0
+ * is used for userspace objects, and is unique per msm_context/drm_file, while
+ * TTBR1 is the same for all processes.  (The kernel controlled ringbuffer and
+ * a few other kernel controlled buffers live in TTBR1.)
+ *
+ * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on
+ * whether userspace supports VM_BIND.  All other vm's are managed by the kernel.
+ * (Managed by kernel means the kernel is responsible for VA allocation.)
+ *
+ * Note that because VM_BIND allows a given BO to be mapped multiple times in
+ * a VM, and therefore have multiple VMA's in a VM, there is an extra object
+ * provided by drm_gpuvm infrastructure.. the drm_gpuvm_bo, which is not
+ * embedded in any larger driver structure.  The GEM object holds a list of
+ * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma.  A linked vma
+ * holds a reference to the vm_bo, and drops it when the vma is unlinked.
+ * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an
+ * existing vm_bo, or create a new one.  Once the vma is linked, the ref
+ * to the vm_bo can be dropped (since the vma is holding one).
+ */
 struct msm_gem_vm {
-	const char *name;
-	/* NOTE: mm managed at the page level, size is in # of pages
-	 * and position mm_node->start is in # of pages:
+	/** @base: Inherit from drm_gpuvm. */
+	struct drm_gpuvm base;
+
+	/**
+	 * @mm: Memory management for kernel managed VA allocations
+	 *
+	 * Only used for kernel managed VMs, unused for user managed VMs.
+	 *
+	 * Protected by @mm_lock.
*/ struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ + + /** @mm_lock: protects @mm node allocation/removal */ + struct spinlock mm_lock; + + /** @vm_lock: protects gpuvm insert/remove/traverse */ + struct mutex vm_lock; + + /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; - struct kref kref; =20 - /* For address spaces associated with a specific process, this + /** + * @pid: For address spaces associated with a specific process, this * will be non-NULL: */ struct pid *pid; =20 - /* @faults: the number of GPU hangs associated with this address space */ + /** @faults: the number of GPU hangs associated with this address space */ int faults; =20 - /** @va_start: lowest possible address to allocate */ - uint64_t va_start; - - /** @va_size: the size of the address space (in bytes) */ - uint64_t va_size; + /** @managed: is this a kernel managed VM? */ + bool managed; }; +#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) =20 struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm); @@ -53,18 +91,33 @@ msm_gem_vm_get(struct msm_gem_vm *vm); void msm_gem_vm_put(struct msm_gem_vm *vm); =20 struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size); +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char = *name, + u64 va_start, u64 va_size, bool managed); =20 struct msm_fence_context; =20 +#define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0) + +/** + * struct msm_gem_vma - a VMA mapping + * + * Represents a combination of a GEM object plus a VM. + */ struct msm_gem_vma { + /** @base: inherit from drm_gpuva */ + struct drm_gpuva base; + + /** + * @node: mm node for VA allocation + * + * Only used by kernel managed VMs + */ struct drm_mm_node node; - uint64_t iova; - struct msm_gem_vm *vm; - struct list_head list; /* node in msm_gem_object::vmas */ + + /** @mapped: Is this VMA mapped? 
*/ bool mapped; }; +#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) =20 struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, @@ -100,8 +153,6 @@ struct msm_gem_object { struct sg_table *sgt; void *vaddr; =20 - struct list_head vmas; /* list of msm_gem_vma */ - char name[32]; /* Identifier to print for the debugfs files */ =20 /* userspace metadata backchannel */ @@ -292,6 +343,7 @@ struct msm_gem_submit { struct drm_gem_object *obj; uint32_t handle; }; + struct drm_gpuvm_bo *vm_bo; uint64_t iova; } bos[]; }; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm= _gem_submit.c index c184b1a1f522..2de5a07392eb 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -321,7 +321,8 @@ static int submit_pin_objects(struct msm_gem_submit *su= bmit) if (ret) break; =20 - submit->bos[i].iova =3D vma->iova; + submit->bos[i].vm_bo =3D drm_gpuvm_bo_get(vma->base.vm_bo); + submit->bos[i].iova =3D vma->base.va.addr; } =20 /* @@ -474,7 +475,11 @@ void msm_submit_retire(struct msm_gem_submit *submit) =20 for (i =3D 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj =3D submit->bos[i].obj; + struct drm_gpuvm_bo *vm_bo =3D submit->bos[i].vm_bo; =20 + msm_gem_lock(obj); + drm_gpuvm_bo_put(vm_bo); + msm_gem_unlock(obj); drm_gem_object_put(obj); } } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_ge= m_vma.c index ca29e81d79d2..1f4c9b5c2e8f 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -5,14 +5,13 @@ */ =20 #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_mmu.h" =20 static void -msm_gem_vm_destroy(struct kref *kref) +msm_gem_vm_free(struct drm_gpuvm *gpuvm) { - struct msm_gem_vm *vm =3D container_of(kref, struct msm_gem_vm, kref); + struct msm_gem_vm *vm =3D container_of(gpuvm, struct msm_gem_vm, base); =20 drm_mm_takedown(&vm->mm); if (vm->mmu) @@ -25,14 +24,14 @@ 
msm_gem_vm_destroy(struct kref *kref) void msm_gem_vm_put(struct msm_gem_vm *vm) { if (vm) - kref_put(&vm->kref, msm_gem_vm_destroy); + drm_gpuvm_put(&vm->base); } =20 struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm) { if (!IS_ERR_OR_NULL(vm)) - kref_get(&vm->kref); + drm_gpuvm_get(&vm->base); =20 return vm; } @@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm) /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm =3D vma->vm; - unsigned size =3D vma->node.size; + struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); + unsigned size =3D vma->base.va.range; =20 /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; =20 - vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); =20 vma->mapped =3D false; } @@ -57,10 +56,10 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm =3D vma->vm; + struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); int ret; =20 - if (GEM_WARN_ON(!vma->iova)) + if (GEM_WARN_ON(!vma->base.va.addr)) return -EINVAL; =20 if (vma->mapped) @@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, =20 vma->mapped =3D true; =20 - if (!vm) - return 0; - /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret =3D vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); + ret =3D vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); =20 if (ret) { vma->mapped =3D false; @@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. 
Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm =3D vma->vm; + struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); =20 GEM_WARN_ON(vma->mapped); =20 - spin_lock(&vm->lock); - if (vma->iova) + spin_lock(&vm->mm_lock); + if (vma->base.va.addr) drm_mm_remove_node(&vma->node); - spin_unlock(&vm->lock); + spin_unlock(&vm->mm_lock); =20 - vma->iova =3D 0; - list_del(&vma->list); + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + drm_gpuva_unlink(&vma->base); + mutex_unlock(&vm->vm_lock); =20 - msm_gem_vm_put(vm); kfree(vma); } =20 @@ -113,6 +110,7 @@ struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; =20 @@ -120,36 +118,83 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem= _object *obj, if (!vma) return ERR_PTR(-ENOMEM); =20 - vma->vm =3D vm; + if (vm->managed) { + spin_lock(&vm->mm_lock); + ret =3D drm_mm_insert_node_in_range(&vm->mm, &vma->node, + obj->size, PAGE_SIZE, 0, + range_start, range_end, 0); + spin_unlock(&vm->mm_lock); =20 - spin_lock(&vm->lock); - ret =3D drm_mm_insert_node_in_range(&vm->mm, &vma->node, - obj->size, PAGE_SIZE, 0, - range_start, range_end, 0); - spin_unlock(&vm->lock); + if (ret) + goto err_free_vma; =20 - if (ret) - goto err_free_vma; + range_start =3D vma->node.start; + range_end =3D range_start + obj->size; + } =20 - vma->iova =3D vma->node.start; + GEM_WARN_ON((range_end - range_start) > obj->size); + + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); vma->mapped =3D false; =20 - INIT_LIST_HEAD(&vma->list); + mutex_lock(&vm->vm_lock); + ret =3D drm_gpuva_insert(&vm->base, &vma->base); + mutex_unlock(&vm->vm_lock); + if (ret) + goto err_free_range; =20 - kref_get(&vm->kref); + vm_bo =3D drm_gpuvm_bo_obtain(&vm->base, obj); + if (IS_ERR(vm_bo)) { + ret =3D PTR_ERR(vm_bo); + goto err_va_remove; + } + + 
mutex_lock(&vm->vm_lock); + drm_gpuvm_bo_extobj_add(vm_bo); + drm_gpuva_link(&vma->base, vm_bo); + mutex_unlock(&vm->vm_lock); + GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); =20 return vma; =20 +err_va_remove: + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + mutex_unlock(&vm->vm_lock); +err_free_range: + if (vm->managed) + drm_mm_remove_node(&vma->node); err_free_vma: kfree(vma); return ERR_PTR(ret); } =20 +static const struct drm_gpuvm_ops msm_gpuvm_ops =3D { + .vm_free =3D msm_gem_vm_free, +}; + +/** + * msm_gem_vm_create() - Create and initialize a &msm_gem_vm + * @drm: the drm device + * @mmu: the backing MMU objects handling mapping/unmapping + * @name: the name of the VM + * @va_start: the start offset of the VA space + * @va_size: the size of the VA space + * @managed: is it a kernel managed VM? + * + * In a kernel managed VM, the kernel handles address allocation, and only + * synchronous operations are supported. In a user managed VM, userspace + * handles virtual address allocation, and both async and sync operations + * are supported. 
+ */ struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size) +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char = *name, + u64 va_start, u64 va_size, bool managed) { + enum drm_gpuvm_flags flags =3D 0; struct msm_gem_vm *vm; + struct drm_gem_object *dummy_gem; + int ret =3D 0; =20 if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -158,15 +203,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *na= me, if (!vm) return ERR_PTR(-ENOMEM); =20 - spin_lock_init(&vm->lock); - vm->name =3D name; - vm->mmu =3D mmu; - vm->va_start =3D va_start; - vm->va_size =3D size; + dummy_gem =3D drm_gpuvm_resv_object_alloc(drm); + if (!dummy_gem) { + ret =3D -ENOMEM; + goto err_free_vm; + } + + drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem, + va_start, va_size, 0, 0, &msm_gpuvm_ops); + drm_gem_object_put(dummy_gem); + + spin_lock_init(&vm->mm_lock); + mutex_init(&vm->vm_lock); =20 - drm_mm_init(&vm->mm, va_start, size); + vm->mmu =3D mmu; + vm->managed =3D managed; =20 - kref_init(&vm->kref); + drm_mm_init(&vm->mm, va_start, va_size); =20 return vm; + +err_free_vm: + kfree(vm); + return ERR_PTR(ret); + } diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 88504c4b842f..6458bd82a0cd 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -204,8 +204,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *d= ev) return NULL; } =20 - vm =3D msm_gem_vm_create(mmu, "mdp_kms", - 0x1000, 0x100000000 - 0x1000); + vm =3D msm_gem_vm_create(dev, mmu, "mdp_kms", + 0x1000, 0x100000000 - 0x1000, true); if (IS_ERR(vm)) { dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu); --=20 2.49.0 From nobody Wed Oct 8 17:34:32 2025 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar,
 Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
 Simona Vetter, Sumit Semwal, Christian König,
 linux-kernel@vger.kernel.org (open list),
 linux-media@vger.kernel.org (open list: DMA BUFFER SHARING FRAMEWORK),
 linaro-mm-sig@lists.linaro.org (moderated list: DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v7 14/42] drm/msm: Convert vm locking
Date: Wed, 25 Jun 2025 11:47:07 -0700
Message-ID: <20250625184918.124608-15-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Convert to using the gpuvm's r_obj for serializing access to the VM. This
way we can use the drm_exec helper for dealing with deadlock detection and
backoff. This will let us deal with upcoming locking order conflicts with
the VM_BIND implementation (ie. in some scenarios we need to acquire the
obj lock first, for ex. to iterate all the VMs an obj is bound in, and in
other scenarios we need to acquire the VM lock first).
Signed-off-by: Rob Clark Signed-off-by: Rob Clark Reviewed-by: Antonino Maniscalco Tested-by: Antonino Maniscalco --- drivers/gpu/drm/msm/msm_gem.c | 41 +++++++++---- drivers/gpu/drm/msm/msm_gem.h | 37 ++++++++++-- drivers/gpu/drm/msm/msm_gem_shrinker.c | 80 +++++++++++++++++++++++--- drivers/gpu/drm/msm/msm_gem_submit.c | 9 ++- drivers/gpu/drm/msm/msm_gem_vma.c | 24 +++----- 5 files changed, 152 insertions(+), 39 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 6f99f9356c5c..45a542173cca 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -52,6 +52,7 @@ static void put_iova_spaces(struct drm_gem_object *obj, s= truct drm_gpuvm *vm, bo static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm) { msm_gem_assert_locked(obj); + drm_gpuvm_resv_assert_held(&vm->base); =20 struct drm_gpuvm_bo *vm_bo =3D drm_gpuvm_bo_find(&vm->base, obj); if (vm_bo) { @@ -72,6 +73,7 @@ static void detach_vm(struct drm_gem_object *obj, struct = msm_gem_vm *vm) static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *fil= e) { struct msm_context *ctx =3D file->driver_priv; + struct drm_exec exec; =20 update_ctx_mem(file, -obj->size); =20 @@ -90,10 +92,10 @@ static void msm_gem_close(struct drm_gem_object *obj, s= truct drm_file *file) dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, msecs_to_jiffies(1000)); =20 - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); put_iova_spaces(obj, &ctx->vm->base, true); detach_vm(obj, ctx->vm); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ } =20 /* @@ -559,11 +561,12 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_obj= ect *obj, struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { + struct drm_exec exec; int ret; =20 - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); ret =3D get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_e= nd); - msm_gem_unlock(obj); + 
drm_exec_fini(&exec); /* drop locks */ =20 return ret; } @@ -583,16 +586,17 @@ int msm_gem_get_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t *iova) { struct msm_gem_vma *vma; + struct drm_exec exec; int ret =3D 0; =20 - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); vma =3D get_vma_locked(obj, vm, 0, U64_MAX); if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); } else { *iova =3D vma->base.va.addr; } - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ =20 return ret; } @@ -621,9 +625,10 @@ static int clear_iova(struct drm_gem_object *obj, int msm_gem_set_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t iova) { + struct drm_exec exec; int ret =3D 0; =20 - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); if (!iova) { ret =3D clear_iova(obj, vm); } else { @@ -636,7 +641,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, ret =3D -EBUSY; } } - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ =20 return ret; } @@ -650,14 +655,15 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm) { struct msm_gem_vma *vma; + struct drm_exec exec; =20 - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); vma =3D lookup_vma(obj, vm); if (vma) { msm_gem_unpin_locked(obj); } detach_vm(obj, vm); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ } =20 int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, @@ -1029,12 +1035,27 @@ static void msm_gem_free_object(struct drm_gem_obje= ct *obj) struct msm_gem_object *msm_obj =3D to_msm_bo(obj); struct drm_device *dev =3D obj->dev; struct msm_drm_private *priv =3D dev->dev_private; + struct drm_exec exec; =20 mutex_lock(&priv->obj_lock); list_del(&msm_obj->node); mutex_unlock(&priv->obj_lock); =20 + /* + * We need to lock any VMs the object is still attached to, but not + * the object itself (see explaination in msm_gem_assert_locked()), + * so just open-code this special case: + */ + drm_exec_init(&exec, 
0, 0); + drm_exec_until_all_locked (&exec) { + struct drm_gpuvm_bo *vm_bo; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm)); + drm_exec_retry_on_contention(&exec); + } + } put_iova_spaces(obj, NULL, true); + drm_exec_fini(&exec); /* drop locks */ =20 if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 60769c68d408..31933ed8fb2c 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -62,12 +62,6 @@ struct msm_gem_vm { */ struct drm_mm mm; =20 - /** @mm_lock: protects @mm node allocation/removal */ - struct spinlock mm_lock; - - /** @vm_lock: protects gpuvm insert/remove/traverse */ - struct mutex vm_lock; - /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; =20 @@ -246,6 +240,37 @@ msm_gem_unlock(struct drm_gem_object *obj) dma_resv_unlock(obj->resv); } =20 +/** + * msm_gem_lock_vm_and_obj() - Helper to lock an obj + VM + * @exec: the exec context helper which will be initalized + * @obj: the GEM object to lock + * @vm: the VM to lock + * + * Operations which modify a VM frequently need to lock both the VM and + * the object being mapped/unmapped/etc. This helper uses drm_exec to + * acquire both locks, dealing with potential deadlock/backoff scenarios + * which arise when multiple locks are involved. 
+ */ +static inline int +msm_gem_lock_vm_and_obj(struct drm_exec *exec, + struct drm_gem_object *obj, + struct msm_gem_vm *vm) +{ + int ret =3D 0; + + drm_exec_init(exec, 0, 2); + drm_exec_until_all_locked (exec) { + ret =3D drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base)); + if (!ret && (obj->resv !=3D drm_gpuvm_resv(&vm->base))) + ret =3D drm_exec_lock_obj(exec, obj); + drm_exec_retry_on_contention(exec); + if (GEM_WARN_ON(ret)) + break; + } + + return ret; +} + static inline void msm_gem_assert_locked(struct drm_gem_object *obj) { diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/m= sm_gem_shrinker.c index de185fc34084..5faf6227584a 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -43,6 +43,75 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct= shrink_control *sc) return count; } =20 +static bool +with_vm_locks(struct ww_acquire_ctx *ticket, + void (*fn)(struct drm_gem_object *obj), + struct drm_gem_object *obj) +{ + /* + * Track last locked entry for for unwinding locks in error and + * success paths + */ + struct drm_gpuvm_bo *vm_bo, *last_locked =3D NULL; + int ret =3D 0; + + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct dma_resv *resv =3D drm_gpuvm_resv(vm_bo->vm); + + if (resv =3D=3D obj->resv) + continue; + + ret =3D dma_resv_lock(resv, ticket); + + /* + * Since we already skip the case when the VM and obj + * share a resv (ie. _NO_SHARE objs), we don't expect + * to hit a double-locking scenario... which the lock + * unwinding cannot really cope with. + */ + WARN_ON(ret =3D=3D -EALREADY); + + /* + * Don't bother with slow-lock / backoff / retry sequence, + * if we can't get the lock just give up and move on to + * the next object. 
+	 */
+	if (ret)
+		goto out_unlock;
+
+	/*
+	 * Hold a ref to prevent the vm_bo from being freed
+	 * and removed from the obj's gpuva list, as that
+	 * would result in missing the unlock below
+	 */
+	drm_gpuvm_bo_get(vm_bo);
+
+	last_locked = vm_bo;
+	}
+
+	fn(obj);
+
+out_unlock:
+	if (last_locked) {
+		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+			struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm);
+
+			if (resv == obj->resv)
+				continue;
+
+			dma_resv_unlock(resv);
+
+			/* Drop the ref taken while locking: */
+			drm_gpuvm_bo_put(vm_bo);
+
+			if (last_locked == vm_bo)
+				break;
+		}
+	}
+
+	return ret == 0;
+}
+
 static bool
 purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
@@ -52,9 +121,7 @@ purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 	if (msm_gem_active(obj))
 		return false;
 
-	msm_gem_purge(obj);
-
-	return true;
+	return with_vm_locks(ticket, msm_gem_purge, obj);
 }
 
 static bool
@@ -66,9 +133,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 	if (msm_gem_active(obj))
 		return false;
 
-	msm_gem_evict(obj);
-
-	return true;
+	return with_vm_locks(ticket, msm_gem_evict, obj);
 }
 
 static bool
@@ -100,6 +165,7 @@ static unsigned long
 msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv = shrinker->private_data;
+	struct ww_acquire_ctx ticket;
 	struct {
 		struct drm_gem_lru *lru;
 		bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket);
@@ -124,7 +190,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 		drm_gem_lru_scan(stages[i].lru, nr,
 				 &stages[i].remaining,
 				 stages[i].shrink,
-				 NULL);
+				 &ticket);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 2de5a07392eb..bd8e465e8049 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -256,11 +256,18 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 /* This is where we make sure all the bo's are reserved and pin'd: */
 static int submit_lock_objects(struct msm_gem_submit *submit)
 {
+	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
 	int ret;
 
-	drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos);
+// TODO need to add vm_bind path which locks vm resv + external objs
+	drm_exec_init(&submit->exec, flags, submit->nr_bos);
 
 	drm_exec_until_all_locked (&submit->exec) {
+		ret = drm_exec_lock_obj(&submit->exec,
+					drm_gpuvm_resv_obj(&submit->vm->base));
+		drm_exec_retry_on_contention(&submit->exec);
+		if (ret)
+			goto error;
 		for (unsigned i = 0; i < submit->nr_bos; i++) {
 			struct drm_gem_object *obj = submit->bos[i].obj;
 			ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 1f4c9b5c2e8f..ccb20897a2b0 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -92,15 +92,13 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 
 	GEM_WARN_ON(vma->mapped);
 
-	spin_lock(&vm->mm_lock);
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	if (vma->base.va.addr)
 		drm_mm_remove_node(&vma->node);
-	spin_unlock(&vm->mm_lock);
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
 	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 
 	kfree(vma);
 }
@@ -114,16 +112,16 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	struct msm_gem_vma *vma;
 	int ret;
 
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
-		spin_lock(&vm->mm_lock);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						obj->size, PAGE_SIZE, 0,
 						range_start, range_end, 0);
-		spin_unlock(&vm->mm_lock);
 
 		if (ret)
 			goto err_free_vma;
@@ -137,9 +135,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
 	vma->mapped = false;
 
-	mutex_lock(&vm->vm_lock);
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
-	mutex_unlock(&vm->vm_lock);
 	if (ret)
 		goto err_free_range;
 
@@ -149,18 +145,14 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 		goto err_va_remove;
 	}
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuvm_bo_extobj_add(vm_bo);
 	drm_gpuva_link(&vma->base, vm_bo);
-	mutex_unlock(&vm->vm_lock);
 	GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
 
 	return vma;
 
 err_va_remove:
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 err_free_range:
 	if (vm->managed)
 		drm_mm_remove_node(&vma->node);
@@ -191,6 +183,11 @@ struct msm_gem_vm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed)
 {
+	/*
+	 * We mostly want to use DRM_GPUVM_RESV_PROTECTED, except that
+	 * makes drm_gpuvm_bo_evict() a no-op for extobjs (ie. we lose
+	 * tracking that an extobj is evicted) :facepalm:
+	 */
 	enum drm_gpuvm_flags flags = 0;
 	struct msm_gem_vm *vm;
 	struct drm_gem_object *dummy_gem;
@@ -213,9 +210,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 			va_start, va_size, 0, 0, &msm_gpuvm_ops);
 	drm_gem_object_put(dummy_gem);
 
-	spin_lock_init(&vm->mm_lock);
-	mutex_init(&vm->vm_lock);
-
 	vm->mmu = mmu;
 	vm->managed = managed;
 
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
	Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
	Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang,
	Arnd Bergmann, Krzysztof Kozlowski, Eugene Lepshy, Haoxiang Li,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 15/42] drm/msm: Use drm_gpuvm types more
Date: Wed, 25 Jun 2025 11:47:08 -0700
Message-ID: <20250625184918.124608-16-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Most of the driver code doesn't need to reach into msm-specific
fields, so just use the drm_gpuvm/drm_gpuva types
directly. This should hopefully improve commonality with other
drivers and make the code easier to understand.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c     |  6 +-
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c     |  3 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c     |  6 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h     |  2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c     | 11 +--
 drivers/gpu/drm/msm/adreno/a6xx_preempt.c |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c   | 21 ++---
 drivers/gpu/drm/msm/adreno/adreno_gpu.h   |  4 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  6 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c  | 11 +--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c  | 11 +--
 drivers/gpu/drm/msm/dsi/dsi_host.c        |  6 +-
 drivers/gpu/drm/msm/msm_drv.h             |  4 +-
 drivers/gpu/drm/msm/msm_fb.c              |  4 +-
 drivers/gpu/drm/msm/msm_gem.c             | 94 +++++++++++------------
 drivers/gpu/drm/msm/msm_gem.h             | 59 +++++++-------
 drivers/gpu/drm/msm/msm_gem_submit.c      |  8 +-
 drivers/gpu/drm/msm/msm_gem_vma.c         | 70 +++++++----------
 drivers/gpu/drm/msm/msm_gpu.c             | 21 ++---
 drivers/gpu/drm/msm/msm_gpu.h             | 10 +--
 drivers/gpu/drm/msm/msm_kms.c             |  6 +-
 drivers/gpu/drm/msm/msm_kms.h             |  2 +-
 drivers/gpu/drm/msm/msm_submitqueue.c     |  2 +-
 23 files changed, 178 insertions(+), 191 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 889480aa13ba..ec38db45d8a3 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
 	uint32_t *ptr, len;
 	int i, ret;
 
-	a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error);
+	a2xx_gpummu_params(to_msm_vm(gpu->vm)->mmu, &pt_base, &tran_error);
 
 	DBG("%s", gpu->name);
 
@@ -466,11 +466,11 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu)
 	return state;
 }
 
-static struct msm_gem_vm *
+static struct drm_gpuvm *
 a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
 
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 04138a06724b..ee927d8cc0dc 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1786,7 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
+	msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu,
+				  a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 77d9ff9632d1..28e6705c6da6 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1259,6 +1259,8 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 
 static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
 {
+	struct msm_mmu *mmu = to_msm_vm(gmu->vm)->mmu;
+
 	msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);
 	msm_gem_kernel_put(gmu->debug.obj, gmu->vm);
 	msm_gem_kernel_put(gmu->icache.obj, gmu->vm);
@@ -1266,8 +1268,8 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
 	msm_gem_kernel_put(gmu->dummy.obj, gmu->vm);
 	msm_gem_kernel_put(gmu->log.obj, gmu->vm);
 
-	gmu->vm->mmu->funcs->detach(gmu->vm->mmu);
-	msm_gem_vm_put(gmu->vm);
+	mmu->funcs->detach(mmu);
+	drm_gpuvm_put(gmu->vm);
 }
 
 static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index cceda7d9c33a..5da36226b93d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -62,7 +62,7 @@ struct a6xx_gmu {
 	/* For serializing communication with the GMU: */
 	struct mutex lock;
 
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	void __iomem *mmio;
 	void __iomem *rscc;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 26d0a863f38c..c43a443661e4 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
@@ -2243,7 +2243,7 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 	mutex_unlock(&a6xx_gpu->gmu.lock);
 }
 
-static struct msm_gem_vm *
+static struct drm_gpuvm *
 a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -2261,12 +2261,12 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 	return adreno_iommu_create_vm(gpu, pdev, quirks);
 }
 
-static struct msm_gem_vm *
+static struct drm_gpuvm *
 a6xx_create_private_vm(struct msm_gpu *gpu)
 {
 	struct msm_mmu *mmu;
 
-	mmu = msm_iommu_pagetable_create(gpu->vm->mmu);
+	mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu);
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
@@ -2546,7 +2546,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
+	msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu,
+				  a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index b14a7c630bd0..7fd560a2c1ce 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	struct a7xx_cp_smmu_info *smmu_info_ptr = ptr;
 
-	msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid);
+	msm_iommu_pagetable_params(to_msm_vm(gpu->vm)->mmu, &ttbr, &asid);
 
 	smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
 	smmu_info_ptr->ttbr0 = ttbr;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 287b032fefe4..f6624a246694 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid)
 	return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid);
 }
 
-struct msm_gem_vm *
+struct drm_gpuvm *
 adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	return adreno_iommu_create_vm(gpu, pdev, 0);
 }
 
-struct msm_gem_vm *
+struct drm_gpuvm *
 adreno_iommu_create_vm(struct msm_gpu *gpu,
 		       struct platform_device *pdev,
 		       unsigned long quirks)
 {
 	struct iommu_domain_geometry *geometry;
 	struct msm_mmu *mmu;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
@@ -274,9 +274,10 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu)
 	if (!adreno_gpu->stall_enabled &&
 	    ktime_after(ktime_get(), adreno_gpu->stall_reenable_time) &&
 	    !READ_ONCE(gpu->crashstate)) {
+		struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
 		adreno_gpu->stall_enabled = true;
 
-		gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true);
+		mmu->funcs->set_stall(mmu, true);
 	}
 	spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags);
 }
@@ -290,6 +291,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 			 struct adreno_smmu_fault_info *info, const char *block,
 			 u32 scratch[4])
 {
+	struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	const char *type = "UNKNOWN";
 	bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) &&
@@ -302,9 +304,10 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	 */
 	spin_lock_irqsave(&adreno_gpu->fault_stall_lock, irq_flags);
 	if (adreno_gpu->stall_enabled) {
+
 		adreno_gpu->stall_enabled = false;
 
-		gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false);
+		mmu->funcs->set_stall(mmu, false);
 	}
 	adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500);
 	spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags);
@@ -314,7 +317,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	 * it now.
 	 */
 	if (!do_devcoredump) {
-		gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu);
+		mmu->funcs->resume_translation(mmu);
 	}
 
 	/*
@@ -409,7 +412,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		return 0;
 	case MSM_PARAM_FAULTS:
 		if (ctx->vm)
-			*value = gpu->global_faults + ctx->vm->faults;
+			*value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -419,12 +422,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 	case MSM_PARAM_VA_START:
 		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->base.mm_start;
+		*value = ctx->vm->mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
 		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->base.mm_range;
+		*value = ctx->vm->mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index bbd7e664286e..e9a63fbd131b 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -644,11 +644,11 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len,
  * Common helper function to initialize the default address space for arm-smmu
  * attached targets
  */
-struct msm_gem_vm *
+struct drm_gpuvm *
 adreno_create_vm(struct msm_gpu *gpu,
 		 struct platform_device *pdev);
 
-struct msm_gem_vm *
+struct drm_gpuvm *
 adreno_iommu_create_vm(struct msm_gpu *gpu,
 		       struct platform_device *pdev,
 		       unsigned long quirks);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index bb5db6da636a..a9cd215cfd33 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -1098,17 +1098,17 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms)
 	if (!dpu_kms->base.vm)
 		return;
 
-	mmu = dpu_kms->base.vm->mmu;
+	mmu = to_msm_vm(dpu_kms->base.vm)->mmu;
 
 	mmu->funcs->detach(mmu);
-	msm_gem_vm_put(dpu_kms->base.vm);
+	drm_gpuvm_put(dpu_kms->base.vm);
 
 	dpu_kms->base.vm = NULL;
 }
 
 static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms)
 {
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	vm = msm_kms_init_vm(dpu_kms->dev);
 	if (IS_ERR(vm))
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
index d5b5628bee24..9326ed3aab04 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
@@ -120,15 +120,16 @@ static void mdp4_destroy(struct msm_kms *kms)
 {
 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
 	struct device *dev = mdp4_kms->dev->dev;
-	struct msm_gem_vm *vm = kms->vm;
 
 	if (mdp4_kms->blank_cursor_iova)
 		msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm);
 	drm_gem_object_put(mdp4_kms->blank_cursor_bo);
 
-	if (vm) {
-		vm->mmu->funcs->detach(vm->mmu);
-		msm_gem_vm_put(vm);
+	if (kms->vm) {
+		struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu;
+
+		mmu->funcs->detach(mmu);
+		drm_gpuvm_put(kms->vm);
 	}
 
 	if (mdp4_kms->rpm_enabled)
@@ -380,7 +381,7 @@ static int mdp4_kms_init(struct drm_device *dev)
 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms));
 	struct msm_kms *kms = NULL;
 	struct msm_mmu *mmu;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	int ret;
 	u32 major, minor;
 	unsigned long max_clk;
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
index 9dca0385a42d..b6e6bd1f95ee 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
@@ -198,11 +198,12 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms);
 static void mdp5_kms_destroy(struct msm_kms *kms)
 {
 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
-	struct msm_gem_vm *vm = kms->vm;
 
-	if (vm) {
-		vm->mmu->funcs->detach(vm->mmu);
-		msm_gem_vm_put(vm);
+	if (kms->vm) {
+		struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu;
+
+		mmu->funcs->detach(mmu);
+		drm_gpuvm_put(kms->vm);
 	}
 
 	mdp_kms_destroy(&mdp5_kms->base);
@@ -500,7 +501,7 @@ static int mdp5_kms_init(struct drm_device *dev)
 	struct mdp5_kms *mdp5_kms;
 	struct mdp5_cfg *config;
 	struct msm_kms *kms = priv->kms;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	int i, ret;
 
 	ret = mdp5_init(to_platform_device(dev->dev), dev);
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 16335ebd21e4..2d1699b7dc93 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -143,7 +143,7 @@ struct msm_dsi_host {
 
 	/* DSI 6G TX buffer*/
 	struct drm_gem_object *tx_gem_obj;
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	/* DSI v2 TX buffer */
 	void *tx_buf;
@@ -1146,7 +1146,7 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size)
 	uint64_t iova;
 	u8 *data;
 
-	msm_host->vm = msm_gem_vm_get(priv->kms->vm);
+	msm_host->vm = drm_gpuvm_get(priv->kms->vm);
 
 	data = msm_gem_kernel_new(dev, size, MSM_BO_WC,
 				  msm_host->vm,
@@ -1194,7 +1194,7 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host)
 
 	if (msm_host->tx_gem_obj) {
 		msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm);
-		msm_gem_vm_put(msm_host->vm);
+		drm_gpuvm_put(msm_host->vm);
 		msm_host->tx_gem_obj = NULL;
 		msm_host->vm = NULL;
 	}
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index e4c57deaa1f9..136dd928135a 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -48,8 +48,6 @@ struct msm_rd_state;
 struct msm_perf_state;
 struct msm_gem_submit;
 struct msm_fence_context;
-struct msm_gem_vm;
-struct msm_gem_vma;
 struct msm_disp_state;
 
 #define MAX_CRTCS      8
@@ -230,7 +228,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc);
 int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 
-struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev);
+struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev);
 bool msm_use_mmu(struct drm_device *dev);
 
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 3b17d83f6673..8ae2f326ec54 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -78,7 +78,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 {
 	struct msm_drm_private *priv = fb->dev->dev_private;
-	struct msm_gem_vm *vm = priv->kms->vm;
+	struct drm_gpuvm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int ret, i, n = fb->format->num_planes;
 
@@ -102,7 +102,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 {
 	struct msm_drm_private *priv = fb->dev->dev_private;
-	struct msm_gem_vm *vm = priv->kms->vm;
+	struct drm_gpuvm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int i, n = fb->format->num_planes;
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 45a542173cca..87949d0e87bf 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -49,20 +49,20 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 
 static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close);
 
-static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm)
+static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 {
 	msm_gem_assert_locked(obj);
-	drm_gpuvm_resv_assert_held(&vm->base);
+	drm_gpuvm_resv_assert_held(vm);
 
-	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(&vm->base, obj);
+	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(vm, obj);
 	if (vm_bo) {
 		struct drm_gpuva *vma;
 
 		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
-			if (vma->vm != &vm->base)
+			if (vma->vm != vm)
 				continue;
-			msm_gem_vma_purge(to_msm_vma(vma));
-			msm_gem_vma_close(to_msm_vma(vma));
+			msm_gem_vma_purge(vma);
+			msm_gem_vma_close(vma);
 			break;
 		}
 
@@ -93,7 +93,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 			msecs_to_jiffies(1000));
 
 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
-	put_iova_spaces(obj, &ctx->vm->base, true);
+	put_iova_spaces(obj, ctx->vm, true);
 	detach_vm(obj, ctx->vm);
 	drm_exec_fini(&exec);     /* drop locks */
 }
@@ -390,8 +390,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
 	return offset;
 }
 
-static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
-				      struct msm_gem_vm *vm)
+static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj,
+				    struct drm_gpuvm *vm)
 {
 	struct drm_gpuvm_bo *vm_bo;
 
@@ -401,13 +401,13 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 		struct drm_gpuva *vma;
 
 		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
-			if (vma->vm == &vm->base) {
+			if (vma->vm == vm) {
 				/* lookup_vma() should only be used in paths
 				 * with at most one vma per vm
 				 */
 				GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva));
 
-				return to_msm_vma(vma);
+				return vma;
 			}
 		}
 	}
@@ -437,22 +437,20 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 		drm_gpuvm_bo_get(vm_bo);
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-
-			msm_gem_vma_purge(msm_vma);
+			msm_gem_vma_purge(vma);
 			if (close)
-				msm_gem_vma_close(msm_vma);
+				msm_gem_vma_close(vma);
 		}
 
 		drm_gpuvm_bo_put(vm_bo);
	}
 }
 
-static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
-					  struct msm_gem_vm *vm,
-					  u64 range_start, u64 range_end)
+static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
+					struct drm_gpuvm *vm, u64 range_start,
+					u64 range_end)
 {
-	struct msm_gem_vma *vma;
+	struct drm_gpuva *vma;
 
 	msm_gem_assert_locked(obj);
 
@@ -461,14 +459,14 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 	if (!vma) {
 		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
 	} else {
-		GEM_WARN_ON(vma->base.va.addr < range_start);
-		GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end);
+		GEM_WARN_ON(vma->va.addr < range_start);
+		GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
 	}
 
 	return vma;
 }
 
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct page **pages;
@@ -525,17 +523,17 @@ void msm_gem_unpin_active(struct drm_gem_object *obj)
 	update_lru_active(obj);
 }
 
-struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-					   struct msm_gem_vm *vm)
+struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+					 struct drm_gpuvm *vm)
 {
 	return get_vma_locked(obj, vm, 0, U64_MAX);
 }
 
 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
-					 struct msm_gem_vm *vm, uint64_t *iova,
-					 u64 range_start, u64 range_end)
+					 struct drm_gpuvm *vm, uint64_t *iova,
+					 u64 range_start, u64 range_end)
 {
-	struct msm_gem_vma *vma;
+	struct drm_gpuva *vma;
 	int ret;
 
 	msm_gem_assert_locked(obj);
@@ -546,7 +544,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 
 	ret = msm_gem_pin_vma_locked(obj, vma);
 	if (!ret) {
-		*iova = vma->base.va.addr;
+		*iova = vma->va.addr;
 		pin_obj_locked(obj);
 	}
 
@@ -558,8 +556,8 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 * limits iova to specified range (in pages)
 */
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-				   struct msm_gem_vm *vm, uint64_t *iova,
-				   u64 range_start, u64 range_end)
+				   struct drm_gpuvm *vm, uint64_t *iova,
+				   u64 range_start, u64 range_end)
 {
 	struct drm_exec exec;
 	int ret;
@@ -572,8 +570,8 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
 }
 
 /* get iova and pin it. Should have a matching put */
-int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-			     struct msm_gem_vm *vm, uint64_t *iova)
+int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+			     uint64_t *iova)
 {
 	return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX);
 }
@@ -582,10 +580,10 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
 * Get an iova but don't pin it. Doesn't need a put because iovas are currently
 * valid for the life of the object
 */
-int msm_gem_get_iova(struct drm_gem_object *obj,
-		     struct msm_gem_vm *vm, uint64_t *iova)
+int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		     uint64_t *iova)
 {
-	struct msm_gem_vma *vma;
+	struct drm_gpuva *vma;
 	struct drm_exec exec;
 	int ret = 0;
 
@@ -594,7 +592,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 	} else {
-		*iova = vma->base.va.addr;
+		*iova = vma->va.addr;
 	}
 	drm_exec_fini(&exec);     /* drop locks */
 
@@ -602,9 +600,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 }
 
 static int clear_iova(struct drm_gem_object *obj,
-		      struct msm_gem_vm *vm)
+		      struct drm_gpuvm *vm)
 {
-	struct msm_gem_vma *vma = lookup_vma(obj, vm);
+	struct drm_gpuva *vma = lookup_vma(obj, vm);
 
 	if (!vma)
 		return 0;
@@ -623,7 +621,7 @@ static int clear_iova(struct drm_gem_object *obj,
 * Setting an iova of zero will clear the vma.
 */
 int msm_gem_set_iova(struct drm_gem_object *obj,
-		     struct msm_gem_vm *vm, uint64_t iova)
+		     struct drm_gpuvm *vm, uint64_t iova)
 {
 	struct drm_exec exec;
 	int ret = 0;
@@ -632,11 +630,11 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 	if (!iova) {
 		ret = clear_iova(obj, vm);
 	} else {
-		struct msm_gem_vma *vma;
+		struct drm_gpuva *vma;
 		vma = get_vma_locked(obj, vm, iova, iova + obj->size);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
-		} else if (GEM_WARN_ON(vma->base.va.addr != iova)) {
+		} else if (GEM_WARN_ON(vma->va.addr != iova)) {
 			clear_iova(obj, vm);
 			ret = -EBUSY;
 		}
@@ -651,10 +649,9 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 * purged until something else (shrinker, mm_notifier, destroy, etc) decides
 * to get rid of it
 */
-void msm_gem_unpin_iova(struct drm_gem_object *obj,
-			struct msm_gem_vm *vm)
+void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 {
-	struct msm_gem_vma *vma;
+	struct drm_gpuva *vma;
 	struct drm_exec exec;
 
 	msm_gem_lock_vm_and_obj(&exec, obj, vm);
@@ -1284,9 +1281,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 	return ERR_PTR(ret);
 }
 
-void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-			 uint32_t flags, struct msm_gem_vm *vm,
-			 struct drm_gem_object **bo, uint64_t *iova)
+void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags,
+			 struct drm_gpuvm *vm, struct drm_gem_object **bo,
+			 uint64_t *iova)
 {
 	void *vaddr;
 	struct drm_gem_object *obj = msm_gem_new(dev, size, flags);
@@ -1319,8 +1316,7 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
 
 }
 
-void msm_gem_kernel_put(struct drm_gem_object *bo,
-			struct msm_gem_vm *vm)
+void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm)
 {
 	if (IS_ERR_OR_NULL(bo))
 		return;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 31933ed8fb2c..557b6804181f 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -79,12 +79,7 @@ struct msm_gem_vm {
 };
 #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
 
-struct msm_gem_vm *
-msm_gem_vm_get(struct msm_gem_vm *vm);
-
-void msm_gem_vm_put(struct msm_gem_vm *vm);
-
-struct msm_gem_vm *
+struct drm_gpuvm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed);
 
@@ -113,12 +108,12 @@ struct msm_gem_vma {
 };
 #define to_msm_vma(x) container_of(x, struct msm_gem_vma, base)
 
-struct msm_gem_vma *
-msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+struct drm_gpuva *
+msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct msm_gem_vma *vma);
-int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size);
-void msm_gem_vma_close(struct msm_gem_vma *vma);
+void msm_gem_vma_purge(struct drm_gpuva *vma);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
 	struct drm_gem_object base;
@@ -163,22 +158,21 @@ struct msm_gem_object {
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
 void msm_gem_unpin_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_active(struct drm_gem_object *obj);
-struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-					   struct msm_gem_vm *vm);
-int msm_gem_get_iova(struct drm_gem_object *obj,
-		     struct msm_gem_vm *vm, uint64_t *iova);
-int msm_gem_set_iova(struct drm_gem_object *obj,
-		     struct msm_gem_vm *vm, uint64_t iova);
+struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+					 struct drm_gpuvm *vm);
+int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		     uint64_t *iova);
+int msm_gem_set_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		     uint64_t iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-				   struct msm_gem_vm *vm, uint64_t *iova,
-				   u64 range_start, u64 range_end);
-int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-			     struct msm_gem_vm *vm, uint64_t *iova);
-void msm_gem_unpin_iova(struct drm_gem_object *obj,
-			struct msm_gem_vm *vm);
+				   struct drm_gpuvm *vm, uint64_t *iova,
+				   u64 range_start, u64 range_end);
+int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+			     uint64_t *iova);
+void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm);
 void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
 struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
@@ -199,11 +193,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct
drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, char *name); struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova); -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm); +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t f= lags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova); +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -254,14 +247,14 @@ msm_gem_unlock(struct drm_gem_object *obj) static inline int msm_gem_lock_vm_and_obj(struct drm_exec *exec, struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct drm_gpuvm *vm) { int ret =3D 0; =20 drm_exec_init(exec, 0, 2); drm_exec_until_all_locked (exec) { - ret =3D drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base)); - if (!ret && (obj->resv !=3D drm_gpuvm_resv(&vm->base))) + ret =3D drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(vm)); + if (!ret && (obj->resv !=3D drm_gpuvm_resv(vm))) ret =3D drm_exec_lock_obj(exec, obj); drm_exec_retry_on_contention(exec); if (GEM_WARN_ON(ret)) @@ -328,7 +321,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm= _gem_submit.c index bd8e465e8049..d8ff6aeb04ab 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -264,7 +264,7 @@ static int submit_lock_objects(struct msm_gem_submit *s= ubmit) =20 drm_exec_until_all_locked 
(&submit->exec) { ret =3D drm_exec_lock_obj(&submit->exec, - drm_gpuvm_resv_obj(&submit->vm->base)); + drm_gpuvm_resv_obj(submit->vm)); drm_exec_retry_on_contention(&submit->exec); if (ret) goto error; @@ -315,7 +315,7 @@ static int submit_pin_objects(struct msm_gem_submit *su= bmit) =20 for (i =3D 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj =3D submit->bos[i].obj; - struct msm_gem_vma *vma; + struct drm_gpuva *vma; =20 /* if locking succeeded, pin bo: */ vma =3D msm_gem_get_vma_locked(obj, submit->vm); @@ -328,8 +328,8 @@ static int submit_pin_objects(struct msm_gem_submit *su= bmit) if (ret) break; =20 - submit->bos[i].vm_bo =3D drm_gpuvm_bo_get(vma->base.vm_bo); - submit->bos[i].iova =3D vma->base.va.addr; + submit->bos[i].vm_bo =3D drm_gpuvm_bo_get(vma->vm_bo); + submit->bos[i].iova =3D vma->va.addr; } =20 /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_ge= m_vma.c index ccb20897a2b0..df8eb910ca31 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -20,52 +20,38 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } =20 - -void msm_gem_vm_put(struct msm_gem_vm *vm) -{ - if (vm) - drm_gpuvm_put(&vm->base); -} - -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm) -{ - if (!IS_ERR_OR_NULL(vm)) - drm_gpuvm_get(&vm->base); - - return vm; -} - /* Actually unmap memory for the vma */ -void msm_gem_vma_purge(struct msm_gem_vma *vma) +void msm_gem_vma_purge(struct drm_gpuva *vma) { - struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); - unsigned size =3D vma->base.va.range; + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); + unsigned size =3D vma->va.range; =20 /* Don't do anything if the memory isn't mapped */ - if (!vma->mapped) + if (!msm_vma->mapped) return; =20 - vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); + vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); =20 - vma->mapped =3D false; + msm_vma->mapped =3D false; } 
=20 /* Map and pin vma: */ int -msm_gem_vma_map(struct msm_gem_vma *vma, int prot, +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); int ret; =20 - if (GEM_WARN_ON(!vma->base.va.addr)) + if (GEM_WARN_ON(!vma->va.addr)) return -EINVAL; =20 - if (vma->mapped) + if (msm_vma->mapped) return 0; =20 - vma->mapped =3D true; + msm_vma->mapped =3D true; =20 /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold @@ -76,38 +62,40 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret =3D vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); + ret =3D vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); =20 if (ret) { - vma->mapped =3D false; + msm_vma->mapped =3D false; } =20 return ret; } =20 /* Close an iova. 
Warn if it is still in use */ -void msm_gem_vma_close(struct msm_gem_vma *vma) +void msm_gem_vma_close(struct drm_gpuva *vma) { - struct msm_gem_vm *vm =3D to_msm_vm(vma->base.vm); + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); =20 - GEM_WARN_ON(vma->mapped); + GEM_WARN_ON(msm_vma->mapped); =20 drm_gpuvm_resv_assert_held(&vm->base); =20 - if (vma->base.va.addr) - drm_mm_remove_node(&vma->node); + if (vma->va.addr && vm->managed) + drm_mm_remove_node(&msm_vma->node); =20 - drm_gpuva_remove(&vma->base); - drm_gpuva_unlink(&vma->base); + drm_gpuva_remove(vma); + drm_gpuva_unlink(vma); =20 kfree(vma); } =20 /* Create a new vma and allocate an iova for it */ -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct msm_gem_vm *vm =3D to_msm_vm(gpuvm); struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -149,7 +137,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_o= bject *obj, drm_gpuva_link(&vma->base, vm_bo); GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); =20 - return vma; + return &vma->base; =20 err_va_remove: drm_gpuva_remove(&vma->base); @@ -179,7 +167,7 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops =3D { * handles virtual address allocation, and both async and sync operations * are supported. 
*/ -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char = *name, u64 va_start, u64 va_size, bool managed) { @@ -215,7 +203,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mm= u *mmu, const char *name, =20 drm_mm_init(&vm->mm, va_start, va_size); =20 - return vm; + return &vm->base; =20 err_free_vm: kfree(vm); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index b30800f80120..82e33aa1ccd0 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *= gpu, =20 if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info =3D &state->fault_info; - struct msm_mmu *mmu =3D submit->vm->mmu; + struct msm_mmu *mmu =3D to_msm_vm(submit->vm)->mmu; =20 msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -387,7 +387,7 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; if (submit->vm) - submit->vm->faults++; + to_msm_vm(submit->vm)->faults++; =20 get_comm_cmdline(submit, &comm, &cmd); =20 @@ -463,6 +463,7 @@ static void fault_worker(struct kthread_work *work) { struct msm_gpu *gpu =3D container_of(work, struct msm_gpu, fault_work); struct msm_gem_submit *submit; + struct msm_mmu *mmu =3D to_msm_vm(gpu->vm)->mmu; struct msm_ringbuffer *cur_ring =3D gpu->funcs->active_ring(gpu); char *comm =3D NULL, *cmd =3D NULL; =20 @@ -492,7 +493,7 @@ static void fault_worker(struct kthread_work *work) =20 resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + mmu->funcs->resume_translation(mmu); =20 mutex_unlock(&gpu->lock); } @@ -829,10 +830,11 @@ static int get_clocks(struct platform_device *pdev, s= truct msm_gpu *gpu) } =20 /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_vm * +struct drm_gpuvm * 
msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_vm *vm =3D NULL; + struct drm_gpuvm *vm =3D NULL; + if (!gpu) return NULL; =20 @@ -843,11 +845,11 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct= task_struct *task) if (gpu->funcs->create_private_vm) { vm =3D gpu->funcs->create_private_vm(gpu); if (!IS_ERR(vm)) - vm->pid =3D get_pid(task_pid(task)); + to_msm_vm(vm)->pid =3D get_pid(task_pid(task)); } =20 if (IS_ERR_OR_NULL(vm)) - vm =3D msm_gem_vm_get(gpu->vm); + vm =3D drm_gpuvm_get(gpu->vm); =20 return vm; } @@ -1016,8 +1018,9 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); =20 if (!IS_ERR_OR_NULL(gpu->vm)) { - gpu->vm->mmu->funcs->detach(gpu->vm->mmu); - msm_gem_vm_put(gpu->vm); + struct msm_mmu *mmu =3D to_msm_vm(gpu->vm)->mmu; + mmu->funcs->detach(mmu); + drm_gpuvm_put(gpu->vm); } =20 if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 1f26ba00f773..d8425e6d7f5a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,8 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_devi= ce *pdev); - struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); + struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_devic= e *pdev); + struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); =20 /** @@ -234,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; =20 - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; =20 /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -363,7 +363,7 @@ struct msm_context { int queueid; =20 /** @vm: the per-process GPU address-space */ - struct msm_gem_vm *vm; + struct 
drm_gpuvm *vm; =20 /** @kref: the reference count */ struct kref ref; @@ -673,7 +673,7 @@ int msm_gpu_init(struct drm_device *drm, struct platfor= m_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); =20 -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); =20 void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 6458bd82a0cd..e82b8569a468 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned lo= ng iova, int flags, void return -ENOSYS; } =20 -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) +struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct msm_mmu *mmu; struct device *mdp_dev =3D dev->dev; struct device *mdss_dev =3D mdp_dev->parent; @@ -212,7 +212,7 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *d= ev) return vm; } =20 - msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(vm)->mmu, kms, msm_kms_fault_handler); =20 return vm; } diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index f45996a03e15..7cdb2eb67700 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; =20 /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; =20 /* disp snapshot support */ struct kthread_worker *dump_worker; diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/ms= m_submitqueue.c index 6298233c3568..8ced49c7557b 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref 
*kref) kfree(ctx->entities[i]); } =20 - msm_gem_vm_put(ctx->vm); + drm_gpuvm_put(ctx->vm); kfree(ctx->comm); kfree(ctx->cmdline); kfree(ctx); --=20 2.49.0 From nobody Wed Oct 8 17:34:32 2025 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5B25D2DFA51 for ; Wed, 25 Jun 2025 18:58:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.168.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750877931; cv=none; b=lE+9OSUIytcxuDFzRxsDiMWS9lC26coZoqpqNjt6QpQZJ+d9CdeOlMoJC2ScFNcSYQacCFOQ8tljHU0xC/edF3OLfUqkpaEjauW28yzQ2b6Kr3Uq3mqvYhT9s5+dLrkqxDqrjx1d/+7nmhddYwEVlS+P68gEX2gOoiAS7Ob9x80= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750877931; c=relaxed/simple; bh=2jcccFJXFCYh6q0iCZXWcRWf5w1lUItpk77t2RSn1c0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=m3oQ83Qsp+LAQqzXZBcMiOoJVJcPQseLgTgZT8iUKRC/qYuhKL8sHT8s0trT8OfTM36xgdBuYdmBGD423He/9sB6vCrop4I0aq9rfUkNaajwdRIBTK3WjL/eWUHVXf3pnAuIw4E9MBZQl9dtDsjHdYZRzNNOBM40KDBlkMK6ntc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oss.qualcomm.com; spf=pass smtp.mailfrom=oss.qualcomm.com; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b=HQ2KRNFV; arc=none smtp.client-ip=205.220.168.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oss.qualcomm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=oss.qualcomm.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b="HQ2KRNFV" Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar,
 Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
 Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 16/42] drm/msm: Split out helper to get iommu prot flags
Date: Wed, 25 Jun 2025 11:47:09 -0700
Message-ID: <20250625184918.124608-17-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

We'll re-use this in the vm_bind path.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c | 12 ++++++++++--
 drivers/gpu/drm/msm/msm_gem.h |  1 +
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 87949d0e87bf..09c40a7e04ac 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -466,10 +466,9 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
 	return vma;
 }
 
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
+int msm_gem_prot(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct page **pages;
 	int prot = IOMMU_READ;
 
 	if (!(msm_obj->flags & MSM_BO_GPU_READONLY))
@@ -485,6 +484,15 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 	else if (prot == 2)
 		prot |= IOMMU_USE_LLC_NWA;
 
+	return prot;
+}
+
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
+{
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+	struct page **pages;
+	int prot = msm_gem_prot(obj);
+
 	msm_gem_assert_locked(obj);
 
 	pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 557b6804181f..278ec34c31fc 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -158,6 +158,7 @@ struct msm_gem_object {
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
+int msm_gem_prot(struct drm_gem_object *obj);
 int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
 void msm_gem_unpin_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_active(struct drm_gem_object *obj);
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Sean Paul,
 Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 17/42] drm/msm: Add mmu support for non-zero offset
Date: Wed, 25 Jun 2025 11:47:10 -0700
Message-ID: <20250625184918.124608-18-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Only needs to be supported for the iopgtables mmu; the other cases are
either only used for kernel-managed mappings (where the offset is always
zero) or devices which do not support sparse bindings.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a2xx_gpummu.c |  5 ++++-
 drivers/gpu/drm/msm/msm_gem.c            |  4 ++--
 drivers/gpu/drm/msm/msm_gem.h            |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c        | 13 +++++++------
 drivers/gpu/drm/msm/msm_iommu.c          | 22 ++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_mmu.h            |  2 +-
 6 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
index 39641551eeb6..6124336af2ec 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
@@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu)
 }
 
 static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu);
 	unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE;
 	struct sg_dma_page_iter dma_iter;
 	unsigned prot_bits = 0;
 
+	WARN_ON(off != 0);
+
 	if (prot & IOMMU_WRITE)
 		prot_bits |= 1;
 	if (prot & IOMMU_READ)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 09c40a7e04ac..194a15802a5f 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -457,7 +457,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
 	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
-		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
+		vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end);
 	} else {
 		GEM_WARN_ON(vma->va.addr < range_start);
 		GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
@@ -499,7 +499,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
+	return msm_gem_vma_map(vma, prot, msm_obj->sgt);
 }
 
 void msm_gem_unpin_locked(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 278ec34c31fc..2dd9a7f585f4 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -110,9 +110,9 @@ struct msm_gem_vma {
 
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end);
+		u64 offset, u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct drm_gpuva *vma);
-int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index df8eb910ca31..ef0efd87e4a6 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma)
 
 /* Map and pin vma: */
 int
-msm_gem_vma_map(struct drm_gpuva *vma, int prot,
-		struct sg_table *sgt, int size)
+msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
@@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
-
+	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
+				  vma->gem.offset, vma->va.range,
+				  prot);
 	if (ret) {
 		msm_vma->mapped = false;
 	}
@@ -93,7 +93,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 /* Create a new vma and allocate an iova for it */
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end)
+		u64 offset, u64 range_start, u64 range_end)
 {
 	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
 	struct drm_gpuvm_bo *vm_bo;
@@ -107,6 +107,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
+		BUG_ON(offset != 0);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						obj->size, PAGE_SIZE, 0,
 						range_start, range_end, 0);
@@ -120,7 +121,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 
 	GEM_WARN_ON((range_end - range_start) > obj->size);
 
-	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
+	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
 
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index e70088a91283..2fd48e66bc98 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 }
 
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
 	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
@@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
 
+		if (!len)
+			break;
+
+		if (size <= off) {
+			off -= size;
+			continue;
+		}
+
+		phys += off;
+		size -= off;
+		size = min_t(size_t, size, len);
+		off = 0;
+
 		while (size) {
 			size_t pgsize, count, mapped = 0;
 			int ret;
@@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 			phys += mapped;
 			addr += mapped;
 			size -= mapped;
+			len -= mapped;
 
 			if (ret) {
 				msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
@@ -400,11 +415,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu)
 }
 
 static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	size_t ret;
 
+	WARN_ON(off != 0);
+
 	/* The arm-smmu driver expects the addresses to be sign extended */
 	if (iova & BIT_ULL(48))
 		iova |= GENMASK_ULL(63, 49);
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c33247e459d6..c874852b7331 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -12,7 +12,7 @@ struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
-			size_t len, int prot);
+			size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*resume_translation)(struct msm_mmu *mmu);
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 18/42] drm/msm: Add PRR support
Date: Wed, 25 Jun 2025 11:47:11 -0700
Message-ID: <20250625184918.124608-19-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Add PRR (Partially Resident Region) support. PRR is a bypass address which makes GPU writes go to /dev/null and reads return zero. This is used to implement Vulkan sparse residency.

To support PRR/NULL mappings, we allocate a page to reserve a physical address which we know will not be used as part of a GEM object, and configure the SMMU to use this address for PRR/NULL mappings.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++
 drivers/gpu/drm/msm/msm_iommu.c         | 62 ++++++++++++++++++++++++-
 include/uapi/drm/msm_drm.h              |  2 +
 3 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index f6624a246694..e24f627daf37 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -361,6 +361,13 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }
 
+static bool
+adreno_smmu_has_prr(struct msm_gpu *gpu)
+{
+	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev);
+	return adreno_smmu && adreno_smmu->set_prr_addr;
+}
+
 int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
@@ -444,6 +451,9 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 	case MSM_PARAM_UCHE_TRAP_BASE:
 		*value = adreno_gpu->uche_trap_base;
 		return 0;
+	case MSM_PARAM_HAS_PRR:
+		*value = adreno_smmu_has_prr(gpu);
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 2fd48e66bc98..756bd55ee94f 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -13,6 +13,7 @@ struct msm_iommu {
 	struct msm_mmu base;
 	struct iommu_domain *domain;
 	atomic_t pagetables;
+	struct page *prr_page;
 };
 
 #define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -112,6 +113,36 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 	return (size == 0) ? 0 : -EINVAL;
 }
 
+static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot)
+{
+	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+	struct msm_iommu *iommu = to_msm_iommu(pagetable->parent);
+	phys_addr_t phys = page_to_phys(iommu->prr_page);
+	u64 addr = iova;
+
+	while (len) {
+		size_t mapped = 0;
+		size_t size = PAGE_SIZE;
+		int ret;
+
+		ret = ops->map_pages(ops, addr, phys, size, 1, prot, GFP_KERNEL, &mapped);
+
+		/* map_pages could fail after mapping some of the pages,
+		 * so update the counters before error handling.
+		 */
+		addr += mapped;
+		len -= mapped;
+
+		if (ret) {
+			msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		struct sg_table *sgt, size_t off, size_t len,
 		int prot)
@@ -122,6 +153,9 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 	u64 addr = iova;
 	unsigned int i;
 
+	if (!sgt)
+		return msm_iommu_pagetable_map_prr(mmu, iova, len, prot);
+
 	for_each_sgtable_sg(sgt, sg, i) {
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
@@ -177,9 +211,16 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
 	 * If this is the last attached pagetable for the parent,
 	 * disable TTBR0 in the arm-smmu driver
 	 */
-	if (atomic_dec_return(&iommu->pagetables) == 0)
+	if (atomic_dec_return(&iommu->pagetables) == 0) {
 		adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL);
 
+		if (adreno_smmu->set_prr_bit) {
+			adreno_smmu->set_prr_bit(adreno_smmu->cookie, false);
+			__free_page(iommu->prr_page);
+			iommu->prr_page = NULL;
+		}
+	}
+
 	free_io_pgtable_ops(pagetable->pgtbl_ops);
 	kfree(pagetable);
 }
@@ -336,6 +377,25 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
 		kfree(pagetable);
 		return ERR_PTR(ret);
 	}
+
+	BUG_ON(iommu->prr_page);
+	if (adreno_smmu->set_prr_bit) {
+		/*
+		 * We need a zero'd page for two reasons:
+		 *
+		 * 1) Reserve a known physical address to use when
+		 *    mapping NULL / sparsely resident regions
+		 * 2) Read back zero
+		 *
+		 * It appears the hw drops writes to the PRR region
+		 * on the floor, but reads actually return whatever
+		 * is in the PRR page.
+		 */
+		iommu->prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+		adreno_smmu->set_prr_addr(adreno_smmu->cookie,
+					  page_to_phys(iommu->prr_page));
+		adreno_smmu->set_prr_bit(adreno_smmu->cookie, true);
+	}
 }
 
 /* Needed later for TLB flush */
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 2342cb90857e..5bc5e4526ccf 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -91,6 +91,8 @@ struct drm_msm_timespec {
 #define MSM_PARAM_UBWC_SWIZZLE 0x12 /* RO */
 #define MSM_PARAM_MACROTILE_MODE 0x13 /* RO */
 #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */
+/* PRR (Partially Resident Region) is required for sparse residency: */
+#define MSM_PARAM_HAS_PRR 0x15 /* RO */
 
 /* For backwards compat. The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 19/42] drm/msm: Rename msm_gem_vma_purge() -> _unmap()
Date: Wed, 25 Jun 2025 11:47:12 -0700
Message-ID: <20250625184918.124608-20-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

This is a more descriptive name.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c     | 6 +++---
 drivers/gpu/drm/msm/msm_gem.h     | 2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 194a15802a5f..89fead77c0d8 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -61,7 +61,7 @@ static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 	drm_gpuvm_bo_for_each_va (vma, vm_bo) {
 		if (vma->vm != vm)
 			continue;
-		msm_gem_vma_purge(vma);
+		msm_gem_vma_unmap(vma);
 		msm_gem_vma_close(vma);
 		break;
 	}
@@ -437,7 +437,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 		drm_gpuvm_bo_get(vm_bo);
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			msm_gem_vma_purge(vma);
+			msm_gem_vma_unmap(vma);
 			if (close)
 				msm_gem_vma_close(vma);
 		}
@@ -615,7 +615,7 @@ static int clear_iova(struct drm_gem_object *obj,
 	if (!vma)
 		return 0;
 
-	msm_gem_vma_purge(vma);
+	msm_gem_vma_unmap(vma);
 	msm_gem_vma_close(vma);
 
 	return 0;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 2dd9a7f585f4..ec1a7a837e52 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -111,7 +111,7 @@ struct msm_gem_vma {
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 offset, u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct drm_gpuva *vma);
+void msm_gem_vma_unmap(struct drm_gpuva *vma);
 int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index ef0efd87e4a6..e16a8cafd8be 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -21,7 +21,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 }
 
 /* Actually unmap memory for the vma */
-void msm_gem_vma_purge(struct drm_gpuva *vma)
+void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
header.i=@qualcomm.com header.b="RluCK2o2" Received: from pps.filterd (m0279866.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 55PABmDY029963 for ; Wed, 25 Jun 2025 18:58:54 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qualcomm.com; h= cc:content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=qcppdkim1; bh=ffdUUPweRzg vpO3v9DoMvCSyNHDyemApNNm03T9kbrM=; b=RluCK2o2vLtJJnpmimfjCEGt4SF HpILbEhqSZg6S/LvsfJMADHiwSNZZENM9MvuHNZazy6iMNs7iw4fIa8JD/f4V0W2 7uoCjdAKHMiMRwDMi2drF8RujBPRvNw0EtDyYjZXTWjmQrTjU7Z/0mjrHFk5xKfU NB/B9bLQzHL6paGWjExlP3zcfoj0tNJ46UOELvBllGmxgVcE4PwJssJpyUYWUyGz O7x8harkijKoAbYiGiEQFMuLU4bmT03mL/K9TNulNvEpRJ0I9JS4U/qjJAOZdz8L xQNzh/R+/qss+e7xIWuZNTf/FFEGZ1y4eEUXQ/bZYkQg37Jhsa2aiPwHRVw== Received: from mail-pg1-f199.google.com (mail-pg1-f199.google.com [209.85.215.199]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 47ec26b67g-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 25 Jun 2025 18:58:53 +0000 (GMT) Received: by mail-pg1-f199.google.com with SMTP id 41be03b00d2f7-b3183193374so113268a12.1 for ; Wed, 25 Jun 2025 11:58:53 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1750877933; x=1751482733; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ffdUUPweRzgvpO3v9DoMvCSyNHDyemApNNm03T9kbrM=; b=SC23TdQpPqpZT0uk7yO7zDL8Gqbxbox1V8mS0W2csxThPuzjh1bVh8ehem0uX6gVwG XVLLYDwVnuvuS2cDJmzKcc0nKd6u7KZpFwrKTxtN+UjR2NTWuy+UQ8Akp70EPjkSwRUJ piOEDOt8gGqa0hsD53FUN2HTTG2Z7kwvZ2mthxJz3B0jl8EIlW4DhIj3Y+iElWx1nAvX DxrosU+5D7zJgtx6pWU2Gk31jwCkrIOpL9lymYdbKzMQFxICSNAbh/VhYFwByq1GaWKW KabrK4Ds1SwUuUmM1Z5gsQkPi4KaOi8qvbfAsAu4661ewKp0oegdwCpsPR7Fp1GOJL+M EGHQ== X-Forwarded-Encrypted: i=1; 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
 Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, Konrad Dybcio,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 20/42] drm/msm: Drop queued submits on lastclose()
Date: Wed, 25 Jun 2025 11:47:13 -0700
Message-ID: <20250625184918.124608-21-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

If we haven't written the submit into the ringbuffer yet, then drop it.
The submit still retires through the normal path, to preserve fence
signalling order, but we can skip the IBs to userspace cmdstream.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_drv.c        | 1 +
 drivers/gpu/drm/msm/msm_gpu.h        | 8 ++++++++
 drivers/gpu/drm/msm/msm_ringbuffer.c | 6 ++++++
 3 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 6ef29bc48bb0..5909720be48d 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -250,6 +250,7 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 
 static void context_close(struct msm_context *ctx)
 {
+	ctx->closed = true;
 	msm_submitqueue_close(ctx);
 	msm_context_put(ctx);
 }
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d8425e6d7f5a..bfaec80e5f2d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -362,6 +362,14 @@ struct msm_context {
 	 */
 	int queueid;
 
+	/**
+	 * @closed: The device file associated with this context has been closed.
+	 *
+	 * Once the device is closed, any submits that have not been written
+	 * to the ring buffer are no-op'd.
+	 */
+	bool closed;
+
 	/** @vm: the per-process GPU address-space */
 	struct drm_gpuvm *vm;
 
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index bbf8503f6bb5..b8bcd5d9690d 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -17,6 +17,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 	struct msm_fence_context *fctx = submit->ring->fctx;
 	struct msm_gpu *gpu = submit->gpu;
 	struct msm_drm_private *priv = gpu->dev->dev_private;
+	unsigned nr_cmds = submit->nr_cmds;
 	int i;
 
 	msm_fence_init(submit->hw_fence, fctx);
@@ -36,8 +37,13 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 	/* TODO move submit path over to using a per-ring lock.. */
 	mutex_lock(&gpu->lock);
 
+	if (submit->queue->ctx->closed)
+		submit->nr_cmds = 0;
+
 	msm_gpu_submit(gpu, submit);
 
+	submit->nr_cmds = nr_cmds;
+
 	mutex_unlock(&gpu->lock);
 
 	return dma_fence_get(submit->hw_fence);
-- 
2.49.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
 Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 21/42] drm/msm: Lazily create context VM
Date: Wed, 25 Jun 2025 11:47:14 -0700
Message-ID: <20250625184918.124608-22-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

In the next commit, a way for userspace to opt-in to userspace managed
VM is added. For this to work, we need to defer creation of the VM
until it is needed.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  3 ++-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 14 +++++++-----
 drivers/gpu/drm/msm/msm_drv.c           | 29 ++++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.h           |  9 +++++++-
 5 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index c43a443661e4..0d7c2a2eeb8f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
 	struct msm_context *ctx = submit->queue->ctx;
+	struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
@@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index e24f627daf37..b70ed4bc0e0d 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -373,6 +373,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct drm_device *drm = gpu->dev;
+	/* Note ctx can be NULL when called from rd_open(): */
+	struct drm_gpuvm *vm = ctx ? msm_context_vm(drm, ctx) : NULL;
 
 	/* No pointer params yet */
 	if (*len != 0)
@@ -418,8 +420,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = 0;
 		return 0;
 	case MSM_PARAM_FAULTS:
-		if (ctx->vm)
-			*value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
+		if (vm)
+			*value = gpu->global_faults + to_msm_vm(vm)->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -427,14 +429,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = gpu->suspend_count;
 		return 0;
 	case MSM_PARAM_VA_START:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_start;
+		*value = vm->mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_range;
+		*value = vm->mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 5909720be48d..ac8a5b072afe 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -214,10 +214,29 @@ static void load_gpu(struct drm_device *dev)
 	mutex_unlock(&init_lock);
 }
 
+/**
+ * msm_context_vm - lazily create the context's VM
+ *
+ * @dev: the drm device
+ * @ctx: the context
+ *
+ * The VM is lazily created, so that userspace has a chance to opt-in to having
+ * a userspace managed VM before the VM is created.
+ *
+ * Note that this does not return a reference to the VM.  Once the VM is created,
+ * it exists for the lifetime of the context.
+ */
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	if (!ctx->vm)
+		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+	return ctx->vm;
+}
+
 static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
-	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -230,7 +249,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 	kref_init(&ctx->ref);
 	msm_submitqueue_init(dev, ctx);
 
-	ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
 	file->driver_priv = ctx;
 
 	ctx->seqno = atomic_inc_return(&ident);
@@ -409,7 +427,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	 * Don't pin the memory here - just get an address so that userspace can
 	 * be productive
 	 */
-	return msm_gem_get_iova(obj, ctx->vm, iova);
+	return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova);
 }
 
 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
@@ -418,18 +436,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx = file->driver_priv;
+	struct drm_gpuvm *vm = msm_context_vm(dev, ctx);
 
 	if (!priv->gpu)
 		return -EINVAL;
 
 	/* Only supported if per-process address space is supported: */
-	if (priv->gpu->vm == ctx->vm)
+	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
 
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
-	return msm_gem_set_iova(obj, ctx->vm, iova);
+	return msm_gem_set_iova(obj, vm, iova);
 }
 
 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index d8ff6aeb04ab..068ca618376c 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->vm = queue->ctx->vm;
+	submit->vm = msm_context_vm(dev, queue->ctx);
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index bfaec80e5f2d..d1530de96315 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -370,7 +370,12 @@ struct msm_context {
 	 */
 	bool closed;
 
-	/** @vm: the per-process GPU address-space */
+	/**
+	 * @vm:
+	 *
+	 * The per-process GPU address-space.  Do not access directly, use
+	 * msm_context_vm().
+	 */
 	struct drm_gpuvm *vm;
 
 	/** @kref: the reference count */
@@ -455,6 +460,8 @@ struct msm_context {
 	atomic64_t ctx_mem;
 };
 
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
-- 
2.49.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
 Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
 Marijn Suijten, David Airlie, Simona Vetter, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 22/42] drm/msm: Add opt-in for VM_BIND
Date: Wed, 25 Jun 2025 11:47:15 -0700
Message-ID: <20250625184918.124608-23-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Add a SET_PARAM for userspace to request to manage the VM itself,
instead of getting a kernel managed VM.

In order to transition to a userspace managed VM, this param must be
set before any mappings are created.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  4 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 15 +++++++++++++
 drivers/gpu/drm/msm/msm_drv.c           | 22 +++++++++++++++++--
 drivers/gpu/drm/msm/msm_gem.c           |  8 +++++++
 drivers/gpu/drm/msm/msm_gpu.c           |  5 +++--
 drivers/gpu/drm/msm/msm_gpu.h           | 29 +++++++++++++++++++++++--
 include/uapi/drm/msm_drm.h              | 24 ++++++++++++++++++++
 7 files changed, 99 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0d7c2a2eeb8f..f0e37733c65d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2263,7 +2263,7 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 }
 
 static struct drm_gpuvm *
-a6xx_create_private_vm(struct msm_gpu *gpu)
+a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
 {
 	struct msm_mmu *mmu;
 
@@ -2273,7 +2273,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
 		return ERR_CAST(mmu);
 
 	return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START,
-				 adreno_private_vm_size(gpu), true);
+				 adreno_private_vm_size(gpu), kernel_managed);
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index b70ed4bc0e0d..efe03f3f42ba 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -508,6 +508,21 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
 		return msm_context_set_sysprof(ctx, gpu, value);
+	case MSM_PARAM_EN_VM_BIND:
+		/* We can only support VM_BIND with per-process pgtables: */
+		if (ctx->vm == gpu->vm)
+			return UERR(EINVAL, drm, "requires per-process pgtables");
+
+		/*
+		 * We can only switch to VM_BIND mode if the VM has not yet
+		 * been created:
+		 */
+		if (ctx->vm)
+			return UERR(EBUSY, drm, "VM already created");
+
+		ctx->userspace_managed_vm = value;
+
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index ac8a5b072afe..89cb7820064f 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -228,9 +228,21 @@ static void load_gpu(struct drm_device *dev)
  */
 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
 {
+	static DEFINE_MUTEX(init_lock);
 	struct msm_drm_private *priv = dev->dev_private;
-	if (!ctx->vm)
-		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+
+	/* Once ctx->vm is created it is valid for the lifetime of the context: */
+	if (ctx->vm)
+		return ctx->vm;
+
+	mutex_lock(&init_lock);
+	if (!ctx->vm) {
+		ctx->vm = msm_gpu_create_private_vm(
+			priv->gpu, current, !ctx->userspace_managed_vm);
+
+	}
+	mutex_unlock(&init_lock);
+
 	return ctx->vm;
 }
 
@@ -420,6 +432,9 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; =20 @@ -441,6 +456,9 @@ static int msm_ioctl_gem_info_set_iova(struct drm_devic= e *dev, if (!priv->gpu) return -EINVAL; =20 + if (msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "VM_BIND is enabled"); + /* Only supported if per-process address space is supported: */ if (priv->gpu->vm =3D=3D vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 89fead77c0d8..142845378deb 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -85,6 +85,14 @@ static void msm_gem_close(struct drm_gem_object *obj, st= ruct drm_file *file) if (!ctx->vm) return; =20 + /* + * VM_BIND does not depend on implicit teardown of VMAs on handle + * close, but instead on implicit teardown of the VM when the device + * is closed (see msm_gem_vm_close()) + */ + if (msm_context_is_vmbind(ctx)) + return; + /* * TODO we might need to kick this to a queue to avoid blocking * in CLOSE ioctl diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 82e33aa1ccd0..0314e15d04c2 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -831,7 +831,8 @@ static int get_clocks(struct platform_device *pdev, str= uct msm_gpu *gpu) =20 /* Return a new address space for a msm_drm_private instance */ struct drm_gpuvm * -msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task, + bool kernel_managed) { struct drm_gpuvm *vm =3D NULL; =20 @@ -843,7 +844,7 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct t= ask_struct *task) * the global one */ if (gpu->funcs->create_private_vm) { - vm =3D gpu->funcs->create_private_vm(gpu); + vm =3D gpu->funcs->create_private_vm(gpu, kernel_managed); if (!IS_ERR(vm)) to_msm_vm(vm)->pid =3D get_pid(task_pid(task)); } diff --git a/drivers/gpu/drm/msm/msm_gpu.h 
b/drivers/gpu/drm/msm/msm_gpu.h index d1530de96315..448ebf721bd8 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -79,7 +79,7 @@ struct msm_gpu_funcs { void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); - struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu); + struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu, bool kernel_managed); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -370,6 +370,14 @@ struct msm_context { */ bool closed; + /** + * @userspace_managed_vm: + * + * Has userspace opted in to a userspace-managed VM (ie. VM_BIND) via + * MSM_PARAM_EN_VM_BIND? + */ + bool userspace_managed_vm; + /** * @vm: * @@ -462,6 +470,22 @@ struct msm_context { struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx); +/** + * msm_context_is_vmbind() - has userspace opted in to VM_BIND? + * + * @ctx: the drm_file context + * + * See MSM_PARAM_EN_VM_BIND. If userspace is managing the VM, it can + * do sparse binding including having multiple, potentially partial, + * mappings in the VM. Therefore certain legacy uabi (ie. GET_IOVA, + * SET_IOVA) are rejected because they don't have a sensible meaning.
+ */ +static inline bool +msm_context_is_vmbind(struct msm_context *ctx) +{ + return ctx->userspace_managed_vm; +} + /** * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority * @@ -689,7 +713,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, const char *name, struct msm_gpu_config *config); struct drm_gpuvm * -msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task, + bool kernel_managed); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 5bc5e4526ccf..b974f5a24dbc 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -93,6 +93,30 @@ struct drm_msm_timespec { #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */ /* PRR (Partially Resident Region) is required for sparse residency: */ #define MSM_PARAM_HAS_PRR 0x15 /* RO */ +/* MSM_PARAM_EN_VM_BIND is set to 1 to enable VM_BIND ops. + * + * With VM_BIND enabled, userspace is required to allocate iova and use the + * VM_BIND ops for map/unmap ioctls. MSM_INFO_SET_IOVA and MSM_INFO_GET_IOVA + * will be rejected. (The latter does not have a sensible meaning when a BO + * can have multiple and/or partial mappings.) + * + * With VM_BIND enabled, userspace does not include a submit_bo table in the + * SUBMIT ioctl (this will be rejected); the resident set is determined by + * the VM_BIND ops. + * + * Enabling VM_BIND will fail on devices which do not have per-process pgtables, + * and it is not allowed to disable VM_BIND once it has been enabled. + * + * Enabling VM_BIND should be done (attempted) prior to allocating any BOs or + * submitqueues of type MSM_SUBMITQUEUE_VM_BIND.
+ * + * Relatedly, when VM_BIND mode is enabled, the kernel will not try to recover + * from GPU faults or failed async VM_BIND ops, in particular because it is + * difficult to communicate to userspace which op failed so that userspace + * could rewind and try again. When the VM is marked unusable, the SUBMIT + * ioctl will throw -EPIPE. + */ +#define MSM_PARAM_EN_VM_BIND 0x16 /* WO, once */ /* For backwards compat. The original support for preemption was based on * a single ring per priority level so # of priority levels equals the # -- 2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 23/42] drm/msm: Mark VM as unusable on GPU hangs
Date: Wed, 25 Jun 2025 11:47:16 -0700
Message-ID: <20250625184918.124608-24-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
From: Rob Clark If userspace has opted-in to VM_BIND, then GPU hangs and VM_BIND errors
will mark the VM as unusable. Signed-off-by: Rob Clark Signed-off-by: Rob Clark Reviewed-by: Antonino Maniscalco Tested-by: Antonino Maniscalco --- drivers/gpu/drm/msm/msm_gem.h | 17 +++++++++++++++++ drivers/gpu/drm/msm/msm_gem_submit.c | 3 +++ drivers/gpu/drm/msm/msm_gpu.c | 16 ++++++++++++++-- 3 files changed, 34 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index ec1a7a837e52..5e8c419ed834 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -76,6 +76,23 @@ struct msm_gem_vm { /** @managed: is this a kernel managed VM? */ bool managed; + + /** + * @unusable: True if the VM has turned unusable because something + * bad happened during an asynchronous request. + * + * We don't try to recover from such failures, because this implies + * informing userspace about the specific operation that failed, and + * hoping the userspace driver can replay things from there. This all + * sounds very complicated for little gain. + * + * Instead, we should just flag the VM as unusable, and fail any + * further request targeting this VM. + * + * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST + * situation, where the logical device needs to be re-created. + */ + bool unusable; }; #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 068ca618376c..9562b6343e13 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -681,6 +681,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; + if (to_msm_vm(ctx->vm)->unusable) + return UERR(EPIPE, dev, "context is unusable"); + /* for now, we just have 3d pipe..
eventually this would need to * be more clever to dispatch to appropriate gpu module: */ diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 0314e15d04c2..6503ce655b10 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -386,8 +386,20 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; - if (submit->vm) - to_msm_vm(submit->vm)->faults++; + if (submit->vm) { + struct msm_gem_vm *vm = to_msm_vm(submit->vm); + + vm->faults++; + + /* + * If userspace has opted-in to VM_BIND (and therefore userspace + * management of the VM), faults mark the VM as unusable. This + * matches vulkan expectations (vulkan is the main target for + * VM_BIND) + */ + if (!vm->managed) + vm->unusable = true; + } get_comm_cmdline(submit, &comm, &cmd); -- 2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v7 24/42] drm/msm: Add _NO_SHARE flag
Date: Wed, 25 Jun 2025 11:47:17 -0700
Message-ID: <20250625184918.124608-25-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
From: Rob Clark Buffers that are not shared between contexts can share a single resv object. This way drm_gpuvm will not track them as external objects, and submit-time validating overhead will be O(1) for all N non-shared BOs, instead of O(n). Signed-off-by: Rob Clark Signed-off-by: Rob Clark Reviewed-by: Antonino Maniscalco Tested-by: Antonino Maniscalco --- drivers/gpu/drm/msm/msm_drv.h | 1 + drivers/gpu/drm/msm/msm_gem.c | 21 +++++++++++++++++++++ drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++ include/uapi/drm/msm_drm.h | 14 ++++++++++++++ 4 files changed, 51 insertions(+) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 136dd928135a..2f42d075f13a 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -246,6 +246,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map); struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 142845378deb..ec349719b49a 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++
b/drivers/gpu/drm/msm/msm_gem.c @@ -554,6 +554,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, msm_gem_assert_locked(obj); + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return -EINVAL; + vma = get_vma_locked(obj, vm, range_start, range_end); if (IS_ERR(vma)) return PTR_ERR(vma); @@ -1084,6 +1087,14 @@ static void msm_gem_free_object(struct drm_gem_object *obj) put_pages(obj); } + if (msm_obj->flags & MSM_BO_NO_SHARE) { + struct drm_gem_object *r_obj = + container_of(obj->resv, struct drm_gem_object, _resv); + + /* Drop reference we hold to shared resv obj: */ + drm_gem_object_put(r_obj); + } + drm_gem_object_release(obj); kfree(msm_obj->metadata); @@ -1116,6 +1127,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, if (name) msm_gem_object_set_name(obj, "%s", name); + if (flags & MSM_BO_NO_SHARE) { + struct msm_context *ctx = file->driver_priv; + struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm); + + drm_gem_object_get(r_obj); + + obj->resv = r_obj->resv; + } + ret = drm_gem_handle_create(file, obj, handle); /* drop reference from allocate - handle holds it now */ @@ -1148,6 +1168,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = { .free = msm_gem_free_object, .open = msm_gem_open, .close = msm_gem_close, + .export = msm_gem_prime_export, .pin = msm_gem_prime_pin, .unpin = msm_gem_prime_unpin, .get_sg_table = msm_gem_prime_get_sg_table, diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c index ee267490c935..1a6d8099196a 100644 --- a/drivers/gpu/drm/msm/msm_gem_prime.c +++ b/drivers/gpu/drm/msm/msm_gem_prime.c @@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); int npages = obj->size >> PAGE_SHIFT; + if (msm_obj->flags & MSM_BO_NO_SHARE) + return ERR_PTR(-EINVAL); + if (WARN_ON(!msm_obj->pages)) /* should have already pinned! */ return ERR_PTR(-ENOMEM); @@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, return msm_gem_import(dev, attach->dmabuf, sg); } + +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags) +{ + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return ERR_PTR(-EPERM); + + return drm_gem_prime_export(obj, flags); +} + int msm_gem_prime_pin(struct drm_gem_object *obj) { struct page **pages; @@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj) if (obj->import_attach) return 0; + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return -EINVAL; + pages = msm_gem_pin_pages_locked(obj); if (IS_ERR(pages)) ret = PTR_ERR(pages); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index b974f5a24dbc..1bccc347945c 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -140,6 +140,19 @@ struct drm_msm_param { #define MSM_BO_SCANOUT 0x00000001 /* scanout capable */ #define MSM_BO_GPU_READONLY 0x00000002 +/* Private buffers do not need to be explicitly listed in the SUBMIT + * ioctl, unless referenced by a drm_msm_gem_submit_cmd. Private + * buffers may NOT be imported/exported or used for scanout (or any + * other situation where buffers can be indefinitely pinned, but + * cases other than scanout are all kernel owned BOs which are not + * visible to userspace). + * + * In exchange for those constraints, all private BOs associated with + * a single context (drm_file) share a single dma_resv, and if there + * has been no eviction since the last submit, there is no per-BO + * bookkeeping to do, significantly cutting the SUBMIT overhead.
+ */ +#define MSM_BO_NO_SHARE 0x00000004 #define MSM_BO_CACHE_MASK 0x000f0000 /* cache modes */ #define MSM_BO_CACHED 0x00010000 @@ -149,6 +162,7 @@ struct drm_msm_param { #define MSM_BO_FLAGS (MSM_BO_SCANOUT | \ MSM_BO_GPU_READONLY | \ + MSM_BO_NO_SHARE | \ MSM_BO_CACHE_MASK) struct drm_msm_gem_new { -- 2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 25/42] drm/msm: Crashdump prep for sparse mappings
Date: Wed, 25 Jun 2025 11:47:18 -0700
Message-ID: <20250625184918.124608-26-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
From: Rob Clark In this case, userspace could request dumping partial GEM obj mappings. Also drop use of should_dump() helper, which really only makes sense in the old submit->bos[] table world.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gpu.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 6503ce655b10..2eaca2a22de9 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -219,13 +219,14 @@ static void msm_gpu_devcoredump_free(void *data)
 }
 
 static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
-		struct drm_gem_object *obj, u64 iova, bool full)
+		struct drm_gem_object *obj, u64 iova,
+		bool full, size_t offset, size_t size)
 {
 	struct msm_gpu_state_bo *state_bo = &state->bos[state->nr_bos];
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
 	/* Don't record write only objects */
-	state_bo->size = obj->size;
+	state_bo->size = size;
 	state_bo->flags = msm_obj->flags;
 	state_bo->iova = iova;
 
@@ -236,7 +237,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 	if (full) {
 		void *ptr;
 
-		state_bo->data = kvmalloc(obj->size, GFP_KERNEL);
+		state_bo->data = kvmalloc(size, GFP_KERNEL);
 		if (!state_bo->data)
 			goto out;
 
@@ -249,7 +250,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 			goto out;
 		}
 
-		memcpy(state_bo->data, ptr, obj->size);
+		memcpy(state_bo->data, ptr + offset, size);
 		msm_gem_put_vaddr(obj);
 	}
 out:
@@ -279,6 +280,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 	state->fault_info = gpu->fault_info;
 
 	if (submit) {
+		extern bool rd_full;
 		int i;
 
 		if (state->fault_info.ttbr0) {
@@ -294,9 +296,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
 
 		for (i = 0; state->bos && i < submit->nr_bos; i++) {
-			msm_gpu_crashstate_get_bo(state, submit->bos[i].obj,
-					submit->bos[i].iova,
-					should_dump(submit, i));
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+						  dump, 0, obj->size);
 		}
 	}
 
-- 
2.49.0
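The new offset/size pair lets the crash dumper snapshot just a sub-range of a mapping instead of the whole GEM object. A minimal userspace sketch of that range-capture logic follows; `fake_bo` and `capture_range` are hypothetical stand-ins for illustration, not the kernel's `drm_gem_object` API:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a GEM object: just a size and a vmap'd
 * backing buffer. */
struct fake_bo {
	size_t size;
	const uint8_t *vaddr;
};

/* Capture [offset, offset + size) of a buffer, mirroring the new
 * offset/size parameters of msm_gpu_crashstate_get_bo().  Returns NULL
 * if the range falls outside the object or allocation fails; the
 * caller frees the result. */
static uint8_t *capture_range(const struct fake_bo *bo, size_t offset, size_t size)
{
	uint8_t *data;

	/* written to avoid overflow in offset + size */
	if (offset > bo->size || size > bo->size - offset)
		return NULL;

	data = malloc(size);
	if (!data)
		return NULL;

	memcpy(data, bo->vaddr + offset, size);
	return data;
}
```

The overflow-safe bounds check matters because both offset and size come from untrusted submit metadata in the partial-dump case.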
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
 Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 26/42] drm/msm: rd dumping prep for sparse mappings
Date: Wed, 25 Jun 2025 11:47:19 -0700
Message-ID: <20250625184918.124608-27-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

Similar to the previous commit, add support for dumping partial mappings.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.h | 10 ---------
 drivers/gpu/drm/msm/msm_rd.c  | 38 ++++++++++++++++-------------------
 2 files changed, 17 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 5e8c419ed834..b44a4f7313c9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -403,14 +403,4 @@ static inline void msm_gem_submit_put(struct msm_gem_submit *submit)
 
 void msm_submit_retire(struct msm_gem_submit *submit);
 
-/* helper to determine of a buffer in submit should be dumped, used for both
- * devcoredump and debugfs cmdstream dumping:
- */
-static inline bool
-should_dump(struct msm_gem_submit *submit, int idx)
-{
-	extern bool rd_full;
-	return rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
-}
-
 #endif /* __MSM_GEM_H__ */
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index 39138e190cb9..edbcb93410a9 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -308,21 +308,11 @@ void msm_rd_debugfs_cleanup(struct msm_drm_private *priv)
 	priv->hangrd = NULL;
 }
 
-static void snapshot_buf(struct msm_rd_state *rd,
-		struct msm_gem_submit *submit, int idx,
-		uint64_t iova, uint32_t size, bool full)
+static void snapshot_buf(struct msm_rd_state *rd, struct drm_gem_object *obj,
+		uint64_t iova, bool full, size_t offset, size_t size)
 {
-	struct drm_gem_object *obj = submit->bos[idx].obj;
-	unsigned offset = 0;
 	const char *buf;
 
-	if (iova) {
-		offset = iova - submit->bos[idx].iova;
-	} else {
-		iova = submit->bos[idx].iova;
-		size = obj->size;
-	}
-
 	/*
 	 * Always write the GPUADDR header so can get a complete list of all the
 	 * buffers in the cmd
@@ -333,10 +323,6 @@ static void snapshot_buf(struct msm_rd_state *rd,
 	if (!full)
 		return;
 
-	/* But only dump the contents of buffers marked READ */
-	if (!(submit->bos[idx].flags & MSM_SUBMIT_BO_READ))
-		return;
-
 	buf = msm_gem_get_vaddr_active(obj);
 	if (IS_ERR(buf))
 		return;
@@ -352,6 +338,7 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 		const char *fmt, ...)
 {
+	extern bool rd_full;
 	struct task_struct *task;
 	char msg[256];
 	int i, n;
@@ -385,16 +372,25 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 
 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
 
-	for (i = 0; i < submit->nr_bos; i++)
-		snapshot_buf(rd, submit, i, 0, 0, should_dump(submit, i));
+	for (i = 0; i < submit->nr_bos; i++) {
+		struct drm_gem_object *obj = submit->bos[i].obj;
+		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+	}
 
 	for (i = 0; i < submit->nr_cmds; i++) {
 		uint32_t szd = submit->cmd[i].size; /* in dwords */
+		int idx = submit->cmd[i].idx;
+		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
 
 		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!should_dump(submit, i)) {
-			snapshot_buf(rd, submit, submit->cmd[i].idx,
-					submit->cmd[i].iova, szd * 4, true);
+		if (!dump) {
+			struct drm_gem_object *obj = submit->bos[idx].obj;
+			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+
+			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+				     offset, szd * 4);
		}
	}
 
-- 
2.49.0
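With snapshot_buf() taking an explicit offset/size, the caller computes a cmdstream's offset inside its backing BO as the difference between the two iovas, alongside the open-coded dump predicate that replaced should_dump(). A small sketch of both calculations; `BO_DUMP_FLAG`, `want_dump`, and `cmd_offset` are assumed names for illustration (the real flag is `MSM_SUBMIT_BO_DUMP` from the msm UAPI):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Assumed flag value for illustration only. */
#define BO_DUMP_FLAG 0x04

/* Mirror of the open-coded predicate: dump a BO's contents when the
 * rd_full modparam is set or the BO was flagged for dumping. */
static int want_dump(int rd_full, uint32_t bo_flags)
{
	return rd_full || (bo_flags & BO_DUMP_FLAG);
}

/* A cmdstream at cmd_iova inside a BO mapped at bo_iova starts at this
 * byte offset within the BO -- the offset handed to snapshot_buf(). */
static size_t cmd_offset(uint64_t cmd_iova, uint64_t bo_iova)
{
	return (size_t)(cmd_iova - bo_iova);
}
```

Pushing the offset computation to the caller is what makes the helper reusable for the sparse-mapping case, where a VMA rather than a submit entry supplies the range.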
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
 Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 27/42] drm/msm: Crashdump support for sparse
Date: Wed, 25 Jun 2025 11:47:20 -0700
Message-ID: <20250625184918.124608-28-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
In this case, we need to iterate the VMAs looking for ones with the
MSM_VMA_DUMP flag.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gpu.c | 96 ++++++++++++++++++++++++++---------
 1 file changed, 72 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 2eaca2a22de9..8178b6499478 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -241,9 +241,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 		if (!state_bo->data)
 			goto out;
 
-		msm_gem_lock(obj);
 		ptr = msm_gem_get_vaddr_active(obj);
-		msm_gem_unlock(obj);
 		if (IS_ERR(ptr)) {
 			kvfree(state_bo->data);
 			state_bo->data = NULL;
@@ -251,12 +249,75 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 		}
 
 		memcpy(state_bo->data, ptr + offset, size);
-		msm_gem_put_vaddr(obj);
+		msm_gem_put_vaddr_locked(obj);
 	}
 out:
 	state->nr_bos++;
 }
 
+static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submit *submit)
+{
+	extern bool rd_full;
+
+	if (!submit)
+		return;
+
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_exec exec;
+		struct drm_gpuva *vma;
+		unsigned cnt = 0;
+
+		drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
+		drm_exec_until_all_locked(&exec) {
+			cnt = 0;
+
+			drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(submit->vm));
+			drm_exec_retry_on_contention(&exec);
+
+			drm_gpuvm_for_each_va (vma, submit->vm) {
+				if (!vma->gem.obj)
+					continue;
+
+				cnt++;
+				drm_exec_lock_obj(&exec, vma->gem.obj);
+				drm_exec_retry_on_contention(&exec);
+			}
+		}
+
+		drm_gpuvm_for_each_va (vma, submit->vm)
+			cnt++;
+
+		state->bos = kcalloc(cnt, sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			msm_gpu_crashstate_get_bo(state, vma->gem.obj, vma->va.addr,
+						  dump, vma->gem.offset, vma->va.range);
+		}
+
+		drm_exec_fini(&exec);
+	} else {
+		state->bos = kcalloc(submit->nr_bos,
+			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+		for (int i = 0; state->bos && i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			msm_gem_lock(obj);
+			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+						  dump, 0, obj->size);
+			msm_gem_unlock(obj);
+		}
+	}
+}
+
 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 		struct msm_gem_submit *submit, char *comm, char *cmd)
 {
@@ -279,30 +340,17 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 	state->cmd = kstrdup(cmd, GFP_KERNEL);
 	state->fault_info = gpu->fault_info;
 
-	if (submit) {
-		extern bool rd_full;
-		int i;
-
-		if (state->fault_info.ttbr0) {
-			struct msm_gpu_fault_info *info = &state->fault_info;
-			struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
+	if (submit && state->fault_info.ttbr0) {
+		struct msm_gpu_fault_info *info = &state->fault_info;
+		struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
 
-			msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
-						   &info->asid);
-			msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
-		}
-
-		state->bos = kcalloc(submit->nr_bos,
-			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
-
-		for (i = 0; state->bos && i < submit->nr_bos; i++) {
-			struct drm_gem_object *obj = submit->bos[i].obj;
-			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
-			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
-						  dump, 0, obj->size);
-		}
+		msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
+					   &info->asid);
+		msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
 	}
 
+	crashstate_get_bos(state, submit);
+
 	/* Set the active crash state to be dumped on failure */
 	gpu->crashstate = state;
 
-- 
2.49.0
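The VM-walk above skips VMAs with no backing GEM object (MAP_NULL/PRR mappings) and dumps contents only when rd_full is set or the VMA carries the dump flag. A self-contained sketch of that filtering over a plain array; `fake_vma`, `VMA_DUMP`, and `count_dumpable` are hypothetical stand-ins for the kernel's `drm_gpuva` walk:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed flag value, standing in for MSM_VMA_DUMP. */
#define VMA_DUMP 0x1

/* Minimal stand-in for struct drm_gpuva: a NULL obj models a
 * MAP_NULL/PRR mapping with no backing GEM object. */
struct fake_vma {
	void *obj;
	uint32_t flags;
};

/* Count the VMAs whose contents would actually be dumped: backed by an
 * object and either rd_full or flagged for dumping -- the same
 * filtering the crashstate VM walk applies. */
static unsigned count_dumpable(const struct fake_vma *vmas, size_t n, int rd_full)
{
	unsigned cnt = 0;

	for (size_t i = 0; i < n; i++) {
		if (!vmas[i].obj)
			continue; /* skip MAP_NULL/PRR */
		if (rd_full || (vmas[i].flags & VMA_DUMP))
			cnt++;
	}
	return cnt;
}
```

Note the kernel version makes two passes, first counting (and locking) objects to size the state->bos array, then capturing; the sketch shows only the filtering decision.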
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
 Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 28/42] drm/msm: rd dumping support for sparse
Date: Wed, 25 Jun 2025 11:47:21 -0700
Message-ID: <20250625184918.124608-29-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

As with devcoredump, we need to iterate the VMAs to figure out what to
dump.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_rd.c | 48 +++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index edbcb93410a9..54493a94dcb7 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -372,25 +372,43 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 
 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
 
-	for (i = 0; i < submit->nr_bos; i++) {
-		struct drm_gem_object *obj = submit->bos[i].obj;
-		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_gpuva *vma;
 
-		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
-	}
+		drm_gpuvm_resv_assert_held(submit->vm);
 
-	for (i = 0; i < submit->nr_cmds; i++) {
-		uint32_t szd = submit->cmd[i].size; /* in dwords */
-		int idx = submit->cmd[i].idx;
-		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			snapshot_buf(rd, vma->gem.obj, vma->va.addr, dump,
+				     vma->gem.offset, vma->va.range);
+		}
+
+	} else {
+		for (i = 0; i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+		}
+
+		for (i = 0; i < submit->nr_cmds; i++) {
+			uint32_t szd = submit->cmd[i].size; /* in dwords */
+			int idx = submit->cmd[i].idx;
+			bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
 
-		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!dump) {
-			struct drm_gem_object *obj = submit->bos[idx].obj;
-			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+			/* snapshot cmdstream bo's (if we haven't already): */
+			if (!dump) {
+				struct drm_gem_object *obj = submit->bos[idx].obj;
+				size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
 
-			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
-					offset, szd * 4);
+				snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+					     offset, szd * 4);
+			}
 		}
 	}
 
-- 
2.49.0
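The VM_BIND path above walks every VMA in the VM and skips mappings with no backing GEM object before deciding whether to snapshot buffer contents. A rough userspace illustration of that selection logic (not the kernel code — `struct vma`, `count_dumped`, and the `MSM_VMA_DUMP` value here are invented for the sketch):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the per-VMA dump flag (value made up for this sketch) */
#define MSM_VMA_DUMP 0x1

/* Simplified stand-in for struct drm_gpuva */
struct vma {
	void *obj;        /* NULL models a MAP_NULL/PRR mapping */
	uint64_t addr;    /* va.addr */
	uint64_t range;   /* va.range */
	uint32_t flags;
};

/*
 * Count how many VMAs would get a full-content snapshot: NULL-backed
 * VMAs are skipped entirely; the rest are dumped when rd_full is set
 * or the VMA itself is flagged for dumping.
 */
static int count_dumped(const struct vma *vmas, int n, bool rd_full)
{
	int dumped = 0;

	for (int i = 0; i < n; i++) {
		bool dump = rd_full || (vmas[i].flags & MSM_VMA_DUMP);

		/* Skip MAP_NULL/PRR VMAs: no backing object to read */
		if (!vmas[i].obj)
			continue;
		if (dump)
			dumped++;
	}
	return dumped;
}
```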
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
 Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, Konrad Dybcio, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, Sumit Semwal, Christian König,
 linux-kernel@vger.kernel.org (open list),
 linux-media@vger.kernel.org (open list: DMA BUFFER SHARING FRAMEWORK),
 linaro-mm-sig@lists.linaro.org (moderated list: DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v7 29/42] drm/msm: Extract out syncobj helpers
Date: Wed, 25 Jun 2025 11:47:22 -0700
Message-ID: <20250625184918.124608-30-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

We'll be re-using these for the VM_BIND ioctl.  Also, rename a few
things in the uapi header to reflect that syncobj use is not specific
to the submit ioctl.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/Makefile         |   1 +
 drivers/gpu/drm/msm/msm_gem_submit.c | 192 ++------------------------
 drivers/gpu/drm/msm/msm_syncobj.c    | 172 ++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_syncobj.h    |  37 ++++++
 include/uapi/drm/msm_drm.h           |  26 ++--
 5 files changed, 235 insertions(+), 193 deletions(-)
 create mode 100644 drivers/gpu/drm/msm/msm_syncobj.c
 create mode 100644 drivers/gpu/drm/msm/msm_syncobj.h

diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 5df20cbeafb8..8af34f87e0c8 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -128,6 +128,7 @@ msm-y += \
 	msm_rd.o \
 	msm_ringbuffer.o \
 	msm_submitqueue.o \
+	msm_syncobj.o \
 	msm_gpu_tracepoints.o \
 
 msm-$(CONFIG_DRM_FBDEV_EMULATION) += msm_fbdev.o
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 9562b6343e13..9f18771a1e88 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -16,6 +16,7 @@
 #include "msm_gpu.h"
 #include "msm_gem.h"
 #include "msm_gpu_trace.h"
+#include "msm_syncobj.h"
 
 /* For userspace errors, use DRM_UT_DRIVER.. so that userspace can enable
  * error msgs for debugging, but we don't spam dmesg by default
@@ -491,173 +492,6 @@ void msm_submit_retire(struct msm_gem_submit *submit)
 	}
 }
 
-struct msm_submit_post_dep {
-	struct drm_syncobj *syncobj;
-	uint64_t point;
-	struct dma_fence_chain *chain;
-};
-
-static struct drm_syncobj **msm_parse_deps(struct msm_gem_submit *submit,
-					   struct drm_file *file,
-					   uint64_t in_syncobjs_addr,
-					   uint32_t nr_in_syncobjs,
-					   size_t syncobj_stride)
-{
-	struct drm_syncobj **syncobjs = NULL;
-	struct drm_msm_gem_submit_syncobj syncobj_desc = {0};
-	int ret = 0;
-	uint32_t i, j;
-
-	syncobjs = kcalloc(nr_in_syncobjs, sizeof(*syncobjs),
-			   GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
-	if (!syncobjs)
-		return ERR_PTR(-ENOMEM);
-
-	for (i = 0; i < nr_in_syncobjs; ++i) {
-		uint64_t address = in_syncobjs_addr + i * syncobj_stride;
-
-		if (copy_from_user(&syncobj_desc,
-				   u64_to_user_ptr(address),
-				   min(syncobj_stride, sizeof(syncobj_desc)))) {
-			ret = -EFAULT;
-			break;
-		}
-
-		if (syncobj_desc.point &&
-		    !drm_core_check_feature(submit->dev, DRIVER_SYNCOBJ_TIMELINE)) {
-			ret = SUBMIT_ERROR(EOPNOTSUPP, submit, "syncobj timeline unsupported");
-			break;
-		}
-
-		if (syncobj_desc.flags & ~MSM_SUBMIT_SYNCOBJ_FLAGS) {
-			ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj flags: %x", syncobj_desc.flags);
-			break;
-		}
-
-		ret = drm_sched_job_add_syncobj_dependency(&submit->base, file,
-							   syncobj_desc.handle, syncobj_desc.point);
-		if (ret)
-			break;
-
-		if (syncobj_desc.flags & MSM_SUBMIT_SYNCOBJ_RESET) {
-			syncobjs[i] =
-				drm_syncobj_find(file, syncobj_desc.handle);
-			if (!syncobjs[i]) {
-				ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj handle: %u", i);
-				break;
-			}
-		}
-	}
-
-	if (ret) {
-		for (j = 0; j <= i; ++j) {
-			if (syncobjs[j])
-				drm_syncobj_put(syncobjs[j]);
-		}
-		kfree(syncobjs);
-		return ERR_PTR(ret);
-	}
-	return syncobjs;
-}
-
-static void msm_reset_syncobjs(struct drm_syncobj **syncobjs,
-			       uint32_t nr_syncobjs)
-{
-	uint32_t i;
-
-	for (i = 0; syncobjs && i < nr_syncobjs; ++i) {
-		if (syncobjs[i])
-			drm_syncobj_replace_fence(syncobjs[i], NULL);
-	}
-}
-
-static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
-						       struct drm_file *file,
-						       uint64_t syncobjs_addr,
-						       uint32_t nr_syncobjs,
-						       size_t syncobj_stride)
-{
-	struct msm_submit_post_dep *post_deps;
-	struct drm_msm_gem_submit_syncobj syncobj_desc = {0};
-	int ret = 0;
-	uint32_t i, j;
-
-	post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps),
-			    GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
-	if (!post_deps)
-		return ERR_PTR(-ENOMEM);
-
-	for (i = 0; i < nr_syncobjs; ++i) {
-		uint64_t address = syncobjs_addr + i * syncobj_stride;
-
-		if (copy_from_user(&syncobj_desc,
-				   u64_to_user_ptr(address),
-				   min(syncobj_stride, sizeof(syncobj_desc)))) {
-			ret = -EFAULT;
-			break;
-		}
-
-		post_deps[i].point = syncobj_desc.point;
-
-		if (syncobj_desc.flags) {
-			ret = UERR(EINVAL, dev, "invalid syncobj flags");
-			break;
-		}
-
-		if (syncobj_desc.point) {
-			if (!drm_core_check_feature(dev,
-						    DRIVER_SYNCOBJ_TIMELINE)) {
-				ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported");
-				break;
-			}
-
-			post_deps[i].chain = dma_fence_chain_alloc();
-			if (!post_deps[i].chain) {
-				ret = -ENOMEM;
-				break;
-			}
-		}
-
-		post_deps[i].syncobj =
-			drm_syncobj_find(file, syncobj_desc.handle);
-		if (!post_deps[i].syncobj) {
-			ret = UERR(EINVAL, dev, "invalid syncobj handle");
-			break;
-		}
-	}
-
-	if (ret) {
-		for (j = 0; j <= i; ++j) {
-			dma_fence_chain_free(post_deps[j].chain);
-			if (post_deps[j].syncobj)
-				drm_syncobj_put(post_deps[j].syncobj);
-		}
-
-		kfree(post_deps);
-		return ERR_PTR(ret);
-	}
-
-	return post_deps;
-}
-
-static void msm_process_post_deps(struct msm_submit_post_dep *post_deps,
-				  uint32_t count, struct dma_fence *fence)
-{
-	uint32_t i;
-
-	for (i = 0; post_deps && i < count; ++i) {
-		if (post_deps[i].chain) {
-			drm_syncobj_add_point(post_deps[i].syncobj,
-					      post_deps[i].chain,
-					      fence, post_deps[i].point);
-			post_deps[i].chain = NULL;
-		} else {
-			drm_syncobj_replace_fence(post_deps[i].syncobj,
-						  fence);
-		}
-	}
-}
-
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		struct drm_file *file)
 {
@@ -668,7 +502,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	struct msm_gpu *gpu = priv->gpu;
 	struct msm_gpu_submitqueue *queue;
 	struct msm_ringbuffer *ring;
-	struct msm_submit_post_dep *post_deps = NULL;
+	struct msm_syncobj_post_dep *post_deps = NULL;
 	struct drm_syncobj **syncobjs_to_reset = NULL;
 	struct sync_file *sync_file = NULL;
 	int out_fence_fd = -1;
@@ -745,10 +579,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	}
 
 	if (args->flags & MSM_SUBMIT_SYNCOBJ_IN) {
-		syncobjs_to_reset = msm_parse_deps(submit, file,
-						   args->in_syncobjs,
-						   args->nr_in_syncobjs,
-						   args->syncobj_stride);
+		syncobjs_to_reset = msm_syncobj_parse_deps(dev, &submit->base,
+							   file, args->in_syncobjs,
+							   args->nr_in_syncobjs,
+							   args->syncobj_stride);
 		if (IS_ERR(syncobjs_to_reset)) {
 			ret = PTR_ERR(syncobjs_to_reset);
 			goto out_unlock;
@@ -756,10 +590,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	}
 
 	if (args->flags & MSM_SUBMIT_SYNCOBJ_OUT) {
-		post_deps = msm_parse_post_deps(dev, file,
-						args->out_syncobjs,
-						args->nr_out_syncobjs,
-						args->syncobj_stride);
+		post_deps = msm_syncobj_parse_post_deps(dev, file,
+							args->out_syncobjs,
+							args->nr_out_syncobjs,
+							args->syncobj_stride);
 		if (IS_ERR(post_deps)) {
 			ret = PTR_ERR(post_deps);
 			goto out_unlock;
@@ -902,10 +736,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	args->fence = submit->fence_id;
 	queue->last_fence = submit->fence_id;
 
-	msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs);
-	msm_process_post_deps(post_deps, args->nr_out_syncobjs,
-			      submit->user_fence);
-
+	msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs);
+	msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, submit->user_fence);
 
 out:
 	submit_cleanup(submit, !!ret);
diff --git a/drivers/gpu/drm/msm/msm_syncobj.c b/drivers/gpu/drm/msm/msm_syncobj.c
new file mode 100644
index 000000000000..4baa9f522c54
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_syncobj.c
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Google, Inc */
+
+#include "drm/drm_drv.h"
+
+#include "msm_drv.h"
+#include "msm_syncobj.h"
+
+struct drm_syncobj **
+msm_syncobj_parse_deps(struct drm_device *dev,
+		       struct drm_sched_job *job,
+		       struct drm_file *file,
+		       uint64_t in_syncobjs_addr,
+		       uint32_t nr_in_syncobjs,
+		       size_t syncobj_stride)
+{
+	struct drm_syncobj **syncobjs = NULL;
+	struct drm_msm_syncobj syncobj_desc = {0};
+	int ret = 0;
+	uint32_t i, j;
+
+	syncobjs = kcalloc(nr_in_syncobjs, sizeof(*syncobjs),
+			   GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
+	if (!syncobjs)
+		return ERR_PTR(-ENOMEM);
+
+	for (i = 0; i < nr_in_syncobjs; ++i) {
+		uint64_t address = in_syncobjs_addr + i * syncobj_stride;
+
+		if (copy_from_user(&syncobj_desc,
+				   u64_to_user_ptr(address),
+				   min(syncobj_stride, sizeof(syncobj_desc)))) {
+			ret = -EFAULT;
+			break;
+		}
+
+		if (syncobj_desc.point &&
+		    !drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE)) {
+			ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported");
+			break;
+		}
+
+		if (syncobj_desc.flags & ~MSM_SYNCOBJ_FLAGS) {
+			ret = UERR(EINVAL, dev, "invalid syncobj flags: %x", syncobj_desc.flags);
+			break;
+		}
+
+		ret = drm_sched_job_add_syncobj_dependency(job, file,
+							   syncobj_desc.handle,
+							   syncobj_desc.point);
+		if (ret)
+			break;
+
+		if (syncobj_desc.flags & MSM_SYNCOBJ_RESET) {
+			syncobjs[i] = drm_syncobj_find(file, syncobj_desc.handle);
+			if (!syncobjs[i]) {
+				ret = UERR(EINVAL, dev, "invalid syncobj handle: %u", i);
+				break;
+			}
+		}
+	}
+
+	if (ret) {
+		for (j = 0; j <= i; ++j) {
+			if (syncobjs[j])
+				drm_syncobj_put(syncobjs[j]);
+		}
+		kfree(syncobjs);
+		return ERR_PTR(ret);
+	}
+	return syncobjs;
+}
+
+void
+msm_syncobj_reset(struct drm_syncobj **syncobjs, uint32_t nr_syncobjs)
+{
+	uint32_t i;
+
+	for (i = 0; syncobjs && i < nr_syncobjs; ++i) {
+		if (syncobjs[i])
+			drm_syncobj_replace_fence(syncobjs[i], NULL);
+	}
+}
+
+struct msm_syncobj_post_dep *
+msm_syncobj_parse_post_deps(struct drm_device *dev,
+			    struct drm_file *file,
+			    uint64_t syncobjs_addr,
+			    uint32_t nr_syncobjs,
+			    size_t syncobj_stride)
+{
+	struct msm_syncobj_post_dep *post_deps;
+	struct drm_msm_syncobj syncobj_desc = {0};
+	int ret = 0;
+	uint32_t i, j;
+
+	post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps),
+			    GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
+	if (!post_deps)
+		return ERR_PTR(-ENOMEM);
+
+	for (i = 0; i < nr_syncobjs; ++i) {
+		uint64_t address = syncobjs_addr + i * syncobj_stride;
+
+		if (copy_from_user(&syncobj_desc,
+				   u64_to_user_ptr(address),
+				   min(syncobj_stride, sizeof(syncobj_desc)))) {
+			ret = -EFAULT;
+			break;
+		}
+
+		post_deps[i].point = syncobj_desc.point;
+
+		if (syncobj_desc.flags) {
+			ret = UERR(EINVAL, dev, "invalid syncobj flags");
+			break;
+		}
+
+		if (syncobj_desc.point) {
+			if (!drm_core_check_feature(dev,
+						    DRIVER_SYNCOBJ_TIMELINE)) {
+				ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported");
+				break;
+			}
+
+			post_deps[i].chain = dma_fence_chain_alloc();
+			if (!post_deps[i].chain) {
+				ret = -ENOMEM;
+				break;
+			}
+		}
+
+		post_deps[i].syncobj =
+			drm_syncobj_find(file, syncobj_desc.handle);
+		if (!post_deps[i].syncobj) {
+			ret = UERR(EINVAL, dev, "invalid syncobj handle");
+			break;
+		}
+	}
+
+	if (ret) {
+		for (j = 0; j <= i; ++j) {
+			dma_fence_chain_free(post_deps[j].chain);
+			if (post_deps[j].syncobj)
+				drm_syncobj_put(post_deps[j].syncobj);
+		}
+
+		kfree(post_deps);
+		return ERR_PTR(ret);
+	}
+
+	return post_deps;
+}
+
+void
+msm_syncobj_process_post_deps(struct msm_syncobj_post_dep *post_deps,
+			      uint32_t count, struct dma_fence *fence)
+{
+	uint32_t i;
+
+	for (i = 0; post_deps && i < count; ++i) {
+		if (post_deps[i].chain) {
+			drm_syncobj_add_point(post_deps[i].syncobj,
+					      post_deps[i].chain,
+					      fence, post_deps[i].point);
+			post_deps[i].chain = NULL;
+		} else {
+			drm_syncobj_replace_fence(post_deps[i].syncobj,
+						  fence);
+		}
+	}
+}
diff --git a/drivers/gpu/drm/msm/msm_syncobj.h b/drivers/gpu/drm/msm/msm_syncobj.h
new file mode 100644
index 000000000000..bcaa15d01da0
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_syncobj.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Google, Inc */
+
+#ifndef __MSM_GEM_SYNCOBJ_H__
+#define __MSM_GEM_SYNCOBJ_H__
+
+#include "drm/drm_device.h"
+#include "drm/drm_syncobj.h"
+#include "drm/gpu_scheduler.h"
+
+struct msm_syncobj_post_dep {
+	struct drm_syncobj *syncobj;
+	uint64_t point;
+	struct dma_fence_chain *chain;
+};
+
+struct drm_syncobj **
+msm_syncobj_parse_deps(struct drm_device *dev,
+		       struct drm_sched_job *job,
+		       struct drm_file *file,
+		       uint64_t in_syncobjs_addr,
+		       uint32_t nr_in_syncobjs,
+		       size_t syncobj_stride);
+
+void msm_syncobj_reset(struct drm_syncobj **syncobjs, uint32_t nr_syncobjs);
+
+struct msm_syncobj_post_dep *
+msm_syncobj_parse_post_deps(struct drm_device *dev,
+			    struct drm_file *file,
+			    uint64_t syncobjs_addr,
+			    uint32_t nr_syncobjs,
+			    size_t syncobj_stride);
+
+void msm_syncobj_process_post_deps(struct msm_syncobj_post_dep *post_deps,
+				   uint32_t count, struct dma_fence *fence);
+
+#endif /* __MSM_GEM_SYNCOBJ_H__ */
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 1bccc347945c..2c2fc4b284d0 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -220,6 +220,17 @@ struct drm_msm_gem_cpu_fini {
  * Cmdstream Submission:
  */
 
+#define MSM_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */
+#define MSM_SYNCOBJ_FLAGS ( \
+		MSM_SYNCOBJ_RESET | \
+		0)
+
+struct drm_msm_syncobj {
+	__u32 handle;     /* in, syncobj handle. */
+	__u32 flags;      /* in, from MSM_SUBMIT_SYNCOBJ_FLAGS */
+	__u64 point;      /* in, timepoint for timeline syncobjs. */
+};
+
 /* The value written into the cmdstream is logically:
  *
  * ((relocbuf->gpuaddr + reloc_offset) << shift) | or
@@ -309,17 +320,6 @@ struct drm_msm_gem_submit_bo {
 		MSM_SUBMIT_FENCE_SN_IN | \
 		0)
 
-#define MSM_SUBMIT_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */
-#define MSM_SUBMIT_SYNCOBJ_FLAGS ( \
-		MSM_SUBMIT_SYNCOBJ_RESET | \
-		0)
-
-struct drm_msm_gem_submit_syncobj {
-	__u32 handle;     /* in, syncobj handle. */
-	__u32 flags;      /* in, from MSM_SUBMIT_SYNCOBJ_FLAGS */
-	__u64 point;      /* in, timepoint for timeline syncobjs. */
-};
-
 /* Each cmdstream submit consists of a table of buffers involved, and
  * one or more cmdstream buffers.  This allows for conditional execution
  * (context-restore), and IB buffers needed for per tile/bin draw cmds.
@@ -333,8 +333,8 @@ struct drm_msm_gem_submit {
 	__u64 cmds;          /* in, ptr to array of submit_cmd's */
 	__s32 fence_fd;      /* in/out fence fd (see MSM_SUBMIT_FENCE_FD_IN/OUT) */
 	__u32 queueid;       /* in, submitqueue id */
-	__u64 in_syncobjs;   /* in, ptr to array of drm_msm_gem_submit_syncobj */
-	__u64 out_syncobjs;  /* in, ptr to array of drm_msm_gem_submit_syncobj */
+	__u64 in_syncobjs;   /* in, ptr to array of drm_msm_syncobj */
+	__u64 out_syncobjs;  /* in, ptr to array of drm_msm_syncobj */
 	__u32 nr_in_syncobjs;  /* in, number of entries in in_syncobj */
 	__u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */
 	__u32 syncobj_stride; /* in, stride of syncobj arrays. */
-- 
2.49.0
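Both parse helpers above share the same partial-failure cleanup pattern: on an error at entry `i`, every slot up to and including `i` is released (unfilled slots are NULL because the array came from `kcalloc()`), then the whole array is freed. A standalone userspace sketch of that pattern — `struct obj`, `put()`, and `parse_deps()` are invented for the test, with a refcount standing in for `drm_syncobj_find()`/`drm_syncobj_put()`:

```c
#include <stdint.h>

/* Toy refcounted object standing in for struct drm_syncobj */
struct obj { int refs; };

/* NULL-safe release, like the drm_syncobj_put() calls in the error path */
static void put(struct obj *o)
{
	if (o)
		o->refs--;
}

/*
 * Fills out[] (which must be zero-initialized, as kcalloc() guarantees)
 * with referenced objects.  On the simulated failure at index fail_at,
 * releases out[0..i] inclusive -- the "for (j = 0; j <= i; ++j)" loop
 * from msm_syncobj_parse_deps() -- and returns the error.
 */
static int parse_deps(struct obj **out, struct obj **src, uint32_t n,
		      uint32_t fail_at)
{
	uint32_t i, j;
	int ret = 0;

	for (i = 0; i < n; i++) {
		if (i == fail_at) {	/* simulated copy_from_user() failure */
			ret = -14;	/* -EFAULT */
			break;
		}
		src[i]->refs++;		/* "find" takes a reference */
		out[i] = src[i];
	}

	if (ret) {
		for (j = 0; j <= i; ++j)
			put(out[j]);	/* out[i] may be NULL: put() tolerates it */
	}
	return ret;
}
```

Note the `j <= i` bound: the failing slot itself is included, which is safe precisely because the array was zeroed up front.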
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, Rob Clark, Rob Clark, Rob Clark,
 Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 30/42] drm/msm: Use DMA_RESV_USAGE_BOOKKEEP/KERNEL
Date: Wed, 25 Jun 2025 11:47:23 -0700
Message-ID: <20250625184918.124608-31-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Any place we wait for a BO to become idle, we should use BOOKKEEP
usage, to ensure that it waits for _any_ activity.
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c          | 6 +++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ec349719b49a..106fec06c18d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -97,8 +97,8 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
	 * TODO we might need to kick this to a queue to avoid blocking
	 * in CLOSE ioctl
	 */
-	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
-			      msecs_to_jiffies(1000));
+	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_BOOKKEEP, false,
+			      MAX_SCHEDULE_TIMEOUT);

	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
	put_iova_spaces(obj, ctx->vm, true);
@@ -903,7 +903,7 @@ bool msm_gem_active(struct drm_gem_object *obj)
	if (to_msm_bo(obj)->pin_count)
		return true;

-	return !dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true));
+	return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_BOOKKEEP);
 }

 int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 5faf6227584a..1039e3c0a47b 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -139,7 +139,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 static bool
 wait_for_idle(struct drm_gem_object *obj)
 {
-	enum dma_resv_usage usage = dma_resv_usage_rw(true);
+	enum dma_resv_usage usage = DMA_RESV_USAGE_BOOKKEEP;
	return dma_resv_wait_timeout(obj->resv, usage, false, 10) > 0;
 }

--
2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list: DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list: DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v7 31/42] drm/msm: Add VM_BIND submitqueue
Date: Wed, 25 Jun 2025 11:47:24 -0700
Message-ID: <20250625184918.124608-32-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

This submitqueue type isn't tied to a hw ringbuffer, but instead executes on the CPU for performing async VM_BIND ops.
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.h         | 12 +++++
 drivers/gpu/drm/msm/msm_gem_submit.c  | 60 +++++++++++++++++++---
 drivers/gpu/drm/msm/msm_gem_vma.c     | 71 +++++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gpu.h         |  3 ++
 drivers/gpu/drm/msm/msm_submitqueue.c | 67 +++++++++++++++++++------
 include/uapi/drm/msm_drm.h            |  9 +++-
 6 files changed, 197 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index b44a4f7313c9..945a235d73cf 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -53,6 +53,13 @@ struct msm_gem_vm {
	/** @base: Inherit from drm_gpuvm. */
	struct drm_gpuvm base;

+	/**
+	 * @sched: Scheduler used for asynchronous VM_BIND request.
+	 *
+	 * Unused for kernel managed VMs (where all operations are synchronous).
+	 */
+	struct drm_gpu_scheduler sched;
+
	/**
	 * @mm: Memory management for kernel managed VA allocations
	 *
@@ -71,6 +78,9 @@ struct msm_gem_vm {
	 */
	struct pid *pid;

+	/** @last_fence: Fence for last pending work scheduled on the VM */
+	struct dma_fence *last_fence;
+
	/** @faults: the number of GPU hangs associated with this address space */
	int faults;

@@ -100,6 +110,8 @@ struct drm_gpuvm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
		  u64 va_start, u64 va_size, bool managed);

+void msm_gem_vm_close(struct drm_gpuvm *gpuvm);
+
 struct msm_fence_context;

 #define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 9f18771a1e88..e2174b7d0e40 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -4,6 +4,7 @@
  * Author: Rob Clark
  */

+#include <linux/dma-fence-unwrap.h>
 #include
 #include
 #include
@@ -258,30 +259,43 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 static int submit_lock_objects(struct msm_gem_submit *submit)
 {
	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
+	struct drm_exec *exec = &submit->exec;
	int ret;

-// TODO need to add vm_bind path which locks vm resv + external objs
	drm_exec_init(&submit->exec, flags, submit->nr_bos);

+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		drm_exec_until_all_locked (&submit->exec) {
+			ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1);
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				return ret;
+
+			ret = drm_gpuvm_prepare_objects(submit->vm, exec, 1);
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				return ret;
+		}
+
+		return 0;
+	}
+
	drm_exec_until_all_locked (&submit->exec) {
		ret = drm_exec_lock_obj(&submit->exec,
					drm_gpuvm_resv_obj(submit->vm));
		drm_exec_retry_on_contention(&submit->exec);
		if (ret)
-			goto error;
+			return ret;
		for (unsigned i = 0; i < submit->nr_bos; i++) {
			struct drm_gem_object *obj = submit->bos[i].obj;
			ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
			drm_exec_retry_on_contention(&submit->exec);
			if (ret)
-				goto error;
+				return ret;
		}
	}

	return 0;
-
-error:
-	return ret;
 }

 static int submit_fence_sync(struct msm_gem_submit *submit)
@@ -367,9 +381,18 @@ static void submit_unpin_objects(struct msm_gem_submit *submit)

 static void submit_attach_object_fences(struct msm_gem_submit *submit)
 {
-	int i;
+	struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+	struct dma_fence *last_fence;
+
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		drm_gpuvm_resv_add_fence(submit->vm, &submit->exec,
+					 submit->user_fence,
+					 DMA_RESV_USAGE_BOOKKEEP,
+					 DMA_RESV_USAGE_BOOKKEEP);
+		return;
+	}

-	for (i = 0; i < submit->nr_bos; i++) {
+	for (unsigned i = 0; i < submit->nr_bos; i++) {
		struct drm_gem_object *obj = submit->bos[i].obj;

		if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE)
@@ -379,6 +402,10 @@ static void submit_attach_object_fences(struct msm_gem_submit *submit)
			dma_resv_add_fence(obj->resv, submit->user_fence,
					   DMA_RESV_USAGE_READ);
	}
+
+	last_fence = vm->last_fence;
+	vm->last_fence = dma_fence_unwrap_merge(submit->user_fence, last_fence);
+	dma_fence_put(last_fence);
 }

 static int submit_bo(struct msm_gem_submit *submit, uint32_t idx,
@@ -537,6 +564,11 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
	if (!queue)
		return -ENOENT;

+	if (queue->flags & MSM_SUBMITQUEUE_VM_BIND) {
+		ret = UERR(EINVAL, dev, "Invalid queue type");
+		goto out_post_unlock;
+	}
+
	ring = gpu->rb[queue->ring_nr];

	if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
@@ -726,6 +758,18 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,

	submit_attach_object_fences(submit);

+	if (msm_context_is_vmbind(ctx)) {
+		/*
+		 * If we are not using VM_BIND, submit_pin_vmas() will validate
+		 * just the BOs attached to the submit.  In that case we don't
+		 * need to validate the _entire_ vm, because userspace tracked
+		 * what BOs are associated with the submit.
+		 */
+		ret = drm_gpuvm_validate(submit->vm, &submit->exec);
+		if (ret)
+			goto out;
+	}
+
	/* The scheduler owns a ref now: */
	msm_gem_submit_get(submit);

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index e16a8cafd8be..cf37abb98235 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -16,6 +16,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
	drm_mm_takedown(&vm->mm);
	if (vm->mmu)
		vm->mmu->funcs->destroy(vm->mmu);
+	dma_fence_put(vm->last_fence);
	put_pid(vm->pid);
	kfree(vm);
 }
@@ -154,6 +155,9 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = {
	.vm_free = msm_gem_vm_free,
 };

+static const struct drm_sched_backend_ops msm_vm_bind_ops = {
+};
+
 /**
  * msm_gem_vm_create() - Create and initialize a &msm_gem_vm
  * @drm: the drm device
@@ -195,6 +199,21 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
		goto err_free_vm;
	}

+	if (!managed) {
+		struct drm_sched_init_args args = {
+			.ops = &msm_vm_bind_ops,
+			.num_rqs = 1,
+			.credit_limit = 1,
+			.timeout = MAX_SCHEDULE_TIMEOUT,
+			.name = "msm-vm-bind",
+			.dev = drm->dev,
+		};
+
+		ret = drm_sched_init(&vm->sched, &args);
+		if (ret)
+			goto err_free_dummy;
+	}
+
	drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
		       va_start, va_size, 0, 0, &msm_gpuvm_ops);
	drm_gem_object_put(dummy_gem);
@@ -206,8 +225,60 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,

	return &vm->base;

+err_free_dummy:
+	drm_gem_object_put(dummy_gem);
+
 err_free_vm:
	kfree(vm);
	return ERR_PTR(ret);
 }
+
+/**
+ * msm_gem_vm_close() - Close a VM
+ * @gpuvm: The VM to close
+ *
+ * Called when the drm device file is closed, to tear down VM related resources
+ * (which will drop refcounts to GEM objects that were still mapped into the
+ * VM at the time).
+ */
+void
+msm_gem_vm_close(struct drm_gpuvm *gpuvm)
+{
+	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
+	struct drm_gpuva *vma, *tmp;
+
+	/*
+	 * For kernel managed VMs, the VMAs are torn down when the handle is
+	 * closed, so nothing more to do.
+	 */
+	if (vm->managed)
+		return;
+
+	if (vm->last_fence)
+		dma_fence_wait(vm->last_fence, false);
+
+	/* Kill the scheduler now, so we aren't racing with it for cleanup: */
+	drm_sched_stop(&vm->sched, NULL);
+	drm_sched_fini(&vm->sched);
+
+	/* Tear down any remaining mappings: */
+	dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL);
+	drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) {
+		struct drm_gem_object *obj = vma->gem.obj;
+
+		if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) {
+			drm_gem_object_get(obj);
+			msm_gem_lock(obj);
+		}
+
+		msm_gem_vma_unmap(vma);
+		msm_gem_vma_close(vma);
+
+		if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) {
+			msm_gem_unlock(obj);
+			drm_gem_object_put(obj);
+		}
+	}
+	dma_resv_unlock(drm_gpuvm_resv(gpuvm));
+}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 448ebf721bd8..9cbf155ff222 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -570,6 +570,9 @@ struct msm_gpu_submitqueue {
	struct mutex lock;
	struct kref ref;
	struct drm_sched_entity *entity;
+
+	/** @_vm_bind_entity: used for @entity pointer for VM_BIND queues */
+	struct drm_sched_entity _vm_bind_entity[0];
 };

 struct msm_gpu_state_bo {
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 8ced49c7557b..8617a82cd6b3 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -72,6 +72,9 @@ void msm_submitqueue_destroy(struct kref *kref)

	idr_destroy(&queue->fence_idr);

+	if (queue->entity == &queue->_vm_bind_entity[0])
+		drm_sched_entity_destroy(queue->entity);
+
	msm_context_put(queue->ctx);

	kfree(queue);
@@ -102,7 +105,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,

 void msm_submitqueue_close(struct msm_context *ctx)
 {
-	struct msm_gpu_submitqueue *entry, *tmp;
+	struct msm_gpu_submitqueue *queue, *tmp;

	if (!ctx)
		return;
@@ -111,10 +114,17 @@ void msm_submitqueue_close(struct msm_context *ctx)
	 * No lock needed in close and there won't
	 * be any more user ioctls coming our way
	 */
-	list_for_each_entry_safe(entry, tmp, &ctx->submitqueues, node) {
-		list_del(&entry->node);
-		msm_submitqueue_put(entry);
+	list_for_each_entry_safe(queue, tmp, &ctx->submitqueues, node) {
+		if (queue->entity == &queue->_vm_bind_entity[0])
+			drm_sched_entity_flush(queue->entity, MAX_WAIT_SCHED_ENTITY_Q_EMPTY);
+		list_del(&queue->node);
+		msm_submitqueue_put(queue);
	}
+
+	if (!ctx->vm)
+		return;
+
+	msm_gem_vm_close(ctx->vm);
 }

 static struct drm_sched_entity *
@@ -160,8 +170,6 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
	struct msm_drm_private *priv = drm->dev_private;
	struct msm_gpu_submitqueue *queue;
	enum drm_sched_priority sched_prio;
-	extern int enable_preemption;
-	bool preemption_supported;
	unsigned ring_nr;
	int ret;

@@ -171,26 +179,53 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
	if (!priv->gpu)
		return -ENODEV;

-	preemption_supported = priv->gpu->nr_rings == 1 && enable_preemption != 0;
+	if (flags & MSM_SUBMITQUEUE_VM_BIND) {
+		unsigned sz;

-	if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported)
-		return -EINVAL;
+		/* Not allowed for kernel managed VMs (ie. kernel allocs VA) */
+		if (!msm_context_is_vmbind(ctx))
+			return -EINVAL;

-	ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio);
-	if (ret)
-		return ret;
+		if (prio)
+			return -EINVAL;
+
+		sz = struct_size(queue, _vm_bind_entity, 1);
+		queue = kzalloc(sz, GFP_KERNEL);
+	} else {
+		extern int enable_preemption;
+		bool preemption_supported =
+			priv->gpu->nr_rings == 1 && enable_preemption != 0;
+
+		if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported)
+			return -EINVAL;

-	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+		ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio);
+		if (ret)
+			return ret;
+
+		queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+	}

	if (!queue)
		return -ENOMEM;

	kref_init(&queue->ref);
	queue->flags = flags;
-	queue->ring_nr = ring_nr;

-	queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
-					 ring_nr, sched_prio);
+	if (flags & MSM_SUBMITQUEUE_VM_BIND) {
+		struct drm_gpu_scheduler *sched = &to_msm_vm(msm_context_vm(drm, ctx))->sched;
+
+		queue->entity = &queue->_vm_bind_entity[0];
+
+		drm_sched_entity_init(queue->entity, DRM_SCHED_PRIORITY_KERNEL,
+				      &sched, 1, NULL);
+	} else {
+		queue->ring_nr = ring_nr;
+
+		queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
+						 ring_nr, sched_prio);
+	}
+
	if (IS_ERR(queue->entity)) {
		ret = PTR_ERR(queue->entity);
		kfree(queue);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 2c2fc4b284d0..6d6cd1219926 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -385,12 +385,19 @@ struct drm_msm_gem_madvise {
 /*
  * Draw queues allow the user to set specific submission parameter. Command
  * submissions specify a specific submitqueue to use.  ID 0 is reserved for
- * backwards compatibility as a "default" submitqueue
+ * backwards compatibility as a "default" submitqueue.
+ *
+ * Because VM_BIND async updates happen on the CPU, they must run on a
+ * virtual queue created with the flag MSM_SUBMITQUEUE_VM_BIND.  If we had
+ * a way to do pgtable updates on the GPU, we could drop this restriction.
 */

 #define MSM_SUBMITQUEUE_ALLOW_PREEMPT	0x00000001
+#define MSM_SUBMITQUEUE_VM_BIND 0x00000002 /* virtual queue for VM_BIND ops */
+
 #define MSM_SUBMITQUEUE_FLAGS		    ( \
		MSM_SUBMITQUEUE_ALLOW_PREEMPT | \
+		MSM_SUBMITQUEUE_VM_BIND | \
		0)

--
2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 32/42] drm/msm: Support IO_PGTABLE_QUIRK_NO_WARN_ON
Date: Wed, 25 Jun 2025 11:47:25 -0700
Message-ID: <20250625184918.124608-33-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

With user managed VMs and multiple queues, it is in theory possible to trigger map/unmap errors.  These will (in a later patch) mark the VM as unusable.
But we want to tell the io-pgtable helpers not to spam the log.  In addition, in the unmap path, we don't want to bail early from the unmap, to ensure we don't leave some dangling pages mapped.

Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c |  2 +-
 drivers/gpu/drm/msm/msm_iommu.c       | 23 ++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_mmu.h         |  2 +-
 3 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index f0e37733c65d..83fba02ca1df 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2267,7 +2267,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
 {
	struct msm_mmu *mmu;

-	mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu);
+	mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu, kernel_managed);

	if (IS_ERR(mmu))
		return ERR_CAST(mmu);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 756bd55ee94f..1c068592f9e9 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -94,15 +94,24 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 {
	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+	int ret = 0;

	while (size) {
-		size_t unmapped, pgsize, count;
+		size_t pgsize, count;
+		ssize_t unmapped;

		pgsize = calc_pgsize(pagetable, iova, iova, size, &count);

		unmapped = ops->unmap_pages(ops, iova, pgsize, count, NULL);
-		if (!unmapped)
-			break;
+		if (unmapped <= 0) {
+			ret = -EINVAL;
+			/*
+			 * Continue attempting to unmap the remainder of the
+			 * range, so we don't end up with some dangling
+			 * mapped pages
+			 */
+			unmapped = PAGE_SIZE;
+		}

		iova += unmapped;
		size -= unmapped;
@@ -110,7 +119,7 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,

	iommu_flush_iotlb_all(to_msm_iommu(pagetable->parent)->domain);

-	return (size == 0) ? 0 : -EINVAL;
+	return ret;
 }

 static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot)
@@ -324,7 +333,7 @@ static const struct iommu_flush_ops tlb_ops = {
 static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev,
				 unsigned long iova, int flags, void *arg);

-struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
+struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed)
 {
	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev);
	struct msm_iommu *iommu = to_msm_iommu(parent);
@@ -358,6 +367,10 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
	ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1;
	ttbr0_cfg.tlb = &tlb_ops;

+	if (!kernel_managed) {
+		ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN;
+	}
+
	pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
						    &ttbr0_cfg, pagetable);

diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c874852b7331..c70c71fb1a4a 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -52,7 +52,7 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg,
	mmu->handler = handler;
 }

-struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent);
+struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed);

 int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr,
			       int *asid);

--
2.49.0
Jun 2025 18:59:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750877956; cv=none; b=r3BaeJTWiD2ySIoWLpdTTy+Os5qKK+uI+muCnQV5zkDbks/kWqXM+nzYyDub5b5eEGj+6RjXwAJPwpU6RxSAhzzom5btNqp4rIGr5cRno5VZikESQbff3jzrsdEUNxTDzFGXHoA8ArgvEtDhvKmnnT04Pe3cgLdyC8cMaMpVqHI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750877956; c=relaxed/simple; bh=NtRjz5V35Ecv1En9evSsW7bK7szwceihcpIDA526Qk0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=KxYLxTuwRiUhxHcByxwXeIAkZRFc/o9/1IbK+LeekudPTNzFD1VY23lOHfa4Cej/oVDMXWd5hqh4ciPVuCwaBakb8FPYVHDrCZhSaD2oiDxVbP7PMJbPRut+fj8FR/xI/Q0P07S5zunJMZbz/u/80Gi3HUF0XkWSwRJHaKSbISo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oss.qualcomm.com; spf=pass smtp.mailfrom=oss.qualcomm.com; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b=GyqZn3Ah; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oss.qualcomm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=oss.qualcomm.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b="GyqZn3Ah" Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 55PCKkdc005903 for ; Wed, 25 Jun 2025 18:59:13 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qualcomm.com; h= cc:content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=qcppdkim1; bh=/oopH5gDR7c AO8FzYui8Gj2RGY+GaRhFrU3dbHaafIQ=; b=GyqZn3Ah4Eyb1K33C7BUKXpTbfE lpjMl0j0xTRqzWViPqsieAUgKlh1HqgeDJrhCPHf9reWB+WNCrWZRnX8jbYldDv3 MC3+2wNws95wfa3MdLWXzk0D4x+J9wT7r+1GxFs6nIPWSjp9DNnU4mFQ+BTxnl6c 
rwzx9yaOyBd628iRtvBKK8wqIxncx49oVo8aPixyaJvAjw9DeFpVaINYYoDkssqK 1quoU8Xxpoj+gUj126pAyVmRsznsLaUhaiFeIqyicE+Z271MAEQzmn6Vvbsdemt5 5mxuqG6BeCs2iioiXpZ/CqCRYOCIm7Utodv0GC1CpcvFfUDBP/zhHJZ0SSg== Received: from mail-pg1-f199.google.com (mail-pg1-f199.google.com [209.85.215.199]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 47f3bggy2u-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 25 Jun 2025 18:59:13 +0000 (GMT) Received: by mail-pg1-f199.google.com with SMTP id 41be03b00d2f7-b34abbcdcf3so92744a12.1 for ; Wed, 25 Jun 2025 11:59:13 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1750877952; x=1751482752; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/oopH5gDR7cAO8FzYui8Gj2RGY+GaRhFrU3dbHaafIQ=; b=WmPYHyaVpZDosIZMKZ/z4DuqWd3ciMei1t3j9A4i+TqTZ6wKyd9qC1/Apj4cHgouOx 6CdH38xyo36YTd4jQvh0eaNkJ2ThZIINfvvnkueVPamZ8casL4tR6N2BATYDKFs7PzYt X5y2O1w17vJPrujN5cnG8pZg8MMvWp2WglnnmrX2qM/poGy40dqxVYgrDApg+20IwiWl IKcg9UeRhe4GAEC9BcsSrt7YiPRP9p5VN1wHqCA70OUrrnK1mK7ObSibwsFvggHQ7oOp LRrGWsq4djUl6Wp+5hmcNrvfcNJ9tIm2MzhCrJhoSAKhMWQe4xpy1gz25XtyUnmBHTwi OdXA== X-Forwarded-Encrypted: i=1; AJvYcCUXqJDrMMXLfZ3wU7xaaJdXc43UDLXzLtWxNaGF9GIA4nZLSnqzkLG6WcWOigdJ1owzvuc1nSXqDjQzgoY=@vger.kernel.org X-Gm-Message-State: AOJu0YxWHGdKM3K7390ayz8Hy8heEkWDLaHGxljYprom3wPm42vkrY08 P30BtwKEkcL8u6vNuM9KxlOP9H4WWPo65qUfpR1YFiNavlS/YhoUjIZ603RP6Y+th6vqkGkmG8q KCTPGI0btD810uqGMSd6HgyaXUIpbI1rH/qdRZ79ZSccXmPW0tGRhpXme6xpC4scQCUA= X-Gm-Gg: ASbGncvnvarnfQ3kD+d87qv52TYGzQBlctU7Qmx5BCUUD7ZUSqkfCPYg0uJ3gne8nnQ r4rnDE7paGBn+bES8SaoIZO5X7Nf6VIzc74EmF7fPTZOYFrFFcx7F+NjUOsVvGY4FyrowV8YCh0 Fq1KVxNJxVLZVyxRBtMI8u3Ii8jCpSHZgjf0/Ts1NtDMOUiET4hMeTFUQy/1wa+mDLYpHYRa2cw 5N/BAzdX4eJCoxqdi9G7hv7dKoj9meEZIx3WEO7VQ3nBkHAVZDLCLoUJwwH0K7k/xSwSbBtZAW5 BSjL3lXRRIDuFJPP7S0qe6EX8xM6awpu X-Received: by 
2002:a05:6a20:93a1:b0:21d:85c:2906 with SMTP id adf61e73a8af0-2208c0f86famr1158125637.13.1750877951987; Wed, 25 Jun 2025 11:59:11 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFWIdPg+envjhgu3e9+Irxral3EyOd30mUU3HTCbj+pgsTzao4HrOoniYO2KTzay3GBl6mVLQ== X-Received: by 2002:a05:6a20:93a1:b0:21d:85c:2906 with SMTP id adf61e73a8af0-2208c0f86famr1158098637.13.1750877951514; Wed, 25 Jun 2025 11:59:11 -0700 (PDT) Received: from localhost ([2601:1c0:5000:d5c:5b3e:de60:4fda:e7b1]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-749c882cd82sm4868690b3a.86.2025.06.25.11.59.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Jun 2025 11:59:11 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott , Antonino Maniscalco , Rob Clark , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v7 33/42] drm/msm: Support pgtable preallocation Date: Wed, 25 Jun 2025 11:47:26 -0700 Message-ID: <20250625184918.124608-34-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com> References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Proofpoint-ORIG-GUID: ifgdta_BEcCvy7OqvmbhAiL1UCtFGklx X-Proofpoint-Spam-Details-Enc: AW1haW4tMjUwNjI1MDE0MyBTYWx0ZWRfXzRnaznvs2Arq 6LtoXR4OEp3DOSOyPyWCRl1NNXzr7zHpH8eWRxR2M53SRJxqI+EKijPezooBI0MuO9GZlI3ikqj TbQbFqj4pUQqywmZmPtfOurw45iwaRLeJyPfqF3KSqIs3k1KMpmAaOkK2DCJ2OmiCAMOGt8kktd FUHURNTGD6n0kMa9ANGXAyeyq2sEXMhujmjJGC4WHSOylTxW4FYb7Ecq8lqKeHYiS5b8KBWObdG GlJypuP6MzwRJ4tCjxYY9+wnqrPssUyb8mVOVzw7TkZJpqeJe/jBXkggin1xC4jWkMnr2hN57N1 
UX1S5Bj+GfnwaUG58kKrecdqM0HiR4WFUgSMXH+UMj6Y85JYaDxB/nZM/96IMvAk8mI3okx0o5T rSHzbEpJhNM7XRwiiknFPFVc1i+B/+zGSCiFgpmYhGP5Wx/16tgxt+53ifnKkg8AGx2MK4cO X-Authority-Analysis: v=2.4 cv=L4kdQ/T8 c=1 sm=1 tr=0 ts=685c4701 cx=c_pps a=Oh5Dbbf/trHjhBongsHeRQ==:117 a=xqWC_Br6kY4A:10 a=6IFa9wvqVegA:10 a=7CQSdrXTAAAA:8 a=cm27Pg_UAAAA:8 a=EUspDBNiAAAA:8 a=FDPAursefL7ktZtO0vwA:9 a=_Vgx9l1VpLgwpw_dHYaR:22 a=a-qgeE7W1pNrGK8U0ZQC:22 X-Proofpoint-GUID: ifgdta_BEcCvy7OqvmbhAiL1UCtFGklx X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.1.7,FMLib:17.12.80.40 definitions=2025-06-25_06,2025-06-25_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 malwarescore=0 bulkscore=0 clxscore=1015 suspectscore=0 adultscore=0 priorityscore=1501 impostorscore=0 lowpriorityscore=0 spamscore=0 phishscore=0 mlxlogscore=999 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506250143 Content-Type: text/plain; charset="utf-8" From: Rob Clark Introduce a mechanism to count the worst case # of pages required in a VM_BIND op. Note that previously we would have had to somehow account for allocations in unmap, when splitting a block. 
This behavior was removed in commit 33729a5fc0ca ("iommu/io-pgtable-arm:
Remove split on unmap behavior")

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.h   |   1 +
 drivers/gpu/drm/msm/msm_iommu.c | 191 +++++++++++++++++++++++++++++++-
 drivers/gpu/drm/msm/msm_mmu.h   |  34 ++++++
 3 files changed, 225 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 945a235d73cf..46c7ddbc2dce 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -7,6 +7,7 @@
 #ifndef __MSM_GEM_H__
 #define __MSM_GEM_H__
 
+#include "msm_mmu.h"
 #include
 #include "drm/drm_exec.h"
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 1c068592f9e9..bfee3e0dcb23 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -6,6 +6,7 @@
 
 #include
 #include
+#include
 #include "msm_drv.h"
 #include "msm_mmu.h"
 
@@ -14,6 +15,8 @@ struct msm_iommu {
 	struct iommu_domain *domain;
 	atomic_t pagetables;
 	struct page *prr_page;
+
+	struct kmem_cache *pt_cache;
 };
 
 #define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -27,6 +30,9 @@ struct msm_iommu_pagetable {
 	unsigned long pgsize_bitmap;	/* Bitmap of page sizes in use */
 	phys_addr_t ttbr;
 	u32 asid;
+
+	/** @root_page_table: Stores the root page table pointer. */
+	void *root_page_table;
 };
 static struct msm_iommu_pagetable *to_pagetable(struct msm_mmu *mmu)
 {
@@ -282,7 +288,145 @@ msm_iommu_pagetable_walk(struct msm_mmu *mmu, unsigned long iova, uint64_t ptes[
 	return 0;
 }
 
+static void
+msm_iommu_pagetable_prealloc_count(struct msm_mmu *mmu, struct msm_mmu_prealloc *p,
+				   uint64_t iova, size_t len)
+{
+	u64 pt_count;
+
+	/*
+	 * L1, L2 and L3 page tables.
+	 *
+	 * We could optimize L3 allocation by iterating over the sgt and merging
+	 * 2M contiguous blocks, but it's simpler to over-provision and return
+	 * the pages if they're not used.
+	 *
+	 * The first level descriptor (v8 / v7-lpae page table format) encodes
+	 * 30 bits of address. The second level encodes 29. For the 3rd it is
+	 * 39.
+	 *
+	 * https://developer.arm.com/documentation/ddi0406/c/System-Level-Architecture/Virtual-Memory-System-Architecture--VMSA-/Long-descriptor-translation-table-format/Long-descriptor-translation-table-format-descriptors?lang=en#BEIHEFFB
+	 */
+	pt_count = ((ALIGN(iova + len, 1ull << 39) - ALIGN_DOWN(iova, 1ull << 39)) >> 39) +
+		   ((ALIGN(iova + len, 1ull << 30) - ALIGN_DOWN(iova, 1ull << 30)) >> 30) +
+		   ((ALIGN(iova + len, 1ull << 21) - ALIGN_DOWN(iova, 1ull << 21)) >> 21);
+
+	p->count += pt_count;
+}
+
+static struct kmem_cache *
+get_pt_cache(struct msm_mmu *mmu)
+{
+	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+	return to_msm_iommu(pagetable->parent)->pt_cache;
+}
+
+static int
+msm_iommu_pagetable_prealloc_allocate(struct msm_mmu *mmu, struct msm_mmu_prealloc *p)
+{
+	struct kmem_cache *pt_cache = get_pt_cache(mmu);
+	int ret;
+
+	p->pages = kvmalloc_array(p->count, sizeof(p->pages), GFP_KERNEL);
+	if (!p->pages)
+		return -ENOMEM;
+
+	ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, p->count, p->pages);
+	if (ret != p->count) {
+		p->count = ret;
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_prealloc *p)
+{
+	struct kmem_cache *pt_cache = get_pt_cache(mmu);
+	uint32_t remaining_pt_count = p->count - p->ptr;
+
+	kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
+	kvfree(p->pages);
+}
+
+/**
+ * alloc_pt() - Custom page table allocator
+ * @cookie: Cookie passed at page table allocation time.
+ * @size: Size of the page table. This size should be fixed,
+ * and determined at creation time based on the granule size.
+ * @gfp: GFP flags.
+ *
+ * We want a custom allocator so we can use a cache for page table
+ * allocations and amortize the cost of the over-reservation that's
+ * done to allow asynchronous VM operations.
+ *
+ * Return: non-NULL on success, NULL if the allocation failed for any
+ * reason.
+ */
+static void *
+msm_iommu_pagetable_alloc_pt(void *cookie, size_t size, gfp_t gfp)
+{
+	struct msm_iommu_pagetable *pagetable = cookie;
+	struct msm_mmu_prealloc *p = pagetable->base.prealloc;
+	void *page;
+
+	/* Allocation of the root page table happening during init. */
+	if (unlikely(!pagetable->root_page_table)) {
+		struct page *p;
+
+		p = alloc_pages_node(dev_to_node(pagetable->iommu_dev),
+				     gfp | __GFP_ZERO, get_order(size));
+		page = p ? page_address(p) : NULL;
+		pagetable->root_page_table = page;
+		return page;
+	}
+
+	if (WARN_ON(!p) || WARN_ON(p->ptr >= p->count))
+		return NULL;
+
+	page = p->pages[p->ptr++];
+	memset(page, 0, size);
+
+	/*
+	 * Page table entries don't use virtual addresses, which trips out
+	 * kmemleak. kmemleak_alloc_phys() might work, but physical addresses
+	 * are mixed with other fields, and I fear kmemleak won't detect that
+	 * either.
+	 *
+	 * Let's just ignore memory passed to the page-table driver for now.
+	 */
+	kmemleak_ignore(page);
+
+	return page;
+}
+
+/**
+ * free_pt() - Custom page table free function
+ * @cookie: Cookie passed at page table allocation time.
+ * @data: Page table to free.
+ * @size: Size of the page table. This size should be fixed,
+ * and determined at creation time based on the granule size.
+ */
+static void
+msm_iommu_pagetable_free_pt(void *cookie, void *data, size_t size)
+{
+	struct msm_iommu_pagetable *pagetable = cookie;
+
+	if (unlikely(pagetable->root_page_table == data)) {
+		free_pages((unsigned long)data, get_order(size));
+		pagetable->root_page_table = NULL;
+		return;
+	}
+
+	kmem_cache_free(get_pt_cache(&pagetable->base), data);
+}
+
 static const struct msm_mmu_funcs pagetable_funcs = {
+	.prealloc_count = msm_iommu_pagetable_prealloc_count,
+	.prealloc_allocate = msm_iommu_pagetable_prealloc_allocate,
+	.prealloc_cleanup = msm_iommu_pagetable_prealloc_cleanup,
 	.map = msm_iommu_pagetable_map,
 	.unmap = msm_iommu_pagetable_unmap,
 	.destroy = msm_iommu_pagetable_destroy,
@@ -333,6 +477,17 @@ static const struct iommu_flush_ops tlb_ops = {
 static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev,
 		unsigned long iova, int flags, void *arg);
 
+static size_t get_tblsz(const struct io_pgtable_cfg *cfg)
+{
+	int pg_shift, bits_per_level;
+
+	pg_shift = __ffs(cfg->pgsize_bitmap);
+	/* arm_lpae_iopte is u64: */
+	bits_per_level = pg_shift - ilog2(sizeof(u64));
+
+	return sizeof(u64) << bits_per_level;
+}
+
 struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed)
 {
 	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev);
@@ -369,8 +524,34 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m
 
 	if (!kernel_managed) {
 		ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN;
+
+		/*
+		 * With userspace managed VM (aka VM_BIND), we need to pre-
+		 * allocate pages ahead of time for map/unmap operations,
+		 * handing them to io-pgtable via custom alloc/free ops as
+		 * needed:
+		 */
+		ttbr0_cfg.alloc = msm_iommu_pagetable_alloc_pt;
+		ttbr0_cfg.free = msm_iommu_pagetable_free_pt;
+
+		/*
+		 * Restrict to single page granules. Otherwise we may run
+		 * into a situation where userspace wants to unmap/remap
+		 * only a part of a larger block mapping, which is not
+		 * possible without unmapping the entire block. Which in
+		 * turn could cause faults if the GPU is accessing other
+		 * parts of the block mapping.
+		 *
+		 * Note that prior to commit 33729a5fc0ca ("iommu/io-pgtable-arm:
+		 * Remove split on unmap behavior") this was handled in
+		 * io-pgtable-arm. But this apparently does not work
+		 * correctly on SMMUv3.
+		 */
+		WARN_ON(!(ttbr0_cfg.pgsize_bitmap & PAGE_SIZE));
+		ttbr0_cfg.pgsize_bitmap = PAGE_SIZE;
 	}
 
+	pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
 	pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
 		&ttbr0_cfg, pagetable);
 
@@ -414,7 +595,6 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m
 	/* Needed later for TLB flush */
 	pagetable->parent = parent;
 	pagetable->tlb = ttbr1_cfg->tlb;
-	pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
 	pagetable->pgsize_bitmap = ttbr0_cfg.pgsize_bitmap;
 	pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr;
 
@@ -522,6 +702,7 @@ static void msm_iommu_destroy(struct msm_mmu *mmu)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	iommu_domain_free(iommu->domain);
+	kmem_cache_destroy(iommu->pt_cache);
 	kfree(iommu);
 }
 
@@ -596,6 +777,14 @@ struct msm_mmu *msm_iommu_gpu_new(struct device *dev, struct msm_gpu *gpu, unsig
 		return mmu;
 
 	iommu = to_msm_iommu(mmu);
+	if (adreno_smmu && adreno_smmu->cookie) {
+		const struct io_pgtable_cfg *cfg =
+			adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie);
+		size_t tblsz = get_tblsz(cfg);
+
+		iommu->pt_cache =
+			kmem_cache_create("msm-mmu-pt", tblsz, tblsz, 0, NULL);
+	}
 	iommu_set_fault_handler(iommu->domain, msm_gpu_fault_handler, iommu);
 
 	/* Enable stall on iommu fault: */
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c70c71fb1a4a..76d7dcc1c977 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -9,8 +9,16 @@
 
 #include
 
+struct msm_mmu_prealloc;
+struct msm_mmu;
+struct msm_gpu;
+
 struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
+	void (*prealloc_count)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p,
+			       uint64_t iova, size_t len);
+	int (*prealloc_allocate)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p);
+	void (*prealloc_cleanup)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
 		   size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
@@ -25,12 +33,38 @@ enum msm_mmu_type {
 	MSM_MMU_IOMMU_PAGETABLE,
 };
 
+/**
+ * struct msm_mmu_prealloc - Tracking for pre-allocated pages for MMU updates.
+ */
+struct msm_mmu_prealloc {
+	/** @count: Number of pages reserved. */
+	uint32_t count;
+	/** @ptr: Index of first unused page in @pages */
+	uint32_t ptr;
+	/**
+	 * @pages: Array of pages preallocated for MMU table updates.
+	 *
+	 * After a VM operation, there might be free pages remaining in this
+	 * array (since the amount allocated is a worst-case). These are
+	 * returned to the pt_cache at mmu->prealloc_cleanup().
+	 */
+	void **pages;
+};
+
 struct msm_mmu {
 	const struct msm_mmu_funcs *funcs;
 	struct device *dev;
 	int (*handler)(void *arg, unsigned long iova, int flags, void *data);
 	void *arg;
 	enum msm_mmu_type type;
+
+	/**
+	 * @prealloc: pre-allocated pages for pgtable
+	 *
+	 * Set while a VM_BIND job is running, serialized under
+	 * msm_gem_vm::mmu_lock.
+	 */
+	struct msm_mmu_prealloc *prealloc;
 };
 
 static inline void msm_mmu_init(struct msm_mmu *mmu, struct device *dev,
-- 
2.49.0
55PCdV7M019166 for ; Wed, 25 Jun 2025 18:59:15 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qualcomm.com; h= cc:content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=qcppdkim1; bh=49iUFarYqun NoIxPR8E20lSmIXKZAbaZ3WmH/nqkmuc=; b=WtZf3CkKNuQU97IvGeNFmcWXCY6 y2l4y8KMgmNeiWDi0g8rGOrSkCzKn2ph/JxmTm7tL0MJ/qr4a+CkeUaZ4x+thjl3 DLLAuPmFc9T+hc0c2jEUVcJeX3Nbd5qGNf/ioRPy0deT+qW2mOwvaNneFIRsvZqv V/hQv31Z0d/sAXbcJ+ZeEOfUQOBb/rylxj3dJY8xK14i/xGjgM23/tqYgWDNaJtl /PQR3mmLuMGbURcDyQzCDG0WKTXhKVoLurdai2h/Ip6rHKRHTC1b8ptbkpsh6IED Qmm6juVfCxYYyRiJPiYxmbFyPgrXA3THe7LRso6NbvuOi7q1mSCm+H9ikQg== Received: from mail-pl1-f197.google.com (mail-pl1-f197.google.com [209.85.214.197]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 47fdfwy7uf-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 25 Jun 2025 18:59:14 +0000 (GMT) Received: by mail-pl1-f197.google.com with SMTP id d9443c01a7336-23692793178so1566105ad.0 for ; Wed, 25 Jun 2025 11:59:14 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1750877954; x=1751482754; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=49iUFarYqunNoIxPR8E20lSmIXKZAbaZ3WmH/nqkmuc=; b=qFW4755fhflTS39ozd9/4GHZMxw15LCjpEgWuIB8wuultQ5L825owaaePMx+dli35a JO0xuUg8IpRUm8+jLKR3WrDRYapcEGFl9gj2KoT8viqPkTdHpx0I6bDx8wQMcoFRA/Lr YoYTCBvw+SqdjXWQG7X93lbx9W2sejvoy47f69nk+X0udsWcuzsBreWncX7Gu4YKp0Az XFyJEqa6yXw91kYscZonfT0stRE1FQZaFggko+BsZUWrnnIP+jKvHH5ffF7vKBecH4+x rkRXTu0wwJL/KLdKK93jZcv3A4aQExl1sr1fQEHzgWRROLbUO2/gDu/AANwdn6tTw+5h JD3g== X-Forwarded-Encrypted: i=1; AJvYcCXulog4GxEMKNYefkcVOel9H0vI/EeFmXVFujYRXmdlrJD78FHAGY22ZBfJAxq6+L58R6ZjZSPyqJ+1Kdk=@vger.kernel.org X-Gm-Message-State: AOJu0YzqAk9sSw3bE456z3ij5tMKWbg509eLQ+MyhbH6zWYGCXflvms3 
FKs06JRZsSTMRfcsoAjI7shVsC+6WQoqVCZPKLzqM9wgVCGH4ZgGm9ClxPPm8D75kN3CBqKTQ2u nqfr8F738gtP705vQzbBWErsLxhzLVtaWmAga3z13OLeL6cqLGc/hoUyLNIfSYXw9kOs= X-Gm-Gg: ASbGncsfqqCBB61l3Hvccx+SZK2FT0qAkdxc5hy/yIp7S6PAuUdQ9bELaO727hLnwrV pc4YVnkj0ymtVK3LtfTQQvbBkCUiDlv/16K0GoVp7z/HntuyGxxRMjd2tSaZK4i4d3Q4xAPiH1Y fJmWu3IbtAgipZz8KcCTVqhLWyNPibVwNl2I/7jVNEZArAtaX5YMWYu1sqQMDBGyqgMFtloeGkp tjv+0YNFwvdcp0M42z8pOXShPWC58v5cyvL+dbco2ej9HYLsXpBV2VgxKIIV6M6d/k3Si6NrObr ziLHHSuFvgCui7wm8TWBS2+7RFSRSr+T X-Received: by 2002:a17:903:1cb:b0:235:91a:4d with SMTP id d9443c01a7336-23824108cbdmr77910095ad.23.1750877953567; Wed, 25 Jun 2025 11:59:13 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFf3AcJpzfzjZQsgRmjwLAbbXRsrGhtl7AY0PDU2VAAi3sP12gUBwP1a3MTqSTUA/gKCDHVqw== X-Received: by 2002:a17:903:1cb:b0:235:91a:4d with SMTP id d9443c01a7336-23824108cbdmr77909665ad.23.1750877953165; Wed, 25 Jun 2025 11:59:13 -0700 (PDT) Received: from localhost ([2601:1c0:5000:d5c:5b3e:de60:4fda:e7b1]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-237d83989besm144278745ad.38.2025.06.25.11.59.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Jun 2025 11:59:12 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott , Antonino Maniscalco , Rob Clark , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v7 34/42] drm/msm: Split out map/unmap ops Date: Wed, 25 Jun 2025 11:47:27 -0700 Message-ID: <20250625184918.124608-35-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com> References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable 
From: Rob Clark

With async VM_BIND, the actual pgtable updates are deferred: a list of
map/unmap ops is generated synchronously, but the pgtable changes
themselves are applied later. To support that, split out op handlers
and change the existing non-VM_BIND paths to use them.

Note in particular that the vma itself may already be destroyed/freed
by the time an UNMAP op runs (or even a MAP op, if there is a later
queued UNMAP). For this reason, the op handlers cannot reference the
vma pointer.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem_vma.c | 63 +++++++++++++++++++++++++++----
 1 file changed, 56 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index cf37abb98235..76b79c122182 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -8,6 +8,34 @@
 #include "msm_gem.h"
 #include "msm_mmu.h"
 
+#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
+
+/**
+ * struct msm_vm_map_op - create new pgtable mapping
+ */
+struct msm_vm_map_op {
+	/** @iova: start address for mapping */
+	uint64_t iova;
+	/** @range: size of the region to map */
+	uint64_t range;
+	/** @offset: offset into @sgt to map */
+	uint64_t offset;
+	/** @sgt: pages to map, or NULL for a PRR mapping */
+	struct sg_table *sgt;
+	/** @prot: the mapping protection flags */
+	int prot;
+};
+
+/**
+ * struct msm_vm_unmap_op - unmap a range of pages from pgtable
+ */
+struct msm_vm_unmap_op {
+	/** @iova: start address for unmap */
+	uint64_t iova;
+	/** @range: size of region to unmap */
+	uint64_t range;
+};
+
 static void
 msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 {
@@ -21,18 +49,36 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 	kfree(vm);
 }
 
+static void
+vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
+{
+	vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
+
+	vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
+}
+
+static int
+vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
+{
+	vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
+
+	return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset,
+				   op->range, op->prot);
+}
+
 /* Actually unmap memory for the vma */
 void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
-	unsigned size = vma->va.range;
 
 	/* Don't do anything if the memory isn't mapped */
 	if (!msm_vma->mapped)
 		return;
 
-	vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size);
+	vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){
+		.iova = vma->va.addr,
+		.range = vma->va.range,
+	});
 
 	msm_vma->mapped = false;
 }
@@ -42,7 +88,6 @@ int
 msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 	int ret;
 
 	if (GEM_WARN_ON(!vma->va.addr))
@@ -62,9 +107,13 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
-				  vma->gem.offset, vma->va.range,
-				  prot);
+	ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){
+		.iova = vma->va.addr,
+		.range = vma->va.range,
+		.offset = vma->gem.offset,
+		.sgt = sgt,
+		.prot = prot,
+	});
 	if (ret) {
 		msm_vma->mapped = false;
 	}
-- 
2.49.0
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar,
	Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
	Simona Vetter, Konrad Dybcio, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Sumit Semwal, Christian König,
	linux-kernel@vger.kernel.org (open list),
	linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK),
	linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v7 35/42] drm/msm: Add VM_BIND ioctl
Date: Wed, 25 Jun 2025 11:47:28 -0700
Message-ID: <20250625184918.124608-36-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
0X9vZ00LjeY0YkWpjHQgd9OEkgy9U6Xq2mPWEwPL8tBmaItVrn+mhLjuxZOQh+uihuc2a7KYMMW cL9Jib/8dKreXxX3qfehe0ozqWjIMX74wq6TAMVLIyiPWbqp7jAkN7BbMC4cICvPNkOJ35laPQn 3btxXMjnR8DODqqZecg2aSma0mfTIm0FdMrC4Z+U59CsB+8BtGFuNncALM6x1BxQnO7vN4gq X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.1.7,FMLib:17.12.80.40 definitions=2025-06-25_06,2025-06-25_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 priorityscore=1501 mlxlogscore=999 phishscore=0 bulkscore=0 clxscore=1015 impostorscore=0 mlxscore=0 lowpriorityscore=0 malwarescore=0 suspectscore=0 adultscore=0 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506250143 Content-Type: text/plain; charset="utf-8" From: Rob Clark Add a VM_BIND ioctl for binding/unbinding buffers into a VM. This is only supported if userspace has opted in to MSM_PARAM_EN_VM_BIND. Signed-off-by: Rob Clark Signed-off-by: Rob Clark Reviewed-by: Antonino Maniscalco Tested-by: Antonino Maniscalco --- drivers/gpu/drm/msm/msm_drv.c | 1 + drivers/gpu/drm/msm/msm_drv.h | 4 +- drivers/gpu/drm/msm/msm_gem.c | 40 +- drivers/gpu/drm/msm/msm_gem.h | 4 + drivers/gpu/drm/msm/msm_gem_submit.c | 22 +- drivers/gpu/drm/msm/msm_gem_vma.c | 1092 +++++++++++++++++++++++++- include/uapi/drm/msm_drm.h | 74 +- 7 files changed, 1204 insertions(+), 33 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 89cb7820064f..bdf775897de8 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -791,6 +791,7 @@ static const struct drm_ioctl_desc msm_ioctls[] =3D { DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_NEW, msm_ioctl_submitqueue_new, DRM= _RENDER_ALLOW), DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_CLOSE, msm_ioctl_submitqueue_close, DRM= _RENDER_ALLOW), DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM= _RENDER_ALLOW), + 
DRM_IOCTL_DEF_DRV(MSM_VM_BIND, msm_ioctl_vm_bind, DRM_RENDER_AL= LOW), }; =20 static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 2f42d075f13a..9a4d2b6d459d 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -232,7 +232,9 @@ struct drm_gpuvm *msm_kms_init_vm(struct drm_device *de= v); bool msm_use_mmu(struct drm_device *dev); =20 int msm_ioctl_gem_submit(struct drm_device *dev, void *data, - struct drm_file *file); + struct drm_file *file); +int msm_ioctl_vm_bind(struct drm_device *dev, void *data, + struct drm_file *file); =20 #ifdef CONFIG_DEBUG_FS unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned lon= g nr_to_scan); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 106fec06c18d..fea13a993629 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -255,8 +255,7 @@ static void put_pages(struct drm_gem_object *obj) } } =20 -static struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, - unsigned madv) +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigne= d madv) { struct msm_gem_object *msm_obj =3D to_msm_bo(obj); =20 @@ -1060,18 +1059,37 @@ static void msm_gem_free_object(struct drm_gem_obje= ct *obj) /* * We need to lock any VMs the object is still attached to, but not * the object itself (see explaination in msm_gem_assert_locked()), - * so just open-code this special case: + * so just open-code this special case. + * + * Note that we skip the dance if we aren't attached to any VM. This + * is load bearing. The driver needs to support two usage models: + * + * 1. Legacy kernel managed VM: Userspace expects the VMA's to be + * implicitly torn down when the object is freed, the VMA's do + * not hold a hard reference to the BO. + * + * 2. VM_BIND, userspace managed VM: The VMA holds a reference to the + * BO. 
This can be dropped when the VM is closed and it's associated + * VMAs are torn down. (See msm_gem_vm_close()). + * + * In the latter case the last reference to a BO can be dropped while + * we already have the VM locked. It would have already been removed + * from the gpuva list, but lockdep doesn't know that. Or understand + * the differences between the two usage models. */ - drm_exec_init(&exec, 0, 0); - drm_exec_until_all_locked (&exec) { - struct drm_gpuvm_bo *vm_bo; - drm_gem_for_each_gpuvm_bo (vm_bo, obj) { - drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm)); - drm_exec_retry_on_contention(&exec); + if (!list_empty(&obj->gpuva.list)) { + drm_exec_init(&exec, 0, 0); + drm_exec_until_all_locked (&exec) { + struct drm_gpuvm_bo *vm_bo; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + drm_exec_lock_obj(&exec, + drm_gpuvm_resv_obj(vm_bo->vm)); + drm_exec_retry_on_contention(&exec); + } } + put_iova_spaces(obj, NULL, true); + drm_exec_fini(&exec); /* drop locks */ } - put_iova_spaces(obj, NULL, true); - drm_exec_fini(&exec); /* drop locks */ =20 if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 46c7ddbc2dce..d062722942b5 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -73,6 +73,9 @@ struct msm_gem_vm { /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; =20 + /** @mmu_lock: Protects access to the mmu */ + struct mutex mmu_lock; + /** * @pid: For address spaces associated with a specific process, this * will be non-NULL: @@ -205,6 +208,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj= , struct drm_gpuvm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigne= d madv); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void 
msm_gem_unpin_pages_locked(struct drm_gem_object *obj); int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm= _gem_submit.c index e2174b7d0e40..283e807c7874 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -193,6 +193,7 @@ static int submit_lookup_objects(struct msm_gem_submit = *submit, static int submit_lookup_cmds(struct msm_gem_submit *submit, struct drm_msm_gem_submit *args, struct drm_file *file) { + struct msm_context *ctx =3D file->driver_priv; unsigned i; size_t sz; int ret =3D 0; @@ -224,6 +225,20 @@ static int submit_lookup_cmds(struct msm_gem_submit *s= ubmit, goto out; } =20 + if (msm_context_is_vmbind(ctx)) { + if (submit_cmd.nr_relocs) { + ret =3D SUBMIT_ERROR(EINVAL, submit, "nr_relocs must be zero"); + goto out; + } + + if (submit_cmd.submit_idx || submit_cmd.submit_offset) { + ret =3D SUBMIT_ERROR(EINVAL, submit, "submit_idx/offset must be zero"); + goto out; + } + + submit->cmd[i].iova =3D submit_cmd.iova; + } + submit->cmd[i].type =3D submit_cmd.type; submit->cmd[i].size =3D submit_cmd.size / 4; submit->cmd[i].offset =3D submit_cmd.submit_offset / 4; @@ -532,6 +547,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *= data, struct msm_syncobj_post_dep *post_deps =3D NULL; struct drm_syncobj **syncobjs_to_reset =3D NULL; struct sync_file *sync_file =3D NULL; + unsigned cmds_to_parse; int out_fence_fd =3D -1; unsigned i; int ret; @@ -655,7 +671,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *= data, if (ret) goto out; =20 - for (i =3D 0; i < args->nr_cmds; i++) { + cmds_to_parse =3D msm_context_is_vmbind(ctx) ? 
0 : args->nr_cmds; + + for (i =3D 0; i < cmds_to_parse; i++) { struct drm_gem_object *obj; uint64_t iova; =20 @@ -686,7 +704,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *= data, goto out; } =20 - submit->nr_cmds =3D i; + submit->nr_cmds =3D args->nr_cmds; =20 idr_preload(GFP_KERNEL); =20 diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_ge= m_vma.c index 76b79c122182..5d4b7e3e9d2c 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -4,9 +4,16 @@ * Author: Rob Clark */ =20 +#include "drm/drm_file.h" +#include "drm/msm_drm.h" +#include "linux/file.h" +#include "linux/sync_file.h" + #include "msm_drv.h" #include "msm_gem.h" +#include "msm_gpu.h" #include "msm_mmu.h" +#include "msm_syncobj.h" =20 #define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##= __VA_ARGS__) =20 @@ -36,6 +43,97 @@ struct msm_vm_unmap_op { uint64_t range; }; =20 +/** + * struct msm_vma_op - A MAP or UNMAP operation + */ +struct msm_vm_op { + /** @op: The operation type */ + enum { + MSM_VM_OP_MAP =3D 1, + MSM_VM_OP_UNMAP, + } op; + union { + /** @map: Parameters used if op =3D=3D MSM_VMA_OP_MAP */ + struct msm_vm_map_op map; + /** @unmap: Parameters used if op =3D=3D MSM_VMA_OP_UNMAP */ + struct msm_vm_unmap_op unmap; + }; + /** @node: list head in msm_vm_bind_job::vm_ops */ + struct list_head node; + + /** + * @obj: backing object for pages to be mapped/unmapped + * + * Async unmap ops, in particular, must hold a reference to the + * original GEM object backing the mapping that will be unmapped. + * But the same can be required in the map path, for example if + * there is not a corresponding unmap op, such as process exit. + * + * This ensures that the pages backing the mapping are not freed + * before the mapping is torn down. 
+ */ + struct drm_gem_object *obj; +}; + +/** + * struct msm_vm_bind_job - Tracking for a VM_BIND ioctl + * + * A table of userspace requested VM updates (MSM_VM_BIND_OP_UNMAP/MAP/MAP= _NULL) + * gets applied to the vm, generating a list of VM ops (MSM_VM_OP_MAP/UNMA= P) + * which are applied to the pgtables asynchronously. For example a usersp= ace + * requested MSM_VM_BIND_OP_MAP could end up generating both an MSM_VM_OP_= UNMAP + * to unmap an existing mapping, and a MSM_VM_OP_MAP to apply the new mapp= ing. + */ +struct msm_vm_bind_job { + /** @base: base class for drm_sched jobs */ + struct drm_sched_job base; + /** @vm: The VM being operated on */ + struct drm_gpuvm *vm; + /** @fence: The fence that is signaled when job completes */ + struct dma_fence *fence; + /** @queue: The queue that the job runs on */ + struct msm_gpu_submitqueue *queue; + /** @prealloc: Tracking for pre-allocated MMU pgtable pages */ + struct msm_mmu_prealloc prealloc; + /** @vm_ops: a list of struct msm_vm_op */ + struct list_head vm_ops; + /** @bos_pinned: are the GEM objects being bound pinned? */ + bool bos_pinned; + /** @nr_ops: the number of userspace requested ops */ + unsigned int nr_ops; + /** + * @ops: the userspace requested ops + * + * The userspace requested ops are copied/parsed and validated + * before we start applying the updates to try to do as much up- + * front error checking as possible, to avoid the VM being in an + * undefined state due to partially executed VM_BIND. + * + * This table also serves to hold a reference to the backing GEM + * objects. 
+ */ + struct msm_vm_bind_op { + uint32_t op; + uint32_t flags; + union { + struct drm_gem_object *obj; + uint32_t handle; + }; + uint64_t obj_offset; + uint64_t iova; + uint64_t range; + } ops[]; +}; + +#define job_foreach_bo(obj, _job) \ + for (unsigned i =3D 0; i < (_job)->nr_ops; i++) \ + if ((obj =3D (_job)->ops[i].obj)) + +static inline struct msm_vm_bind_job *to_msm_vm_bind_job(struct drm_sched_= job *job) +{ + return container_of(job, struct msm_vm_bind_job, base); +} + static void msm_gem_vm_free(struct drm_gpuvm *gpuvm) { @@ -52,6 +150,9 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) static void vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) { + if (!vm->managed) + lockdep_assert_held(&vm->mmu_lock); + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); =20 vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); @@ -60,6 +161,9 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_u= nmap_op *op) static int vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) { + if (!vm->managed) + lockdep_assert_held(&vm->mmu_lock); + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); =20 return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, @@ -69,17 +173,29 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_m= ap_op *op) /* Actually unmap memory for the vma */ void msm_gem_vma_unmap(struct drm_gpuva *vma) { + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); =20 /* Don't do anything if the memory isn't mapped */ if (!msm_vma->mapped) return; =20 - vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){ + /* + * The mmu_lock is only needed when preallocation is used. 
But + * in that case we don't need to worry about recursion into + * shrinker + */ + if (!vm->managed) + mutex_lock(&vm->mmu_lock); + + vm_unmap_op(vm, &(struct msm_vm_unmap_op){ .iova =3D vma->va.addr, .range =3D vma->va.range, }); =20 + if (!vm->managed) + mutex_unlock(&vm->mmu_lock); + msm_vma->mapped =3D false; } =20 @@ -87,6 +203,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma) int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); int ret; =20 @@ -98,6 +215,14 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct= sg_table *sgt) =20 msm_vma->mapped =3D true; =20 + /* + * The mmu_lock is only needed when preallocation is used. But + * in that case we don't need to worry about recursion into + * shrinker + */ + if (!vm->managed) + mutex_lock(&vm->mmu_lock); + /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -107,16 +232,19 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, stru= ct sg_table *sgt) * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret =3D vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){ + ret =3D vm_map_op(vm, &(struct msm_vm_map_op){ .iova =3D vma->va.addr, .range =3D vma->va.range, .offset =3D vma->gem.offset, .sgt =3D sgt, .prot =3D prot, }); - if (ret) { + + if (!vm->managed) + mutex_unlock(&vm->mmu_lock); + + if (ret) msm_vma->mapped =3D false; - } =20 return ret; } @@ -131,6 +259,9 @@ void msm_gem_vma_close(struct drm_gpuva *vma) =20 drm_gpuvm_resv_assert_held(&vm->base); =20 + if (vma->gem.obj) + msm_gem_assert_locked(vma->gem.obj); + if (vma->va.addr && vm->managed) drm_mm_remove_node(&msm_vma->node); =20 @@ -158,6 +289,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem= _object *obj, =20 if (vm->managed) { BUG_ON(offset !=3D 0); + BUG_ON(!obj); /* NULL mappings not valid for kernel managed VM */ ret =3D drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size, PAGE_SIZE, 0, range_start, range_end, 0); @@ -169,7 +301,8 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem= _object *obj, range_end =3D range_start + obj->size; } =20 - GEM_WARN_ON((range_end - range_start) > obj->size); + if (obj) + GEM_WARN_ON((range_end - range_start) > obj->size); =20 drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, off= set); vma->mapped =3D false; @@ -178,6 +311,9 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem= _object *obj, if (ret) goto err_free_range; =20 + if (!obj) + return &vma->base; + vm_bo =3D drm_gpuvm_bo_obtain(&vm->base, obj); if (IS_ERR(vm_bo)) { ret =3D PTR_ERR(vm_bo); @@ -200,11 +336,297 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_= gem_object *obj, return ERR_PTR(ret); } =20 +static int +msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec) +{ + struct drm_gem_object *obj =3D vm_bo->obj; + struct drm_gpuva *vma; + int ret; + + vm_dbg("validate: %p", obj); + + msm_gem_assert_locked(obj); + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + ret =3D msm_gem_pin_vma_locked(obj, vma); + if (ret) + return 
ret; + } + + return 0; +} + +struct op_arg { + unsigned flags; + struct msm_vm_bind_job *job; +}; + +static void +vm_op_enqueue(struct op_arg *arg, struct msm_vm_op _op) +{ + struct msm_vm_op *op =3D kmalloc(sizeof(*op), GFP_KERNEL); + *op =3D _op; + list_add_tail(&op->node, &arg->job->vm_ops); + + if (op->obj) + drm_gem_object_get(op->obj); +} + +static struct drm_gpuva * +vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op) +{ + return msm_gem_vma_new(arg->job->vm, op->gem.obj, op->gem.offset, + op->va.addr, op->va.addr + op->va.range); +} + +static int +msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) +{ + struct drm_gem_object *obj =3D op->map.gem.obj; + struct drm_gpuva *vma; + struct sg_table *sgt; + unsigned prot; + + vma =3D vma_from_op(arg, &op->map); + if (WARN_ON(IS_ERR(vma))) + return PTR_ERR(vma); + + vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, + vma->va.addr, vma->va.range); + + vma->flags =3D ((struct op_arg *)arg)->flags; + + if (obj) { + sgt =3D to_msm_bo(obj)->sgt; + prot =3D msm_gem_prot(obj); + } else { + sgt =3D NULL; + prot =3D IOMMU_READ | IOMMU_WRITE; + } + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op =3D MSM_VM_OP_MAP, + .map =3D { + .sgt =3D sgt, + .iova =3D vma->va.addr, + .range =3D vma->va.range, + .offset =3D vma->gem.offset, + .prot =3D prot, + }, + .obj =3D vma->gem.obj, + }); + + to_msm_vma(vma)->mapped =3D true; + + return 0; +} + +static int +msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) +{ + struct msm_vm_bind_job *job =3D ((struct op_arg *)arg)->job; + struct drm_gpuvm *vm =3D job->vm; + struct drm_gpuva *orig_vma =3D op->remap.unmap->va; + struct drm_gpuva *prev_vma =3D NULL, *next_vma =3D NULL; + struct drm_gpuvm_bo *vm_bo =3D orig_vma->vm_bo; + bool mapped =3D to_msm_vma(orig_vma)->mapped; + unsigned flags; + + vm_dbg("orig_vma: %p:%p:%p: %016llx %016llx", vm, orig_vma, + orig_vma->gem.obj, orig_vma->va.addr, orig_vma->va.range); + + if (mapped) { + uint64_t unmap_start, 
unmap_range; + + drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range= ); + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op =3D MSM_VM_OP_UNMAP, + .unmap =3D { + .iova =3D unmap_start, + .range =3D unmap_range, + }, + .obj =3D orig_vma->gem.obj, + }); + + /* + * Part of this GEM obj is still mapped, but we're going to kill the + * existing VMA and replace it with one or two new ones (ie. two if + * the unmapped range is in the middle of the existing (unmap) VMA). + * So just set the state to unmapped: + */ + to_msm_vma(orig_vma)->mapped =3D false; + } + + /* + * Hold a ref to the vm_bo between the msm_gem_vma_close() and the + * creation of the new prev/next vma's, in case the vm_bo is tracked + * in the VM's evict list: + */ + if (vm_bo) + drm_gpuvm_bo_get(vm_bo); + + /* + * The prev_vma and/or next_vma are replacing the unmapped vma, and + * therefore should preserve it's flags: + */ + flags =3D orig_vma->flags; + + msm_gem_vma_close(orig_vma); + + if (op->remap.prev) { + prev_vma =3D vma_from_op(arg, op->remap.prev); + if (WARN_ON(IS_ERR(prev_vma))) + return PTR_ERR(prev_vma); + + vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.ad= dr, prev_vma->va.range); + to_msm_vma(prev_vma)->mapped =3D mapped; + prev_vma->flags =3D flags; + } + + if (op->remap.next) { + next_vma =3D vma_from_op(arg, op->remap.next); + if (WARN_ON(IS_ERR(next_vma))) + return PTR_ERR(next_vma); + + vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.ad= dr, next_vma->va.range); + to_msm_vma(next_vma)->mapped =3D mapped; + next_vma->flags =3D flags; + } + + if (!mapped) + drm_gpuvm_bo_evict(vm_bo, true); + + /* Drop the previous ref: */ + drm_gpuvm_bo_put(vm_bo); + + return 0; +} + +static int +msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) +{ + struct drm_gpuva *vma =3D op->unmap.va; + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); + + vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, + vma->va.addr, 
vma->va.range); + + if (!msm_vma->mapped) + goto out_close; + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op =3D MSM_VM_OP_UNMAP, + .unmap =3D { + .iova =3D vma->va.addr, + .range =3D vma->va.range, + }, + .obj =3D vma->gem.obj, + }); + + msm_vma->mapped =3D false; + +out_close: + msm_gem_vma_close(vma); + + return 0; +} + static const struct drm_gpuvm_ops msm_gpuvm_ops =3D { .vm_free =3D msm_gem_vm_free, + .vm_bo_validate =3D msm_gem_vm_bo_validate, + .sm_step_map =3D msm_gem_vm_sm_step_map, + .sm_step_remap =3D msm_gem_vm_sm_step_remap, + .sm_step_unmap =3D msm_gem_vm_sm_step_unmap, }; =20 +static struct dma_fence * +msm_vma_job_run(struct drm_sched_job *_job) +{ + struct msm_vm_bind_job *job =3D to_msm_vm_bind_job(_job); + struct msm_gem_vm *vm =3D to_msm_vm(job->vm); + struct drm_gem_object *obj; + int ret =3D vm->unusable ? -EINVAL : 0; + + vm_dbg(""); + + mutex_lock(&vm->mmu_lock); + vm->mmu->prealloc =3D &job->prealloc; + + while (!list_empty(&job->vm_ops)) { + struct msm_vm_op *op =3D + list_first_entry(&job->vm_ops, struct msm_vm_op, node); + + switch (op->op) { + case MSM_VM_OP_MAP: + /* + * On error, stop trying to map new things.. but we + * still want to process the unmaps (or in particular, + * the drm_gem_object_put()s) + */ + if (!ret) + ret =3D vm_map_op(vm, &op->map); + break; + case MSM_VM_OP_UNMAP: + vm_unmap_op(vm, &op->unmap); + break; + } + drm_gem_object_put(op->obj); + list_del(&op->node); + kfree(op); + } + + vm->mmu->prealloc =3D NULL; + mutex_unlock(&vm->mmu_lock); + + /* + * We failed to perform at least _some_ of the pgtable updates, so + * now the VM is in an undefined state. Game over! 
+ */ + if (ret) + vm->unusable =3D true; + + job_foreach_bo (obj, job) { + msm_gem_lock(obj); + msm_gem_unpin_locked(obj); + msm_gem_unlock(obj); + } + + /* VM_BIND ops are synchronous, so no fence to wait on: */ + return NULL; +} + +static void +msm_vma_job_free(struct drm_sched_job *_job) +{ + struct msm_vm_bind_job *job =3D to_msm_vm_bind_job(_job); + struct msm_mmu *mmu =3D to_msm_vm(job->vm)->mmu; + struct drm_gem_object *obj; + + mmu->funcs->prealloc_cleanup(mmu, &job->prealloc); + + drm_sched_job_cleanup(_job); + + job_foreach_bo (obj, job) + drm_gem_object_put(obj); + + msm_submitqueue_put(job->queue); + dma_fence_put(job->fence); + + /* In error paths, we could have unexecuted ops: */ + while (!list_empty(&job->vm_ops)) { + struct msm_vm_op *op =3D + list_first_entry(&job->vm_ops, struct msm_vm_op, node); + list_del(&op->node); + kfree(op); + } + + kfree(job); +} + static const struct drm_sched_backend_ops msm_vm_bind_ops =3D { + .run_job =3D msm_vma_job_run, + .free_job =3D msm_vma_job_free }; =20 /** @@ -268,6 +690,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mm= u *mmu, const char *name, drm_gem_object_put(dummy_gem); =20 vm->mmu =3D mmu; + mutex_init(&vm->mmu_lock); vm->managed =3D managed; =20 drm_mm_init(&vm->mm, va_start, va_size); @@ -280,7 +703,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mm= u *mmu, const char *name, err_free_vm: kfree(vm); return ERR_PTR(ret); - } =20 /** @@ -296,6 +718,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) { struct msm_gem_vm *vm =3D to_msm_vm(gpuvm); struct drm_gpuva *vma, *tmp; + struct drm_exec exec; =20 /* * For kernel managed VMs, the VMAs are torn down when the handle is @@ -312,22 +735,655 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) drm_sched_fini(&vm->sched); =20 /* Tear down any remaining mappings: */ - dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL); - drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { - struct drm_gem_object *obj =3D vma->gem.obj; + drm_exec_init(&exec, 0, 2); + 
drm_exec_until_all_locked (&exec) { + drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(gpuvm)); + drm_exec_retry_on_contention(&exec); =20 - if (obj && obj->resv !=3D drm_gpuvm_resv(gpuvm)) { - drm_gem_object_get(obj); - msm_gem_lock(obj); + drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { + struct drm_gem_object *obj =3D vma->gem.obj; + + /* + * MSM_BO_NO_SHARE objects share the same resv as the + * VM, in which case the obj is already locked: + */ + if (obj && (obj->resv =3D=3D drm_gpuvm_resv(gpuvm))) + obj =3D NULL; + + if (obj) { + drm_exec_lock_obj(&exec, obj); + drm_exec_retry_on_contention(&exec); + } + + msm_gem_vma_unmap(vma); + msm_gem_vma_close(vma); + + if (obj) { + drm_exec_unlock_obj(&exec, obj); + } } + } + drm_exec_fini(&exec); +} + + +static struct msm_vm_bind_job * +vm_bind_job_create(struct drm_device *dev, struct msm_gpu *gpu, + struct msm_gpu_submitqueue *queue, uint32_t nr_ops) +{ + struct msm_vm_bind_job *job; + uint64_t sz; + int ret; + + sz =3D struct_size(job, ops, nr_ops); + + if (sz > SIZE_MAX) + return ERR_PTR(-ENOMEM); + + job =3D kzalloc(sz, GFP_KERNEL | __GFP_NOWARN); + if (!job) + return ERR_PTR(-ENOMEM); + + ret =3D drm_sched_job_init(&job->base, queue->entity, 1, queue); + if (ret) { + kfree(job); + return ERR_PTR(ret); + } =20 - msm_gem_vma_unmap(vma); - msm_gem_vma_close(vma); + job->vm =3D msm_context_vm(dev, queue->ctx); + job->queue =3D queue; + INIT_LIST_HEAD(&job->vm_ops); =20 - if (obj && obj->resv !=3D drm_gpuvm_resv(gpuvm)) { - msm_gem_unlock(obj); - drm_gem_object_put(obj); + return job; +} + +static bool invalid_alignment(uint64_t addr) +{ + /* + * Technically this is about GPU alignment, not CPU alignment. But + * I've not seen any qcom SoC where the SMMU does not support the + * CPU's smallest page size. 
+ */ + return !PAGE_ALIGNED(addr); +} + +static int +lookup_op(struct msm_vm_bind_job *job, const struct drm_msm_vm_bind_op *op) +{ + struct drm_device *dev =3D job->vm->drm; + int i =3D job->nr_ops++; + int ret =3D 0; + + job->ops[i].op =3D op->op; + job->ops[i].handle =3D op->handle; + job->ops[i].obj_offset =3D op->obj_offset; + job->ops[i].iova =3D op->iova; + job->ops[i].range =3D op->range; + job->ops[i].flags =3D op->flags; + + if (op->flags & ~MSM_VM_BIND_OP_FLAGS) + ret =3D UERR(EINVAL, dev, "invalid flags: %x\n", op->flags); + + if (invalid_alignment(op->iova)) + ret =3D UERR(EINVAL, dev, "invalid address: %016llx\n", op->iova); + + if (invalid_alignment(op->obj_offset)) + ret =3D UERR(EINVAL, dev, "invalid bo_offset: %016llx\n", op->obj_offset= ); + + if (invalid_alignment(op->range)) + ret =3D UERR(EINVAL, dev, "invalid range: %016llx\n", op->range); + + if (!drm_gpuvm_range_valid(job->vm, op->iova, op->range)) + ret =3D UERR(EINVAL, dev, "invalid range: %016llx, %016llx\n", op->iova,= op->range); + + /* + * MAP must specify a valid handle. But the handle MBZ for + * UNMAP or MAP_NULL. 
+ */ + if (op->op =3D=3D MSM_VM_BIND_OP_MAP) { + if (!op->handle) + ret =3D UERR(EINVAL, dev, "invalid handle\n"); + } else if (op->handle) { + ret =3D UERR(EINVAL, dev, "handle must be zero\n"); + } + + switch (op->op) { + case MSM_VM_BIND_OP_MAP: + case MSM_VM_BIND_OP_MAP_NULL: + case MSM_VM_BIND_OP_UNMAP: + break; + default: + ret =3D UERR(EINVAL, dev, "invalid op: %u\n", op->op); + break; + } + + return ret; +} + +/* + * ioctl parsing, parameter validation, and GEM handle lookup + */ +static int +vm_bind_job_lookup_ops(struct msm_vm_bind_job *job, struct drm_msm_vm_bind= *args, + struct drm_file *file, int *nr_bos) +{ + struct drm_device *dev =3D job->vm->drm; + int ret =3D 0; + int cnt =3D 0; + + if (args->nr_ops =3D=3D 1) { + /* Single op case, the op is inlined: */ + ret =3D lookup_op(job, &args->op); + } else { + for (unsigned i =3D 0; i < args->nr_ops; i++) { + struct drm_msm_vm_bind_op op; + void __user *userptr =3D + u64_to_user_ptr(args->ops + (i * sizeof(op))); + + /* make sure we don't have garbage flags, in case we hit + * error path before flags is initialized: + */ + job->ops[i].flags =3D 0; + + if (copy_from_user(&op, userptr, sizeof(op))) { + ret =3D -EFAULT; + break; + } + + ret =3D lookup_op(job, &op); + if (ret) + break; + } + } + + if (ret) { + job->nr_ops =3D 0; + goto out; + } + + spin_lock(&file->table_lock); + + for (unsigned i =3D 0; i < args->nr_ops; i++) { + struct drm_gem_object *obj; + + if (!job->ops[i].handle) { + job->ops[i].obj =3D NULL; + continue; + } + + /* + * normally use drm_gem_object_lookup(), but for bulk lookup + * all under single table_lock just hit object_idr directly: + */ + obj =3D idr_find(&file->object_idr, job->ops[i].handle); + if (!obj) { + ret =3D UERR(EINVAL, dev, "invalid handle %u at index %u\n", job->ops[i= ].handle, i); + goto out_unlock; + } + + drm_gem_object_get(obj); + + job->ops[i].obj =3D obj; + cnt++; + } + + *nr_bos =3D cnt; + +out_unlock: + spin_unlock(&file->table_lock); + +out: + return ret; 
+} + +static void +prealloc_count(struct msm_vm_bind_job *job, + struct msm_vm_bind_op *first, + struct msm_vm_bind_op *last) +{ + struct msm_mmu *mmu =3D to_msm_vm(job->vm)->mmu; + + if (!first) + return; + + uint64_t start_iova =3D first->iova; + uint64_t end_iova =3D last->iova + last->range; + + mmu->funcs->prealloc_count(mmu, &job->prealloc, start_iova, end_iova - st= art_iova); +} + +static bool +ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next) +{ + /* + * Last level pte covers 2MB.. so we should merge two ops, from + * the PoV of figuring out how much pgtable pages to pre-allocate + * if they land in the same 2MB range: + */ + uint64_t pte_mask =3D ~(SZ_2M - 1); + return ((first->iova + first->range) & pte_mask) =3D=3D (next->iova & pte= _mask); +} + +/* + * Determine the amount of memory to prealloc for pgtables. For sparse im= ages, + * in particular, userspace plays some tricks with the order of page mappi= ngs + * to get the desired swizzle pattern, resulting in a large # of tiny MAP = ops. + * So detect when multiple MAP operations are physically contiguous, and c= ount + * them as a single mapping. Otherwise the prealloc_count() will not real= ize + * they can share pagetable pages and vastly overcount. 
+ */ +static void +vm_bind_prealloc_count(struct msm_vm_bind_job *job) +{ + struct msm_vm_bind_op *first =3D NULL, *last =3D NULL; + + for (int i =3D 0; i < job->nr_ops; i++) { + struct msm_vm_bind_op *op =3D &job->ops[i]; + + /* We only care about MAP/MAP_NULL: */ + if (op->op =3D=3D MSM_VM_BIND_OP_UNMAP) + continue; + + /* + * If op is contiguous with last in the current range, then + * it becomes the new last in the range and we continue + * looping: + */ + if (last && ops_are_same_pte(last, op)) { + last =3D op; + continue; + } + + /* + * If op is not contiguous with the current range, flush + * the current range and start anew: + */ + prealloc_count(job, first, last); + first =3D last =3D op; + } + + /* Flush the remaining range: */ + prealloc_count(job, first, last); +} + +/* + * Lock VM and GEM objects + */ +static int +vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exe= c) +{ + int ret; + + /* Lock VM and objects: */ + drm_exec_until_all_locked (exec) { + ret =3D drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(job->vm)); + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + + for (unsigned i =3D 0; i < job->nr_ops; i++) { + const struct msm_vm_bind_op *op =3D &job->ops[i]; + + switch (op->op) { + case MSM_VM_BIND_OP_UNMAP: + ret =3D drm_gpuvm_sm_unmap_exec_lock(job->vm, exec, + op->iova, + op->obj_offset); + break; + case MSM_VM_BIND_OP_MAP: + case MSM_VM_BIND_OP_MAP_NULL: + ret =3D drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, + op->iova, op->range, + op->obj, op->obj_offset); + break; + default: + /* + * lookup_op() should have already thrown an error for + * invalid ops + */ + WARN_ON("unreachable"); + } + + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + } + } + + return 0; +} + +/* + * Pin GEM objects, ensuring that we have backing pages. Pinning will move + * the object to the pinned LRU so that the shrinker knows to first consid= er + * other objects for evicting. 
+ */ +static int +vm_bind_job_pin_objects(struct msm_vm_bind_job *job) +{ + struct drm_gem_object *obj; + + /* + * First loop, before holding the LRU lock, avoids holding the + * LRU lock while calling msm_gem_pin_vma_locked() (which could + * trigger get_pages()) + */ + job_foreach_bo (obj, job) { + struct page **pages; + + pages =3D msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); + if (IS_ERR(pages)) + return PTR_ERR(pages); + } + + struct msm_drm_private *priv =3D job->vm->drm->dev_private; + + /* + * A second loop while holding the LRU lock (a) avoids acquiring/dropping + * the LRU lock for each individual bo, and (b) avoids holding the + * LRU lock while calling msm_gem_pin_vma_locked() (which could trigger + * get_pages() which could trigger reclaim.. and if we held the LRU lock + * could trigger deadlock with the shrinker). + */ + mutex_lock(&priv->lru.lock); + job_foreach_bo (obj, job) + msm_gem_pin_obj_locked(obj); + mutex_unlock(&priv->lru.lock); + + job->bos_pinned =3D true; + + return 0; +} + +/* + * Unpin GEM objects. Normally this is done after the bind job is run. + */ +static void +vm_bind_job_unpin_objects(struct msm_vm_bind_job *job) +{ + struct drm_gem_object *obj; + + if (!job->bos_pinned) + return; + + job_foreach_bo (obj, job) + msm_gem_unpin_locked(obj); + + job->bos_pinned =3D false; +} + +/* + * Pre-allocate pgtable memory, and translate the VM bind requests into a + * sequence of pgtable updates to be applied asynchronously.
+ */ +static int +vm_bind_job_prepare(struct msm_vm_bind_job *job) +{ + struct msm_gem_vm *vm =3D to_msm_vm(job->vm); + struct msm_mmu *mmu =3D vm->mmu; + int ret; + + ret =3D mmu->funcs->prealloc_allocate(mmu, &job->prealloc); + if (ret) + return ret; + + for (unsigned i =3D 0; i < job->nr_ops; i++) { + const struct msm_vm_bind_op *op =3D &job->ops[i]; + struct op_arg arg =3D { + .job =3D job, + }; + + switch (op->op) { + case MSM_VM_BIND_OP_UNMAP: + ret =3D drm_gpuvm_sm_unmap(job->vm, &arg, op->iova, + op->range); + break; + case MSM_VM_BIND_OP_MAP: + if (op->flags & MSM_VM_BIND_OP_DUMP) + arg.flags |=3D MSM_VMA_DUMP; + fallthrough; + case MSM_VM_BIND_OP_MAP_NULL: + ret =3D drm_gpuvm_sm_map(job->vm, &arg, op->iova, + op->range, op->obj, op->obj_offset); + break; + default: + /* + * lookup_op() should have already thrown an error for + * invalid ops + */ + BUG_ON("unreachable"); + } + + if (ret) { + /* + * If we've already started modifying the vm, we can't + * adequately describe to userspace the intermediate + * state the vm is in. So throw up our hands! + */ + if (i > 0) + vm->unusable =3D true; + return ret; + } + } + + return 0; +} + +/* + * Attach fences to the GEM objects being bound. This will signify to + * the shrinker that they are busy even after dropping the locks (ie.
+ * drm_exec_fini()) + */ +static void +vm_bind_job_attach_fences(struct msm_vm_bind_job *job) +{ + for (unsigned i =3D 0; i < job->nr_ops; i++) { + struct drm_gem_object *obj =3D job->ops[i].obj; + + if (!obj) + continue; + + dma_resv_add_fence(obj->resv, job->fence, + DMA_RESV_USAGE_KERNEL); + } +} + +int +msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *fil= e) +{ + struct msm_drm_private *priv =3D dev->dev_private; + struct drm_msm_vm_bind *args =3D data; + struct msm_context *ctx =3D file->driver_priv; + struct msm_vm_bind_job *job =3D NULL; + struct msm_gpu *gpu =3D priv->gpu; + struct msm_gpu_submitqueue *queue; + struct msm_syncobj_post_dep *post_deps =3D NULL; + struct drm_syncobj **syncobjs_to_reset =3D NULL; + struct sync_file *sync_file =3D NULL; + struct dma_fence *fence; + int out_fence_fd =3D -1; + int ret, nr_bos =3D 0; + unsigned i; + + if (!gpu) + return -ENXIO; + + /* + * Maybe we could allow just UNMAP ops? OTOH userspace should just + * immediately close the device file and all will be torn down. + */ + if (to_msm_vm(ctx->vm)->unusable) + return UERR(EPIPE, dev, "context is unusable"); + + /* + * Technically, you cannot create a VM_BIND submitqueue in the first + * place, if you haven't opted in to VM_BIND context. But it is + * cleaner / less confusing, to check this case directly. 
+ */ + if (!msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "context does not support vmbind"); + + if (args->flags & ~MSM_VM_BIND_FLAGS) + return UERR(EINVAL, dev, "invalid flags"); + + queue =3D msm_submitqueue_get(ctx, args->queue_id); + if (!queue) + return -ENOENT; + + if (!(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) { + ret =3D UERR(EINVAL, dev, "Invalid queue type"); + goto out_post_unlock; + } + + if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { + out_fence_fd =3D get_unused_fd_flags(O_CLOEXEC); + if (out_fence_fd < 0) { + ret =3D out_fence_fd; + goto out_post_unlock; } } - dma_resv_unlock(drm_gpuvm_resv(gpuvm)); + + job =3D vm_bind_job_create(dev, gpu, queue, args->nr_ops); + if (IS_ERR(job)) { + ret =3D PTR_ERR(job); + goto out_post_unlock; + } + + ret =3D mutex_lock_interruptible(&queue->lock); + if (ret) + goto out_post_unlock; + + if (args->flags & MSM_VM_BIND_FENCE_FD_IN) { + struct dma_fence *in_fence; + + in_fence =3D sync_file_get_fence(args->fence_fd); + + if (!in_fence) { + ret =3D UERR(EINVAL, dev, "invalid in-fence"); + goto out_unlock; + } + + ret =3D drm_sched_job_add_dependency(&job->base, in_fence); + if (ret) + goto out_unlock; + } + + if (args->in_syncobjs > 0) { + syncobjs_to_reset =3D msm_syncobj_parse_deps(dev, &job->base, + file, args->in_syncobjs, + args->nr_in_syncobjs, + args->syncobj_stride); + if (IS_ERR(syncobjs_to_reset)) { + ret =3D PTR_ERR(syncobjs_to_reset); + goto out_unlock; + } + } + + if (args->out_syncobjs > 0) { + post_deps =3D msm_syncobj_parse_post_deps(dev, file, + args->out_syncobjs, + args->nr_out_syncobjs, + args->syncobj_stride); + if (IS_ERR(post_deps)) { + ret =3D PTR_ERR(post_deps); + goto out_unlock; + } + } + + ret =3D vm_bind_job_lookup_ops(job, args, file, &nr_bos); + if (ret) + goto out_unlock; + + vm_bind_prealloc_count(job); + + struct drm_exec exec; + unsigned flags =3D DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WA= IT; + drm_exec_init(&exec, flags, nr_bos + 1); + + ret =3D 
vm_bind_job_lock_objects(job, &exec); + if (ret) + goto out; + + ret =3D vm_bind_job_pin_objects(job); + if (ret) + goto out; + + ret =3D vm_bind_job_prepare(job); + if (ret) + goto out; + + drm_sched_job_arm(&job->base); + + job->fence =3D dma_fence_get(&job->base.s_fence->finished); + + if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { + sync_file =3D sync_file_create(job->fence); + if (!sync_file) { + ret =3D -ENOMEM; + } else { + fd_install(out_fence_fd, sync_file->file); + args->fence_fd =3D out_fence_fd; + } + } + + if (ret) + goto out; + + vm_bind_job_attach_fences(job); + + /* + * The job can be free'd (and fence unref'd) at any point after + * drm_sched_entity_push_job(), so we need to hold our own ref + */ + fence =3D dma_fence_get(job->fence); + + drm_sched_entity_push_job(&job->base); + + msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs); + msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, fence); + + dma_fence_put(fence); + +out: + if (ret) + vm_bind_job_unpin_objects(job); + + drm_exec_fini(&exec); +out_unlock: + mutex_unlock(&queue->lock); +out_post_unlock: + if (ret && (out_fence_fd >=3D 0)) { + put_unused_fd(out_fence_fd); + if (sync_file) + fput(sync_file->file); + } + + if (!IS_ERR_OR_NULL(job)) { + if (ret) + msm_vma_job_free(&job->base); + } else { + /* + * If the submit hasn't yet taken ownership of the queue + * then we need to drop the reference ourself: + */ + msm_submitqueue_put(queue); + } + + if (!IS_ERR_OR_NULL(post_deps)) { + for (i =3D 0; i < args->nr_out_syncobjs; ++i) { + kfree(post_deps[i].chain); + drm_syncobj_put(post_deps[i].syncobj); + } + kfree(post_deps); + } + + if (!IS_ERR_OR_NULL(syncobjs_to_reset)) { + for (i =3D 0; i < args->nr_in_syncobjs; ++i) { + if (syncobjs_to_reset[i]) + drm_syncobj_put(syncobjs_to_reset[i]); + } + kfree(syncobjs_to_reset); + } + + return ret; } diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 6d6cd1219926..5c67294edc95 100644 --- 
a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -272,7 +272,10 @@ struct drm_msm_gem_submit_cmd { __u32 size; /* in, cmdstream size */ __u32 pad; __u32 nr_relocs; /* in, number of submit_reloc's */ - __u64 relocs; /* in, ptr to array of submit_reloc's */ + union { + __u64 relocs; /* in, ptr to array of submit_reloc's */ + __u64 iova; /* cmdstream address (for VM_BIND contexts) */ + }; }; =20 /* Each buffer referenced elsewhere in the cmdstream submit (ie. the @@ -339,7 +342,74 @@ struct drm_msm_gem_submit { __u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */ __u32 syncobj_stride; /* in, stride of syncobj arrays. */ __u32 pad; /* in, reserved for future use, always 0. */ +}; + +#define MSM_VM_BIND_OP_UNMAP 0 +#define MSM_VM_BIND_OP_MAP 1 +#define MSM_VM_BIND_OP_MAP_NULL 2 + +#define MSM_VM_BIND_OP_DUMP 1 +#define MSM_VM_BIND_OP_FLAGS ( \ + MSM_VM_BIND_OP_DUMP | \ + 0) =20 +/** + * struct drm_msm_vm_bind_op - bind/unbind op to run + */ +struct drm_msm_vm_bind_op { + /** @op: one of MSM_VM_BIND_OP_x */ + __u32 op; + /** @handle: GEM object handle, MBZ for UNMAP or MAP_NULL */ + __u32 handle; + /** @obj_offset: Offset into GEM object, MBZ for UNMAP or MAP_NULL */ + __u64 obj_offset; + /** @iova: Address to operate on */ + __u64 iova; + /** @range: Number of bytes to map/unmap */ + __u64 range; + /** @flags: Bitmask of MSM_VM_BIND_OP_FLAG_x */ + __u32 flags; + /** @pad: MBZ */ + __u32 pad; +}; + +#define MSM_VM_BIND_FENCE_FD_IN 0x00000001 +#define MSM_VM_BIND_FENCE_FD_OUT 0x00000002 +#define MSM_VM_BIND_FLAGS ( \ + MSM_VM_BIND_FENCE_FD_IN | \ + MSM_VM_BIND_FENCE_FD_OUT | \ + 0) + +/** + * struct drm_msm_vm_bind - Input of &DRM_IOCTL_MSM_VM_BIND + */ +struct drm_msm_vm_bind { + /** @flags: in, bitmask of MSM_VM_BIND_x */ + __u32 flags; + /** @nr_ops: the number of bind ops in this ioctl */ + __u32 nr_ops; + /** @fence_fd: in/out fence fd (see MSM_VM_BIND_FENCE_FD_IN/OUT) */ + __s32 fence_fd; + /** @queue_id: in, submitqueue id */ + __u32
queue_id; + /** @in_syncobjs: in, ptr to array of drm_msm_gem_syncobj */ + __u64 in_syncobjs; + /** @out_syncobjs: in, ptr to array of drm_msm_gem_syncobj */ + __u64 out_syncobjs; + /** @nr_in_syncobjs: in, number of entries in in_syncobj */ + __u32 nr_in_syncobjs; + /** @nr_out_syncobjs: in, number of entries in out_syncobj */ + __u32 nr_out_syncobjs; + /** @syncobj_stride: in, stride of syncobj arrays */ + __u32 syncobj_stride; + /** @op_stride: sizeof each struct drm_msm_vm_bind_op in @ops */ + __u32 op_stride; + union { + /** @op: used if num_ops =3D=3D 1 */ + struct drm_msm_vm_bind_op op; + /** @ops: userptr to array of drm_msm_vm_bind_op if num_ops > 1 */ + __u64 ops; + }; }; =20 #define MSM_WAIT_FENCE_BOOST 0x00000001 @@ -435,6 +505,7 @@ struct drm_msm_submitqueue_query { #define DRM_MSM_SUBMITQUEUE_NEW 0x0A #define DRM_MSM_SUBMITQUEUE_CLOSE 0x0B #define DRM_MSM_SUBMITQUEUE_QUERY 0x0C +#define DRM_MSM_VM_BIND 0x0D =20 #define DRM_IOCTL_MSM_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM= _GET_PARAM, struct drm_msm_param) #define DRM_IOCTL_MSM_SET_PARAM DRM_IOW (DRM_COMMAND_BASE + DRM_MSM= _SET_PARAM, struct drm_msm_param) @@ -448,6 +519,7 @@ struct drm_msm_submitqueue_query { #define DRM_IOCTL_MSM_SUBMITQUEUE_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_M= SM_SUBMITQUEUE_NEW, struct drm_msm_submitqueue) #define DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE DRM_IOW (DRM_COMMAND_BASE + DRM_M= SM_SUBMITQUEUE_CLOSE, __u32) #define DRM_IOCTL_MSM_SUBMITQUEUE_QUERY DRM_IOW (DRM_COMMAND_BASE + DRM_M= SM_SUBMITQUEUE_QUERY, struct drm_msm_submitqueue_query) +#define DRM_IOCTL_MSM_VM_BIND DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM= _VM_BIND, struct drm_msm_vm_bind) =20 #if defined(__cplusplus) } --=20 2.49.0 From nobody Wed Oct 8 17:34:32 2025 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B2B4F2E0B6D 
for ; Wed, 25 Jun 2025 18:59:19 +0000 (UTC)
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott , Antonino Maniscalco , Rob Clark , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 36/42] drm/msm: Add VM logging for VM_BIND updates
Date: Wed, 25 Jun 2025 11:47:29 -0700
Message-ID: <20250625184918.124608-37-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

When userspace opts in to VM_BIND, the submit no longer holds references keeping the VMA alive. This makes it difficult to distinguish between UMD/KMD/app bugs. So add a debug option for logging the most recent VM updates and capturing these in GPU devcoredumps.

The submitqueue id is also captured; a value of zero means the operation did not go via a submitqueue (ie. it comes from msm_gem_vm_close() tearing down the remaining mappings when the device file is closed).
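The wrap-around handling used by the VM log (in msm_gem_vm_unusable() and crashstate_get_vm_logs() below) is compact: the log length is a power of two, the write index wraps with a simple mask, and a never-written entry at the write index marks a log that has not wrapped yet. The following is a stand-alone user-space sketch of that indexing scheme, with hypothetical names; it illustrates the idea and is not the kernel code itself:

```c
#include <stdint.h>

/*
 * Sketch of a power-of-two ring buffer for VM op log entries.
 * Names here (log_entry, vm_log, ...) are illustrative only.
 */
#define LOG_SHIFT 3u
#define LOG_LEN   (1u << LOG_SHIFT)
#define LOG_MASK  (LOG_LEN - 1u)

struct log_entry {
	const char *op;		/* "map"/"unmap"; NULL = never written */
	uint64_t iova;
	uint64_t range;
};

struct vm_log {
	struct log_entry e[LOG_LEN];
	uint32_t idx;		/* next entry to overwrite */
};

static void log_append(struct vm_log *log, const char *op,
		       uint64_t iova, uint64_t range)
{
	log->e[log->idx] = (struct log_entry){ op, iova, range };
	log->idx = (log->idx + 1u) & LOG_MASK;
}

/*
 * Copy entries out oldest-first.  log->idx is the next slot to be
 * overwritten, i.e. the oldest entry -- unless that slot has never
 * been written, in which case the log has not wrapped yet and only
 * slots 0..idx-1 are valid.  Returns the number of entries copied.
 */
static uint32_t log_snapshot(const struct vm_log *log, struct log_entry *out)
{
	uint32_t first = log->idx, n = LOG_LEN;

	if (!log->e[first].op) {	/* not wrapped yet */
		n = first;
		first = 0;
	}

	for (uint32_t i = 0; i < n; i++)
		out[i] = log->e[(i + first) & LOG_MASK];

	return n;
}
```

Reading the ring out oldest-first is what lets the devcoredump print the entries in the order the VM updates were applied.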
Signed-off-by: Rob Clark Signed-off-by: Rob Clark Reviewed-by: Antonino Maniscalco Tested-by: Antonino Maniscalco --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 11 +++ drivers/gpu/drm/msm/msm_gem.h | 24 +++++ drivers/gpu/drm/msm/msm_gem_vma.c | 124 ++++++++++++++++++++++-- drivers/gpu/drm/msm/msm_gpu.c | 52 +++++++++- drivers/gpu/drm/msm/msm_gpu.h | 4 + 5 files changed, 202 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/= adreno/adreno_gpu.c index efe03f3f42ba..12b42ae2688c 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -837,6 +837,7 @@ void adreno_gpu_state_destroy(struct msm_gpu_state *sta= te) for (i =3D 0; state->bos && i < state->nr_bos; i++) kvfree(state->bos[i].data); =20 + kfree(state->vm_logs); kfree(state->bos); kfree(state->comm); kfree(state->cmd); @@ -977,6 +978,16 @@ void adreno_show(struct msm_gpu *gpu, struct msm_gpu_s= tate *state, info->ptes[0], info->ptes[1], info->ptes[2], info->ptes[3]); } =20 + if (state->vm_logs) { + drm_puts(p, "vm-log:\n"); + for (i =3D 0; i < state->nr_vm_logs; i++) { + struct msm_gem_vm_log_entry *e =3D &state->vm_logs[i]; + drm_printf(p, " - %s:%d: 0x%016llx-0x%016llx\n", + e->op, e->queue_id, e->iova, + e->iova + e->range); + } + } + drm_printf(p, "rbbm-status: 0x%08x\n", state->rbbm_status); =20 drm_puts(p, "ringbuffer:\n"); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d062722942b5..efbf58594c08 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -24,6 +24,20 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash mem= ory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping = */ =20 +/** + * struct msm_gem_vm_log_entry - An entry in the VM log + * + * For userspace managed VMs, a log of recent VM updates is tracked and + * captured in GPU devcore dumps, to aid debugging issues caused by (for + * example) incorrectly 
synchronized VM updates + */ +struct msm_gem_vm_log_entry { + const char *op; + uint64_t iova; + uint64_t range; + int queue_id; +}; + /** * struct msm_gem_vm - VM object * @@ -85,6 +99,15 @@ struct msm_gem_vm { /** @last_fence: Fence for last pending work scheduled on the VM */ struct dma_fence *last_fence; =20 + /** @log: A log of recent VM updates */ + struct msm_gem_vm_log_entry *log; + + /** @log_shift: length of @log is (1 << @log_shift) */ + uint32_t log_shift; + + /** @log_idx: index of next @log entry to write */ + uint32_t log_idx; + /** @faults: the number of GPU hangs associated with this address space */ int faults; =20 @@ -115,6 +138,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mm= u *mmu, const char *name, u64 va_start, u64 va_size, bool managed); =20 void msm_gem_vm_close(struct drm_gpuvm *gpuvm); +void msm_gem_vm_unusable(struct drm_gpuvm *gpuvm); =20 struct msm_fence_context; =20 diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_ge= m_vma.c index 5d4b7e3e9d2c..729027245986 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -17,6 +17,10 @@ =20 #define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##= __VA_ARGS__) =20 +static uint vm_log_shift =3D 0; +MODULE_PARM_DESC(vm_log_shift, "Length of VM op log"); +module_param_named(vm_log_shift, vm_log_shift, uint, 0600); + /** * struct msm_vm_map_op - create new pgtable mapping */ @@ -31,6 +35,13 @@ struct msm_vm_map_op { struct sg_table *sgt; /** @prot: the mapping protection flags */ int prot; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. 
process cleanup) + */ + int queue_id; }; =20 /** @@ -41,6 +52,13 @@ struct msm_vm_unmap_op { uint64_t iova; /** @range: size of region to unmap */ uint64_t range; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. process cleanup) + */ + int queue_id; }; =20 /** @@ -144,16 +162,87 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) vm->mmu->funcs->destroy(vm->mmu); dma_fence_put(vm->last_fence); put_pid(vm->pid); + kfree(vm->log); kfree(vm); } =20 +/** + * msm_gem_vm_unusable() - Mark a VM as unusable + * @vm: the VM to mark unusable + */ +void +msm_gem_vm_unusable(struct drm_gpuvm *gpuvm) +{ + struct msm_gem_vm *vm =3D to_msm_vm(gpuvm); + uint32_t vm_log_len =3D (1 << vm->log_shift); + uint32_t vm_log_mask =3D vm_log_len - 1; + uint32_t nr_vm_logs; + int first; + + vm->unusable =3D true; + + /* Bail if no log, or empty log: */ + if (!vm->log || !vm->log[0].op) + return; + + mutex_lock(&vm->mmu_lock); + + /* + * log_idx is the next entry to overwrite, meaning it is the oldest, or + * first, entry (other than the special case handled below where the + * log hasn't wrapped around yet) + */ + first =3D vm->log_idx; + + if (!vm->log[first].op) { + /* + * If the next log entry has not been written yet, then only + * entries 0 to idx-1 are valid (ie. 
we haven't wrapped around + * yet) + */ + nr_vm_logs =3D MAX(0, first - 1); + first =3D 0; + } else { + nr_vm_logs =3D vm_log_len; + } + + pr_err("vm-log:\n"); + for (int i =3D 0; i < nr_vm_logs; i++) { + int idx =3D (i + first) & vm_log_mask; + struct msm_gem_vm_log_entry *e =3D &vm->log[idx]; + pr_err(" - %s:%d: 0x%016llx-0x%016llx\n", + e->op, e->queue_id, e->iova, + e->iova + e->range); + } + + mutex_unlock(&vm->mmu_lock); +} + static void -vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t rang= e, int queue_id) { + int idx; + if (!vm->managed) lockdep_assert_held(&vm->mmu_lock); =20 - vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + vm_dbg("%s:%p:%d: %016llx %016llx", op, vm, queue_id, iova, iova + range); + + if (!vm->log) + return; + + idx =3D vm->log_idx; + vm->log[idx].op =3D op; + vm->log[idx].iova =3D iova; + vm->log[idx].range =3D range; + vm->log[idx].queue_id =3D queue_id; + vm->log_idx =3D (vm->log_idx + 1) & ((1 << vm->log_shift) - 1); +} + +static void +vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +{ + vm_log(vm, "unmap", op->iova, op->range, op->queue_id); =20 vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); } @@ -161,10 +250,7 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm= _unmap_op *op) static int vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) { - if (!vm->managed) - lockdep_assert_held(&vm->mmu_lock); - - vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + vm_log(vm, "map", op->iova, op->range, op->queue_id); =20 return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, op->range, op->prot); @@ -382,6 +468,7 @@ vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map= *op) static int msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) { + struct msm_vm_bind_job *job =3D ((struct op_arg *)arg)->job; struct drm_gem_object *obj =3D op->map.gem.obj; 
struct drm_gpuva *vma; struct sg_table *sgt; @@ -412,6 +499,7 @@ msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *a= rg) .range =3D vma->va.range, .offset =3D vma->gem.offset, .prot =3D prot, + .queue_id =3D job->queue->id, }, .obj =3D vma->gem.obj, }); @@ -445,6 +533,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void = *arg) .unmap =3D { .iova =3D unmap_start, .range =3D unmap_range, + .queue_id =3D job->queue->id, }, .obj =3D orig_vma->gem.obj, }); @@ -506,6 +595,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void = *arg) static int msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) { + struct msm_vm_bind_job *job =3D ((struct op_arg *)arg)->job; struct drm_gpuva *vma =3D op->unmap.va; struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); =20 @@ -520,6 +610,7 @@ msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void = *arg) .unmap =3D { .iova =3D vma->va.addr, .range =3D vma->va.range, + .queue_id =3D job->queue->id, }, .obj =3D vma->gem.obj, }); @@ -584,7 +675,7 @@ msm_vma_job_run(struct drm_sched_job *_job) * now the VM is in an undefined state. Game over! */ if (ret) - vm->unusable =3D true; + msm_gem_vm_unusable(job->vm); =20 job_foreach_bo (obj, job) { msm_gem_lock(obj); @@ -695,6 +786,23 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_m= mu *mmu, const char *name, =20 drm_mm_init(&vm->mm, va_start, va_size); =20 + /* + * We don't really need vm log for kernel managed VMs, as the kernel + * is responsible for ensuring that GEM objs are mapped if they are + * used by a submit. Furthermore we piggyback on mmu_lock to serialize + * access to the log. + * + * Limit the max log_shift to 8 to prevent userspace from asking us + * for an unreasonable log size. 
+ */ + if (!managed) + vm->log_shift =3D MIN(vm_log_shift, 8); + + if (vm->log_shift) { + vm->log =3D kmalloc_array(1 << vm->log_shift, sizeof(vm->log[0]), + GFP_KERNEL | __GFP_ZERO); + } + return &vm->base; =20 err_free_dummy: @@ -1161,7 +1269,7 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job) * state the vm is in. So throw up our hands! */ if (i > 0) - vm->unusable =3D true; + msm_gem_vm_unusable(job->vm); return ret; } } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 8178b6499478..e5896c084c8a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -259,9 +259,6 @@ static void crashstate_get_bos(struct msm_gpu_state *st= ate, struct msm_gem_submi { extern bool rd_full; =20 - if (!submit) - return; - if (msm_context_is_vmbind(submit->queue->ctx)) { struct drm_exec exec; struct drm_gpuva *vma; @@ -318,6 +315,48 @@ static void crashstate_get_bos(struct msm_gpu_state *s= tate, struct msm_gem_submi } } =20 +static void crashstate_get_vm_logs(struct msm_gpu_state *state, struct msm= _gem_vm *vm) +{ + uint32_t vm_log_len =3D (1 << vm->log_shift); + uint32_t vm_log_mask =3D vm_log_len - 1; + int first; + + /* Bail if no log, or empty log: */ + if (!vm->log || !vm->log[0].op) + return; + + mutex_lock(&vm->mmu_lock); + + /* + * log_idx is the next entry to overwrite, meaning it is the oldest, or + * first, entry (other than the special case handled below where the + * log hasn't wrapped around yet) + */ + first =3D vm->log_idx; + + if (!vm->log[first].op) { + /* + * If the next log entry has not been written yet, then only + * entries 0 to idx-1 are valid (ie. 
we haven't wrapped around + * yet) + */ + state->nr_vm_logs =3D MAX(0, first - 1); + first =3D 0; + } else { + state->nr_vm_logs =3D vm_log_len; + } + + state->vm_logs =3D kmalloc_array( + state->nr_vm_logs, sizeof(vm->log[0]), GFP_KERNEL); + for (int i =3D 0; i < state->nr_vm_logs; i++) { + int idx =3D (i + first) & vm_log_mask; + + state->vm_logs[i] =3D vm->log[idx]; + } + + mutex_unlock(&vm->mmu_lock); +} + static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, struct msm_gem_submit *submit, char *comm, char *cmd) { @@ -349,7 +388,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu = *gpu, msm_iommu_pagetable_walk(mmu, info->iova, info->ptes); } =20 - crashstate_get_bos(state, submit); + if (submit) { + crashstate_get_vm_logs(state, to_msm_vm(submit->vm)); + crashstate_get_bos(state, submit); + } =20 /* Set the active crash state to be dumped on failure */ gpu->crashstate =3D state; @@ -449,7 +491,7 @@ static void recover_worker(struct kthread_work *work) * VM_BIND) */ if (!vm->managed) - vm->unusable =3D true; + msm_gem_vm_unusable(submit->vm); } =20 get_comm_cmdline(submit, &comm, &cmd); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 9cbf155ff222..31b83e9e3673 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -20,6 +20,7 @@ #include "msm_gem.h" =20 struct msm_gem_submit; +struct msm_gem_vm_log_entry; struct msm_gpu_perfcntr; struct msm_gpu_state; struct msm_context; @@ -609,6 +610,9 @@ struct msm_gpu_state { =20 struct msm_gpu_fault_info fault_info; =20 + int nr_vm_logs; + struct msm_gem_vm_log_entry *vm_logs; + int nr_bos; struct msm_gpu_state_bo *bos; }; --=20 2.49.0 From nobody Wed Oct 8 17:34:32 2025 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DF61D2E11B5 for ; Wed, 25 Jun 2025 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 37/42] drm/msm: Add VMA unmap reason
Date: Wed, 25 Jun 2025 11:47:30 -0700
Message-ID: <20250625184918.124608-38-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Make the VM log a bit more useful by providing a reason for the unmap
(ie. closing VM vs evict/purge, etc)

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c     | 20 +++++++++++---------
 drivers/gpu/drm/msm/msm_gem.h     |  2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c | 15 ++++++++++++---
 3 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index fea13a993629..e415e6e32a59 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -47,7 +47,8 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 	return 0;
 }
 
-static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close);
+static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+			    bool close, const char *reason);
 
 static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 {
@@ -61,7 +62,7 @@ static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 	drm_gpuvm_bo_for_each_va (vma, vm_bo) {
 		if (vma->vm != vm)
 			continue;
-		msm_gem_vma_unmap(vma);
+		msm_gem_vma_unmap(vma, "detach");
 		msm_gem_vma_close(vma);
 		break;
 	}
@@ -101,7 +102,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 			  MAX_SCHEDULE_TIMEOUT);
 
 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
-	put_iova_spaces(obj, ctx->vm, true);
+	put_iova_spaces(obj, ctx->vm, true, "close");
 	detach_vm(obj, ctx->vm);
 	drm_exec_fini(&exec);     /* drop locks */
 }
@@ -429,7 +430,8 @@ static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj,
  * mapping.
  */
 static void
-put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
+put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+		bool close, const char *reason)
 {
 	struct drm_gpuvm_bo *vm_bo, *tmp;
 
@@ -444,7 +446,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 		drm_gpuvm_bo_get(vm_bo);
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			msm_gem_vma_unmap(vma);
+			msm_gem_vma_unmap(vma, reason);
 			if (close)
 				msm_gem_vma_close(vma);
 		}
@@ -625,7 +627,7 @@ static int clear_iova(struct drm_gem_object *obj,
 	if (!vma)
 		return 0;
 
-	msm_gem_vma_unmap(vma);
+	msm_gem_vma_unmap(vma, NULL);
 	msm_gem_vma_close(vma);
 
 	return 0;
@@ -837,7 +839,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	GEM_WARN_ON(!is_purgeable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, NULL, false);
+	put_iova_spaces(obj, NULL, false, "purge");
 
 	msm_gem_vunmap(obj);
 
@@ -875,7 +877,7 @@ void msm_gem_evict(struct drm_gem_object *obj)
 	GEM_WARN_ON(is_unevictable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, NULL, false);
+	put_iova_spaces(obj, NULL, false, "evict");
 
 	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
 
@@ -1087,7 +1089,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 			drm_exec_retry_on_contention(&exec);
 		}
 	}
-	put_iova_spaces(obj, NULL, true);
+	put_iova_spaces(obj, NULL, true, "free");
 	drm_exec_fini(&exec);     /* drop locks */
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index efbf58594c08..57252b5e08d0 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -168,7 +168,7 @@ struct msm_gem_vma {
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 offset, u64 range_start, u64 range_end);
-void msm_gem_vma_unmap(struct drm_gpuva *vma);
+void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason);
 int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 729027245986..907ebf5073e6 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -53,6 +53,9 @@ struct msm_vm_unmap_op {
 	/** @range: size of region to unmap */
 	uint64_t range;
 
+	/** @reason: The reason for the unmap */
+	const char *reason;
+
 	/**
 	 * @queue_id: The id of the submitqueue the operation is performed
 	 * on, or zero for (in particular) UNMAP ops triggered outside of
@@ -242,7 +245,12 @@ vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int
 static void
 vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
 {
-	vm_log(vm, "unmap", op->iova, op->range, op->queue_id);
+	const char *reason = op->reason;
+
+	if (!reason)
+		reason = "unmap";
+
+	vm_log(vm, reason, op->iova, op->range, op->queue_id);
 
 	vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
 }
@@ -257,7 +265,7 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
 }
 
 /* Actually unmap memory for the vma */
-void msm_gem_vma_unmap(struct drm_gpuva *vma)
+void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason)
 {
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
@@ -277,6 +285,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma)
 	vm_unmap_op(vm, &(struct msm_vm_unmap_op){
 		.iova = vma->va.addr,
 		.range = vma->va.range,
+		.reason = reason,
 	});
 
 	if (!vm->managed)
@@ -863,7 +872,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm)
 			drm_exec_retry_on_contention(&exec);
 		}
 
-		msm_gem_vma_unmap(vma);
+		msm_gem_vma_unmap(vma, "close");
 		msm_gem_vma_close(vma);
 
 		if (obj) {
-- 
2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 38/42] drm/msm: Add mmu prealloc tracepoint
Date: Wed, 25 Jun 2025 11:47:31 -0700
Message-ID: <20250625184918.124608-39-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

So we can monitor how many pages are getting preallocated vs how many
get used.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gpu_trace.h | 14 ++++++++++++++
 drivers/gpu/drm/msm/msm_iommu.c     |  4 ++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
index 7f863282db0d..781bbe5540bd 100644
--- a/drivers/gpu/drm/msm/msm_gpu_trace.h
+++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
@@ -205,6 +205,20 @@ TRACE_EVENT(msm_gpu_preemption_irq,
 		TP_printk("preempted to %u", __entry->ring_id)
 );
 
+TRACE_EVENT(msm_mmu_prealloc_cleanup,
+		TP_PROTO(u32 count, u32 remaining),
+		TP_ARGS(count, remaining),
+		TP_STRUCT__entry(
+			__field(u32, count)
+			__field(u32, remaining)
+			),
+		TP_fast_assign(
+			__entry->count = count;
+			__entry->remaining = remaining;
+			),
+		TP_printk("count=%u, remaining=%u", __entry->count, __entry->remaining)
+);
+
 #endif
 
 #undef TRACE_INCLUDE_PATH
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index bfee3e0dcb23..09fd99ac06f6 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include "msm_drv.h"
+#include "msm_gpu_trace.h"
 #include "msm_mmu.h"
 
@@ -346,6 +347,9 @@ msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_preallo
 	struct kmem_cache *pt_cache = get_pt_cache(mmu);
 	uint32_t remaining_pt_count = p->count - p->ptr;
 
+	if (p->count > 0)
+		trace_msm_mmu_prealloc_cleanup(p->count, remaining_pt_count);
+
 	kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
 	kvfree(p->pages);
 }
-- 
2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 39/42] drm/msm: use trylock for debugfs
Date: Wed, 25 Jun 2025 11:47:32 -0700
Message-ID: <20250625184918.124608-40-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

This resolves a potential deadlock vs msm_gem_vm_close().  Otherwise for
_NO_SHARE buffers msm_gem_describe() could be trying to acquire the
shared vm resv, while already holding priv->obj_lock.  But _vm_close()
might drop the last reference to a GEM obj while already holding the vm
resv, and msm_gem_free_object() needs to grab priv->obj_lock, a locking
inversion.

OTOH this is only for debugfs and it isn't critical if we undercount by
skipping a locked obj.  So just use trylock() and move along if we can't
get the lock.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem.c | 3 ++-
 drivers/gpu/drm/msm/msm_gem.h | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index e415e6e32a59..b882647144bb 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -946,7 +946,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 	uint64_t off = drm_vma_node_start(&obj->vma_node);
 	const char *madv;
 
-	msm_gem_lock(obj);
+	if (!msm_gem_trylock(obj))
+		return;
 
 	stats->all.count++;
 	stats->all.size += obj->size;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 57252b5e08d0..9671c4299cf8 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -280,6 +280,12 @@ msm_gem_lock(struct drm_gem_object *obj)
 	dma_resv_lock(obj->resv, NULL);
 }
 
+static inline bool __must_check
+msm_gem_trylock(struct drm_gem_object *obj)
+{
+	return dma_resv_trylock(obj->resv);
+}
+
 static inline int
 msm_gem_lock_interruptible(struct drm_gem_object *obj)
 {
-- 
2.49.0
From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Connor Abbott, Antonino Maniscalco, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 40/42] drm/msm: Bump UAPI version
Date: Wed, 25 Jun 2025 11:47:33 -0700
Message-ID: <20250625184918.124608-41-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
definitions=2025-06-25_06,2025-06-25_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 impostorscore=0 suspectscore=0 adultscore=0 phishscore=0 mlxlogscore=999 spamscore=0 malwarescore=0 priorityscore=1501 lowpriorityscore=0 clxscore=1015 mlxscore=0 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506250143 Content-Type: text/plain; charset="utf-8" From: Rob Clark Bump version to signal to userspace that VM_BIND is supported. Signed-off-by: Rob Clark Signed-off-by: Rob Clark Reviewed-by: Antonino Maniscalco Tested-by: Antonino Maniscalco --- drivers/gpu/drm/msm/msm_drv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index bdf775897de8..710046906229 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -41,9 +41,10 @@ * - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT * - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST) * - 1.12.0 - Add MSM_INFO_SET_METADATA and MSM_INFO_GET_METADATA + * - 1.13.0 - Add VM_BIND */ #define MSM_VERSION_MAJOR 1 -#define MSM_VERSION_MINOR 12 +#define MSM_VERSION_MINOR 13 #define MSM_VERSION_PATCHLEVEL 0 =20 bool dumpstate; --=20 2.49.0 From nobody Wed Oct 8 17:34:32 2025 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D098A2E11C4 for ; Wed, 25 Jun 2025 18:59:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750877969; cv=none; 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v7 41/42] drm/msm: Defer VMA unmap for fb unpins
Date: Wed, 25 Jun 2025 11:47:34 -0700
Message-ID: <20250625184918.124608-42-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

With the conversion to drm_gpuvm, we lost the lazy VMA cleanup, which means that fb cleanup/unpin when pageflipping to new scanout buffers immediately unmaps the scanout buffer. This is costly (with tlbinv, it can take 4-6ms for a 1080p scanout buffer, and more for higher resolutions)!

To avoid this, introduce a vma_ref, which is incremented whenever userspace has a GEM handle or dma-buf fd. When unpinning, if the vm is the kms->vm, we defer tearing down the VMA until the vma_ref drops to zero. If the buffer is still part of a flip-chain, then userspace will be holding some sort of reference to the BO, either via a GEM handle and/or a dma-buf fd. So this avoids unmapping the VMA when there is a strong possibility that it will be needed again.
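The vma_ref scheme described above can be sketched in plain userspace C. All names below are invented for illustration (this is not the driver's actual API): each userspace GEM handle and each dma-buf export takes a reference, and the deferred VMA is only torn down when the last reference is dropped.

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative userspace sketch of the vma_ref idea: names are made up. */
struct bo {
	atomic_int vma_ref;
	int vma_mapped;		/* 1 while the lazy scanout VMA is kept around */
};

static void bo_vma_get(struct bo *bo)
{
	atomic_fetch_add(&bo->vma_ref, 1);
}

static void bo_vma_put(struct bo *bo)
{
	/* fetch_sub returns the old value: old == 1 means we were last */
	if (atomic_fetch_sub(&bo->vma_ref, 1) != 1)
		return;
	/* last reference dropped: now it is safe to tear down the VMA */
	bo->vma_mapped = 0;
}
```

The key property, as the commit message argues, is that any holder of a vma_ref also holds a reference to the BO itself, so keeping the VMA around lazily cannot outlive the BO.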
Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_drv.c       |  1 +
 drivers/gpu/drm/msm/msm_drv.h       |  1 +
 drivers/gpu/drm/msm/msm_fb.c        |  5 ++-
 drivers/gpu/drm/msm/msm_gem.c       | 61 +++++++++++++++++------------
 drivers/gpu/drm/msm/msm_gem.h       | 28 +++++++++++++
 drivers/gpu/drm/msm/msm_gem_prime.c | 55 +++++++++++++++++++++++++-
 6 files changed, 124 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 710046906229..585527fe09e7 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -826,6 +826,7 @@ static const struct drm_driver msm_driver = {
 	.postclose = msm_postclose,
 	.dumb_create = msm_gem_dumb_create,
 	.dumb_map_offset = msm_gem_dumb_map_offset,
+	.gem_prime_import = msm_gem_prime_import,
 	.gem_prime_import_sg_table = msm_gem_prime_import_sg_table,
 #ifdef CONFIG_DEBUG_FS
 	.debugfs_init = msm_debugfs_init,
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 9a4d2b6d459d..b20d94d1a22e 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -246,6 +246,7 @@ void msm_gem_shrinker_cleanup(struct drm_device *dev);
 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
+struct drm_gem_object *msm_gem_prime_import(struct drm_device *dev, struct dma_buf *buf);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg);
 struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags);
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 8ae2f326ec54..bc7c2bb8f01e 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -89,6 +89,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 		return 0;
 
 	for (i = 0; i < n; i++) {
+		msm_gem_vma_get(fb->obj[i]);
 		ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]);
 		drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n",
 				fb->base.id, i, msm_fb->iova[i], ret);
@@ -114,8 +115,10 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 
 	memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 
-	for (i = 0; i < n; i++)
+	for (i = 0; i < n; i++) {
 		msm_gem_unpin_iova(fb->obj[i], vm);
+		msm_gem_vma_put(fb->obj[i]);
+	}
 }
 
 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b882647144bb..c56d773a3d04 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -19,11 +19,11 @@
 #include "msm_drv.h"
 #include "msm_gem.h"
 #include "msm_gpu.h"
+#include "msm_kms.h"
 
 static int pgprot = 0;
 module_param(pgprot, int, 0600);
 
-
 static void update_device_mem(struct msm_drm_private *priv, ssize_t size)
 {
 	uint64_t total_mem = atomic64_add_return(size, &priv->total_mem);
@@ -43,6 +43,7 @@ static void update_ctx_mem(struct drm_file *file, ssize_t size)
 
 static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 {
+	msm_gem_vma_get(obj);
 	update_ctx_mem(file, obj->size);
 	return 0;
 }
@@ -50,33 +51,13 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm,
 			    bool close, const char *reason);
 
-static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
-{
-	msm_gem_assert_locked(obj);
-	drm_gpuvm_resv_assert_held(vm);
-
-	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(vm, obj);
-	if (vm_bo) {
-		struct drm_gpuva *vma;
-
-		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
-			if (vma->vm != vm)
-				continue;
-			msm_gem_vma_unmap(vma, "detach");
-			msm_gem_vma_close(vma);
-			break;
-		}
-
-		drm_gpuvm_bo_put(vm_bo);
-	}
-}
-
 static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 {
 	struct msm_context *ctx = file->driver_priv;
 	struct drm_exec exec;
 
 	update_ctx_mem(file, -obj->size);
+	msm_gem_vma_put(obj);
 
 	/*
 	 * If VM isn't created yet, nothing to cleanup.  And in fact calling
@@ -103,7 +84,31 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 
 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
 	put_iova_spaces(obj, ctx->vm, true, "close");
-	detach_vm(obj, ctx->vm);
+	drm_exec_fini(&exec); /* drop locks */
+}
+
+/*
+ * Get/put for kms->vm VMA
+ */
+
+void msm_gem_vma_get(struct drm_gem_object *obj)
+{
+	atomic_inc(&to_msm_bo(obj)->vma_ref);
+}
+
+void msm_gem_vma_put(struct drm_gem_object *obj)
+{
+	struct msm_drm_private *priv = obj->dev->dev_private;
+	struct drm_exec exec;
+
+	if (atomic_dec_return(&to_msm_bo(obj)->vma_ref))
+		return;
+
+	if (!priv->kms)
+		return;
+
+	msm_gem_lock_vm_and_obj(&exec, obj, priv->kms->vm);
+	put_iova_spaces(obj, priv->kms->vm, true, "vma_put");
 	drm_exec_fini(&exec); /* drop locks */
 }
 
@@ -664,6 +669,13 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 	return ret;
 }
 
+static bool is_kms_vm(struct drm_gpuvm *vm)
+{
+	struct msm_drm_private *priv = vm->drm->dev_private;
+
+	return priv->kms && (priv->kms->vm == vm);
+}
+
 /*
  * Unpin a iova by updating the reference counts. The memory isn't actually
  * purged until something else (shrinker, mm_notifier, destroy, etc) decides
@@ -679,7 +691,8 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 	if (vma) {
 		msm_gem_unpin_locked(obj);
 	}
-	detach_vm(obj, vm);
+	if (!is_kms_vm(vm))
+		put_iova_spaces(obj, vm, true, "close");
 	drm_exec_fini(&exec); /* drop locks */
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 9671c4299cf8..47d07a01f0c1 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -211,9 +211,37 @@ struct msm_gem_object {
 	 * Protected by LRU lock.
 	 */
 	int pin_count;
+
+	/**
+	 * @vma_ref: Reference count of VMA users.
+	 *
+	 * With the vm_bo/vma holding a reference to the GEM object, we'd
+	 * otherwise have to actively tear down a VMA when, for example,
+	 * a buffer is unpinned for scanout, vs. the pre-drm_gpuvm approach
+	 * where a VMA did not hold a reference to the BO, but instead was
+	 * implicitly torn down when the BO was freed.
+	 *
+	 * To regain the lazy VMA teardown, we use the @vma_ref.  It is
+	 * incremented for any of the following:
+	 *
+	 * 1) the BO is exported as a dma_buf
+	 * 2) the BO has an open userspace handle
+	 *
+	 * All of those conditions will hold a reference to the BO,
+	 * preventing it from being freed.  So lazily keeping around the
+	 * VMA will not prevent the BO from being freed.  (Or rather, the
+	 * reference loop is harmless in this case.)
+	 *
+	 * When the @vma_ref drops to zero, then the kms->vm VMA will be
+	 * torn down.
+	 */
+	atomic_t vma_ref;
 };
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
+void msm_gem_vma_get(struct drm_gem_object *obj);
+void msm_gem_vma_put(struct drm_gem_object *obj);
+
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
 int msm_gem_prot(struct drm_gem_object *obj);
 int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index 1a6d8099196a..b5cea248b7c3 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -6,6 +6,7 @@
 
 #include <linux/dma-buf.h>
 
+#include <drm/drm_drv.h>
 #include <drm/drm_prime.h>
 
 #include "msm_drv.h"
@@ -42,19 +43,69 @@ void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 	msm_gem_put_vaddr_locked(obj);
 }
 
+static void msm_gem_dmabuf_release(struct dma_buf *dma_buf)
+{
+	struct drm_gem_object *obj = dma_buf->priv;
+
+	msm_gem_vma_put(obj);
+	drm_gem_dmabuf_release(dma_buf);
+}
+
+static const struct dma_buf_ops msm_gem_prime_dmabuf_ops = {
+	.cache_sgt_mapping = true,
+	.attach = drm_gem_map_attach,
+	.detach = drm_gem_map_detach,
+	.map_dma_buf = drm_gem_map_dma_buf,
+	.unmap_dma_buf = drm_gem_unmap_dma_buf,
+	.release = msm_gem_dmabuf_release,
+	.mmap = drm_gem_dmabuf_mmap,
+	.vmap = drm_gem_dmabuf_vmap,
+	.vunmap = drm_gem_dmabuf_vunmap,
+};
+
+struct drm_gem_object *msm_gem_prime_import(struct drm_device *dev,
+					    struct dma_buf *buf)
+{
+	if (buf->ops == &msm_gem_prime_dmabuf_ops) {
+		struct drm_gem_object *obj = buf->priv;
+		if (obj->dev == dev) {
+			/*
+			 * Importing a dmabuf exported from our own gem increases
+			 * the refcount on the gem itself instead of the f_count
+			 * of the dmabuf.
+			 */
+			drm_gem_object_get(obj);
+			return obj;
+		}
+	}
+
+	return drm_gem_prime_import(dev, buf);
+}
+
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg)
 {
 	return msm_gem_import(dev, attach->dmabuf, sg);
 }
 
-
 struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags)
 {
 	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
 		return ERR_PTR(-EPERM);
 
-	return drm_gem_prime_export(obj, flags);
+	msm_gem_vma_get(obj);
+
+	struct drm_device *dev = obj->dev;
+	struct dma_buf_export_info exp_info = {
+		.exp_name = KBUILD_MODNAME, /* white lie for debug */
+		.owner = dev->driver->fops->owner,
+		.ops = &msm_gem_prime_dmabuf_ops,
+		.size = obj->size,
+		.flags = flags,
+		.priv = obj,
+		.resv = obj->resv,
+	};
+
+	return drm_gem_dmabuf_export(dev, &exp_info);
 }
 
 int msm_gem_prime_pin(struct drm_gem_object *obj)
-- 
2.49.0

From nobody Wed Oct 8 17:34:32 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, Connor Abbott, Antonino Maniscalco, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v7 42/42] drm/msm: Add VM_BIND throttling
Date: Wed, 25 Jun 2025 11:47:35 -0700
Message-ID: <20250625184918.124608-43-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
References: <20250625184918.124608-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

A large number of (unsorted or separate) small (<2MB) mappings can cause a lot of, probably unnecessary, prealloc pages. I.e. a single 4k mapping will pre-allocate 3 pages (for levels 2-4) for the pagetable, which can chew up a large amount of unneeded memory. So add a mechanism to put an upper bound on the number of pre-allocated pages.

Signed-off-by: Rob Clark
Reviewed-by: Antonino Maniscalco
Tested-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_gem_vma.c | 23 +++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_gpu.h     |  3 +++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 907ebf5073e6..bb3a6e8320c9 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -705,6 +705,8 @@ msm_vma_job_free(struct drm_sched_job *_job)
 
 	mmu->funcs->prealloc_cleanup(mmu, &job->prealloc);
 
+	atomic_sub(job->prealloc.count, &job->queue->in_flight_prealloc);
+
 	drm_sched_job_cleanup(_job);
 
 	job_foreach_bo (obj, job)
@@ -1089,10 +1091,11 @@ ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next)
  * them as a single mapping.  Otherwise the prealloc_count() will not realize
  * they can share pagetable pages and vastly overcount.
  */
-static void
+static int
 vm_bind_prealloc_count(struct msm_vm_bind_job *job)
 {
 	struct msm_vm_bind_op *first = NULL, *last = NULL;
+	int ret;
 
 	for (int i = 0; i < job->nr_ops; i++) {
 		struct msm_vm_bind_op *op = &job->ops[i];
@@ -1121,6 +1124,20 @@ vm_bind_prealloc_count(struct msm_vm_bind_job *job)
 
 	/* Flush the remaining range: */
 	prealloc_count(job, first, last);
+
+	/*
+	 * Now that we know the needed amount to pre-alloc, throttle on pending
+	 * VM_BIND jobs if we already have too much pre-alloc memory in flight
+	 */
+	ret = wait_event_interruptible(
+			to_msm_vm(job->vm)->sched.job_scheduled,
+			atomic_read(&job->queue->in_flight_prealloc) <= 1024);
+	if (ret)
+		return ret;
+
+	atomic_add(job->prealloc.count, &job->queue->in_flight_prealloc);
+
+	return 0;
 }
 
 /*
@@ -1411,7 +1428,9 @@ msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *file)
 	if (ret)
 		goto out_unlock;
 
-	vm_bind_prealloc_count(job);
+	ret = vm_bind_prealloc_count(job);
+	if (ret)
+		goto out_unlock;
 
 	struct drm_exec exec;
 	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 31b83e9e3673..5508885d865f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -555,6 +555,8 @@ static inline int msm_gpu_convert_priority(struct msm_gpu *gpu, int prio,
  *    seqno, protected by submitqueue lock
  * @idr_lock: for serializing access to fence_idr
  * @lock: submitqueue lock for serializing submits on a queue
+ * @in_flight_prealloc: for VM_BIND queue, # of preallocated pgtable pages for
+ *    queued VM_BIND jobs
  * @ref: reference count
  * @entity: the submit job-queue
  */
@@ -569,6 +571,7 @@ struct msm_gpu_submitqueue {
 	struct idr fence_idr;
 	struct spinlock idr_lock;
 	struct mutex lock;
+	atomic_t in_flight_prealloc;
 	struct kref ref;
 	struct drm_sched_entity *entity;
 
-- 
2.49.0
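The throttling scheme in this final patch boils down to a counter with a cap: the submit path checks the in-flight pre-alloc page count against a limit before accounting its own pages, and job completion subtracts them. A minimal userspace sketch follows; the 1024-page cap mirrors the value used in vm_bind_prealloc_count(), but the struct and function names are invented for illustration, and where the kernel sleeps interruptibly on the scheduler's job_scheduled waitqueue, this sketch simply reports whether the caller would have to wait.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Cap on in-flight pre-allocated pgtable pages (mirrors the patch). */
#define PREALLOC_CAP 1024

struct prealloc_throttle {
	atomic_int in_flight;	/* pgtable pages pre-allocated for queued jobs */
};

/* Submit path: check the cap *before* adding, as the patch does, so a
 * single oversized job can still get through when the queue is idle. */
static bool throttle_try_submit(struct prealloc_throttle *t, int pages)
{
	if (atomic_load(&t->in_flight) > PREALLOC_CAP)
		return false;	/* kernel would sleep here and retry */
	atomic_fetch_add(&t->in_flight, pages);
	return true;
}

/* Job-free path: give the pages back, letting throttled submitters in. */
static void throttle_complete(struct prealloc_throttle *t, int pages)
{
	atomic_fetch_sub(&t->in_flight, pages);
}
```

Note the design choice this models: the limit bounds aggregate waste from many small jobs (each 4k mapping pre-allocating 3 pagetable pages) without ever deadlocking a single job whose own requirement exceeds the cap.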