From: Rob Clark <robin.clark@oss.qualcomm.com>
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    Connor Abbott, Antonino Maniscalco, Danilo Krummrich, Rob Clark,
    Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal,
    Christian König, linux-kernel@vger.kernel.org (open list),
    linux-media@vger.kernel.org (open list: DMA BUFFER SHARING FRAMEWORK),
    linaro-mm-sig@lists.linaro.org (moderated list: DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v9 35/42] drm/msm: Add VM_BIND ioctl
Date: Sun, 29 Jun 2025 13:13:18 -0700
Message-ID: <20250629201530.25775-36-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
References: <20250629201530.25775-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Rob Clark

Add a VM_BIND ioctl for binding/unbinding buffers into a VM.  This is
only supported if userspace has opted in to MSM_PARAM_EN_VM_BIND.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
 drivers/gpu/drm/msm/msm_drv.c        |    1 +
 drivers/gpu/drm/msm/msm_drv.h        |    4 +-
 drivers/gpu/drm/msm/msm_gem.c        |   40 +-
 drivers/gpu/drm/msm/msm_gem.h        |    4 +
 drivers/gpu/drm/msm/msm_gem_submit.c |   22 +-
 drivers/gpu/drm/msm/msm_gem_vma.c    | 1092 +++++++++++++++++++++++++-
 include/uapi/drm/msm_drm.h           |   74 +-
 7 files changed, 1204 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c1627cae6ae6..7881afa3a75a 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -795,6 +795,7 @@ static const struct drm_ioctl_desc msm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_NEW,   msm_ioctl_submitqueue_new,   DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_CLOSE, msm_ioctl_submitqueue_close, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(MSM_VM_BIND,           msm_ioctl_vm_bind,           DRM_RENDER_ALLOW),
 };
 
 static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file)
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 9b1ccb2b18f6..200c3135bbf9 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -255,7 +255,9 @@ struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev);
 bool msm_use_mmu(struct drm_device *dev);
 
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
-		struct drm_file *file);
+			 struct drm_file *file);
+int msm_ioctl_vm_bind(struct drm_device *dev, void *data,
+		      struct drm_file *file);
 
 #ifdef CONFIG_DEBUG_FS
 unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan);
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b688d397cc47..77fdf53d3e33 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -251,8 +251,7 @@ static void put_pages(struct drm_gem_object *obj)
 	}
 }
 
-static struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj,
-					      unsigned madv)
+struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
@@ -1052,18 +1051,37 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 	/*
 	 * We need to lock any VMs the object is still attached to, but not
 	 * the object itself (see explaination in msm_gem_assert_locked()),
-	 * so just open-code this special case:
+	 * so just open-code this special case.
+	 *
+	 * Note that we skip the dance if we aren't attached to any VM.  This
+	 * is load bearing.  The driver needs to support two usage models:
+	 *
+	 * 1. Legacy kernel managed VM: Userspace expects the VMA's to be
+	 *    implicitly torn down when the object is freed, the VMA's do
+	 *    not hold a hard reference to the BO.
+	 *
+	 * 2. VM_BIND, userspace managed VM: The VMA holds a reference to the
+	 *    BO.  This can be dropped when the VM is closed and it's associated
+	 *    VMAs are torn down.  (See msm_gem_vm_close()).
+	 *
+	 * In the latter case the last reference to a BO can be dropped while
+	 * we already have the VM locked.  It would have already been removed
+	 * from the gpuva list, but lockdep doesn't know that.  Or understand
+	 * the differences between the two usage models.
 	 */
-	drm_exec_init(&exec, 0, 0);
-	drm_exec_until_all_locked (&exec) {
-		struct drm_gpuvm_bo *vm_bo;
-		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
-			drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm));
-			drm_exec_retry_on_contention(&exec);
+	if (!list_empty(&obj->gpuva.list)) {
+		drm_exec_init(&exec, 0, 0);
+		drm_exec_until_all_locked (&exec) {
+			struct drm_gpuvm_bo *vm_bo;
+			drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+				drm_exec_lock_obj(&exec,
+						  drm_gpuvm_resv_obj(vm_bo->vm));
+				drm_exec_retry_on_contention(&exec);
+			}
 		}
+		put_iova_spaces(obj, NULL, true);
+		drm_exec_fini(&exec); /* drop locks */
 	}
-	put_iova_spaces(obj, NULL, true);
-	drm_exec_fini(&exec); /* drop locks */
 
 	if (drm_gem_is_imported(obj)) {
 		GEM_WARN_ON(msm_obj->vaddr);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index f369a30a247c..ee464e315643 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -73,6 +73,9 @@ struct msm_gem_vm {
 	/** @mmu: The mmu object which manages the pgtables */
 	struct msm_mmu *mmu;
 
+	/** @mmu_lock: Protects access to the mmu */
+	struct mutex mmu_lock;
+
 	/**
 	 * @pid: For address spaces associated with a specific process, this
 	 * will be non-NULL:
@@ -205,6 +208,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
 			     uint64_t *iova);
 void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm);
 void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
+struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv);
 struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
 int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index e2174b7d0e40..283e807c7874 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -193,6 +193,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
 static int submit_lookup_cmds(struct msm_gem_submit *submit,
 		struct drm_msm_gem_submit *args, struct drm_file *file)
 {
+	struct msm_context *ctx = file->driver_priv;
 	unsigned i;
 	size_t sz;
 	int ret = 0;
@@ -224,6 +225,20 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 			goto out;
 		}
 
+		if (msm_context_is_vmbind(ctx)) {
+			if (submit_cmd.nr_relocs) {
+				ret = SUBMIT_ERROR(EINVAL, submit, "nr_relocs must be zero");
+				goto out;
+			}
+
+			if (submit_cmd.submit_idx || submit_cmd.submit_offset) {
+				ret = SUBMIT_ERROR(EINVAL, submit, "submit_idx/offset must be zero");
+				goto out;
+			}
+
+			submit->cmd[i].iova = submit_cmd.iova;
+		}
+
 		submit->cmd[i].type = submit_cmd.type;
 		submit->cmd[i].size = submit_cmd.size / 4;
 		submit->cmd[i].offset = submit_cmd.submit_offset / 4;
@@ -532,6 +547,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev,
void *= data, struct msm_syncobj_post_dep *post_deps =3D NULL; struct drm_syncobj **syncobjs_to_reset =3D NULL; struct sync_file *sync_file =3D NULL; + unsigned cmds_to_parse; int out_fence_fd =3D -1; unsigned i; int ret; @@ -655,7 +671,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *= data, if (ret) goto out; =20 - for (i =3D 0; i < args->nr_cmds; i++) { + cmds_to_parse =3D msm_context_is_vmbind(ctx) ? 0 : args->nr_cmds; + + for (i =3D 0; i < cmds_to_parse; i++) { struct drm_gem_object *obj; uint64_t iova; =20 @@ -686,7 +704,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *= data, goto out; } =20 - submit->nr_cmds =3D i; + submit->nr_cmds =3D args->nr_cmds; =20 idr_preload(GFP_KERNEL); =20 diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_ge= m_vma.c index 76b79c122182..6ec92b7771f5 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -4,9 +4,16 @@ * Author: Rob Clark */ =20 +#include "drm/drm_file.h" +#include "drm/msm_drm.h" +#include "linux/file.h" +#include "linux/sync_file.h" + #include "msm_drv.h" #include "msm_gem.h" +#include "msm_gpu.h" #include "msm_mmu.h" +#include "msm_syncobj.h" =20 #define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##= __VA_ARGS__) =20 @@ -36,6 +43,97 @@ struct msm_vm_unmap_op { uint64_t range; }; =20 +/** + * struct msm_vma_op - A MAP or UNMAP operation + */ +struct msm_vm_op { + /** @op: The operation type */ + enum { + MSM_VM_OP_MAP =3D 1, + MSM_VM_OP_UNMAP, + } op; + union { + /** @map: Parameters used if op =3D=3D MSM_VMA_OP_MAP */ + struct msm_vm_map_op map; + /** @unmap: Parameters used if op =3D=3D MSM_VMA_OP_UNMAP */ + struct msm_vm_unmap_op unmap; + }; + /** @node: list head in msm_vm_bind_job::vm_ops */ + struct list_head node; + + /** + * @obj: backing object for pages to be mapped/unmapped + * + * Async unmap ops, in particular, must hold a reference to the + * original GEM object backing the mapping that will be unmapped. + * But the same can be required in the map path, for example if + * there is not a corresponding unmap op, such as process exit. + * + * This ensures that the pages backing the mapping are not freed + * before the mapping is torn down. + */ + struct drm_gem_object *obj; +}; + +/** + * struct msm_vm_bind_job - Tracking for a VM_BIND ioctl + * + * A table of userspace requested VM updates (MSM_VM_BIND_OP_UNMAP/MAP/MAP= _NULL) + * gets applied to the vm, generating a list of VM ops (MSM_VM_OP_MAP/UNMA= P) + * which are applied to the pgtables asynchronously. For example a usersp= ace + * requested MSM_VM_BIND_OP_MAP could end up generating both an MSM_VM_OP_= UNMAP + * to unmap an existing mapping, and a MSM_VM_OP_MAP to apply the new mapp= ing. + */ +struct msm_vm_bind_job { + /** @base: base class for drm_sched jobs */ + struct drm_sched_job base; + /** @vm: The VM being operated on */ + struct drm_gpuvm *vm; + /** @fence: The fence that is signaled when job completes */ + struct dma_fence *fence; + /** @queue: The queue that the job runs on */ + struct msm_gpu_submitqueue *queue; + /** @prealloc: Tracking for pre-allocated MMU pgtable pages */ + struct msm_mmu_prealloc prealloc; + /** @vm_ops: a list of struct msm_vm_op */ + struct list_head vm_ops; + /** @bos_pinned: are the GEM objects being bound pinned? 
*/ + bool bos_pinned; + /** @nr_ops: the number of userspace requested ops */ + unsigned int nr_ops; + /** + * @ops: the userspace requested ops + * + * The userspace requested ops are copied/parsed and validated + * before we start applying the updates to try to do as much up- + * front error checking as possible, to avoid the VM being in an + * undefined state due to partially executed VM_BIND. + * + * This table also serves to hold a reference to the backing GEM + * objects. + */ + struct msm_vm_bind_op { + uint32_t op; + uint32_t flags; + union { + struct drm_gem_object *obj; + uint32_t handle; + }; + uint64_t obj_offset; + uint64_t iova; + uint64_t range; + } ops[]; +}; + +#define job_foreach_bo(obj, _job) \ + for (unsigned i =3D 0; i < (_job)->nr_ops; i++) \ + if ((obj =3D (_job)->ops[i].obj)) + +static inline struct msm_vm_bind_job *to_msm_vm_bind_job(struct drm_sched_= job *job) +{ + return container_of(job, struct msm_vm_bind_job, base); +} + static void msm_gem_vm_free(struct drm_gpuvm *gpuvm) { @@ -52,6 +150,9 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) static void vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) { + if (!vm->managed) + lockdep_assert_held(&vm->mmu_lock); + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); =20 vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); @@ -60,6 +161,9 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_u= nmap_op *op) static int vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) { + if (!vm->managed) + lockdep_assert_held(&vm->mmu_lock); + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); =20 return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, @@ -69,17 +173,29 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_m= ap_op *op) /* Actually unmap memory for the vma */ void msm_gem_vma_unmap(struct drm_gpuva *vma) { + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); =20 /* Don't do anything if the memory isn't mapped */ if (!msm_vma->mapped) return; =20 - vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){ + /* + * The mmu_lock is only needed when preallocation is used. But + * in that case we don't need to worry about recursion into + * shrinker + */ + if (!vm->managed) + mutex_lock(&vm->mmu_lock); + + vm_unmap_op(vm, &(struct msm_vm_unmap_op){ .iova =3D vma->va.addr, .range =3D vma->va.range, }); =20 + if (!vm->managed) + mutex_unlock(&vm->mmu_lock); + msm_vma->mapped =3D false; } =20 @@ -87,6 +203,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma) int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { + struct msm_gem_vm *vm =3D to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); int ret; =20 @@ -98,6 +215,14 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct= sg_table *sgt) =20 msm_vma->mapped =3D true; =20 + /* + * The mmu_lock is only needed when preallocation is used. But + * in that case we don't need to worry about recursion into + * shrinker + */ + if (!vm->managed) + mutex_lock(&vm->mmu_lock); + /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -107,16 +232,19 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, stru= ct sg_table *sgt) * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret =3D vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){ + ret =3D vm_map_op(vm, &(struct msm_vm_map_op){ .iova =3D vma->va.addr, .range =3D vma->va.range, .offset =3D vma->gem.offset, .sgt =3D sgt, .prot =3D prot, }); - if (ret) { + + if (!vm->managed) + mutex_unlock(&vm->mmu_lock); + + if (ret) msm_vma->mapped =3D false; - } =20 return ret; } @@ -131,6 +259,9 @@ void msm_gem_vma_close(struct drm_gpuva *vma) =20 drm_gpuvm_resv_assert_held(&vm->base); =20 + if (vma->gem.obj) + msm_gem_assert_locked(vma->gem.obj); + if (vma->va.addr && vm->managed) drm_mm_remove_node(&msm_vma->node); =20 @@ -158,6 +289,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem= _object *obj, =20 if (vm->managed) { BUG_ON(offset !=3D 0); + BUG_ON(!obj); /* NULL mappings not valid for kernel managed VM */ ret =3D drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size, PAGE_SIZE, 0, range_start, range_end, 0); @@ -169,7 +301,8 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem= _object *obj, range_end =3D range_start + obj->size; } =20 - GEM_WARN_ON((range_end - range_start) > obj->size); + if (obj) + GEM_WARN_ON((range_end - range_start) > obj->size); =20 drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, off= set); vma->mapped =3D false; @@ -178,6 +311,9 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem= _object *obj, if (ret) goto err_free_range; =20 + if (!obj) + return &vma->base; + vm_bo =3D drm_gpuvm_bo_obtain(&vm->base, obj); if (IS_ERR(vm_bo)) { ret =3D PTR_ERR(vm_bo); @@ -200,11 +336,297 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_= gem_object *obj, return ERR_PTR(ret); } =20 +static int +msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec) +{ + struct drm_gem_object *obj =3D vm_bo->obj; + struct drm_gpuva *vma; + int ret; + + vm_dbg("validate: %p", obj); + + msm_gem_assert_locked(obj); + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + ret =3D msm_gem_pin_vma_locked(obj, vma); + if (ret) + return ret; + } + + return 0; +} + +struct op_arg { + unsigned flags; + struct msm_vm_bind_job *job; +}; + +static void +vm_op_enqueue(struct op_arg *arg, struct msm_vm_op _op) +{ + struct msm_vm_op *op =3D kmalloc(sizeof(*op), GFP_KERNEL); + *op =3D _op; + list_add_tail(&op->node, &arg->job->vm_ops); + + if (op->obj) + drm_gem_object_get(op->obj); +} + +static struct drm_gpuva * +vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op) +{ + return msm_gem_vma_new(arg->job->vm, op->gem.obj, op->gem.offset, + op->va.addr, op->va.addr + op->va.range); +} + +static int +msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) +{ + struct drm_gem_object *obj =3D op->map.gem.obj; + struct drm_gpuva *vma; + struct sg_table *sgt; + unsigned prot; + + vma =3D vma_from_op(arg, &op->map); + if (WARN_ON(IS_ERR(vma))) + return PTR_ERR(vma); + + vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, + vma->va.addr, vma->va.range); + + vma->flags =3D ((struct op_arg *)arg)->flags; + + if (obj) { + sgt =3D to_msm_bo(obj)->sgt; + prot =3D msm_gem_prot(obj); + } else { + sgt =3D NULL; + prot =3D IOMMU_READ | IOMMU_WRITE; + } + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op =3D MSM_VM_OP_MAP, + .map =3D { + .sgt =3D sgt, + .iova =3D vma->va.addr, + .range =3D vma->va.range, + .offset =3D vma->gem.offset, + .prot =3D prot, + }, + .obj =3D vma->gem.obj, + }); + + to_msm_vma(vma)->mapped =3D true; + + return 0; +} + +static int +msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) +{ + struct msm_vm_bind_job *job =3D ((struct op_arg 
*)arg)->job; + struct drm_gpuvm *vm =3D job->vm; + struct drm_gpuva *orig_vma =3D op->remap.unmap->va; + struct drm_gpuva *prev_vma =3D NULL, *next_vma =3D NULL; + struct drm_gpuvm_bo *vm_bo =3D orig_vma->vm_bo; + bool mapped =3D to_msm_vma(orig_vma)->mapped; + unsigned flags; + + vm_dbg("orig_vma: %p:%p:%p: %016llx %016llx", vm, orig_vma, + orig_vma->gem.obj, orig_vma->va.addr, orig_vma->va.range); + + if (mapped) { + uint64_t unmap_start, unmap_range; + + drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range= ); + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op =3D MSM_VM_OP_UNMAP, + .unmap =3D { + .iova =3D unmap_start, + .range =3D unmap_range, + }, + .obj =3D orig_vma->gem.obj, + }); + + /* + * Part of this GEM obj is still mapped, but we're going to kill the + * existing VMA and replace it with one or two new ones (ie. two if + * the unmapped range is in the middle of the existing (unmap) VMA). + * So just set the state to unmapped: + */ + to_msm_vma(orig_vma)->mapped =3D false; + } + + /* + * Hold a ref to the vm_bo between the msm_gem_vma_close() and the + * creation of the new prev/next vma's, in case the vm_bo is tracked + * in the VM's evict list: + */ + if (vm_bo) + drm_gpuvm_bo_get(vm_bo); + + /* + * The prev_vma and/or next_vma are replacing the unmapped vma, and + * therefore should preserve it's flags: + */ + flags =3D orig_vma->flags; + + msm_gem_vma_close(orig_vma); + + if (op->remap.prev) { + prev_vma =3D vma_from_op(arg, op->remap.prev); + if (WARN_ON(IS_ERR(prev_vma))) + return PTR_ERR(prev_vma); + + vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.ad= dr, prev_vma->va.range); + to_msm_vma(prev_vma)->mapped =3D mapped; + prev_vma->flags =3D flags; + } + + if (op->remap.next) { + next_vma =3D vma_from_op(arg, op->remap.next); + if (WARN_ON(IS_ERR(next_vma))) + return PTR_ERR(next_vma); + + vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.ad= dr, next_vma->va.range); + to_msm_vma(next_vma)->mapped =3D mapped; + next_vma->flags =3D flags; + } + + if (!mapped) + drm_gpuvm_bo_evict(vm_bo, true); + + /* Drop the previous ref: */ + drm_gpuvm_bo_put(vm_bo); + + return 0; +} + +static int +msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) +{ + struct drm_gpuva *vma =3D op->unmap.va; + struct msm_gem_vma *msm_vma =3D to_msm_vma(vma); + + vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, + vma->va.addr, vma->va.range); + + if (!msm_vma->mapped) + goto out_close; + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op =3D MSM_VM_OP_UNMAP, + .unmap =3D { + .iova =3D vma->va.addr, + .range =3D vma->va.range, + }, + .obj =3D vma->gem.obj, + }); + + msm_vma->mapped =3D false; + +out_close: + msm_gem_vma_close(vma); + + return 0; +} + static const struct drm_gpuvm_ops msm_gpuvm_ops =3D { .vm_free =3D msm_gem_vm_free, + .vm_bo_validate =3D msm_gem_vm_bo_validate, + .sm_step_map =3D msm_gem_vm_sm_step_map, + .sm_step_remap =3D msm_gem_vm_sm_step_remap, + .sm_step_unmap =3D msm_gem_vm_sm_step_unmap, }; =20 +static struct dma_fence * +msm_vma_job_run(struct drm_sched_job *_job) +{ + struct msm_vm_bind_job *job =3D to_msm_vm_bind_job(_job); + struct msm_gem_vm *vm =3D to_msm_vm(job->vm); + struct drm_gem_object *obj; + int ret =3D vm->unusable ? 
-EINVAL : 0; + + vm_dbg(""); + + mutex_lock(&vm->mmu_lock); + vm->mmu->prealloc =3D &job->prealloc; + + while (!list_empty(&job->vm_ops)) { + struct msm_vm_op *op =3D + list_first_entry(&job->vm_ops, struct msm_vm_op, node); + + switch (op->op) { + case MSM_VM_OP_MAP: + /* + * On error, stop trying to map new things.. but we + * still want to process the unmaps (or in particular, + * the drm_gem_object_put()s) + */ + if (!ret) + ret =3D vm_map_op(vm, &op->map); + break; + case MSM_VM_OP_UNMAP: + vm_unmap_op(vm, &op->unmap); + break; + } + drm_gem_object_put(op->obj); + list_del(&op->node); + kfree(op); + } + + vm->mmu->prealloc =3D NULL; + mutex_unlock(&vm->mmu_lock); + + /* + * We failed to perform at least _some_ of the pgtable updates, so + * now the VM is in an undefined state. Game over! + */ + if (ret) + vm->unusable =3D true; + + job_foreach_bo (obj, job) { + msm_gem_lock(obj); + msm_gem_unpin_locked(obj); + msm_gem_unlock(obj); + } + + /* VM_BIND ops are synchronous, so no fence to wait on: */ + return NULL; +} + +static void +msm_vma_job_free(struct drm_sched_job *_job) +{ + struct msm_vm_bind_job *job =3D to_msm_vm_bind_job(_job); + struct msm_gem_vm *vm =3D to_msm_vm(job->vm); + struct drm_gem_object *obj; + + vm->mmu->funcs->prealloc_cleanup(vm->mmu, &job->prealloc); + + drm_sched_job_cleanup(_job); + + job_foreach_bo (obj, job) + drm_gem_object_put(obj); + + msm_submitqueue_put(job->queue); + dma_fence_put(job->fence); + + /* In error paths, we could have unexecuted ops: */ + while (!list_empty(&job->vm_ops)) { + struct msm_vm_op *op =3D + list_first_entry(&job->vm_ops, struct msm_vm_op, node); + list_del(&op->node); + kfree(op); + } + + kfree(job); +} + static const struct drm_sched_backend_ops msm_vm_bind_ops =3D { + .run_job =3D msm_vma_job_run, + .free_job =3D msm_vma_job_free }; =20 /** @@ -268,6 +690,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mm= u *mmu, const char *name, drm_gem_object_put(dummy_gem); =20 vm->mmu =3D mmu; + mutex_init(&vm->mmu_lock); vm->managed =3D managed; =20 drm_mm_init(&vm->mm, va_start, va_size); @@ -280,7 +703,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mm= u *mmu, const char *name, err_free_vm: kfree(vm); return ERR_PTR(ret); - } =20 /** @@ -296,6 +718,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) { struct msm_gem_vm *vm =3D to_msm_vm(gpuvm); struct drm_gpuva *vma, *tmp; + struct drm_exec exec; =20 /* * For kernel managed VMs, the VMAs are torn down when the handle is @@ -312,22 +735,655 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) drm_sched_fini(&vm->sched); =20 /* Tear down any remaining mappings: */ - dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL); - drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { - struct drm_gem_object *obj =3D vma->gem.obj; + drm_exec_init(&exec, 0, 2); + drm_exec_until_all_locked (&exec) { + drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(gpuvm)); + drm_exec_retry_on_contention(&exec); =20 - if (obj && obj->resv !=3D drm_gpuvm_resv(gpuvm)) { - drm_gem_object_get(obj); - msm_gem_lock(obj); + drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { + struct drm_gem_object *obj =3D vma->gem.obj; + + /* + * MSM_BO_NO_SHARE objects share the same resv as the + * VM, in which case the obj is already locked: + */ + if (obj && (obj->resv =3D=3D drm_gpuvm_resv(gpuvm))) + obj =3D NULL; + + if (obj) { + drm_exec_lock_obj(&exec, obj); + drm_exec_retry_on_contention(&exec); + } + + msm_gem_vma_unmap(vma); + msm_gem_vma_close(vma); + + if (obj) { + drm_exec_unlock_obj(&exec, obj); + } } + } + drm_exec_fini(&exec); +} + + 
+static struct msm_vm_bind_job * +vm_bind_job_create(struct drm_device *dev, struct msm_gpu *gpu, + struct msm_gpu_submitqueue *queue, uint32_t nr_ops) +{ + struct msm_vm_bind_job *job; + uint64_t sz; + int ret; + + sz =3D struct_size(job, ops, nr_ops); + + if (sz > SIZE_MAX) + return ERR_PTR(-ENOMEM); + + job =3D kzalloc(sz, GFP_KERNEL | __GFP_NOWARN); + if (!job) + return ERR_PTR(-ENOMEM); + + ret =3D drm_sched_job_init(&job->base, queue->entity, 1, queue); + if (ret) { + kfree(job); + return ERR_PTR(ret); + } =20 - msm_gem_vma_unmap(vma); - msm_gem_vma_close(vma); + job->vm =3D msm_context_vm(dev, queue->ctx); + job->queue =3D queue; + INIT_LIST_HEAD(&job->vm_ops); =20 - if (obj && obj->resv !=3D drm_gpuvm_resv(gpuvm)) { - msm_gem_unlock(obj); - drm_gem_object_put(obj); + return job; +} + +static bool invalid_alignment(uint64_t addr) +{ + /* + * Technically this is about GPU alignment, not CPU alignment. But + * I've not seen any qcom SoC where the SMMU does not support the + * CPU's smallest page size. + */ + return !PAGE_ALIGNED(addr); +} + +static int +lookup_op(struct msm_vm_bind_job *job, const struct drm_msm_vm_bind_op *op) +{ + struct drm_device *dev =3D job->vm->drm; + int i =3D job->nr_ops++; + int ret =3D 0; + + job->ops[i].op =3D op->op; + job->ops[i].handle =3D op->handle; + job->ops[i].obj_offset =3D op->obj_offset; + job->ops[i].iova =3D op->iova; + job->ops[i].range =3D op->range; + job->ops[i].flags =3D op->flags; + + if (op->flags & ~MSM_VM_BIND_OP_FLAGS) + ret =3D UERR(EINVAL, dev, "invalid flags: %x\n", op->flags); + + if (invalid_alignment(op->iova)) + ret =3D UERR(EINVAL, dev, "invalid address: %016llx\n", op->iova); + + if (invalid_alignment(op->obj_offset)) + ret =3D UERR(EINVAL, dev, "invalid bo_offset: %016llx\n", op->obj_offset= ); + + if (invalid_alignment(op->range)) + ret =3D UERR(EINVAL, dev, "invalid range: %016llx\n", op->range); + + if (!drm_gpuvm_range_valid(job->vm, op->iova, op->range)) + ret =3D UERR(EINVAL, dev, "invalid range: %016llx, %016llx\n", op->iova,= op->range); + + /* + * MAP must specify a valid handle. But the handle MBZ for + * UNMAP or MAP_NULL. 
+ */ + if (op->op =3D=3D MSM_VM_BIND_OP_MAP) { + if (!op->handle) + ret =3D UERR(EINVAL, dev, "invalid handle\n"); + } else if (op->handle) { + ret =3D UERR(EINVAL, dev, "handle must be zero\n"); + } + + switch (op->op) { + case MSM_VM_BIND_OP_MAP: + case MSM_VM_BIND_OP_MAP_NULL: + case MSM_VM_BIND_OP_UNMAP: + break; + default: + ret =3D UERR(EINVAL, dev, "invalid op: %u\n", op->op); + break; + } + + return ret; +} + +/* + * ioctl parsing, parameter validation, and GEM handle lookup + */ +static int +vm_bind_job_lookup_ops(struct msm_vm_bind_job *job, struct drm_msm_vm_bind= *args, + struct drm_file *file, int *nr_bos) +{ + struct drm_device *dev =3D job->vm->drm; + int ret =3D 0; + int cnt =3D 0; + + if (args->nr_ops =3D=3D 1) { + /* Single op case, the op is inlined: */ + ret =3D lookup_op(job, &args->op); + } else { + for (unsigned i =3D 0; i < args->nr_ops; i++) { + struct drm_msm_vm_bind_op op; + void __user *userptr =3D + u64_to_user_ptr(args->ops + (i * sizeof(op))); + + /* make sure we don't have garbage flags, in case we hit + * error path before flags is initialized: + */ + job->ops[i].flags =3D 0; + + if (copy_from_user(&op, userptr, sizeof(op))) { + ret =3D -EFAULT; + break; + } + + ret =3D lookup_op(job, &op); + if (ret) + break; + } + } + + if (ret) { + job->nr_ops =3D 0; + goto out; + } + + spin_lock(&file->table_lock); + + for (unsigned i =3D 0; i < args->nr_ops; i++) { + struct drm_gem_object *obj; + + if (!job->ops[i].handle) { + job->ops[i].obj =3D NULL; + continue; + } + + /* + * normally use drm_gem_object_lookup(), but for bulk lookup + * all under single table_lock just hit object_idr directly: + */ + obj =3D idr_find(&file->object_idr, job->ops[i].handle); + if (!obj) { + ret =3D UERR(EINVAL, dev, "invalid handle %u at index %u\n", job->ops[i= ].handle, i); + goto out_unlock; + } + + drm_gem_object_get(obj); + + job->ops[i].obj =3D obj; + cnt++; + } + + *nr_bos =3D cnt; + +out_unlock: + spin_unlock(&file->table_lock); + +out: + return ret; +} + +static void +prealloc_count(struct msm_vm_bind_job *job, + struct msm_vm_bind_op *first, + struct msm_vm_bind_op *last) +{ + struct msm_mmu *mmu =3D to_msm_vm(job->vm)->mmu; + + if (!first) + return; + + uint64_t start_iova =3D first->iova; + uint64_t end_iova =3D last->iova + last->range; + + mmu->funcs->prealloc_count(mmu, &job->prealloc, start_iova, end_iova - st= art_iova); +} + +static bool +ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next) +{ + /* + * Last level pte covers 2MB.. so we should merge two ops, from + * the PoV of figuring out how much pgtable pages to pre-allocate + * if they land in the same 2MB range: + */ + uint64_t pte_mask =3D ~(SZ_2M - 1); + return ((first->iova + first->range) & pte_mask) =3D=3D (next->iova & pte= _mask); +} + +/* + * Determine the amount of memory to prealloc for pgtables. For sparse im= ages, + * in particular, userspace plays some tricks with the order of page mappi= ngs + * to get the desired swizzle pattern, resulting in a large # of tiny MAP = ops. + * So detect when multiple MAP operations are physically contiguous, and c= ount + * them as a single mapping. Otherwise the prealloc_count() will not real= ize + * they can share pagetable pages and vastly overcount. 
+ */ +static void +vm_bind_prealloc_count(struct msm_vm_bind_job *job) +{ + struct msm_vm_bind_op *first =3D NULL, *last =3D NULL; + + for (int i =3D 0; i < job->nr_ops; i++) { + struct msm_vm_bind_op *op =3D &job->ops[i]; + + /* We only care about MAP/MAP_NULL: */ + if (op->op =3D=3D MSM_VM_BIND_OP_UNMAP) + continue; + + /* + * If op is contiguous with last in the current range, then + * it becomes the new last in the range and we continue + * looping: + */ + if (last && ops_are_same_pte(last, op)) { + last =3D op; + continue; + } + + /* + * If op is not contiguous with the current range, flush + * the current range and start anew: + */ + prealloc_count(job, first, last); + first =3D last =3D op; + } + + /* Flush the remaining range: */ + prealloc_count(job, first, last); +} + +/* + * Lock VM and GEM objects + */ +static int +vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exe= c) +{ + int ret; + + /* Lock VM and objects: */ + drm_exec_until_all_locked (exec) { + ret =3D drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(job->vm)); + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + + for (unsigned i =3D 0; i < job->nr_ops; i++) { + const struct msm_vm_bind_op *op =3D &job->ops[i]; + + switch (op->op) { + case MSM_VM_BIND_OP_UNMAP: + ret =3D drm_gpuvm_sm_unmap_exec_lock(job->vm, exec, + op->iova, + op->obj_offset); + break; + case MSM_VM_BIND_OP_MAP: + case MSM_VM_BIND_OP_MAP_NULL: + ret =3D drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, + op->iova, op->range, + op->obj, op->obj_offset); + break; + default: + /* + * lookup_op() should have already thrown an error for + * invalid ops + */ + WARN_ON("unreachable"); + } + + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + } + } + + return 0; +} + +/* + * Pin GEM objects, ensuring that we have backing pages. Pinning will move + * the object to the pinned LRU so that the shrinker knows to first consid= er + * other objects for evicting. + */ +static int +vm_bind_job_pin_objects(struct msm_vm_bind_job *job) +{ + struct drm_gem_object *obj; + + /* + * First loop, before holding the LRU lock, avoids holding the + * LRU lock while calling msm_gem_pin_vma_locked (which could + * trigger get_pages()) + */ + job_foreach_bo (obj, job) { + struct page **pages; + + pages =3D msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); + if (IS_ERR(pages)) + return PTR_ERR(pages); + } + + struct msm_drm_private *priv =3D job->vm->drm->dev_private; + + /* + * A second loop while holding the LRU lock (a) avoids acquiring/dropping + * the LRU lock for each individual bo, while (b) avoiding holding the + * LRU lock while calling msm_gem_pin_vma_locked() (which could trigger + * get_pages() which could trigger reclaim.. and if we held the LRU lock + * could trigger deadlock with the shrinker). + */ + mutex_lock(&priv->lru.lock); + job_foreach_bo (obj, job) + msm_gem_pin_obj_locked(obj); + mutex_unlock(&priv->lru.lock); + + job->bos_pinned =3D true; + + return 0; +} + +/* + * Unpin GEM objects. Normally this is done after the bind job is run. + */ +static void +vm_bind_job_unpin_objects(struct msm_vm_bind_job *job) +{ + struct drm_gem_object *obj; + + if (!job->bos_pinned) + return; + + job_foreach_bo (obj, job) + msm_gem_unpin_locked(obj); + + job->bos_pinned =3D false; +} + +/* + * Pre-allocate pgtable memory, and translate the VM bind requests into a + * sequence of pgtable updates to be applied asynchronously. 
+ */ +static int +vm_bind_job_prepare(struct msm_vm_bind_job *job) +{ + struct msm_gem_vm *vm =3D to_msm_vm(job->vm); + struct msm_mmu *mmu =3D vm->mmu; + int ret; + + ret =3D mmu->funcs->prealloc_allocate(mmu, &job->prealloc); + if (ret) + return ret; + + for (unsigned i =3D 0; i < job->nr_ops; i++) { + const struct msm_vm_bind_op *op =3D &job->ops[i]; + struct op_arg arg =3D { + .job =3D job, + }; + + switch (op->op) { + case MSM_VM_BIND_OP_UNMAP: + ret =3D drm_gpuvm_sm_unmap(job->vm, &arg, op->iova, + op->range); + break; + case MSM_VM_BIND_OP_MAP: + if (op->flags & MSM_VM_BIND_OP_DUMP) + arg.flags |=3D MSM_VMA_DUMP; + fallthrough; + case MSM_VM_BIND_OP_MAP_NULL: + ret =3D drm_gpuvm_sm_map(job->vm, &arg, op->iova, + op->range, op->obj, op->obj_offset); + break; + default: + /* + * lookup_op() should have already thrown an error for + * invalid ops + */ + BUG_ON("unreachable"); + } + + if (ret) { + /* + * If we've already started modifying the vm, we can't + * adequetly describe to userspace the intermediate + * state the vm is in. So throw up our hands! + */ + if (i > 0) + vm->unusable =3D true; + return ret; + } + } + + return 0; +} + +/* + * Attach fences to the GEM objects being bound. This will signify to + * the shrinker that they are busy even after dropping the locks (ie. + * drm_exec_fini()) + */ +static void +vm_bind_job_attach_fences(struct msm_vm_bind_job *job) +{ + for (unsigned i =3D 0; i < job->nr_ops; i++) { + struct drm_gem_object *obj =3D job->ops[i].obj; + + if (!obj) + continue; + + dma_resv_add_fence(obj->resv, job->fence, + DMA_RESV_USAGE_KERNEL); + } +} + +int +msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *fil= e) +{ + struct msm_drm_private *priv =3D dev->dev_private; + struct drm_msm_vm_bind *args =3D data; + struct msm_context *ctx =3D file->driver_priv; + struct msm_vm_bind_job *job =3D NULL; + struct msm_gpu *gpu =3D priv->gpu; + struct msm_gpu_submitqueue *queue; + struct msm_syncobj_post_dep *post_deps =3D NULL; + struct drm_syncobj **syncobjs_to_reset =3D NULL; + struct sync_file *sync_file =3D NULL; + struct dma_fence *fence; + int out_fence_fd =3D -1; + int ret, nr_bos =3D 0; + unsigned i; + + if (!gpu) + return -ENXIO; + + /* + * Maybe we could allow just UNMAP ops? OTOH userspace should just + * immediately close the device file and all will be torn down. + */ + if (to_msm_vm(ctx->vm)->unusable) + return UERR(EPIPE, dev, "context is unusable"); + + /* + * Technically, you cannot create a VM_BIND submitqueue in the first + * place, if you haven't opted in to VM_BIND context. But it is + * cleaner / less confusing, to check this case directly. 
+ */ + if (!msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "context does not support vmbind"); + + if (args->flags & ~MSM_VM_BIND_FLAGS) + return UERR(EINVAL, dev, "invalid flags"); + + queue =3D msm_submitqueue_get(ctx, args->queue_id); + if (!queue) + return -ENOENT; + + if (!(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) { + ret =3D UERR(EINVAL, dev, "Invalid queue type"); + goto out_post_unlock; + } + + if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { + out_fence_fd =3D get_unused_fd_flags(O_CLOEXEC); + if (out_fence_fd < 0) { + ret =3D out_fence_fd; + goto out_post_unlock; } } - dma_resv_unlock(drm_gpuvm_resv(gpuvm)); + + job =3D vm_bind_job_create(dev, gpu, queue, args->nr_ops); + if (IS_ERR(job)) { + ret =3D PTR_ERR(job); + goto out_post_unlock; + } + + ret =3D mutex_lock_interruptible(&queue->lock); + if (ret) + goto out_post_unlock; + + if (args->flags & MSM_VM_BIND_FENCE_FD_IN) { + struct dma_fence *in_fence; + + in_fence =3D sync_file_get_fence(args->fence_fd); + + if (!in_fence) { + ret =3D UERR(EINVAL, dev, "invalid in-fence"); + goto out_unlock; + } + + ret =3D drm_sched_job_add_dependency(&job->base, in_fence); + if (ret) + goto out_unlock; + } + + if (args->in_syncobjs > 0) { + syncobjs_to_reset =3D msm_syncobj_parse_deps(dev, &job->base, + file, args->in_syncobjs, + args->nr_in_syncobjs, + args->syncobj_stride); + if (IS_ERR(syncobjs_to_reset)) { + ret =3D PTR_ERR(syncobjs_to_reset); + goto out_unlock; + } + } + + if (args->out_syncobjs > 0) { + post_deps =3D msm_syncobj_parse_post_deps(dev, file, + args->out_syncobjs, + args->nr_out_syncobjs, + args->syncobj_stride); + if (IS_ERR(post_deps)) { + ret =3D PTR_ERR(post_deps); + goto out_unlock; + } + } + + ret =3D vm_bind_job_lookup_ops(job, args, file, &nr_bos); + if (ret) + goto out_unlock; + + vm_bind_prealloc_count(job); + + struct drm_exec exec; + unsigned flags =3D DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WA= IT; + drm_exec_init(&exec, flags, nr_bos + 1); + + ret =3D vm_bind_job_lock_objects(job, &exec); + if (ret) + goto out; + + ret =3D vm_bind_job_pin_objects(job); + if (ret) + goto out; + + ret =3D vm_bind_job_prepare(job); + if (ret) + goto out; + + drm_sched_job_arm(&job->base); + + job->fence =3D dma_fence_get(&job->base.s_fence->finished); + + if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { + sync_file =3D sync_file_create(job->fence); + if (!sync_file) { + ret =3D -ENOMEM; + } else { + fd_install(out_fence_fd, sync_file->file); + args->fence_fd =3D out_fence_fd; + } + } + + if (ret) + goto out; + + vm_bind_job_attach_fences(job); + + /* + * The job can be free'd (and fence unref'd) at any point after + * drm_sched_entity_push_job(), so we need to hold our own ref + */ + fence =3D dma_fence_get(job->fence); + + drm_sched_entity_push_job(&job->base); + + msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs); + msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, fence); + + dma_fence_put(fence); + +out: + if (ret) + vm_bind_job_unpin_objects(job); + + drm_exec_fini(&exec); +out_unlock: + mutex_unlock(&queue->lock); +out_post_unlock: + if (ret && (out_fence_fd >=3D 0)) { + put_unused_fd(out_fence_fd); + if (sync_file) + fput(sync_file->file); + } + + if (!IS_ERR_OR_NULL(job)) { + if (ret) + msm_vma_job_free(&job->base); + } else { + /* + * If the submit hasn't yet taken ownership of the queue + * then we need to drop the reference ourself: + */ + msm_submitqueue_put(queue); + } + + if (!IS_ERR_OR_NULL(post_deps)) { + for (i =3D 0; i < args->nr_out_syncobjs; ++i) { + 
kfree(post_deps[i].chain); + drm_syncobj_put(post_deps[i].syncobj); + } + kfree(post_deps); + } + + if (!IS_ERR_OR_NULL(syncobjs_to_reset)) { + for (i =3D 0; i < args->nr_in_syncobjs; ++i) { + if (syncobjs_to_reset[i]) + drm_syncobj_put(syncobjs_to_reset[i]); + } + kfree(syncobjs_to_reset); + } + + return ret; } diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 6d6cd1219926..5c67294edc95 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -272,7 +272,10 @@ struct drm_msm_gem_submit_cmd { __u32 size; /* in, cmdstream size */ __u32 pad; __u32 nr_relocs; /* in, number of submit_reloc's */ - __u64 relocs; /* in, ptr to array of submit_reloc's */ + union { + __u64 relocs; /* in, ptr to array of submit_reloc's */ + __u64 iova; /* cmdstream address (for VM_BIND contexts) */ + }; }; =20 /* Each buffer referenced elsewhere in the cmdstream submit (ie. the @@ -339,7 +342,74 @@ struct drm_msm_gem_submit { __u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */ __u32 syncobj_stride; /* in, stride of syncobj arrays. */ __u32 pad; /*in, reserved for future use, always 0. */ +}; + +#define MSM_VM_BIND_OP_UNMAP 0 +#define MSM_VM_BIND_OP_MAP 1 +#define MSM_VM_BIND_OP_MAP_NULL 2 + +#define MSM_VM_BIND_OP_DUMP 1 +#define MSM_VM_BIND_OP_FLAGS ( \ + MSM_VM_BIND_OP_DUMP | \ + 0) =20 +/** + * struct drm_msm_vm_bind_op - bind/unbind op to run + */ +struct drm_msm_vm_bind_op { + /** @op: one of MSM_VM_BIND_OP_x */ + __u32 op; + /** @handle: GEM object handle, MBZ for UNMAP or MAP_NULL */ + __u32 handle; + /** @obj_offset: Offset into GEM object, MBZ for UNMAP or MAP_NULL */ + __u64 obj_offset; + /** @iova: Address to operate on */ + __u64 iova; + /** @range: Number of bites to to map/unmap */ + __u64 range; + /** @flags: Bitmask of MSM_VM_BIND_OP_FLAG_x */ + __u32 flags; + /** @pad: MBZ */ + __u32 pad; +}; + +#define MSM_VM_BIND_FENCE_FD_IN 0x00000001 +#define MSM_VM_BIND_FENCE_FD_OUT 0x00000002 +#define MSM_VM_BIND_FLAGS ( \ + MSM_VM_BIND_FENCE_FD_IN | \ + MSM_VM_BIND_FENCE_FD_OUT | \ + 0) + +/** + * struct drm_msm_vm_bind - Input of &DRM_IOCTL_MSM_VM_BIND + */ +struct drm_msm_vm_bind { + /** @flags: in, bitmask of MSM_VM_BIND_x */ + __u32 flags; + /** @nr_ops: the number of bind ops in this ioctl */ + __u32 nr_ops; + /** @fence_fd: in/out fence fd (see MSM_VM_BIND_FENCE_FD_IN/OUT) */ + __s32 fence_fd; + /** @queue_id: in, submitqueue id */ + __u32 queue_id; + /** @in_syncobjs: in, ptr to array of drm_msm_gem_syncobj */ + __u64 in_syncobjs; + /** @out_syncobjs: in, ptr to array of drm_msm_gem_syncobj */ + __u64 out_syncobjs; + /** @nr_in_syncobjs: in, number of entries in in_syncobj */ + __u32 nr_in_syncobjs; + /** @nr_out_syncobjs: in, number of entries in out_syncobj */ + __u32 nr_out_syncobjs; + /** @syncobj_stride: in, stride of syncobj arrays */ + __u32 syncobj_stride; + /** @op_stride: sizeof each struct drm_msm_vm_bind_op in @ops */ + __u32 op_stride; + union { + /** @op: used if num_ops =3D=3D 1 */ + struct drm_msm_vm_bind_op op; + /** @ops: userptr to array of drm_msm_vm_bind_op if num_ops > 1 */ + __u64 ops; + }; }; =20 #define MSM_WAIT_FENCE_BOOST 0x00000001 @@ -435,6 +505,7 @@ struct drm_msm_submitqueue_query { #define DRM_MSM_SUBMITQUEUE_NEW 0x0A #define DRM_MSM_SUBMITQUEUE_CLOSE 0x0B #define DRM_MSM_SUBMITQUEUE_QUERY 0x0C +#define DRM_MSM_VM_BIND 0x0D =20 #define DRM_IOCTL_MSM_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM= _GET_PARAM, struct drm_msm_param) #define DRM_IOCTL_MSM_SET_PARAM DRM_IOW (DRM_COMMAND_BASE + DRM_MSM= _SET_PARAM, 
struct drm_msm_param) @@ -448,6 +519,7 @@ struct drm_msm_submitqueue_query { #define DRM_IOCTL_MSM_SUBMITQUEUE_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_M= SM_SUBMITQUEUE_NEW, struct drm_msm_submitqueue) #define DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE DRM_IOW (DRM_COMMAND_BASE + DRM_M= SM_SUBMITQUEUE_CLOSE, __u32) #define DRM_IOCTL_MSM_SUBMITQUEUE_QUERY DRM_IOW (DRM_COMMAND_BASE + DRM_M= SM_SUBMITQUEUE_QUERY, struct drm_msm_submitqueue_query) +#define DRM_IOCTL_MSM_VM_BIND DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM= _VM_BIND, struct drm_msm_vm_bind) =20 #if defined(__cplusplus) } --=20 2.50.0
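
For context, a minimal userspace sketch of how the new ioctl might be
driven.  It is not part of the patch: it assumes libdrm's drmIoctl(), the
structures added to msm_drm.h above (DRM_IOCTL_MSM_VM_BIND,
MSM_VM_BIND_OP_MAP, the MSM_SUBMITQUEUE_VM_BIND queue flag), and the
MSM_PARAM_EN_VM_BIND opt-in referenced in the commit message.  The helper
names are hypothetical and error handling is omitted.

/*
 * Sketch only -- not part of this patch.  Assumes libdrm (drmIoctl()),
 * the uapi added above, and MSM_PARAM_EN_VM_BIND from earlier in the
 * series.  Helper names are hypothetical.
 */
#include <stdint.h>
#include <xf86drm.h>
#include "msm_drm.h"

/* Opt the drm_file in to a userspace-managed VM so that VM_BIND is allowed: */
static int enable_vm_bind(int fd)
{
	struct drm_msm_param req = {
		.pipe  = MSM_PIPE_3D0,
		.param = MSM_PARAM_EN_VM_BIND,
		.value = 1,
	};

	return drmIoctl(fd, DRM_IOCTL_MSM_SET_PARAM, &req);
}

/* Queue a single MAP op on a submitqueue created with MSM_SUBMITQUEUE_VM_BIND: */
static int vm_bind_map(int fd, uint32_t queue_id, uint32_t bo_handle,
		       uint64_t iova, uint64_t range)
{
	struct drm_msm_vm_bind req = {
		.flags    = 0,           /* no MSM_VM_BIND_FENCE_FD_IN/OUT */
		.nr_ops   = 1,           /* a single op is inlined in .op */
		.fence_fd = -1,          /* unused without the fence flags */
		.queue_id = queue_id,
		.op = {
			.op     = MSM_VM_BIND_OP_MAP,
			.handle = bo_handle,
			.iova   = iova,
			.range  = range,
		},
	};

	return drmIoctl(fd, DRM_IOCTL_MSM_VM_BIND, &req);
}

An MSM_VM_BIND_OP_UNMAP op takes the same form with handle left zero (the
ioctl rejects a non-zero handle for UNMAP/MAP_NULL), and setting
MSM_VM_BIND_FENCE_FD_OUT in flags returns a sync_file fd in fence_fd so
completion of the asynchronous pgtable updates can be waited on.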