From nobody Mon Dec 1 22:04:06 2025
Date: Fri, 28 Nov 2025 14:14:15 +0000
In-Reply-To: <20251128-gpuvm-rust-v1-0-ebf66bf234e0@google.com>
References: <20251128-gpuvm-rust-v1-0-ebf66bf234e0@google.com>
Message-ID: <20251128-gpuvm-rust-v1-1-ebf66bf234e0@google.com>
Subject: [PATCH 1/4] drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc()
From: Alice Ryhl
To: Danilo Krummrich, Daniel Almeida
Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst, Maxime Ripard,
 Thomas Zimmermann, David Airlie, Simona Vetter, Boris Brezillon,
 Steven Price, Liviu Dudau, Miguel Ojeda, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Trevor Gross,
 Frank Binns, Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
 Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
 Rodrigo Vivi, Sumit Semwal, Christian König,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 freedreno@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 intel-xe@lists.freedesktop.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org, Alice Ryhl
Content-Type: text/plain; charset="utf-8"

When calling drm_gpuvm_bo_obtain_prealloc() in immediate mode, we may
end up calling ops->vm_bo_free(vm_bo) while holding the GEM's gpuva
mutex. This is a problem if ops->vm_bo_free(vm_bo) performs any
operations that are not safe in the fence signalling critical path, and
it turns out that Panthor (the only current user of the method) calls
drm_gem_shmem_unpin(), which takes a resv lock internally. This
constitutes both a violation of signalling safety and a lock inversion.

To fix this, modify the method to take the GEM's gpuva mutex
internally, so that the mutex can be unlocked before freeing the
preallocated vm_bo.
Note that this modification introduces a requirement that a driver must
use immediate mode to call drm_gpuvm_bo_obtain_prealloc(), as the
function would otherwise take the wrong lock.

Signed-off-by: Alice Ryhl
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/drm_gpuvm.c           | 58 +++++++++++++++++++++++++++++++++++++++++---------------
 drivers/gpu/drm/panthor/panthor_mmu.c | 10 ------
 2 files changed, 37 insertions(+), 31 deletions(-)
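For reference, the resulting driver-side calling pattern looks roughly
like this (a minimal sketch, not part of the patch; my_vm and my_obj
are placeholder names):

	struct drm_gpuvm_bo *prealloc, *vm_bo;

	/* Preallocation may sleep, so do it outside the signalling path. */
	prealloc = drm_gpuvm_bo_create(my_vm, my_obj);
	if (!prealloc)
		return -ENOMEM;

	/*
	 * The caller no longer holds any lock here: the GEM gpuva mutex is
	 * taken and released internally, including around freeing a
	 * redundant preallocation.
	 */
	vm_bo = drm_gpuvm_bo_obtain_prealloc(prealloc);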
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 936e6c1a60c16ed5a6898546bf99e23a74f6b58b..f08a5cc1d611f971862c1272987e5ecd6d97c163 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1601,14 +1601,37 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create);
 
+static void
+drm_gpuvm_bo_destroy_not_in_lists(struct drm_gpuvm_bo *vm_bo)
+{
+	struct drm_gpuvm *gpuvm = vm_bo->vm;
+	const struct drm_gpuvm_ops *ops = gpuvm->ops;
+	struct drm_gem_object *obj = vm_bo->obj;
+
+	if (ops && ops->vm_bo_free)
+		ops->vm_bo_free(vm_bo);
+	else
+		kfree(vm_bo);
+
+	drm_gpuvm_put(gpuvm);
+	drm_gem_object_put(obj);
+}
+
+static void
+drm_gpuvm_bo_destroy_not_in_lists_kref(struct kref *kref)
+{
+	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+						  kref);
+
+	drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
+}
+
 static void
 drm_gpuvm_bo_destroy(struct kref *kref)
 {
 	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
 						  kref);
 	struct drm_gpuvm *gpuvm = vm_bo->vm;
-	const struct drm_gpuvm_ops *ops = gpuvm->ops;
-	struct drm_gem_object *obj = vm_bo->obj;
 	bool lock = !drm_gpuvm_resv_protected(gpuvm);
 
 	if (!lock)
@@ -1617,16 +1640,10 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
 
-	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
+	drm_gem_gpuva_assert_lock_held(gpuvm, vm_bo->obj);
 	list_del(&vm_bo->list.entry.gem);
 
-	if (ops && ops->vm_bo_free)
-		ops->vm_bo_free(vm_bo);
-	else
-		kfree(vm_bo);
-
-	drm_gpuvm_put(gpuvm);
-	drm_gem_object_put(obj);
+	drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
 }
 
 /**
@@ -1744,9 +1761,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put_deferred);
 void
 drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
 {
-	const struct drm_gpuvm_ops *ops = gpuvm->ops;
 	struct drm_gpuvm_bo *vm_bo;
-	struct drm_gem_object *obj;
 	struct llist_node *bo_defer;
 
 	bo_defer = llist_del_all(&gpuvm->bo_defer);
@@ -1765,14 +1780,7 @@ drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
 	while (bo_defer) {
 		vm_bo = llist_entry(bo_defer, struct drm_gpuvm_bo, list.entry.bo_defer);
 		bo_defer = bo_defer->next;
-		obj = vm_bo->obj;
-		if (ops && ops->vm_bo_free)
-			ops->vm_bo_free(vm_bo);
-		else
-			kfree(vm_bo);
-
-		drm_gpuvm_put(gpuvm);
-		drm_gem_object_put(obj);
+		drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
 	}
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_deferred_cleanup);
@@ -1860,6 +1868,9 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
  * count is decreased. If not found @__vm_bo is returned without further
  * increase of the reference count.
  *
+ * The provided @__vm_bo must not already be in the gpuva, evict, or extobj
+ * lists prior to calling this method.
+ *
  * A new &drm_gpuvm_bo is added to the GEMs gpuva list.
  *
  * Returns: a pointer to the found &drm_gpuvm_bo or @__vm_bo if no existing
@@ -1872,14 +1883,19 @@ drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *__vm_bo)
 	struct drm_gem_object *obj = __vm_bo->obj;
 	struct drm_gpuvm_bo *vm_bo;
 
+	drm_WARN_ON(gpuvm->drm, !drm_gpuvm_immediate_mode(gpuvm));
+
+	mutex_lock(&obj->gpuva.lock);
 	vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
 	if (vm_bo) {
-		drm_gpuvm_bo_put(__vm_bo);
+		mutex_unlock(&obj->gpuva.lock);
+		kref_put(&__vm_bo->kref, drm_gpuvm_bo_destroy_not_in_lists_kref);
 		return vm_bo;
 	}
 
 	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
 	list_add_tail(&__vm_bo->list.entry.gem, &obj->gpuva.list);
+	mutex_unlock(&obj->gpuva.lock);
 
 	return __vm_bo;
 }
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 9f5f4ddf291024121f3fd5644f2fdeba354fa67c..be8811a70e1a3adec87ca4a85cad7c838f54bebf 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1224,17 +1224,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 		goto err_cleanup;
 	}
 
-	/* drm_gpuvm_bo_obtain_prealloc() will call drm_gpuvm_bo_put() on our
-	 * pre-allocated BO if the <vm,bo> association exists. Given we
-	 * only have one ref on preallocated_vm_bo, drm_gpuvm_bo_destroy() will
-	 * be called immediately, and we have to hold the VM resv lock when
-	 * calling this function.
-	 */
-	dma_resv_lock(panthor_vm_resv(vm), NULL);
-	mutex_lock(&bo->base.base.gpuva.lock);
 	op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
-	mutex_unlock(&bo->base.base.gpuva.lock);
-	dma_resv_unlock(panthor_vm_resv(vm));
 
 	op_ctx->map.bo_offset = offset;
 
-- 
2.52.0.487.g5c8c507ade-goog
From nobody Mon Dec 1 22:04:06 2025
Date: Fri, 28 Nov 2025 14:14:16 +0000
In-Reply-To: <20251128-gpuvm-rust-v1-0-ebf66bf234e0@google.com>
References: <20251128-gpuvm-rust-v1-0-ebf66bf234e0@google.com>
Message-ID: <20251128-gpuvm-rust-v1-2-ebf66bf234e0@google.com>
Subject: [PATCH 2/4] drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode
From: Alice Ryhl
To: Danilo Krummrich, Daniel Almeida
Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst, Maxime Ripard,
 Thomas Zimmermann, David Airlie, Simona Vetter, Boris Brezillon,
 Steven Price, Liviu Dudau, Miguel Ojeda, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Trevor Gross,
 Frank Binns, Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
 Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
 Rodrigo Vivi, Sumit Semwal, Christian König,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 freedreno@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 intel-xe@lists.freedesktop.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org, Alice Ryhl
Content-Type: text/plain; charset="utf-8"

In the previous commit we updated drm_gpuvm_bo_obtain_prealloc() to
take locks internally, which means it is only usable in immediate mode.
drm_gpuvm_bo_obtain(), on the other hand, requires staged mode, so we
now have one obtain variant for each mode that GPUVM may be used in. To
reflect this, add a warning about using it in immediate mode, and to
make the distinction clearer, rename the function with a _locked()
suffix so that it is clear that the caller is required to take the
locks.

Signed-off-by: Alice Ryhl
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/drm_gpuvm.c            | 16 +++++++++++++---
 drivers/gpu/drm/imagination/pvr_vm.c   |  2 +-
 drivers/gpu/drm/msm/msm_gem.h          |  2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c      |  2 +-
 drivers/gpu/drm/nouveau/nouveau_uvmm.c |  2 +-
 drivers/gpu/drm/xe/xe_vm.c             |  4 ++--
 include/drm/drm_gpuvm.h                |  4 ++--
 7 files changed, 21 insertions(+), 11 deletions(-)
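All of the staged-mode call sites updated below share the same shape;
schematically (a sketch only, with generic names):

	dma_resv_lock(obj->resv, NULL);
	vm_bo = drm_gpuvm_bo_obtain_locked(gpuvm, obj);
	dma_resv_unlock(obj->resv);
	if (IS_ERR(vm_bo))
		return PTR_ERR(vm_bo);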
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f08a5cc1d611f971862c1272987e5ecd6d97c163..9cd06c7600dc32ceee0f0beb5e3daf31698a66b3 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1832,16 +1832,26 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find);
  * count of the &drm_gpuvm_bo accordingly. If not found, allocates a new
  * &drm_gpuvm_bo.
  *
+ * Requires the lock for the GEMs gpuva list.
+ *
  * A new &drm_gpuvm_bo is added to the GEMs gpuva list.
  *
  * Returns: a pointer to the &drm_gpuvm_bo on success, an ERR_PTR on failure
  */
 struct drm_gpuvm_bo *
-drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
-		    struct drm_gem_object *obj)
+drm_gpuvm_bo_obtain_locked(struct drm_gpuvm *gpuvm,
+			   struct drm_gem_object *obj)
 {
 	struct drm_gpuvm_bo *vm_bo;
 
+	/*
+	 * In immediate mode this would require the caller to hold the GEMs
+	 * gpuva mutex, but it's not okay to allocate while holding that lock,
+	 * and this method allocates. Immediate mode drivers should use
+	 * drm_gpuvm_bo_obtain_prealloc() instead.
+	 */
+	drm_WARN_ON(gpuvm->drm, drm_gpuvm_immediate_mode(gpuvm));
+
 	vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
 	if (vm_bo)
 		return vm_bo;
@@ -1855,7 +1865,7 @@ drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
 
 	return vm_bo;
 }
-EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_locked);
 
 /**
  * drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index 3d97990170bf6b1341116c5c8b9d01421944eda4..30ff9b84eb14f2455003e76108de6d489a13f61a 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -255,7 +255,7 @@ pvr_vm_bind_op_map_init(struct pvr_vm_bind_op *bind_op,
 	bind_op->type = PVR_VM_BIND_TYPE_MAP;
 
 	dma_resv_lock(obj->resv, NULL);
-	bind_op->gpuvm_bo = drm_gpuvm_bo_obtain(&vm_ctx->gpuvm_mgr, obj);
+	bind_op->gpuvm_bo = drm_gpuvm_bo_obtain_locked(&vm_ctx->gpuvm_mgr, obj);
 	dma_resv_unlock(obj->resv);
 	if (IS_ERR(bind_op->gpuvm_bo))
 		return PTR_ERR(bind_op->gpuvm_bo);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index a4cf31853c5008e171c3ad72cde1004c60fe5212..26dfe3d22e3e847f7e63174481d03f72878a8ced 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -60,7 +60,7 @@ struct msm_gem_vm_log_entry {
  * embedded in any larger driver structure. The GEM object holds a list of
  * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma
  * holds a reference to the vm_bo, and drops it when the vma is unlinked.
- * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an
+ * So we just need to call drm_gpuvm_bo_obtain_locked() to return a ref to an
  * existing vm_bo, or create a new one. Once the vma is linked, the ref
  * to the vm_bo can be dropped (since the vma is holding one).
  */
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 8316af1723c227f919594446c3721e1a948cbc9e..239b6168a26e636b511187b4993945d1565d149f 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -413,7 +413,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 	if (!obj)
 		return &vma->base;
 
-	vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
+	vm_bo = drm_gpuvm_bo_obtain_locked(&vm->base, obj);
 	if (IS_ERR(vm_bo)) {
 		ret = PTR_ERR(vm_bo);
 		goto err_va_remove;
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 79eefdfd08a2678fedf69503ddf7e9e17ed14c6f..d8888bd29cccef4b8dad9eff2bf6e2b1fd1a7e4d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1207,7 +1207,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
 		return -ENOENT;
 
 	dma_resv_lock(obj->resv, NULL);
-	op->vm_bo = drm_gpuvm_bo_obtain(&uvmm->base, obj);
+	op->vm_bo = drm_gpuvm_bo_obtain_locked(&uvmm->base, obj);
 	dma_resv_unlock(obj->resv);
 	if (IS_ERR(op->vm_bo))
 		return PTR_ERR(op->vm_bo);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index f602b874e0547591d9008333c18f3de0634c48c7..de52d01b0921cc8ac619deeed47b578e0ae69257 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1004,7 +1004,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 
 	xe_bo_assert_held(bo);
 
-	vm_bo = drm_gpuvm_bo_obtain(vma->gpuva.vm, &bo->ttm.base);
+	vm_bo = drm_gpuvm_bo_obtain_locked(vma->gpuva.vm, &bo->ttm.base);
 	if (IS_ERR(vm_bo)) {
 		xe_vma_free(vma);
 		return ERR_CAST(vm_bo);
@@ -2249,7 +2249,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
 	if (err)
 		return ERR_PTR(err);
 
-	vm_bo = drm_gpuvm_bo_obtain(&vm->gpuvm, obj);
+	vm_bo = drm_gpuvm_bo_obtain_locked(&vm->gpuvm, obj);
 	if (IS_ERR(vm_bo)) {
 		xe_bo_unlock(bo);
 		return ERR_CAST(vm_bo);
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index fdfc575b260360611ff8ce16c327acede787929f..0d3fc1f6cac9966a42f3bc82b0b491bfefaf5b96 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -736,8 +736,8 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
 		    struct drm_gem_object *obj);
 
 struct drm_gpuvm_bo *
-drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
-		    struct drm_gem_object *obj);
+drm_gpuvm_bo_obtain_locked(struct drm_gpuvm *gpuvm,
+			   struct drm_gem_object *obj);
 struct drm_gpuvm_bo *
 drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *vm_bo);
 
-- 
2.52.0.487.g5c8c507ade-goog
From nobody Mon Dec 1 22:04:06 2025
Date: Fri, 28 Nov 2025 14:14:17 +0000
In-Reply-To: <20251128-gpuvm-rust-v1-0-ebf66bf234e0@google.com>
References: <20251128-gpuvm-rust-v1-0-ebf66bf234e0@google.com>
Message-ID: <20251128-gpuvm-rust-v1-3-ebf66bf234e0@google.com>
Subject: [PATCH 3/4] drm/gpuvm: use const for drm_gpuva_op_* ptrs
From: Alice Ryhl
To: Danilo Krummrich, Daniel Almeida
Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst, Maxime Ripard,
 Thomas Zimmermann, David Airlie, Simona Vetter, Boris Brezillon,
 Steven Price, Liviu Dudau, Miguel Ojeda, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Trevor Gross,
 Frank Binns, Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
 Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
 Rodrigo Vivi, Sumit Semwal, Christian König,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 freedreno@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 intel-xe@lists.freedesktop.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org, Alice Ryhl
Content-Type: text/plain; charset="utf-8"

These functions only read the values stored behind the op pointers
without modifying them, so it is appropriate to use const pointers
here. This allows the Rust abstractions to avoid const-to-mut pointer
casts.

Signed-off-by: Alice Ryhl
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/drm_gpuvm.c | 6 +++---
 include/drm/drm_gpuvm.h     | 8 ++++----
 2 files changed, 7 insertions(+), 7 deletions(-)
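On the Rust side, the effect is that a shared reference now coerces
directly; roughly (a hypothetical caller, assuming
op: &bindings::drm_gpuva_op_unmap):

	// Before: the *mut parameter forced a cast away from const:
	//     unsafe { bindings::drm_gpuva_unmap(op as *const _ as *mut _) };
	// After: &T coerces to *const T, so no cast is needed:
	unsafe { bindings::drm_gpuva_unmap(op) };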
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 9cd06c7600dc32ceee0f0beb5e3daf31698a66b3..e06b58aabb8ea6ebd92c625583ae2852c9d2caf1 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2283,7 +2283,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_interval_empty);
 void
 drm_gpuva_map(struct drm_gpuvm *gpuvm,
 	      struct drm_gpuva *va,
-	      struct drm_gpuva_op_map *op)
+	      const struct drm_gpuva_op_map *op)
 {
 	drm_gpuva_init_from_op(va, op);
 	drm_gpuva_insert(gpuvm, va);
@@ -2303,7 +2303,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_map);
 void
 drm_gpuva_remap(struct drm_gpuva *prev,
 		struct drm_gpuva *next,
-		struct drm_gpuva_op_remap *op)
+		const struct drm_gpuva_op_remap *op)
 {
 	struct drm_gpuva *va = op->unmap->va;
 	struct drm_gpuvm *gpuvm = va->vm;
@@ -2330,7 +2330,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remap);
  * Removes the &drm_gpuva associated with the &drm_gpuva_op_unmap.
  */
 void
-drm_gpuva_unmap(struct drm_gpuva_op_unmap *op)
+drm_gpuva_unmap(const struct drm_gpuva_op_unmap *op)
 {
 	drm_gpuva_remove(op->va);
 }
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 0d3fc1f6cac9966a42f3bc82b0b491bfefaf5b96..655bd9104ffb24170fca14dfa034ee79f5400930 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1121,7 +1121,7 @@ void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
 			struct drm_gpuva_ops *ops);
 
 static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
-					  struct drm_gpuva_op_map *op)
+					  const struct drm_gpuva_op_map *op)
 {
 	va->va.addr = op->va.addr;
 	va->va.range = op->va.range;
@@ -1265,13 +1265,13 @@ int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
 
 void drm_gpuva_map(struct drm_gpuvm *gpuvm,
 		   struct drm_gpuva *va,
-		   struct drm_gpuva_op_map *op);
+		   const struct drm_gpuva_op_map *op);
 
 void drm_gpuva_remap(struct drm_gpuva *prev,
 		     struct drm_gpuva *next,
-		     struct drm_gpuva_op_remap *op);
+		     const struct drm_gpuva_op_remap *op);
 
-void drm_gpuva_unmap(struct drm_gpuva_op_unmap *op);
+void drm_gpuva_unmap(const struct drm_gpuva_op_unmap *op);
 
 /**
  * drm_gpuva_op_remap_to_unmap_range() - Helper to get the start and range of
-- 
2.52.0.487.g5c8c507ade-goog
9zIee4ktaNUaauEqsIdmv39+vFX6HXkNDUfAak6G8VU6qtr2ezGuFylzqQyFFBcFJZnN Y4yvq8Ucao/XORZwhk5IcJaLgPJNcny8uB44lV6dK7kYNtZl7snvwNbkFNsZdNQ5/XZa nJa5h+1JJi5z1ev6OZUqmUnkRgVMxxkYaMR82UUV7WzvYWHpjBQ6UIsNspeEO4mPVUz6 VDhOg9O0bSoseZOiLJnMZMy9AHT97C6NWgKnh/XOSynB8Y2DNfA5taYWozCgOl3GjHAk k7BA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1764339273; x=1764944073; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=cFuIBoz96bQp0anvghXtc0Jxte1wpjv52sVW+62JCDU=; b=bqvQ+sTT5WgXU+aF6bg7GBmuyWXoetwcbTZY6zV0ZHXXrP5o5LbrEpBXFZl+Sxybj6 drWjv5Y2K2gXzjU3u+NnrtlUvFdXMqe+f3xZIusxU0aqbxSFjTPXmkG4+SDKd0hzvPgg L2W9jjT1FaXkzFGsYCQpSHUPCmj03KVh0ciNmQeXlPgTbvsuxWqUDN+5ChjdPbXJspkB wvJDXwnloG/WupESg+kcrAmcclB062qKQ0vQbgJRW+TZogJNqBO7K80UH99b5GbKKVYl fMaKK1QALWBLSRSwBilrbyYvvfyi5iefpZkosBVigwlV2b4U/sYf7k0wBK14X/0TLX7y NkXQ== X-Forwarded-Encrypted: i=1; AJvYcCV4O06D3T1bicpKcTUpAurUt2LvoA411UyinoA3miQnSE7AOamV1K/lgHcz5Qq9nsmht36qyDtBlyMmyR4=@vger.kernel.org X-Gm-Message-State: AOJu0YxrjTstySbJhPeIyBsAdQrO8IMoFRXMMfYFtPjVHcrgri+Ln5q3 Ua0ScuHoQ947KlreTVfJbKcKjt3AxLcHRo0FYbqx3/i/eKhNdaKlzlzlL/Ifp3BCxfe8UAMQHsn CzTa9wO4oLhES5GS8Dg== X-Google-Smtp-Source: AGHT+IEwxKroqqNzHRjLME2jfU6DdXe0DHiKhvxNkxTHIOgJjsilfutaF4RUGmbapYADwXatIDfAmiMQdHNRnSM= X-Received: from wmbgx14.prod.google.com ([2002:a05:600c:858e:b0:477:9f68:c324]) (user=aliceryhl job=prod-delivery.src-stubby-dispatcher) by 2002:a05:600c:6296:b0:477:5cc6:7e44 with SMTP id 5b1f17b1804b1-477c10d7013mr288231515e9.11.1764339273510; Fri, 28 Nov 2025 06:14:33 -0800 (PST) Date: Fri, 28 Nov 2025 14:14:18 +0000 In-Reply-To: <20251128-gpuvm-rust-v1-0-ebf66bf234e0@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20251128-gpuvm-rust-v1-0-ebf66bf234e0@google.com> X-Developer-Key: i=aliceryhl@google.com; a=openpgp; fpr=49F6C1FAA74960F43A5B86A1EE7A392FDE96209F X-Developer-Signature: v=1; a=openpgp-sha256; l=48206; i=aliceryhl@google.com; h=from:subject:message-id; bh=sOpmbWd3DDJhy/R01a5PlJS0P0lzOtYODJ4ZTdPb3nk=; b=owEBbQKS/ZANAwAKAQRYvu5YxjlGAcsmYgBpKa5Dpa+CjJiv1gQbGuRVGsMVzCw/x6u9Ecrck FabAZwbTJyJAjMEAAEKAB0WIQSDkqKUTWQHCvFIvbIEWL7uWMY5RgUCaSmuQwAKCRAEWL7uWMY5 Rig8EAC1WYOOGoGCxoQv0KbZpT52Z2rJIGzRMT4UjTG2ZY6GrwFYKhomT2dTGibitBn9UP1vVvT WqMxn5UVaWxUFj2h8PXX2x3zR9sQ4VUgCFnTyIgQbJj4tFCC0C+wOTzpCzofAt2leG05ce57OD3 zPcMCYjixtfOhDW41nx3fcEqyMRUM5i5gVAITKEoqGcTxyH1Xky/smdWC6DMwsaJcnXYpqsQP9b QDp1cY+VzE+6fvQFAI4w6dTg4MOjkKOKeVatn9X0KL+OUeZXNVuDGpkiTHY4o63tl3/d4hDN/X8 76FtY0HoKLsw6ZdC+sN89rvBtmqhQ5S28Zuf/U95tRjNCmM21NLP4GBp6ZHzXqmD2LSdGyeYPlk Ro2xs4sHNn/kpNvQl26mr3j1vhE/BXJAzfRZ0TJt1E07h2Lk7JFbrvxSWIWNy+dNEhSOtY6Zo+6 gIlZLEa1qp3v2+MXzH/1SS5zyM9NihzLk6C9modDeRvVFqKuOwPmszferSgY/QLtmeXvkZZiOvR j5A3K247Krag7fhIFWxESuHbX79nvTSWtP4LUNX7MY0+7tL3nO7B6/hAhCqcGrfz4st1/HPxEub Rhj5EPh9l+qq3NLUOJZxQ1yf7F+uG+vX8M9utzwCVd80ohHjkFYLJ0iq16mr7kqGN8cQOnZpLCU ipMWF8Xa0OdmCXA== X-Mailer: b4 0.14.2 Message-ID: <20251128-gpuvm-rust-v1-4-ebf66bf234e0@google.com> Subject: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction From: Alice Ryhl To: Danilo Krummrich , Daniel Almeida Cc: Matthew Brost , "=?utf-8?q?Thomas_Hellstr=C3=B6m?=" , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Boris Brezillon , Steven Price , Liviu Dudau , Miguel Ojeda , Boqun Feng , Gary Guo , "=?utf-8?q?Bj=C3=B6rn_Roy_Baron?=" , Benno Lossin , Andreas Hindborg , Trevor 
 Frank Binns, Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
 Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
 Rodrigo Vivi, Sumit Semwal, Christian König,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 freedreno@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 intel-xe@lists.freedesktop.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org, Alice Ryhl, Asahi Lina
Content-Type: text/plain; charset="utf-8"

Add a GPUVM abstraction to be used by Rust GPU drivers. GPUVM keeps
track of a GPU's virtual address (VA) space and manages the
corresponding virtual mappings represented by "GPU VA" objects. It also
keeps track of the gem::Object used to back the mappings through
GpuVmBo.

This abstraction is only usable by drivers that wish to use GPUVM in
immediate mode. This allows us to build the locking scheme into the API
design: the GEM mutex is used for the GEM's gpuva list, and the resv
lock is used for the extobj list. The evicted list is not yet used in
this version.

This abstraction provides a special handle called the GpuVmCore, which
is a wrapper around ARef<GpuVm> that provides access to the interval
tree. Generally, all changes to the address space require mutable
access to this unique handle.

Some of the safety comments are still somewhat WIP, but I think the API
should be sound as-is.

Co-developed-by: Asahi Lina
Signed-off-by: Asahi Lina
Co-developed-by: Daniel Almeida
Signed-off-by: Daniel Almeida
Signed-off-by: Alice Ryhl
---
 MAINTAINERS                     |   1 +
 rust/bindings/bindings_helper.h |   2 +
 rust/helpers/drm_gpuvm.c        |  43 ++++
 rust/helpers/helpers.c          |   1 +
 rust/kernel/drm/gpuvm/mod.rs    | 394 +++++++++++++++++++++++++++++++++
 rust/kernel/drm/gpuvm/sm_ops.rs | 469 ++++++++++++++++++++++++++++++++++++++++
 rust/kernel/drm/gpuvm/va.rs     | 148 +++++++++++++
 rust/kernel/drm/gpuvm/vm_bo.rs  | 213 ++++++++++++++++++
 rust/kernel/drm/mod.rs          |   1 +
 9 files changed, 1272 insertions(+)
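To give a feel for the API, a rough usage sketch (illustrative only,
not from the patch: MyVm is a hypothetical DriverGpuVm implementation
with VmBoData = (), and the addresses are placeholders):

	fn my_bind(core: &mut GpuVmCore<MyVm>,
	           obj: &<MyVm as DriverGpuVm>::Object,
	           ctx: &mut <MyVm as DriverGpuVm>::SmContext) -> Result {
	    // Pre-allocate the vm_bo; this may allocate, so it must happen
	    // before entering the fence signalling critical path.
	    let vm_bo = core.gpuvm().obtain(obj, ())?;
	    // Update the VA space; this drives the driver's sm_step_map(),
	    // sm_step_remap() and sm_step_unmap() callbacks in immediate mode.
	    core.sm_map(OpMapRequest {
	        addr: 0x1000,
	        range: 0x4000,
	        offset: 0,
	        vm_bo,
	        context: ctx,
	    })
	}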
diff --git a/MAINTAINERS b/MAINTAINERS
index 952aed4619c25d395c12962e559d6cd3362f64a7..946629eb9ebf19922bbe782fed37be07067d6bf2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8591,6 +8591,7 @@ S:	Supported
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
 F:	drivers/gpu/drm/drm_gpuvm.c
 F:	include/drm/drm_gpuvm.h
+F:	rust/kernel/drm/gpuvm/
 
 DRM LOG
 M:	Jocelyn Falempe
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 2e43c66635a2c9f31bd99b9817bd2d6ab89fbcf2..c776ae198e1db91f010f88ff1d1c888a3036a87f 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <drm/drm_gpuvm.h>
 #include
 #include
 #include
@@ -103,6 +104,7 @@ const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
 const gfp_t RUST_CONST_HELPER___GFP_NOWARN = ___GFP_NOWARN;
 const blk_features_t RUST_CONST_HELPER_BLK_FEAT_ROTATIONAL = BLK_FEAT_ROTATIONAL;
 const fop_flags_t RUST_CONST_HELPER_FOP_UNSIGNED_OFFSET = FOP_UNSIGNED_OFFSET;
+const u32 RUST_CONST_HELPER_DRM_EXEC_INTERRUPTIBLE_WAIT = DRM_EXEC_INTERRUPTIBLE_WAIT;
 
 const xa_mark_t RUST_CONST_HELPER_XA_PRESENT = XA_PRESENT;
 
diff --git a/rust/helpers/drm_gpuvm.c b/rust/helpers/drm_gpuvm.c
new file mode 100644
index 0000000000000000000000000000000000000000..18b7dbd2e32c3162455b344e72ec2940c632cc6b
--- /dev/null
+++ b/rust/helpers/drm_gpuvm.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0 or MIT
+
+#ifdef CONFIG_DRM_GPUVM
+
+#include <drm/drm_gpuvm.h>
+
+struct drm_gpuvm *rust_helper_drm_gpuvm_get(struct drm_gpuvm *obj)
+{
+	return drm_gpuvm_get(obj);
+}
+
+void rust_helper_drm_gpuva_init_from_op(struct drm_gpuva *va, struct drm_gpuva_op_map *op)
+{
+	drm_gpuva_init_from_op(va, op);
+}
+
+struct drm_gpuvm_bo *rust_helper_drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
+{
+	return drm_gpuvm_bo_get(vm_bo);
+}
+
+void rust_helper_drm_gpuvm_exec_unlock(struct drm_gpuvm_exec *vm_exec)
+{
+	return drm_gpuvm_exec_unlock(vm_exec);
+}
+
+bool rust_helper_drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
+				     struct drm_gem_object *obj)
+{
+	return drm_gpuvm_is_extobj(gpuvm, obj);
+}
+
+int rust_helper_dma_resv_lock(struct dma_resv *obj, struct ww_acquire_ctx *ctx)
+{
+	return dma_resv_lock(obj, ctx);
+}
+
+void rust_helper_dma_resv_unlock(struct dma_resv *obj)
+{
+	dma_resv_unlock(obj);
+}
+
+#endif // CONFIG_DRM_GPUVM
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 551da6c9b5064c324d6f62bafcec672c6c6f5bee..91f45155eb9c2c4e92b56ee1abf7d45188873f3c 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -26,6 +26,7 @@
 #include "device.c"
 #include "dma.c"
 #include "drm.c"
+#include "drm_gpuvm.c"
 #include "err.c"
 #include "irq.c"
 #include "fs.c"
diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
new file mode 100644
index 0000000000000000000000000000000000000000..9834dbb938a3622e46048e9b8e06bc6bf03aa0d2
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/mod.rs
@@ -0,0 +1,394 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+//! DRM GPUVM in immediate mode
+//!
+//! Rust abstractions for using GPUVM in immediate mode. This is when the GPUVM state is updated
+//! during `run_job()`, i.e., in the DMA fence signalling critical path, to ensure that the GPUVM
+//! and the GPU's virtual address space have the same state at all times.
+//!
+//! C header: [`include/drm/drm_gpuvm.h`](srctree/include/drm/drm_gpuvm.h)
+
+use kernel::{
+    alloc::{AllocError, Flags as AllocFlags},
+    bindings, drm,
+    drm::gem::IntoGEMObject,
+    error::to_result,
+    prelude::*,
+    sync::aref::{ARef, AlwaysRefCounted},
+    types::Opaque,
+};
+
+use core::{
+    cell::UnsafeCell,
+    marker::PhantomData,
+    mem::{ManuallyDrop, MaybeUninit},
+    ops::{Deref, DerefMut, Range},
+    ptr::{self, NonNull},
+};
+
+mod sm_ops;
+pub use self::sm_ops::*;
+
+mod vm_bo;
+pub use self::vm_bo::*;
+
+mod va;
+pub use self::va::*;
+
+/// A DRM GPU VA manager.
+///
+/// This object is refcounted, but the "core" is only accessible using a special unique handle. The
+/// core consists of the `core` field and the GPUVM's interval tree.
+#[repr(C)]
+#[pin_data]
+pub struct GpuVm<T: DriverGpuVm> {
+    #[pin]
+    vm: Opaque<bindings::drm_gpuvm>,
+    /// Accessed only through the [`GpuVmCore`] reference.
+    core: UnsafeCell<T>,
+    /// Shared data not protected by any lock.
+    #[pin]
+    shared_data: T::SharedData,
+}
+
+// SAFETY: dox
+unsafe impl<T: DriverGpuVm> AlwaysRefCounted for GpuVm<T> {
+    fn inc_ref(&self) {
+        // SAFETY: dox
+        unsafe { bindings::drm_gpuvm_get(self.vm.get()) };
+    }
+
+    unsafe fn dec_ref(obj: NonNull<Self>) {
+        // SAFETY: dox
+        unsafe { bindings::drm_gpuvm_put((*obj.as_ptr()).vm.get()) };
+    }
+}
+
+impl<T: DriverGpuVm> GpuVm<T> {
+    const fn vtable() -> &'static bindings::drm_gpuvm_ops {
+        &bindings::drm_gpuvm_ops {
+            vm_free: Some(Self::vm_free),
+            op_alloc: None,
+            op_free: None,
+            vm_bo_alloc: GpuVmBo::<T>::ALLOC_FN,
+            vm_bo_free: GpuVmBo::<T>::FREE_FN,
+            vm_bo_validate: None,
+            sm_step_map: Some(Self::sm_step_map),
+            sm_step_unmap: Some(Self::sm_step_unmap),
+            sm_step_remap: Some(Self::sm_step_remap),
+        }
+    }
+
+    /// Creates a GPUVM instance.
+    #[expect(clippy::new_ret_no_self)]
+    pub fn new<E>(
+        name: &'static CStr,
+        dev: &drm::Device<T::Driver>,
+        r_obj: &T::Object,
+        range: Range<u64>,
+        reserve_range: Range<u64>,
+        core: T,
+        shared: impl PinInit<T::SharedData, E>,
+    ) -> Result<GpuVmCore<T>, E>
+    where
+        E: From<Error>,
+        E: From<AllocError>,
+    {
+        let obj = KBox::try_pin_init::<E>(
+            try_pin_init!(Self {
+                core <- UnsafeCell::new(core),
+                shared_data <- shared,
+                vm <- Opaque::ffi_init(|vm| {
+                    // SAFETY: These arguments are valid. `vm` is valid until refcount drops to
+                    // zero.
+                    unsafe {
+                        bindings::drm_gpuvm_init(
+                            vm,
+                            name.as_char_ptr(),
+                            bindings::drm_gpuvm_flags_DRM_GPUVM_IMMEDIATE_MODE
+                                | bindings::drm_gpuvm_flags_DRM_GPUVM_RESV_PROTECTED,
+                            dev.as_raw(),
+                            r_obj.as_raw(),
+                            range.start,
+                            range.end - range.start,
+                            reserve_range.start,
+                            reserve_range.end - reserve_range.start,
+                            const { Self::vtable() },
+                        )
+                    }
+                }),
+            }? E),
+            GFP_KERNEL,
+        )?;
+        // SAFETY: This transfers the initial refcount to the ARef.
+        Ok(GpuVmCore(unsafe {
+            ARef::from_raw(NonNull::new_unchecked(KBox::into_raw(
+                Pin::into_inner_unchecked(obj),
+            )))
+        }))
+    }
+
+    /// Access this [`GpuVm`] from a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// For the duration of `'a`, the pointer must reference a valid [`GpuVm`].
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm) -> &'a Self {
+        // SAFETY: `drm_gpuvm` is first field and `repr(C)`.
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Get a raw pointer.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::drm_gpuvm {
+        self.vm.get()
+    }
+
+    /// Access the shared data.
+    #[inline]
+    pub fn shared(&self) -> &T::SharedData {
+        &self.shared_data
+    }
+
+    /// The start of the VA space.
+    #[inline]
+    pub fn va_start(&self) -> u64 {
+        // SAFETY: Safe by the type invariant of `GpuVm`.
+        unsafe { (*self.as_raw()).mm_start }
+    }
+
+    /// The length of the address space.
+    #[inline]
+    pub fn va_length(&self) -> u64 {
+        // SAFETY: Safe by the type invariant of `GpuVm`.
+        unsafe { (*self.as_raw()).mm_range }
+    }
+
+    /// Returns the range of the GPU virtual address space.
+    #[inline]
+    pub fn va_range(&self) -> Range<u64> {
+        let start = self.va_start();
+        let end = start + self.va_length();
+        Range { start, end }
+    }
+
+    /// Returns a [`GpuVmBoObtain`] for the provided GEM object.
+    #[inline]
+    pub fn obtain(
+        &self,
+        obj: &T::Object,
+        data: impl PinInit<T::VmBoData>,
+    ) -> Result<GpuVmBoObtain<T>, AllocError> {
+        Ok(GpuVmBoAlloc::new(self, obj, data)?.obtain())
+    }
+
+    /// Prepare this GPUVM.
+    #[inline]
+    pub fn prepare(&self, num_fences: u32) -> impl PinInit<GpuVmExec<'_, T>, Error> {
+        try_pin_init!(GpuVmExec {
+            exec <- Opaque::try_ffi_init(|exec: *mut bindings::drm_gpuvm_exec| {
+                // SAFETY: exec is valid but unused memory, so we can write.
+                unsafe {
+                    ptr::write_bytes(exec, 0u8, 1usize);
+                    ptr::write(&raw mut (*exec).vm, self.as_raw());
+                    ptr::write(&raw mut (*exec).flags, bindings::DRM_EXEC_INTERRUPTIBLE_WAIT);
+                    ptr::write(&raw mut (*exec).num_fences, num_fences);
+                }
+
+                // SAFETY: We can prepare the GPUVM.
+                to_result(unsafe { bindings::drm_gpuvm_exec_lock(exec) })
+            }),
+            _gpuvm: PhantomData,
+        })
+    }
+
+    /// Clean up buffer objects that are no longer used.
+    #[inline]
+    pub fn deferred_cleanup(&self) {
+        // SAFETY: Always safe to perform deferred cleanup.
+        unsafe { bindings::drm_gpuvm_bo_deferred_cleanup(self.as_raw()) }
+    }
+
+    /// Check if this GEM object is an external object for this GPUVM.
+    #[inline]
+    pub fn is_extobj(&self, obj: &T::Object) -> bool {
+        // SAFETY: We may call this with any GPUVM and GEM object.
+        unsafe { bindings::drm_gpuvm_is_extobj(self.as_raw(), obj.as_raw()) }
+    }
+
+    /// Free this GPUVM.
+    ///
+    /// # Safety
+    ///
+    /// Called when refcount hits zero.
+    unsafe extern "C" fn vm_free(me: *mut bindings::drm_gpuvm) {
+        // SAFETY: GPUVM was allocated with KBox and can now be freed.
+        drop(unsafe { KBox::<Self>::from_raw(me.cast()) })
+    }
+}
+
+/// The manager for a GPUVM.
+pub trait DriverGpuVm: Sized {
+    /// Parent `Driver` for this object.
+    type Driver: drm::Driver;
+
+    /// The kind of GEM object stored in this GPUVM.
+    type Object: IntoGEMObject;
+
+    /// Data stored in the [`GpuVm`] that is fully shared.
+    type SharedData;
+
+    /// Data stored with each `struct drm_gpuvm_bo`.
+    type VmBoData;
+
+    /// Data stored with each `struct drm_gpuva`.
+    type VaData;
+
+    /// The private data passed to callbacks.
+    type SmContext;
+
+    /// Indicates that a new mapping should be created.
+    fn sm_step_map<'op>(
+        &mut self,
+        op: OpMap<'op, Self>,
+        context: &mut Self::SmContext,
+    ) -> Result<OpMapped<'op, Self>, Error>;
+
+    /// Indicates that an existing mapping should be removed.
+    fn sm_step_unmap<'op>(
+        &mut self,
+        op: OpUnmap<'op, Self>,
+        context: &mut Self::SmContext,
+    ) -> Result<OpUnmapped<'op, Self>, Error>;
+
+    /// Indicates that an existing mapping should be split up.
+    fn sm_step_remap<'op>(
+        &mut self,
+        op: OpRemap<'op, Self>,
+        context: &mut Self::SmContext,
+    ) -> Result<OpRemapped<'op, Self>, Error>;
+}
+
+/// The core of the DRM GPU VA manager.
+///
+/// This object is the unique reference to the GPUVM's core.
+///
+/// # Invariants
+///
+/// This object owns the core.
+pub struct GpuVmCore<T: DriverGpuVm>(ARef<GpuVm<T>>);
+
+impl<T: DriverGpuVm> GpuVmCore<T> {
+    /// Get a reference without access to `core`.
+    #[inline]
+    pub fn gpuvm(&self) -> &GpuVm<T> {
+        &self.0
+    }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmCore<T> {
+    type Target = T;
+    #[inline]
+    fn deref(&self) -> &T {
+        // SAFETY: By the type invariants we may access `core`.
+        unsafe { &*self.0.core.get() }
+    }
+}
+
+impl<T: DriverGpuVm> DerefMut for GpuVmCore<T> {
+    #[inline]
+    fn deref_mut(&mut self) -> &mut T {
+        // SAFETY: By the type invariants we may access `core`.
+        unsafe { &mut *self.0.core.get() }
+    }
+}
+
+/// The exec token for preparing the objects.
+#[pin_data(PinnedDrop)]
+pub struct GpuVmExec<'a, T: DriverGpuVm> {
+    #[pin]
+    exec: Opaque<bindings::drm_gpuvm_exec>,
+    _gpuvm: PhantomData<&'a mut GpuVm<T>>,
+}
+
+impl<'a, T: DriverGpuVm> GpuVmExec<'a, T> {
+    /// Add a fence.
+    ///
+    /// # Safety
+    ///
+    /// `fence` arg must be valid.
+    pub unsafe fn resv_add_fence(
+        &self,
+        // TODO: use a safe fence abstraction
+        fence: *mut bindings::dma_fence,
+        private_usage: DmaResvUsage,
+        extobj_usage: DmaResvUsage,
+    ) {
+        // SAFETY: Caller ensures fence is ok.
+        unsafe {
+            bindings::drm_gpuvm_resv_add_fence(
+                (*self.exec.get()).vm,
+                &raw mut (*self.exec.get()).exec,
+                fence,
+                private_usage as u32,
+                extobj_usage as u32,
+            )
+        }
+    }
+}
+
+#[pinned_drop]
+impl<'a, T: DriverGpuVm> PinnedDrop for GpuVmExec<'a, T> {
+    fn drop(self: Pin<&mut Self>) {
+        // SAFETY: We hold the lock, so it's safe to unlock.
+        unsafe { bindings::drm_gpuvm_exec_unlock(self.exec.get()) };
+    }
+}
+
+/// How the fence will be used.
+#[repr(u32)]
+pub enum DmaResvUsage {
+    /// For in kernel memory management only (e.g. copying, clearing memory).
+    Kernel = bindings::dma_resv_usage_DMA_RESV_USAGE_KERNEL,
+    /// Implicit write synchronization for userspace submissions.
+    Write = bindings::dma_resv_usage_DMA_RESV_USAGE_WRITE,
+    /// Implicit read synchronization for userspace submissions.
+    Read = bindings::dma_resv_usage_DMA_RESV_USAGE_READ,
+    /// No implicit sync (e.g. preemption fences, page table updates, TLB flushes).
+    Bookkeep = bindings::dma_resv_usage_DMA_RESV_USAGE_BOOKKEEP,
+}
+
+/// A lock guard for the GPUVM's resv lock.
+///
+/// This guard provides access to the extobj and evicted lists.
+///
+/// # Invariants
+///
+/// Holds the GPUVM resv lock.
+pub struct GpuvmResvLockGuard<'a, T: DriverGpuVm>(&'a GpuVm<T>);
+
+impl<T: DriverGpuVm> GpuVm<T> {
+    /// Lock the VM's resv lock.
+    #[inline]
+    pub fn resv_lock(&self) -> GpuvmResvLockGuard<'_, T> {
+        // SAFETY: It's always ok to lock the resv lock.
+        unsafe { bindings::dma_resv_lock(self.raw_resv_lock(), ptr::null_mut()) };
+        // INVARIANTS: We took the lock.
+        GpuvmResvLockGuard(self)
+    }
+
+    #[inline]
+    fn raw_resv_lock(&self) -> *mut bindings::dma_resv {
+        // SAFETY: `r_obj` is immutable and valid for duration of GPUVM.
+        unsafe { (*(*self.as_raw()).r_obj).resv }
+    }
+}
+
+impl<'a, T: DriverGpuVm> Drop for GpuvmResvLockGuard<'a, T> {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: We hold the lock so we can release it.
+        unsafe { bindings::dma_resv_unlock(self.0.raw_resv_lock()) };
+    }
+}
diff --git a/rust/kernel/drm/gpuvm/sm_ops.rs b/rust/kernel/drm/gpuvm/sm_ops.rs
new file mode 100644
index 0000000000000000000000000000000000000000..c0dbd4675de644a3b1cbe7d528194ca7fb471848
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/sm_ops.rs
@@ -0,0 +1,469 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+#![allow(clippy::tabs_in_doc_comments)]
+
+use super::*;
+
+struct SmData<'a, T: DriverGpuVm> {
+    gpuvm: &'a mut GpuVmCore<T>,
+    user_context: &'a mut T::SmContext,
+}
+
+#[repr(C)]
+struct SmMapData<'a, T: DriverGpuVm> {
+    sm_data: SmData<'a, T>,
+    vm_bo: GpuVmBoObtain<T>,
+}
+
+/// The argument for [`GpuVmCore::sm_map`].
+pub struct OpMapRequest<'a, T: DriverGpuVm> {
+    /// Address in GPU virtual address space.
+    pub addr: u64,
+    /// Length of mapping to create.
+    pub range: u64,
+    /// Offset in GEM object.
+    pub offset: u64,
+    /// The GEM object to map.
+    pub vm_bo: GpuVmBoObtain<T>,
+    /// The user-provided context type.
+    pub context: &'a mut T::SmContext,
+}
+
+impl<'a, T: DriverGpuVm> OpMapRequest<'a, T> {
+    fn raw_request(&self) -> bindings::drm_gpuvm_map_req {
+        bindings::drm_gpuvm_map_req {
+            map: bindings::drm_gpuva_op_map {
+                va: bindings::drm_gpuva_op_map__bindgen_ty_1 {
+                    addr: self.addr,
+                    range: self.range,
+                },
+                gem: bindings::drm_gpuva_op_map__bindgen_ty_2 {
+                    offset: self.offset,
+                    obj: self.vm_bo.obj().as_raw(),
+                },
+            },
+        }
+    }
+}
+
+/// ```
+/// struct drm_gpuva_op_map {
+/// 	/**
+/// 	 * @va: structure containing address and range of a map
+/// 	 * operation
+/// 	 */
+/// 	struct {
+/// 		/**
+/// 		 * @va.addr: the base address of the new mapping
+/// 		 */
+/// 		u64 addr;
+///
+/// 		/**
+/// 		 * @va.range: the range of the new mapping
+/// 		 */
+/// 		u64 range;
+/// 	} va;
+///
+/// 	/**
+/// 	 * @gem: structure containing the &drm_gem_object and its offset
+/// 	 */
+/// 	struct {
+/// 		/**
+/// 		 * @gem.offset: the offset within the &drm_gem_object
+/// 		 */
+/// 		u64 offset;
+///
+/// 		/**
+/// 		 * @gem.obj: the &drm_gem_object to map
+/// 		 */
+/// 		struct drm_gem_object *obj;
+/// 	} gem;
+/// };
+/// ```
+pub struct OpMap<'op, T: DriverGpuVm> {
+    op: &'op bindings::drm_gpuva_op_map,
+    // Since these abstractions are designed for immediate mode, the VM BO needs to be
+    // pre-allocated, so we always have it available when we reach this point.
+    vm_bo: &'op GpuVmBo<T>,
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpMap<'op, T> {
+    /// The base address of the new mapping.
+    pub fn addr(&self) -> u64 {
+        self.op.va.addr
+    }
+
+    /// The length of the new mapping.
+    pub fn length(&self) -> u64 {
+        self.op.va.range
+    }
+
+    /// The offset within the [`drm_gem_object`](crate::gem::Object).
+    pub fn gem_offset(&self) -> u64 {
+        self.op.gem.offset
+    }
+
+    /// The [`drm_gem_object`](crate::gem::Object) to map.
+    pub fn obj(&self) -> &T::Object {
+        // SAFETY: The `obj` pointer is guaranteed to be valid.
+        unsafe { <T::Object as IntoGEMObject>::from_raw(self.op.gem.obj) }
+    }
+
+    /// The [`GpuVmBo`] that the new VA will be associated with.
+    pub fn vm_bo(&self) -> &GpuVmBo<T> {
+        self.vm_bo
+    }
+
+    /// Use the pre-allocated VA to carry out this map operation.
+    pub fn insert(self, va: GpuVaAlloc<T>, va_data: impl PinInit<T::VaData>) -> OpMapped<'op, T> {
+        let va = va.prepare(va_data);
+        // SAFETY: By the type invariants we may access the interval tree.
+        unsafe { bindings::drm_gpuva_map(self.vm_bo.gpuvm().as_raw(), va, self.op) };
+        // SAFETY: The GEM object is valid, so the mutex is properly initialized.
+        unsafe { bindings::mutex_lock(&raw mut (*self.op.gem.obj).gpuva.lock) };
+        // SAFETY: The va is prepared for insertion, and we hold the GEM lock.
+        unsafe { bindings::drm_gpuva_link(va, self.vm_bo.as_raw()) };
+        // SAFETY: We took the mutex above, so we may unlock it.
+        unsafe { bindings::mutex_unlock(&raw mut (*self.op.gem.obj).gpuva.lock) };
+        OpMapped {
+            _invariant: self._invariant,
+        }
+    }
+}
+
+/// Represents a completed [`OpMap`] operation.
+pub struct OpMapped<'op, T> {
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+/// ```
+/// struct drm_gpuva_op_unmap {
+/// 	/**
+/// 	 * @va: the &drm_gpuva to unmap
+/// 	 */
+/// 	struct drm_gpuva *va;
+///
+/// 	/**
+/// 	 * @keep:
+/// 	 *
+/// 	 * Indicates whether this &drm_gpuva is physically contiguous with the
+/// 	 * original mapping request.
+/// 	 *
+/// 	 * Optionally, if &keep is set, drivers may keep the actual page table
+/// 	 * mappings for this &drm_gpuva, adding the missing page table entries
+/// 	 * only and update the &drm_gpuvm accordingly.
+    pub fn insert(self, va: GpuVaAlloc<T>, va_data: impl PinInit<T::VaData>) -> OpMapped<'op, T> {
+        let va = va.prepare(va_data);
+        // SAFETY: By the type invariants we may access the interval tree.
+        unsafe { bindings::drm_gpuva_map(self.vm_bo.gpuvm().as_raw(), va, self.op) };
+        // SAFETY: The GEM object is valid, so the mutex is properly initialized.
+        unsafe { bindings::mutex_lock(&raw mut (*self.op.gem.obj).gpuva.lock) };
+        // SAFETY: The va is prepared for insertion, and we hold the GEM lock.
+        unsafe { bindings::drm_gpuva_link(va, self.vm_bo.as_raw()) };
+        // SAFETY: We took the mutex above, so we may unlock it.
+        unsafe { bindings::mutex_unlock(&raw mut (*self.op.gem.obj).gpuva.lock) };
+        OpMapped {
+            _invariant: self._invariant,
+        }
+    }
+}
+
+/// Represents a completed [`OpMap`] operation.
+pub struct OpMapped<'op, T> {
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+/// ```
+/// struct drm_gpuva_op_unmap {
+///	/**
+///	 * @va: the &drm_gpuva to unmap
+///	 */
+///	struct drm_gpuva *va;
+///
+///	/**
+///	 * @keep:
+///	 *
+///	 * Indicates whether this &drm_gpuva is physically contiguous with the
+///	 * original mapping request.
+///	 *
+///	 * Optionally, if &keep is set, drivers may keep the actual page table
+///	 * mappings for this &drm_gpuva, adding the missing page table entries
+///	 * only and update the &drm_gpuvm accordingly.
+///	 */
+///	bool keep;
+/// };
+/// ```
+pub struct OpUnmap<'op, T: DriverGpuVm> {
+    op: &'op bindings::drm_gpuva_op_unmap,
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpUnmap<'op, T> {
+    /// Indicates whether this `drm_gpuva` is physically contiguous with the
+    /// original mapping request.
+    ///
+    /// Optionally, if `keep` is set, drivers may keep the actual page table
+    /// mappings for this `drm_gpuva`, adding the missing page table entries
+    /// only and update the `drm_gpuvm` accordingly.
+    pub fn keep(&self) -> bool {
+        self.op.keep
+    }
+
+    /// The range being unmapped.
+    pub fn va(&self) -> &GpuVa<T> {
+        // SAFETY: This is a valid va.
+        unsafe { GpuVa::<T>::from_raw(self.op.va) }
+    }
+
+    /// Remove the VA.
+    pub fn remove(self) -> (OpUnmapped<'op, T>, GpuVaRemoved<T>) {
+        // SAFETY: The op references a valid drm_gpuva in the GPUVM.
+        unsafe { bindings::drm_gpuva_unmap(self.op) };
+        // SAFETY: The va is no longer in the interval tree so we may unlink it.
+        unsafe { bindings::drm_gpuva_unlink_defer(self.op.va) };
+
+        // SAFETY: We just removed this va from the `GpuVm`.
+        let va = unsafe { GpuVaRemoved::from_raw(self.op.va) };
+
+        (
+            OpUnmapped {
+                _invariant: self._invariant,
+            },
+            va,
+        )
+    }
+}
+
+/// Represents a completed [`OpUnmap`] operation.
+pub struct OpUnmapped<'op, T> {
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+/// ```
+/// struct drm_gpuva_op_remap {
+///	/**
+///	 * @prev: the preceding part of a split mapping
+///	 */
+///	struct drm_gpuva_op_map *prev;
+///
+///	/**
+///	 * @next: the subsequent part of a split mapping
+///	 */
+///	struct drm_gpuva_op_map *next;
+///
+///	/**
+///	 * @unmap: the unmap operation for the original existing mapping
+///	 */
+///	struct drm_gpuva_op_unmap *unmap;
+/// };
+/// ```
+pub struct OpRemap<'op, T: DriverGpuVm> {
+    op: &'op bindings::drm_gpuva_op_remap,
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpRemap<'op, T> {
+    /// The preceding part of a split mapping.
+    #[inline]
+    pub fn prev(&self) -> Option<&OpRemapMapData> {
+        // SAFETY: We checked for null, so the pointer must be valid.
+        NonNull::new(self.op.prev).map(|ptr| unsafe { OpRemapMapData::from_raw(ptr) })
+    }
+
+    /// The subsequent part of a split mapping.
+    #[inline]
+    pub fn next(&self) -> Option<&OpRemapMapData> {
+        // SAFETY: We checked for null, so the pointer must be valid.
+        NonNull::new(self.op.next).map(|ptr| unsafe { OpRemapMapData::from_raw(ptr) })
+    }
+
+    /// Indicates whether the `drm_gpuva` being removed is physically contiguous with the original
+    /// mapping request.
+    ///
+    /// Optionally, if `keep` is set, drivers may keep the actual page table mappings for this
+    /// `drm_gpuva`, adding the missing page table entries only and update the `drm_gpuvm`
+    /// accordingly.
+    #[inline]
+    pub fn keep(&self) -> bool {
+        // SAFETY: The unmap pointer is always valid.
+        unsafe { (*self.op.unmap).keep }
+    }
+
+    /// The range being unmapped.
+    #[inline]
+    pub fn va_to_unmap(&self) -> &GpuVa<T> {
+        // SAFETY: This is a valid va.
+        unsafe { GpuVa::<T>::from_raw((*self.op.unmap).va) }
+    }
+
+    /// The [`drm_gem_object`](crate::gem::Object) whose VA is being remapped.
+    #[inline]
+    pub fn obj(&self) -> &T::Object {
+        self.va_to_unmap().obj()
+    }
+
+    /// The [`GpuVmBo`] that is being remapped.
+    #[inline]
+    pub fn vm_bo(&self) -> &GpuVmBo<T> {
+        self.va_to_unmap().vm_bo()
+    }
+
+    /// Update the GPUVM to perform the remapping.
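+    ///
+    /// Both pre-allocated VAs must be passed in even though the remap may only
+    /// need one of them; the unused allocation is handed back in the returned
+    /// [`OpRemapRet`]. A rough sketch of the call pattern (the `va1`/`va2`
+    /// allocations and the data initializers are assumed to exist in the
+    /// driver):
+    ///
+    /// ```text
+    /// let (_done, ret) = op.remap([va1, va2], prev_init, next_init);
+    /// // `ret.unmapped_va` owns the removed VA and its driver data.
+    /// // `ret.unused_va` is `Some(..)` unless both a prev and a next piece were created.
+    /// ```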
+    pub fn remap(
+        self,
+        va_alloc: [GpuVaAlloc<T>; 2],
+        prev_data: impl PinInit<T::VaData>,
+        next_data: impl PinInit<T::VaData>,
+    ) -> (OpRemapped<'op, T>, OpRemapRet<T>) {
+        let [va1, va2] = va_alloc;
+
+        let mut unused_va = None;
+        let mut prev_ptr = ptr::null_mut();
+        let mut next_ptr = ptr::null_mut();
+        if self.prev().is_some() {
+            prev_ptr = va1.prepare(prev_data);
+        } else {
+            unused_va = Some(va1);
+        }
+        if self.next().is_some() {
+            next_ptr = va2.prepare(next_data);
+        } else {
+            unused_va = Some(va2);
+        }
+
+        // SAFETY: The pointers are non-null when required.
+        unsafe { bindings::drm_gpuva_remap(prev_ptr, next_ptr, self.op) };
+
+        // SAFETY: The GEM object is valid, so the mutex is properly initialized.
+        unsafe { bindings::mutex_lock(&raw mut (*self.obj().as_raw()).gpuva.lock) };
+        if !prev_ptr.is_null() {
+            // SAFETY: The prev_ptr is a valid drm_gpuva prepared for insertion. The vm_bo is still
+            // valid as the not-yet-unlinked gpuva holds a refcount on the vm_bo.
+            unsafe { bindings::drm_gpuva_link(prev_ptr, self.vm_bo().as_raw()) };
+        }
+        if !next_ptr.is_null() {
+            // SAFETY: The next_ptr is a valid drm_gpuva prepared for insertion. The vm_bo is still
+            // valid as the not-yet-unlinked gpuva holds a refcount on the vm_bo.
+            unsafe { bindings::drm_gpuva_link(next_ptr, self.vm_bo().as_raw()) };
+        }
+        // SAFETY: We took the mutex above, so we may unlock it.
+        unsafe { bindings::mutex_unlock(&raw mut (*self.obj().as_raw()).gpuva.lock) };
+        // SAFETY: The va is no longer in the interval tree so we may unlink it.
+        unsafe { bindings::drm_gpuva_unlink_defer((*self.op.unmap).va) };
+
+        (
+            OpRemapped {
+                _invariant: self._invariant,
+            },
+            OpRemapRet {
+                // SAFETY: We just removed this va from the `GpuVm`.
+                unmapped_va: unsafe { GpuVaRemoved::from_raw((*self.op.unmap).va) },
+                unused_va,
+            },
+        )
+    }
+}
+
+/// Part of an [`OpRemap`] that represents a new mapping.
+#[repr(transparent)]
+pub struct OpRemapMapData(bindings::drm_gpuva_op_map);
+
+impl OpRemapMapData {
+    /// # Safety
+    ///
+    /// Must reference a valid `drm_gpuva_op_map` for the duration of `'a`.
+    unsafe fn from_raw<'a>(ptr: NonNull<bindings::drm_gpuva_op_map>) -> &'a Self {
+        // SAFETY: Ok per the safety requirements.
+        unsafe { ptr.cast().as_ref() }
+    }
+
+    /// The base address of the new mapping.
+    pub fn addr(&self) -> u64 {
+        self.0.va.addr
+    }
+
+    /// The length of the new mapping.
+    pub fn length(&self) -> u64 {
+        self.0.va.range
+    }
+
+    /// The offset within the [`drm_gem_object`](crate::gem::Object).
+    pub fn gem_offset(&self) -> u64 {
+        self.0.gem.offset
+    }
+}
+
+/// Struct containing objects removed or not used by [`OpRemap::remap`].
+pub struct OpRemapRet<T: DriverGpuVm> {
+    /// The `drm_gpuva` that was removed.
+    pub unmapped_va: GpuVaRemoved<T>,
+    /// If the remap did not split the region into two pieces, then the unused `drm_gpuva` is
+    /// returned here.
+    pub unused_va: Option<GpuVaAlloc<T>>,
+}
+
+/// Represents a completed [`OpRemap`] operation.
+pub struct OpRemapped<'op, T> {
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<T: DriverGpuVm> GpuVmCore<T> {
+    /// Create a mapping, removing or remapping anything that overlaps.
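+    ///
+    /// Overlapping ranges are reported back through the driver's step
+    /// callbacks while this call runs. A minimal sketch of a bind path (the
+    /// request values, `core`, and `ctx` are assumptions):
+    ///
+    /// ```text
+    /// core.sm_map(OpMapRequest {
+    ///     addr: gpu_va,
+    ///     range: size,
+    ///     offset: gem_offset,
+    ///     vm_bo,
+    ///     context: &mut ctx,
+    /// })?;
+    /// ```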
+    #[inline]
+    pub fn sm_map(&mut self, req: OpMapRequest<'_, T>) -> Result {
+        let gpuvm = self.gpuvm().as_raw();
+        let raw_req = req.raw_request();
+        let mut p = SmMapData {
+            sm_data: SmData {
+                gpuvm: self,
+                user_context: req.context,
+            },
+            vm_bo: req.vm_bo,
+        };
+        // SAFETY:
+        // * raw_request() creates a valid request.
+        // * The private data is valid to be interpreted as both SmData and SmMapData since the
+        //   first field of SmMapData is SmData.
+        to_result(unsafe {
+            bindings::drm_gpuvm_sm_map(gpuvm, (&raw mut p).cast(), &raw const raw_req)
+        })
+    }
+
+    /// Remove any mappings in the given region.
+    #[inline]
+    pub fn sm_unmap(&mut self, addr: u64, length: u64, context: &mut T::SmContext) -> Result {
+        let gpuvm = self.gpuvm().as_raw();
+        let mut p = SmData {
+            gpuvm: self,
+            user_context: context,
+        };
+        // SAFETY: The private data is only valid to be interpreted as SmData, but that is
+        // sufficient since drm_gpuvm_sm_unmap() never calls sm_step_map().
+        to_result(unsafe { bindings::drm_gpuvm_sm_unmap(gpuvm, (&raw mut p).cast(), addr, length) })
+    }
+}
+
+impl<T: DriverGpuVm> GpuVm<T> {
+    /// # Safety
+    ///
+    /// Must be called from `sm_map`.
+    pub(super) unsafe extern "C" fn sm_step_map(
+        op: *mut bindings::drm_gpuva_op,
+        p: *mut c_void,
+    ) -> c_int {
+        // SAFETY: If we reach `sm_step_map` then we were called from `sm_map`, which always
+        // passes an `SmMapData` as private data.
+        let p = unsafe { &mut *p.cast::<SmMapData<'_, T>>() };
+        let op = OpMap {
+            // SAFETY: sm_step_map is called with a map operation.
+            op: unsafe { &(*op).__bindgen_anon_1.map },
+            vm_bo: &p.vm_bo,
+            _invariant: PhantomData,
+        };
+        match p.sm_data.gpuvm.sm_step_map(op, p.sm_data.user_context) {
+            Ok(OpMapped { .. }) => 0,
+            Err(err) => err.to_errno(),
+        }
+    }
+
+    /// # Safety
+    ///
+    /// Must be called from `sm_map` or `sm_unmap`.
+    pub(super) unsafe extern "C" fn sm_step_unmap(
+        op: *mut bindings::drm_gpuva_op,
+        p: *mut c_void,
+    ) -> c_int {
+        // SAFETY: If we reach `sm_step_unmap` then we were called from `sm_map` or `sm_unmap`,
+        // which pass either an `SmMapData` or an `SmData` as private data. Both cases can be
+        // cast to `SmData`.
+        let p = unsafe { &mut *p.cast::<SmData<'_, T>>() };
+        let op = OpUnmap {
+            // SAFETY: sm_step_unmap is called with an unmap operation.
+            op: unsafe { &(*op).__bindgen_anon_1.unmap },
+            _invariant: PhantomData,
+        };
+        match p.gpuvm.sm_step_unmap(op, p.user_context) {
+            Ok(OpUnmapped { .. }) => 0,
+            Err(err) => err.to_errno(),
+        }
+    }
+
+    /// # Safety
+    ///
+    /// Must be called from `sm_map` or `sm_unmap`.
+    pub(super) unsafe extern "C" fn sm_step_remap(
+        op: *mut bindings::drm_gpuva_op,
+        p: *mut c_void,
+    ) -> c_int {
+        // SAFETY: If we reach `sm_step_remap` then we were called from `sm_map` or `sm_unmap`,
+        // which pass either an `SmMapData` or an `SmData` as private data. Both cases can be
+        // cast to `SmData`.
+        let p = unsafe { &mut *p.cast::<SmData<'_, T>>() };
+        let op = OpRemap {
+            // SAFETY: sm_step_remap is called with a remap operation.
+            op: unsafe { &(*op).__bindgen_anon_1.remap },
+            _invariant: PhantomData,
+        };
+        match p.gpuvm.sm_step_remap(op, p.user_context) {
+            Ok(OpRemapped { .. }) => 0,
+            Err(err) => err.to_errno(),
+        }
+    }
+}
diff --git a/rust/kernel/drm/gpuvm/va.rs b/rust/kernel/drm/gpuvm/va.rs
new file mode 100644
index 0000000000000000000000000000000000000000..a31122ff22282186a1d76d4bb085714f6465722b
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/va.rs
@@ -0,0 +1,148 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+use super::*;
+
+/// Represents that a range of a GEM object is mapped in this [`GpuVm`] instance.
+///
+/// Does not assume that GEM lock is held.
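+///
+/// Accessors on this type only read fields of `drm_gpuva` that stay immutable
+/// while the VA is resident in the VM, so no locking is required. For instance
+/// (a sketch; the surrounding callback is assumed):
+///
+/// ```text
+/// pr_info!("VA {:#x?} -> GEM offset {:#x}\n", va.range(), va.gem_offset());
+/// ```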
+///
+/// # Invariants
+///
+/// This is a valid `drm_gpuva` that is resident in the [`GpuVm`] instance.
+#[repr(C)]
+#[pin_data]
+pub struct GpuVa<T: DriverGpuVm> {
+    #[pin]
+    inner: Opaque<bindings::drm_gpuva>,
+    #[pin]
+    data: T::VaData,
+}
+
+impl<T: DriverGpuVm> GpuVa<T> {
+    /// Access this [`GpuVa`] from a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// For the duration of `'a`, the pointer must reference a valid `drm_gpuva` associated with a
+    /// [`GpuVm`].
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuva) -> &'a Self {
+        // SAFETY: `drm_gpuva` is first field and `repr(C)`.
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Returns a raw pointer to underlying C value.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::drm_gpuva {
+        self.inner.get()
+    }
+
+    /// Returns the address of this mapping in the GPU virtual address space.
+    #[inline]
+    pub fn addr(&self) -> u64 {
+        // SAFETY: The `va.addr` field of `drm_gpuva` is immutable.
+        unsafe { (*self.as_raw()).va.addr }
+    }
+
+    /// Returns the length of this mapping.
+    #[inline]
+    pub fn length(&self) -> u64 {
+        // SAFETY: The `va.range` field of `drm_gpuva` is immutable.
+        unsafe { (*self.as_raw()).va.range }
+    }
+
+    /// Returns `addr..addr+length`.
+    #[inline]
+    pub fn range(&self) -> Range<u64> {
+        let addr = self.addr();
+        addr..addr + self.length()
+    }
+
+    /// Returns the offset within the GEM object.
+    #[inline]
+    pub fn gem_offset(&self) -> u64 {
+        // SAFETY: The `gem.offset` field of `drm_gpuva` is immutable.
+        unsafe { (*self.as_raw()).gem.offset }
+    }
+
+    /// Returns the GEM object.
+    #[inline]
+    pub fn obj(&self) -> &T::Object {
+        // SAFETY: The `gem.obj` field of `drm_gpuva` is immutable.
+        unsafe { <T::Object as IntoGEMObject>::from_raw((*self.as_raw()).gem.obj) }
+    }
+
+    /// Returns the underlying [`GpuVmBo`] object that backs this [`GpuVa`].
+    #[inline]
+    pub fn vm_bo(&self) -> &GpuVmBo<T> {
+        // SAFETY: The `vm_bo` field has been set and is immutable for the duration in which this
+        // `drm_gpuva` is resident in the VM.
+        unsafe { GpuVmBo::from_raw((*self.as_raw()).vm_bo) }
+    }
+}
+
+/// A pre-allocated [`GpuVa`] object.
+///
+/// # Invariants
+///
+/// The memory is zeroed.
+pub struct GpuVaAlloc<T: DriverGpuVm>(KBox<MaybeUninit<GpuVa<T>>>);
+
+impl<T: DriverGpuVm> GpuVaAlloc<T> {
+    /// Pre-allocate a [`GpuVa`] object.
+    pub fn new(flags: AllocFlags) -> Result<GpuVaAlloc<T>, AllocError> {
+        // INVARIANTS: Memory allocated with __GFP_ZERO.
+        Ok(GpuVaAlloc(KBox::new_uninit(flags | __GFP_ZERO)?))
+    }
+
+    /// Prepare this `drm_gpuva` for insertion into the GPUVM.
+    pub(super) fn prepare(mut self, va_data: impl PinInit<T::VaData>) -> *mut bindings::drm_gpuva {
+        let va_ptr = MaybeUninit::as_mut_ptr(&mut self.0);
+        // SAFETY: The `data` field is pinned.
+        let Ok(()) = unsafe { va_data.__pinned_init(&raw mut (*va_ptr).data) };
+        KBox::into_raw(self.0).cast()
+    }
+}
+
+/// A [`GpuVa`] object that has been removed.
+///
+/// # Invariants
+///
+/// The `drm_gpuva` is not resident in the [`GpuVm`].
+pub struct GpuVaRemoved<T: DriverGpuVm>(KBox<GpuVa<T>>);
+
+impl<T: DriverGpuVm> GpuVaRemoved<T> {
+    /// Convert a raw pointer into a [`GpuVaRemoved`].
+    ///
+    /// # Safety
+    ///
+    /// Must have been removed from a [`GpuVm`].
+    pub(super) unsafe fn from_raw(ptr: *mut bindings::drm_gpuva) -> Self {
+        // SAFETY: Since it has been removed we can take ownership of the allocation.
+        GpuVaRemoved(unsafe { KBox::from_raw(ptr.cast()) })
+    }
+
+    /// Take ownership of the VA data.
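+    ///
+    /// For example (sketch), a driver can reclaim its per-VA state right after
+    /// an unmap step:
+    ///
+    /// ```text
+    /// let (_done, removed) = op.remove();
+    /// let my_data = removed.into_inner(); // requires `T::VaData: Unpin`
+    /// ```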
+    pub fn into_inner(self) -> T::VaData
+    where
+        T::VaData: Unpin,
+    {
+        KBox::into_inner(self.0).data
+    }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVaRemoved<T> {
+    type Target = T::VaData;
+    fn deref(&self) -> &T::VaData {
+        &self.0.data
+    }
+}
+
+impl<T: DriverGpuVm> DerefMut for GpuVaRemoved<T>
+where
+    T::VaData: Unpin,
+{
+    fn deref_mut(&mut self) -> &mut T::VaData {
+        &mut self.0.data
+    }
+}
diff --git a/rust/kernel/drm/gpuvm/vm_bo.rs b/rust/kernel/drm/gpuvm/vm_bo.rs
new file mode 100644
index 0000000000000000000000000000000000000000..f21aa17ea4f42c4a2b57b1f3a57a18dd2c3c8b7b
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/vm_bo.rs
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+use super::*;
+
+/// Represents that a given GEM object has at least one mapping on this [`GpuVm`] instance.
+///
+/// Does not assume that GEM lock is held.
+#[repr(C)]
+#[pin_data]
+pub struct GpuVmBo<T: DriverGpuVm> {
+    #[pin]
+    inner: Opaque<bindings::drm_gpuvm_bo>,
+    #[pin]
+    data: T::VmBoData,
+}
+
+impl<T: DriverGpuVm> GpuVmBo<T> {
+    pub(super) const ALLOC_FN: Option<unsafe extern "C" fn() -> *mut bindings::drm_gpuvm_bo> = {
+        use core::alloc::Layout;
+        let base = Layout::new::<bindings::drm_gpuvm_bo>();
+        let rust = Layout::new::<Self>();
+        assert!(base.size() <= rust.size());
+        if base.size() != rust.size() || base.align() != rust.align() {
+            Some(Self::vm_bo_alloc)
+        } else {
+            // This causes GPUVM to allocate a `GpuVmBo` with `kzalloc(sizeof(drm_gpuvm_bo))`.
+            None
+        }
+    };
+
+    pub(super) const FREE_FN: Option<unsafe extern "C" fn(*mut bindings::drm_gpuvm_bo)> = {
+        if core::mem::needs_drop::<Self>() {
+            Some(Self::vm_bo_free)
+        } else {
+            // This causes GPUVM to free a `GpuVmBo` with `kfree`.
+            None
+        }
+    };
+
+    /// Custom function for allocating a `drm_gpuvm_bo`.
+    ///
+    /// # Safety
+    ///
+    /// Always safe to call. Unsafe to match the function pointer type in the C struct.
+    unsafe extern "C" fn vm_bo_alloc() -> *mut bindings::drm_gpuvm_bo {
+        KBox::<Self>::new_uninit(GFP_KERNEL | __GFP_ZERO)
+            .map(KBox::into_raw)
+            .unwrap_or(ptr::null_mut())
+            .cast()
+    }
+
+    /// Custom function for freeing a `drm_gpuvm_bo`.
+    ///
+    /// # Safety
+    ///
+    /// The pointer must have been allocated with [`GpuVmBo::ALLOC_FN`], and must not be used after
+    /// this call.
+    unsafe extern "C" fn vm_bo_free(ptr: *mut bindings::drm_gpuvm_bo) {
+        // SAFETY:
+        // * The ptr was allocated from kmalloc with the layout of `GpuVmBo`.
+        // * `ptr->inner` has no destructor.
+        // * `ptr->data` contains a valid `T::VmBoData` that we can drop.
+        drop(unsafe { KBox::<Self>::from_raw(ptr.cast()) });
+    }
+
+    /// Access this [`GpuVmBo`] from a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// For the duration of `'a`, the pointer must reference a valid `drm_gpuvm_bo` associated with
+    /// a [`GpuVm`].
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm_bo) -> &'a Self {
+        // SAFETY: `drm_gpuvm_bo` is first field and `repr(C)`.
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Returns a raw pointer to underlying C value.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+        self.inner.get()
+    }
+
+    /// The [`GpuVm`] that this GEM object is mapped in.
+    #[inline]
+    pub fn gpuvm(&self) -> &GpuVm<T> {
+        // SAFETY: The `vm` pointer is guaranteed to be valid.
+        unsafe { GpuVm::<T>::from_raw((*self.inner.get()).vm) }
+    }
+
+    /// The [`drm_gem_object`](crate::gem::Object) for these mappings.
+    #[inline]
+    pub fn obj(&self) -> &T::Object {
+        // SAFETY: The `obj` pointer is guaranteed to be valid.
+        unsafe { <T::Object as IntoGEMObject>::from_raw((*self.inner.get()).obj) }
+    }
+
+    /// The driver data associated with this buffer object.
+    #[inline]
+    pub fn data(&self) -> &T::VmBoData {
+        &self.data
+    }
+}
+
+/// A pre-allocated [`GpuVmBo`] object.
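+///
+/// Pre-allocation keeps memory allocation out of the fence signalling critical
+/// path: the driver allocates the `vm_bo` up front, where sleeping is allowed,
+/// and later resolves it with [`GpuVmBoAlloc::obtain`], which does not
+/// allocate. A rough sketch (the initializer and calling context are assumed):
+///
+/// ```text
+/// let alloc = GpuVmBoAlloc::new(&gpuvm, &gem, data_init)?; // may allocate
+/// // ... enter the fence signalling critical path ...
+/// let vm_bo = alloc.obtain();                              // no allocation
+/// ```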
+///
+/// # Invariants
+///
+/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData`, has a refcount of one, and is
+/// absent from any gem, extobj, or evict lists.
+pub(super) struct GpuVmBoAlloc<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
+
+impl<T: DriverGpuVm> GpuVmBoAlloc<T> {
+    /// Create a new pre-allocated [`GpuVmBo`].
+    ///
+    /// It's intentional that the initializer is infallible because `drm_gpuvm_bo_put` will call
+    /// drop on the data, so we don't have a way to free it when the data is missing.
+    #[inline]
+    pub(super) fn new(
+        gpuvm: &GpuVm<T>,
+        gem: &T::Object,
+        value: impl PinInit<T::VmBoData>,
+    ) -> Result<GpuVmBoAlloc<T>, AllocError> {
+        // SAFETY: The provided gpuvm and gem ptrs are valid for the duration of this call.
+        let raw_ptr = unsafe {
+            bindings::drm_gpuvm_bo_create(gpuvm.as_raw(), gem.as_raw()).cast::<GpuVmBo<T>>()
+        };
+        // CAST: `GpuVmBo::ALLOC_FN` ensures that this memory was allocated with the layout of
+        // `GpuVmBo`.
+        let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
+        // SAFETY: `ptr->data` is a valid pinned location.
+        let Ok(()) = unsafe { value.__pinned_init(&raw mut (*raw_ptr).data) };
+        // INVARIANTS: We just created the vm_bo so it's absent from lists, and the data is valid
+        // as we just initialized it.
+        Ok(GpuVmBoAlloc(ptr))
+    }
+
+    /// Returns a raw pointer to underlying C value.
+    #[inline]
+    pub(super) fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+        // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
+        unsafe { (*self.0.as_ptr()).inner.get() }
+    }
+
+    /// Look up whether there is an existing [`GpuVmBo`] for this gem object.
+    #[inline]
+    pub(super) fn obtain(self) -> GpuVmBoObtain<T> {
+        let me = ManuallyDrop::new(self);
+        // SAFETY: Valid `drm_gpuvm_bo` not already in the lists.
+        let ptr = unsafe { bindings::drm_gpuvm_bo_obtain_prealloc(me.as_raw()) };
+
+        // If the vm_bo did not already exist, ensure that it's in the extobj list.
+        if ptr::eq(ptr, me.as_raw()) && me.gpuvm().is_extobj(me.obj()) {
+            let _resv_lock = me.gpuvm().resv_lock();
+            // SAFETY: We hold the GPUVM's resv lock.
+            unsafe { bindings::drm_gpuvm_bo_extobj_add(ptr) };
+        }
+
+        // INVARIANTS: Valid `drm_gpuvm_bo` in the GEM list.
+        // SAFETY: `drm_gpuvm_bo_obtain_prealloc` always returns a non-null ptr.
+        GpuVmBoObtain(unsafe { NonNull::new_unchecked(ptr.cast()) })
+    }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmBoAlloc<T> {
+    type Target = GpuVmBo<T>;
+    #[inline]
+    fn deref(&self) -> &GpuVmBo<T> {
+        // SAFETY: By the type invariants we may deref while `Self` exists.
+        unsafe { self.0.as_ref() }
+    }
+}
+
+impl<T: DriverGpuVm> Drop for GpuVmBoAlloc<T> {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: It's safe to perform a deferred put in any context.
+        unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
+    }
+}
+
+/// A [`GpuVmBo`] object in the GEM list.
+///
+/// # Invariants
+///
+/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData` and is present in the gem list.
+pub struct GpuVmBoObtain<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
+
+impl<T: DriverGpuVm> GpuVmBoObtain<T> {
+    /// Returns a raw pointer to underlying C value.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+        // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
+        unsafe { (*self.0.as_ptr()).inner.get() }
+    }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmBoObtain<T> {
+    type Target = GpuVmBo<T>;
+    #[inline]
+    fn deref(&self) -> &GpuVmBo<T> {
+        // SAFETY: By the type invariants we may deref while `Self` exists.
+        unsafe { self.0.as_ref() }
+    }
+}
+
+impl<T: DriverGpuVm> Drop for GpuVmBoObtain<T> {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: It's safe to perform a deferred put in any context.
+        unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
+    }
+}
diff --git a/rust/kernel/drm/mod.rs b/rust/kernel/drm/mod.rs
index 1b82b6945edf25b947afc08300e211bd97150d6b..a4b6c5430198571ec701af2ef452cc9ac55870e6 100644
--- a/rust/kernel/drm/mod.rs
+++ b/rust/kernel/drm/mod.rs
@@ -6,6 +6,7 @@
 pub mod driver;
 pub mod file;
 pub mod gem;
+pub mod gpuvm;
 pub mod ioctl;
 
 pub use self::device::Device;
-- 
2.52.0.487.g5c8c507ade-goog