From nobody Thu Apr 16 01:35:53 2026
Date: Thu, 09 Apr 2026 15:26:06 +0000
In-Reply-To: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
References: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
Message-ID: <20260409-gpuvm-rust-v6-1-b16e6ada7261@google.com>
Subject: [PATCH v6 1/5] rust: drm: add base GPUVM immediate mode abstraction
From: Alice Ryhl
To: Danilo Krummrich, Daniel Almeida
Cc: Boris Brezillon, Janne Grunau, Matthew Brost, Thomas Hellström, Lyude Paul, Asahi Lina, Sumit Semwal, Christian König, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, linux-media@vger.kernel.org, Alice Ryhl

From: Asahi Lina

Add a GPUVM abstraction to be used by Rust GPU drivers.
GPUVM keeps track of a GPU's virtual address (VA) space and manages the
corresponding virtual mappings represented by "GPU VA" objects. It also
keeps track of the gem::Object used to back the mappings through
GpuVmBo.

This abstraction is only usable by drivers that wish to use GPUVM in
immediate mode. This allows us to build the locking scheme into the API
design. It means that the GEM mutex is used for the GEM gpuva list, and
that the resv lock is used for the extobj list. The evicted list is not
yet used in this version.

This abstraction provides a special handle called the UniqueRefGpuVm,
which is a wrapper around ARef that provides access to the interval
tree. Generally, all changes to the address space require mutable
access to this unique handle.

Signed-off-by: Asahi Lina
Co-developed-by: Daniel Almeida
Signed-off-by: Daniel Almeida
Reviewed-by: Daniel Almeida
Co-developed-by: Alice Ryhl
Signed-off-by: Alice Ryhl
---
 MAINTAINERS                     |   2 +
 rust/bindings/bindings_helper.h |   1 +
 rust/helpers/drm_gpuvm.c        |  20 ++++
 rust/helpers/helpers.c          |   1 +
 rust/kernel/drm/gpuvm/mod.rs    | 260 ++++++++++++++++++++++++++++++++++++++++
 rust/kernel/drm/mod.rs          |   1 +
 6 files changed, 285 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index b01791963e25..9c93fa23654c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8858,6 +8858,8 @@ S:	Supported
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
 F:	drivers/gpu/drm/drm_gpuvm.c
 F:	include/drm/drm_gpuvm.h
+F:	rust/helpers/drm_gpuvm.c
+F:	rust/kernel/drm/gpuvm/
 
 DRM LOG
 M:	Jocelyn Falempe
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index eda8f50d3a3c..cb06a7ff795b 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include <drm/drm_gpuvm.h>
 #include
 #include
 #include
diff --git a/rust/helpers/drm_gpuvm.c b/rust/helpers/drm_gpuvm.c
new file mode 100644
index 000000000000..18cf104a8bc7
--- /dev/null
+++ b/rust/helpers/drm_gpuvm.c
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0 or MIT
+
+#ifdef CONFIG_DRM_GPUVM
+
+#include <drm/drm_gpuvm.h>
+
+__rust_helper
+struct drm_gpuvm *rust_helper_drm_gpuvm_get(struct drm_gpuvm *obj)
+{
+	return drm_gpuvm_get(obj);
+}
+
+__rust_helper
+bool rust_helper_drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
+				     struct drm_gem_object *obj)
+{
+	return drm_gpuvm_is_extobj(gpuvm, obj);
+}
+
+#endif // CONFIG_DRM_GPUVM
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index b6b20ad2e0e6..875a9788ad40 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -30,6 +30,7 @@
 #include "dma.c"
 #include "dma-resv.c"
 #include "drm.c"
+#include "drm_gpuvm.c"
 #include "err.c"
 #include "irq.c"
 #include "fs.c"
diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
new file mode 100644
index 000000000000..1d9138d989b3
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/mod.rs
@@ -0,0 +1,260 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+#![cfg(CONFIG_DRM_GPUVM = "y")]
+
+//! DRM GPUVM in immediate mode
+//!
+//! Rust abstractions for using GPUVM in immediate mode. This is when the GPUVM state is updated
+//! during `run_job()`, i.e., in the DMA fence signalling critical path, to ensure that the GPUVM
+//! and the GPU's virtual address space have the same state at all times.
+//!
+//! C header: [`include/drm/drm_gpuvm.h`](srctree/include/drm/drm_gpuvm.h)
+
+use kernel::{
+    alloc::AllocError,
+    bindings,
+    drm,
+    drm::gem::IntoGEMObject,
+    prelude::*,
+    sync::aref::{
+        ARef,
+        AlwaysRefCounted, //
+    },
+    types::Opaque, //
+};
+
+use core::{
+    cell::UnsafeCell,
+    ops::{
+        Deref,
+        Range, //
+    },
+    ptr::NonNull, //
+};
+
+/// A DRM GPU VA manager.
+///
+/// This object is refcounted, but the locations of mapped ranges may only be accessed or changed
+/// via the special unique handle [`UniqueRefGpuVm`].
+///
+/// # Invariants
+///
+/// * Stored in an allocation managed by the refcount in `self.vm`.
+/// * Access to `data` and the gpuvm interval tree is controlled via the [`UniqueRefGpuVm`] type.
+/// * Does not contain any sparse `GpuVa` instances.
+#[pin_data]
+pub struct GpuVm<T: DriverGpuVm> {
+    #[pin]
+    vm: Opaque<bindings::drm_gpuvm>,
+    /// Accessed only through the [`UniqueRefGpuVm`] reference.
+    data: UnsafeCell<T>,
+}
+
+// SAFETY: The GPUVM api does not assume that it is tied to a specific thread. The destructor will
+// drop the `data` field, which is okay because it is guaranteed `Send` by the `DriverGpuVm` trait.
+unsafe impl<T: DriverGpuVm> Send for GpuVm<T> {}
+// SAFETY: The GPUVM api is designed to allow &self methods to be called in parallel.
+unsafe impl<T: DriverGpuVm> Sync for GpuVm<T> {}
+
+// SAFETY: By type invariants, the allocation is managed by the refcount in `self.vm`.
+unsafe impl<T: DriverGpuVm> AlwaysRefCounted for GpuVm<T> {
+    fn inc_ref(&self) {
+        // SAFETY: By type invariants, the allocation is managed by the refcount in `self.vm`.
+        unsafe { bindings::drm_gpuvm_get(self.vm.get()) };
+    }
+
+    unsafe fn dec_ref(obj: NonNull<Self>) {
+        // SAFETY: By type invariants, the allocation is managed by the refcount in `self.vm`.
+        unsafe { bindings::drm_gpuvm_put((*obj.as_ptr()).vm.get()) };
+    }
+}
+
+impl<T: DriverGpuVm> PartialEq for GpuVm<T> {
+    #[inline]
+    fn eq(&self, other: &Self) -> bool {
+        core::ptr::eq(self.as_raw(), other.as_raw())
+    }
+}
+impl<T: DriverGpuVm> Eq for GpuVm<T> {}
+
+impl<T: DriverGpuVm> GpuVm<T> {
+    const fn vtable() -> &'static bindings::drm_gpuvm_ops {
+        &bindings::drm_gpuvm_ops {
+            vm_free: Some(Self::vm_free),
+            op_alloc: None,
+            op_free: None,
+            vm_bo_alloc: None,
+            vm_bo_free: None,
+            vm_bo_validate: None,
+            sm_step_map: None,
+            sm_step_unmap: None,
+            sm_step_remap: None,
+        }
+    }
+
+    /// Creates a GPUVM instance.
+    #[expect(clippy::new_ret_no_self)]
+    pub fn new<E>(
+        name: &'static CStr,
+        dev: &drm::Device<T::Driver>,
+        r_obj: &T::Object,
+        range: Range<u64>,
+        reserve_range: Range<u64>,
+        data: T,
+    ) -> Result<UniqueRefGpuVm<T>, E>
+    where
+        E: From<AllocError>,
+        E: From<Error>,
+    {
+        let obj = KBox::try_pin_init::<E>(
+            try_pin_init!(Self {
+                data: UnsafeCell::new(data),
+                vm <- Opaque::ffi_init(|vm| {
+                    // SAFETY: These arguments are valid. `vm` is valid until refcount drops to
+                    // zero. The `vm` is zeroed before calling this method by `__GFP_ZERO` flag
+                    // below.
+                    unsafe {
+                        bindings::drm_gpuvm_init(
+                            vm,
+                            name.as_char_ptr(),
+                            bindings::drm_gpuvm_flags_DRM_GPUVM_IMMEDIATE_MODE
+                                | bindings::drm_gpuvm_flags_DRM_GPUVM_RESV_PROTECTED,
+                            dev.as_raw(),
+                            r_obj.as_raw(),
+                            range.start,
+                            range.end - range.start,
+                            reserve_range.start,
+                            reserve_range.end - reserve_range.start,
+                            const { Self::vtable() },
+                        )
+                    }
+                }),
+            }? E),
+            GFP_KERNEL | __GFP_ZERO,
+        )?;
+        // SAFETY: This transfers the initial refcount to the ARef.
+        let aref = unsafe {
+            ARef::from_raw(NonNull::new_unchecked(KBox::into_raw(
+                Pin::into_inner_unchecked(obj),
+            )))
+        };
+        // INVARIANT: This reference is unique.
+        Ok(UniqueRefGpuVm(aref))
+    }
+
+    /// Access this [`GpuVm`] from a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// The pointer must reference the `struct drm_gpuvm` in a valid [`GpuVm<T>`] that remains
+    /// valid for at least `'a`.
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm) -> &'a Self {
+        // SAFETY: Caller passes a pointer to the `drm_gpuvm` in a `GpuVm<T>`. Caller ensures the
+        // pointer is valid for 'a.
+        unsafe { &*kernel::container_of!(Opaque::cast_from(ptr), Self, vm) }
+    }
+
+    /// Returns a raw pointer to the embedded `struct drm_gpuvm`.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::drm_gpuvm {
+        self.vm.get()
+    }
+
+    /// The start of the VA space.
+    #[inline]
+    pub fn va_start(&self) -> u64 {
+        // SAFETY: The `mm_start` field is immutable.
+        unsafe { (*self.as_raw()).mm_start }
+    }
+
+    /// The length of the GPU's virtual address space.
+    #[inline]
+    pub fn va_length(&self) -> u64 {
+        // SAFETY: The `mm_range` field is immutable.
+        unsafe { (*self.as_raw()).mm_range }
+    }
+
+    /// Returns the range of the GPU virtual address space.
+    #[inline]
+    pub fn va_range(&self) -> Range<u64> {
+        let start = self.va_start();
+        // OVERFLOW: This reconstructs the Range passed to the constructor, so it won't fail.
+        let end = start + self.va_length();
+        Range { start, end }
+    }
+
+    /// Clean up buffer objects that are no longer used.
+    #[inline]
+    pub fn deferred_cleanup(&self) {
+        // SAFETY: This GPUVM uses immediate mode.
+        unsafe { bindings::drm_gpuvm_bo_deferred_cleanup(self.as_raw()) }
+    }
+
+    /// Check if this GEM object is an external object for this GPUVM.
+    #[inline]
+    pub fn is_extobj(&self, obj: &T::Object) -> bool {
+        // SAFETY: We may call this with any GPUVM and GEM object.
+        unsafe { bindings::drm_gpuvm_is_extobj(self.as_raw(), obj.as_raw()) }
+    }
+
+    /// Free this GPUVM.
+    ///
+    /// # Safety
+    ///
+    /// Called when refcount hits zero.
+    unsafe extern "C" fn vm_free(me: *mut bindings::drm_gpuvm) {
+        // SAFETY: Caller passes a pointer to the `drm_gpuvm` in a `GpuVm<T>`.
+        let me = unsafe { kernel::container_of!(Opaque::cast_from(me), Self, vm).cast_mut() };
+        // SAFETY: By type invariants we can free it when refcount hits zero.
+        drop(unsafe { KBox::from_raw(me) })
+    }
+}
+
+/// The manager for a GPUVM.
+pub trait DriverGpuVm: Sized + Send {
+    /// Parent `Driver` for this object.
+    type Driver: drm::Driver;
+
+    /// The kind of GEM object stored in this GPUVM.
+    type Object: IntoGEMObject;
+}
+
+/// The core of the DRM GPU VA manager.
+///
+/// This object is a unique reference to the VM that can access the interval tree and the Rust
+/// `data` field.
+///
+/// # Invariants
+///
+/// Each `GpuVm` instance has at most one `UniqueRefGpuVm` reference.
+pub struct UniqueRefGpuVm<T: DriverGpuVm>(ARef<GpuVm<T>>);
+
+// SAFETY: The GPUVM api is designed to allow &self methods to be called in parallel, and
+// concurrent access to `data` is safe due to the `T: Sync` requirement.
+unsafe impl<T: DriverGpuVm + Sync> Sync for UniqueRefGpuVm<T> {}
+
+impl<T: DriverGpuVm> UniqueRefGpuVm<T> {
+    /// Access the data owned by this `UniqueRefGpuVm` immutably.
+    #[inline]
+    pub fn data_ref(&self) -> &T {
+        // SAFETY: By the type invariants we may access `data`.
+        unsafe { &*self.0.data.get() }
+    }
+
+    /// Access the data owned by this `UniqueRefGpuVm` mutably.
+    #[inline]
+    pub fn data(&mut self) -> &mut T {
+        // SAFETY: By the type invariants we may access `data`.
+        unsafe { &mut *self.0.data.get() }
+    }
+}
+
+impl<T: DriverGpuVm> Deref for UniqueRefGpuVm<T> {
+    type Target = GpuVm<T>;
+
+    #[inline]
+    fn deref(&self) -> &GpuVm<T> {
+        &self.0
+    }
+}
diff --git a/rust/kernel/drm/mod.rs b/rust/kernel/drm/mod.rs
index 1b82b6945edf..a4b6c5430198 100644
--- a/rust/kernel/drm/mod.rs
+++ b/rust/kernel/drm/mod.rs
@@ -6,6 +6,7 @@
 pub mod driver;
 pub mod file;
 pub mod gem;
+pub mod gpuvm;
 pub mod ioctl;
 
 pub use self::device::Device;
-- 
2.53.0.1213.gd9a14994de-goog

From nobody Thu Apr 16 01:35:53 2026
Date: Thu, 09 Apr 2026 15:26:07 +0000
In-Reply-To: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
References: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
Message-ID: <20260409-gpuvm-rust-v6-2-b16e6ada7261@google.com>
Subject: [PATCH v6 2/5] rust: gpuvm: add GpuVm::obtain()
From: Alice Ryhl
To: Danilo Krummrich, Daniel Almeida
Cc: Boris Brezillon, Janne Grunau, Matthew Brost, Thomas Hellström, Lyude Paul, Asahi Lina, Sumit Semwal, Christian König, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, linux-media@vger.kernel.org, Alice Ryhl

This provides a mechanism to create (or look up) VMBO instances, which
represent the mapping between GPUVM and GEM objects.

The GpuVmBoRegistered type can be considered like ARef<GpuVmBo<T>>,
except that no way to increment the refcount is provided.

The GpuVmBoAlloc type is more akin to a pre-allocated GpuVmBo, so it's
not really a GpuVmBo yet. Its destructor could call
drm_gpuvm_bo_destroy_not_in_lists(), but as the type is currently
private and never called anywhere, this perf optimization does not need
to happen now.

Pre-allocating and obtaining the gpuvm_bo object is exposed as a single
step. This could theoretically be a problem if one wanted to call
drm_gpuvm_bo_obtain_prealloc() during the fence signalling critical
path, but that's not a possibility because:

1. Adding the BO to the extobj list requires the resv lock, so it
   cannot happen during the fence signalling critical path.
2. obtain() requires that the BO is not in the extobj list, so obtain()
   must be called before adding the BO to the extobj list.
Thus, drm_gpuvm_bo_obtain_prealloc() cannot be called during the fence
signalling critical path. (For extobjs at least.)

Reviewed-by: Daniel Almeida
Signed-off-by: Alice Ryhl
---
 rust/helpers/drm_gpuvm.c       |   6 +
 rust/kernel/drm/gpuvm/mod.rs   |  32 +++++-
 rust/kernel/drm/gpuvm/vm_bo.rs | 242 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 277 insertions(+), 3 deletions(-)

diff --git a/rust/helpers/drm_gpuvm.c b/rust/helpers/drm_gpuvm.c
index 18cf104a8bc7..ca959d9a66f6 100644
--- a/rust/helpers/drm_gpuvm.c
+++ b/rust/helpers/drm_gpuvm.c
@@ -4,6 +4,12 @@
 
 #include <drm/drm_gpuvm.h>
 
+__rust_helper
+struct drm_gpuvm_bo *rust_helper_drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
+{
+	return drm_gpuvm_bo_get(vm_bo);
+}
+
 __rust_helper
 struct drm_gpuvm *rust_helper_drm_gpuvm_get(struct drm_gpuvm *obj)
 {
diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
index 1d9138d989b3..56e02b49a581 100644
--- a/rust/kernel/drm/gpuvm/mod.rs
+++ b/rust/kernel/drm/gpuvm/mod.rs
@@ -25,13 +25,20 @@
 
 use core::{
     cell::UnsafeCell,
+    mem::ManuallyDrop,
     ops::{
         Deref,
         Range, //
     },
-    ptr::NonNull, //
+    ptr::{
+        self,
+        NonNull, //
+    }, //
 };
 
+mod vm_bo;
+pub use self::vm_bo::*;
+
 /// A DRM GPU VA manager.
 ///
 /// This object is refcounted, but the locations of mapped ranges may only be accessed or changed
@@ -83,8 +90,8 @@ const fn vtable() -> &'static bindings::drm_gpuvm_ops {
             vm_free: Some(Self::vm_free),
             op_alloc: None,
             op_free: None,
-            vm_bo_alloc: None,
-            vm_bo_free: None,
+            vm_bo_alloc: GpuVmBo::<T>::ALLOC_FN,
+            vm_bo_free: GpuVmBo::<T>::FREE_FN,
             vm_bo_validate: None,
             sm_step_map: None,
             sm_step_unmap: None,
@@ -184,6 +191,16 @@ pub fn va_range(&self) -> Range<u64> {
         Range { start, end }
     }
 
+    /// Get or create the [`GpuVmBo`] for this gem object.
+    #[inline]
+    pub fn obtain(
+        &self,
+        obj: &T::Object,
+        data: impl PinInit<T::VmBoData>,
+    ) -> Result<ARef<GpuVmBo<T>>, AllocError> {
+        Ok(GpuVmBoAlloc::new(self, obj, data)?.obtain())
+    }
+
     /// Clean up buffer objects that are no longer used.
#[inline] pub fn deferred_cleanup(&self) { @@ -209,6 +226,12 @@ pub fn is_extobj(&self, obj: &T::Object) -> bool { // SAFETY: By type invariants we can free it when refcount hits ze= ro. drop(unsafe { KBox::from_raw(me) }) } + + #[inline] + fn raw_resv(&self) -> *mut bindings::dma_resv { + // SAFETY: `r_obj` is immutable and valid for duration of GPUVM. + unsafe { (*(*self.as_raw()).r_obj).resv } + } } =20 /// The manager for a GPUVM. @@ -218,6 +241,9 @@ pub trait DriverGpuVm: Sized + Send { =20 /// The kind of GEM object stored in this GPUVM. type Object: IntoGEMObject; + + /// Data stored with each [`struct drm_gpuvm_bo`](struct@GpuVmBo). + type VmBoData; } =20 /// The core of the DRM GPU VA manager. diff --git a/rust/kernel/drm/gpuvm/vm_bo.rs b/rust/kernel/drm/gpuvm/vm_bo.rs new file mode 100644 index 000000000000..65f03f93bd21 --- /dev/null +++ b/rust/kernel/drm/gpuvm/vm_bo.rs @@ -0,0 +1,242 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT + +use super::*; + +/// Represents that a given GEM object has at least one mapping on this [`= GpuVm`] instance. +/// +/// Does not assume that GEM lock is held. +/// +/// # Invariants +/// +/// * Allocated with `kmalloc` and refcounted via `inner`. +/// * Is present in the gem list. +#[repr(C)] +#[pin_data] +pub struct GpuVmBo { + #[pin] + inner: Opaque, + #[pin] + data: T::VmBoData, +} + +// SAFETY: By type invariants, the allocation is managed by the refcount i= n `self.inner`. +unsafe impl AlwaysRefCounted for GpuVmBo { + fn inc_ref(&self) { + // SAFETY: By type invariants, the allocation is managed by the re= fcount in `self.inner`. + unsafe { bindings::drm_gpuvm_bo_get(self.inner.get()) }; + } + + unsafe fn dec_ref(obj: NonNull) { + // CAST: `drm_gpuvm_bo` is first field of repr(C) struct. + // SAFETY: By type invariants, the allocation is managed by the re= fcount in `self.inner`. + // This GPUVM instance uses immediate mode, so we may put the refc= ount using the deferred + // mechanism. 
+ unsafe { bindings::drm_gpuvm_bo_put_deferred(obj.as_ptr().cast()) = }; + } +} + +impl PartialEq for GpuVmBo { + #[inline] + fn eq(&self, other: &Self) -> bool { + core::ptr::eq(self.as_raw(), other.as_raw()) + } +} +impl Eq for GpuVmBo {} + +impl GpuVmBo { + /// The function pointer for allocating a GpuVmBo stored in the gpuvm = vtable. + /// + /// Allocation is always implemented according to [`Self::vm_bo_alloc`= ], but it is set to + /// `None` if the default gpuvm behavior is the same as `vm_bo_alloc`. + /// + /// This may be `Some` even if `FREE_FN` is `None`, or vice-versa. + pub(super) const ALLOC_FN: Option *mut bindi= ngs::drm_gpuvm_bo> =3D { + use core::alloc::Layout; + let base =3D Layout::new::(); + let rust =3D Layout::new::(); + assert!(base.size() <=3D rust.size()); + if base.size() !=3D rust.size() || base.align() !=3D rust.align() { + Some(Self::vm_bo_alloc) + } else { + // This causes GPUVM to allocate a `GpuVmBo` with `kzalloc(= sizeof(drm_gpuvm_bo))`. + None + } + }; + + /// The function pointer for freeing a GpuVmBo stored in the gpuvm vta= ble. + /// + /// Freeing is always implemented according to [`Self::vm_bo_free`], b= ut it is set to `None` if + /// the default gpuvm behavior is the same as `vm_bo_free`. + /// + /// This may be `Some` even if `ALLOC_FN` is `None`, or vice-versa. + pub(super) const FREE_FN: Option =3D { + if core::mem::needs_drop::() { + Some(Self::vm_bo_free) + } else { + // This causes GPUVM to free a `GpuVmBo` with `kfree`. + None + } + }; + + /// Custom function for allocating a `drm_gpuvm_bo`. + /// + /// # Safety + /// + /// Always safe to call. + unsafe extern "C" fn vm_bo_alloc() -> *mut bindings::drm_gpuvm_bo { + let raw_ptr =3D KBox::::new_uninit(GFP_KERNEL | __GFP_ZERO) + .map(KBox::into_raw) + .unwrap_or(ptr::null_mut()); + + // CAST: `drm_gpuvm_bo` is first field of `Self`. + raw_ptr.cast() + } + + /// Custom function for freeing a `drm_gpuvm_bo`. 
+ /// + /// # Safety + /// + /// The pointer must have been allocated with [`GpuVmBo::ALLOC_FN`], a= nd must not be used after + /// this call. + unsafe extern "C" fn vm_bo_free(ptr: *mut bindings::drm_gpuvm_bo) { + // CAST: `drm_gpuvm_bo` is first field of `Self`. + // SAFETY: + // * The ptr was allocated from kmalloc with the layout of `GpuVmB= o`. + // * `ptr->inner` has no destructor. + // * `ptr->data` contains a valid `T::VmBoData` that we can drop. + drop(unsafe { KBox::::from_raw(ptr.cast()) }); + } + + /// Access this [`GpuVmBo`] from a raw pointer. + /// + /// # Safety + /// + /// For the duration of `'a`, the pointer must reference a valid `drm_= gpuvm_bo` associated with + /// a [`GpuVm`]. The BO must also be present in the GEM list. + #[inline] + #[expect(dead_code)] + pub(crate) unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm_bo) ->= &'a Self { + // SAFETY: `drm_gpuvm_bo` is first field and `repr(C)`. + unsafe { &*ptr.cast() } + } + + /// Returns a raw pointer to underlying C value. + #[inline] + pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo { + self.inner.get() + } + + /// The [`GpuVm`] that this GEM object is mapped in. + #[inline] + pub fn gpuvm(&self) -> &GpuVm { + // SAFETY: The `obj` pointer is guaranteed to be valid. + unsafe { GpuVm::::from_raw((*self.inner.get()).vm) } + } + + /// The [`drm_gem_object`](DriverGpuVm::Object) for these mappings. + #[inline] + pub fn obj(&self) -> &T::Object { + // SAFETY: The `obj` pointer is guaranteed to be valid. + unsafe { ::from_raw((*self.inner.get()= ).obj) } + } + + /// The driver data with this buffer object. + #[inline] + pub fn data(&self) -> &T::VmBoData { + &self.data + } +} + +/// A pre-allocated [`GpuVmBo`] object. +/// +/// # Invariants +/// +/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData`, has a = refcount of one, and is +/// absent from any gem, extobj, or evict lists. 
+pub(super) struct GpuVmBoAlloc(NonNull>); + +impl GpuVmBoAlloc { + /// Create a new pre-allocated [`GpuVmBo`]. + /// + /// It's intentional that the initializer is infallible because `drm_g= puvm_bo_put` will call + /// drop on the data, so we don't have a way to free it when the data = is missing. + #[inline] + pub(super) fn new( + gpuvm: &GpuVm, + gem: &T::Object, + value: impl PinInit, + ) -> Result, AllocError> { + // CAST: `GpuVmBoAlloc::vm_bo_alloc` ensures that this memory was = allocated with the layout + // of `GpuVmBo`. The type is repr(C), so `container_of` is not = required. + // SAFETY: The provided gpuvm and gem ptrs are valid for the durat= ion of this call. + let raw_ptr =3D unsafe { + bindings::drm_gpuvm_bo_create(gpuvm.as_raw(), gem.as_raw()).ca= st::>() + }; + let ptr =3D NonNull::new(raw_ptr).ok_or(AllocError)?; + // SAFETY: `ptr->data` is a valid pinned location. + let Ok(()) =3D unsafe { value.__pinned_init(&raw mut (*raw_ptr).da= ta) }; + // INVARIANTS: We just created the vm_bo so it's absent from lists= , and the data is valid + // as we just initialized it. + Ok(GpuVmBoAlloc(ptr)) + } + + /// Returns a raw pointer to underlying C value. + #[inline] + pub(super) fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo { + // SAFETY: The pointer references a valid `drm_gpuvm_bo`. + unsafe { (*self.0.as_ptr()).inner.get() } + } + + /// Look up whether there is an existing [`GpuVmBo`] for this gem obje= ct. + /// + /// The caller should not hold the GEM mutex or DMA resv lock. + #[inline] + pub(super) fn obtain(self) -> ARef> { + let me =3D ManuallyDrop::new(self); + // SAFETY: Valid `drm_gpuvm_bo` not already in the lists. We do no= t access `me` after this + // call. 
+        let ptr = unsafe { bindings::drm_gpuvm_bo_obtain_prealloc(me.as_raw()) };
+
+        // SAFETY: `drm_gpuvm_bo_obtain_prealloc` always returns a non-null ptr.
+        let nonnull = unsafe { NonNull::new_unchecked(ptr.cast()) };
+
+        // INVARIANTS: `drm_gpuvm_bo_obtain_prealloc` ensures that the bo is in the GEM list.
+        // SAFETY: We received one refcount from `drm_gpuvm_bo_obtain_prealloc`.
+        let ret = unsafe { ARef::<GpuVmBo<T>>::from_raw(nonnull) };
+
+        // Ensure that external objects are in the extobj list.
+        //
+        // Note that we must call `extobj_add` even if `ptr != me` to avoid a race condition where
+        // we could end up using the extobj before the thread with `ptr == me` calls extobj_add.
+        if ret.gpuvm().is_extobj(ret.obj()) {
+            let resv_lock = ret.gpuvm().raw_resv();
+            // TODO: Use a proper lock guard here once a dma_resv lock abstraction exists.
+            // SAFETY: The GPUVM is still alive, so its resv lock is too.
+            unsafe { bindings::dma_resv_lock(resv_lock, ptr::null_mut()) };
+            // SAFETY: We hold the GPUVM's resv lock.
+            unsafe { bindings::drm_gpuvm_bo_extobj_add(ptr) };
+            // SAFETY: We took the lock, so we can unlock it.
+            unsafe { bindings::dma_resv_unlock(resv_lock) };
+        }
+
+        ret
+    }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmBoAlloc<T> {
+    type Target = GpuVmBo<T>;
+    #[inline]
+    fn deref(&self) -> &GpuVmBo<T> {
+        // SAFETY: By the type invariants we may deref while `Self` exists.
+        unsafe { self.0.as_ref() }
+    }
+}
+
+impl<T: DriverGpuVm> Drop for GpuVmBoAlloc<T> {
+    #[inline]
+    fn drop(&mut self) {
+        // TODO: Call drm_gpuvm_bo_destroy_not_in_lists() directly.
+        // SAFETY: It's safe to perform a deferred put in any context.
+        unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
+    }
+}

-- 
2.53.0.1213.gd9a14994de-goog

From nobody Thu Apr 16 01:35:53 2026
Date: Thu, 09 Apr 2026 15:26:08 +0000
In-Reply-To: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
Message-ID: <20260409-gpuvm-rust-v6-3-b16e6ada7261@google.com>
References: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
Subject: [PATCH v6 3/5] rust: gpuvm: add GpuVa struct
From: Alice Ryhl <aliceryhl@google.com>
To: Danilo Krummrich, Daniel Almeida
Cc: Boris Brezillon, Janne Grunau, Matthew Brost, Thomas Hellström,
 Lyude Paul, Asahi Lina, Sumit Semwal, Christian König,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, linux-media@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

This struct will be used to keep track of individual mapped ranges in the
GPU's virtual memory.
Sparse VAs are not yet supported.

Co-developed-by: Asahi Lina
Signed-off-by: Asahi Lina
Co-developed-by: Daniel Almeida
Signed-off-by: Daniel Almeida
Reviewed-by: Daniel Almeida
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/drm/gpuvm/mod.rs   |  19 ++++-
 rust/kernel/drm/gpuvm/va.rs    | 169 ++++++++++++++++++++++++++++++++++++++++++
 rust/kernel/drm/gpuvm/vm_bo.rs |   1 -
 3 files changed, 185 insertions(+), 4 deletions(-)

diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
index 56e02b49a581..78951e8aa5d3 100644
--- a/rust/kernel/drm/gpuvm/mod.rs
+++ b/rust/kernel/drm/gpuvm/mod.rs
@@ -11,7 +11,10 @@
 //! C header: [`include/drm/drm_gpuvm.h`](srctree/include/drm/drm_gpuvm.h)
 
 use kernel::{
-    alloc::AllocError,
+    alloc::{
+        AllocError,
+        Flags as AllocFlags, //
+    },
     bindings, drm,
     drm::gem::IntoGEMObject,
@@ -25,9 +28,13 @@
 
 use core::{
     cell::UnsafeCell,
-    mem::ManuallyDrop,
+    mem::{
+        ManuallyDrop,
+        MaybeUninit, //
+    },
     ops::{
         Deref,
+        DerefMut,
         Range, //
     },
     ptr::{
@@ -36,6 +43,9 @@
     }, //
 };
 
+mod va;
+pub use self::va::*;
+
 mod vm_bo;
 pub use self::vm_bo::*;
 
@@ -48,7 +58,7 @@
 ///
 /// * Stored in an allocation managed by the refcount in `self.vm`.
 /// * Access to `data` and the gpuvm interval tree is controlled via the [`UniqueRefGpuVm`] type.
-/// * Does not contain any sparse `GpuVa` instances.
+/// * Does not contain any sparse [`GpuVa`] instances.
 #[pin_data]
 pub struct GpuVm<T: DriverGpuVm> {
     #[pin]
@@ -242,6 +252,9 @@ pub trait DriverGpuVm: Sized + Send {
     /// The kind of GEM object stored in this GPUVM.
     type Object: IntoGEMObject;
 
+    /// Data stored with each [`struct drm_gpuva`](struct@GpuVa).
+    type VaData;
+
     /// Data stored with each [`struct drm_gpuvm_bo`](struct@GpuVmBo).
     type VmBoData;
 }

diff --git a/rust/kernel/drm/gpuvm/va.rs b/rust/kernel/drm/gpuvm/va.rs
new file mode 100644
index 000000000000..227c259f7db9
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/va.rs
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+#![expect(dead_code)]
+use super::*;
+
+/// Represents that a range of a GEM object is mapped in this [`GpuVm`] instance.
+///
+/// Does not assume that the GEM lock is held.
+///
+/// # Invariants
+///
+/// * This is a valid `drm_gpuva` object that is resident in a [`GpuVm`] instance.
+/// * It is associated with a [`GpuVmBo`]. Or in other words, it's not the
+///   `gpuvm->kernel_alloc_node` and `DRM_GPUVA_SPARSE` is not set.
+/// * The associated [`GpuVmBo`] is part of the GEM list.
+#[repr(C)]
+#[pin_data]
+pub struct GpuVa<T: DriverGpuVm> {
+    #[pin]
+    inner: Opaque<bindings::drm_gpuva>,
+    #[pin]
+    data: T::VaData,
+}
+
+impl<T: DriverGpuVm> PartialEq for GpuVa<T> {
+    #[inline]
+    fn eq(&self, other: &Self) -> bool {
+        core::ptr::eq(self.as_raw(), other.as_raw())
+    }
+}
+impl<T: DriverGpuVm> Eq for GpuVa<T> {}
+
+impl<T: DriverGpuVm> GpuVa<T> {
+    /// Access this [`GpuVa`] from a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// * For the duration of `'a`, the pointer must reference a valid `drm_gpuva` associated with
+    ///   a [`GpuVm`].
+    /// * It must be associated with a [`GpuVmBo`].
+    /// * The associated [`GpuVmBo`] is part of the GEM list.
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuva) -> &'a Self {
+        // CAST: `drm_gpuva` is first field and `repr(C)`.
+        // SAFETY: The safety requirements match the invariants of `GpuVa`.
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Returns a raw pointer to the underlying C value.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::drm_gpuva {
+        self.inner.get()
+    }
+
+    /// Returns the address of this mapping in the GPU virtual address space.
+    #[inline]
+    pub fn addr(&self) -> u64 {
+        // SAFETY: The `va.addr` field of `drm_gpuva` is immutable.
+        unsafe { (*self.as_raw()).va.addr }
+    }
+
+    /// Returns the length of this mapping.
+    #[inline]
+    pub fn length(&self) -> u64 {
+        // SAFETY: The `va.range` field of `drm_gpuva` is immutable.
+        unsafe { (*self.as_raw()).va.range }
+    }
+
+    /// Returns `addr..addr+length`.
+    #[inline]
+    pub fn range(&self) -> Range<u64> {
+        let addr = self.addr();
+        addr..addr + self.length()
+    }
+
+    /// Returns the offset within the GEM object.
+    #[inline]
+    pub fn gem_offset(&self) -> u64 {
+        // SAFETY: The `gem.offset` field of `drm_gpuva` is immutable.
+        unsafe { (*self.as_raw()).gem.offset }
+    }
+
+    /// Returns the GEM object.
+    #[inline]
+    pub fn obj(&self) -> &T::Object {
+        // SAFETY: The `gem.obj` field of `drm_gpuva` is immutable. We know that it's not null
+        // because this VA is associated with a `GpuVmBo`.
+        unsafe { <T::Object as IntoGEMObject>::from_raw((*self.as_raw()).gem.obj) }
+    }
+
+    /// Returns the underlying [`GpuVmBo`] object that backs this [`GpuVa`].
+    #[inline]
+    pub fn vm_bo(&self) -> &GpuVmBo<T> {
+        // SAFETY: The `vm_bo` field of `drm_gpuva` is immutable. We know that it's not null
+        // because this VA is associated with a `GpuVmBo`. The BO is in the GEM list by the type
+        // invariants.
+        unsafe { GpuVmBo::from_raw((*self.as_raw()).vm_bo) }
+    }
+}
+
+/// A pre-allocated [`GpuVa`] object.
+///
+/// # Invariants
+///
+/// The memory is zeroed.
+pub struct GpuVaAlloc<T: DriverGpuVm>(KBox<MaybeUninit<GpuVa<T>>>);
+
+impl<T: DriverGpuVm> GpuVaAlloc<T> {
+    /// Pre-allocate a [`GpuVa`] object.
+    pub fn new(flags: AllocFlags) -> Result<GpuVaAlloc<T>, AllocError> {
+        // INVARIANTS: Memory allocated with __GFP_ZERO.
+        Ok(GpuVaAlloc(KBox::new_uninit(flags | __GFP_ZERO)?))
+    }
+
+    /// Prepare this `drm_gpuva` for insertion into the GPUVM.
+    #[must_use]
+    pub(super) fn prepare(mut self, va_data: impl PinInit<T::VaData>) -> *mut bindings::drm_gpuva {
+        let va_ptr = MaybeUninit::as_mut_ptr(&mut self.0);
+        // SAFETY: The `data` field is pinned.
+        let Ok(()) = unsafe { va_data.__pinned_init(&raw mut (*va_ptr).data) };
+        KBox::into_raw(self.0).cast()
+    }
+}
+
+/// A [`GpuVa`] object that has been removed.
+///
+/// # Invariants
+///
+/// The `drm_gpuva` is not resident in the [`GpuVm`].
+pub struct GpuVaRemoved<T: DriverGpuVm>(KBox<GpuVa<T>>);
+
+impl<T: DriverGpuVm> GpuVaRemoved<T> {
+    /// Convert a raw pointer into a [`GpuVaRemoved`].
+    ///
+    /// # Safety
+    ///
+    /// * Must have been removed from a [`GpuVm`].
+    /// * It must not be a `gpuvm->kernel_alloc_node` va.
+    pub(super) unsafe fn from_raw(ptr: *mut bindings::drm_gpuva) -> Self {
+        // SAFETY: Since it used to be a VA in a `GpuVm` and it's not a kernel_alloc_node, this
+        // pointer references a `GpuVa` with a valid `T::VaData`. Since it has been removed, we
+        // can take ownership of the allocation.
+        GpuVaRemoved(unsafe { KBox::from_raw(ptr.cast()) })
+    }
+
+    /// Take ownership of the VA data.
+    pub fn into_inner(self) -> T::VaData
+    where
+        T::VaData: Unpin,
+    {
+        KBox::into_inner(self.0).data
+    }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVaRemoved<T> {
+    type Target = T::VaData;
+    fn deref(&self) -> &T::VaData {
+        &self.0.data
+    }
+}
+
+impl<T: DriverGpuVm> DerefMut for GpuVaRemoved<T>
+where
+    T::VaData: Unpin,
+{
+    fn deref_mut(&mut self) -> &mut T::VaData {
+        &mut self.0.data
+    }
+}
diff --git a/rust/kernel/drm/gpuvm/vm_bo.rs b/rust/kernel/drm/gpuvm/vm_bo.rs
index 65f03f93bd21..05fd7998f4bd 100644
--- a/rust/kernel/drm/gpuvm/vm_bo.rs
+++ b/rust/kernel/drm/gpuvm/vm_bo.rs
@@ -114,7 +114,6 @@ impl<T: DriverGpuVm> GpuVmBo<T> {
     /// For the duration of `'a`, the pointer must reference a valid `drm_gpuvm_bo` associated with
     /// a [`GpuVm`]. The BO must also be present in the GEM list.
     #[inline]
-    #[expect(dead_code)]
     pub(crate) unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm_bo) -> &'a Self {
         // SAFETY: `drm_gpuvm_bo` is first field and `repr(C)`.
         unsafe { &*ptr.cast() }

-- 
2.53.0.1213.gd9a14994de-goog

From nobody Thu Apr 16 01:35:53 2026
Date: Thu, 09 Apr 2026 15:26:09 +0000
In-Reply-To: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
Message-ID: <20260409-gpuvm-rust-v6-4-b16e6ada7261@google.com>
References: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
Subject: [PATCH v6 4/5] rust: gpuvm: add GpuVmCore::sm_unmap()
From: Alice Ryhl <aliceryhl@google.com>
To: Danilo Krummrich, Daniel Almeida
Cc: Boris Brezillon, Janne Grunau, Matthew Brost, Thomas Hellström,
 Lyude Paul, Asahi Lina, Sumit Semwal, Christian König,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, linux-media@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Add the entrypoint for unmapping ranges in the GPUVM, and provide
callbacks and VA types for the implementation.
Co-developed-by: Asahi Lina
Signed-off-by: Asahi Lina
Reviewed-by: Daniel Almeida
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/drm/gpuvm/mod.rs    |  30 ++++-
 rust/kernel/drm/gpuvm/sm_ops.rs | 272 ++++++++++++++++++++++++++++++++++++++++
 rust/kernel/drm/gpuvm/va.rs     |   1 -
 rust/kernel/drm/gpuvm/vm_bo.rs  |   8 ++
 4 files changed, 306 insertions(+), 5 deletions(-)

diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
index 78951e8aa5d3..a6436abd0f9c 100644
--- a/rust/kernel/drm/gpuvm/mod.rs
+++ b/rust/kernel/drm/gpuvm/mod.rs
@@ -18,6 +18,7 @@
     bindings, drm,
     drm::gem::IntoGEMObject,
+    error::to_result,
     prelude::*,
     sync::aref::{
         ARef,
@@ -28,6 +29,7 @@
 
 use core::{
     cell::UnsafeCell,
+    marker::PhantomData,
     mem::{
         ManuallyDrop,
         MaybeUninit, //
@@ -43,12 +45,15 @@
     }, //
 };
 
-mod va;
-pub use self::va::*;
+mod sm_ops;
+pub use self::sm_ops::*;
 
 mod vm_bo;
 pub use self::vm_bo::*;
 
+mod va;
+pub use self::va::*;
+
 /// A DRM GPU VA manager.
 ///
 /// This object is refcounted, but the locations of mapped ranges may only be accessed or changed
@@ -104,8 +109,8 @@ const fn vtable() -> &'static bindings::drm_gpuvm_ops {
             vm_bo_free: GpuVmBo::<T>::FREE_FN,
             vm_bo_validate: None,
             sm_step_map: None,
-            sm_step_unmap: None,
-            sm_step_remap: None,
+            sm_step_unmap: Some(Self::sm_step_unmap),
+            sm_step_remap: Some(Self::sm_step_remap),
         }
     }
 
@@ -257,6 +262,23 @@ pub trait DriverGpuVm: Sized + Send {
 
     /// Data stored with each [`struct drm_gpuvm_bo`](struct@GpuVmBo).
     type VmBoData;
+
+    /// The private data passed to callbacks.
+    type SmContext<'ctx>;
+
+    /// Indicates that an existing mapping should be removed.
+    fn sm_step_unmap<'op, 'ctx>(
+        &mut self,
+        op: OpUnmap<'op, Self>,
+        context: &mut Self::SmContext<'ctx>,
+    ) -> Result<OpUnmapped<'op, Self>, Error>;
+
+    /// Indicates that an existing mapping should be split up.
+    fn sm_step_remap<'op, 'ctx>(
+        &mut self,
+        op: OpRemap<'op, Self>,
+        context: &mut Self::SmContext<'ctx>,
+    ) -> Result<OpRemapped<'op, Self>, Error>;
 }
 
 /// The core of the DRM GPU VA manager.
diff --git a/rust/kernel/drm/gpuvm/sm_ops.rs b/rust/kernel/drm/gpuvm/sm_ops.rs
new file mode 100644
index 000000000000..05f81c638aef
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/sm_ops.rs
@@ -0,0 +1,272 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+use super::*;
+
+/// The actual data that gets threaded through the callbacks.
+struct SmData<'a, 'ctx, T: DriverGpuVm> {
+    gpuvm: &'a mut UniqueRefGpuVm<T>,
+    user_context: &'a mut T::SmContext<'ctx>,
+}
+
+/// Represents an `sm_step_unmap` operation that has not yet been completed.
+pub struct OpUnmap<'op, T: DriverGpuVm> {
+    op: &'op bindings::drm_gpuva_op_unmap,
+    // This ensures that 'op is invariant, so that `OpUnmap<'long, T>` does not
+    // coerce to `OpUnmap<'short, T>`. This ensures that the user can't return the
+    // wrong `OpUnmapped` value.
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpUnmap<'op, T> {
+    /// Indicates whether this [`GpuVa`] is physically contiguous with the
+    /// original mapping request.
+    ///
+    /// Optionally, if `keep` is set, drivers may keep the actual page table
+    /// mappings for this `drm_gpuva`, adding the missing page table entries
+    /// only and updating the `drm_gpuvm` accordingly.
+    pub fn keep(&self) -> bool {
+        self.op.keep
+    }
+
+    /// The range being unmapped.
+    pub fn va(&self) -> &GpuVa<T> {
+        // SAFETY: This is a valid va. It's not the `kernel_alloc_node` because you can't unmap it,
+        // and it's not sparse by the `GpuVm` type invariants.
+        unsafe { GpuVa::<T>::from_raw(self.op.va) }
+    }
+
+    /// Remove the VA.
+    pub fn remove(self) -> (OpUnmapped<'op, T>, GpuVaRemoved<T>) {
+        // SAFETY: The op references a valid drm_gpuva in the GPUVM.
+        unsafe { bindings::drm_gpuva_unmap(self.op) };
+        // SAFETY: The va is no longer in the interval tree so we may unlink it.
+        unsafe { bindings::drm_gpuva_unlink_defer(self.op.va) };
+
+        // SAFETY: We just removed this va from the `GpuVm`.
+        let va = unsafe { GpuVaRemoved::from_raw(self.op.va) };
+
+        (
+            OpUnmapped {
+                _invariant: self._invariant,
+            },
+            va,
+        )
+    }
+}
+
+/// Represents a completed [`OpUnmap`] operation.
+pub struct OpUnmapped<'op, T> {
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+/// Represents an `sm_step_remap` operation that has not yet been completed.
+pub struct OpRemap<'op, T: DriverGpuVm> {
+    op: &'op bindings::drm_gpuva_op_remap,
+    // This ensures that 'op is invariant, so that `OpRemap<'long, T>` does not
+    // coerce to `OpRemap<'short, T>`. This ensures that the user can't return the
+    // wrong `OpRemapped` value.
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpRemap<'op, T> {
+    /// The preceding part of a split mapping.
+    #[inline]
+    pub fn prev(&self) -> Option<&OpRemapMapData> {
+        // SAFETY: We checked for null, so the pointer must be valid.
+        NonNull::new(self.op.prev).map(|ptr| unsafe { OpRemapMapData::from_raw(ptr) })
+    }
+
+    /// The subsequent part of a split mapping.
+    #[inline]
+    pub fn next(&self) -> Option<&OpRemapMapData> {
+        // SAFETY: We checked for null, so the pointer must be valid.
+        NonNull::new(self.op.next).map(|ptr| unsafe { OpRemapMapData::from_raw(ptr) })
+    }
+
+    /// Indicates whether the `drm_gpuva` being removed is physically contiguous with the original
+    /// mapping request.
+    ///
+    /// Optionally, if `keep` is set, drivers may keep the actual page table mappings for this
+    /// `drm_gpuva`, adding the missing page table entries only and updating the `drm_gpuvm`
+    /// accordingly.
+    #[inline]
+    pub fn keep(&self) -> bool {
+        // SAFETY: The unmap pointer is always valid.
+        unsafe { (*self.op.unmap).keep }
+    }
+
+    /// The range being unmapped.
+    #[inline]
+    pub fn va_to_unmap(&self) -> &GpuVa<T> {
+        // SAFETY: This is a valid va. It's not the `kernel_alloc_node` because you can't unmap it,
+        // and it's not sparse by the `GpuVm` type invariants.
+        unsafe { GpuVa::<T>::from_raw((*self.op.unmap).va) }
+    }
+
+    /// The [`drm_gem_object`](DriverGpuVm::Object) whose VA is being remapped.
+    #[inline]
+    pub fn obj(&self) -> &T::Object {
+        self.va_to_unmap().obj()
+    }
+
+    /// The [`GpuVmBo`] that is being remapped.
+    #[inline]
+    pub fn vm_bo(&self) -> &GpuVmBo<T> {
+        self.va_to_unmap().vm_bo()
+    }
+
+    /// Update the GPUVM to perform the remapping.
+    pub fn remap(
+        self,
+        va_alloc: [GpuVaAlloc<T>; 2],
+        prev_data: impl PinInit<T::VaData>,
+        next_data: impl PinInit<T::VaData>,
+    ) -> (OpRemapped<'op, T>, OpRemapRet<T>) {
+        let [va1, va2] = va_alloc;
+
+        let mut unused_va = None;
+        let mut prev_ptr = ptr::null_mut();
+        let mut next_ptr = ptr::null_mut();
+        if self.prev().is_some() {
+            prev_ptr = va1.prepare(prev_data);
+        } else {
+            unused_va = Some(va1);
+        }
+        if self.next().is_some() {
+            next_ptr = va2.prepare(next_data);
+        } else {
+            unused_va = Some(va2);
+        }
+
+        // SAFETY: The pointers are non-null when required.
+        unsafe { bindings::drm_gpuva_remap(prev_ptr, next_ptr, self.op) };
+
+        let gpuva_guard = self.vm_bo().lock_gpuva();
+        if !prev_ptr.is_null() {
+            // SAFETY: The prev_ptr is a valid drm_gpuva prepared for insertion. The vm_bo is still
+            // valid as the not-yet-unlinked gpuva holds a refcount on the vm_bo.
+            unsafe { bindings::drm_gpuva_link(prev_ptr, self.vm_bo().as_raw()) };
+        }
+        if !next_ptr.is_null() {
+            // SAFETY: The next_ptr is a valid drm_gpuva prepared for insertion. The vm_bo is still
+            // valid as the not-yet-unlinked gpuva holds a refcount on the vm_bo.
+            unsafe { bindings::drm_gpuva_link(next_ptr, self.vm_bo().as_raw()) };
+        }
+        drop(gpuva_guard);
+
+        // SAFETY: The va is no longer in the interval tree so we may unlink it.
+        unsafe { bindings::drm_gpuva_unlink_defer((*self.op.unmap).va) };
+
+        (
+            OpRemapped {
+                _invariant: self._invariant,
+            },
+            OpRemapRet {
+                // SAFETY: We just removed this va from the `GpuVm`.
+                unmapped_va: unsafe { GpuVaRemoved::from_raw((*self.op.unmap).va) },
+                unused_va,
+            },
+        )
+    }
+}
+
+/// Part of an [`OpRemap`] that represents a new mapping.
+#[repr(transparent)]
+pub struct OpRemapMapData(bindings::drm_gpuva_op_map);
+
+impl OpRemapMapData {
+    /// # Safety
+    ///
+    /// Must reference a valid `drm_gpuva_op_map` for the duration of `'a`.
+    unsafe fn from_raw<'a>(ptr: NonNull<bindings::drm_gpuva_op_map>) -> &'a Self {
+        // SAFETY: Ok per the safety requirements.
+        unsafe { ptr.cast().as_ref() }
+    }
+
+    /// The base address of the new mapping.
+    pub fn addr(&self) -> u64 {
+        self.0.va.addr
+    }
+
+    /// The length of the new mapping.
+    pub fn length(&self) -> u64 {
+        self.0.va.range
+    }
+
+    /// The offset within the [`drm_gem_object`](DriverGpuVm::Object).
+    pub fn gem_offset(&self) -> u64 {
+        self.0.gem.offset
+    }
+}
+
+/// Struct containing objects removed or not used by [`OpRemap::remap`].
+pub struct OpRemapRet<T: DriverGpuVm> {
+    /// The `drm_gpuva` that was removed.
+    pub unmapped_va: GpuVaRemoved<T>,
+    /// If the remap did not split the region into two pieces, then the unused `drm_gpuva` is
+    /// returned here.
+    pub unused_va: Option<GpuVaAlloc<T>>,
+}
+
+/// Represents a completed [`OpRemap`] operation.
+pub struct OpRemapped<'op, T> {
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<T: DriverGpuVm> UniqueRefGpuVm<T> {
+    /// Remove any mappings in the given region.
+    ///
+    /// Internally calls [`DriverGpuVm::sm_step_unmap`] for ranges entirely contained within the
+    /// given range, and [`DriverGpuVm::sm_step_remap`] for ranges that overlap with the range.
+    #[inline]
+    pub fn sm_unmap(&mut self, addr: u64, length: u64, context: &mut T::SmContext<'_>) -> Result {
+        let gpuvm = self.as_raw();
+        let mut p = SmData {
+            gpuvm: self,
+            user_context: context,
+        };
+        // SAFETY:
+        // * `gpuvm` is a valid pointer to this GPUVM.
+        // * The private data is valid to be interpreted as `SmData`.
+        to_result(unsafe { bindings::drm_gpuvm_sm_unmap(gpuvm, (&raw mut p).cast(), addr, length) })
+    }
+}
+
+impl<T: DriverGpuVm> GpuVm<T> {
+    /// # Safety
+    ///
+    /// Must be called from `sm_unmap` with a pointer to `SmData`.
+    pub(super) unsafe extern "C" fn sm_step_unmap(
+        op: *mut bindings::drm_gpuva_op,
+        p: *mut c_void,
+    ) -> c_int {
+        // SAFETY: The caller provides a pointer to `SmData`.
+        let p = unsafe { &mut *p.cast::<SmData<'_, '_, T>>() };
+        let op = OpUnmap {
+            // SAFETY: sm_step_unmap is called with an unmap operation.
+            op: unsafe { &(*op).__bindgen_anon_1.unmap },
+            _invariant: PhantomData,
+        };
+        match p.gpuvm.data().sm_step_unmap(op, p.user_context) {
+            Ok(OpUnmapped { .. }) => 0,
+            Err(err) => err.to_errno(),
+        }
+    }
+
+    /// # Safety
+    ///
+    /// Must be called from `sm_unmap` with a pointer to `SmData`.
+    pub(super) unsafe extern "C" fn sm_step_remap(
+        op: *mut bindings::drm_gpuva_op,
+        p: *mut c_void,
+    ) -> c_int {
+        // SAFETY: The caller provides a pointer to `SmData`.
+        let p = unsafe { &mut *p.cast::<SmData<'_, '_, T>>() };
+        let op = OpRemap {
+            // SAFETY: sm_step_remap is called with a remap operation.
+            op: unsafe { &(*op).__bindgen_anon_1.remap },
+            _invariant: PhantomData,
+        };
+        match p.gpuvm.data().sm_step_remap(op, p.user_context) {
+            Ok(OpRemapped { .. }) => 0,
+            Err(err) => err.to_errno(),
+        }
+    }
+}
diff --git a/rust/kernel/drm/gpuvm/va.rs b/rust/kernel/drm/gpuvm/va.rs
index 227c259f7db9..0b09fe44ab39 100644
--- a/rust/kernel/drm/gpuvm/va.rs
+++ b/rust/kernel/drm/gpuvm/va.rs
@@ -1,6 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 
-#![expect(dead_code)]
 use super::*;
 
 /// Represents that a range of a GEM object is mapped in this [`GpuVm`] instance.
diff --git a/rust/kernel/drm/gpuvm/vm_bo.rs b/rust/kernel/drm/gpuvm/vm_bo.rs
index 05fd7998f4bd..c064ac63897b 100644
--- a/rust/kernel/drm/gpuvm/vm_bo.rs
+++ b/rust/kernel/drm/gpuvm/vm_bo.rs
@@ -144,6 +144,14 @@ pub fn obj(&self) -> &T::Object {
     pub fn data(&self) -> &T::VmBoData {
         &self.data
     }
+
+    pub(super) fn lock_gpuva(&self) -> crate::sync::MutexGuard<'_, ()> {
+        // SAFETY: The GEM object is valid.
+        let ptr = unsafe { &raw mut (*self.obj().as_raw()).gpuva.lock };
+        // SAFETY: The GEM object is valid, so the mutex is properly initialized.
+        let mutex = unsafe { crate::sync::Mutex::from_raw(ptr) };
+        mutex.lock()
+    }
 }
 
 /// A pre-allocated [`GpuVmBo`] object.
-- 
2.53.0.1213.gd9a14994de-goog

From nobody Thu Apr 16 01:35:53 2026
Date: Thu, 09 Apr 2026 15:26:10 +0000
In-Reply-To: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
Mime-Version: 1.0
References: <20260409-gpuvm-rust-v6-0-b16e6ada7261@google.com>
Message-ID:
<20260409-gpuvm-rust-v6-5-b16e6ada7261@google.com>
Subject: [PATCH v6 5/5] rust: gpuvm: add GpuVmCore::sm_map()
From: Alice Ryhl <aliceryhl@google.com>
To: Danilo Krummrich, Daniel Almeida
Cc: Boris Brezillon, Janne Grunau, Matthew Brost, Thomas Hellström,
 Lyude Paul, Asahi Lina, Sumit Semwal, Christian König,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, linux-media@vger.kernel.org,
 Alice Ryhl <aliceryhl@google.com>
Content-Type: text/plain; charset="utf-8"

Finally also add the operation for creating new mappings. Mapping
operations need extra data in the context since they involve a vm_bo
coming from the outside.

Co-developed-by: Asahi Lina
Signed-off-by: Asahi Lina
Reviewed-by: Daniel Almeida
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/drm/gpuvm/mod.rs    |   9 ++-
 rust/kernel/drm/gpuvm/sm_ops.rs | 167 ++++++++++++++++++++++++++++++++++--
 2 files changed, 170 insertions(+), 6 deletions(-)

diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
index a6436abd0f9c..dc755f248143 100644
--- a/rust/kernel/drm/gpuvm/mod.rs
+++ b/rust/kernel/drm/gpuvm/mod.rs
@@ -108,7 +108,7 @@ const fn vtable() -> &'static bindings::drm_gpuvm_ops {
             vm_bo_alloc: GpuVmBo::<T>::ALLOC_FN,
             vm_bo_free: GpuVmBo::<T>::FREE_FN,
             vm_bo_validate: None,
-            sm_step_map: None,
+            sm_step_map: Some(Self::sm_step_map),
             sm_step_unmap: Some(Self::sm_step_unmap),
             sm_step_remap: Some(Self::sm_step_remap),
         }
@@ -266,6 +266,13 @@ pub trait DriverGpuVm: Sized + Send {
     /// The private data passed to callbacks.
     type SmContext<'ctx>;
 
+    /// Indicates that a new mapping should be created.
+    fn sm_step_map<'op, 'ctx>(
+        &mut self,
+        op: OpMap<'op, Self>,
+        context: &mut Self::SmContext<'ctx>,
+    ) -> Result<OpMapped<'op, Self>, Error>;
+
     /// Indicates that an existing mapping should be removed.
     fn sm_step_unmap<'op, 'ctx>(
         &mut self,
diff --git a/rust/kernel/drm/gpuvm/sm_ops.rs b/rust/kernel/drm/gpuvm/sm_ops.rs
index 05f81c638aef..69a8e5ab2821 100644
--- a/rust/kernel/drm/gpuvm/sm_ops.rs
+++ b/rust/kernel/drm/gpuvm/sm_ops.rs
@@ -8,6 +8,108 @@ struct SmData<'a, 'ctx, T: DriverGpuVm> {
     user_context: &'a mut T::SmContext<'ctx>,
 }
 
+/// Adds an extra field to `SmData` for `sm_map()` callbacks.
+///
+/// # Invariants
+///
+/// `self.vm_bo.gpuvm() == self.sm_data.gpuvm`.
+#[repr(C)]
+struct SmMapData<'a, 'ctx, T: DriverGpuVm> {
+    sm_data: SmData<'a, 'ctx, T>,
+    vm_bo: &'a GpuVmBo<T>,
+}
+
+/// The argument for [`UniqueRefGpuVm::sm_map`].
+pub struct OpMapRequest<'a, 'ctx, T: DriverGpuVm> {
+    /// Address in GPU virtual address space.
+    pub addr: u64,
+    /// Length of mapping to create.
+    pub range: u64,
+    /// Offset in GEM object.
+    pub gem_offset: u64,
+    /// The GEM object to map.
+    pub vm_bo: &'a GpuVmBo<T>,
+    /// The user-provided context type.
+    pub context: &'a mut T::SmContext<'ctx>,
+}
+
+impl<'a, 'ctx, T: DriverGpuVm> OpMapRequest<'a, 'ctx, T> {
+    fn raw_request(&self) -> bindings::drm_gpuvm_map_req {
+        bindings::drm_gpuvm_map_req {
+            map: bindings::drm_gpuva_op_map {
+                va: bindings::drm_gpuva_op_map__bindgen_ty_1 {
+                    addr: self.addr,
+                    range: self.range,
+                },
+                gem: bindings::drm_gpuva_op_map__bindgen_ty_2 {
+                    offset: self.gem_offset,
+                    obj: self.vm_bo.obj().as_raw(),
+                },
+            },
+        }
+    }
+}
+
+/// Represents an `sm_step_map` operation that has not yet been completed.
+pub struct OpMap<'op, T: DriverGpuVm> {
+    op: &'op bindings::drm_gpuva_op_map,
+    // Since these abstractions are designed for immediate mode, the VM BO needs to be
+    // pre-allocated, so we always have it available when we reach this point.
+    vm_bo: &'op GpuVmBo<T>,
+    // This ensures that 'op is invariant, so that `OpMap<'long, T>` does not
+    // coerce to `OpMap<'short, T>`. This ensures that the user can't return
+    // the wrong `OpMapped` value.
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpMap<'op, T> {
+    /// The base address of the new mapping.
+    pub fn addr(&self) -> u64 {
+        self.op.va.addr
+    }
+
+    /// The length of the new mapping.
+    pub fn length(&self) -> u64 {
+        self.op.va.range
+    }
+
+    /// The offset within the [`drm_gem_object`](DriverGpuVm::Object).
+    pub fn gem_offset(&self) -> u64 {
+        self.op.gem.offset
+    }
+
+    /// The [`drm_gem_object`](DriverGpuVm::Object) to map.
+    pub fn obj(&self) -> &T::Object {
+        // SAFETY: The `obj` pointer is guaranteed to be valid.
+        unsafe { <T::Object>::from_raw(self.op.gem.obj) }
+    }
+
+    /// The [`GpuVmBo`] that the new VA will be associated with.
+    pub fn vm_bo(&self) -> &GpuVmBo<T> {
+        self.vm_bo
+    }
+
+    /// Use the pre-allocated VA to carry out this map operation.
+    pub fn insert(self, va: GpuVaAlloc<T>, va_data: impl PinInit) -> OpMapped<'op, T> {
+        let va = va.prepare(va_data);
+        // SAFETY: By the type invariants we may access the interval tree.
+        unsafe { bindings::drm_gpuva_map(self.vm_bo.gpuvm().as_raw(), va, self.op) };
+
+        let _gpuva_guard = self.vm_bo().lock_gpuva();
+        // SAFETY: The va is prepared for insertion, and we hold the GEM lock.
+        unsafe { bindings::drm_gpuva_link(va, self.vm_bo.as_raw()) };
+
+        OpMapped {
+            _invariant: self._invariant,
+        }
+    }
+}
+
+/// Represents a completed [`OpMap`] operation.
+pub struct OpMapped<'op, T> {
+    _invariant: PhantomData<*mut &'op mut T>,
+}
+
 /// Represents an `sm_step_unmap` operation that has not yet been completed.
 pub struct OpUnmap<'op, T: DriverGpuVm> {
     op: &'op bindings::drm_gpuva_op_unmap,
@@ -213,6 +315,35 @@ pub struct OpRemapped<'op, T> {
 }
 
 impl<T: DriverGpuVm> UniqueRefGpuVm<T> {
+    /// Create a mapping, removing or remapping anything that overlaps.
+    ///
+    /// Internally calls the [`DriverGpuVm`] callbacks similar to [`Self::sm_unmap`], except that
+    /// [`DriverGpuVm::sm_step_map`] is called once to create the requested mapping.
+    #[inline]
+    pub fn sm_map(&mut self, req: OpMapRequest<'_, '_, T>) -> Result {
+        if req.vm_bo.gpuvm() != &**self {
+            return Err(EINVAL);
+        }
+
+        let gpuvm = self.as_raw();
+        let raw_req = req.raw_request();
+        // INVARIANT: Checked above that `vm_bo.gpuvm() == self`.
+        let mut p = SmMapData {
+            sm_data: SmData {
+                gpuvm: self,
+                user_context: req.context,
+            },
+            vm_bo: req.vm_bo,
+        };
+        // SAFETY:
+        // * raw_request() creates a valid request.
+        // * The private data is valid to be interpreted as both SmData and SmMapData since the
+        //   first field of SmMapData is SmData.
+        to_result(unsafe {
+            bindings::drm_gpuvm_sm_map(gpuvm, (&raw mut p).cast(), &raw const raw_req)
+        })
+    }
+
     /// Remove any mappings in the given region.
     ///
     /// Internally calls [`DriverGpuVm::sm_step_unmap`] for ranges entirely contained within the
@@ -226,19 +357,45 @@ pub fn sm_unmap(&mut self, addr: u64, length: u64, context: &mut T::SmContext<'_
         };
         // SAFETY:
         // * raw_request() creates a valid request.
-        // * The private data is valid to be interpreted as SmData.
+        // * The private data is a valid SmData.
         to_result(unsafe { bindings::drm_gpuvm_sm_unmap(gpuvm, (&raw mut p).cast(), addr, length) })
     }
 }
 
 impl<T: DriverGpuVm> GpuVm<T> {
     /// # Safety
-    /// Must be called from `sm_unmap` with a pointer to `SmData`.
+    /// Must be called from `sm_map` with a pointer to `SmMapData`.
+    pub(super) unsafe extern "C" fn sm_step_map(
+        op: *mut bindings::drm_gpuva_op,
+        p: *mut c_void,
+    ) -> c_int {
+        // SAFETY: If we reach `sm_step_map` then we were called from `sm_map` which always passes
+        // an `SmMapData` as private data.
+        let p = unsafe { &mut *p.cast::<SmMapData<'_, '_, T>>() };
+        let op = OpMap {
+            // SAFETY: sm_step_map is called with a map operation.
+            op: unsafe { &(*op).__bindgen_anon_1.map },
+            vm_bo: p.vm_bo,
+            _invariant: PhantomData,
+        };
+        match p
+            .sm_data
+            .gpuvm
+            .data()
+            .sm_step_map(op, p.sm_data.user_context)
+        {
+            Ok(OpMapped { .. }) => 0,
+            Err(err) => err.to_errno(),
+        }
+    }
+
+    /// # Safety
+    /// Must be called from `sm_map` or `sm_unmap` with a pointer to `SmMapData` or `SmData`.
     pub(super) unsafe extern "C" fn sm_step_unmap(
         op: *mut bindings::drm_gpuva_op,
         p: *mut c_void,
     ) -> c_int {
-        // SAFETY: The caller provides a pointer to `SmData`.
+        // SAFETY: The caller provides a pointer that can be treated as `SmData`.
         let p = unsafe { &mut *p.cast::<SmData<'_, '_, T>>() };
         let op = OpUnmap {
             // SAFETY: sm_step_unmap is called with an unmap operation.
@@ -252,12 +409,12 @@ impl<T: DriverGpuVm> GpuVm<T> {
     }
 
     /// # Safety
-    /// Must be called from `sm_unmap` with a pointer to `SmData`.
+    /// Must be called from `sm_map` or `sm_unmap` with a pointer to `SmMapData` or `SmData`.
     pub(super) unsafe extern "C" fn sm_step_remap(
        op: *mut bindings::drm_gpuva_op,
        p: *mut c_void,
     ) -> c_int {
-        // SAFETY: The caller provides a pointer to `SmData`.
+        // SAFETY: The caller provides a pointer that can be treated as `SmData`.
         let p = unsafe { &mut *p.cast::<SmData<'_, '_, T>>() };
         let op = OpRemap {
             // SAFETY: sm_step_remap is called with a remap operation.
-- 
2.53.0.1213.gd9a14994de-goog
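[Editor's note: the split semantics that `drm_gpuvm_sm_map()` drives through the `sm_step_map`/`sm_step_remap`/`sm_step_unmap` callbacks above can be sketched in plain userspace Rust. The `Mapping` type and `remap_pieces()` helper below are hypothetical illustrations, not part of this patch or of the kernel API; they only model how one overlapping mapping is split into "prev" and "next" remainders, the situation in which `OpRemapRet::unused_va` reports a pre-allocated va that was not needed.]

```rust
/// A toy stand-in for a GPU VA mapping (hypothetical, illustration only).
#[derive(Debug, PartialEq)]
struct Mapping {
    addr: u64,       // base address in GPU virtual address space
    range: u64,      // length of the mapping
    gem_offset: u64, // offset into the backing GEM object
}

/// Split `old` against a new request covering [addr, addr + range).
///
/// Returns the surviving "prev" and "next" pieces. This mirrors how an
/// overlap becomes a remap step with up to two remaining regions; when
/// only one side survives, one pre-allocated va goes unused.
fn remap_pieces(old: &Mapping, addr: u64, range: u64) -> (Option<Mapping>, Option<Mapping>) {
    let old_end = old.addr + old.range;
    let new_end = addr + range;
    // Piece below the new request, keeping its original GEM offset.
    let prev = (old.addr < addr).then(|| Mapping {
        addr: old.addr,
        range: addr - old.addr,
        gem_offset: old.gem_offset,
    });
    // Piece above the new request; its GEM offset advances by the
    // distance from the old base to the end of the new request.
    let next = (new_end < old_end).then(|| Mapping {
        addr: new_end,
        range: old_end - new_end,
        gem_offset: old.gem_offset + (new_end - old.addr),
    });
    (prev, next)
}

fn main() {
    // Existing mapping [0x1000, 0x5000); new request [0x2000, 0x3000).
    let old = Mapping { addr: 0x1000, range: 0x4000, gem_offset: 0 };
    let (prev, next) = remap_pieces(&old, 0x2000, 0x1000);
    // Both sides survive, so both pre-allocated vas are consumed.
    assert_eq!(prev, Some(Mapping { addr: 0x1000, range: 0x1000, gem_offset: 0 }));
    assert_eq!(next, Some(Mapping { addr: 0x3000, range: 0x2000, gem_offset: 0x2000 }));
    // Request reaching the end of the old mapping: only "prev" survives.
    let (prev, next) = remap_pieces(&old, 0x2000, 0x3000);
    assert!(prev.is_some() && next.is_none());
    println!("ok");
}
```

Running the sketch prints `ok`; the second case is the one where the real abstraction hands the spare va back through `OpRemapRet::unused_va`.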