From: Alice Ryhl <aliceryhl@google.com>
Date: Fri, 30 Jan 2026 14:24:12 +0000
Subject: [PATCH v4 3/6] rust: gpuvm: add GpuVm::obtain()
To: Danilo Krummrich, Daniel Almeida
Cc: Boris Brezillon, Janne Grunau, Matthew Brost, Thomas Hellström,
    Lyude Paul, Asahi Lina, dri-devel@lists.freedesktop.org,
    linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
    Alice Ryhl <aliceryhl@google.com>
Message-ID: <20260130-gpuvm-rust-v4-3-8364d104ff40@google.com>
In-Reply-To: <20260130-gpuvm-rust-v4-0-8364d104ff40@google.com>
References: <20260130-gpuvm-rust-v4-0-8364d104ff40@google.com>

This provides a mechanism to create (or look up) VMBO instances, which
represent the mapping between GPUVM and GEM objects.

The GpuVmBoRegistered type can be considered like ARef<GpuVmBo<T>>, except
that no way to increment the refcount is provided. The GpuVmBoAlloc type is
more akin to a pre-allocated GpuVmBo, so it's not really a GpuVmBo yet. Its
destructor could call drm_gpuvm_bo_destroy_not_in_lists(), but as the type
is currently private and its destructor is never actually called anywhere,
this perf optimization does not need to happen now.

Pre-allocating and obtaining the gpuvm_bo object is exposed as a single
step. This could theoretically be a problem if one wanted to call
drm_gpuvm_bo_obtain_prealloc() during the fence signalling critical path,
but that is not possible because:

1. Adding the BO to the extobj list requires the resv lock, so it cannot
   happen during the fence signalling critical path.
2. obtain() requires that the BO is not in the extobj list, so obtain()
   must be called before adding the BO to the extobj list.

Thus, drm_gpuvm_bo_obtain_prealloc() cannot be called during the fence
signalling critical path. (For extobjs, at least.)
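As a rough usage sketch (not part of this patch; the helper name and the
import paths are assumptions, and real drivers would do this while setting
up a mapping), a driver holding a GpuVm<T> would obtain the per-(VM, BO)
handle like this:

    use kernel::alloc::AllocError;
    use kernel::drm::gpuvm::{DriverGpuVm, GpuVm, GpuVmBoRegistered};
    use kernel::prelude::*; // assumed to bring `PinInit` into scope

    /// Hypothetical helper: map `bo` into `vm`, creating or reusing the
    /// `drm_gpuvm_bo` that ties the two together.
    fn vm_bo_for_mapping<T: DriverGpuVm>(
        vm: &GpuVm<T>,
        bo: &T::Object,
        init: impl PinInit<T::VmBoData>,
    ) -> Result<GpuVmBoRegistered<T>, AllocError> {
        // The first call creates the vm_bo; later calls return the existing
        // one. External objects are also added to the extobj list under the
        // VM's resv lock.
        let vm_bo = vm.obtain(bo, init)?;
        // The handle derefs to GpuVmBo<T>, so the driver data is reachable;
        // dropping the handle performs a deferred put.
        let _driver_data: &T::VmBoData = vm_bo.data();
        Ok(vm_bo)
    }

The returned GpuVmBoRegistered<T> keeps the drm_gpuvm_bo alive (like an ARef
without a way to take additional references), so the driver would store it in
whatever structure tracks the mapping.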
Reviewed-by: Daniel Almeida
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/drm/gpuvm/mod.rs   |  32 +++++-
 rust/kernel/drm/gpuvm/vm_bo.rs | 219 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 248 insertions(+), 3 deletions(-)

diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
index dcb1fccc766115c6a0ca03bda578e3f3e5791492..8f2f1c135e9dd071fd4b4ad0762a3e79dc922eea 100644
--- a/rust/kernel/drm/gpuvm/mod.rs
+++ b/rust/kernel/drm/gpuvm/mod.rs
@@ -25,13 +25,20 @@
 
 use core::{
     cell::UnsafeCell,
+    mem::ManuallyDrop,
     ops::{
         Deref,
         Range, //
     },
-    ptr::NonNull, //
+    ptr::{
+        self,
+        NonNull, //
+    }, //
 };
 
+mod vm_bo;
+pub use self::vm_bo::*;
+
 /// A DRM GPU VA manager.
 ///
 /// This object is refcounted, but the "core" is only accessible using a special unique handle. The
@@ -68,8 +75,8 @@ const fn vtable() -> &'static bindings::drm_gpuvm_ops {
             vm_free: Some(Self::vm_free),
             op_alloc: None,
             op_free: None,
-            vm_bo_alloc: None,
-            vm_bo_free: None,
+            vm_bo_alloc: GpuVmBo::<T>::ALLOC_FN,
+            vm_bo_free: GpuVmBo::<T>::FREE_FN,
             vm_bo_validate: None,
             sm_step_map: None,
             sm_step_unmap: None,
@@ -166,6 +173,16 @@ pub fn va_range(&self) -> Range<u64> {
         Range { start, end }
     }
 
+    /// Get or create the [`GpuVmBo`] for this gem object.
+    #[inline]
+    pub fn obtain(
+        &self,
+        obj: &T::Object,
+        data: impl PinInit<T::VmBoData>,
+    ) -> Result<GpuVmBoRegistered<T>, AllocError> {
+        Ok(GpuVmBoAlloc::new(self, obj, data)?.obtain())
+    }
+
     /// Clean up buffer objects that are no longer used.
     #[inline]
     pub fn deferred_cleanup(&self) {
@@ -191,6 +208,12 @@ pub fn is_extobj(&self, obj: &T::Object) -> bool {
         // SAFETY: By type invariants we can free it when refcount hits zero.
         drop(unsafe { KBox::from_raw(me) })
     }
+
+    #[inline]
+    fn raw_resv(&self) -> *mut bindings::dma_resv {
+        // SAFETY: `r_obj` is immutable and valid for duration of GPUVM.
+        unsafe { (*(*self.as_raw()).r_obj).resv }
+    }
 }
 
 /// The manager for a GPUVM.
@@ -200,6 +223,9 @@ pub trait DriverGpuVm: Sized {
 
     /// The kind of GEM object stored in this GPUVM.
     type Object: IntoGEMObject;
+
+    /// Data stored with each `struct drm_gpuvm_bo`.
+    type VmBoData;
 }
 
 /// The core of the DRM GPU VA manager.
diff --git a/rust/kernel/drm/gpuvm/vm_bo.rs b/rust/kernel/drm/gpuvm/vm_bo.rs
new file mode 100644
index 0000000000000000000000000000000000000000..272e1a83c2d5f43c42dbdd9e09f51394a1e855b6
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/vm_bo.rs
@@ -0,0 +1,219 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+use super::*;
+
+/// Represents that a given GEM object has at least one mapping on this [`GpuVm`] instance.
+///
+/// Does not assume that the GEM lock is held.
+#[repr(C)]
+#[pin_data]
+pub struct GpuVmBo<T: DriverGpuVm> {
+    #[pin]
+    inner: Opaque<bindings::drm_gpuvm_bo>,
+    #[pin]
+    data: T::VmBoData,
+}
+
+impl<T: DriverGpuVm> GpuVmBo<T> {
+    pub(super) const ALLOC_FN: Option<unsafe extern "C" fn() -> *mut bindings::drm_gpuvm_bo> = {
+        use core::alloc::Layout;
+        let base = Layout::new::<bindings::drm_gpuvm_bo>();
+        let rust = Layout::new::<Self>();
+        assert!(base.size() <= rust.size());
+        if base.size() != rust.size() || base.align() != rust.align() {
+            Some(Self::vm_bo_alloc)
+        } else {
+            // This causes GPUVM to allocate a `GpuVmBo` with `kzalloc(sizeof(drm_gpuvm_bo))`.
+            None
+        }
+    };
+
+    pub(super) const FREE_FN: Option<unsafe extern "C" fn(*mut bindings::drm_gpuvm_bo)> = {
+        if core::mem::needs_drop::<Self>() {
+            Some(Self::vm_bo_free)
+        } else {
+            // This causes GPUVM to free a `GpuVmBo` with `kfree`.
+            None
+        }
+    };
+
+    /// Custom function for allocating a `drm_gpuvm_bo`.
+    ///
+    /// # Safety
+    ///
+    /// Always safe to call.
+    unsafe extern "C" fn vm_bo_alloc() -> *mut bindings::drm_gpuvm_bo {
+        KBox::<Self>::new_uninit(GFP_KERNEL | __GFP_ZERO)
+            .map(KBox::into_raw)
+            .unwrap_or(ptr::null_mut())
+            .cast()
+    }
+
+    /// Custom function for freeing a `drm_gpuvm_bo`.
+    ///
+    /// # Safety
+    ///
+    /// The pointer must have been allocated with [`GpuVmBo::ALLOC_FN`], and must not be used after
+    /// this call.
+    unsafe extern "C" fn vm_bo_free(ptr: *mut bindings::drm_gpuvm_bo) {
+        // SAFETY:
+        // * The ptr was allocated from kmalloc with the layout of `GpuVmBo`.
+        // * `ptr->inner` has no destructor.
+        // * `ptr->data` contains a valid `T::VmBoData` that we can drop.
+        drop(unsafe { KBox::<Self>::from_raw(ptr.cast()) });
+    }
+
+    /// Access this [`GpuVmBo`] from a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// For the duration of `'a`, the pointer must reference a valid `drm_gpuvm_bo` associated with
+    /// a [`GpuVm`].
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm_bo) -> &'a Self {
+        // SAFETY: `drm_gpuvm_bo` is first field and `repr(C)`.
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Returns a raw pointer to underlying C value.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+        self.inner.get()
+    }
+
+    /// The [`GpuVm`] that this GEM object is mapped in.
+    #[inline]
+    pub fn gpuvm(&self) -> &GpuVm<T> {
+        // SAFETY: The `vm` pointer is guaranteed to be valid.
+        unsafe { GpuVm::<T>::from_raw((*self.inner.get()).vm) }
+    }
+
+    /// The [`drm_gem_object`](crate::gem::Object) for these mappings.
+    #[inline]
+    pub fn obj(&self) -> &T::Object {
+        // SAFETY: The `obj` pointer is guaranteed to be valid.
+        unsafe { <T::Object as IntoGEMObject>::from_raw((*self.inner.get()).obj) }
+    }
+
+    /// The driver data associated with this buffer object.
+    #[inline]
+    pub fn data(&self) -> &T::VmBoData {
+        &self.data
+    }
+}
+
+/// A pre-allocated [`GpuVmBo`] object.
+///
+/// # Invariants
+///
+/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData`, has a refcount of one, and is
+/// absent from any gem, extobj, or evict lists.
+pub(super) struct GpuVmBoAlloc<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
+
+impl<T: DriverGpuVm> GpuVmBoAlloc<T> {
+    /// Create a new pre-allocated [`GpuVmBo`].
+    ///
+    /// It's intentional that the initializer is infallible because `drm_gpuvm_bo_put` will call
+    /// drop on the data, so we don't have a way to free it when the data is missing.
+    #[inline]
+    pub(super) fn new(
+        gpuvm: &GpuVm<T>,
+        gem: &T::Object,
+        value: impl PinInit<T::VmBoData>,
+    ) -> Result<GpuVmBoAlloc<T>, AllocError> {
+        // CAST: `GpuVmBo::vm_bo_alloc` ensures that this memory was allocated with the layout
+        // of `GpuVmBo`. The type is repr(C), so `container_of` is not required.
+        // SAFETY: The provided gpuvm and gem ptrs are valid for the duration of this call.
+        let raw_ptr = unsafe {
+            bindings::drm_gpuvm_bo_create(gpuvm.as_raw(), gem.as_raw()).cast::<GpuVmBo<T>>()
+        };
+        let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
+        // SAFETY: `ptr->data` is a valid pinned location.
+        let Ok(()) = unsafe { value.__pinned_init(&raw mut (*raw_ptr).data) };
+        // INVARIANTS: We just created the vm_bo so it's absent from lists, and the data is valid
+        // as we just initialized it.
+        Ok(GpuVmBoAlloc(ptr))
+    }
+
+    /// Returns a raw pointer to underlying C value.
+    #[inline]
+    pub(super) fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+        // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
+        unsafe { (*self.0.as_ptr()).inner.get() }
+    }
+
+    /// Register this pre-allocated [`GpuVmBo`], reusing an existing one for this gem object if
+    /// there already is one.
+    #[inline]
+    pub(super) fn obtain(self) -> GpuVmBoRegistered<T> {
+        let me = ManuallyDrop::new(self);
+        // SAFETY: Valid `drm_gpuvm_bo` not already in the lists.
+        let ptr = unsafe { bindings::drm_gpuvm_bo_obtain_prealloc(me.as_raw()) };
+
+        // Add the vm_bo to the extobj list if it's an external object, and if the vm_bo does not
+        // already exist. (If we are using an existing vm_bo, it's already in the extobj list.)
+        if ptr::eq(ptr, me.as_raw()) && me.gpuvm().is_extobj(me.obj()) {
+            let resv_lock = me.gpuvm().raw_resv();
+            // SAFETY: The GPUVM is still alive, so its resv lock is too.
+            unsafe { bindings::dma_resv_lock(resv_lock, ptr::null_mut()) };
+            // SAFETY: We hold the GPUVM's resv lock.
+            unsafe { bindings::drm_gpuvm_bo_extobj_add(ptr) };
+            // SAFETY: We took the lock, so we can unlock it.
+            unsafe { bindings::dma_resv_unlock(resv_lock) };
+        }
+
+        // INVARIANTS: Valid `drm_gpuvm_bo` in the GEM list.
+        // SAFETY: `drm_gpuvm_bo_obtain_prealloc` always returns a non-null ptr.
+        GpuVmBoRegistered(unsafe { NonNull::new_unchecked(ptr.cast()) })
+    }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmBoAlloc<T> {
+    type Target = GpuVmBo<T>;
+    #[inline]
+    fn deref(&self) -> &GpuVmBo<T> {
+        // SAFETY: By the type invariants we may deref while `Self` exists.
+        unsafe { self.0.as_ref() }
+    }
+}
+
+impl<T: DriverGpuVm> Drop for GpuVmBoAlloc<T> {
+    #[inline]
+    fn drop(&mut self) {
+        // TODO: Call drm_gpuvm_bo_destroy_not_in_lists() directly.
+        // SAFETY: It's safe to perform a deferred put in any context.
+        unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
+    }
+}
+
+/// A [`GpuVmBo`] object in the GEM list.
+///
+/// # Invariants
+///
+/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData` and is present in the gem list.
+pub struct GpuVmBoRegistered<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
+
+impl<T: DriverGpuVm> GpuVmBoRegistered<T> {
+    /// Returns a raw pointer to underlying C value.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+        // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
+        unsafe { (*self.0.as_ptr()).inner.get() }
+    }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmBoRegistered<T> {
+    type Target = GpuVmBo<T>;
+    #[inline]
+    fn deref(&self) -> &GpuVmBo<T> {
+        // SAFETY: By the type invariants we may deref while `Self` exists.
+        unsafe { self.0.as_ref() }
+    }
+}
+
+impl<T: DriverGpuVm> Drop for GpuVmBoRegistered<T> {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: It's safe to perform a deferred put in any context.
+        unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
+    }
+}

-- 
2.53.0.rc1.225.gd81095ad13-goog