From nobody Wed Oct 1 21:23:26 2025
Date: Wed, 01 Oct 2025 10:41:36 +0000
In-Reply-To: <20251001-vmbo-defer-v3-0-a3fe6b6ae185@google.com>
References: <20251001-vmbo-defer-v3-0-a3fe6b6ae185@google.com>
Message-ID: <20251001-vmbo-defer-v3-1-a3fe6b6ae185@google.com>
Subject: [PATCH v3 1/2] drm/gpuvm: add deferred vm_bo cleanup
From: Alice Ryhl <aliceryhl@google.com>
To: Danilo Krummrich, Matthew Brost, Thomas Hellström
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
 Simona Vetter, Boris Brezillon, Steven Price, Daniel Almeida, Liviu Dudau,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, Alice Ryhl

When using GPUVM in immediate mode, it is necessary to call
drm_gpuvm_unlink() from the fence signalling critical path.
However, unlink may call drm_gpuvm_bo_put(), which causes some
challenges:

1. drm_gpuvm_bo_put() often requires you to take resv locks, which you
   can't do from the fence signalling critical path.
2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often going
   to be unsafe to call from the fence signalling critical path.

To solve these issues, add a deferred version of drm_gpuvm_unlink() that
adds the vm_bo to a deferred cleanup list and cleans it up later.

The new methods take the GEM's GPUVA lock internally rather than letting
the caller do it, because they also need to perform an operation after
releasing the mutex again. This is to prevent freeing the GEM while
holding the mutex (more info as comments in the patch). This means that
the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.

Reviewed-by: Boris Brezillon
Signed-off-by: Alice Ryhl
Acked-by: Danilo Krummrich
---
 drivers/gpu/drm/drm_gpuvm.c | 184 ++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drm_gpuvm.h     |  16 ++++
 2 files changed, 200 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index a52e95555549a16c062168253477035679d4775b..a530cf0864c5dd837840f31d3e698d4a82c58d3c 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -876,6 +876,27 @@ __drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm, spinlock_t *lock,
 	cond_spin_unlock(lock, !!lock);
 }
 
+/**
+ * drm_gpuvm_bo_is_zombie() - check whether this vm_bo is scheduled for cleanup
+ * @vm_bo: the &drm_gpuvm_bo
+ *
+ * When a vm_bo is scheduled for cleanup using the bo_defer list, it is not
+ * immediately removed from the evict and extobj lists if they are protected by
+ * the resv lock, as we can't take that lock during run_job() in immediate
+ * mode. Therefore, anyone iterating these lists should skip entries that are
+ * being destroyed.
+ *
+ * Checking the refcount without incrementing it is okay as long as the lock
+ * protecting the evict/extobj list is held for as long as you are using the
+ * vm_bo, because even if the refcount hits zero while you are using it,
+ * freeing the vm_bo requires taking the list's lock.
+ */
+static bool
+drm_gpuvm_bo_is_zombie(struct drm_gpuvm_bo *vm_bo)
+{
+	return !kref_read(&vm_bo->kref);
+}
+
 /**
  * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
  * @__vm_bo: the &drm_gpuvm_bo
@@ -1081,6 +1102,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 	INIT_LIST_HEAD(&gpuvm->evict.list);
 	spin_lock_init(&gpuvm->evict.lock);
 
+	init_llist_head(&gpuvm->bo_defer);
+
 	kref_init(&gpuvm->kref);
 
 	gpuvm->name = name ? name : "unknown";
@@ -1122,6 +1145,8 @@ drm_gpuvm_fini(struct drm_gpuvm *gpuvm)
 		 "Extobj list should be empty.\n");
 	drm_WARN(gpuvm->drm, !list_empty(&gpuvm->evict.list),
 		 "Evict list should be empty.\n");
+	drm_WARN(gpuvm->drm, !llist_empty(&gpuvm->bo_defer),
+		 "VM BO cleanup list should be empty.\n");
 
 	drm_gem_object_put(gpuvm->r_obj);
 }
@@ -1217,6 +1242,9 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
 
 	drm_gpuvm_resv_assert_held(gpuvm);
 	list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) {
+		if (drm_gpuvm_bo_is_zombie(vm_bo))
+			continue;
+
 		ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
 		if (ret)
 			break;
@@ -1460,6 +1488,9 @@ drm_gpuvm_validate_locked(struct drm_gpuvm *gpuvm, struct drm_exec *exec)
 
 	list_for_each_entry_safe(vm_bo, next, &gpuvm->evict.list,
 				 list.entry.evict) {
+		if (drm_gpuvm_bo_is_zombie(vm_bo))
+			continue;
+
 		ret = ops->vm_bo_validate(vm_bo, exec);
 		if (ret)
 			break;
@@ -1560,6 +1591,7 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
 
 	INIT_LIST_HEAD(&vm_bo->list.entry.extobj);
 	INIT_LIST_HEAD(&vm_bo->list.entry.evict);
+	init_llist_node(&vm_bo->list.entry.bo_defer);
 
 	return vm_bo;
 }
@@ -1621,6 +1653,124 @@ drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo)
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put);
 
+/*
+ * Must be called with GEM mutex held. After releasing GEM mutex,
+ * drm_gpuvm_bo_defer_free_unlocked() must be called.
+ */
+static void
+drm_gpuvm_bo_defer_free_locked(struct kref *kref)
+{
+	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+						  kref);
+	struct drm_gpuvm *gpuvm = vm_bo->vm;
+
+	if (!drm_gpuvm_resv_protected(gpuvm)) {
+		drm_gpuvm_bo_list_del(vm_bo, extobj, true);
+		drm_gpuvm_bo_list_del(vm_bo, evict, true);
+	}
+
+	list_del(&vm_bo->list.entry.gem);
+}
+
+/*
+ * GEM mutex must not be held. Called after drm_gpuvm_bo_defer_free_locked().
+ */
+static void
+drm_gpuvm_bo_defer_free_unlocked(struct drm_gpuvm_bo *vm_bo)
+{
+	struct drm_gpuvm *gpuvm = vm_bo->vm;
+
+	llist_add(&vm_bo->list.entry.bo_defer, &gpuvm->bo_defer);
+}
+
+static void
+drm_gpuvm_bo_defer_free(struct kref *kref)
+{
+	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+						  kref);
+
+	mutex_lock(&vm_bo->obj->gpuva.lock);
+	drm_gpuvm_bo_defer_free_locked(kref);
+	mutex_unlock(&vm_bo->obj->gpuva.lock);
+
+	/*
+	 * It's important that the GEM stays alive for the duration in which we
+	 * hold the mutex, but the instant we add the vm_bo to bo_defer,
+	 * another thread might call drm_gpuvm_bo_deferred_cleanup() and put
+	 * the GEM. Therefore, to avoid kfreeing a mutex we are holding, we add
+	 * the vm_bo to bo_defer *after* releasing the GEM's mutex.
+	 */
+	drm_gpuvm_bo_defer_free_unlocked(vm_bo);
+}
+
+/**
+ * drm_gpuvm_bo_put_deferred() - drop a struct drm_gpuvm_bo reference with
+ * deferred cleanup
+ * @vm_bo: the &drm_gpuvm_bo to release the reference of
+ *
+ * This releases a reference to @vm_bo.
+ *
+ * This might take and release the GEM's GPUVA lock. You should call
+ * drm_gpuvm_bo_deferred_cleanup() later to complete the cleanup process.
+ *
+ * Returns: true if vm_bo is being destroyed, false otherwise.
+ */
+bool
+drm_gpuvm_bo_put_deferred(struct drm_gpuvm_bo *vm_bo)
+{
+	if (!vm_bo)
+		return false;
+
+	drm_WARN_ON(vm_bo->vm->drm, !drm_gpuvm_immediate_mode(vm_bo->vm));
+
+	return !!kref_put(&vm_bo->kref, drm_gpuvm_bo_defer_free);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put_deferred);
+
+/**
+ * drm_gpuvm_bo_deferred_cleanup() - clean up BOs in the deferred list
+ * @gpuvm: the VM to clean up
+ *
+ * Cleans up &drm_gpuvm_bo instances in the deferred cleanup list.
+ */
+void
+drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
+{
+	const struct drm_gpuvm_ops *ops = gpuvm->ops;
+	struct drm_gpuvm_bo *vm_bo;
+	struct drm_gem_object *obj;
+	struct llist_node *bo_defer;
+
+	bo_defer = llist_del_all(&gpuvm->bo_defer);
+	if (!bo_defer)
+		return;
+
+	if (drm_gpuvm_resv_protected(gpuvm)) {
+		dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL);
+		llist_for_each_entry(vm_bo, bo_defer, list.entry.bo_defer) {
+			drm_gpuvm_bo_list_del(vm_bo, extobj, false);
+			drm_gpuvm_bo_list_del(vm_bo, evict, false);
+		}
+		dma_resv_unlock(drm_gpuvm_resv(gpuvm));
+	}
+
+	while (bo_defer) {
+		vm_bo = llist_entry(bo_defer,
+				    struct drm_gpuvm_bo, list.entry.bo_defer);
+		bo_defer = bo_defer->next;
+		obj = vm_bo->obj;
+		if (ops && ops->vm_bo_free)
+			ops->vm_bo_free(vm_bo);
+		else
+			kfree(vm_bo);
+
+		drm_gpuvm_put(gpuvm);
+		drm_gem_object_put(obj);
+	}
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_deferred_cleanup);
+
 static struct drm_gpuvm_bo *
 __drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj)
@@ -1948,6 +2098,40 @@ drm_gpuva_unlink(struct drm_gpuva *va)
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
 
+/**
+ * drm_gpuva_unlink_defer() - unlink a &drm_gpuva with deferred vm_bo cleanup
+ * @va: the &drm_gpuva to unlink
+ *
+ * Similar to drm_gpuva_unlink(), but uses drm_gpuvm_bo_put_deferred() and
+ * takes the lock for the caller.
+ */
+void
+drm_gpuva_unlink_defer(struct drm_gpuva *va)
+{
+	struct drm_gem_object *obj = va->gem.obj;
+	struct drm_gpuvm_bo *vm_bo = va->vm_bo;
+	bool should_defer_bo;
+
+	if (unlikely(!obj))
+		return;
+
+	drm_WARN_ON(vm_bo->vm->drm, !drm_gpuvm_immediate_mode(vm_bo->vm));
+
+	mutex_lock(&obj->gpuva.lock);
+	list_del_init(&va->gem.entry);
+
+	/*
+	 * This is drm_gpuvm_bo_put_deferred() except we already hold the mutex.
+	 */
+	should_defer_bo = kref_put(&vm_bo->kref, drm_gpuvm_bo_defer_free_locked);
+	mutex_unlock(&obj->gpuva.lock);
+	if (should_defer_bo)
+		drm_gpuvm_bo_defer_free_unlocked(vm_bo);
+
+	va->vm_bo = NULL;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_unlink_defer);
+
 /**
  * drm_gpuva_find_first() - find the first &drm_gpuva in the given range
  * @gpuvm: the &drm_gpuvm to search in

diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 8890ded1d90752a2acbb564f697aa5ab03b5d052..81cc7672cf2d5362c637abfa2a75471e5274ed08 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -27,6 +27,7 @@
 
 #include <linux/dma-resv.h>
 #include <linux/list.h>
+#include <linux/llist.h>
 #include <linux/rbtree.h>
 #include <linux/types.h>
 
@@ -152,6 +153,7 @@ void drm_gpuva_remove(struct drm_gpuva *va);
 
 void drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo);
 void drm_gpuva_unlink(struct drm_gpuva *va);
+void drm_gpuva_unlink_defer(struct drm_gpuva *va);
 
 struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm,
				  u64 addr, u64 range);
@@ -331,6 +333,11 @@ struct drm_gpuvm {
		 */
		spinlock_t lock;
	} evict;
+
+	/**
+	 * @bo_defer: structure holding vm_bos that need to be destroyed
+	 */
+	struct llist_head bo_defer;
 };
 
 void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
@@ -714,6 +721,12 @@ struct drm_gpuvm_bo {
			 * &drm_gpuvms evict list.
			 */
			struct list_head evict;
+
+			/**
+			 * @list.entry.bo_defer: List entry to attach to
+			 * the &drm_gpuvms bo_defer list.
+			 */
+			struct llist_node bo_defer;
		} entry;
	} list;
 };
@@ -746,6 +759,9 @@ drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
 
 bool drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo);
 
+bool drm_gpuvm_bo_put_deferred(struct drm_gpuvm_bo *vm_bo);
+void drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm);
+
 struct drm_gpuvm_bo *
 drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj);

-- 
2.51.0.618.g983fd99d29-goog

From nobody Wed Oct 1 21:23:26 2025
Date: Wed, 01 Oct 2025 10:41:37 +0000
In-Reply-To: <20251001-vmbo-defer-v3-0-a3fe6b6ae185@google.com>
References: <20251001-vmbo-defer-v3-0-a3fe6b6ae185@google.com>
Message-ID: <20251001-vmbo-defer-v3-2-a3fe6b6ae185@google.com>
Subject: [PATCH v3 2/2] panthor: use drm_gpuva_unlink_defer()
From: Alice Ryhl <aliceryhl@google.com>
To: Danilo Krummrich, Matthew Brost, Thomas Hellström
Cc:
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
 Simona Vetter, Boris Brezillon, Steven Price, Daniel Almeida, Liviu Dudau,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, Alice Ryhl

Instead of manually deferring cleanup of vm_bos, use the new GPUVM
infrastructure for doing so. To avoid manual management of vm_bo
refcounts, the panthor_vma_link() and panthor_vma_unlink() methods are
changed to get and put a vm_bo refcount themselves. This simplifies the
code a lot.

I preserved the behavior where panthor_gpuva_sm_step_map() drops the
refcount right away rather than letting panthor_vm_cleanup_op_ctx() do
it later.

Signed-off-by: Alice Ryhl
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/panthor/panthor_mmu.c | 110 ++++++----------------------------
 1 file changed, 19 insertions(+), 91 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 6dec4354e3789d17c5a87fc8de3bc86764b804bc..9f5f4ddf291024121f3fd5644f2fdeba354fa67c 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -181,20 +181,6 @@ struct panthor_vm_op_ctx {
		u64 range;
	} va;
 
-	/**
-	 * @returned_vmas: List of panthor_vma objects returned after a VM operation.
-	 *
-	 * For unmap operations, this will contain all VMAs that were covered by the
-	 * specified VA range.
-	 *
-	 * For map operations, this will contain all VMAs that previously mapped to
-	 * the specified VA range.
-	 *
-	 * Those VMAs, and the resources they point to will be released as part of
-	 * the op_ctx cleanup operation.
-	 */
-	struct list_head returned_vmas;
-
	/** @map: Fields specific to a map operation.
	 */
@@ -1081,47 +1067,18 @@ void panthor_vm_free_va(struct panthor_vm *vm, struct drm_mm_node *va_node)
	mutex_unlock(&vm->mm_lock);
 }
 
-static void panthor_vm_bo_put(struct drm_gpuvm_bo *vm_bo)
+static void panthor_vm_bo_free(struct drm_gpuvm_bo *vm_bo)
 {
	struct panthor_gem_object *bo = to_panthor_bo(vm_bo->obj);
-	struct drm_gpuvm *vm = vm_bo->vm;
-	bool unpin;
-
-	/* We must retain the GEM before calling drm_gpuvm_bo_put(),
-	 * otherwise the mutex might be destroyed while we hold it.
-	 * Same goes for the VM, since we take the VM resv lock.
-	 */
-	drm_gem_object_get(&bo->base.base);
-	drm_gpuvm_get(vm);
-
-	/* We take the resv lock to protect against concurrent accesses to the
-	 * gpuvm evicted/extobj lists that are modified in
-	 * drm_gpuvm_bo_destroy(), which is called if drm_gpuvm_bo_put()
-	 * releases the last vm_bo reference.
-	 * We take the BO GPUVA list lock to protect the vm_bo removal from the
-	 * GEM vm_bo list.
-	 */
-	dma_resv_lock(drm_gpuvm_resv(vm), NULL);
-	mutex_lock(&bo->base.base.gpuva.lock);
-	unpin = drm_gpuvm_bo_put(vm_bo);
-	mutex_unlock(&bo->base.base.gpuva.lock);
-	dma_resv_unlock(drm_gpuvm_resv(vm));
 
-	/* If the vm_bo object was destroyed, release the pin reference that
-	 * was hold by this object.
-	 */
-	if (unpin && !drm_gem_is_imported(&bo->base.base))
+	if (!drm_gem_is_imported(&bo->base.base))
		drm_gem_shmem_unpin(&bo->base);
-
-	drm_gpuvm_put(vm);
-	drm_gem_object_put(&bo->base.base);
+	kfree(vm_bo);
 }
 
 static void panthor_vm_cleanup_op_ctx(struct panthor_vm_op_ctx *op_ctx,
				       struct panthor_vm *vm)
 {
-	struct panthor_vma *vma, *tmp_vma;
-
	u32 remaining_pt_count = op_ctx->rsvd_page_tables.count -
				 op_ctx->rsvd_page_tables.ptr;
 
@@ -1134,16 +1091,12 @@ static void panthor_vm_cleanup_op_ctx(struct panthor_vm_op_ctx *op_ctx,
	kfree(op_ctx->rsvd_page_tables.pages);
 
	if (op_ctx->map.vm_bo)
-		panthor_vm_bo_put(op_ctx->map.vm_bo);
+		drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
 
	for (u32 i = 0; i < ARRAY_SIZE(op_ctx->preallocated_vmas); i++)
		kfree(op_ctx->preallocated_vmas[i]);
 
-	list_for_each_entry_safe(vma, tmp_vma, &op_ctx->returned_vmas, node) {
-		list_del(&vma->node);
-		panthor_vm_bo_put(vma->base.vm_bo);
-		kfree(vma);
-	}
+	drm_gpuvm_bo_deferred_cleanup(&vm->base);
 }
 
 static struct panthor_vma *
@@ -1232,7 +1185,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
		return -EINVAL;
 
	memset(op_ctx, 0, sizeof(*op_ctx));
-	INIT_LIST_HEAD(&op_ctx->returned_vmas);
	op_ctx->flags = flags;
	op_ctx->va.range = size;
	op_ctx->va.addr = va;
@@ -1243,7 +1195,9 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 
	if (!drm_gem_is_imported(&bo->base.base)) {
		/* Pre-reserve the BO pages, so the map operation doesn't have to
-		 * allocate.
+		 * allocate. This pin is dropped in panthor_vm_bo_free(), so
+		 * once we have successfully called drm_gpuvm_bo_create(),
+		 * GPUVM will take care of dropping the pin for us.
		 */
		ret = drm_gem_shmem_pin(&bo->base);
		if (ret)
@@ -1282,16 +1236,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
	mutex_unlock(&bo->base.base.gpuva.lock);
	dma_resv_unlock(panthor_vm_resv(vm));
 
-	/* If the a vm_bo for this combination exists, it already
-	 * retains a pin ref, and we can release the one we took earlier.
-	 *
-	 * If our pre-allocated vm_bo is picked, it now retains the pin ref,
-	 * which will be released in panthor_vm_bo_put().
-	 */
-	if (preallocated_vm_bo != op_ctx->map.vm_bo &&
-	    !drm_gem_is_imported(&bo->base.base))
-		drm_gem_shmem_unpin(&bo->base);
-
	op_ctx->map.bo_offset = offset;
 
	/* L1, L2 and L3 page tables.
@@ -1339,7 +1283,6 @@ static int panthor_vm_prepare_unmap_op_ctx(struct panthor_vm_op_ctx *op_ctx,
	int ret;
 
	memset(op_ctx, 0, sizeof(*op_ctx));
-	INIT_LIST_HEAD(&op_ctx->returned_vmas);
	op_ctx->va.range = size;
	op_ctx->va.addr = va;
	op_ctx->flags = DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP;
@@ -1387,7 +1330,6 @@ static void panthor_vm_prepare_sync_only_op_ctx(struct panthor_vm_op_ctx *op_ctx,
						struct panthor_vm *vm)
 {
	memset(op_ctx, 0, sizeof(*op_ctx));
-	INIT_LIST_HEAD(&op_ctx->returned_vmas);
	op_ctx->flags = DRM_PANTHOR_VM_BIND_OP_TYPE_SYNC_ONLY;
 }
 
@@ -2033,26 +1975,13 @@ static void panthor_vma_link(struct panthor_vm *vm,
 
	mutex_lock(&bo->base.base.gpuva.lock);
	drm_gpuva_link(&vma->base, vm_bo);
-	drm_WARN_ON(&vm->ptdev->base, drm_gpuvm_bo_put(vm_bo));
	mutex_unlock(&bo->base.base.gpuva.lock);
 }
 
-static void panthor_vma_unlink(struct panthor_vm *vm,
-			       struct panthor_vma *vma)
+static void panthor_vma_unlink(struct panthor_vma *vma)
 {
-	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
-	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
-
-	mutex_lock(&bo->base.base.gpuva.lock);
-	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&bo->base.base.gpuva.lock);
-
-	/* drm_gpuva_unlink() release the vm_bo, but we manually retained it
-	 * when entering this function, so we can implement deferred VMA
-	 * destruction. Re-assign it here.
-	 */
-	vma->base.vm_bo = vm_bo;
-	list_add_tail(&vma->node, &vm->op_ctx->returned_vmas);
+	drm_gpuva_unlink_defer(&vma->base);
+	kfree(vma);
 }
 
 static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
@@ -2084,12 +2013,12 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
	if (ret)
		return ret;
 
-	/* Ref owned by the mapping now, clear the obj field so we don't release the
-	 * pinning/obj ref behind GPUVA's back.
-	 */
	drm_gpuva_map(&vm->base, &vma->base, &op->map);
	panthor_vma_link(vm, vma, op_ctx->map.vm_bo);
+
+	drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
	op_ctx->map.vm_bo = NULL;
+
	return 0;
 }
 
@@ -2128,16 +2057,14 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
		 * owned by the old mapping which will be released when this
		 * mapping is destroyed, we need to grab a ref here.
		 */
-		panthor_vma_link(vm, prev_vma,
-				 drm_gpuvm_bo_get(op->remap.unmap->va->vm_bo));
+		panthor_vma_link(vm, prev_vma, op->remap.unmap->va->vm_bo);
	}
 
	if (next_vma) {
-		panthor_vma_link(vm, next_vma,
-				 drm_gpuvm_bo_get(op->remap.unmap->va->vm_bo));
+		panthor_vma_link(vm, next_vma, op->remap.unmap->va->vm_bo);
	}
 
-	panthor_vma_unlink(vm, unmap_vma);
+	panthor_vma_unlink(unmap_vma);
	return 0;
 }
 
@@ -2154,12 +2081,13 @@ static int panthor_gpuva_sm_step_unmap(struct drm_gpuva_op *op,
		return ret;
 
	drm_gpuva_unmap(&op->unmap);
-	panthor_vma_unlink(vm, unmap_vma);
+	panthor_vma_unlink(unmap_vma);
	return 0;
 }
 
 static const struct drm_gpuvm_ops panthor_gpuvm_ops = {
	.vm_free = panthor_vm_free,
+	.vm_bo_free = panthor_vm_bo_free,
	.sm_step_map = panthor_gpuva_sm_step_map,
	.sm_step_remap = panthor_gpuva_sm_step_remap,
	.sm_step_unmap = panthor_gpuva_sm_step_unmap,

-- 
2.51.0.618.g983fd99d29-goog