From: Caterina Shablia
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
    Simona Vetter, Frank Binns, Matt Coster, Karol Herbst, Lyude Paul,
    Danilo Krummrich, Boris Brezillon, Steven Price, Liviu Dudau,
    Lucas De Marchi, Thomas Hellström, Rodrigo Vivi
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    nouveau@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
    Asahi Lina, Caterina Shablia
Subject: [PATCH v3 3/7] drm/gpuvm: Pass map arguments through a struct
Date: Thu, 3 Jul 2025 20:52:55 +0000
Message-ID: <20250703205308.19419-4-caterina.shablia@collabora.com>
In-Reply-To: <20250703205308.19419-1-caterina.shablia@collabora.com>
References: <20250703205308.19419-1-caterina.shablia@collabora.com>

From: Boris Brezillon

We are about to pass more arguments to drm_gpuvm_sm_map[_ops_create]().
Before we do that, let's pass the arguments through a struct, so we
don't have to change every call site each time a new optional argument
is added.
Signed-off-by: Boris Brezillon
Signed-off-by: Caterina Shablia
---
 drivers/gpu/drm/drm_gpuvm.c            | 77 +++++++++++---------------
 drivers/gpu/drm/imagination/pvr_vm.c   | 15 +++--
 drivers/gpu/drm/nouveau/nouveau_uvmm.c | 11 ++--
 drivers/gpu/drm/panthor/panthor_mmu.c  | 13 ++++-
 drivers/gpu/drm/xe/xe_vm.c             | 13 ++++-
 include/drm/drm_gpuvm.h                | 34 ++++++++++--
 6 files changed, 98 insertions(+), 65 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index e89b932e987c..ae201d45e6b8 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2054,16 +2054,15 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
 
 static int
 op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
-	  u64 addr, u64 range,
-	  struct drm_gem_object *obj, u64 offset)
+	  const struct drm_gpuvm_map_req *req)
 {
 	struct drm_gpuva_op op = {};
 
 	op.op = DRM_GPUVA_OP_MAP;
-	op.map.va.addr = addr;
-	op.map.va.range = range;
-	op.map.gem.obj = obj;
-	op.map.gem.offset = offset;
+	op.map.va.addr = req->va.addr;
+	op.map.va.range = req->va.range;
+	op.map.gem.obj = req->gem.obj;
+	op.map.gem.offset = req->gem.offset;
 
 	return fn->sm_step_map(&op, priv);
 }
@@ -2102,17 +2101,16 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
 static int
 __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		   const struct drm_gpuvm_ops *ops, void *priv,
-		   u64 req_addr, u64 req_range,
-		   struct drm_gem_object *req_obj, u64 req_offset)
+		   const struct drm_gpuvm_map_req *req)
 {
 	struct drm_gpuva *va, *next;
-	u64 req_end = req_addr + req_range;
+	u64 req_end = req->va.addr + req->va.range;
 	int ret;
 
-	if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
+	if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->va.addr, req->va.range)))
 		return -EINVAL;
 
-	drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
+	drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req->va.addr, req_end) {
 		struct drm_gem_object *obj = va->gem.obj;
 		u64 offset = va->gem.offset;
 		u64 addr = va->va.addr;
@@ -2120,9 +2118,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		u64 end = addr + range;
 		bool merge = !!va->gem.obj;
 
-		if (addr == req_addr) {
-			merge &= obj == req_obj &&
-				 offset == req_offset;
+		if (addr == req->va.addr) {
+			merge &= obj == req->gem.obj &&
+				 offset == req->gem.offset;
 
 			if (end == req_end) {
 				ret = op_unmap_cb(ops, priv, va, merge);
@@ -2141,9 +2139,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 			if (end > req_end) {
 				struct drm_gpuva_op_map n = {
 					.va.addr = req_end,
-					.va.range = range - req_range,
+					.va.range = range - req->va.range,
 					.gem.obj = obj,
-					.gem.offset = offset + req_range,
+					.gem.offset = offset + req->va.range,
 				};
 				struct drm_gpuva_op_unmap u = {
 					.va = va,
@@ -2155,8 +2153,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					return ret;
 				break;
 			}
-		} else if (addr < req_addr) {
-			u64 ls_range = req_addr - addr;
+		} else if (addr < req->va.addr) {
+			u64 ls_range = req->va.addr - addr;
 			struct drm_gpuva_op_map p = {
 				.va.addr = addr,
 				.va.range = ls_range,
@@ -2165,8 +2163,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 			};
 			struct drm_gpuva_op_unmap u = { .va = va };
 
-			merge &= obj == req_obj &&
-				 offset + ls_range == req_offset;
+			merge &= obj == req->gem.obj &&
+				 offset + ls_range == req->gem.offset;
 			u.keep = merge;
 
 			if (end == req_end) {
@@ -2189,7 +2187,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					.va.range = end - req_end,
 					.gem.obj = obj,
 					.gem.offset = offset + ls_range +
-						      req_range,
+						      req->va.range,
 				};
 
 				ret = op_remap_cb(ops, priv, &p, &n, &u);
@@ -2197,10 +2195,10 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					return ret;
 				break;
 			}
-		} else if (addr > req_addr) {
-			merge &= obj == req_obj &&
-				 offset == req_offset +
-					   (addr - req_addr);
+		} else if (addr > req->va.addr) {
+			merge &= obj == req->gem.obj &&
+				 offset == req->gem.offset +
+					   (addr - req->va.addr);
 
 			if (end == req_end) {
 				ret = op_unmap_cb(ops, priv, va, merge);
@@ -2228,6 +2226,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				.keep = merge,
 			};
 
+
 			ret = op_remap_cb(ops, priv, NULL, &n, &u);
 			if (ret)
 				return ret;
@@ -2236,9 +2235,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		}
 	}
 
-	return op_map_cb(ops, priv,
-			 req_addr, req_range,
-			 req_obj, req_offset);
+	return op_map_cb(ops, priv, req);
 }
 
 static int
@@ -2302,11 +2299,8 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 /**
  * drm_gpuvm_sm_map() - creates the &drm_gpuva_op split/merge steps
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
- * @req_addr: the start address of the new mapping
- * @req_range: the range of the new mapping
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
  * @priv: pointer to a driver private data structure
+ * @req: map request information
  *
  * This function iterates the given range of the GPU VA space. It utilizes the
  * &drm_gpuvm_ops to call back into the driver providing the split and merge
@@ -2333,8 +2327,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 */
 int
 drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
-		 u64 req_addr, u64 req_range,
-		 struct drm_gem_object *req_obj, u64 req_offset)
+		 const struct drm_gpuvm_map_req *req)
 {
 	const struct drm_gpuvm_ops *ops = gpuvm->ops;
 
@@ -2343,9 +2336,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
 			  ops->sm_step_unmap)))
 		return -EINVAL;
 
-	return __drm_gpuvm_sm_map(gpuvm, ops, priv,
-				  req_addr, req_range,
-				  req_obj, req_offset);
+	return __drm_gpuvm_sm_map(gpuvm, ops, priv, req);
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
 
@@ -2485,10 +2476,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
 /**
  * drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
- * @req_addr: the start address of the new mapping
- * @req_range: the range of the new mapping
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: map request arguments
  *
  * This function creates a list of operations to perform splitting and merging
 * of existent mapping(s) with the newly requested one.
@@ -2516,8 +2504,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
 */
 struct drm_gpuva_ops *
 drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
-			    u64 req_addr, u64 req_range,
-			    struct drm_gem_object *req_obj, u64 req_offset)
+			    const struct drm_gpuvm_map_req *req)
 {
 	struct drm_gpuva_ops *ops;
 	struct {
@@ -2535,9 +2522,7 @@ drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
 	args.vm = gpuvm;
 	args.ops = ops;
 
-	ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
-				 req_addr, req_range,
-				 req_obj, req_offset);
+	ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args, req);
 	if (ret)
 		goto err_free_ops;
 
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index 2896fa7501b1..abfdcd279363 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -185,12 +185,17 @@ struct pvr_vm_bind_op {
 static int pvr_vm_bind_op_exec(struct pvr_vm_bind_op *bind_op)
 {
 	switch (bind_op->type) {
-	case PVR_VM_BIND_TYPE_MAP:
+	case PVR_VM_BIND_TYPE_MAP: {
+		const struct drm_gpuvm_map_req map_req = {
+			.va.addr = bind_op->device_addr,
+			.va.range = bind_op->size,
+			.gem.obj = gem_from_pvr_gem(bind_op->pvr_obj),
+			.gem.offset = bind_op->offset,
+		};
+
 		return drm_gpuvm_sm_map(&bind_op->vm_ctx->gpuvm_mgr,
-					bind_op, bind_op->device_addr,
-					bind_op->size,
-					gem_from_pvr_gem(bind_op->pvr_obj),
-					bind_op->offset);
+					bind_op, &map_req);
+	}
 
 	case PVR_VM_BIND_TYPE_UNMAP:
 		return drm_gpuvm_sm_unmap(&bind_op->vm_ctx->gpuvm_mgr,
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 48f105239f42..b481700be666 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1276,6 +1276,12 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
 			break;
 		case OP_MAP: {
 			struct nouveau_uvma_region *reg;
+			struct drm_gpuvm_map_req map_req = {
+				.va.addr = op->va.addr,
+				.va.range = op->va.range,
+				.gem.obj = op->gem.obj,
+				.gem.offset = op->gem.offset,
+			};
 
 			reg = nouveau_uvma_region_find_first(uvmm,
 							     op->va.addr,
@@ -1301,10 +1307,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
 			}
 
 			op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
-							      op->va.addr,
-							      op->va.range,
-							      op->gem.obj,
-							      op->gem.offset);
+							      &map_req);
 			if (IS_ERR(op->ops)) {
 				ret = PTR_ERR(op->ops);
 				goto unwind_continue;
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 1e58948587a9..a7852485e638 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2236,15 +2236,22 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
 		goto out;
 
 	switch (op_type) {
-	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
+	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP: {
+		const struct drm_gpuvm_map_req map_req = {
+			.va.addr = op->va.addr,
+			.va.range = op->va.range,
+			.gem.obj = op->map.vm_bo->obj,
+			.gem.offset = op->map.bo_offset,
+		};
+
 		if (vm->unusable) {
 			ret = -EINVAL;
 			break;
 		}
 
-		ret = drm_gpuvm_sm_map(&vm->base, vm, op->va.addr, op->va.range,
-				       op->map.vm_bo->obj, op->map.bo_offset);
+		ret = drm_gpuvm_sm_map(&vm->base, vm, &map_req);
 		break;
+	}
 
 	case DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP:
 		ret = drm_gpuvm_sm_unmap(&vm->base, vm, op->va.addr, op->va.range);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 861577746929..80bc741bdb6b 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2246,10 +2246,17 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
 
 	switch (operation) {
 	case DRM_XE_VM_BIND_OP_MAP:
-	case DRM_XE_VM_BIND_OP_MAP_USERPTR:
-		ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
-						  obj, bo_offset_or_userptr);
+	case DRM_XE_VM_BIND_OP_MAP_USERPTR: {
+		struct drm_gpuvm_map_req map_req = {
+			.va.addr = addr,
+			.va.range = range,
+			.gem.obj = obj,
+			.gem.offset = bo_offset_or_userptr,
+		};
+
+		ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
 		break;
+	}
 	case DRM_XE_VM_BIND_OP_UNMAP:
 		ops = drm_gpuvm_sm_unmap_ops_create(&vm->gpuvm, addr, range);
 		break;
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 6fdf2aff3e90..a6e6c33fc10b 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1049,10 +1049,37 @@ struct drm_gpuva_ops {
 */
 #define drm_gpuva_next_op(op) list_next_entry(op, entry)
 
+/**
+ * struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
+ */
+struct drm_gpuvm_map_req {
+	/** @va: virtual address related fields */
+	struct {
+		/** @va.addr: start of the virtual address range to map to */
+		u64 addr;
+
+		/** @va.range: size of the virtual address range to map to */
+		u64 range;
+	} va;
+
+	/** @gem: GEM related fields */
+	struct {
+		/**
+		 * @obj: GEM object to map.
+		 *
+		 * Can be NULL if the virtual range is not backed by a GEM object.
+		 */
+		struct drm_gem_object *obj;
+
+		/** @offset: offset in the GEM */
+		u64 offset;
+	} gem;
+};
+
 struct drm_gpuva_ops *
 drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
-			    u64 addr, u64 range,
-			    struct drm_gem_object *obj, u64 offset);
+			    const struct drm_gpuvm_map_req *req);
+
 struct drm_gpuva_ops *
 drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
 			      u64 addr, u64 range);
@@ -1198,8 +1225,7 @@ struct drm_gpuvm_ops {
 };
 
 int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
-		     u64 addr, u64 range,
-		     struct drm_gem_object *obj, u64 offset);
+		     const struct drm_gpuvm_map_req *req);
 
 int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
 		       u64 addr, u64 range);
-- 
2.47.2