From nobody Wed Oct 8 14:16:05 2025
Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.15])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 05D281A0BD0
	for ; Fri, 27 Jun 2025 03:45:56 +0000 (UTC)
Received: from yilunxu-optiplex-7050.sh.intel.com ([10.239.159.165])
	by fmviesa002.fm.intel.com with ESMTP; 26 Jun 2025 20:45:53 -0700
From: Xu Yilun
To: jgg@nvidia.com, jgg@ziepe.ca, kevin.tian@intel.com, will@kernel.org,
	aneesh.kumar@kernel.org
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org, joro@8bytes.org,
	robin.murphy@arm.com, shuah@kernel.org, nicolinc@nvidia.com,
	aik@amd.com, dan.j.williams@intel.com, baolu.lu@linux.intel.com,
	yilun.xu@intel.com
Subject: [PATCH v3 1/5] iommufd: Add iommufd_object_tombstone_user() helper
Date: Fri, 27 Jun 2025 11:38:05 +0800
Message-Id: <20250627033809.1730752-2-yilun.xu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
References: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add the iommufd_object_tombstone_user() helper, which allows the caller to
destroy an iommufd object created by userspace.

This is useful on some destroy paths when the kernel caller finds the
object should have been removed by userspace but is still alive. With
this helper, the caller destroys the object but leaves the object ID
reserved (a so-called tombstone). The tombstone prevents repurposing
the object ID without the awareness of the original user.

Since this only happens on abnormal userspace behavior, for simplicity
the tombstoned object ID is permanently leaked until
iommufd_fops_release(). I.e. the original user gets an error when
calling ioctl(IOMMU_DESTROY) on that ID.

The first use case is to ensure the iommufd_vdevice can't outlive the
associated iommufd_device.

Suggested-by: Jason Gunthorpe
Co-developed-by: Aneesh Kumar K.V (Arm)
Signed-off-by: Aneesh Kumar K.V (Arm)
Signed-off-by: Xu Yilun
Reviewed-by: Lu Baolu
---
 drivers/iommu/iommufd/iommufd_private.h | 23 ++++++++++++++++++++++-
 drivers/iommu/iommufd/main.c            | 19 ++++++++++++++++---
 2 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index 9ccc83341f32..fbc9ef78d81f 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -187,7 +187,8 @@ void iommufd_object_finalize(struct iommufd_ctx *ictx,
			     struct iommufd_object *obj);
 
 enum {
-	REMOVE_WAIT_SHORTTERM = 1,
+	REMOVE_WAIT_SHORTTERM = BIT(0),
+	REMOVE_OBJ_TOMBSTONE = BIT(1),
 };
 int iommufd_object_remove(struct iommufd_ctx *ictx,
			  struct iommufd_object *to_destroy, u32 id,
@@ -213,6 +214,26 @@ static inline void iommufd_object_destroy_user(struct iommufd_ctx *ictx,
	WARN_ON(ret);
 }
 
+/*
+ * Similar to iommufd_object_destroy_user(), except that the object ID is left
+ * reserved/tombstoned.
+ */
+static inline void iommufd_object_tombstone_user(struct iommufd_ctx *ictx,
+						 struct iommufd_object *obj)
+{
+	int ret;
+
+	ret = iommufd_object_remove(ictx, obj, obj->id,
+				    REMOVE_WAIT_SHORTTERM | REMOVE_OBJ_TOMBSTONE);
+
+	/*
+	 * If there is a bug and we couldn't destroy the object then we did put
+	 * back the caller's users refcount and will eventually try to free it
+	 * again during close.
+	 */
+	WARN_ON(ret);
+}
+
 /*
  * The HWPT allocated by autodomains is used in possibly many devices and
  * is automatically destroyed when its refcount reaches zero.
diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
index 3df468f64e7d..620923669b42 100644
--- a/drivers/iommu/iommufd/main.c
+++ b/drivers/iommu/iommufd/main.c
@@ -167,7 +167,7 @@ int iommufd_object_remove(struct iommufd_ctx *ictx,
		goto err_xa;
	}
 
-	xas_store(&xas, NULL);
+	xas_store(&xas, (flags & REMOVE_OBJ_TOMBSTONE) ? XA_ZERO_ENTRY : NULL);
	if (ictx->vfio_ioas == container_of(obj, struct iommufd_ioas, obj))
		ictx->vfio_ioas = NULL;
	xa_unlock(&ictx->objects);
@@ -239,6 +239,7 @@ static int iommufd_fops_release(struct inode *inode, struct file *filp)
	struct iommufd_sw_msi_map *next;
	struct iommufd_sw_msi_map *cur;
	struct iommufd_object *obj;
+	bool empty;
 
	/*
	 * The objects in the xarray form a graph of "users" counts, and we have
@@ -249,23 +250,35 @@ static int iommufd_fops_release(struct inode *inode, struct file *filp)
	 * until the entire list is destroyed. If this can't progress then there
	 * is some bug related to object refcounting.
	 */
-	while (!xa_empty(&ictx->objects)) {
+	for (;;) {
		unsigned int destroyed = 0;
		unsigned long index;
 
+		empty = true;
		xa_for_each(&ictx->objects, index, obj) {
+			empty = false;
			if (!refcount_dec_if_one(&obj->users))
				continue;
+
			destroyed++;
			xa_erase(&ictx->objects, index);
			iommufd_object_ops[obj->type].destroy(obj);
			kfree(obj);
		}
+
+		if (empty)
+			break;
+
		/* Bug related to users refcount */
		if (WARN_ON(!destroyed))
			break;
	}
-	WARN_ON(!xa_empty(&ictx->groups));
+
+	/*
+	 * There may be some tombstones left over from
+	 * iommufd_object_tombstone_user()
+	 */
+	xa_destroy(&ictx->groups);
 
	mutex_destroy(&ictx->sw_msi_lock);
	list_for_each_entry_safe(cur, next, &ictx->sw_msi_list, sw_msi_item)
-- 
2.25.1

From nobody Wed Oct 8 14:16:05 2025
From: Xu Yilun
To: jgg@nvidia.com, jgg@ziepe.ca, kevin.tian@intel.com, will@kernel.org,
	aneesh.kumar@kernel.org
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org, joro@8bytes.org,
	robin.murphy@arm.com, shuah@kernel.org, nicolinc@nvidia.com,
	aik@amd.com, dan.j.williams@intel.com, baolu.lu@linux.intel.com,
	yilun.xu@intel.com
Subject: [PATCH v3 2/5] iommufd: Destroy vdevice on idevice destroy
Date: Fri, 27 Jun 2025 11:38:06 +0800
Message-Id: <20250627033809.1730752-3-yilun.xu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
References: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Destroy the iommufd_vdevice (vdev) on iommufd_idevice (idev) destroy so
that the vdev can't outlive the idev.

An iommufd_device (idev) represents the physical device bound to
iommufd, while an iommufd_vdevice (vdev) represents the virtual instance
of the physical device in the VM. The lifecycle of the vdev should not
be longer than that of the idev.

This doesn't cause a real problem for existing use cases, because the
vdev doesn't impact the physical device and only provides virtualization
information. But to extend the vdev for Confidential Computing (CC),
there is a need to do secure configuration for the vdev, e.g. TSM
Bind/Unbind. These configurations should be rolled back on idev destroy,
or the external driver (VFIO) functionality may be impacted.

Building the association between the idev & vdev requires the two
objects to point to each other, but not reference each other. This
requires proper locking. This is done by reviving some of Nicolin's
patch [1].

There are 3 cases on idev destroy:

 1. The vdev is already destroyed by userspace. No extra handling is
    needed.
 2. The vdev is still alive. Use iommufd_object_tombstone_user() to
    destroy the vdev and tombstone the vdev ID.
 3. The vdev is being destroyed by userspace. The vdev ID is already
    freed, but the vdev destroy handler has not completed. Destroy the
    vdev immediately.

To resolve the race with userspace destruction, make
iommufd_vdevice_abort() reentrant.
[1]: https://lore.kernel.org/all/53025c827c44d68edb6469bfd940a8e8bc6147a5.1729897278.git.nicolinc@nvidia.com/

Originally-by: Nicolin Chen
Suggested-by: Jason Gunthorpe
Co-developed-by: Aneesh Kumar K.V (Arm)
Signed-off-by: Aneesh Kumar K.V (Arm)
Signed-off-by: Xu Yilun
Reviewed-by: Lu Baolu
---
 drivers/iommu/iommufd/device.c          | 42 +++++++++++++++++++++++
 drivers/iommu/iommufd/iommufd_private.h | 11 +++++++
 drivers/iommu/iommufd/main.c            |  1 +
 drivers/iommu/iommufd/viommu.c          | 44 +++++++++++++++++++++++--
 4 files changed, 95 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
index 86244403b532..0937d4989185 100644
--- a/drivers/iommu/iommufd/device.c
+++ b/drivers/iommu/iommufd/device.c
@@ -137,11 +137,53 @@ static struct iommufd_group *iommufd_get_group(struct iommufd_ctx *ictx,
	}
 }
 
+static void iommufd_device_remove_vdev(struct iommufd_device *idev)
+{
+	struct iommufd_vdevice *vdev;
+
+	mutex_lock(&idev->igroup->lock);
+	/* vdev has been completely destroyed by userspace */
+	if (!idev->vdev)
+		goto out_unlock;
+
+	vdev = iommufd_get_vdevice(idev->ictx, idev->vdev->obj.id);
+	if (IS_ERR(vdev)) {
+		/*
+		 * vdev is removed from xarray by userspace, but is not
+		 * destroyed/freed. Since iommufd_vdevice_abort() is reentrant,
+		 * safe to destroy vdev here.
+		 */
+		iommufd_vdevice_abort(&idev->vdev->obj);
+		goto out_unlock;
+	}
+
+	/* Should never happen */
+	if (WARN_ON(vdev != idev->vdev)) {
+		iommufd_put_object(idev->ictx, &vdev->obj);
+		goto out_unlock;
+	}
+
+	/*
+	 * vdev is still alive. Hold a users refcount to prevent racing with
+	 * userspace destruction, then use iommufd_object_tombstone_user() to
+	 * destroy it and leave a tombstone.
+	 */
+	refcount_inc(&vdev->obj.users);
+	iommufd_put_object(idev->ictx, &vdev->obj);
+	mutex_unlock(&idev->igroup->lock);
+	iommufd_object_tombstone_user(idev->ictx, &vdev->obj);
+	return;
+
+out_unlock:
+	mutex_unlock(&idev->igroup->lock);
+}
+
 void iommufd_device_destroy(struct iommufd_object *obj)
 {
	struct iommufd_device *idev =
		container_of(obj, struct iommufd_device, obj);
 
+	iommufd_device_remove_vdev(idev);
	iommu_device_release_dma_owner(idev->dev);
	iommufd_put_group(idev->igroup);
	if (!iommufd_selftest_is_mock_dev(idev->dev))
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index fbc9ef78d81f..f58aa4439c53 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -446,6 +446,7 @@ struct iommufd_device {
	/* always the physical device */
	struct device *dev;
	bool enforce_cache_coherency;
+	struct iommufd_vdevice *vdev;
 };
 
 static inline struct iommufd_device *
@@ -621,6 +622,7 @@ int iommufd_viommu_alloc_ioctl(struct iommufd_ucmd *ucmd);
 void iommufd_viommu_destroy(struct iommufd_object *obj);
 int iommufd_vdevice_alloc_ioctl(struct iommufd_ucmd *ucmd);
 void iommufd_vdevice_destroy(struct iommufd_object *obj);
+void iommufd_vdevice_abort(struct iommufd_object *obj);
 
 struct iommufd_vdevice {
	struct iommufd_object obj;
@@ -628,8 +630,17 @@ struct iommufd_vdevice {
	struct iommufd_viommu *viommu;
	struct device *dev;
	u64 id; /* per-vIOMMU virtual ID */
+	struct iommufd_device *idev;
 };
 
+static inline struct iommufd_vdevice *
+iommufd_get_vdevice(struct iommufd_ctx *ictx, u32 id)
+{
+	return container_of(iommufd_get_object(ictx, id,
+					       IOMMUFD_OBJ_VDEVICE),
+			    struct iommufd_vdevice, obj);
+}
+
 #ifdef CONFIG_IOMMUFD_TEST
 int iommufd_test(struct iommufd_ucmd *ucmd);
 void iommufd_selftest_destroy(struct iommufd_object *obj);
diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
index 620923669b42..64731b4fdbdf 100644
--- a/drivers/iommu/iommufd/main.c
+++ b/drivers/iommu/iommufd/main.c
@@ -529,6 +529,7 @@ static const struct iommufd_object_ops iommufd_object_ops[] = {
	},
	[IOMMUFD_OBJ_VDEVICE] = {
		.destroy = iommufd_vdevice_destroy,
+		.abort = iommufd_vdevice_abort,
	},
	[IOMMUFD_OBJ_VEVENTQ] = {
		.destroy = iommufd_veventq_destroy,
diff --git a/drivers/iommu/iommufd/viommu.c b/drivers/iommu/iommufd/viommu.c
index 01df2b985f02..632d1d7b8fd8 100644
--- a/drivers/iommu/iommufd/viommu.c
+++ b/drivers/iommu/iommufd/viommu.c
@@ -84,16 +84,38 @@ int iommufd_viommu_alloc_ioctl(struct iommufd_ucmd *ucmd)
	return rc;
 }
 
-void iommufd_vdevice_destroy(struct iommufd_object *obj)
+void iommufd_vdevice_abort(struct iommufd_object *obj)
 {
	struct iommufd_vdevice *vdev =
		container_of(obj, struct iommufd_vdevice, obj);
	struct iommufd_viommu *viommu = vdev->viommu;
+	struct iommufd_device *idev = vdev->idev;
+
+	lockdep_assert_held(&idev->igroup->lock);
+
+	/*
+	 * iommufd_vdevice_abort() could be reentrant, by
+	 * iommufd_device_unbind() or by iommufd_destroy(). Cleanup only once.
+	 */
+	if (!viommu)
+		return;
 
	/* xa_cmpxchg is okay to fail if alloc failed xa_cmpxchg previously */
	xa_cmpxchg(&viommu->vdevs, vdev->id, vdev, NULL, GFP_KERNEL);
	refcount_dec(&viommu->obj.users);
	put_device(vdev->dev);
+	vdev->viommu = NULL;
+	idev->vdev = NULL;
+}
+
+void iommufd_vdevice_destroy(struct iommufd_object *obj)
+{
+	struct iommufd_vdevice *vdev =
+		container_of(obj, struct iommufd_vdevice, obj);
+
+	mutex_lock(&vdev->idev->igroup->lock);
+	iommufd_vdevice_abort(obj);
+	mutex_unlock(&vdev->idev->igroup->lock);
 }
 
 int iommufd_vdevice_alloc_ioctl(struct iommufd_ucmd *ucmd)
@@ -124,10 +146,16 @@ int iommufd_vdevice_alloc_ioctl(struct iommufd_ucmd *ucmd)
		goto out_put_idev;
	}
 
+	mutex_lock(&idev->igroup->lock);
+	if (idev->vdev) {
+		rc = -EEXIST;
+		goto out_unlock_igroup;
+	}
+
	vdev = iommufd_object_alloc(ucmd->ictx, vdev, IOMMUFD_OBJ_VDEVICE);
	if (IS_ERR(vdev)) {
		rc = PTR_ERR(vdev);
-		goto out_put_idev;
+		goto out_unlock_igroup;
	}
 
	vdev->id = virt_id;
@@ -135,6 +163,14 @@ int iommufd_vdevice_alloc_ioctl(struct iommufd_ucmd *ucmd)
	get_device(idev->dev);
	vdev->viommu = viommu;
	refcount_inc(&viommu->obj.users);
+	/*
+	 * iommufd_device_destroy() waits until idev->vdev is NULL before
+	 * freeing the idev, which only happens once the vdev is finished
+	 * destruction. Thus we do not need refcounting on either idev->vdev or
+	 * vdev->idev.
+	 */
+	vdev->idev = idev;
+	idev->vdev = vdev;
 
	curr = xa_cmpxchg(&viommu->vdevs, virt_id, NULL, vdev, GFP_KERNEL);
	if (curr) {
@@ -147,10 +183,12 @@ int iommufd_vdevice_alloc_ioctl(struct iommufd_ucmd *ucmd)
	if (rc)
		goto out_abort;
	iommufd_object_finalize(ucmd->ictx, &vdev->obj);
-	goto out_put_idev;
+	goto out_unlock_igroup;
 
 out_abort:
	iommufd_object_abort_and_destroy(ucmd->ictx, &vdev->obj);
+out_unlock_igroup:
+	mutex_unlock(&idev->igroup->lock);
 out_put_idev:
	iommufd_put_object(ucmd->ictx, &idev->obj);
 out_put_viommu:
-- 
2.25.1

From nobody Wed Oct 8 14:16:05 2025
From: Xu Yilun
To: jgg@nvidia.com, jgg@ziepe.ca, kevin.tian@intel.com, will@kernel.org,
	aneesh.kumar@kernel.org
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org, joro@8bytes.org,
	robin.murphy@arm.com, shuah@kernel.org, nicolinc@nvidia.com,
	aik@amd.com, dan.j.williams@intel.com, baolu.lu@linux.intel.com,
	yilun.xu@intel.com
Subject: [PATCH v3 3/5] iommufd/vdevice: Remove struct device reference from
 struct vdevice
Date: Fri, 27 Jun 2025 11:38:07 +0800
Message-Id: <20250627033809.1730752-4-yilun.xu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
References: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Remove struct device *dev from struct vdevice. The dev pointer was the
Plan B for the vdevice to reference the physical device. Now that
vdev->idev is added without refcounting concerns, just use
vdev->idev->dev when needed.

Signed-off-by: Xu Yilun
Reviewed-by: Jason Gunthorpe
Reviewed-by: Lu Baolu
---
 drivers/iommu/iommufd/driver.c          | 4 ++--
 drivers/iommu/iommufd/iommufd_private.h | 1 -
 drivers/iommu/iommufd/viommu.c          | 3 ---
 3 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/iommufd/driver.c b/drivers/iommu/iommufd/driver.c
index 922cd1fe7ec2..942d402bba36 100644
--- a/drivers/iommu/iommufd/driver.c
+++ b/drivers/iommu/iommufd/driver.c
@@ -45,7 +45,7 @@ struct device *iommufd_viommu_find_dev(struct iommufd_viommu *viommu,
	lockdep_assert_held(&viommu->vdevs.xa_lock);
 
	vdev = xa_load(&viommu->vdevs, vdev_id);
-	return vdev ? vdev->dev : NULL;
+	return vdev ? vdev->idev->dev : NULL;
 }
 EXPORT_SYMBOL_NS_GPL(iommufd_viommu_find_dev, "IOMMUFD");
 
@@ -62,7 +62,7 @@ int iommufd_viommu_get_vdev_id(struct iommufd_viommu *viommu,
 
	xa_lock(&viommu->vdevs);
	xa_for_each(&viommu->vdevs, index, vdev) {
-		if (vdev->dev == dev) {
+		if (vdev->idev->dev == dev) {
			*vdev_id = vdev->id;
			rc = 0;
			break;
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index f58aa4439c53..3700193471db 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -628,7 +628,6 @@ struct iommufd_vdevice {
	struct iommufd_object obj;
	struct iommufd_ctx *ictx;
	struct iommufd_viommu *viommu;
-	struct device *dev;
	u64 id; /* per-vIOMMU virtual ID */
	struct iommufd_device *idev;
 };
diff --git a/drivers/iommu/iommufd/viommu.c b/drivers/iommu/iommufd/viommu.c
index 632d1d7b8fd8..452a7a24d738 100644
--- a/drivers/iommu/iommufd/viommu.c
+++ b/drivers/iommu/iommufd/viommu.c
@@ -103,7 +103,6 @@ void iommufd_vdevice_abort(struct iommufd_object *obj)
	/* xa_cmpxchg is okay to fail if alloc failed xa_cmpxchg previously */
	xa_cmpxchg(&viommu->vdevs, vdev->id, vdev, NULL, GFP_KERNEL);
	refcount_dec(&viommu->obj.users);
-	put_device(vdev->dev);
	vdev->viommu = NULL;
	idev->vdev = NULL;
 }
@@ -159,8 +158,6 @@ int iommufd_vdevice_alloc_ioctl(struct iommufd_ucmd *ucmd)
	}
 
	vdev->id = virt_id;
-	vdev->dev = idev->dev;
-	get_device(idev->dev);
	vdev->viommu = viommu;
	refcount_inc(&viommu->obj.users);
	/*
-- 
2.25.1

From nobody Wed Oct 8 14:16:05 2025
From: Xu Yilun
To: jgg@nvidia.com, jgg@ziepe.ca, kevin.tian@intel.com, will@kernel.org,
	aneesh.kumar@kernel.org
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org, joro@8bytes.org,
	robin.murphy@arm.com, shuah@kernel.org, nicolinc@nvidia.com,
	aik@amd.com, dan.j.williams@intel.com, baolu.lu@linux.intel.com,
	yilun.xu@intel.com
Subject: [PATCH v3 4/5] iommufd/selftest: Explicitly skip tests for
 inapplicable variant
Date: Fri, 27 Jun 2025 11:38:08 +0800
Message-Id: <20250627033809.1730752-5-yilun.xu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
References: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

no_viommu is not applicable to some viommu/vdevice tests. Explicitly
report the skip instead of skipping silently. This only adds the
prints; no functional change intended.
Signed-off-by: Xu Yilun
Reviewed-by: Jason Gunthorpe
---
 tools/testing/selftests/iommu/iommufd.c | 378 ++++++++++++------------
 1 file changed, 190 insertions(+), 188 deletions(-)

diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
index 1a8e85afe9aa..4a9b6e3b37fa 100644
--- a/tools/testing/selftests/iommu/iommufd.c
+++ b/tools/testing/selftests/iommu/iommufd.c
@@ -2760,35 +2760,36 @@ TEST_F(iommufd_viommu, viommu_alloc_nested_iopf)
 	uint32_t fault_fd;
 	uint32_t vdev_id;
 
-	if (self->device_id) {
-		test_ioctl_fault_alloc(&fault_id, &fault_fd);
-		test_err_hwpt_alloc_iopf(
-			ENOENT, dev_id, viommu_id, UINT32_MAX,
-			IOMMU_HWPT_FAULT_ID_VALID, &iopf_hwpt_id,
-			IOMMU_HWPT_DATA_SELFTEST, &data, sizeof(data));
-		test_err_hwpt_alloc_iopf(
-			EOPNOTSUPP, dev_id, viommu_id, fault_id,
-			IOMMU_HWPT_FAULT_ID_VALID | (1 << 31), &iopf_hwpt_id,
-			IOMMU_HWPT_DATA_SELFTEST, &data, sizeof(data));
-		test_cmd_hwpt_alloc_iopf(
-			dev_id, viommu_id, fault_id, IOMMU_HWPT_FAULT_ID_VALID,
-			&iopf_hwpt_id, IOMMU_HWPT_DATA_SELFTEST, &data,
-			sizeof(data));
+	if (!dev_id)
+		SKIP(return, "Skipping test for variant no_viommu");
 
-		/* Must allocate vdevice before attaching to a nested hwpt */
-		test_err_mock_domain_replace(ENOENT, self->stdev_id,
-					     iopf_hwpt_id);
-		test_cmd_vdevice_alloc(viommu_id, dev_id, 0x99, &vdev_id);
-		test_cmd_mock_domain_replace(self->stdev_id, iopf_hwpt_id);
-		EXPECT_ERRNO(EBUSY,
-			     _test_ioctl_destroy(self->fd, iopf_hwpt_id));
-		test_cmd_trigger_iopf(dev_id, fault_fd);
+	test_ioctl_fault_alloc(&fault_id, &fault_fd);
+	test_err_hwpt_alloc_iopf(
+		ENOENT, dev_id, viommu_id, UINT32_MAX,
+		IOMMU_HWPT_FAULT_ID_VALID, &iopf_hwpt_id,
+		IOMMU_HWPT_DATA_SELFTEST, &data, sizeof(data));
+	test_err_hwpt_alloc_iopf(
+		EOPNOTSUPP, dev_id, viommu_id, fault_id,
+		IOMMU_HWPT_FAULT_ID_VALID | (1 << 31), &iopf_hwpt_id,
+		IOMMU_HWPT_DATA_SELFTEST, &data, sizeof(data));
+	test_cmd_hwpt_alloc_iopf(
+		dev_id, viommu_id, fault_id, IOMMU_HWPT_FAULT_ID_VALID,
+		&iopf_hwpt_id, IOMMU_HWPT_DATA_SELFTEST, &data,
+		sizeof(data));
+
+	/* Must allocate vdevice before attaching to a nested hwpt */
+	test_err_mock_domain_replace(ENOENT, self->stdev_id,
+				     iopf_hwpt_id);
+	test_cmd_vdevice_alloc(viommu_id, dev_id, 0x99, &vdev_id);
+	test_cmd_mock_domain_replace(self->stdev_id, iopf_hwpt_id);
+	EXPECT_ERRNO(EBUSY,
+		     _test_ioctl_destroy(self->fd, iopf_hwpt_id));
+	test_cmd_trigger_iopf(dev_id, fault_fd);
 
-		test_cmd_mock_domain_replace(self->stdev_id, self->ioas_id);
-		test_ioctl_destroy(iopf_hwpt_id);
-		close(fault_fd);
-		test_ioctl_destroy(fault_id);
-	}
+	test_cmd_mock_domain_replace(self->stdev_id, self->ioas_id);
+	test_ioctl_destroy(iopf_hwpt_id);
+	close(fault_fd);
+	test_ioctl_destroy(fault_id);
 }
 
 TEST_F(iommufd_viommu, vdevice_alloc)
@@ -2849,169 +2850,170 @@ TEST_F(iommufd_viommu, vdevice_cache)
 	uint32_t vdev_id = 0;
 	uint32_t num_inv;
 
-	if (dev_id) {
-		test_cmd_vdevice_alloc(viommu_id, dev_id, 0x99, &vdev_id);
-
-		test_cmd_dev_check_cache_all(dev_id,
-					     IOMMU_TEST_DEV_CACHE_DEFAULT);
-
-		/* Check data_type by passing zero-length array */
-		num_inv = 0;
-		test_cmd_viommu_invalidate(viommu_id, inv_reqs,
-					   sizeof(*inv_reqs), &num_inv);
-		assert(!num_inv);
-
-		/* Negative test: Invalid data_type */
-		num_inv = 1;
-		test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST_INVALID,
-				sizeof(*inv_reqs), &num_inv);
-		assert(!num_inv);
-
-		/* Negative test: structure size sanity */
-		num_inv = 1;
-		test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				sizeof(*inv_reqs) + 1, &num_inv);
-		assert(!num_inv);
-
-		num_inv = 1;
-		test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				1, &num_inv);
-		assert(!num_inv);
-
-		/* Negative test: invalid flag is passed */
-		num_inv = 1;
-		inv_reqs[0].flags = 0xffffffff;
-		inv_reqs[0].vdev_id = 0x99;
-		test_err_viommu_invalidate(EOPNOTSUPP, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				sizeof(*inv_reqs), &num_inv);
-		assert(!num_inv);
-
-		/* Negative test: invalid data_uptr when array is not empty */
-		num_inv = 1;
-		inv_reqs[0].flags = 0;
-		inv_reqs[0].vdev_id = 0x99;
-		test_err_viommu_invalidate(EINVAL, viommu_id, NULL,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				sizeof(*inv_reqs), &num_inv);
-		assert(!num_inv);
-
-		/* Negative test: invalid entry_len when array is not empty */
-		num_inv = 1;
-		inv_reqs[0].flags = 0;
-		inv_reqs[0].vdev_id = 0x99;
-		test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				0, &num_inv);
-		assert(!num_inv);
-
-		/* Negative test: invalid cache_id */
-		num_inv = 1;
-		inv_reqs[0].flags = 0;
-		inv_reqs[0].vdev_id = 0x99;
-		inv_reqs[0].cache_id = MOCK_DEV_CACHE_ID_MAX + 1;
-		test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				sizeof(*inv_reqs), &num_inv);
-		assert(!num_inv);
-
-		/* Negative test: invalid vdev_id */
-		num_inv = 1;
-		inv_reqs[0].flags = 0;
-		inv_reqs[0].vdev_id = 0x9;
-		inv_reqs[0].cache_id = 0;
-		test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				sizeof(*inv_reqs), &num_inv);
-		assert(!num_inv);
-
-		/*
-		 * Invalidate the 1st cache entry but fail the 2nd request
-		 * due to invalid flags configuration in the 2nd request.
-		 */
-		num_inv = 2;
-		inv_reqs[0].flags = 0;
-		inv_reqs[0].vdev_id = 0x99;
-		inv_reqs[0].cache_id = 0;
-		inv_reqs[1].flags = 0xffffffff;
-		inv_reqs[1].vdev_id = 0x99;
-		inv_reqs[1].cache_id = 1;
-		test_err_viommu_invalidate(EOPNOTSUPP, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				sizeof(*inv_reqs), &num_inv);
-		assert(num_inv == 1);
-		test_cmd_dev_check_cache(dev_id, 0, 0);
-		test_cmd_dev_check_cache(dev_id, 1,
-					 IOMMU_TEST_DEV_CACHE_DEFAULT);
-		test_cmd_dev_check_cache(dev_id, 2,
-					 IOMMU_TEST_DEV_CACHE_DEFAULT);
-		test_cmd_dev_check_cache(dev_id, 3,
-					 IOMMU_TEST_DEV_CACHE_DEFAULT);
+	if (!dev_id)
+		SKIP(return, "Skipping test for variant no_viommu");
+
+	test_cmd_vdevice_alloc(viommu_id, dev_id, 0x99, &vdev_id);
+
+	test_cmd_dev_check_cache_all(dev_id,
+				     IOMMU_TEST_DEV_CACHE_DEFAULT);
+
+	/* Check data_type by passing zero-length array */
+	num_inv = 0;
+	test_cmd_viommu_invalidate(viommu_id, inv_reqs,
+				   sizeof(*inv_reqs), &num_inv);
+	assert(!num_inv);
+
+	/* Negative test: Invalid data_type */
+	num_inv = 1;
+	test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST_INVALID,
+			sizeof(*inv_reqs), &num_inv);
+	assert(!num_inv);
+
+	/* Negative test: structure size sanity */
+	num_inv = 1;
+	test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			sizeof(*inv_reqs) + 1, &num_inv);
+	assert(!num_inv);
+
+	num_inv = 1;
+	test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			1, &num_inv);
+	assert(!num_inv);
+
+	/* Negative test: invalid flag is passed */
+	num_inv = 1;
+	inv_reqs[0].flags = 0xffffffff;
+	inv_reqs[0].vdev_id = 0x99;
+	test_err_viommu_invalidate(EOPNOTSUPP, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			sizeof(*inv_reqs), &num_inv);
+	assert(!num_inv);
+
+	/* Negative test: invalid data_uptr when array is not empty */
+	num_inv = 1;
+	inv_reqs[0].flags = 0;
+	inv_reqs[0].vdev_id = 0x99;
+	test_err_viommu_invalidate(EINVAL, viommu_id, NULL,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			sizeof(*inv_reqs), &num_inv);
+	assert(!num_inv);
+
+	/* Negative test: invalid entry_len when array is not empty */
+	num_inv = 1;
+	inv_reqs[0].flags = 0;
+	inv_reqs[0].vdev_id = 0x99;
+	test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			0, &num_inv);
+	assert(!num_inv);
+
+	/* Negative test: invalid cache_id */
+	num_inv = 1;
+	inv_reqs[0].flags = 0;
+	inv_reqs[0].vdev_id = 0x99;
+	inv_reqs[0].cache_id = MOCK_DEV_CACHE_ID_MAX + 1;
+	test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			sizeof(*inv_reqs), &num_inv);
+	assert(!num_inv);
+
+	/* Negative test: invalid vdev_id */
+	num_inv = 1;
+	inv_reqs[0].flags = 0;
+	inv_reqs[0].vdev_id = 0x9;
+	inv_reqs[0].cache_id = 0;
+	test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			sizeof(*inv_reqs), &num_inv);
+	assert(!num_inv);
 
-		/*
-		 * Invalidate the 1st cache entry but fail the 2nd request
-		 * due to invalid cache_id configuration in the 2nd request.
-		 */
-		num_inv = 2;
-		inv_reqs[0].flags = 0;
-		inv_reqs[0].vdev_id = 0x99;
-		inv_reqs[0].cache_id = 0;
-		inv_reqs[1].flags = 0;
-		inv_reqs[1].vdev_id = 0x99;
-		inv_reqs[1].cache_id = MOCK_DEV_CACHE_ID_MAX + 1;
-		test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
-				IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
-				sizeof(*inv_reqs), &num_inv);
-		assert(num_inv == 1);
-		test_cmd_dev_check_cache(dev_id, 0, 0);
-		test_cmd_dev_check_cache(dev_id, 1,
-					 IOMMU_TEST_DEV_CACHE_DEFAULT);
-		test_cmd_dev_check_cache(dev_id, 2,
-					 IOMMU_TEST_DEV_CACHE_DEFAULT);
-		test_cmd_dev_check_cache(dev_id, 3,
-					 IOMMU_TEST_DEV_CACHE_DEFAULT);
-
-		/* Invalidate the 2nd cache entry and verify */
-		num_inv = 1;
-		inv_reqs[0].flags = 0;
-		inv_reqs[0].vdev_id = 0x99;
-		inv_reqs[0].cache_id = 1;
-		test_cmd_viommu_invalidate(viommu_id, inv_reqs,
-					   sizeof(*inv_reqs), &num_inv);
-		assert(num_inv == 1);
-		test_cmd_dev_check_cache(dev_id, 0, 0);
-		test_cmd_dev_check_cache(dev_id, 1, 0);
-		test_cmd_dev_check_cache(dev_id, 2,
-					 IOMMU_TEST_DEV_CACHE_DEFAULT);
-		test_cmd_dev_check_cache(dev_id, 3,
-					 IOMMU_TEST_DEV_CACHE_DEFAULT);
-
-		/* Invalidate the 3rd and 4th cache entries and verify */
-		num_inv = 2;
-		inv_reqs[0].flags = 0;
-		inv_reqs[0].vdev_id = 0x99;
-		inv_reqs[0].cache_id = 2;
-		inv_reqs[1].flags = 0;
-		inv_reqs[1].vdev_id = 0x99;
-		inv_reqs[1].cache_id = 3;
-		test_cmd_viommu_invalidate(viommu_id, inv_reqs,
-					   sizeof(*inv_reqs), &num_inv);
-		assert(num_inv == 2);
-		test_cmd_dev_check_cache_all(dev_id, 0);
+	/*
+	 * Invalidate the 1st cache entry but fail the 2nd request
+	 * due to invalid flags configuration in the 2nd request.
+	 */
+	num_inv = 2;
+	inv_reqs[0].flags = 0;
+	inv_reqs[0].vdev_id = 0x99;
+	inv_reqs[0].cache_id = 0;
+	inv_reqs[1].flags = 0xffffffff;
+	inv_reqs[1].vdev_id = 0x99;
+	inv_reqs[1].cache_id = 1;
+	test_err_viommu_invalidate(EOPNOTSUPP, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			sizeof(*inv_reqs), &num_inv);
+	assert(num_inv == 1);
+	test_cmd_dev_check_cache(dev_id, 0, 0);
+	test_cmd_dev_check_cache(dev_id, 1,
+				 IOMMU_TEST_DEV_CACHE_DEFAULT);
+	test_cmd_dev_check_cache(dev_id, 2,
+				 IOMMU_TEST_DEV_CACHE_DEFAULT);
+	test_cmd_dev_check_cache(dev_id, 3,
+				 IOMMU_TEST_DEV_CACHE_DEFAULT);
 
-		/* Invalidate all cache entries for nested_dev_id[1] and verify */
-		num_inv = 1;
-		inv_reqs[0].vdev_id = 0x99;
-		inv_reqs[0].flags = IOMMU_TEST_INVALIDATE_FLAG_ALL;
-		test_cmd_viommu_invalidate(viommu_id, inv_reqs,
-					   sizeof(*inv_reqs), &num_inv);
-		assert(num_inv == 1);
-		test_cmd_dev_check_cache_all(dev_id, 0);
-		test_ioctl_destroy(vdev_id);
-	}
+	/*
+	 * Invalidate the 1st cache entry but fail the 2nd request
+	 * due to invalid cache_id configuration in the 2nd request.
+	 */
+	num_inv = 2;
+	inv_reqs[0].flags = 0;
+	inv_reqs[0].vdev_id = 0x99;
+	inv_reqs[0].cache_id = 0;
+	inv_reqs[1].flags = 0;
+	inv_reqs[1].vdev_id = 0x99;
+	inv_reqs[1].cache_id = MOCK_DEV_CACHE_ID_MAX + 1;
+	test_err_viommu_invalidate(EINVAL, viommu_id, inv_reqs,
+			IOMMU_VIOMMU_INVALIDATE_DATA_SELFTEST,
+			sizeof(*inv_reqs), &num_inv);
+	assert(num_inv == 1);
+	test_cmd_dev_check_cache(dev_id, 0, 0);
+	test_cmd_dev_check_cache(dev_id, 1,
+				 IOMMU_TEST_DEV_CACHE_DEFAULT);
+	test_cmd_dev_check_cache(dev_id, 2,
+				 IOMMU_TEST_DEV_CACHE_DEFAULT);
+	test_cmd_dev_check_cache(dev_id, 3,
+				 IOMMU_TEST_DEV_CACHE_DEFAULT);
+
+	/* Invalidate the 2nd cache entry and verify */
+	num_inv = 1;
+	inv_reqs[0].flags = 0;
+	inv_reqs[0].vdev_id = 0x99;
+	inv_reqs[0].cache_id = 1;
+	test_cmd_viommu_invalidate(viommu_id, inv_reqs,
+				   sizeof(*inv_reqs), &num_inv);
+	assert(num_inv == 1);
+	test_cmd_dev_check_cache(dev_id, 0, 0);
+	test_cmd_dev_check_cache(dev_id, 1, 0);
+	test_cmd_dev_check_cache(dev_id, 2,
+				 IOMMU_TEST_DEV_CACHE_DEFAULT);
+	test_cmd_dev_check_cache(dev_id, 3,
+				 IOMMU_TEST_DEV_CACHE_DEFAULT);
+
+	/* Invalidate the 3rd and 4th cache entries and verify */
+	num_inv = 2;
+	inv_reqs[0].flags = 0;
+	inv_reqs[0].vdev_id = 0x99;
+	inv_reqs[0].cache_id = 2;
+	inv_reqs[1].flags = 0;
+	inv_reqs[1].vdev_id = 0x99;
+	inv_reqs[1].cache_id = 3;
+	test_cmd_viommu_invalidate(viommu_id, inv_reqs,
+				   sizeof(*inv_reqs), &num_inv);
+	assert(num_inv == 2);
+	test_cmd_dev_check_cache_all(dev_id, 0);
+
+	/* Invalidate all cache entries for nested_dev_id[1] and verify */
+	num_inv = 1;
+	inv_reqs[0].vdev_id = 0x99;
+	inv_reqs[0].flags = IOMMU_TEST_INVALIDATE_FLAG_ALL;
+	test_cmd_viommu_invalidate(viommu_id, inv_reqs,
+				   sizeof(*inv_reqs), &num_inv);
+	assert(num_inv == 1);
+	test_cmd_dev_check_cache_all(dev_id, 0);
+	test_ioctl_destroy(vdev_id);
 }
 
 FIXTURE(iommufd_device_pasid)
-- 
2.25.1

From nobody Wed Oct 8 14:16:05 2025
From: Xu Yilun
To: jgg@nvidia.com, jgg@ziepe.ca, kevin.tian@intel.com, will@kernel.org, aneesh.kumar@kernel.org
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org, joro@8bytes.org, robin.murphy@arm.com, shuah@kernel.org, nicolinc@nvidia.com, aik@amd.com, dan.j.williams@intel.com, baolu.lu@linux.intel.com, yilun.xu@intel.com
Subject: [PATCH v3 5/5] iommufd/selftest: Add coverage for vdevice tombstone
Date: Fri, 27 Jun 2025 11:38:09 +0800
Message-Id: <20250627033809.1730752-6-yilun.xu@linux.intel.com>
In-Reply-To: <20250627033809.1730752-1-yilun.xu@linux.intel.com>
References: <20250627033809.1730752-1-yilun.xu@linux.intel.com>

Test the flow that tombstones the vdevice when the idevice is unbound
before the vdevice is destroyed.
The expected result is:

 - idevice unbinding tombstones the vdevice ID; the ID can't be
   repurposed anymore.
 - Even ioctl(IOMMU_DESTROY) can't free the tombstoned ID.
 - iommufd_fops_release() can still free everything.

Signed-off-by: Xu Yilun
---
 tools/testing/selftests/iommu/iommufd.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
index 4a9b6e3b37fa..e1470a7a42cd 100644
--- a/tools/testing/selftests/iommu/iommufd.c
+++ b/tools/testing/selftests/iommu/iommufd.c
@@ -3016,6 +3016,20 @@ TEST_F(iommufd_viommu, vdevice_cache)
 	test_ioctl_destroy(vdev_id);
 }
 
+TEST_F(iommufd_viommu, vdevice_tombstone)
+{
+	uint32_t viommu_id = self->viommu_id;
+	uint32_t dev_id = self->device_id;
+	uint32_t vdev_id = 0;
+
+	if (!dev_id)
+		SKIP(return, "Skipping test for variant no_viommu");
+
+	test_cmd_vdevice_alloc(viommu_id, dev_id, 0x99, &vdev_id);
+	test_ioctl_destroy(self->stdev_id);
+	EXPECT_ERRNO(ENOENT, _test_ioctl_destroy(self->fd, vdev_id));
+}
+
 FIXTURE(iommufd_device_pasid)
 {
 	int fd;
-- 
2.25.1