The iommu_deferred_attach() function invokes __iommu_attach_device()
without holding the group->mutex, unlike the other __iommu_attach_device()
callers.
Though no practical bug has been triggered so far, it would be better to
apply the same locking around this __iommu_attach_device() call, since
IOMMU drivers nowadays are more aware of the group->mutex -- some of them
use the iommu_group_mutex_assert() function, which can end up in the path
of an attach_dev callback invoked by __iommu_attach_device().
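As a rough sketch (hypothetical_attach_dev() below is an invented
example; only iommu_group_mutex_assert() is the real helper), such a
callback relies on its caller holding the lock:

  static int hypothetical_attach_dev(struct iommu_domain *domain,
                                     struct device *dev)
  {
          /* With lockdep enabled, WARNs unless group->mutex is held */
          iommu_group_mutex_assert(dev);

          /* a real driver would program its translation here */
          return 0;
  }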
It is worth mentioning that iommu_deferred_attach() will soon need to
check group->resetting_domain, which must also be read while holding the
group->mutex.
Thus, grab the mutex to guard __iommu_attach_device() like other callers.
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/iommu.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 2ca990dfbb884..170e522b5bda4 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2185,10 +2185,17 @@ EXPORT_SYMBOL_GPL(iommu_attach_device);
 
 int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
 {
-	if (dev->iommu && dev->iommu->attach_deferred)
-		return __iommu_attach_device(domain, dev, NULL);
+	/*
+	 * This is called on the dma mapping fast path so avoid locking. This is
+	 * racy, but we have an expectation that the driver will setup its DMAs
+	 * inside probe while being single threaded to avoid racing.
+	 */
+	if (!dev->iommu || !dev->iommu->attach_deferred)
+		return 0;
 
-	return 0;
+	guard(mutex)(&dev->iommu_group->mutex);
+
+	return __iommu_attach_device(domain, dev, NULL);
 }
 
 void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
--
2.43.0