As the final remaining piece of bus-dependent API, iommu_domain_alloc()
can now take responsibility for the "one iommu_ops per bus" rule for
itself. It turns out we can't safely make the internal allocation call
any more group-based or device-based yet - that will have to wait until
the external callers can pass the right thing - but we can at least get
as far as deriving "bus ops" based on which driver is actually managing
devices on the given bus, rather than whichever driver won the race to
register first.

This will then leave us able to convert the last of the core internals
over to the IOMMU-instance model, allow multiple drivers to register and
actually coexist (modulo the above limitation for unmanaged domain users
in the short term), and start trying to solve the long-standing
iommu_probe_device() mess.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
v5: Rewrite, de-scoping to just retrieve ops under the same assumptions
as the existing code.
---
drivers/iommu/iommu.c | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 578292d3b152..18667dc2ff86 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2140,12 +2140,31 @@ __iommu_group_domain_alloc(struct iommu_group *group, unsigned int type)
return __iommu_domain_alloc(dev_iommu_ops(dev), dev, type);
}
+static int __iommu_domain_alloc_dev(struct device *dev, void *data)
+{
+ const struct iommu_ops **ops = data;
+
+ if (!dev_has_iommu(dev))
+ return 0;
+
+ if (WARN_ONCE(*ops && *ops != dev_iommu_ops(dev),
+ "Multiple IOMMU drivers present for bus %s, which the public IOMMU API can't fully support yet. You will still need to disable one or more for this to work, sorry!\n",
+ dev_bus_name(dev)))
+ return -EBUSY;
+
+ *ops = dev_iommu_ops(dev);
+ return 0;
+}
+
struct iommu_domain *iommu_domain_alloc(const struct bus_type *bus)
{
- if (bus == NULL || bus->iommu_ops == NULL)
+ const struct iommu_ops *ops = NULL;
+ int err = bus_for_each_dev(bus, NULL, &ops, __iommu_domain_alloc_dev);
+
+ if (err || !ops)
return NULL;
- return __iommu_domain_alloc(bus->iommu_ops, NULL,
- IOMMU_DOMAIN_UNMANAGED);
+
+ return __iommu_domain_alloc(ops, NULL, IOMMU_DOMAIN_UNMANAGED);
}
EXPORT_SYMBOL_GPL(iommu_domain_alloc);
--
2.39.2.101.g768bb238c484.dirty
On Wed, Oct 11, 2023 at 07:14:51PM +0100, Robin Murphy wrote:
> [...]

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
On Wed, Oct 11, 2023 at 07:14:51PM +0100, Robin Murphy wrote:
> [...]

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

FWIW, I was thinking afterwards that domain_alloc_paging() probably
should have been:

	(domain_alloc_paging *)(struct iommu_device *iommu,
				struct iommu_group *group)

Most drivers can use the iommu and ignore the group; a few special ones
can do the needed reduce operation across the group.

We can get to that later when we get deeper into the PASID troubles. It
also requires deferral of the domain creation, like the bus probe path
does but the fwnode path doesn't :\

Jason