The iommu_get_domain_for_dev() helper will be reworked to check a per-gdev
flag, so it will need to hold the group->mutex. This creates trouble for
existing attach_dev callback functions that call the helper to look up the
currently attached old domain, since group->mutex is already held in an
attach_dev context.
To address this, step one is to pass in the attached "old" domain pointer
to the attach_dev op, similar to the set_dev_pasid op.
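For reference, a minimal sketch of what the reworked op could look like (the
exact parameter name and placement are assumptions; only set_dev_pasid's
existing "old" argument is confirmed):

	#include <linux/iommu.h>

	/*
	 * Hypothetical sketch, not the final signature: attach_dev gains
	 * the currently attached "old" domain from the core, mirroring how
	 * set_dev_pasid already receives it, so drivers no longer need to
	 * call iommu_get_domain_for_dev() under group->mutex.
	 */
	struct iommu_domain_ops_sketch {
		int (*attach_dev)(struct iommu_domain *domain,
				  struct device *dev,
				  struct iommu_domain *old);
		int (*set_dev_pasid)(struct iommu_domain *domain,
				     struct device *dev, ioasid_t pasid,
				     struct iommu_domain *old);
	};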
However, the release_device op is tricky in the iommu core, because it can
be invoked when the group isn't allocated, i.e. there is no way of
guaranteeing that group->mutex is held. Thus, the release_device callback
function, arm_smmu_release_device() here, would not be able to do any
attachment.
Add a release_domain, moving the attach from arm_smmu_release_device() to
iommu_deinit_device() in the core, so that arm_smmu_release_device() will
not need to worry about the group->mutex.
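For reference, the pre-existing core-side hook in iommu_deinit_device() that
this attach moves into (quoted from the core code discussed later in this
thread); the driver only has to populate ops->release_domain:

	/* Core side, in iommu_deinit_device(): the core performs the
	 * final attach on behalf of the driver. */
	if (!dev->iommu->attach_deferred && ops->release_domain)
		ops->release_domain->ops->attach_dev(ops->release_domain, dev);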
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 34 ++++++++++++++++-----
1 file changed, 26 insertions(+), 8 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 5968043ac8023..1a21d1a2dd454 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3291,6 +3291,31 @@ static struct iommu_domain arm_smmu_blocked_domain = {
.ops = &arm_smmu_blocked_ops,
};
+static int arm_smmu_attach_dev_release(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+ WARN_ON(master->iopf_refcount);
+
+ /* Put the STE back to what arm_smmu_init_strtab() sets */
+ if (dev->iommu->require_direct)
+ arm_smmu_attach_dev_identity(&arm_smmu_identity_domain, dev);
+ else
+ arm_smmu_attach_dev_blocked(&arm_smmu_blocked_domain, dev);
+
+ return 0;
+}
+
+static const struct iommu_domain_ops arm_smmu_release_ops = {
+ .attach_dev = arm_smmu_attach_dev_release,
+};
+
+static struct iommu_domain arm_smmu_release_domain = {
+ .type = IOMMU_DOMAIN_BLOCKED,
+ .ops = &arm_smmu_release_ops,
+};
+
static struct iommu_domain *
arm_smmu_domain_alloc_paging_flags(struct device *dev, u32 flags,
const struct iommu_user_data *user_data)
@@ -3580,14 +3605,6 @@ static void arm_smmu_release_device(struct device *dev)
{
struct arm_smmu_master *master = dev_iommu_priv_get(dev);
- WARN_ON(master->iopf_refcount);
-
- /* Put the STE back to what arm_smmu_init_strtab() sets */
- if (dev->iommu->require_direct)
- arm_smmu_attach_dev_identity(&arm_smmu_identity_domain, dev);
- else
- arm_smmu_attach_dev_blocked(&arm_smmu_blocked_domain, dev);
-
arm_smmu_disable_pasid(master);
arm_smmu_remove_master(master);
if (arm_smmu_cdtab_allocated(&master->cd_table))
@@ -3678,6 +3695,7 @@ static int arm_smmu_def_domain_type(struct device *dev)
static const struct iommu_ops arm_smmu_ops = {
.identity_domain = &arm_smmu_identity_domain,
.blocked_domain = &arm_smmu_blocked_domain,
+ .release_domain = &arm_smmu_release_domain,
.capable = arm_smmu_capable,
.hw_info = arm_smmu_hw_info,
.domain_alloc_sva = arm_smmu_sva_domain_alloc,
--
2.43.0
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Monday, September 1, 2025 7:32 AM
>
> +static int arm_smmu_attach_dev_release(struct iommu_domain *domain,
> + struct device *dev)
> +{
> + struct arm_smmu_master *master = dev_iommu_priv_get(dev);
> +
> + WARN_ON(master->iopf_refcount);
> +
> + /* Put the STE back to what arm_smmu_init_strtab() sets */
> +	if (dev->iommu->require_direct)
> +		arm_smmu_attach_dev_identity(&arm_smmu_identity_domain, dev);
> +	else
> +		arm_smmu_attach_dev_blocked(&arm_smmu_blocked_domain, dev);
it's a bit confusing that a BLOCKED domain type could turn to the
identity mode, though this movement doesn't change the original
behavior.
> +
> + return 0;
> +}
> +
> +static const struct iommu_domain_ops arm_smmu_release_ops = {
> + .attach_dev = arm_smmu_attach_dev_release,
> +};
> +
> +static struct iommu_domain arm_smmu_release_domain = {
> + .type = IOMMU_DOMAIN_BLOCKED,
> + .ops = &arm_smmu_release_ops,
> +};
> +
On Fri, Sep 12, 2025 at 09:33:06AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Monday, September 1, 2025 7:32 AM
> >
> > +static int arm_smmu_attach_dev_release(struct iommu_domain *domain,
> > + struct device *dev)
> > +{
> > + struct arm_smmu_master *master = dev_iommu_priv_get(dev);
> > +
> > + WARN_ON(master->iopf_refcount);
This doesn't look right anymore..
Now that iopf is managed automatically it technically doesn't go to
zero until the attaches below:
> > +
> > + /* Put the STE back to what arm_smmu_init_strtab() sets */
> > +	if (dev->iommu->require_direct)
> > +		arm_smmu_attach_dev_identity(&arm_smmu_identity_domain, dev);
> > +	else
> > +		arm_smmu_attach_dev_blocked(&arm_smmu_blocked_domain, dev);
And I'd argue the attaches internally should have the assertion. If no
pasids and blocked/identity the iopf == 0.
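A rough sketch of that suggestion (the placement inside
arm_smmu_attach_dev_blocked() is an assumption, and the real function
carries much more state handling):

	/*
	 * Hypothetical sketch: assert inside the blocked/identity attach
	 * paths instead of in arm_smmu_release_device(). Once the RID is
	 * parked on a blocked or identity domain and no PASIDs remain,
	 * the core-managed IOPF reference count must have reached zero.
	 */
	static int arm_smmu_attach_dev_blocked(struct iommu_domain *domain,
					       struct device *dev)
	{
		struct arm_smmu_master *master = dev_iommu_priv_get(dev);

		/* ... install the abort STE, flush pending faults ... */

		WARN_ON(master->iopf_refcount);
		return 0;
	}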
> it's a bit confusing that a BLOCKED domain type could turn to the
> identity mode, though this movement doesn't change the original
> behavior.
That isn't what is happening here..
If dev->iommu->require_direct is set we prevent attaching BLOCKING
domains entirely:
	if (dev->iommu->require_direct &&
	    (new_domain->type == IOMMU_DOMAIN_BLOCKED ||
	     new_domain == group->blocking_domain)) {
		dev_warn(dev,
			 "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
		return -EINVAL;
	}
So in most sane cases the above will never convert BLOCKING to
IDENTITY. What it is doing is preserving the RMRs...
Also, I don't think this should be in the smmu driver; every driver
should have this same logic, as it is part of the definition of RMR.
Let's put it in the core code:
	if (!dev->iommu->attach_deferred && ops->release_domain) {
		struct iommu_domain *release_domain = ops->release_domain;

		/*
		 * If the device requires direct mappings then it should not
		 * be parked on a BLOCKED domain during release as that would
		 * break the direct mappings.
		 */
		if (dev->iommu->require_direct && ops->identity_domain &&
		    release_domain == ops->blocked_domain)
			release_domain = ops->identity_domain;

		release_domain->ops->attach_dev(release_domain, dev);
	}
Jason
On Mon, Sep 15, 2025 at 09:35:15AM -0300, Jason Gunthorpe wrote:
> On Fri, Sep 12, 2025 at 09:33:06AM +0000, Tian, Kevin wrote:
> > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > Sent: Monday, September 1, 2025 7:32 AM
> > >
> > > +static int arm_smmu_attach_dev_release(struct iommu_domain *domain,
> > > + struct device *dev)
> > > +{
> > > + struct arm_smmu_master *master = dev_iommu_priv_get(dev);
> > > +
> > > + WARN_ON(master->iopf_refcount);
>
> This doesn't look right anymore..
>
> Now that iopf is managed automatically it technically doesn't go to
> zero until the attaches below:
I will leave this WARN_ON in arm_smmu_release_device(), while having a
release_domain that calls arm_smmu_attach_dev_blocked():
-----------------------------------------------------------------
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 2a8b46b948f05..3b21790938d24 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3291,6 +3291,16 @@ static struct iommu_domain arm_smmu_blocked_domain = {
.ops = &arm_smmu_blocked_ops,
};
+/* Same as arm_smmu_blocked_ops but less set_dev_pasid */
+static const struct iommu_domain_ops arm_smmu_release_ops = {
+ .attach_dev = arm_smmu_attach_dev_blocked,
+};
+
+static struct iommu_domain arm_smmu_release_domain = {
+ .type = IOMMU_DOMAIN_BLOCKED,
+ .ops = &arm_smmu_release_ops,
+};
+
static struct iommu_domain *
arm_smmu_domain_alloc_paging_flags(struct device *dev, u32 flags,
const struct iommu_user_data *user_data)
@@ -3582,12 +3592,6 @@ static void arm_smmu_release_device(struct device *dev)
WARN_ON(master->iopf_refcount);
- /* Put the STE back to what arm_smmu_init_strtab() sets */
- if (dev->iommu->require_direct)
- arm_smmu_attach_dev_identity(&arm_smmu_identity_domain, dev);
- else
- arm_smmu_attach_dev_blocked(&arm_smmu_blocked_domain, dev);
-
arm_smmu_disable_pasid(master);
arm_smmu_remove_master(master);
if (arm_smmu_cdtab_allocated(&master->cd_table))
@@ -3678,6 +3682,7 @@ static int arm_smmu_def_domain_type(struct device *dev)
static const struct iommu_ops arm_smmu_ops = {
.identity_domain = &arm_smmu_identity_domain,
.blocked_domain = &arm_smmu_blocked_domain,
+ .release_domain = &arm_smmu_release_domain,
.capable = arm_smmu_capable,
.hw_info = arm_smmu_hw_info,
.domain_alloc_sva = arm_smmu_sva_domain_alloc,
-----------------------------------------------------------------
> > > +
> > > + /* Put the STE back to what arm_smmu_init_strtab() sets */
> > > +	if (dev->iommu->require_direct)
> > > +		arm_smmu_attach_dev_identity(&arm_smmu_identity_domain, dev);
> > > +	else
> > > +		arm_smmu_attach_dev_blocked(&arm_smmu_blocked_domain, dev);
>
> And I'd argue the attaches internally should have the assertion. If no
> pasids and blocked/identity the iopf == 0.
Ack. I will try a separate SMMU patch from this series.
> Also, I don't think this should be in the smmu driver, every driver
> should have this same logic, it is part of the definition of RMR
> Let's put it in the core code:
Ack. Adding this patch prior to the SMMU release_domain:
-----------------------------------------------------------------
From: Jason Gunthorpe <jgg@nvidia.com>
Date: Fri, 19 Sep 2025 22:26:45 +0000
Subject: [PATCH] iommu: Use identity_domain as release_domain for
require_direct
If dev->iommu->require_direct is set, the core prevents attaching a BLOCKED
domain entirely in __iommu_device_set_domain():
	if (dev->iommu->require_direct &&
	    (new_domain->type == IOMMU_DOMAIN_BLOCKED ||
	     new_domain == group->blocking_domain)) {
		dev_warn(dev, "....");
		return -EINVAL;
	}
Thus, in most sane cases, the above will never convert BLOCKED to IDENTITY
in order to preserve the RMRs (direct mappings).
A similar situation would happen to the release_domain: while the driver
might have set it to a BLOCKED domain, replace it with an IDENTITY one for
RMRs.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/iommu.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 08ba7b929580f..438458b465cac 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -516,8 +516,20 @@ static void iommu_deinit_device(struct device *dev)
* Regardless, if a delayed attach never occurred, then the release
* should still avoid touching any hardware configuration either.
*/
- if (!dev->iommu->attach_deferred && ops->release_domain)
- ops->release_domain->ops->attach_dev(ops->release_domain, dev);
+ if (!dev->iommu->attach_deferred && ops->release_domain) {
+ struct iommu_domain *release_domain = ops->release_domain;
+
+ /*
+ * If the device requires direct mappings then it should not
+ * be parked on a BLOCKED domain during release as that would
+ * break the direct mappings.
+ */
+ if (dev->iommu->require_direct && ops->identity_domain &&
+ release_domain == ops->blocked_domain)
+ release_domain = ops->identity_domain;
+
+ release_domain->ops->attach_dev(release_domain, dev);
+ }
if (ops->release_device)
ops->release_device(dev);
--
2.43.0
-----------------------------------------------------------------
Thanks
Nicolin
On Fri, Sep 19, 2025 at 03:47:49PM -0700, Nicolin Chen wrote:
> +/* Same as arm_smmu_blocked_ops but less set_dev_pasid */
> +static const struct iommu_domain_ops arm_smmu_release_ops = {
> + .attach_dev = arm_smmu_attach_dev_blocked,
> +};
don't worry about set_dev_pasid for the release domain, it is never
called anyhow.
The intention is to just use identity or blocked domains as is.
Jason
> -----------------------------------------------------------------
> From: Jason Gunthorpe <jgg@nvidia.com>
> Date: Fri, 19 Sep 2025 22:26:45 +0000
> Subject: [PATCH] iommu: Use identity_domain as release_domain for
> require_direct
>
> If dev->iommu->require_direct is set, the core prevents attaching a BLOCKED
> domain entirely in __iommu_device_set_domain():
>
> 	if (dev->iommu->require_direct &&
> 	    (new_domain->type == IOMMU_DOMAIN_BLOCKED ||
> 	     new_domain == group->blocking_domain)) {
> 		dev_warn(dev, "....");
> 		return -EINVAL;
> 	}
>
> Thus, in most sane cases, the above will never convert BLOCKED to IDENTITY
> in order to preserve the RMRs (direct mappings).
>
> A similar situation would happen to the release_domain: while the driver
> might have set it to a BLOCKED domain, replace it with an IDENTITY one for
> RMRs.
How about
[PATCH] iommu: Generic support for RMRs during device release
Generally an IOMMU driver should leave the translation as BLOCKED
until the translation entry is probed onto a struct device. When the
struct device is removed the translation should be put back to
BLOCKED.
Drivers that are able to work like this can set their release_domain
to their blocking domain and the core code handles this work.
The exception is if the device has an IOMMU_RESV_DIRECT region. In this
case the OS should continuously allow translation for the given
range. The core code generally prevents using a BLOCKED domain with
this device.
Continue this logic for the device release and hoist some open coding
from drivers. If the device has dev->iommu->require_direct and the
driver is using a BLOCKED release_domain override this to IDENTITY to
preserve the semantics.
The only remaining required driver code for IOMMU_RESV_DIRECT should
preset an IDENTITY translation during early IOMMU startup for those
devices.
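As a rough illustration of that remaining driver-side duty (the helper name
and the translation-programming step are hypothetical; only
iommu_get_resv_regions()/iommu_put_resv_regions() and IOMMU_RESV_DIRECT are
real core APIs):

	#include <linux/iommu.h>

	/*
	 * Hypothetical sketch: during early IOMMU startup, scan the
	 * device's reserved regions and preset an IDENTITY translation
	 * for any IOMMU_RESV_DIRECT (RMR) ranges before the core takes
	 * over domain attachment.
	 */
	static void example_preset_rmr_identity(struct device *dev)
	{
		struct iommu_resv_region *region;
		LIST_HEAD(resv_regions);

		iommu_get_resv_regions(dev, &resv_regions);
		list_for_each_entry(region, &resv_regions, list) {
			if (region->type != IOMMU_RESV_DIRECT)
				continue;
			/* driver-specific: program an IDENTITY entry
			 * (e.g. a bypass STE on SMMUv3) covering
			 * [region->start, region->start + region->length) */
		}
		iommu_put_resv_regions(dev, &resv_regions);
	}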
Jason
On Tue, Sep 23, 2025 at 02:22:29PM -0300, Jason Gunthorpe wrote:
> On Fri, Sep 19, 2025 at 03:47:49PM -0700, Nicolin Chen wrote:
> > +/* Same as arm_smmu_blocked_ops but less set_dev_pasid */
> > +static const struct iommu_domain_ops arm_smmu_release_ops = {
> > + .attach_dev = arm_smmu_attach_dev_blocked,
> > +};
>
> don't worry about set_dev_pasid for the release domain, it is never
> called anyhow.
>
> The intention is to just use identity or blocked domains as is.
Ah, I see AMD separates release_domain from blocked_domain..
OK. Will just set it to blocked_domain. And will update the commit
message as well.
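In other words, the end result would be something like this (a sketch of the
final ops table, assuming the dedicated release domain is dropped):

	/*
	 * Sketch of the agreed outcome: reuse the existing blocked domain
	 * as the release domain instead of defining a separate
	 * arm_smmu_release_domain. set_dev_pasid is never invoked on the
	 * release domain, so arm_smmu_blocked_ops works as-is.
	 */
	static const struct iommu_ops arm_smmu_ops = {
		.identity_domain	= &arm_smmu_identity_domain,
		.blocked_domain		= &arm_smmu_blocked_domain,
		.release_domain		= &arm_smmu_blocked_domain,
		/* ... remaining ops unchanged ... */
	};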
Thanks
Nicolin
On Tue, Sep 23, 2025 at 10:37:31AM -0700, Nicolin Chen wrote:
> On Tue, Sep 23, 2025 at 02:22:29PM -0300, Jason Gunthorpe wrote:
> > On Fri, Sep 19, 2025 at 03:47:49PM -0700, Nicolin Chen wrote:
> > > +/* Same as arm_smmu_blocked_ops but less set_dev_pasid */
> > > +static const struct iommu_domain_ops arm_smmu_release_ops = {
> > > + .attach_dev = arm_smmu_attach_dev_blocked,
> > > +};
> >
> > don't worry about set_dev_pasid for the release domain, it is never
> > called anyhow.
> >
> > The intention is to just use identity or blocked domains as is.
>
> Ah, I see AMD separate release_domain from blocked_domain..
Send a patch for AMD since you found it?
Jason
On Tue, Sep 23, 2025 at 02:44:53PM -0300, Jason Gunthorpe wrote:
> On Tue, Sep 23, 2025 at 10:37:31AM -0700, Nicolin Chen wrote:
> > On Tue, Sep 23, 2025 at 02:22:29PM -0300, Jason Gunthorpe wrote:
> > > On Fri, Sep 19, 2025 at 03:47:49PM -0700, Nicolin Chen wrote:
> > > > +/* Same as arm_smmu_blocked_ops but less set_dev_pasid */
> > > > +static const struct iommu_domain_ops arm_smmu_release_ops = {
> > > > + .attach_dev = arm_smmu_attach_dev_blocked,
> > > > +};
> > >
> > > don't worry about set_dev_pasid for the release domain, it is never
> > > called anyhow.
> > >
> > > The intention is to just use identity or blocked domains as is.
> >
> > Ah, I see AMD separate release_domain from blocked_domain..
>
> Send a patch for AMD since you found it?
Yea, sure.
Nicolin