> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Tuesday, June 20, 2023 8:47 PM
>
> On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote:
> > I wonder whether we have argued past each other.
> >
> > This series adds reserved regions to S2. I challenged the necessity as
> > S2 is not directly accessed by the device.
> >
> > Then you replied that doing so still made sense to support identity
> > S1.
>
> I think I said/meant if we attach the "s2" iommu domain as a direct
> attach for identity - e.g. at boot time, then the IOAS must gain the
> reserved regions. This is our normal protocol.
>
> But when we use the "s2" iommu domain as an actual nested S2 then we
> don't gain reserved regions.

Then we're aligned.

Yi/Nicolin, please update this series to not automatically add reserved
regions to S2 in the nesting configuration.

It also implies that the user cannot rely on IOAS_IOVA_RANGES to learn
reserved regions for arranging addresses in S1.

Then we also need a new ioctl to report reserved regions per dev_id.

> > Intel VT-d supports 4 configurations:
> > - passthrough (i.e. identity mapped)
> > - S1 only
> > - S2 only
> > - nested
> >
> > 'S2 only' is used when vIOMMU is configured in passthrough.
>
> S2 only is modeled as attaching an S2 format iommu domain to the RID,
> and when this is done the IOAS should gain the reserved regions
> because it is no different behavior than attaching any other iommu
> domain to a RID.
>
> When the S2 is replaced with a S1 nest then the IOAS should lose
> those reserved regions since it is no longer attached to a RID.

yes

> > My understanding of ARM SMMU is that from host p.o.v. the CD is the
> > S1 in the nested configuration. 'identity' is one configuration in
> > the CD then it's in the business of nesting.
>
> I think it is the same. A CD doesn't come into the picture until the
> guest installs a CD pointing STE. Until that time the S2 is being used
> as identity.
>
> It sounds like the same basic flow.

After a CD table is installed in a STE I assume the SMMU still allows
configuring an individual CD entry as identity? e.g. while vSVA is
enabled on a device the guest can continue to keep CD#0 as identity when
the default domain of the device is set as 'passthrough'. In this case
the IOAS still needs to gain reserved regions even though S2 is not
directly attached from host p.o.v.

> > My preference was that ALLOC_HWPT allows vIOMMU to opt whether
> > reserved regions of dev_id should be added to the IOAS of the parent
> > S2 hwpt.
>
> Having an API to explicitly load reserved regions of a specific device
> to an IOAS makes some sense to me.
>
> Jason
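The per-dev_id reporting ioctl requested above had no uAPI yet at this point in the thread. A minimal sketch of what such a report could look like, together with the ordering guarantee that would make it easy to consume — every name here (struct and function) is hypothetical, not the eventual iommufd interface:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical uAPI sketch: one entry per reserved region of a dev_id.
 * 'sw_msi' marks the region the kernel carves out of S2 for MSI. */
struct iommu_resv_range {
	uint64_t start;
	uint64_t last;    /* inclusive */
	uint32_t sw_msi;  /* 1 if this is the SW_MSI region */
	uint32_t __reserved;
};

/* What the kernel would guarantee: entries sorted by address and
 * non-overlapping, so the VMM can walk them linearly when arranging
 * the S1 layout. Returns 1 if the array honors that contract. */
static int resv_ranges_sane(const struct iommu_resv_range *r, size_t n)
{
	for (size_t i = 0; i + 1 < n; i++)
		if (r[i].last >= r[i + 1].start)
			return 0;
	return 1;
}
```

The VMM would run such a check once per dev_id and then feed the ranges both to its S1 address allocator and to the guest's ACPI tables.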
On Wed, Jun 21, 2023 at 06:02:21AM +0000, Tian, Kevin wrote:
> > On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote:
> > > I wonder whether we have argued past each other.
> > >
> > > This series adds reserved regions to S2. I challenged the necessity as
> > > S2 is not directly accessed by the device.
> > >
> > > Then you replied that doing so still made sense to support identity
> > > S1.
> >
> > I think I said/meant if we attach the "s2" iommu domain as a direct
> > attach for identity - e.g. at boot time, then the IOAS must gain the
> > reserved regions. This is our normal protocol.
> >
> > But when we use the "s2" iommu domain as an actual nested S2 then we
> > don't gain reserved regions.
>
> Then we're aligned.
>
> Yi/Nicolin, please update this series to not automatically add reserved
> regions to S2 in the nesting configuration.

I'm a bit late for the conversation here. Yet, how about the
IOMMU_RESV_SW_MSI on ARM in the nesting configuration? We'd still call
iommufd_group_setup_msi() on the S2 HWPT, despite attaching the device
to a nested S1 HWPT, right?

> It also implies that the user cannot rely on IOAS_IOVA_RANGES to
> learn reserved regions for arranging addresses in S1.
>
> Then we also need a new ioctl to report reserved regions per dev_id.

So, in a nesting configuration, QEMU would poll a device's S2 MSI region
(i.e. IOMMU_RESV_SW_MSI) to prevent conflict?

Thanks
Nic
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Thursday, June 22, 2023 1:13 AM
>
> On Wed, Jun 21, 2023 at 06:02:21AM +0000, Tian, Kevin wrote:
>
> > > On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote:
> > > > I wonder whether we have argued past each other.
> > > >
> > > > This series adds reserved regions to S2. I challenged the necessity as
> > > > S2 is not directly accessed by the device.
> > > >
> > > > Then you replied that doing so still made sense to support identity
> > > > S1.
> > >
> > > I think I said/meant if we attach the "s2" iommu domain as a direct
> > > attach for identity - e.g. at boot time, then the IOAS must gain the
> > > reserved regions. This is our normal protocol.
> > >
> > > But when we use the "s2" iommu domain as an actual nested S2 then we
> > > don't gain reserved regions.
> >
> > Then we're aligned.
> >
> > Yi/Nicolin, please update this series to not automatically add reserved
> > regions to S2 in the nesting configuration.
>
> I'm a bit late for the conversation here. Yet, how about the
> IOMMU_RESV_SW_MSI on ARM in the nesting configuration? We'd still call
> iommufd_group_setup_msi() on the S2 HWPT, despite attaching the device
> to a nested S1 HWPT, right?

Yes, based on the current design of ARM nesting.

But please special-case it instead of pretending that all reserved
regions are added to the IOAS, which is wrong in concept based on the
discussion.

> > It also implies that the user cannot rely on IOAS_IOVA_RANGES to
> > learn reserved regions for arranging addresses in S1.
> >
> > Then we also need a new ioctl to report reserved regions per dev_id.
>
> So, in a nesting configuration, QEMU would poll a device's S2 MSI region
> (i.e. IOMMU_RESV_SW_MSI) to prevent conflict?

QEMU needs to know all the reserved regions of the device and skip them
when arranging the S1 layout.

I'm not sure whether the MSI region needs a special MSI type or just a
general RESV_DIRECT type for 1:1 mapping, though.
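The VMM-side arrangement Kevin describes — know all the device's reserved regions and skip them when laying out S1 — amounts to a first-fit walk over a sorted region list. An illustrative sketch, not QEMU's actual allocator:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

struct resv { uint64_t start, last; }; /* inclusive bounds, sorted */

/* Place an S1 window of 'size' bytes at the lowest address >= 'base'
 * that does not overlap any reserved region. Regions must be sorted
 * and non-overlapping. Returns the chosen base, or UINT64_MAX if the
 * window would end above 'limit'. */
static uint64_t place_s1_window(const struct resv *r, size_t n,
				uint64_t base, uint64_t size, uint64_t limit)
{
	for (size_t i = 0; i < n; i++) {
		if (r[i].last < base)
			continue;              /* region entirely below us */
		if (r[i].start >= base + size)
			break;                 /* gap before this region fits */
		base = r[i].last + 1;          /* bump past the region */
	}
	if (base + size - 1 > limit)
		return UINT64_MAX;
	return base;
}
```

This is exactly why the VMM needs the full per-dev_id region list up front: a single unknown region (e.g. the SW_MSI window) would silently corrupt whatever S1 mapping lands on it.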
On Mon, Jun 26, 2023 at 06:42:58AM +0000, Tian, Kevin wrote:
> > > Yi/Nicolin, please update this series to not automatically add reserved
> > > regions to S2 in the nesting configuration.
> >
> > I'm a bit late for the conversation here. Yet, how about the
> > IOMMU_RESV_SW_MSI on ARM in the nesting configuration? We'd still call
> > iommufd_group_setup_msi() on the S2 HWPT, despite attaching the device
> > to a nested S1 HWPT, right?
>
> Yes, based on the current design of ARM nesting.
>
> But please special-case it instead of pretending that all reserved
> regions are added to the IOAS, which is wrong in concept based on the
> discussion.

Ack. Yi made a version of the change dropping it completely along with
the iommufd_group_setup_msi() call for a nested S1 HWPT. So I thought
there was a misalignment. I made another version preserving the pathway
for MSI on ARM, and perhaps we should go with this one:
https://github.com/nicolinc/iommufd/commit/c63829a12d35f2d7a390f42821a079f8a294cff8

> > > It also implies that the user cannot rely on IOAS_IOVA_RANGES to
> > > learn reserved regions for arranging addresses in S1.
> > >
> > > Then we also need a new ioctl to report reserved regions per dev_id.
> >
> > So, in a nesting configuration, QEMU would poll a device's S2 MSI region
> > (i.e. IOMMU_RESV_SW_MSI) to prevent conflict?
>
> QEMU needs to know all the reserved regions of the device and skip them
> when arranging the S1 layout.

OK.

> I'm not sure whether the MSI region needs a special MSI type or just a
> general RESV_DIRECT type for 1:1 mapping, though.

I don't quite get this part. Doesn't MSI already have IOMMU_RESV_MSI
and IOMMU_RESV_SW_MSI? Or does it just mean we should report the
iommu_resv_type along with reserved regions in the new ioctl?

Thanks
Nic
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Tuesday, June 27, 2023 1:29 AM
>
> > I'm not sure whether the MSI region needs a special MSI type or just a
> > general RESV_DIRECT type for 1:1 mapping, though.
>
> I don't quite get this part. Doesn't MSI already have IOMMU_RESV_MSI
> and IOMMU_RESV_SW_MSI? Or does it just mean we should report the
> iommu_resv_type along with reserved regions in the new ioctl?

Currently those are iommu-internal types. When defining the new ioctl we
need to think about what is necessary to present to the user.

Probably just a list of reserved regions plus a flag to mark which one
is SW_MSI? Except for SW_MSI, all other reserved region types just need
the user to reserve them w/o knowing more detail.
On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Tuesday, June 27, 2023 1:29 AM
> >
> > > I'm not sure whether the MSI region needs a special MSI type or just a
> > > general RESV_DIRECT type for 1:1 mapping, though.
> >
> > I don't quite get this part. Doesn't MSI already have IOMMU_RESV_MSI
> > and IOMMU_RESV_SW_MSI? Or does it just mean we should report the
> > iommu_resv_type along with reserved regions in the new ioctl?
>
> Currently those are iommu-internal types. When defining the new ioctl we
> need to think about what is necessary to present to the user.
>
> Probably just a list of reserved regions plus a flag to mark which one
> is SW_MSI? Except for SW_MSI, all other reserved region types just need
> the user to reserve them w/o knowing more detail.

I think I prefer the idea that we just import the reserved regions from
a devid and do not expose any of this detail to userspace.

The kernel can make only the SW_MSI a mandatory cut-out when the S2 is
attached.

Jason
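Jason's proposed kernel policy — only SW_MSI is a mandatory cut-out when the S2 of a nest is attached, everything else follows the normal direct-attach protocol — reduces to a small predicate. The enum abbreviates the kernel-internal iommu_resv_type values (the real definitions live in include/linux/iommu.h); the function name is invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Abbreviated stand-ins for the kernel's IOMMU_RESV_* types. */
enum resv_type {
	RESV_DIRECT,
	RESV_DIRECT_RELAXABLE,
	RESV_RESERVED,
	RESV_MSI,
	RESV_SW_MSI,
};

/* Should this reserved region be carved out of the IOAS when the
 * iommu domain is attached? Direct RID attach: yes, for every type
 * (the normal protocol). Nested S2: only the SW_MSI region is a
 * mandatory cut-out; the rest is the VMM's problem in S1. */
static bool s2_must_reserve(enum resv_type t, bool nested)
{
	if (!nested)
		return true;
	return t == RESV_SW_MSI;
}
```

The point of the predicate shape is that nothing about the region list itself needs to be exposed to userspace for the kernel to enforce this.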
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, June 28, 2023 12:01 AM
>
> On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote:
> > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > Sent: Tuesday, June 27, 2023 1:29 AM
> > >
> > > > I'm not sure whether the MSI region needs a special MSI type or just a
> > > > general RESV_DIRECT type for 1:1 mapping, though.
> > >
> > > I don't quite get this part. Doesn't MSI already have IOMMU_RESV_MSI
> > > and IOMMU_RESV_SW_MSI? Or does it just mean we should report the
> > > iommu_resv_type along with reserved regions in the new ioctl?
> >
> > Currently those are iommu-internal types. When defining the new ioctl we
> > need to think about what is necessary to present to the user.
> >
> > Probably just a list of reserved regions plus a flag to mark which one
> > is SW_MSI? Except for SW_MSI, all other reserved region types just need
> > the user to reserve them w/o knowing more detail.
>
> I think I prefer the idea that we just import the reserved regions from
> a devid and do not expose any of this detail to userspace.
>
> The kernel can make only the SW_MSI a mandatory cut-out when the S2 is
> attached.

I'm confused.

The VMM needs to know the reserved regions per dev_id and report them to
the guest.

And we have aligned on the point that reserved regions (except SW_MSI)
should not be automatically added to S2 in the nesting case. Then the
VMM cannot rely on IOAS_IOVA_RANGES to identify the reserved regions.

So there needs to be a new interface for the user to discover reserved
regions per dev_id, within which the SW_MSI region should be marked out
so an identity mapping can be installed properly for it in S1.

Did I misunderstand your point in the previous discussion?
On Wed, Jun 28, 2023 at 02:47:02AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, June 28, 2023 12:01 AM
> >
> > On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote:
> > > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > > Sent: Tuesday, June 27, 2023 1:29 AM
> > > >
> > > > > I'm not sure whether the MSI region needs a special MSI type or just a
> > > > > general RESV_DIRECT type for 1:1 mapping, though.
> > > >
> > > > I don't quite get this part. Doesn't MSI already have IOMMU_RESV_MSI
> > > > and IOMMU_RESV_SW_MSI? Or does it just mean we should report the
> > > > iommu_resv_type along with reserved regions in the new ioctl?
> > >
> > > Currently those are iommu-internal types. When defining the new ioctl we
> > > need to think about what is necessary to present to the user.
> > >
> > > Probably just a list of reserved regions plus a flag to mark which one
> > > is SW_MSI? Except for SW_MSI, all other reserved region types just need
> > > the user to reserve them w/o knowing more detail.
> >
> > I think I prefer the idea that we just import the reserved regions from
> > a devid and do not expose any of this detail to userspace.
> >
> > The kernel can make only the SW_MSI a mandatory cut-out when the S2 is
> > attached.
>
> I'm confused.
>
> The VMM needs to know the reserved regions per dev_id and report them to
> the guest.
>
> And we have aligned on the point that reserved regions (except SW_MSI)
> should not be automatically added to S2 in the nesting case. Then the
> VMM cannot rely on IOAS_IOVA_RANGES to identify the reserved regions.

We also said we need a way to load the reserved regions to create an
identity-compatible version of the HWPT.

So we have a model where the VMM will want to load in regions beyond
what the currently attached device needs.

> So there needs to be a new interface for the user to discover reserved
> regions per dev_id, within which the SW_MSI region should be marked out
> so an identity mapping can be installed properly for it in S1.
>
> Did I misunderstand your point in the previous discussion?

This is another discussion; if the VMM needs this then we probably need
a new API to get it.

Jason
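The "load reserved regions into an IOAS" idea amounts to merging a device's regions into the IOAS's sorted reserved list, coalescing overlaps, so an identity-compatible HWPT can be prepared even for devices that are not yet attached. A sketch under that assumption — the data layout and names are invented, not iommufd's internal representation:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MAX_RESV 16

/* Parallel arrays of inclusive [start, last] ranges, kept sorted
 * and non-overlapping. */
struct ioas_resv {
	uint64_t start[MAX_RESV], last[MAX_RESV];
	size_t n;
};

/* Explicitly load one reserved region into the IOAS: insert it in
 * sorted position and coalesce with any range it overlaps or abuts.
 * (Overflow at the top of the address space is ignored for brevity.) */
static void ioas_load_resv(struct ioas_resv *io, uint64_t start, uint64_t last)
{
	size_t i = 0, j;

	/* skip ranges entirely below the new one */
	while (i < io->n && io->last[i] + 1 < start)
		i++;
	/* swallow every range that overlaps or abuts [start, last] */
	j = i;
	while (j < io->n && io->start[j] <= last + 1) {
		if (io->start[j] < start)
			start = io->start[j];
		if (io->last[j] > last)
			last = io->last[j];
		j++;
	}
	/* close the gap and store the coalesced range at slot i */
	memmove(&io->start[i + 1], &io->start[j], (io->n - j) * sizeof(uint64_t));
	memmove(&io->last[i + 1], &io->last[j], (io->n - j) * sizeof(uint64_t));
	io->n += 1 - (j - i);
	io->start[i] = start;
	io->last[i] = last;
}
```

Loading regions this way subtracts them from the IOAS's usable IOVA up front, which is what makes a later identity (direct RID) attach safe without reshuffling existing mappings.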
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, June 28, 2023 8:36 PM
>
> On Wed, Jun 28, 2023 at 02:47:02AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Wednesday, June 28, 2023 12:01 AM
> > >
> > > On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote:
> > > > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > > > Sent: Tuesday, June 27, 2023 1:29 AM
> > > > >
> > > > > > I'm not sure whether the MSI region needs a special MSI type or just a
> > > > > > general RESV_DIRECT type for 1:1 mapping, though.
> > > > >
> > > > > I don't quite get this part. Doesn't MSI already have IOMMU_RESV_MSI
> > > > > and IOMMU_RESV_SW_MSI? Or does it just mean we should report the
> > > > > iommu_resv_type along with reserved regions in the new ioctl?
> > > >
> > > > Currently those are iommu-internal types. When defining the new ioctl we
> > > > need to think about what is necessary to present to the user.
> > > >
> > > > Probably just a list of reserved regions plus a flag to mark which one
> > > > is SW_MSI? Except for SW_MSI, all other reserved region types just need
> > > > the user to reserve them w/o knowing more detail.
> > >
> > > I think I prefer the idea that we just import the reserved regions from
> > > a devid and do not expose any of this detail to userspace.
> > >
> > > The kernel can make only the SW_MSI a mandatory cut-out when the S2 is
> > > attached.
> >
> > I'm confused.
> >
> > The VMM needs to know the reserved regions per dev_id and report them to
> > the guest.
> >
> > And we have aligned on the point that reserved regions (except SW_MSI)
> > should not be automatically added to S2 in the nesting case. Then the
> > VMM cannot rely on IOAS_IOVA_RANGES to identify the reserved regions.
>
> We also said we need a way to load the reserved regions to create an
> identity-compatible version of the HWPT.
>
> So we have a model where the VMM will want to load in regions beyond
> what the currently attached device needs.

No question on this.

> > So there needs to be a new interface for the user to discover reserved
> > regions per dev_id, within which the SW_MSI region should be marked out
> > so an identity mapping can be installed properly for it in S1.
> >
> > Did I misunderstand your point in the previous discussion?
>
> This is another discussion; if the VMM needs this then we probably need
> a new API to get it.

Then it's clear. 😊
On Mon, Jun 26, 2023 at 06:42:58AM +0000, Tian, Kevin wrote:
> I'm not sure whether the MSI region needs a special MSI type or just a
> general RESV_DIRECT type for 1:1 mapping, though.

It probably always needs a special type :(

Jason
On Wed, Jun 21, 2023 at 06:02:21AM +0000, Tian, Kevin wrote:
> > > My understanding of ARM SMMU is that from host p.o.v. the CD is the
> > > S1 in the nested configuration. 'identity' is one configuration in
> > > the CD then it's in the business of nesting.
> >
> > I think it is the same. A CD doesn't come into the picture until the
> > guest installs a CD pointing STE. Until that time the S2 is being used
> > as identity.
> >
> > It sounds like the same basic flow.
>
> After a CD table is installed in a STE I assume the SMMU still allows
> configuring an individual CD entry as identity? e.g. while vSVA is
> enabled on a device the guest can continue to keep CD#0 as identity when
> the default domain of the device is set as 'passthrough'. In this case
> the IOAS still needs to gain reserved regions even though S2 is not
> directly attached from host p.o.v.

In any nesting configuration the hypervisor cannot directly restrict
what IOVA the guest will use. The VM could make a normal nest and try to
use an unusable IOVA. Identity is not really special.

The VMM should construct the guest memory map so that an identity
iommu_domain can meet the reserved requirements - it needs to do this
anyhow for the initial boot part. It should try to forward the reserved
regions to the guest via ACPI/etc.

Being able to explicitly load reserved regions into an IOAS seems like a
useful way to help construct this.

Jason
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, June 21, 2023 8:05 PM
>
> On Wed, Jun 21, 2023 at 06:02:21AM +0000, Tian, Kevin wrote:
> > > > My understanding of ARM SMMU is that from host p.o.v. the CD is the
> > > > S1 in the nested configuration. 'identity' is one configuration in
> > > > the CD then it's in the business of nesting.
> > >
> > > I think it is the same. A CD doesn't come into the picture until the
> > > guest installs a CD pointing STE. Until that time the S2 is being used
> > > as identity.
> > >
> > > It sounds like the same basic flow.
> >
> > After a CD table is installed in a STE I assume the SMMU still allows
> > configuring an individual CD entry as identity? e.g. while vSVA is
> > enabled on a device the guest can continue to keep CD#0 as identity when
> > the default domain of the device is set as 'passthrough'. In this case
> > the IOAS still needs to gain reserved regions even though S2 is not
> > directly attached from host p.o.v.
>
> In any nesting configuration the hypervisor cannot directly restrict
> what IOVA the guest will use. The VM could make a normal nest and try to
> use an unusable IOVA. Identity is not really special.

Sure. What I talked about is the end result, e.g. after the user
explicitly requests to load reserved regions into an IOAS.

> The VMM should construct the guest memory map so that an identity
> iommu_domain can meet the reserved requirements - it needs to do this
> anyhow for the initial boot part. It should try to forward the reserved
> regions to the guest via ACPI/etc.

Yes.

> Being able to explicitly load reserved regions into an IOAS seems like a
> useful way to help construct this.

And it's correct in concept because the IOAS is 'implicitly' accessed by
the device when the guest domain is identity in this case.
> From: Tian, Kevin <kevin.tian@intel.com>
> Sent: Wednesday, June 21, 2023 2:02 PM
>
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Tuesday, June 20, 2023 8:47 PM
> >
> > On Tue, Jun 20, 2023 at 01:43:42AM +0000, Tian, Kevin wrote:
> > > I wonder whether we have argued past each other.
> > >
> > > This series adds reserved regions to S2. I challenged the necessity as
> > > S2 is not directly accessed by the device.
> > >
> > > Then you replied that doing so still made sense to support identity
> > > S1.
> >
> > I think I said/meant if we attach the "s2" iommu domain as a direct
> > attach for identity - e.g. at boot time, then the IOAS must gain the
> > reserved regions. This is our normal protocol.
> >
> > But when we use the "s2" iommu domain as an actual nested S2 then we
> > don't gain reserved regions.
>
> Then we're aligned.
>
> Yi/Nicolin, please update this series to not automatically add reserved
> regions to S2 in the nesting configuration.

Got it.

> It also implies that the user cannot rely on IOAS_IOVA_RANGES to
> learn reserved regions for arranging addresses in S1.
>
> Then we also need a new ioctl to report reserved regions per dev_id.

Shall we add it now? I suppose yes.

> > > Intel VT-d supports 4 configurations:
> > > - passthrough (i.e. identity mapped)
> > > - S1 only
> > > - S2 only
> > > - nested
> > >
> > > 'S2 only' is used when vIOMMU is configured in passthrough.
> >
> > S2 only is modeled as attaching an S2 format iommu domain to the RID,
> > and when this is done the IOAS should gain the reserved regions
> > because it is no different behavior than attaching any other iommu
> > domain to a RID.
> >
> > When the S2 is replaced with a S1 nest then the IOAS should lose
> > those reserved regions since it is no longer attached to a RID.
>
> yes

Makes sense.

Regards,
Yi Liu

> > > My understanding of ARM SMMU is that from host p.o.v. the CD is the
> > > S1 in the nested configuration. 'identity' is one configuration in
> > > the CD then it's in the business of nesting.
> >
> > I think it is the same. A CD doesn't come into the picture until the
> > guest installs a CD pointing STE. Until that time the S2 is being used
> > as identity.
> >
> > It sounds like the same basic flow.
>
> After a CD table is installed in a STE I assume the SMMU still allows
> configuring an individual CD entry as identity? e.g. while vSVA is
> enabled on a device the guest can continue to keep CD#0 as identity when
> the default domain of the device is set as 'passthrough'. In this case
> the IOAS still needs to gain reserved regions even though S2 is not
> directly attached from host p.o.v.
>
> > > My preference was that ALLOC_HWPT allows vIOMMU to opt whether
> > > reserved regions of dev_id should be added to the IOAS of the parent
> > > S2 hwpt.
> >
> > Having an API to explicitly load reserved regions of a specific device
> > to an IOAS makes some sense to me.
> >
> > Jason