Hi all,

PCIe permits a device to ignore ATS invalidation TLPs while processing a reset. This creates a problem visible to the OS, where an ATS invalidation command will time out: e.g. an SVA domain has no coordination with a reset event and can racily issue ATS invalidations to a resetting device.

The OS should do something to mitigate this, as we do not want production systems reporting critical ATS failures, especially in a hypervisor environment. Broadly, the OS could arrange to ignore the timeouts, block page table mutations to prevent invalidations, or disable and block ATS.

The PCIe spec, in the sec 10.3.1 IMPLEMENTATION NOTE, recommends disabling and blocking ATS before initiating a Function Level Reset. It also mentions that other reset methods could have the same vulnerability.

Provide a callback from the PCI subsystem that will enclose the reset and have the iommu core temporarily change all the attached domains to BLOCKED. After attaching a BLOCKED domain, IOMMU drivers should fence any incoming ATS queries, synchronously stop issuing new ATS invalidations, and wait for all ATS invalidations to complete. This avoids any ATS invalidation timeouts.

When a device is resetting, any new domain attachment should be deferred until the reset is finished. However, if a domain attachment/replacement happens during an ongoing reset, ATS might be re-enabled between the two function calls. Introduce a new pending_reset flag in iommu_group to defer any attachment during a reset, allowing the iommu core to cache the target domains at the SW level while bypassing the driver. iommu_dev_reset_done() will then re-attach these soft-attached domains via __iommu_attach_device/set_group_pasid().

Notes:
- This only works for IOMMU drivers that implement ops->blocked_domain correctly with pci_disable_ats().
- This only works for IOMMU drivers that will not issue ATS invalidation requests to the device after it's docked at ops->blocked_domain.
Drivers should be fixed to align with the aforementioned notes.

This is on Github:
https://github.com/nicolinc/iommufd/commits/iommu_dev_reset-rfcv2

Changelog
v2
 * [iommu] Update kdocs, inline comments, and commit logs
 * [iommu] Replace long-holding group->mutex with a pending_reset flag
 * [pci] Abort reset routines if iommu_dev_reset_prepare() fails
 * [pci] Apply the same vulnerability fix to other reset functions
v1
 https://lore.kernel.org/all/cover.1749494161.git.nicolinc@nvidia.com/

Thanks
Nicolin

Nicolin Chen (4):
  iommu: Lock group->mutex in iommu_deferred_attach
  iommu: Pass in gdev to __iommu_device_set_domain
  iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
  pci: Suspend iommu function prior to resetting a device

 include/linux/iommu.h  |  12 +++
 drivers/iommu/iommu.c  | 180 ++++++++++++++++++++++++++++++++++++++---
 drivers/pci/pci-acpi.c |  21 ++++-
 drivers/pci/pci.c      |  84 +++++++++++++++++--
 drivers/pci/quirks.c   |  27 ++++++-
 5 files changed, 307 insertions(+), 17 deletions(-)

--
2.43.0
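For readers who want the shape of the API before diving into the patches, here is a minimal sketch of how a reset path would bracket a reset with the two new calls. This is illustrative only: the exact signatures live in patch 3 of the series; the sketch assumes both helpers take a struct device * and that the prepare call can fail.

/*
 * Illustrative sketch, not the actual patch: bracket an FLR with the
 * two new calls so the iommu core can move the device to the blocked
 * domain (and disable/fence ATS) before the reset, then re-attach the
 * cached domains afterwards. Signatures are assumed.
 */
static int reset_with_iommu_fence(struct pci_dev *pdev)
{
	int ret;

	/* Moves the attached domains to BLOCKED; ATS is disabled/fenced */
	ret = iommu_dev_reset_prepare(&pdev->dev);
	if (ret)
		return ret;	/* v2 aborts the reset routine on failure */

	ret = pcie_flr(pdev);	/* the actual Function Level Reset */

	/* Re-attaches the domains cached while the reset was pending */
	iommu_dev_reset_done(&pdev->dev);
	return ret;
}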
On 6/28/2025 3:42 PM, Nicolin Chen wrote:
> Hi all,
>
> PCIe permits a device to ignore ATS invalidation TLPs while processing a reset. This creates a problem visible to the OS, where an ATS invalidation command will time out: e.g. an SVA domain has no coordination with a reset event and can racily issue ATS invalidations to a resetting device.
>
> The OS should do something to mitigate this, as we do not want production systems reporting critical ATS failures, especially in a hypervisor environment. Broadly, the OS could arrange to ignore the timeouts, block page table mutations to prevent invalidations, or disable and block ATS.
>
> The PCIe spec, in the sec 10.3.1 IMPLEMENTATION NOTE, recommends disabling and blocking ATS before initiating a Function Level Reset. It also mentions that other reset methods could have the same vulnerability.
>
> Provide a callback from the PCI subsystem that will enclose the reset and have the iommu core temporarily change all the attached domains to BLOCKED. After attaching a BLOCKED domain, IOMMU drivers should fence any incoming ATS queries, synchronously stop issuing new ATS invalidations, and wait for all ATS invalidations to complete. This avoids any ATS invalidation timeouts.

This approach seems effective for reset operations initiated through software interfaces, but how would we handle resets triggered by hardware mechanisms? For example, resets caused by the PCIe DPC mechanism, device firmware, or manual hot-plug operations?

Thanks,
Ethan

> When a device is resetting, any new domain attachment should be deferred until the reset is finished. However, if a domain attachment/replacement happens during an ongoing reset, ATS might be re-enabled between the two function calls. Introduce a new pending_reset flag in iommu_group to defer any attachment during a reset, allowing the iommu core to cache the target domains at the SW level while bypassing the driver. iommu_dev_reset_done() will then re-attach these soft-attached domains via __iommu_attach_device/set_group_pasid().
>
> Notes:
> - This only works for IOMMU drivers that implement ops->blocked_domain correctly with pci_disable_ats().
> - This only works for IOMMU drivers that will not issue ATS invalidation requests to the device after it's docked at ops->blocked_domain.
> Drivers should be fixed to align with the aforementioned notes.
>
> This is on Github:
> https://github.com/nicolinc/iommufd/commits/iommu_dev_reset-rfcv2
>
> Changelog
> v2
>  * [iommu] Update kdocs, inline comments, and commit logs
>  * [iommu] Replace long-holding group->mutex with a pending_reset flag
>  * [pci] Abort reset routines if iommu_dev_reset_prepare() fails
>  * [pci] Apply the same vulnerability fix to other reset functions
> v1
>  https://lore.kernel.org/all/cover.1749494161.git.nicolinc@nvidia.com/
>
> Thanks
> Nicolin
>
> Nicolin Chen (4):
>   iommu: Lock group->mutex in iommu_deferred_attach
>   iommu: Pass in gdev to __iommu_device_set_domain
>   iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
>   pci: Suspend iommu function prior to resetting a device
>
>  include/linux/iommu.h  |  12 +++
>  drivers/iommu/iommu.c  | 180 ++++++++++++++++++++++++++++++++++++++---
>  drivers/pci/pci-acpi.c |  21 ++++-
>  drivers/pci/pci.c      |  84 +++++++++++++++++--
>  drivers/pci/quirks.c   |  27 ++++++-
>  5 files changed, 307 insertions(+), 17 deletions(-)
On Thu, Jul 24, 2025 at 02:50:53PM +0800, Ethan Zhao wrote:
> On 6/28/2025 3:42 PM, Nicolin Chen wrote:
> > PCIe permits a device to ignore ATS invalidation TLPs while processing a reset. This creates a problem visible to the OS, where an ATS invalidation command will time out: e.g. an SVA domain has no coordination with a reset event and can racily issue ATS invalidations to a resetting device.
> >
> > The OS should do something to mitigate this, as we do not want production systems reporting critical ATS failures, especially in a hypervisor environment. Broadly, the OS could arrange to ignore the timeouts, block page table mutations to prevent invalidations, or disable and block ATS.
> >
> > The PCIe spec, in the sec 10.3.1 IMPLEMENTATION NOTE, recommends disabling and blocking ATS before initiating a Function Level Reset. It also mentions that other reset methods could have the same vulnerability.
> >
> > Provide a callback from the PCI subsystem that will enclose the reset and have the iommu core temporarily change all the attached domains to BLOCKED. After attaching a BLOCKED domain, IOMMU drivers should fence any incoming ATS queries, synchronously stop issuing new ATS invalidations, and wait for all ATS invalidations to complete. This avoids any ATS invalidation timeouts.
>
> This approach seems effective for reset operations initiated through software interfaces, but how would we handle resets triggered by hardware mechanisms? For example, resets caused by the PCIe DPC mechanism, device firmware, or manual hot-plug operations?

That's a good point. But I am not sure what SW can do about those.

IIUIC, DPC resets the PCI device at the HW level, and SW only gets a notification after the HW reset finishes. So, during this HW reset, the iommu might issue ATC invalidations (resulting in invalidation timeout noise) since, at the SW level, the device is still actively attached to an IOMMU instance. Right?

Nicolin
On 7/26/2025 12:41 AM, Nicolin Chen wrote:
> On Thu, Jul 24, 2025 at 02:50:53PM +0800, Ethan Zhao wrote:
>> On 6/28/2025 3:42 PM, Nicolin Chen wrote:
>>> PCIe permits a device to ignore ATS invalidation TLPs while processing a reset. This creates a problem visible to the OS, where an ATS invalidation command will time out: e.g. an SVA domain has no coordination with a reset event and can racily issue ATS invalidations to a resetting device.
>>>
>>> The OS should do something to mitigate this, as we do not want production systems reporting critical ATS failures, especially in a hypervisor environment. Broadly, the OS could arrange to ignore the timeouts, block page table mutations to prevent invalidations, or disable and block ATS.
>>>
>>> The PCIe spec, in the sec 10.3.1 IMPLEMENTATION NOTE, recommends disabling and blocking ATS before initiating a Function Level Reset. It also mentions that other reset methods could have the same vulnerability.
>>>
>>> Provide a callback from the PCI subsystem that will enclose the reset and have the iommu core temporarily change all the attached domains to BLOCKED. After attaching a BLOCKED domain, IOMMU drivers should fence any incoming ATS queries, synchronously stop issuing new ATS invalidations, and wait for all ATS invalidations to complete. This avoids any ATS invalidation timeouts.
>>
>> This approach seems effective for reset operations initiated through software interfaces, but how would we handle resets triggered by hardware mechanisms? For example, resets caused by the PCIe DPC mechanism, device firmware, or manual hot-plug operations?
>
> That's a good point. But I am not sure what SW can do about those.
>
> IIUIC, DPC resets the PCI device at the HW level, and SW only gets a notification after the HW reset finishes. So, during this HW reset, the iommu might issue ATC invalidations (resulting in invalidation timeout noise) since, at the SW level, the device is still actively attached to an IOMMU instance. Right?

Yup, the situation is this: when the system receives notification of a DPC event, the reset action triggered by the DPC has already occurred. At the very least, the software has an opportunity to be notified that a reset happened, though this notification inevitably lags behind the actual reset, creating a time window between the reset action and its notification. For DPC specifically, there is no notification mechanism before the reset takes place. Surprise hot-plug events likely operate under a similar constraint (for standard hot-plug hardware we do have a good opportunity to know that a hot-plug action is about to happen once the attention button is pressed, so adding code there is okay for now).

This becomes particularly thorny if an Address Translation Cache (ATC) invalidation request occurs within this time window. Asynchronously cancelling such requests later would likely be problematic. Is this an accurate assessment?

At least, we can make some attempt in the DPC and hot-plug drivers, and then push a hardware specification update to provide pre-reset notification for DPC & hot-plug. Does that make sense?

Thanks,
Ethan

>
> Nicolin
On Sun, Jul 27, 2025 at 08:48:26PM +0800, Ethan Zhao wrote:
> At least, we can make some attempt in the DPC and hot-plug drivers, and then push a hardware specification update to provide pre-reset notification for DPC & hot-plug. Does that make sense?

I think DPC is a different case..

If we get a DPC we should also push the iommu into blocking, disable ATS and abandon any outstanding ATC invalidations as part of recovering from the DPC. Once everything is cleaned up we can set the iommu back up again and allow the driver to recover the device.

I think the current series is a good step along that path, but we'd also need to improve the drivers to handle abandoning/aborting the ATC invalidations.

IMHO DPC and SW initiated resets are separate projects.

Jason
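The recovery ordering described above could be sketched as follows; every iommu-side helper here is a hypothetical placeholder for work that does not exist yet, not a real kernel API:

/*
 * Hypothetical DPC recovery flow per the discussion above. All the
 * iommu_dev_* helpers are placeholders for proposed improvements.
 */
static void dpc_recover_with_iommu(struct pci_dev *pdev)
{
	/* By the time DPC notifies SW, the HW reset already happened. */

	/* Fence the iommu side: blocked domain, ATS disabled. */
	iommu_dev_block_and_disable_ats(&pdev->dev);	/* placeholder */

	/* Abandon outstanding ATC invalidations that can never complete. */
	iommu_dev_abort_atc_invalidations(&pdev->dev);	/* placeholder */

	/* Recover the link, then set the iommu back up. */
	iommu_dev_reattach_after_reset(&pdev->dev);	/* placeholder */

	/* Finally, let the device driver recover the device. */
}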
On 7/28/2025 12:20 AM, Jason Gunthorpe wrote:
> On Sun, Jul 27, 2025 at 08:48:26PM +0800, Ethan Zhao wrote:
>
>> At least, we can make some attempt in the DPC and hot-plug drivers, and then push a hardware specification update to provide pre-reset notification for DPC & hot-plug. Does that make sense?
>
> I think DPC is a different case..

A more complex and practical case.

> If we get a DPC we should also push the iommu into blocking, disable ATS and abandon any outstanding ATC invalidations as part of recovering from the DPC. Once everything is cleaned up we can set the

Yup, even with pure software resets, there might be ATC invalidations pending (in a software queue or a HW queue).

> iommu back up again and allow the driver to recover the device.
>
> I think the current series is a good step along that path, but we'd also need to improve the drivers to handle abandoning/aborting the ATC invalidations.

Aborting ATC invalidations also works as a precondition for the DPC and hot-plug cases; agreed, such an improvement seems necessary.

> IMHO DPC and SW initiated resets are separate projects.

Of course, Rome wasn't built in a day; I endorse the philosophy of restricting project scope. The discussion is purely focused on technical methodology.

Thanks,
Ethan

> Jason
On Tue, Jul 29, 2025 at 02:16:43PM +0800, Ethan Zhao wrote:
> On 7/28/2025 12:20 AM, Jason Gunthorpe wrote:
> > On Sun, Jul 27, 2025 at 08:48:26PM +0800, Ethan Zhao wrote:
> >
> > > At least, we can make some attempt in the DPC and hot-plug drivers, and then push a hardware specification update to provide pre-reset notification for DPC & hot-plug. Does that make sense?
> >
> > I think DPC is a different case..
> A more complex and practical case.

I'm not sure about that; we do FLRs all the time as a normal part of VFIO and VMM operations. DPC is pretty rare, IMHO.

> > If we get a DPC we should also push the iommu into blocking, disable ATS and abandon any outstanding ATC invalidations as part of recovering from the DPC. Once everything is cleaned up we can set the
> Yup, even with pure software resets, there might be ATC invalidations pending (in a software queue or a HW queue).

The design of this patch series will require the iommu driver to wait for the in-flight ATC invalidations during the blocking domain attach. So for the SW initiated resets there should not be pending ATC invalidations when the FLR is triggered.

We have been talking about DPC internally, and I think it will need a related, but different, flow since DPC can unavoidably trigger ATC invalidation timeouts/failures and we must sensibly handle them in the driver.

Jason
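To make "wait during the blocking domain attach" concrete on the driver side, here is a minimal sketch; the my_* helpers are illustrative names only, while pci_disable_ats(), dev_is_pci(), and to_pci_dev() are real kernel symbols:

/*
 * Illustrative blocked-domain attach for an IOMMU driver, following
 * the notes in the cover letter. The my_* helpers are made up.
 */
static int my_blocked_domain_attach(struct iommu_domain *domain,
				    struct device *dev)
{
	/* 1. Point the device at the blocked domain: no new translations. */
	my_install_blocking_entry(dev);

	/* 2. Stop queuing new ATS invalidations for this device ... */
	my_stop_new_atc_invalidations(dev);

	/* 3. ... and synchronously drain the in-flight ones. */
	my_wait_for_atc_invalidations(dev);

	/* 4. Disable ATS so the device stops sending translated requests. */
	if (dev_is_pci(dev))
		pci_disable_ats(to_pci_dev(dev));

	return 0;
}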
On 7/29/2025 8:59 PM, Jason Gunthorpe wrote:
> On Tue, Jul 29, 2025 at 02:16:43PM +0800, Ethan Zhao wrote:
>> On 7/28/2025 12:20 AM, Jason Gunthorpe wrote:
>>> On Sun, Jul 27, 2025 at 08:48:26PM +0800, Ethan Zhao wrote:
>>>
>>>> At least, we can make some attempt in the DPC and hot-plug drivers, and then push a hardware specification update to provide pre-reset notification for DPC & hot-plug. Does that make sense?
>>>
>>> I think DPC is a different case..
>> A more complex and practical case.
>
> I'm not sure about that; we do FLRs all the time as a normal part of VFIO and VMM operations. DPC is pretty rare, IMHO.

A DPC reset could be triggered by simply accessing its control bit, which is boring, while a real data-corruption hardware issue is rare.

>>> If we get a DPC we should also push the iommu into blocking, disable ATS and abandon any outstanding ATC invalidations as part of recovering from the DPC. Once everything is cleaned up we can set the
>> Yup, even with pure software resets, there might be ATC invalidations pending (in a software queue or a HW queue).
>
> The design of this patch series will require the iommu driver to wait for the in-flight ATC invalidations during the blocking domain

I see there is pci_wait_for_pending_transaction() before the blocking domain attachment.

> attach. So for the SW initiated resets there should not be pending ATC invalidations when the FLR is triggered.
>
> We have been talking about DPC internally, and I think it will need a related, but different, flow since DPC can unavoidably trigger ATC invalidation timeouts/failures and we must sensibly handle them in the

There is a race window for software to handle. And for DPC, which prioritizes containing data corruption, it seems irrational to notify software first and only then do the reset. Might an alternative be async mode support in the iommu ATC invalidation path?

Thanks,
Ethan

> driver.
>
> Jason
On Thu, Jul 31, 2025 at 09:10:59AM +0800, Ethan Zhao wrote:
> > invalidations when the FLR is triggered.
> >
> > We have been talking about DPC internally, and I think it will need a related, but different, flow since DPC can unavoidably trigger ATC invalidation timeouts/failures and we must sensibly handle them in the
> There is a race window for software to handle. And for DPC, which prioritizes containing data corruption, it seems irrational to notify software first and only then do the reset. Might an alternative be async mode support in the iommu ATC invalidation path?

DPC would still act in HW to prevent corruption; SW would learn about it either through a DPC async notify or through an ATC timeout, and then SW can reprogram the IOMMU to disable ATS.

We can't make the invalidation path async. The invalidation must succeed, or the iommu itself must fully fence future access to the now-invalidated memory - most likely by disabling ATS, blocking acceptance of translated TLPs, and flushing out all previously accepted translated TLPs. Once invalidation finishes there must not be any IOMMU access to the memory that was invalidated, and this cannot fail.

Jason
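That "cannot fail" rule could look roughly like this in a driver's invalidation path; the my_* helpers are hypothetical, and only pci_disable_ats() and to_pci_dev() are real kernel symbols:

/*
 * Hypothetical sketch: if an ATC invalidation cannot be confirmed,
 * fence the device instead of returning an error up the unmap path.
 */
static void my_atc_invalidate_or_fence(struct device *dev,
				       unsigned long iova, size_t size)
{
	/* Normal path: the invalidation completes in time. */
	if (!my_send_atc_inv_and_wait(dev, iova, size))	/* 0 == success */
		return;

	/*
	 * Timeout/failure: fully fence so the device can never use the
	 * stale translation. This path is not allowed to fail.
	 */
	pci_disable_ats(to_pci_dev(dev));  /* no new translated requests */
	my_block_translated_tlps(dev);     /* reject translated TLPs at the iommu */
	my_flush_accepted_tlps(dev);       /* flush previously accepted TLPs */
}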