[PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete

Posted by Leon Romanovsky 2 weeks, 6 days ago
From: Leon Romanovsky <leonro@nvidia.com>

dma-buf invalidation is performed asynchronously by hardware, so VFIO must
wait until all affected objects have been fully invalidated.

Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
index d4d0f7d08c53..33bc6a1909dd 100644
--- a/drivers/vfio/pci/vfio_pci_dmabuf.c
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
 			dma_resv_lock(priv->dmabuf->resv, NULL);
 			priv->revoked = revoked;
 			dma_buf_move_notify(priv->dmabuf);
+			dma_resv_wait_timeout(priv->dmabuf->resv,
+					      DMA_RESV_USAGE_KERNEL, false,
+					      MAX_SCHEDULE_TIMEOUT);
 			dma_resv_unlock(priv->dmabuf->resv);
 		}
 		fput(priv->dmabuf->file);
@@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
 		priv->vdev = NULL;
 		priv->revoked = true;
 		dma_buf_move_notify(priv->dmabuf);
+		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
+				      false, MAX_SCHEDULE_TIMEOUT);
 		dma_resv_unlock(priv->dmabuf->resv);
 		vfio_device_put_registration(&vdev->vdev);
 		fput(priv->dmabuf->file);

-- 
2.52.0
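(For context: the helper used above has roughly the following contract. The prototype matches include/linux/dma-resv.h; the comment is a paraphrase of its documentation, not the upstream kernel-doc.)

/*
 * Wait for the fences on @obj that belong to the given usage class; the
 * classes are ordered KERNEL < WRITE < READ < BOOKKEEP, and asking for a
 * class also covers all lower ones.  With intr == false the wait is
 * uninterruptible, and with timeout == MAX_SCHEDULE_TIMEOUT it blocks
 * until every matching fence has signaled.  Returns a positive value on
 * success, 0 on timeout, or a negative error code.
 */
long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
			   bool intr, unsigned long timeout);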
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Christian König 2 weeks, 5 days ago
On 1/20/26 15:07, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> wait until all affected objects have been fully invalidated.
> 
> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>

Reviewed-by: Christian König <christian.koenig@amd.com>

Please also keep in mind that while this waits for all fences for correctness, you also need to keep the mapping valid until dma_buf_unmap_attachment() has been called.

In other words, you can only redirect the DMA addresses previously given out into nirvana (or dummy memory or similar), but you still need to avoid re-using them for something else.

Regards,
Christian.

> ---
>  drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> index d4d0f7d08c53..33bc6a1909dd 100644
> --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
> @@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
>  			dma_resv_lock(priv->dmabuf->resv, NULL);
>  			priv->revoked = revoked;
>  			dma_buf_move_notify(priv->dmabuf);
> +			dma_resv_wait_timeout(priv->dmabuf->resv,
> +					      DMA_RESV_USAGE_KERNEL, false,
> +					      MAX_SCHEDULE_TIMEOUT);
>  			dma_resv_unlock(priv->dmabuf->resv);
>  		}
>  		fput(priv->dmabuf->file);
> @@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
>  		priv->vdev = NULL;
>  		priv->revoked = true;
>  		dma_buf_move_notify(priv->dmabuf);
> +		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
> +				      false, MAX_SCHEDULE_TIMEOUT);
>  		dma_resv_unlock(priv->dmabuf->resv);
>  		vfio_device_put_registration(&vdev->vdev);
>  		fput(priv->dmabuf->file);
> 

Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Jason Gunthorpe 2 weeks, 5 days ago
On Wed, Jan 21, 2026 at 10:20:51AM +0100, Christian König wrote:
> On 1/20/26 15:07, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> > 
> > dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> > wait until all affected objects have been fully invalidated.
> > 
> > Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> 
> Reviewed-by: Christian König <christian.koenig@amd.com>
> 
> Please also keep in mind that the while this wait for all fences for
> correctness you also need to keep the mapping valid until
> dma_buf_unmap_attachment() was called.

Can you elaborate on this more?

I think what we want for dma_buf_attach_revocable() is the strong
guarantee that the importer stops doing all access to the memory once
this sequence is completed and the exporter can rely on it. I don't
think this works any other way.

This is already true for dynamic move capable importers, right?

For the non-revocable importers I can see the invalidate sequence is
more of an advisory thing and you can't know the access is gone until
the map is undone.

> In other words you can only redirect the DMA-addresses previously
> given out into nirvana (or a dummy memory or similar), but you still
> need to avoid re-using them for something else.

Does any driver do this? If you unload/reload a GPU driver, is it
going to re-use the addresses handed out?

Jason
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Christian König 2 weeks, 5 days ago
On 1/21/26 14:31, Jason Gunthorpe wrote:
> On Wed, Jan 21, 2026 at 10:20:51AM +0100, Christian König wrote:
>> On 1/20/26 15:07, Leon Romanovsky wrote:
>>> From: Leon Romanovsky <leonro@nvidia.com>
>>>
>>> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
>>> wait until all affected objects have been fully invalidated.
>>>
>>> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
>>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
>>
>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>
>> Please also keep in mind that the while this wait for all fences for
>> correctness you also need to keep the mapping valid until
>> dma_buf_unmap_attachment() was called.
> 
> Can you elaborate on this more?
> 
> I think what we want for dma_buf_attach_revocable() is the strong
> guarentee that the importer stops doing all access to the memory once
> this sequence is completed and the exporter can rely on it. I don't
> think this works any other way.
> 
> This is already true for dynamic move capable importers, right?

Not quite, no.

> For the non-revocable importers I can see the invalidate sequence is
> more of an advisory thing and you can't know the access is gone until
> the map is undone.
> 
>> In other words you can only redirect the DMA-addresses previously
>> given out into nirvana (or a dummy memory or similar), but you still
>> need to avoid re-using them for something else.
> 
> Does any driver do this? If you unload/reload a GPU driver it is
> going to re-use the addresses handed out?

I never fully read through all the source code, but if I'm not completely mistaken that is enforced for all GPU drivers through the DMA-buf and DRM layer lifetime handling, and I think even in other in-kernel frameworks like V4L, ALSA, etc.

What roughly happens is that each DMA-buf mapping, through a couple of hoops, keeps a reference on the device, so even after a hotplug event the device can only fully go away after all housekeeping structures are destroyed and all buffers are freed.

The background is that a lot of devices still issue reads even after you have invalidated a mapping, but then discard the result.

So when you don't have such a grace period you end up with PCI AER errors, warnings from the IOMMU, random accesses to PCI BARs which just happen to be at the old location of something, etc.

I would rather like to keep those semantics even for forceful shootdowns, since they have proven to be rather reliable.

Regards,
Christian.

> 
> Jason

Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Jason Gunthorpe 2 weeks, 5 days ago
On Wed, Jan 21, 2026 at 04:28:17PM +0100, Christian König wrote:
> On 1/21/26 14:31, Jason Gunthorpe wrote:
> > On Wed, Jan 21, 2026 at 10:20:51AM +0100, Christian König wrote:
> >> On 1/20/26 15:07, Leon Romanovsky wrote:
> >>> From: Leon Romanovsky <leonro@nvidia.com>
> >>>
> >>> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> >>> wait until all affected objects have been fully invalidated.
> >>>
> >>> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> >>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> >>
> >> Reviewed-by: Christian König <christian.koenig@amd.com>
> >>
> >> Please also keep in mind that the while this wait for all fences for
> >> correctness you also need to keep the mapping valid until
> >> dma_buf_unmap_attachment() was called.
> > 
> > Can you elaborate on this more?
> > 
> > I think what we want for dma_buf_attach_revocable() is the strong
> > guarentee that the importer stops doing all access to the memory once
> > this sequence is completed and the exporter can rely on it. I don't
> > think this works any other way.
> > 
> > This is already true for dynamic move capable importers, right?
> 
> Not quite, no.

:(

It is kind of shocking to hear that these APIs work like this, with such a
loose lifetime definition. Leon, can you include some of these details
in the new comments?

> >> In other words you can only redirect the DMA-addresses previously
> >> given out into nirvana (or a dummy memory or similar), but you still
> >> need to avoid re-using them for something else.
> > 
> > Does any driver do this? If you unload/reload a GPU driver it is
> > going to re-use the addresses handed out?
> 
> I never fully read through all the source code, but if I'm not
> completely mistaken that is enforced for all GPU drivers through the
> DMA-buf and DRM layer lifetime handling and I think even in other in
> kernel frameworks like V4L, alsa etc...

> What roughly happens is that each DMA-buf mapping through a couple
> of hoops keeps a reference on the device, so even after a hotplug
> event the device can only fully go away after all housekeeping
> structures are destroyed and buffers freed.

A simple reference on the device means nothing for these kinds of
questions. It does not stop unloading and reloading a driver.

Obviously if the driver is loaded fresh it will reallocate.

To do what you are saying the DRM drivers would have to block during
driver remove until all unmaps happen.

> Background is that a lot of device still make reads even after you
> have invalidated a mapping, but then discard the result.

And they also don't insert fences to conclude that?

> So when you don't have same grace period you end up with PCI AER,
> warnings from IOMMU, random accesses to PCI BARs which just happen
> to be in the old location of something etc...

Yes, definitely. It is very important to have a definitive point in
the API where all accesses stop. While "read but discard" seems
harmless on the surface, there are corner cases where it is not OK.

Am I understanding right that these devices must finish their reads
before doing unmap?

> I would rather like to keep that semantics even for forcefully
> shootdowns since it proved to be rather reliable.

We can investigate making unmap the barrier point if this is the case.

Jason
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Christian König 2 weeks, 4 days ago
On 1/21/26 17:01, Jason Gunthorpe wrote:
> On Wed, Jan 21, 2026 at 04:28:17PM +0100, Christian König wrote:
>> On 1/21/26 14:31, Jason Gunthorpe wrote:
>>> On Wed, Jan 21, 2026 at 10:20:51AM +0100, Christian König wrote:
>>>> On 1/20/26 15:07, Leon Romanovsky wrote:
>>>>> From: Leon Romanovsky <leonro@nvidia.com>
>>>>>
>>>>> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
>>>>> wait until all affected objects have been fully invalidated.
>>>>>
>>>>> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
>>>>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
>>>>
>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>>
>>>> Please also keep in mind that the while this wait for all fences for
>>>> correctness you also need to keep the mapping valid until
>>>> dma_buf_unmap_attachment() was called.
>>>
>>> Can you elaborate on this more?
>>>
>>> I think what we want for dma_buf_attach_revocable() is the strong
>>> guarentee that the importer stops doing all access to the memory once
>>> this sequence is completed and the exporter can rely on it. I don't
>>> think this works any other way.
>>>
>>> This is already true for dynamic move capable importers, right?
>>
>> Not quite, no.
> 
> :(
> 
> It is kind of shocking to hear these APIs work like this with such a
> loose lifetime definition. Leon can you include some of these detail
> in the new comments?

Yeah, when the API was designed we intentionally said that waiting for the fences means waiting for all operations to finish.

But then came reality, where HW just does things like speculative read-ahead... and with that all the nice design goes into the trash-bin.

>>>> In other words you can only redirect the DMA-addresses previously
>>>> given out into nirvana (or a dummy memory or similar), but you still
>>>> need to avoid re-using them for something else.
>>>
>>> Does any driver do this? If you unload/reload a GPU driver it is
>>> going to re-use the addresses handed out?
>>
>> I never fully read through all the source code, but if I'm not
>> completely mistaken that is enforced for all GPU drivers through the
>> DMA-buf and DRM layer lifetime handling and I think even in other in
>> kernel frameworks like V4L, alsa etc...
> 
>> What roughly happens is that each DMA-buf mapping through a couple
>> of hoops keeps a reference on the device, so even after a hotplug
>> event the device can only fully go away after all housekeeping
>> structures are destroyed and buffers freed.
> 
> A simple reference on the device means nothing for these kinds of
> questions. It does not stop unloading and reloading a driver.

Well as far as I know it stops the PCIe address space from being re-used.

So when you do an "echo 1 > remove" and then a re-scan on the upstream bridge, that works, but you get different addresses for your MMIO BARs!

> Obviously if the driver is loaded fresh it will reallocate.
> 
> To do what you are saying the DRM drivers would have to block during
> driver remove until all unmaps happen.

Oh, well, I never looked too deeply into that.

As far as I know it doesn't block, but rather the last drm_dev_put() just cleans things up.

And we have a CI test system which exercises that stuff over and over again because we have a big customer depending on that.

>> Background is that a lot of device still make reads even after you
>> have invalidated a mapping, but then discard the result.
> 
> And they also don't insert fences to conclude that?

Nope, that is just speculative read-ahead from other operations which actually don't have anything to do with our buffer.

>> So when you don't have same grace period you end up with PCI AER,
>> warnings from IOMMU, random accesses to PCI BARs which just happen
>> to be in the old location of something etc...
> 
> Yes, definitely. It is very important to have a definitive point in
> the API where all accesses stop. While "read but discard" seems
> harmless on the surface, there are corner cases where it is not OK.
> 
> Am I understanding right that these devices must finish their reads
> before doing unmap?

Yes, and that is a big one. Otherwise we basically lose any chance of sanely handling this.

>> I would rather like to keep that semantics even for forcefully
>> shootdowns since it proved to be rather reliable.
> 
> We can investigate making unmap the barrier point if this is the case.

I mean, when you absolutely just can't do it otherwise, make sure that a speculative read doesn't result in any form of error message, trigger any actions, or similar. That approach works as well.

And yes we absolutely have to document all those findings and behavior in the DMA-buf API.

Regards,
Christian.
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Jason Gunthorpe 2 weeks, 3 days ago
On Thu, Jan 22, 2026 at 12:32:03PM +0100, Christian König wrote:
> >> What roughly happens is that each DMA-buf mapping through a couple
> >> of hoops keeps a reference on the device, so even after a hotplug
> >> event the device can only fully go away after all housekeeping
> >> structures are destroyed and buffers freed.
> > 
> > A simple reference on the device means nothing for these kinds of
> > questions. It does not stop unloading and reloading a driver.
> 
> Well as far as I know it stops the PCIe address space from being re-used.
> 
> So when you do an "echo 1 > remove" and then an re-scan on the
> upstream bridge that works, but you get different addresses for your
> MMIO BARs!

That's a pretty niche scenario. Most people don't rescan their PCI
bus. If you just do rmmod/insmod then it will be re-used; there is no
rescan to move the MMIO around in that case.

> Oh, well I never looked to deeply into that.
> 
> As far as I know it doesn't block, but rather the last drm_dev_put()
> just cleans things up.
> 
> And we have a CI test system which exercises that stuff over and
> over again because we have a big customer depending on that.

I doubt a CI would detect a UAF like the one we are discussing here.

Connect an RDMA pinned importer. Do rmmod. If rmmod doesn't hang, the
driver has a UAF in some RAS cases. Not great, but it is unlikely to
actually trouble any real user.

Jason
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Jason Gunthorpe 2 weeks, 3 days ago
On Thu, Jan 22, 2026 at 07:44:04PM -0400, Jason Gunthorpe wrote:
> On Thu, Jan 22, 2026 at 12:32:03PM +0100, Christian König wrote:
> > >> What roughly happens is that each DMA-buf mapping through a couple
> > >> of hoops keeps a reference on the device, so even after a hotplug
> > >> event the device can only fully go away after all housekeeping
> > >> structures are destroyed and buffers freed.
> > > 
> > > A simple reference on the device means nothing for these kinds of
> > > questions. It does not stop unloading and reloading a driver.
> > 
> > Well as far as I know it stops the PCIe address space from being re-used.
> > 
> > So when you do an "echo 1 > remove" and then an re-scan on the
> > upstream bridge that works, but you get different addresses for your
> > MMIO BARs!
> 
> That's pretty a niche scenario.. Most people don't rescan their PCI
> bus. If you just do rmmod/insmod then it will be re-used, there is no
> rescan to move the MMIO around on that case.

Ah I just remembered there is another important detail here.

It is illegal to call the DMA API after your driver is unprobed. The
kernel can oops. So if a driver is allowing remove() to complete
before all the dma_buf_unmaps have been called it is buggy and risks
an oops.

https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@kernel.org/T/#m0c7dda0fb5981240879c5ca489176987d688844c

As calling dma_buf_unmap_attachment() -> dma_unmap_sg() after remove() returns
is not allowed.

Jason
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Christian König 2 weeks, 3 days ago
On 1/23/26 15:11, Jason Gunthorpe wrote:
> On Thu, Jan 22, 2026 at 07:44:04PM -0400, Jason Gunthorpe wrote:
>> On Thu, Jan 22, 2026 at 12:32:03PM +0100, Christian König wrote:
>>>>> What roughly happens is that each DMA-buf mapping through a couple
>>>>> of hoops keeps a reference on the device, so even after a hotplug
>>>>> event the device can only fully go away after all housekeeping
>>>>> structures are destroyed and buffers freed.
>>>>
>>>> A simple reference on the device means nothing for these kinds of
>>>> questions. It does not stop unloading and reloading a driver.
>>>
>>> Well as far as I know it stops the PCIe address space from being re-used.
>>>
>>> So when you do an "echo 1 > remove" and then an re-scan on the
>>> upstream bridge that works, but you get different addresses for your
>>> MMIO BARs!
>>
>> That's pretty a niche scenario.. Most people don't rescan their PCI
>> bus. If you just do rmmod/insmod then it will be re-used, there is no
>> rescan to move the MMIO around on that case.
> 
> Ah I just remembered there is another important detail here.
> 
> It is illegal to call the DMA API after your driver is unprobed. The
> kernel can oops. So if a driver is allowing remove() to complete
> before all the dma_buf_unmaps have been called it is buggy and risks
> an oops.
> 
> https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@kernel.org/T/#m0c7dda0fb5981240879c5ca489176987d688844c
> 
> As calling a dma_buf_unmap() -> dma_unma_sg() after remove() returns
> is not allowed..

That is not even in the hands of the driver. The DMA-buf framework itself does a module_get() on the exporter.

So as long as a DMA-buf exists you *can't* rmmod the module which provides the exporting driver (except, of course, for force unloading).

Revoking the DMA mappings won't change anything about that; the importer needs to stop using the DMA-buf and drop all of its references.

Christian.

> 
> Jason

Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Jason Gunthorpe 2 weeks, 3 days ago
On Fri, Jan 23, 2026 at 05:23:34PM +0100, Christian König wrote:
> > It is illegal to call the DMA API after your driver is unprobed. The
> > kernel can oops. So if a driver is allowing remove() to complete
> > before all the dma_buf_unmaps have been called it is buggy and risks
> > an oops.
> > 
> > https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@kernel.org/T/#m0c7dda0fb5981240879c5ca489176987d688844c
> > 
> > As calling a dma_buf_unmap() -> dma_unma_sg() after remove() returns
> > is not allowed..
> 
> That is not even in the hands of the driver. The DMA-buf framework
> itself does a module_get() on the exporter.

module_get() prevents the module from being unloaded. It does not
prevent the user from using /sys/../unbind or various other ways to
remove the driver from the device.

rmmod is a popular way to trigger remove() on a driver but not the
only way, and you can't point to a module_get() to dismiss issues with
driver remove() correctness.

> Revoking the DMA mappings won't change anything on that, the
> importer needs to stop using the DMA-buf and drop all their
> references.

And to be correct, an exporting driver needs to wait in its remove()
function until all the unmaps are done.

Jason
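(A minimal sketch of that idea, with entirely hypothetical names: struct foo_device, active_maps, unmap_wq and foo_revoke_dmabufs() are made up for illustration and do not come from any posted patch.)

#include <linux/atomic.h>
#include <linux/pci.h>
#include <linux/wait.h>

struct foo_device {
	wait_queue_head_t unmap_wq;	/* woken from the unmap_dma_buf callback */
	atomic_t active_maps;		/* outstanding dma_buf_map_attachment()s */
	/* ... */
};

static void foo_remove(struct pci_dev *pdev)
{
	struct foo_device *foo = pci_get_drvdata(pdev);

	/* Revoke: move_notify plus a wait on the reservation fences, as in this patch. */
	foo_revoke_dmabufs(foo);

	/*
	 * Block until every importer has called dma_buf_unmap_attachment(),
	 * because that path ends in the DMA API and must not run after
	 * remove() has returned.  The exporter's unmap callback would do
	 * atomic_dec(&foo->active_maps); wake_up(&foo->unmap_wq);
	 */
	wait_event(foo->unmap_wq, atomic_read(&foo->active_maps) == 0);

	/* ... normal teardown, the DMA API is still usable up to this point ... */
}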
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Leon Romanovsky 2 weeks, 4 days ago
On Wed, Jan 21, 2026 at 12:01:40PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 21, 2026 at 04:28:17PM +0100, Christian König wrote:
> > On 1/21/26 14:31, Jason Gunthorpe wrote:
> > > On Wed, Jan 21, 2026 at 10:20:51AM +0100, Christian König wrote:
> > >> On 1/20/26 15:07, Leon Romanovsky wrote:
> > >>> From: Leon Romanovsky <leonro@nvidia.com>
> > >>>
> > >>> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> > >>> wait until all affected objects have been fully invalidated.
> > >>>
> > >>> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> > >>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > >>
> > >> Reviewed-by: Christian König <christian.koenig@amd.com>
> > >>
> > >> Please also keep in mind that the while this wait for all fences for
> > >> correctness you also need to keep the mapping valid until
> > >> dma_buf_unmap_attachment() was called.
> > > 
> > > Can you elaborate on this more?
> > > 
> > > I think what we want for dma_buf_attach_revocable() is the strong
> > > guarentee that the importer stops doing all access to the memory once
> > > this sequence is completed and the exporter can rely on it. I don't
> > > think this works any other way.
> > > 
> > > This is already true for dynamic move capable importers, right?
> > 
> > Not quite, no.
> 
> :(
> 
> It is kind of shocking to hear these APIs work like this with such a
> loose lifetime definition. Leon can you include some of these detail
> in the new comments?

If we can clarify what needs to be addressed for v5, I will proceed.  
At the moment, it's still unclear what is missing in v4.

Thanks
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Thomas Hellström 2 weeks, 5 days ago
Hi, Christian,

On Wed, 2026-01-21 at 10:20 +0100, Christian König wrote:
> On 1/20/26 15:07, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> > 
> > dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> > wait until all affected objects have been fully invalidated.
> > 
> > Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> 
> Reviewed-by: Christian König <christian.koenig@amd.com>
> 
> Please also keep in mind that the while this wait for all fences for
> correctness you also need to keep the mapping valid until
> dma_buf_unmap_attachment() was called.

I'm wondering, shouldn't we require DMA_RESV_USAGE_BOOKKEEP here, since
*any* unsignaled fence could indicate access through the map?

/Thomas

> 
> In other words you can only redirect the DMA-addresses previously
> given out into nirvana (or a dummy memory or similar), but you still
> need to avoid re-using them for something else.
> 
> Regards,
> Christian.
> 
> > ---
> >  drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> > index d4d0f7d08c53..33bc6a1909dd 100644
> > --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> > +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
> > @@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
> >  			dma_resv_lock(priv->dmabuf->resv, NULL);
> >  			priv->revoked = revoked;
> >  			dma_buf_move_notify(priv->dmabuf);
> > +			dma_resv_wait_timeout(priv->dmabuf->resv,
> > +					      DMA_RESV_USAGE_KERNEL, false,
> > +					      MAX_SCHEDULE_TIMEOUT);
> >  			dma_resv_unlock(priv->dmabuf->resv);
> >  		}
> >  		fput(priv->dmabuf->file);
> > @@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
> >  		priv->vdev = NULL;
> >  		priv->revoked = true;
> >  		dma_buf_move_notify(priv->dmabuf);
> > +		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
> > +				      false, MAX_SCHEDULE_TIMEOUT);
> >  		dma_resv_unlock(priv->dmabuf->resv);
> >  		vfio_device_put_registration(&vdev->vdev);
> >  		fput(priv->dmabuf->file);
> > 
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Christian König 2 weeks, 5 days ago
On 1/21/26 10:36, Thomas Hellström wrote:
> Hi, Christian,
> 
> On Wed, 2026-01-21 at 10:20 +0100, Christian König wrote:
>> On 1/20/26 15:07, Leon Romanovsky wrote:
>>> From: Leon Romanovsky <leonro@nvidia.com>
>>>
>>> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
>>> wait until all affected objects have been fully invalidated.
>>>
>>> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
>>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
>>
>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>
>> Please also keep in mind that the while this wait for all fences for
>> correctness you also need to keep the mapping valid until
>> dma_buf_unmap_attachment() was called.
> 
> I'm wondering shouldn't we require DMA_RESV_USAGE_BOOKKEEP here, as
> *any* unsignaled fence could indicate access through the map?

Yes, exactly that. I totally missed this detail.

Thanks a lot to Matthew and you for pointing this out.

Regards,
Christian.

> 
> /Thomas
> 
>>
>> In other words you can only redirect the DMA-addresses previously
>> given out into nirvana (or a dummy memory or similar), but you still
>> need to avoid re-using them for something else.
>>
>> Regards,
>> Christian.
>>
>>> ---
>>>  drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
>>>  1 file changed, 5 insertions(+)
>>>
>>> diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
>>> index d4d0f7d08c53..33bc6a1909dd 100644
>>> --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
>>> +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
>>> @@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
>>>  			dma_resv_lock(priv->dmabuf->resv, NULL);
>>>  			priv->revoked = revoked;
>>>  			dma_buf_move_notify(priv->dmabuf);
>>> +			dma_resv_wait_timeout(priv->dmabuf->resv,
>>> +					      DMA_RESV_USAGE_KERNEL, false,
>>> +					      MAX_SCHEDULE_TIMEOUT);
>>>  			dma_resv_unlock(priv->dmabuf->resv);
>>>  		}
>>>  		fput(priv->dmabuf->file);
>>> @@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
>>>  		priv->vdev = NULL;
>>>  		priv->revoked = true;
>>>  		dma_buf_move_notify(priv->dmabuf);
>>> +		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
>>> +				      false, MAX_SCHEDULE_TIMEOUT);
>>>  		dma_resv_unlock(priv->dmabuf->resv);
>>>  		vfio_device_put_registration(&vdev->vdev);
>>>  		fput(priv->dmabuf->file);
>>>

Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Matthew Brost 2 weeks, 5 days ago
On Tue, Jan 20, 2026 at 04:07:06PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> wait until all affected objects have been fully invalidated.
> 
> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> index d4d0f7d08c53..33bc6a1909dd 100644
> --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
> @@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
>  			dma_resv_lock(priv->dmabuf->resv, NULL);
>  			priv->revoked = revoked;
>  			dma_buf_move_notify(priv->dmabuf);
> +			dma_resv_wait_timeout(priv->dmabuf->resv,
> +					      DMA_RESV_USAGE_KERNEL, false,
> +					      MAX_SCHEDULE_TIMEOUT);

Should we explicitly call out in the dma_buf_move_notify() /
invalidate_mappings kernel-doc that KERNEL slots are the mechanism
for communicating asynchronous dma_buf_move_notify /
invalidate_mappings events via fences?

Yes, this is probably implied, but it wouldn’t hurt to state this
explicitly as part of the cross-driver contract.

Here is what we have now:

 	 * - Dynamic importers should set fences for any access that they can't
	 *   disable immediately from their &dma_buf_attach_ops.invalidate_mappings
 	 *   callback.
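
One possible shape for such an addition (illustrative wording only, using the usage class settled on later in this thread; not taken from any posted patch):

 	 * - Exporters that need the invalidation to be complete (for example
 	 *   before revoking or reusing the backing MMIO) should wait on the
 	 *   reservation object with DMA_RESV_USAGE_BOOKKEEP after calling
 	 *   dma_buf_move_notify(), since importers may signal the end of
 	 *   in-flight access with fences in any usage class.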

Matt

>  			dma_resv_unlock(priv->dmabuf->resv);
>  		}
>  		fput(priv->dmabuf->file);
> @@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
>  		priv->vdev = NULL;
>  		priv->revoked = true;
>  		dma_buf_move_notify(priv->dmabuf);
> +		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
> +				      false, MAX_SCHEDULE_TIMEOUT);
>  		dma_resv_unlock(priv->dmabuf->resv);
>  		vfio_device_put_registration(&vdev->vdev);
>  		fput(priv->dmabuf->file);
> 
> -- 
> 2.52.0
> 
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Christian König 2 weeks, 5 days ago
On 1/20/26 21:44, Matthew Brost wrote:
> On Tue, Jan 20, 2026 at 04:07:06PM +0200, Leon Romanovsky wrote:
>> From: Leon Romanovsky <leonro@nvidia.com>
>>
>> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
>> wait until all affected objects have been fully invalidated.
>>
>> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
>> ---
>>  drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
>> index d4d0f7d08c53..33bc6a1909dd 100644
>> --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
>> +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
>> @@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
>>  			dma_resv_lock(priv->dmabuf->resv, NULL);
>>  			priv->revoked = revoked;
>>  			dma_buf_move_notify(priv->dmabuf);
>> +			dma_resv_wait_timeout(priv->dmabuf->resv,
>> +					      DMA_RESV_USAGE_KERNEL, false,
>> +					      MAX_SCHEDULE_TIMEOUT);
> 
> Should we explicitly call out in the dma_buf_move_notify() /
> invalidate_mappings kernel-doc that KERNEL slots are the mechanism
> for communicating asynchronous dma_buf_move_notify /
> invalidate_mappings events via fences?

Oh, I missed that! And no that is not correct.

This should be DMA_RESV_USAGE_BOOKKEEP so that we wait for everything.

Regards,
Christian.

> 
> Yes, this is probably implied, but it wouldn’t hurt to state this
> explicitly as part of the cross-driver contract.
> 
> Here is what we have now:
> 
>  	 * - Dynamic importers should set fences for any access that they can't
> 	 *   disable immediately from their &dma_buf_attach_ops.invalidate_mappings
>  	 *   callback.
> 
> Matt
> 
>>  			dma_resv_unlock(priv->dmabuf->resv);
>>  		}
>>  		fput(priv->dmabuf->file);
>> @@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
>>  		priv->vdev = NULL;
>>  		priv->revoked = true;
>>  		dma_buf_move_notify(priv->dmabuf);
>> +		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
>> +				      false, MAX_SCHEDULE_TIMEOUT);
>>  		dma_resv_unlock(priv->dmabuf->resv);
>>  		vfio_device_put_registration(&vdev->vdev);
>>  		fput(priv->dmabuf->file);
>>
>> -- 
>> 2.52.0
>>

Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Leon Romanovsky 2 weeks, 5 days ago
On Wed, Jan 21, 2026 at 11:41:48AM +0100, Christian König wrote:
> On 1/20/26 21:44, Matthew Brost wrote:
> > On Tue, Jan 20, 2026 at 04:07:06PM +0200, Leon Romanovsky wrote:
> >> From: Leon Romanovsky <leonro@nvidia.com>
> >>
> >> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> >> wait until all affected objects have been fully invalidated.
> >>
> >> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> >> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> >> ---
> >>  drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
> >>  1 file changed, 5 insertions(+)
> >>
> >> diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> >> index d4d0f7d08c53..33bc6a1909dd 100644
> >> --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> >> +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
> >> @@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
> >>  			dma_resv_lock(priv->dmabuf->resv, NULL);
> >>  			priv->revoked = revoked;
> >>  			dma_buf_move_notify(priv->dmabuf);
> >> +			dma_resv_wait_timeout(priv->dmabuf->resv,
> >> +					      DMA_RESV_USAGE_KERNEL, false,
> >> +					      MAX_SCHEDULE_TIMEOUT);
> > 
> > Should we explicitly call out in the dma_buf_move_notify() /
> > invalidate_mappings kernel-doc that KERNEL slots are the mechanism
> > for communicating asynchronous dma_buf_move_notify /
> > invalidate_mappings events via fences?
> 
> Oh, I missed that! And no that is not correct.
> 
> This should be DMA_RESV_USAGE_BOOKKEEP so that we wait for everything.

Will change.
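
(Presumably the v4 hunk would then differ only in the usage argument; a sketch:)

			dma_resv_wait_timeout(priv->dmabuf->resv,
					      DMA_RESV_USAGE_BOOKKEEP, false,
					      MAX_SCHEDULE_TIMEOUT);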

> 
> Regards,
> Christian.
> 
> > 
> > Yes, this is probably implied, but it wouldn’t hurt to state this
> > explicitly as part of the cross-driver contract.
> > 
> > Here is what we have now:
> > 
> >  	 * - Dynamic importers should set fences for any access that they can't
> > 	 *   disable immediately from their &dma_buf_attach_ops.invalidate_mappings
> >  	 *   callback.
> > 
> > Matt
> > 
> >>  			dma_resv_unlock(priv->dmabuf->resv);
> >>  		}
> >>  		fput(priv->dmabuf->file);
> >> @@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
> >>  		priv->vdev = NULL;
> >>  		priv->revoked = true;
> >>  		dma_buf_move_notify(priv->dmabuf);
> >> +		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
> >> +				      false, MAX_SCHEDULE_TIMEOUT);
> >>  		dma_resv_unlock(priv->dmabuf->resv);
> >>  		vfio_device_put_registration(&vdev->vdev);
> >>  		fput(priv->dmabuf->file);
> >>
> >> -- 
> >> 2.52.0
> >>
> 
> 
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Matthew Brost 2 weeks, 5 days ago
On Wed, Jan 21, 2026 at 12:44:51PM +0200, Leon Romanovsky wrote:
> On Wed, Jan 21, 2026 at 11:41:48AM +0100, Christian König wrote:
> > On 1/20/26 21:44, Matthew Brost wrote:
> > > On Tue, Jan 20, 2026 at 04:07:06PM +0200, Leon Romanovsky wrote:
> > >> From: Leon Romanovsky <leonro@nvidia.com>
> > >>
> > >> dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> > >> wait until all affected objects have been fully invalidated.
> > >>
> > >> Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> > >> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > >> ---
> > >>  drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
> > >>  1 file changed, 5 insertions(+)
> > >>
> > >> diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> > >> index d4d0f7d08c53..33bc6a1909dd 100644
> > >> --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> > >> +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
> > >> @@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
> > >>  			dma_resv_lock(priv->dmabuf->resv, NULL);
> > >>  			priv->revoked = revoked;
> > >>  			dma_buf_move_notify(priv->dmabuf);
> > >> +			dma_resv_wait_timeout(priv->dmabuf->resv,
> > >> +					      DMA_RESV_USAGE_KERNEL, false,
> > >> +					      MAX_SCHEDULE_TIMEOUT);
> > > 
> > > Should we explicitly call out in the dma_buf_move_notify() /
> > > invalidate_mappings kernel-doc that KERNEL slots are the mechanism
> > > for communicating asynchronous dma_buf_move_notify /
> > > invalidate_mappings events via fences?
> > 
> > Oh, I missed that! And no that is not correct.
> > 

+1 on DMA_RESV_USAGE_BOOKKEEP; I reasoned we have to wait for all fences
after I typed the original response. For example, GPU drivers' preempt
fences are in BOOKKEEP, which you'd certainly have to wait on for move
notify to be considered complete. Likewise, a user-issued unbind or TLB
invalidation fence would typically be in BOOKKEEP as well, which again
would need to be waited on.

Matt
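
(For reference: the usage classes are ordered, and a wait for one class also covers all lower classes, which is why BOOKKEEP picks up preempt and TLB-invalidation fences along with everything KERNEL would. The enum below paraphrases include/linux/dma-resv.h; the comments are a summary, not the upstream kernel-doc.)

enum dma_resv_usage {
	DMA_RESV_USAGE_KERNEL,		/* kernel-internal work, e.g. memory moves */
	DMA_RESV_USAGE_WRITE,		/* implicit-sync writes */
	DMA_RESV_USAGE_READ,		/* implicit-sync reads */
	DMA_RESV_USAGE_BOOKKEEP,	/* everything else, e.g. preempt fences */
};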

> > This should be DMA_RESV_USAGE_BOOKKEEP so that we wait for everything.
> 
> Will change.
> 
> > 
> > Regards,
> > Christian.
> > 
> > > 
> > > Yes, this is probably implied, but it wouldn’t hurt to state this
> > > explicitly as part of the cross-driver contract.
> > > 
> > > Here is what we have now:
> > > 
> > >  	 * - Dynamic importers should set fences for any access that they can't
> > > 	 *   disable immediately from their &dma_buf_attach_ops.invalidate_mappings
> > >  	 *   callback.
> > > 
> > > Matt
> > > 
> > >>  			dma_resv_unlock(priv->dmabuf->resv);
> > >>  		}
> > >>  		fput(priv->dmabuf->file);
> > >> @@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
> > >>  		priv->vdev = NULL;
> > >>  		priv->revoked = true;
> > >>  		dma_buf_move_notify(priv->dmabuf);
> > >> +		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
> > >> +				      false, MAX_SCHEDULE_TIMEOUT);
> > >>  		dma_resv_unlock(priv->dmabuf->resv);
> > >>  		vfio_device_put_registration(&vdev->vdev);
> > >>  		fput(priv->dmabuf->file);
> > >>
> > >> -- 
> > >> 2.52.0
> > >>
> > 
> > 
Re: [PATCH v3 6/7] vfio: Wait for dma-buf invalidation to complete
Posted by Leon Romanovsky 2 weeks, 5 days ago
On Tue, Jan 20, 2026 at 12:44:50PM -0800, Matthew Brost wrote:
> On Tue, Jan 20, 2026 at 04:07:06PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> > 
> > dma-buf invalidation is performed asynchronously by hardware, so VFIO must
> > wait until all affected objects have been fully invalidated.
> > 
> > Fixes: 5d74781ebc86 ("vfio/pci: Add dma-buf export support for MMIO regions")
> > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > ---
> >  drivers/vfio/pci/vfio_pci_dmabuf.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> > index d4d0f7d08c53..33bc6a1909dd 100644
> > --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> > +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
> > @@ -321,6 +321,9 @@ void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
> >  			dma_resv_lock(priv->dmabuf->resv, NULL);
> >  			priv->revoked = revoked;
> >  			dma_buf_move_notify(priv->dmabuf);
> > +			dma_resv_wait_timeout(priv->dmabuf->resv,
> > +					      DMA_RESV_USAGE_KERNEL, false,
> > +					      MAX_SCHEDULE_TIMEOUT);
> 
> Should we explicitly call out in the dma_buf_move_notify() /
> invalidate_mappings kernel-doc that KERNEL slots are the mechanism
> for communicating asynchronous dma_buf_move_notify /
> invalidate_mappings events via fences?
> 
> Yes, this is probably implied, but it wouldn’t hurt to state this
> explicitly as part of the cross-driver contract.
> 
> Here is what we have now:
> 
>  	 * - Dynamic importers should set fences for any access that they can't
> 	 *   disable immediately from their &dma_buf_attach_ops.invalidate_mappings
>  	 *   callback.

I believe I documented this in patch 4:
https://lore.kernel.org/all/20260120-dmabuf-revoke-v3-4-b7e0b07b8214@nvidia.com/
Is there anything else that should be added?

  1275 /**
  1276  * dma_buf_move_notify - notify attachments that DMA-buf is moving
  1277  *
  1278  * @dmabuf:     [in]    buffer which is moving
  1279  *
  1280  * Informs all attachments that they need to destroy and recreate all their
  1281  * mappings. If the attachment is dynamic then the dynamic importer is expected
  1282  * to invalidate any caches it has of the mapping result and perform a new
  1283  * mapping request before allowing HW to do any further DMA.
  1284  *
  1285  * If the attachment is pinned then this informs the pinned importer that
  1286  * the underlying mapping is no longer available. Pinned importers may take
  1287  * this is as a permanent revocation so exporters should not trigger it
  1288  * lightly.
  1289  *
  1290  * For legacy pinned importers that cannot support invalidation this is a NOP.
  1291  * Drivers can call dma_buf_attach_revocable() to determine if the importer
  1292  * supports this.
  1293  *
  1294  * NOTE: The invalidation triggers asynchronous HW operation and the callers
  1295  * need to wait for this operation to complete by calling
  1296  * to dma_resv_wait_timeout().
  1297  */

Thanks


> 
> Matt
> 
> >  			dma_resv_unlock(priv->dmabuf->resv);
> >  		}
> >  		fput(priv->dmabuf->file);
> > @@ -342,6 +345,8 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
> >  		priv->vdev = NULL;
> >  		priv->revoked = true;
> >  		dma_buf_move_notify(priv->dmabuf);
> > +		dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_KERNEL,
> > +				      false, MAX_SCHEDULE_TIMEOUT);
> >  		dma_resv_unlock(priv->dmabuf->resv);
> >  		vfio_device_put_registration(&vdev->vdev);
> >  		fput(priv->dmabuf->file);
> > 
> > -- 
> > 2.52.0
> >