From: Li Zhe <lizhe.67@bytedance.com>
When vfio_unpin_pages_remote() is called with a range of addresses that
includes large folios, the function currently performs an individual
put_pfn() operation for each page. This can lead to significant
performance overhead, especially when dealing with large ranges of pages.

It would be very rare for reserved and non-reserved PFNs to be mixed
within the same range, so this patch uses the has_rsvd variable
introduced in the previous patch to determine whether the puts can be
batched. Moreover, compared to put_pfn(),
unpin_user_page_range_dirty_lock() handles large-folio scenarios more
efficiently.
The performance test results for a 16G VFIO IOMMU DMA unmap are as
follows.
Base (v6.16-rc4):
./vfio-pci-mem-dma-map 0000:03:00.0 16
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO UNMAP DMA in 0.135 s (118.6 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO UNMAP DMA in 0.312 s (51.3 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO UNMAP DMA in 0.136 s (117.3 GB/s)
With this patchset:
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO UNMAP DMA in 0.045 s (357.6 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO UNMAP DMA in 0.288 s (55.6 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO UNMAP DMA in 0.045 s (352.9 GB/s)
For large folios, this yields an improvement of over 66% in the VFIO
UNMAP DMA time (0.135 s to 0.045 s for MADV_HUGEPAGE, 0.136 s to
0.045 s for HUGETLBFS). For small folios, the test results show no
significant change.
Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
---
drivers/vfio/vfio_iommu_type1.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index a02bc340c112..7cacfb2cefe3 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -802,17 +802,29 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
 	return pinned;
 }
 
+static inline void put_valid_unreserved_pfns(unsigned long start_pfn,
+					     unsigned long npage, int prot)
+{
+	unpin_user_page_range_dirty_lock(pfn_to_page(start_pfn), npage,
+					 prot & IOMMU_WRITE);
+}
+
 static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
 				    unsigned long pfn, unsigned long npage,
 				    bool do_accounting)
 {
 	long unlocked = 0, locked = vpfn_pages(dma, iova, npage);
-	long i;
 
-	for (i = 0; i < npage; i++)
-		if (put_pfn(pfn++, dma->prot))
-			unlocked++;
+	if (dma->has_rsvd) {
+		long i;
+		for (i = 0; i < npage; i++)
+			if (put_pfn(pfn++, dma->prot))
+				unlocked++;
+	} else {
+		put_valid_unreserved_pfns(pfn, npage, dma->prot);
+		unlocked = npage;
+	}
 
 	if (do_accounting)
 		vfio_lock_acct(dma, locked - unlocked, true);
--
2.20.1
On Mon, Jun 30, 2025 at 03:25:18PM +0800, lizhe.67@bytedance.com wrote:
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index a02bc340c112..7cacfb2cefe3 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -802,17 +802,29 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
>  	return pinned;
>  }
>
> +static inline void put_valid_unreserved_pfns(unsigned long start_pfn,
> +					     unsigned long npage, int prot)
> +{
> +	unpin_user_page_range_dirty_lock(pfn_to_page(start_pfn), npage,
> +					 prot & IOMMU_WRITE);
> +}

I don't think you need this wrapper.

This patch and the prior look OK

Jason
On Wed, 2 Jul 2025 15:28:44 -0300, jgg@ziepe.ca wrote:
> On Mon, Jun 30, 2025 at 03:25:18PM +0800, lizhe.67@bytedance.com wrote:
> > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > index a02bc340c112..7cacfb2cefe3 100644
> > --- a/drivers/vfio/vfio_iommu_type1.c
> > +++ b/drivers/vfio/vfio_iommu_type1.c
> > @@ -802,17 +802,29 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
> >  	return pinned;
> >  }
> >
> > +static inline void put_valid_unreserved_pfns(unsigned long start_pfn,
> > +					     unsigned long npage, int prot)
> > +{
> > +	unpin_user_page_range_dirty_lock(pfn_to_page(start_pfn), npage,
> > +					 prot & IOMMU_WRITE);
> > +}
>
> I don't think you need this wrapper.
>
> This patch and the prior look OK

Thank you very much for your review.

The primary purpose of adding this wrapper is to make the code easier to
understand, and it may also save us a few comments. Would it be better to
keep it?

Thanks,
Zhe
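For context, the wrapper-free variant Jason suggests would look roughly
like the sketch below. This is an illustration reconstructed from the
posted diff, inlining unpin_user_page_range_dirty_lock() and replacing
the wrapper with a comment; it was not itself posted to the thread.

static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
				    unsigned long pfn, unsigned long npage,
				    bool do_accounting)
{
	long unlocked = 0, locked = vpfn_pages(dma, iova, npage);

	if (dma->has_rsvd) {
		long i;

		/* Reserved PFNs may be mixed in; put each page one at a time. */
		for (i = 0; i < npage; i++)
			if (put_pfn(pfn++, dma->prot))
				unlocked++;
	} else {
		/*
		 * Every PFN in the range is a valid, unreserved page, so the
		 * whole range can be unpinned in a single call, which also
		 * handles large folios efficiently.
		 */
		unpin_user_page_range_dirty_lock(pfn_to_page(pfn), npage,
						 dma->prot & IOMMU_WRITE);
		unlocked = npage;
	}

	if (do_accounting)
		vfio_lock_acct(dma, locked - unlocked, true);

	return unlocked;
}

Either way, the fast path is a single call covering the whole range, so
the choice between the two forms is purely one of readability.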