From: Leon Romanovsky <leonro@nvidia.com>

Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code that should use
physical addresses directly rather than page+offset parameters.

The change replaces dma_map_page() and dma_unmap_page() calls with
dma_map_phys() and dma_unmap_phys() respectively, using the physical
address that was already available in the code. This eliminates the
redundant page-to-physical address conversion and aligns with the
DMA subsystem's move toward physical address-centric interfaces.

This serves as an example of how new code should be written to leverage
the more efficient physical address API, which provides cleaner interfaces
for drivers that already have access to physical addresses.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 mm/hmm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index feac86196a65..9354fae3ae06 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -779,8 +779,8 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
if (WARN_ON_ONCE(dma_need_unmap(dev) && !dma_addrs))
goto error;
- dma_addr = dma_map_page(dev, page, 0, map->dma_entry_size,
- DMA_BIDIRECTIONAL);
+ dma_addr = dma_map_phys(dev, paddr, map->dma_entry_size,
+ DMA_BIDIRECTIONAL, 0);
if (dma_mapping_error(dev, dma_addr))
goto error;
@@ -823,8 +823,8 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx)
dma_iova_unlink(dev, state, idx * map->dma_entry_size,
map->dma_entry_size, DMA_BIDIRECTIONAL, attrs);
} else if (dma_need_unmap(dev))
- dma_unmap_page(dev, dma_addrs[idx], map->dma_entry_size,
- DMA_BIDIRECTIONAL);
+ dma_unmap_phys(dev, dma_addrs[idx], map->dma_entry_size,
+ DMA_BIDIRECTIONAL, 0);
pfns[idx] &=
~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | HMM_PFN_P2PDMA_BUS);
--
2.49.0
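
For readers unfamiliar with the new interface, here is a minimal caller-side
sketch of the conversion shown above. The helper names are hypothetical; the
dma_map_phys()/dma_unmap_phys() argument order (dev, phys, size, dir, attrs)
is taken from the diff, everything else is an assumption.

	#include <linux/dma-mapping.h>

	/* Before: page-based mapping (offset 0 into the page) */
	static dma_addr_t map_old(struct device *dev, struct page *page,
				  size_t size)
	{
		return dma_map_page(dev, page, 0, size, DMA_BIDIRECTIONAL);
	}

	/* After: physical-address-based mapping, attrs passed explicitly */
	static dma_addr_t map_new(struct device *dev, phys_addr_t paddr,
				  size_t size)
	{
		return dma_map_phys(dev, paddr, size, DMA_BIDIRECTIONAL, 0);
	}

	/* Unmapping mirrors the same change */
	static void unmap_new(struct device *dev, dma_addr_t dma_addr,
			      size_t size)
	{
		dma_unmap_phys(dev, dma_addr, size, DMA_BIDIRECTIONAL, 0);
	}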
Hi Leon,

On Wed, Jun 25, 2025 at 04:19:05PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Convert HMM DMA operations from the legacy page-based API to the new
> physical address-based dma_map_phys() and dma_unmap_phys() functions.
> This demonstrates the preferred approach for new code that should use
> physical addresses directly rather than page+offset parameters.
>
> The change replaces dma_map_page() and dma_unmap_page() calls with
> dma_map_phys() and dma_unmap_phys() respectively, using the physical
> address that was already available in the code. This eliminates the
> redundant page-to-physical address conversion and aligns with the
> DMA subsystem's move toward physical address-centric interfaces.
>
> This serves as an example of how new code should be written to leverage
> the more efficient physical address API, which provides cleaner interfaces
> for drivers that already have access to physical addresses.

I'm struggling a little to see how this is cleaner or more efficient
than the old code.

From what I can tell, dma_map_page_attrs() takes a 'struct page *' and
converts it to a physical address using page_to_phys() whilst your new
dma_map_phys() interface takes a physical address and converts it to
a 'struct page *' using phys_to_page(). In both cases, hmm_dma_map_pfn()
still needs the page for other reasons. If anything, existing users of
dma_map_page_attrs() now end up with a redundant page-to-phys-to-page
conversion which hopefully the compiler folds away.

I'm assuming there's future work which builds on top of the new API
and removes the reliance on 'struct page' entirely, is that right? If
so, it would've been nicer to be clearer about that as, on its own, I'm
not really sure this patch series achieves an awful lot and the
efficiency argument looks quite weak to me.

Cheers,

Will
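
To make the round trip Will describes concrete, a hypothetical sketch of an
existing page-based caller after switching over (the helper name is made up;
only the call sequence reflects Will's description):

	#include <linux/dma-mapping.h>
	#include <linux/io.h>

	/* Hypothetical existing user that starts out with a struct page */
	static dma_addr_t map_from_page(struct device *dev, struct page *page,
					size_t size)
	{
		phys_addr_t paddr = page_to_phys(page);	/* caller-side page -> phys */

		/*
		 * Per Will's reading, the legacy (non-IOMMU) path inside
		 * dma_map_phys() may do phys_to_page(paddr) again, i.e. a
		 * page -> phys -> page round trip that the compiler
		 * hopefully folds away.
		 */
		return dma_map_phys(dev, paddr, size, DMA_BIDIRECTIONAL, 0);
	}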
On Tue, Jul 15, 2025 at 02:24:38PM +0100, Will Deacon wrote:
> Hi Leon,
>
> On Wed, Jun 25, 2025 at 04:19:05PM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> >
> > Convert HMM DMA operations from the legacy page-based API to the new
> > physical address-based dma_map_phys() and dma_unmap_phys() functions.
> > This demonstrates the preferred approach for new code that should use
> > physical addresses directly rather than page+offset parameters.
> >
> > The change replaces dma_map_page() and dma_unmap_page() calls with
> > dma_map_phys() and dma_unmap_phys() respectively, using the physical
> > address that was already available in the code. This eliminates the
> > redundant page-to-physical address conversion and aligns with the
> > DMA subsystem's move toward physical address-centric interfaces.
> >
> > This serves as an example of how new code should be written to leverage
> > the more efficient physical address API, which provides cleaner interfaces
> > for drivers that already have access to physical addresses.
>
> I'm struggling a little to see how this is cleaner or more efficient
> than the old code.

It is not; the main reason for the hmm conversion is to show how the
API is used. HMM is built around struct page.

> From what I can tell, dma_map_page_attrs() takes a 'struct page *' and
> converts it to a physical address using page_to_phys() whilst your new
> dma_map_phys() interface takes a physical address and converts it to
> a 'struct page *' using phys_to_page(). In both cases, hmm_dma_map_pfn()
> still needs the page for other reasons. If anything, existing users of
> dma_map_page_attrs() now end up with a redundant page-to-phys-to-page
> conversion which hopefully the compiler folds away.
>
> I'm assuming there's future work which builds on top of the new API
> and removes the reliance on 'struct page' entirely, is that right? If
> so, it would've been nicer to be clearer about that as, on its own, I'm
> not really sure this patch series achieves an awful lot and the
> efficiency argument looks quite weak to me.

Yes, there is ongoing work which is built on top of the dma_map_phys()
API and can't be built without DMA phys. My WIP branch, where I'm using
it, can be found here:

https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=dmabuf-vfio

In that branch, we save one phys_to_page conversion in the block
datapath:

  block-dma: migrate to dma_map_phys instead of map_page

and implement a DMABUF exporter for MMIO pages:

  vfio/pci: Allow MMIO regions to be exported through dma-buf

See the vfio_pci_dma_buf_map() function.

Thanks

> Cheers,
>
> Will
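
As a purely hypothetical illustration of the use case Leon points to (not
code from the dmabuf-vfio branch): a range such as an MMIO BAR has no struct
page backing it, so dma_map_page() cannot be used at all, while dma_map_phys()
can take the raw physical address. Whether the real series requires extra
attrs for MMIO is not shown here.

	#include <linux/dma-mapping.h>

	/* Hypothetical caller that only has a physical address for an MMIO range */
	static dma_addr_t map_mmio_range(struct device *dev, phys_addr_t mmio_phys,
					 size_t size)
	{
		/* No struct page backs this range, so dma_map_page() is not an option */
		return dma_map_phys(dev, mmio_phys, size, DMA_BIDIRECTIONAL, 0);
	}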