Hi Alex,

This regards vfio passthru support on hyperv running linux as dom0, aka
the root partition. At a high level, cloud hypervisor uses vfio for setup
as usual, then maps the mmio ranges via the hyperv linux driver ioctls.
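
For reference, the cloud hypervisor side of that setup is just the standard
vfio-pci region mmap; a minimal userspace sketch (error handling trimmed,
the device fd is assumed to have come from the usual vfio group/cdev path,
and the hyperv ioctl that later consumes the mapping is not shown here):

#include <stddef.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/*
 * Map BAR0 of an already-opened vfio-pci device fd.  The offset reported in
 * struct vfio_region_info encodes the region index (vfio-pci-core's
 * VFIO_PCI_OFFSET_SHIFT convention), so userspace just hands it to mmap().
 */
static void *map_bar0(int device_fd, size_t *len)
{
	struct vfio_region_info info = {
		.argsz = sizeof(info),
		.index = VFIO_PCI_BAR0_REGION_INDEX,
	};

	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
		return MAP_FAILED;
	if (!(info.flags & VFIO_REGION_INFO_FLAG_MMAP))
		return MAP_FAILED;

	*len = info.size;
	return mmap(NULL, info.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    device_fd, (off_t)info.offset);
}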

Over a year ago, when working on this, I used vm_pgoff to get the pfn for
the mmio; that was on 5.15 and early 6.x kernels. Now that I am porting to
6.18 for upstreaming, I noticed:

commit aac6db75a9fc
Author: Alex Williamson <alex.williamson@redhat.com>
vfio/pci: Use unmap_mapping_range()

changed the behavior, and vm_pgoff no longer holds the pfn. In light of
that, I wondered if the following minor change, making vma_to_pfn()
public (after renaming it), would be acceptable to you.
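
To be concrete, the helper amounts to the following (sketch only, based on
the code the patch touches; it is only meaningful for a VMA that was set up
by vfio_pci_core_mmap(), where vm_private_data is the vfio_pci_core_device):

#include <linux/mm.h>
#include <linux/pci.h>
#include <linux/vfio_pci_core.h>

/*
 * vm_pgoff keeps the vfio-pci-core offset encoding (BAR index in the high
 * bits, page offset within the BAR in the low bits), so the pfn has to be
 * derived from the PCI resource rather than read out of vm_pgoff directly.
 */
static unsigned long sketch_vfio_pci_vma_to_pfn(struct vm_area_struct *vma)
{
	struct vfio_pci_core_device *vdev = vma->vm_private_data;
	int index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
	u64 pgoff = vma->vm_pgoff &
		    ((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);

	return (pci_resource_start(vdev->pdev, index) >> PAGE_SHIFT) + pgoff;
}
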
Thanks,
-Mukesh
-----------------------------------------------------------------------------
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 7dcf5439dedc..43083a16d8a2 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1628,7 +1628,7 @@ void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev, u16 c
 	up_write(&vdev->memory_lock);
 }
 
-static unsigned long vma_to_pfn(struct vm_area_struct *vma)
+unsigned long vfio_pci_vma_to_pfn(struct vm_area_struct *vma)
 {
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
 	int index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
@@ -1647,7 +1647,7 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
 	unsigned long addr = vmf->address & ~((PAGE_SIZE << order) - 1);
 	unsigned long pgoff = (addr - vma->vm_start) >> PAGE_SHIFT;
-	unsigned long pfn = vma_to_pfn(vma) + pgoff;
+	unsigned long pfn = vfio_pci_vma_to_pfn(vma) + pgoff;
 	vm_fault_t ret = VM_FAULT_SIGBUS;
 
 	if (order && (addr < vma->vm_start ||
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index f541044e42a2..88925c6b8a22 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -119,6 +119,7 @@ ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf,
 		size_t count, loff_t *ppos);
 ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *buf,
 		size_t count, loff_t *ppos);
+unsigned long vfio_pci_vma_to_pfn(struct vm_area_struct *vma);
 int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma);
 void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count);
 int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf);

On Mon, Oct 27, 2025 at 02:21:56PM -0700, Mukesh R wrote:
> Hi Alex,
>
> This regards vfio passthru support on hyperv running linux as dom0, aka
> the root partition. At a high level, cloud hypervisor uses vfio for setup
> as usual, then maps the mmio ranges via the hyperv linux driver ioctls.
>
> Over a year ago, when working on this, I used vm_pgoff to get the pfn for
> the mmio; that was on 5.15 and early 6.x kernels. Now that I am porting to
> 6.18 for upstreaming, I noticed:
>
> commit aac6db75a9fc
> Author: Alex Williamson <alex.williamson@redhat.com>
> vfio/pci: Use unmap_mapping_range()
>
> changed the behavior, and vm_pgoff no longer holds the pfn. In light of
> that, I wondered if the following minor change, making vma_to_pfn()
> public (after renaming it), would be acceptable to you.

No way, no driver should be looking into VMAs like this - it is already a
known security problem.

Is this "hyperv linux driver ioctls" upstream?

You should probably be looking to use the coming DMABUF stuff instead.

Jason

On Mon, 27 Oct 2025 14:21:56 -0700
Mukesh R <mrathor@linux.microsoft.com> wrote:

> Hi Alex,
>
> This regards vfio passthru support on hyperv running linux as dom0, aka
> the root partition. At a high level, cloud hypervisor uses vfio for setup
> as usual, then maps the mmio ranges via the hyperv linux driver ioctls.
>
> Over a year ago, when working on this, I used vm_pgoff to get the pfn for
> the mmio; that was on 5.15 and early 6.x kernels. Now that I am porting to
> 6.18 for upstreaming, I noticed:
>
> commit aac6db75a9fc
> Author: Alex Williamson <alex.williamson@redhat.com>
> vfio/pci: Use unmap_mapping_range()
>
> changed the behavior, and vm_pgoff no longer holds the pfn. In light of
> that, I wondered if the following minor change, making vma_to_pfn()
> public (after renaming it), would be acceptable to you.

How do you know the device is using vfio_pci_core_mmap() with these
semantics for vm_pgoff versus something like nvgrace_gpu_mmap() that
uses vm_pgoff more like you're expecting? vma_to_pfn() is specific to
the vfio-pci-core semantics; it's not portable to expose for other use
cases. Thanks,

Alex
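
To make the non-portability concrete, this is what vfio-pci-core leaves in
vm_pgoff after vfio_pci_core_mmap(); a sketch using VFIO_PCI_OFFSET_SHIFT
from vfio_pci_core.h (the helper name is illustrative, and the variant
driver behaviour is only as characterized above):

#include <linux/mm.h>
#include <linux/vfio_pci_core.h>

/*
 * vfio-pci-core builds the mmap file offset as
 * VFIO_PCI_INDEX_TO_OFFSET(index) + offset-within-region, so the VMA ends
 * up with
 *
 *   vm_pgoff = (index << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) | page_off
 *
 * i.e. a region cookie, not a pfn.  Nothing in the VMA itself records which
 * convention the providing driver used, which is why decoding it outside
 * vfio-pci-core is fragile.
 */
static inline unsigned int sketch_vm_pgoff_to_region(struct vm_area_struct *vma)
{
	return vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
}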

On 10/27/25 19:17, Alex Williamson wrote:
> On Mon, 27 Oct 2025 14:21:56 -0700
> Mukesh R <mrathor@linux.microsoft.com> wrote:
>
>> Hi Alex,
>>
>> This regards vfio passthru support on hyperv running linux as dom0, aka
>> the root partition. At a high level, cloud hypervisor uses vfio for setup
>> as usual, then maps the mmio ranges via the hyperv linux driver ioctls.
>>
>> Over a year ago, when working on this, I used vm_pgoff to get the pfn for
>> the mmio; that was on 5.15 and early 6.x kernels. Now that I am porting to
>> 6.18 for upstreaming, I noticed:
>>
>> commit aac6db75a9fc
>> Author: Alex Williamson <alex.williamson@redhat.com>
>> vfio/pci: Use unmap_mapping_range()
>>
>> changed the behavior, and vm_pgoff no longer holds the pfn. In light of
>> that, I wondered if the following minor change, making vma_to_pfn()
>> public (after renaming it), would be acceptable to you.
>
> How do you know the device is using vfio_pci_core_mmap() with these
> semantics for vm_pgoff versus something like nvgrace_gpu_mmap() that
> uses vm_pgoff more like you're expecting? vma_to_pfn() is specific to

The gpu mmap will not come thru this ioctl path into the hyperv driver.

> uses vm_pgoff more like you're expecting? vma_to_pfn() is specific to
> the vfio-pci-core semantics; it's not portable to expose for other use
> cases. Thanks,

Ok. Will think of an alternate way; just thought I would check before
going that route.

Thanks,
-Mukesh

>
> Alex