From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 01/10] PCI/P2PDMA: Separate the mmap() support from the core logic
Date: Thu, 25 Sep 2025 16:14:29 +0300
Message-ID: <3ae95fe8fd5753cd8394387e6c67611128224235.1758804980.git.leon@kernel.org>

From: Leon Romanovsky

Currently the P2PDMA code requires a pgmap and a struct page to
function.
This was serving three important purposes:

 - DMA API compatibility, where scatterlist required a struct page as
   input
 - Life cycle management, where the percpu_ref is used to prevent UAF
   during device hot unplug
 - A way to get the P2P provider data through the pci_p2pdma_pagemap

The DMA API now has a new flow and has gained phys_addr_t support, so
it no longer needs struct pages to perform P2P mapping. Lifecycle
management can be delegated to the user; DMABUF, for instance, has a
suitable invalidation protocol that does not require struct page.
Finding the P2P provider data can also be managed by the caller
without needing to look it up from the phys_addr.

Split the P2PDMA code into two layers. The optional upper layer
effectively provides a way to mmap() P2P memory into a VMA by
providing struct page, pgmap, a genalloc and sysfs. The lower layer
provides the actual P2P infrastructure and is wrapped up in a new
struct p2pdma_provider. Rework the mmap layer to use the new
p2pdma_provider based APIs.

Drivers that do not want to put P2P memory into VMAs can allocate a
struct p2pdma_provider after probe() starts and free it before
remove() completes. When DMA mapping, the driver must convey the
struct p2pdma_provider to the DMA mapping code along with a phys_addr
of the MMIO BAR slice to map. The driver must ensure that no DMA
mapping outlives the lifetime of the struct p2pdma_provider.

The intended target of this new API layer is DMABUF. There is usually
only a single p2pdma_provider for a DMABUF exporter. Most drivers can
establish the p2pdma_provider during probe, access the single instance
during DMABUF attach and use that to drive the DMA mapping.

DMABUF provides an invalidation mechanism that can guarantee all DMA
is halted and the DMA mappings are undone prior to destroying the
struct p2pdma_provider. This ensures there is no UAF through DMABUFs
that linger past driver removal.

The new p2pdma_provider layer cannot be used to create P2P memory that
can be mapped into VMAs, used with pin_user_pages(), O_DIRECT, and so
on. These use cases must still use the mmap() layer. The
p2pdma_provider layer is principally for DMABUF-like use cases where
DMABUF natively manages the life cycle and access instead of
VMAs/pin_user_pages()/struct page.

In addition, remove the bus_off field from pci_p2pdma_map_state since
it duplicates information already available in the pgmap structure.
The bus_offset is only used in one location (pci_p2pdma_bus_addr_map)
and is always identical to pgmap->bus_offset.
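As an illustration of the lifetime contract described above, a minimal
sketch of a hypothetical exporter follows; only struct p2pdma_provider
and its owner/bus_offset fields come from this patch, the driver
scaffolding and the choice of BAR 0 are assumptions:

	#include <linux/pci.h>
	#include <linux/pci-p2pdma.h>

	/* Hypothetical exporter private data; the provider lives from
	 * probe() to remove(), bounding all DMA mappings made from it. */
	struct my_exporter {
		struct p2pdma_provider mem;
	};

	static int my_exporter_probe(struct pci_dev *pdev,
				     const struct pci_device_id *id)
	{
		struct my_exporter *ex;

		ex = devm_kzalloc(&pdev->dev, sizeof(*ex), GFP_KERNEL);
		if (!ex)
			return -ENOMEM;

		/*
		 * Describe a slice of BAR 0 as a P2P provider. No pgmap and
		 * no struct page are involved; the exporter's DMABUF
		 * invalidation must guarantee all DMA is undone before
		 * remove() completes.
		 */
		ex->mem.owner = &pdev->dev;
		ex->mem.bus_offset = pci_bus_address(pdev, 0) -
				     pci_resource_start(pdev, 0);

		pci_set_drvdata(pdev, ex);
		return 0;
	}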
Signed-off-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/pci/p2pdma.c       | 43 ++++++++++++++++++++------------------
 include/linux/pci-p2pdma.h | 19 ++++++++++++-----
 2 files changed, 37 insertions(+), 25 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index da5657a020074..176a99232fdca 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -28,9 +28,8 @@ struct pci_p2pdma {
 };

 struct pci_p2pdma_pagemap {
-	struct pci_dev *provider;
-	u64 bus_offset;
 	struct dev_pagemap pgmap;
+	struct p2pdma_provider mem;
 };

 static struct pci_p2pdma_pagemap *to_p2p_pgmap(struct dev_pagemap *pgmap)
@@ -204,8 +203,8 @@ static void p2pdma_page_free(struct page *page)
 {
 	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page_pgmap(page));
 	/* safe to dereference while a reference is held to the percpu ref */
-	struct pci_p2pdma *p2pdma =
-		rcu_dereference_protected(pgmap->provider->p2pdma, 1);
+	struct pci_p2pdma *p2pdma = rcu_dereference_protected(
+		to_pci_dev(pgmap->mem.owner)->p2pdma, 1);
 	struct percpu_ref *ref;

 	gen_pool_free_owner(p2pdma->pool, (uintptr_t)page_to_virt(page),
@@ -270,14 +269,15 @@ static int pci_p2pdma_setup(struct pci_dev *pdev)

 static void pci_p2pdma_unmap_mappings(void *data)
 {
-	struct pci_dev *pdev = data;
+	struct pci_p2pdma_pagemap *p2p_pgmap = data;

 	/*
 	 * Removing the alloc attribute from sysfs will call
 	 * unmap_mapping_range() on the inode, teardown any existing userspace
 	 * mappings and prevent new ones from being created.
 	 */
-	sysfs_remove_file_from_group(&pdev->dev.kobj, &p2pmem_alloc_attr.attr,
+	sysfs_remove_file_from_group(&p2p_pgmap->mem.owner->kobj,
+				     &p2pmem_alloc_attr.attr,
 				     p2pmem_group.name);
 }

@@ -328,10 +328,9 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 	pgmap->nr_range = 1;
 	pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
 	pgmap->ops = &p2pdma_pgmap_ops;
-
-	p2p_pgmap->provider = pdev;
-	p2p_pgmap->bus_offset = pci_bus_address(pdev, bar) -
-		pci_resource_start(pdev, bar);
+	p2p_pgmap->mem.owner = &pdev->dev;
+	p2p_pgmap->mem.bus_offset =
+		pci_bus_address(pdev, bar) - pci_resource_start(pdev, bar);

 	addr = devm_memremap_pages(&pdev->dev, pgmap);
 	if (IS_ERR(addr)) {
@@ -340,7 +339,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 	}

 	error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_unmap_mappings,
-					 pdev);
+					 p2p_pgmap);
 	if (error)
 		goto pages_free;

@@ -973,16 +972,16 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
 }
 EXPORT_SYMBOL_GPL(pci_p2pmem_publish);

-static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
-	struct device *dev)
+static enum pci_p2pdma_map_type
+pci_p2pdma_map_type(struct p2pdma_provider *provider, struct device *dev)
 {
 	enum pci_p2pdma_map_type type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
-	struct pci_dev *provider = to_p2p_pgmap(pgmap)->provider;
+	struct pci_dev *pdev = to_pci_dev(provider->owner);
 	struct pci_dev *client;
 	struct pci_p2pdma *p2pdma;
 	int dist;

-	if (!provider->p2pdma)
+	if (!pdev->p2pdma)
 		return PCI_P2PDMA_MAP_NOT_SUPPORTED;

 	if (!dev_is_pci(dev))
@@ -991,7 +990,7 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 	client = to_pci_dev(dev);

 	rcu_read_lock();
-	p2pdma = rcu_dereference(provider->p2pdma);
+	p2pdma = rcu_dereference(pdev->p2pdma);

 	if (p2pdma)
 		type = xa_to_value(xa_load(&p2pdma->map_types,
@@ -999,7 +998,7 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 	rcu_read_unlock();

 	if (type == PCI_P2PDMA_MAP_UNKNOWN)
-		return calc_map_type_and_dist(provider, client, &dist, true);
+		return calc_map_type_and_dist(pdev, client, &dist, true);

 	return type;
 }
@@ -1007,9 +1006,13 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
 		struct device *dev, struct page *page)
 {
-	state->pgmap = page_pgmap(page);
-	state->map = pci_p2pdma_map_type(state->pgmap, dev);
-	state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset;
+	struct pci_p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(page_pgmap(page));
+
+	if (state->mem == &p2p_pgmap->mem)
+		return;
+
+	state->mem = &p2p_pgmap->mem;
+	state->map = pci_p2pdma_map_type(&p2p_pgmap->mem, dev);
 }

 /**
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index 075c20b161d98..27a2c399f47da 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -16,6 +16,16 @@ struct block_device;
 struct scatterlist;

+/**
+ * struct p2pdma_provider
+ *
+ * A p2pdma provider is a range of MMIO address space available to the CPU.
+ */
+struct p2pdma_provider {
+	struct device *owner;
+	u64 bus_offset;
+};
+
 #ifdef CONFIG_PCI_P2PDMA
 int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 		u64 offset);
@@ -144,11 +154,11 @@ enum pci_p2pdma_map_type {
 };

 struct pci_p2pdma_map_state {
-	struct dev_pagemap *pgmap;
+	struct p2pdma_provider *mem;
 	enum pci_p2pdma_map_type map;
-	u64 bus_off;
 };

+
 /* helper for pci_p2pdma_state(), do not use directly */
 void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
 		struct device *dev, struct page *page);
@@ -167,8 +177,7 @@ pci_p2pdma_state(struct pci_p2pdma_map_state *state, struct device *dev,
 		 struct page *page)
 {
 	if (IS_ENABLED(CONFIG_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
-		if (state->pgmap != page_pgmap(page))
-			__pci_p2pdma_update_state(state, dev, page);
+		__pci_p2pdma_update_state(state, dev, page);
 		return state->map;
 	}
 	return PCI_P2PDMA_MAP_NONE;
@@ -186,7 +195,7 @@ static inline dma_addr_t
 pci_p2pdma_bus_addr_map(struct pci_p2pdma_map_state *state, phys_addr_t paddr)
 {
 	WARN_ON_ONCE(state->map != PCI_P2PDMA_MAP_BUS_ADDR);
-	return paddr + state->bus_off;
+	return paddr + state->mem->bus_offset;
 }

 #endif /* _LINUX_PCI_P2P_H */
-- 
2.51.0

From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 02/10] PCI/P2PDMA: Simplify bus address mapping API
Date: Thu, 25 Sep 2025 16:14:30 +0300

From: Leon Romanovsky

Update the pci_p2pdma_bus_addr_map() function to take a direct pointer
to the p2pdma_provider structure instead of the pci_p2pdma_map_state.
This simplifies the API by removing the need for callers to extract
the provider from the state structure.

The change updates all callers across the kernel (block layer, IOMMU,
DMA direct, and HMM) to pass the provider pointer directly, making the
code more explicit and reducing unnecessary indirection.

This also removes the runtime warning check since callers now have
direct control over which provider they use.

Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c         | 2 +-
 drivers/iommu/dma-iommu.c  | 4 ++--
 include/linux/pci-p2pdma.h | 7 +++----
 kernel/dma/direct.c        | 4 ++--
 mm/hmm.c                   | 2 +-
 5 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index d415088ed9fd2..430e51ec494a6 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -79,7 +79,7 @@ static inline bool blk_can_dma_map_iova(struct request *req,

 static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
 {
-	iter->addr = pci_p2pdma_bus_addr_map(&iter->p2pdma, vec->paddr);
+	iter->addr = pci_p2pdma_bus_addr_map(iter->p2pdma.mem, vec->paddr);
 	iter->len = vec->len;
 	return true;
 }
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7944a3af4545e..e52d19d2e8334 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1439,8 +1439,8 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 			 * as a bus address, __finalise_sg() will copy the dma
 			 * address into the output segment.
 			 */
-			s->dma_address = pci_p2pdma_bus_addr_map(&p2pdma_state,
-					sg_phys(s));
+			s->dma_address = pci_p2pdma_bus_addr_map(
+					p2pdma_state.mem, sg_phys(s));
 			sg_dma_len(s) = sg->length;
 			sg_dma_mark_bus_address(s);
 			continue;
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index 27a2c399f47da..eef96636c67e6 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -186,16 +186,15 @@ pci_p2pdma_state(struct pci_p2pdma_map_state *state, struct device *dev,
 /**
  * pci_p2pdma_bus_addr_map - Translate a physical address to a bus address
  *	for a PCI_P2PDMA_MAP_BUS_ADDR transfer.
- * @state: P2P state structure
+ * @provider: P2P provider structure
  * @paddr: physical address to map
  *
  * Map a physically contiguous PCI_P2PDMA_MAP_BUS_ADDR transfer.
  */
 static inline dma_addr_t
-pci_p2pdma_bus_addr_map(struct pci_p2pdma_map_state *state, phys_addr_t paddr)
+pci_p2pdma_bus_addr_map(struct p2pdma_provider *provider, phys_addr_t paddr)
 {
-	WARN_ON_ONCE(state->map != PCI_P2PDMA_MAP_BUS_ADDR);
-	return paddr + state->mem->bus_offset;
+	return paddr + provider->bus_offset;
 }

 #endif /* _LINUX_PCI_P2P_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 1062caac47e7b..3e058c99fe856 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -484,8 +484,8 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 			}
 			break;
 		case PCI_P2PDMA_MAP_BUS_ADDR:
-			sg->dma_address = pci_p2pdma_bus_addr_map(&p2pdma_state,
-					sg_phys(sg));
+			sg->dma_address = pci_p2pdma_bus_addr_map(
+					p2pdma_state.mem, sg_phys(sg));
 			sg_dma_mark_bus_address(sg);
 			continue;
 		default:
diff --git a/mm/hmm.c b/mm/hmm.c
index 6556c0e074ba8..012b78688fa18 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -751,7 +751,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 		break;
 	case PCI_P2PDMA_MAP_BUS_ADDR:
 		pfns[idx] |= HMM_PFN_P2PDMA_BUS | HMM_PFN_DMA_MAPPED;
-		return pci_p2pdma_bus_addr_map(p2pdma_state, paddr);
+		return pci_p2pdma_bus_addr_map(p2pdma_state->mem, paddr);
 	default:
 		return DMA_MAPPING_ERROR;
 	}
-- 
2.51.0

From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 03/10] PCI/P2PDMA: Refactor to separate core P2P functionality from memory allocation
Date: Thu, 25 Sep 2025 16:14:31 +0300

From: Leon Romanovsky

Refactor the PCI P2PDMA subsystem to separate the core peer-to-peer
DMA functionality from the optional memory allocation layer. This
creates a two-tier architecture:

The core layer provides P2P mapping functionality for physical
addresses based on PCI device MMIO BARs and integrates with the DMA
API for mapping operations. This layer is required for all P2PDMA
users.

The optional upper layer provides memory allocation capabilities
including the gen_pool allocator, struct page support, and a sysfs
interface for user space access.

This separation allows subsystems like VFIO to use only the core P2P
mapping functionality without the overhead of memory allocation
features they don't need. The core functionality is now available
through the new pcim_p2pdma_provider() function that returns a
p2pdma_provider structure.
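As a rough usage sketch of the core layer (the surrounding driver code
is hypothetical; the VFIO wiring lands later in this series), a caller
that only needs P2P mapping, with no gen_pool/struct page/sysfs, would
do something like:

	#include <linux/pci.h>
	#include <linux/pci-p2pdma.h>

	/* Hypothetical caller: fetch the provider for BAR 2 at probe time. */
	static int my_probe(struct pci_dev *pdev)
	{
		struct p2pdma_provider *provider;

		provider = pcim_p2pdma_provider(pdev, 2);
		if (IS_ERR(provider))
			return PTR_ERR(provider);
		if (!provider)		/* BAR 2 is not an MMIO BAR */
			return -EINVAL;

		/*
		 * No allocation layer is created here; the devm action
		 * registered inside pcim_p2pdma_provider() tears the core
		 * state down when the device is unbound.
		 */
		pci_set_drvdata(pdev, provider);
		return 0;
	}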
Signed-off-by: Leon Romanovsky
---
 drivers/pci/p2pdma.c       | 140 ++++++++++++++++++++++++++++---------
 include/linux/pci-p2pdma.h |   6 ++
 2 files changed, 112 insertions(+), 34 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 176a99232fdca..76496a5ab82e0 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -25,11 +25,12 @@ struct pci_p2pdma {
 	struct gen_pool *pool;
 	bool p2pmem_published;
 	struct xarray map_types;
+	struct p2pdma_provider mem[PCI_STD_NUM_BARS];
 };

 struct pci_p2pdma_pagemap {
 	struct dev_pagemap pgmap;
-	struct p2pdma_provider mem;
+	struct p2pdma_provider *mem;
 };

 static struct pci_p2pdma_pagemap *to_p2p_pgmap(struct dev_pagemap *pgmap)
@@ -204,7 +205,7 @@ static void p2pdma_page_free(struct page *page)
 	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page_pgmap(page));
 	/* safe to dereference while a reference is held to the percpu ref */
 	struct pci_p2pdma *p2pdma = rcu_dereference_protected(
-		to_pci_dev(pgmap->mem.owner)->p2pdma, 1);
+		to_pci_dev(pgmap->mem->owner)->p2pdma, 1);
 	struct percpu_ref *ref;

 	gen_pool_free_owner(p2pdma->pool, (uintptr_t)page_to_virt(page),
@@ -227,44 +228,97 @@ static void pci_p2pdma_release(void *data)

 	/* Flush and disable pci_alloc_p2p_mem() */
 	pdev->p2pdma = NULL;
-	synchronize_rcu();
+	if (p2pdma->pool)
+		synchronize_rcu();
+	xa_destroy(&p2pdma->map_types);
+
+	if (!p2pdma->pool)
+		return;

 	gen_pool_destroy(p2pdma->pool);
 	sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group);
-	xa_destroy(&p2pdma->map_types);
 }

-static int pci_p2pdma_setup(struct pci_dev *pdev)
+/**
+ * pcim_p2pdma_provider - Get peer-to-peer DMA provider
+ * @pdev: The PCI device to enable P2PDMA for
+ * @bar: BAR index to get provider
+ *
+ * This function gets peer-to-peer DMA provider and initializes
+ * the peer-to-peer DMA infrastructure for a PCI device. It allocates
+ * and sets up the necessary data structures to support P2PDMA operations,
+ * including mapping type tracking.
+ */
+struct p2pdma_provider *pcim_p2pdma_provider(struct pci_dev *pdev, int bar)
 {
-	int error = -ENOMEM;
 	struct pci_p2pdma *p2p;
+	int i, ret;
+
+	if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
+		return NULL;
+
+	p2p = rcu_dereference_protected(pdev->p2pdma, 1);
+	if (p2p)
+		/* PCI device was already initialized */
+		return &p2p->mem[bar];

 	p2p = devm_kzalloc(&pdev->dev, sizeof(*p2p), GFP_KERNEL);
 	if (!p2p)
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);

 	xa_init(&p2p->map_types);
+	/*
+	 * Iterate over all standard PCI BARs and record only those that
+	 * correspond to MMIO regions. Skip non-memory resources (e.g. I/O
+	 * port BARs) since they cannot be used for peer-to-peer (P2P)
+	 * transactions.
+	 */
+	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
+		if (!(pci_resource_flags(pdev, i) & IORESOURCE_MEM))
+			continue;

-	p2p->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev));
-	if (!p2p->pool)
-		goto out;
+		p2p->mem[i].owner = &pdev->dev;
+		p2p->mem[i].bus_offset =
+			pci_bus_address(pdev, i) - pci_resource_start(pdev, i);
+	}

-	error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev);
-	if (error)
-		goto out_pool_destroy;
+	ret = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev);
+	if (ret)
+		goto out_p2p;

-	error = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group);
-	if (error)
+	rcu_assign_pointer(pdev->p2pdma, p2p);
+	return &p2p->mem[bar];
+
+out_p2p:
+	devm_kfree(&pdev->dev, p2p);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(pcim_p2pdma_provider);
+
+static int pci_p2pdma_setup_pool(struct pci_dev *pdev)
+{
+	struct pci_p2pdma *p2pdma;
+	int ret;
+
+	p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
+	if (p2pdma->pool)
+		/* We already set up the pool, do nothing. */
+		return 0;
+
+	p2pdma->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev));
+	if (!p2pdma->pool)
+		return -ENOMEM;
+
+	ret = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group);
+	if (ret)
 		goto out_pool_destroy;

-	rcu_assign_pointer(pdev->p2pdma, p2p);
 	return 0;

 out_pool_destroy:
-	gen_pool_destroy(p2p->pool);
-out:
-	devm_kfree(&pdev->dev, p2p);
-	return error;
+	gen_pool_destroy(p2pdma->pool);
+	p2pdma->pool = NULL;
+	return ret;
 }

 static void pci_p2pdma_unmap_mappings(void *data)
@@ -276,7 +330,7 @@ static void pci_p2pdma_unmap_mappings(void *data)
 	 * unmap_mapping_range() on the inode, teardown any existing userspace
 	 * mappings and prevent new ones from being created.
 	 */
-	sysfs_remove_file_from_group(&p2p_pgmap->mem.owner->kobj,
+	sysfs_remove_file_from_group(&p2p_pgmap->mem->owner->kobj,
 				     &p2pmem_alloc_attr.attr,
 				     p2pmem_group.name);
 }
@@ -295,6 +349,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 		u64 offset)
 {
 	struct pci_p2pdma_pagemap *p2p_pgmap;
+	struct p2pdma_provider *mem;
 	struct dev_pagemap *pgmap;
 	struct pci_p2pdma *p2pdma;
 	void *addr;
@@ -312,15 +367,32 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 	if (size + offset > pci_resource_len(pdev, bar))
 		return -EINVAL;

-	if (!pdev->p2pdma) {
-		error = pci_p2pdma_setup(pdev);
+	p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
+	if (!p2pdma) {
+		mem = pcim_p2pdma_provider(pdev, bar);
+		if (IS_ERR(mem))
+			return PTR_ERR(mem);
+
+		/*
+		 * We checked the validity of the BAR prior to calling
+		 * pcim_p2pdma_provider(). It should never return NULL.
+		 */
+		if (WARN_ON(!mem))
+			return -EINVAL;
+
+		error = pci_p2pdma_setup_pool(pdev);
 		if (error)
 			return error;
-	}
+
+		p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
+	} else
+		mem = &p2pdma->mem[bar];

 	p2p_pgmap = devm_kzalloc(&pdev->dev, sizeof(*p2p_pgmap), GFP_KERNEL);
-	if (!p2p_pgmap)
-		return -ENOMEM;
+	if (!p2p_pgmap) {
+		error = -ENOMEM;
+		goto free_pool;
+	}

 	pgmap = &p2p_pgmap->pgmap;
 	pgmap->range.start = pci_resource_start(pdev, bar) + offset;
@@ -328,9 +400,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 	pgmap->nr_range = 1;
 	pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
 	pgmap->ops = &p2pdma_pgmap_ops;
-	p2p_pgmap->mem.owner = &pdev->dev;
-	p2p_pgmap->mem.bus_offset =
-		pci_bus_address(pdev, bar) - pci_resource_start(pdev, bar);
+	p2p_pgmap->mem = mem;

 	addr = devm_memremap_pages(&pdev->dev, pgmap);
 	if (IS_ERR(addr)) {
@@ -343,7 +413,6 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 	if (error)
 		goto pages_free;

-	p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
 	error = gen_pool_add_owner(p2pdma->pool, (unsigned long)addr,
 			pci_bus_address(pdev, bar) + offset,
 			range_len(&pgmap->range), dev_to_node(&pdev->dev),
@@ -359,7 +428,10 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 pages_free:
 	devm_memunmap_pages(&pdev->dev, pgmap);
 pgmap_free:
-	devm_kfree(&pdev->dev, pgmap);
+	devm_kfree(&pdev->dev, p2p_pgmap);
+free_pool:
+	sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group);
+	gen_pool_destroy(p2pdma->pool);
 	return error;
 }
 EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource);
@@ -1008,11 +1080,11 @@ void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
 {
 	struct pci_p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(page_pgmap(page));

-	if (state->mem == &p2p_pgmap->mem)
+	if (state->mem == p2p_pgmap->mem)
 		return;

-	state->mem = &p2p_pgmap->mem;
-	state->map = pci_p2pdma_map_type(&p2p_pgmap->mem, dev);
+	state->mem = p2p_pgmap->mem;
+	state->map = pci_p2pdma_map_type(p2p_pgmap->mem, dev);
 }

 /**
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index eef96636c67e6..476650ae8d4d8 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -27,6 +27,7 @@ struct p2pdma_provider {
 };

 #ifdef CONFIG_PCI_P2PDMA
+struct p2pdma_provider *pcim_p2pdma_provider(struct pci_dev *pdev, int bar);
 int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 		u64 offset);
 int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
@@ -45,6 +46,11 @@ int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
 ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
 			       bool use_p2pdma);
 #else /* CONFIG_PCI_P2PDMA */
+static inline struct p2pdma_provider *pcim_p2pdma_provider(struct pci_dev *pdev,
+							   int bar)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
 static inline int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar,
 		size_t size, u64 offset)
 {
-- 
2.51.0

From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 04/10] PCI/P2PDMA: Export pci_p2pdma_map_type() function
Date: Thu, 25 Sep 2025 16:14:32 +0300

From: Leon Romanovsky

Export the pci_p2pdma_map_type() function to allow external modules
and subsystems to determine the appropriate mapping type for P2PDMA
transfers between a provider and target device.

The function determines whether peer-to-peer DMA transfers can be done
directly through PCI switches (PCI_P2PDMA_MAP_BUS_ADDR) or must go
through the host bridge (PCI_P2PDMA_MAP_THRU_HOST_BRIDGE), or if the
transfer is not supported at all.

This export enables subsystems like VFIO to properly handle P2PDMA
operations by querying the mapping type before attempting transfers,
ensuring correct DMA address programming and error handling.
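A hedged sketch of how an importer might combine the exported helper
with pci_p2pdma_bus_addr_map() from patch 02; the surrounding function
is hypothetical, and dma_map_resource() stands in for whatever "normal
mapping" the calling subsystem actually uses in the host-bridge case:

	#include <linux/pci-p2pdma.h>
	#include <linux/dma-mapping.h>

	/* Hypothetical importer: program one DMA address for an MMIO
	 * physical address exposed by @provider. */
	static int my_map_one(struct p2pdma_provider *provider,
			      struct device *dma_dev, phys_addr_t paddr,
			      dma_addr_t *out)
	{
		switch (pci_p2pdma_map_type(provider, dma_dev)) {
		case PCI_P2PDMA_MAP_BUS_ADDR:
			/* Traffic stays below the host bridge: bus address */
			*out = pci_p2pdma_bus_addr_map(provider, paddr);
			return 0;
		case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
			/* Map like ordinary MMIO through IOMMU/dma-direct */
			*out = dma_map_resource(dma_dev, paddr, PAGE_SIZE,
						DMA_BIDIRECTIONAL, 0);
			return dma_mapping_error(dma_dev, *out);
		default:
			return -EOPNOTSUPP;
		}
	}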
Signed-off-by: Leon Romanovsky
---
 drivers/pci/p2pdma.c       | 15 ++++++-
 include/linux/pci-p2pdma.h | 85 +++++++++++++++++++++-----------------
 2 files changed, 59 insertions(+), 41 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 76496a5ab82e0..3ebe2e8bb335e 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -1044,8 +1044,18 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
 }
 EXPORT_SYMBOL_GPL(pci_p2pmem_publish);

-static enum pci_p2pdma_map_type
-pci_p2pdma_map_type(struct p2pdma_provider *provider, struct device *dev)
+/**
+ * pci_p2pdma_map_type - Determine the mapping type for P2PDMA transfers
+ * @provider: P2PDMA provider structure
+ * @dev: Target device for the transfer
+ *
+ * Determines how peer-to-peer DMA transfers should be mapped between
+ * the provider and the target device. The mapping type indicates whether
+ * the transfer can be done directly through PCI switches or must go
+ * through the host bridge.
+ */
+enum pci_p2pdma_map_type pci_p2pdma_map_type(struct p2pdma_provider *provider,
+					     struct device *dev)
 {
 	enum pci_p2pdma_map_type type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
 	struct pci_dev *pdev = to_pci_dev(provider->owner);
@@ -1074,6 +1084,7 @@ pci_p2pdma_map_type(struct p2pdma_provider *provider, struct device *dev)

 	return type;
 }
+EXPORT_SYMBOL_GPL(pci_p2pdma_map_type);

 void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
 		struct device *dev, struct page *page)
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index 476650ae8d4d8..55cfbfcba8b39 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -26,6 +26,45 @@ struct p2pdma_provider {
 	u64 bus_offset;
 };

+enum pci_p2pdma_map_type {
+	/*
+	 * PCI_P2PDMA_MAP_UNKNOWN: Used internally as an initial state before
+	 * the mapping type has been calculated. Exported routines for the API
+	 * will never return this value.
+	 */
+	PCI_P2PDMA_MAP_UNKNOWN = 0,
+
+	/*
+	 * Not a PCI P2PDMA transfer.
+	 */
+	PCI_P2PDMA_MAP_NONE,
+
+	/*
+	 * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
+	 * traverse the host bridge and the host bridge is not in the
+	 * allowlist. DMA Mapping routines should return an error when
+	 * this is returned.
+	 */
+	PCI_P2PDMA_MAP_NOT_SUPPORTED,
+
+	/*
+	 * PCI_P2PDMA_MAP_BUS_ADDR: Indicates that two devices can talk to
+	 * each other directly through a PCI switch and the transaction will
+	 * not traverse the host bridge. Such a mapping should program
+	 * the DMA engine with PCI bus addresses.
+	 */
+	PCI_P2PDMA_MAP_BUS_ADDR,
+
+	/*
+	 * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk
+	 * to each other, but the transaction traverses a host bridge on the
+	 * allowlist. In this case, a normal mapping either with CPU physical
+	 * addresses (in the case of dma-direct) or IOVA addresses (in the
+	 * case of IOMMUs) should be used to program the DMA engine.
+	 */
+	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
+};
+
 #ifdef CONFIG_PCI_P2PDMA
 struct p2pdma_provider *pcim_p2pdma_provider(struct pci_dev *pdev, int bar);
 int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
@@ -45,6 +84,8 @@ int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
 			    bool *use_p2pdma);
 ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
 			       bool use_p2pdma);
+enum pci_p2pdma_map_type pci_p2pdma_map_type(struct p2pdma_provider *provider,
+					     struct device *dev);
 #else /* CONFIG_PCI_P2PDMA */
 static inline struct p2pdma_provider *pcim_p2pdma_provider(struct pci_dev *pdev,
 							   int bar)
@@ -106,6 +147,11 @@ static inline ssize_t pci_p2pdma_enable_show(char *page,
 {
 	return sprintf(page, "none\n");
 }
+static inline enum pci_p2pdma_map_type
+pci_p2pdma_map_type(struct p2pdma_provider *provider, struct device *dev)
+{
+	return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+}
 #endif /* CONFIG_PCI_P2PDMA */


@@ -120,45 +166,6 @@ static inline struct pci_dev *pci_p2pmem_find(struct device *client)
 	return pci_p2pmem_find_many(&client, 1);
 }

-enum pci_p2pdma_map_type {
-	/*
-	 * PCI_P2PDMA_MAP_UNKNOWN: Used internally as an initial state before
-	 * the mapping type has been calculated. Exported routines for the API
-	 * will never return this value.
-	 */
-	PCI_P2PDMA_MAP_UNKNOWN = 0,
-
-	/*
-	 * Not a PCI P2PDMA transfer.
-	 */
-	PCI_P2PDMA_MAP_NONE,
-
-	/*
-	 * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
-	 * traverse the host bridge and the host bridge is not in the
-	 * allowlist. DMA Mapping routines should return an error when
-	 * this is returned.
-	 */
-	PCI_P2PDMA_MAP_NOT_SUPPORTED,
-
-	/*
-	 * PCI_P2PDMA_MAP_BUS_ADDR: Indicates that two devices can talk to
-	 * each other directly through a PCI switch and the transaction will
-	 * not traverse the host bridge. Such a mapping should program
-	 * the DMA engine with PCI bus addresses.
-	 */
-	PCI_P2PDMA_MAP_BUS_ADDR,
-
-	/*
-	 * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk
-	 * to each other, but the transaction traverses a host bridge on the
-	 * allowlist. In this case, a normal mapping either with CPU physical
-	 * addresses (in the case of dma-direct) or IOVA addresses (in the
-	 * case of IOMMUs) should be used to program the DMA engine.
-	 */
-	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
-};
-
 struct pci_p2pdma_map_state {
 	struct p2pdma_provider *mem;
 	enum pci_p2pdma_map_type map;
-- 
2.51.0

From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 05/10] types: move phys_vec definition to common header
Date: Thu, 25 Sep 2025 16:14:33 +0300

From: Leon Romanovsky

Move the struct phys_vec definition from block/blk-mq-dma.c to
include/linux/types.h to make it available for use across the kernel.

The phys_vec structure represents a physical address range with a
length, which is used by the new physical address-based DMA mapping
API. This structure is already used by the block layer and will be
needed by upcoming VFIO patches for dma-buf operations. A short sketch
of a consumer follows below.
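For orientation, a minimal sketch of a consumer of the shared
definition (the helper itself is hypothetical; struct phys_vec is
exactly as moved by this patch):

	#include <linux/types.h>
	#include <linux/minmax.h>

	/* Hypothetical helper: split a physical range into page-sized
	 * phys_vec entries, returning how many were filled. */
	static u32 fill_phys_vecs(phys_addr_t start, u32 len,
				  struct phys_vec *vecs, u32 max_vecs)
	{
		u32 n = 0;

		while (len && n < max_vecs) {
			u32 chunk = min_t(u32, len, PAGE_SIZE);

			vecs[n].paddr = start;
			vecs[n].len = chunk;
			start += chunk;
			len -= chunk;
			n++;
		}
		return n;
	}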
Moving this definition to types.h provides a centralized location for
this common data structure and eliminates code duplication across
subsystems that need to work with physical address ranges.

Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c    | 5 -----
 include/linux/types.h | 5 +++++
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 430e51ec494a6..8d2646ab27953 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -5,11 +5,6 @@
 #include
 #include "blk.h"

-struct phys_vec {
-	phys_addr_t paddr;
-	u32 len;
-};
-
 static bool blk_map_iter_next(struct request *req, struct req_iterator *iter,
 			      struct phys_vec *vec)
 {
diff --git a/include/linux/types.h b/include/linux/types.h
index 6dfdb8e8e4c35..2bc56681b2e62 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -170,6 +170,11 @@ typedef u64 phys_addr_t;
 typedef u32 phys_addr_t;
 #endif

+struct phys_vec {
+	phys_addr_t paddr;
+	u32 len;
+};
+
 typedef phys_addr_t resource_size_t;

 /*
-- 
2.51.0

From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 06/10] vfio: Export vfio device get and put registration helpers
Date: Thu, 25 Sep 2025 16:14:34 +0300
Message-ID: <4b6d97b645d7cfeee1f7435251ac87ec37edd681.1758804980.git.leon@kernel.org>

From: Vivek Kasireddy

These helpers are useful for managing additional references taken
on the device from other associated VFIO modules.

Original-patch-by: Jason Gunthorpe
Signed-off-by: Vivek Kasireddy
Signed-off-by: Leon Romanovsky
---
 drivers/vfio/vfio_main.c | 2 ++
 include/linux/vfio.h     | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 5046cae052224..2f0dcec67ffe3 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -171,11 +171,13 @@ void vfio_device_put_registration(struct vfio_device *device)
 	if (refcount_dec_and_test(&device->refcount))
 		complete(&device->comp);
 }
+EXPORT_SYMBOL_GPL(vfio_device_put_registration);

 bool vfio_device_try_get_registration(struct vfio_device *device)
 {
 	return refcount_inc_not_zero(&device->refcount);
 }
+EXPORT_SYMBOL_GPL(vfio_device_try_get_registration);

 /*
  * VFIO driver API
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index eb563f538dee5..217ba4ef17522 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -297,6 +297,8 @@ static inline void vfio_put_device(struct vfio_device *device)
 int vfio_register_group_dev(struct vfio_device *device);
 int vfio_register_emulated_iommu_dev(struct vfio_device *device);
 void vfio_unregister_group_dev(struct vfio_device *device);
+bool vfio_device_try_get_registration(struct vfio_device *device);
+void vfio_device_put_registration(struct vfio_device *device);

 int vfio_assign_device_set(struct vfio_device *device, void *set_id);
 unsigned int vfio_device_set_open_count(struct vfio_device_set *dev_set);
-- 
2.51.0

From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 07/10] vfio/pci: Add dma-buf export config for MMIO regions
Date: Thu, 25 Sep 2025 16:14:35 +0300

From: Leon Romanovsky

Add a new kernel config which indicates support for dma-buf export of
MMIO regions; the implementation is provided in the following patches.

Signed-off-by: Leon Romanovsky
---
 drivers/vfio/pci/Kconfig | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 2b0172f546652..55ae888bf26ae 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -55,6 +55,26 @@ config VFIO_PCI_ZDEV_KVM

 	  To enable s390x KVM vfio-pci extensions, say Y.

+config VFIO_PCI_DMABUF
+	bool "VFIO PCI extensions for DMA-BUF"
+	depends on VFIO_PCI_CORE
+	depends on PCI_P2PDMA && DMA_SHARED_BUFFER
+	default y
+	help
+	  Enable support for VFIO PCI extensions that allow exporting
+	  device MMIO regions as DMA-BUFs for peer devices to access via
+	  peer-to-peer (P2P) DMA.
+
+	  This feature enables a VFIO-managed PCI device to export a portion
+	  of its MMIO BAR as a DMA-BUF file descriptor, which can be passed
+	  to other userspace drivers or kernel subsystems capable of
+	  initiating DMA to that region.
+
+	  Say Y here if you want to enable VFIO DMABUF-based MMIO export
+	  support for peer-to-peer DMA use cases.
+
+	  If unsure, say N.
+
 source "drivers/vfio/pci/mlx5/Kconfig"

 source "drivers/vfio/pci/hisilicon/Kconfig"
-- 
2.51.0

From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 08/10] vfio/pci: Enable peer-to-peer DMA transactions by default
Date: Thu, 25 Sep 2025 16:14:36 +0300
Message-ID: <79ffd43cd22763a376a3abc75fc7f9ed89ec9a9d.1758804980.git.leon@kernel.org>

From: Leon Romanovsky

Make sure that all VFIO PCI devices have peer-to-peer capabilities
enabled, so that their MMIO memory can be exported through DMABUF.

Signed-off-by: Leon Romanovsky
---
 drivers/vfio/pci/vfio_pci_core.c | 11 +++++++++++
 include/linux/vfio_pci_core.h    |  3 +++
 2 files changed, 14 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 7dcf5439dedc9..356a0e2fd3780 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -28,6 +28,9 @@
 #include
 #include
 #include
+#ifdef CONFIG_VFIO_PCI_DMABUF
+#include
+#endif
 #if IS_ENABLED(CONFIG_EEH)
 #include
 #endif
@@ -2085,6 +2088,7 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
 {
 	struct vfio_pci_core_device *vdev =
 		container_of(core_vdev, struct vfio_pci_core_device, vdev);
+	int __maybe_unused i;

 	vdev->pdev = to_pci_dev(core_vdev->dev);
 	vdev->irq_type = VFIO_PCI_NUM_IRQS;
@@ -2094,6 +2098,13 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
 	INIT_LIST_HEAD(&vdev->dummy_resources_list);
 	INIT_LIST_HEAD(&vdev->ioeventfds_list);
 	INIT_LIST_HEAD(&vdev->sriov_pfs_item);
+#ifdef CONFIG_VFIO_PCI_DMABUF
+	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
+		vdev->provider[i] = pcim_p2pdma_provider(vdev->pdev, i);
+		if (IS_ERR(vdev->provider[i]))
+			return PTR_ERR(vdev->provider[i]);
+	}
+#endif
 	init_rwsem(&vdev->memory_lock);
 	xa_init(&vdev->ctx);

diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index f541044e42a2a..2184ba65348b8 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -94,6 +94,9 @@ struct vfio_pci_core_device {
 	struct vfio_pci_core_device *sriov_pf_core_dev;
 	struct notifier_block nb;
 	struct rw_semaphore memory_lock;
+#ifdef CONFIG_VFIO_PCI_DMABUF
+	struct p2pdma_provider *provider[PCI_STD_NUM_BARS];
+#endif
 };

 /* Will be exported for vfio pci drivers usage */
-- 
2.51.0

From nobody Thu Oct 2 00:46:20 2025
From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 09/10] vfio/pci: Share the core device pointer while invoking feature functions
Date: Thu, 25 Sep 2025 16:14:37 +0300
Message-ID: <6bf5ecfd312ba0f2e71873322def45c7e42a1b1d.1758804980.git.leon@kernel.org>

From: Vivek Kasireddy

There is no need to share the main device pointer (struct vfio_device *)
with all the feature functions as they only need the core device
pointer. Therefore, extract the core device pointer once in the caller
(vfio_pci_core_ioctl_feature) and share it instead.

Signed-off-by: Vivek Kasireddy
Signed-off-by: Leon Romanovsky
---
 drivers/vfio/pci/vfio_pci_core.c | 30 +++++++++++++-----------------
 1 file changed, 13 insertions(+), 17 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 356a0e2fd3780..17bf711a929c4 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -302,11 +302,9 @@ static int vfio_pci_runtime_pm_entry(struct vfio_pci_core_device *vdev,
 	return 0;
 }
 
-static int vfio_pci_core_pm_entry(struct vfio_device *device, u32 flags,
+static int vfio_pci_core_pm_entry(struct vfio_pci_core_device *vdev, u32 flags,
 				  void __user *arg, size_t argsz)
 {
-	struct vfio_pci_core_device *vdev =
-		container_of(device, struct vfio_pci_core_device, vdev);
 	int ret;
 
 	ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_SET, 0);
@@ -323,12 +321,10 @@ static int vfio_pci_core_pm_entry(struct vfio_device *device, u32 flags,
 }
 
 static int vfio_pci_core_pm_entry_with_wakeup(
-	struct vfio_device *device, u32 flags,
+	struct vfio_pci_core_device *vdev, u32 flags,
 	struct vfio_device_low_power_entry_with_wakeup __user *arg,
 	size_t argsz)
 {
-	struct vfio_pci_core_device *vdev =
-		container_of(device, struct vfio_pci_core_device, vdev);
 	struct vfio_device_low_power_entry_with_wakeup entry;
 	struct eventfd_ctx *efdctx;
 	int ret;
@@ -379,11 +375,9 @@ static void vfio_pci_runtime_pm_exit(struct vfio_pci_core_device *vdev)
 	up_write(&vdev->memory_lock);
 }
 
-static int vfio_pci_core_pm_exit(struct vfio_device *device, u32 flags,
+static int vfio_pci_core_pm_exit(struct vfio_pci_core_device *vdev, u32 flags,
 				 void __user *arg, size_t argsz)
 {
-	struct vfio_pci_core_device *vdev =
-		container_of(device, struct vfio_pci_core_device, vdev);
 	int ret;
 
 	ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_SET, 0);
@@ -1476,11 +1470,10 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
 }
 EXPORT_SYMBOL_GPL(vfio_pci_core_ioctl);
 
-static int vfio_pci_core_feature_token(struct vfio_device *device, u32 flags,
-				       uuid_t __user *arg, size_t argsz)
+static int vfio_pci_core_feature_token(struct vfio_pci_core_device *vdev,
+				       u32 flags, uuid_t __user *arg,
+				       size_t argsz)
 {
-	struct vfio_pci_core_device *vdev =
-		container_of(device, struct vfio_pci_core_device, vdev);
 	uuid_t uuid;
 	int ret;
 
@@ -1507,16 +1500,19 @@ static int vfio_pci_core_feature_token(struct vfio_device *device, u32 flags,
 int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags,
 				void __user *arg, size_t argsz)
 {
+	struct vfio_pci_core_device *vdev =
+		container_of(device, struct vfio_pci_core_device, vdev);
+
 	switch (flags & VFIO_DEVICE_FEATURE_MASK) {
 	case VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY:
-		return vfio_pci_core_pm_entry(device, flags, arg, argsz);
+		return vfio_pci_core_pm_entry(vdev, flags, arg, argsz);
 	case VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP:
-		return vfio_pci_core_pm_entry_with_wakeup(device, flags,
+		return vfio_pci_core_pm_entry_with_wakeup(vdev, flags,
 							  arg, argsz);
 	case VFIO_DEVICE_FEATURE_LOW_POWER_EXIT:
-		return vfio_pci_core_pm_exit(device, flags, arg, argsz);
+		return vfio_pci_core_pm_exit(vdev, flags, arg, argsz);
 	case VFIO_DEVICE_FEATURE_PCI_VF_TOKEN:
-		return vfio_pci_core_feature_token(device, flags, arg, argsz);
+		return vfio_pci_core_feature_token(vdev, flags, arg, argsz);
 	default:
 		return -ENOTTY;
 	}
-- 
2.51.0
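In reduced form (hypothetical names, illustration only), the shape of the
change is that the container_of() lookup moves out of every feature handler
and into the single dispatcher:

	/* Handlers now take the core device directly, no container_of() */
	static int example_feature(struct vfio_pci_core_device *vdev, u32 flags,
				   void __user *arg, size_t argsz)
	{
		return 0;
	}

	/* The dispatcher resolves the core device once for all callees */
	static int example_dispatch(struct vfio_device *device, u32 flags,
				    void __user *arg, size_t argsz)
	{
		struct vfio_pci_core_device *vdev =
			container_of(device, struct vfio_pci_core_device, vdev);

		return example_feature(vdev, flags, arg, argsz);
	}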
From nobody Thu Oct 2 00:46:20 2025
From: Leon Romanovsky
To: Alex Williamson
Subject: [PATCH v3 10/10] vfio/pci: Add dma-buf export support for MMIO regions
Date: Thu, 25 Sep 2025 16:14:38 +0300
Message-ID: <9906a77230bc94d695f5577144af4f18363cbd49.1758804980.git.leon@kernel.org>

From: Leon Romanovsky

Add support for exporting PCI device MMIO regions through dma-buf,
enabling safe sharing of non-struct page memory with controlled
lifetime management. This allows RDMA and other subsystems to import
dma-buf FDs and build them into memory regions for PCI P2P operations.

The implementation provides a revocable attachment mechanism using
dma-buf move operations. MMIO regions are normally pinned as BARs
don't change physical addresses, but access is revoked when the VFIO
device is closed or a PCI reset is issued. This ensures kernel
self-defense against potentially hostile userspace.

Signed-off-by: Jason Gunthorpe
Signed-off-by: Vivek Kasireddy
Signed-off-by: Leon Romanovsky
---
 drivers/vfio/pci/Makefile          |   2 +
 drivers/vfio/pci/vfio_pci_config.c |  22 +-
 drivers/vfio/pci/vfio_pci_core.c   |  17 ++
 drivers/vfio/pci/vfio_pci_dmabuf.c | 394 +++++++++++++++++++++++++++++
 drivers/vfio/pci/vfio_pci_priv.h   |  23 ++
 include/linux/vfio_pci_core.h      |   1 +
 include/uapi/linux/vfio.h          |  25 ++
 7 files changed, 480 insertions(+), 4 deletions(-)
 create mode 100644 drivers/vfio/pci/vfio_pci_dmabuf.c

diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index cf00c0a7e55c8..f9155e9c5f630 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -2,7 +2,9 @@
 
 vfio-pci-core-y := vfio_pci_core.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
 vfio-pci-core-$(CONFIG_VFIO_PCI_ZDEV_KVM) += vfio_pci_zdev.o
+
 obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o
+vfio-pci-core-$(CONFIG_VFIO_PCI_DMABUF) += vfio_pci_dmabuf.o
 
 vfio-pci-y := vfio_pci.o
 vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index 8f02f236b5b4b..1f6008eabf236 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -589,10 +589,12 @@ static int vfio_basic_config_write(struct vfio_pci_core_device *vdev, int pos,
 	virt_mem = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_MEMORY);
 	new_mem = !!(new_cmd & PCI_COMMAND_MEMORY);
 
-	if (!new_mem)
+	if (!new_mem) {
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
-	else
+		vfio_pci_dma_buf_move(vdev, true);
+	} else {
 		down_write(&vdev->memory_lock);
+	}
 
 	/*
 	 * If the user is writing mem/io enable (new_mem/io) and we
@@ -627,6 +629,8 @@ static int vfio_basic_config_write(struct vfio_pci_core_device *vdev, int pos,
 	*virt_cmd &= cpu_to_le16(~mask);
 	*virt_cmd |= cpu_to_le16(new_cmd & mask);
 
+	if (__vfio_pci_memory_enabled(vdev))
+		vfio_pci_dma_buf_move(vdev, false);
 	up_write(&vdev->memory_lock);
 }
 
@@ -707,12 +711,16 @@ static int __init init_pci_cap_basic_perm(struct perm_bits *perm)
 static void vfio_lock_and_set_power_state(struct vfio_pci_core_device *vdev,
 					  pci_power_t state)
 {
-	if (state >= PCI_D3hot)
+	if (state >= PCI_D3hot) {
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
-	else
+		vfio_pci_dma_buf_move(vdev, true);
+	} else {
 		down_write(&vdev->memory_lock);
+	}
 
 	vfio_pci_set_power_state(vdev, state);
+	if (__vfio_pci_memory_enabled(vdev))
+		vfio_pci_dma_buf_move(vdev, false);
 	up_write(&vdev->memory_lock);
 }
 
@@ -900,7 +908,10 @@ static int vfio_exp_config_write(struct vfio_pci_core_device *vdev, int pos,
 
 	if (!ret && (cap & PCI_EXP_DEVCAP_FLR)) {
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
+		vfio_pci_dma_buf_move(vdev, true);
 		pci_try_reset_function(vdev->pdev);
+		if (__vfio_pci_memory_enabled(vdev))
+			vfio_pci_dma_buf_move(vdev, false);
 		up_write(&vdev->memory_lock);
 	}
 }
@@ -982,7 +993,10 @@ static int vfio_af_config_write(struct vfio_pci_core_device *vdev, int pos,
 
 	if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP)) {
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
+		vfio_pci_dma_buf_move(vdev, true);
 		pci_try_reset_function(vdev->pdev);
+		if (__vfio_pci_memory_enabled(vdev))
+			vfio_pci_dma_buf_move(vdev, false);
 		up_write(&vdev->memory_lock);
 	}
 }
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 17bf711a929c4..87831644d954b 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -289,6 +289,8 @@ static int vfio_pci_runtime_pm_entry(struct vfio_pci_core_device *vdev,
 	 * semaphore.
 	 */
 	vfio_pci_zap_and_down_write_memory_lock(vdev);
+	vfio_pci_dma_buf_move(vdev, true);
+
 	if (vdev->pm_runtime_engaged) {
 		up_write(&vdev->memory_lock);
 		return -EINVAL;
@@ -372,6 +374,8 @@ static void vfio_pci_runtime_pm_exit(struct vfio_pci_core_device *vdev)
 	 */
 	down_write(&vdev->memory_lock);
 	__vfio_pci_runtime_pm_exit(vdev);
+	if (__vfio_pci_memory_enabled(vdev))
+		vfio_pci_dma_buf_move(vdev, false);
 	up_write(&vdev->memory_lock);
 }
 
@@ -692,6 +696,8 @@ void vfio_pci_core_close_device(struct vfio_device *core_vdev)
 #endif
 	vfio_pci_core_disable(vdev);
 
+	vfio_pci_dma_buf_cleanup(vdev);
+
 	mutex_lock(&vdev->igate);
 	if (vdev->err_trigger) {
 		eventfd_ctx_put(vdev->err_trigger);
@@ -1224,7 +1230,10 @@ static int vfio_pci_ioctl_reset(struct vfio_pci_core_device *vdev,
 	 */
 	vfio_pci_set_power_state(vdev, PCI_D0);
 
+	vfio_pci_dma_buf_move(vdev, true);
 	ret = pci_try_reset_function(vdev->pdev);
+	if (__vfio_pci_memory_enabled(vdev))
+		vfio_pci_dma_buf_move(vdev, false);
 	up_write(&vdev->memory_lock);
 
 	return ret;
@@ -1513,6 +1522,8 @@ int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags,
 		return vfio_pci_core_pm_exit(vdev, flags, arg, argsz);
 	case VFIO_DEVICE_FEATURE_PCI_VF_TOKEN:
 		return vfio_pci_core_feature_token(vdev, flags, arg, argsz);
+	case VFIO_DEVICE_FEATURE_DMA_BUF:
+		return vfio_pci_core_feature_dma_buf(vdev, flags, arg, argsz);
 	default:
 		return -ENOTTY;
 	}
@@ -2100,6 +2111,7 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
 		if (IS_ERR(vdev->provider[i]))
 			return PTR_ERR(vdev->provider[i]);
 	}
+	INIT_LIST_HEAD(&vdev->dmabufs);
 #endif
 	init_rwsem(&vdev->memory_lock);
 	xa_init(&vdev->ctx);
@@ -2465,6 +2477,7 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 		break;
 	}
 
+	vfio_pci_dma_buf_move(vdev, true);
 	vfio_pci_zap_bars(vdev);
 }
 
@@ -2488,6 +2501,10 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 
 	ret = pci_reset_bus(pdev);
 
+	list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list)
+		if (__vfio_pci_memory_enabled(vdev))
+			vfio_pci_dma_buf_move(vdev, false);
+
 	vdev = list_last_entry(&dev_set->device_list,
 			       struct vfio_pci_core_device, vdev.dev_set_list);
 
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
new file mode 100644
index 0000000000000..f687011cedfd5
--- /dev/null
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -0,0 +1,394 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES.
+ */
+#include
+#include
+#include
+
+#include "vfio_pci_priv.h"
+
+MODULE_IMPORT_NS("DMA_BUF");
+
+struct vfio_pci_dma_buf {
+	struct dma_buf *dmabuf;
+	struct vfio_pci_core_device *vdev;
+	struct list_head dmabufs_elm;
+	size_t size;
+	struct phys_vec *phys_vec;
+	struct p2pdma_provider *provider;
+	u32 nr_ranges;
+	u8 revoked : 1;
+};
+
+static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
+				   struct dma_buf_attachment *attachment)
+{
+	struct vfio_pci_dma_buf *priv = dmabuf->priv;
+
+	if (!attachment->peer2peer)
+		return -EOPNOTSUPP;
+
+	if (priv->revoked)
+		return -ENODEV;
+
+	switch (pci_p2pdma_map_type(priv->provider, attachment->dev)) {
+	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+		break;
+	case PCI_P2PDMA_MAP_BUS_ADDR:
+		/*
+		 * There is no need for an IOVA at all in this flow.
+		 * We rely on attachment->priv == NULL as a marker
+		 * for this mode.
+		 */
+		return 0;
+	default:
+		return -EINVAL;
+	}
+
+	attachment->priv = kzalloc(sizeof(struct dma_iova_state), GFP_KERNEL);
+	if (!attachment->priv)
+		return -ENOMEM;
+
+	dma_iova_try_alloc(attachment->dev, attachment->priv, 0, priv->size);
+	return 0;
+}
+
+static void vfio_pci_dma_buf_detach(struct dma_buf *dmabuf,
+				    struct dma_buf_attachment *attachment)
+{
+	kfree(attachment->priv);
+}
+
+static void fill_sg_entry(struct scatterlist *sgl, unsigned int length,
+			  dma_addr_t addr)
+{
+	/*
+	 * Follow the DMABUF rules for scatterlist: the struct page can be
+	 * NULL'd for MMIO-only memory.
+	 */
+	sg_set_page(sgl, NULL, length, 0);
+	sg_dma_address(sgl) = addr;
+	sg_dma_len(sgl) = length;
+}
+
+static struct sg_table *
+vfio_pci_dma_buf_map(struct dma_buf_attachment *attachment,
+		     enum dma_data_direction dir)
+{
+	struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;
+	struct dma_iova_state *state = attachment->priv;
+	struct phys_vec *phys_vec = priv->phys_vec;
+	unsigned long attrs = DMA_ATTR_MMIO;
+	unsigned int mapped_len = 0;
+	struct scatterlist *sgl;
+	struct sg_table *sgt;
+	dma_addr_t addr;
+	int ret, i;
+
+	dma_resv_assert_held(priv->dmabuf->resv);
+
+	if (priv->revoked)
+		return ERR_PTR(-ENODEV);
+
+	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+	if (!sgt)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(sgt, 1, GFP_KERNEL | __GFP_ZERO);
+	if (ret)
+		goto err_kfree_sgt;
+
+	sgl = sgt->sgl;
+
+	for (i = 0; i < priv->nr_ranges; i++) {
+		if (!state) {
+			addr = pci_p2pdma_bus_addr_map(priv->provider,
+						       phys_vec[i].paddr);
+		} else if (dma_use_iova(state)) {
+			ret = dma_iova_link(attachment->dev, state,
+					    phys_vec[i].paddr, 0,
+					    phys_vec[i].len, dir, attrs);
+			if (ret)
+				goto err_unmap_dma;
+
+			mapped_len += phys_vec[i].len;
+		} else {
+			addr = dma_map_phys(attachment->dev, phys_vec[i].paddr,
+					    phys_vec[i].len, dir, attrs);
+			ret = dma_mapping_error(attachment->dev, addr);
+			if (ret)
+				goto err_unmap_dma;
+		}
+
+		if (!state || !dma_use_iova(state)) {
+			/*
So there is no need + * to call to sg_next() here. + */ + fill_sg_entry(sgl, phys_vec[i].len, addr); + sgl =3D sg_next(sgl); + } + } + + if (state && dma_use_iova(state)) { + WARN_ON_ONCE(mapped_len !=3D priv->size); + ret =3D dma_iova_sync(attachment->dev, state, 0, mapped_len); + if (ret) + goto err_unmap_dma; + fill_sg_entry(sgl, mapped_len, state->addr); + } + + return sgt; + +err_unmap_dma: + if (!i || !state) + ; /* Do nothing */ + else if (dma_use_iova(state)) + dma_iova_destroy(attachment->dev, state, mapped_len, dir, + attrs); + else + for_each_sgtable_dma_sg(sgt, sgl, i) + dma_unmap_phys(attachment->dev, sg_dma_address(sgl), + sg_dma_len(sgl), dir, attrs); + sg_free_table(sgt); +err_kfree_sgt: + kfree(sgt); + return ERR_PTR(ret); +} + +static void vfio_pci_dma_buf_unmap(struct dma_buf_attachment *attachment, + struct sg_table *sgt, + enum dma_data_direction dir) +{ + struct vfio_pci_dma_buf *priv =3D attachment->dmabuf->priv; + struct dma_iova_state *state =3D attachment->priv; + unsigned long attrs =3D DMA_ATTR_MMIO; + struct scatterlist *sgl; + int i; + + if (!state) + ; /* Do nothing */ + else if (dma_use_iova(state)) + dma_iova_destroy(attachment->dev, state, priv->size, dir, + attrs); + else + for_each_sgtable_dma_sg(sgt, sgl, i) + dma_unmap_phys(attachment->dev, sg_dma_address(sgl), + sg_dma_len(sgl), dir, attrs); + + sg_free_table(sgt); + kfree(sgt); +} + +static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf) +{ + struct vfio_pci_dma_buf *priv =3D dmabuf->priv; + + /* + * Either this or vfio_pci_dma_buf_cleanup() will remove from the list. + * The refcount prevents both. + */ + if (priv->vdev) { + down_write(&priv->vdev->memory_lock); + list_del_init(&priv->dmabufs_elm); + up_write(&priv->vdev->memory_lock); + vfio_device_put_registration(&priv->vdev->vdev); + } + kfree(priv->phys_vec); + kfree(priv); +} + +static const struct dma_buf_ops vfio_pci_dmabuf_ops =3D { + .attach =3D vfio_pci_dma_buf_attach, + .detach =3D vfio_pci_dma_buf_detach, + .map_dma_buf =3D vfio_pci_dma_buf_map, + .release =3D vfio_pci_dma_buf_release, + .unmap_dma_buf =3D vfio_pci_dma_buf_unmap, +}; + +static void dma_ranges_to_p2p_phys(struct vfio_pci_dma_buf *priv, + struct vfio_device_feature_dma_buf *dma_buf, + struct vfio_region_dma_range *dma_ranges) +{ + struct pci_dev *pdev =3D priv->vdev->pdev; + phys_addr_t pci_start; + int i; + + pci_start =3D pci_resource_start(pdev, dma_buf->region_index); + for (i =3D 0; i < dma_buf->nr_ranges; i++) { + priv->phys_vec[i].len =3D dma_ranges[i].length; + priv->phys_vec[i].paddr =3D pci_start + dma_ranges[i].offset; + priv->size +=3D priv->phys_vec[i].len; + } + priv->nr_ranges =3D dma_buf->nr_ranges; + priv->provider =3D priv->vdev->provider[dma_buf->region_index]; +} + +static int validate_dmabuf_input(struct vfio_pci_core_device *vdev, + struct vfio_device_feature_dma_buf *dma_buf, + struct vfio_region_dma_range *dma_ranges) +{ + struct pci_dev *pdev =3D vdev->pdev; + u32 bar =3D dma_buf->region_index; + resource_size_t bar_size; + u64 sum; + int i; + + if (dma_buf->flags) + return -EINVAL; + /* + * For PCI the region_index is the BAR number like everything else. 
+	 */
+	if (bar >= VFIO_PCI_ROM_REGION_INDEX)
+		return -ENODEV;
+
+	if (!vdev->provider[bar])
+		return -EINVAL;
+
+	bar_size = pci_resource_len(pdev, bar);
+	for (i = 0; i < dma_buf->nr_ranges; i++) {
+		u64 offset = dma_ranges[i].offset;
+		u64 len = dma_ranges[i].length;
+
+		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
+			return -EINVAL;
+
+		if (check_add_overflow(offset, len, &sum) || sum > bar_size)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
+				  struct vfio_device_feature_dma_buf __user *arg,
+				  size_t argsz)
+{
+	struct vfio_device_feature_dma_buf get_dma_buf = {};
+	struct vfio_region_dma_range *dma_ranges;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct vfio_pci_dma_buf *priv;
+	int ret;
+
+	ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_GET,
+				 sizeof(get_dma_buf));
+	if (ret != 1)
+		return ret;
+
+	if (copy_from_user(&get_dma_buf, arg, sizeof(get_dma_buf)))
+		return -EFAULT;
+
+	if (!get_dma_buf.nr_ranges)
+		return -EINVAL;
+
+	dma_ranges = memdup_array_user(&arg->dma_ranges, get_dma_buf.nr_ranges,
+				       sizeof(*dma_ranges));
+	if (IS_ERR(dma_ranges))
+		return PTR_ERR(dma_ranges);
+
+	ret = validate_dmabuf_input(vdev, &get_dma_buf, dma_ranges);
+	if (ret)
+		goto err_free_ranges;
+
+	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+	if (!priv) {
+		ret = -ENOMEM;
+		goto err_free_ranges;
+	}
+	priv->phys_vec = kcalloc(get_dma_buf.nr_ranges, sizeof(*priv->phys_vec),
+				 GFP_KERNEL);
+	if (!priv->phys_vec) {
+		ret = -ENOMEM;
+		goto err_free_priv;
+	}
+
+	priv->vdev = vdev;
+	dma_ranges_to_p2p_phys(priv, &get_dma_buf, dma_ranges);
+	kfree(dma_ranges);
+	dma_ranges = NULL;
+
+	if (!vfio_device_try_get_registration(&vdev->vdev)) {
+		ret = -ENODEV;
+		goto err_free_phys;
+	}
+
+	exp_info.ops = &vfio_pci_dmabuf_ops;
+	exp_info.size = priv->size;
+	exp_info.flags = get_dma_buf.open_flags;
+	exp_info.priv = priv;
+
+	priv->dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(priv->dmabuf)) {
+		ret = PTR_ERR(priv->dmabuf);
+		goto err_dev_put;
+	}
+
+	/* dma_buf_put() now frees priv */
+	INIT_LIST_HEAD(&priv->dmabufs_elm);
+	down_write(&vdev->memory_lock);
+	dma_resv_lock(priv->dmabuf->resv, NULL);
+	priv->revoked = !__vfio_pci_memory_enabled(vdev);
+	list_add_tail(&priv->dmabufs_elm, &vdev->dmabufs);
+	dma_resv_unlock(priv->dmabuf->resv);
+	up_write(&vdev->memory_lock);
+
+	/*
+	 * dma_buf_fd() consumes the reference; when the file closes, the
+	 * dmabuf will be released.
+	 */
+	return dma_buf_fd(priv->dmabuf, get_dma_buf.open_flags);
+
+err_dev_put:
+	vfio_device_put_registration(&vdev->vdev);
+err_free_phys:
+	kfree(priv->phys_vec);
+err_free_priv:
+	kfree(priv);
+err_free_ranges:
+	kfree(dma_ranges);
+	return ret;
+}
+
+void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
+{
+	struct vfio_pci_dma_buf *priv;
+	struct vfio_pci_dma_buf *tmp;
+
+	lockdep_assert_held_write(&vdev->memory_lock);
+
+	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
+		if (!get_file_active(&priv->dmabuf->file))
+			continue;
+
+		if (priv->revoked != revoked) {
+			dma_resv_lock(priv->dmabuf->resv, NULL);
+			priv->revoked = revoked;
+			dma_buf_move_notify(priv->dmabuf);
+			dma_resv_unlock(priv->dmabuf->resv);
+		}
+		dma_buf_put(priv->dmabuf);
+	}
+}
+
+void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
+{
+	struct vfio_pci_dma_buf *priv;
+	struct vfio_pci_dma_buf *tmp;
+
+	down_write(&vdev->memory_lock);
+	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
+		if (!get_file_active(&priv->dmabuf->file))
+			continue;
+
+		dma_resv_lock(priv->dmabuf->resv, NULL);
+		list_del_init(&priv->dmabufs_elm);
+		priv->vdev = NULL;
+		priv->revoked = true;
+		dma_buf_move_notify(priv->dmabuf);
+		dma_resv_unlock(priv->dmabuf->resv);
+		vfio_device_put_registration(&vdev->vdev);
+		dma_buf_put(priv->dmabuf);
+	}
+	up_write(&vdev->memory_lock);
+}
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index a9972eacb2936..28a405f8b97c9 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -107,4 +107,27 @@ static inline bool vfio_pci_is_vga(struct pci_dev *pdev)
 	return (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
 }
 
+#ifdef CONFIG_VFIO_PCI_DMABUF
+int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
+				  struct vfio_device_feature_dma_buf __user *arg,
+				  size_t argsz);
+void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev);
+void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked);
+#else
+static inline int
+vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
+			      struct vfio_device_feature_dma_buf __user *arg,
+			      size_t argsz)
+{
+	return -ENOTTY;
+}
+static inline void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
+{
+}
+static inline void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev,
+					 bool revoked)
+{
+}
+#endif
+
 #endif
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index 2184ba65348b8..2d612c14d3cb3 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -96,6 +96,7 @@ struct vfio_pci_core_device {
 	struct rw_semaphore	memory_lock;
 #ifdef CONFIG_VFIO_PCI_DMABUF
 	struct p2pdma_provider *provider[PCI_STD_NUM_BARS];
+	struct list_head	dmabufs;
 #endif
 };
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 75100bf009baf..63214467c875a 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1478,6 +1478,31 @@ struct vfio_device_feature_bus_master {
 };
 #define VFIO_DEVICE_FEATURE_BUS_MASTER 10
 
+/**
+ * Upon VFIO_DEVICE_FEATURE_GET, create a dma_buf fd for the
+ * region selected.
+ *
+ * open_flags are the typical flags passed to open(2), e.g. O_RDWR,
+ * O_CLOEXEC, etc. offset/length specify a slice of the region to create the
+ * dmabuf from. nr_ranges is the total number of (P2P DMA) ranges that
+ * comprise the dmabuf.
+ *
+ * Return: The fd number on success; on failure, -1 is returned and errno
+ * is set.
+ */
+#define VFIO_DEVICE_FEATURE_DMA_BUF 11
+
+struct vfio_region_dma_range {
+	__u64 offset;
+	__u64 length;
+};
+
+struct vfio_device_feature_dma_buf {
+	__u32 region_index;
+	__u32 open_flags;
+	__u32 flags;
+	__u32 nr_ranges;
+	struct vfio_region_dma_range dma_ranges[];
+};
+
 /* -------- API for Type1 VFIO IOMMU -------- */
 
 /**
-- 
2.51.0
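To exercise the new feature from userspace, a minimal sketch (assumptions:
device_fd is an already-open VFIO device fd, BAR 0 exists and is at least
2 MiB, and a <linux/vfio.h> with the structures above is installed;
get_bar0_dmabuf() is a hypothetical helper, not part of this series):

	#include <fcntl.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	static int get_bar0_dmabuf(int device_fd)
	{
		size_t sz = sizeof(struct vfio_device_feature) +
			    sizeof(struct vfio_device_feature_dma_buf) +
			    sizeof(struct vfio_region_dma_range);
		struct vfio_device_feature *feat = calloc(1, sz);
		struct vfio_device_feature_dma_buf *get;
		struct vfio_region_dma_range *range;
		int dmabuf_fd;

		if (!feat)
			return -1;

		feat->argsz = sz;
		feat->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_DMA_BUF;

		get = (struct vfio_device_feature_dma_buf *)feat->data;
		get->region_index = 0;			/* BAR 0 */
		get->open_flags = O_CLOEXEC;
		get->nr_ranges = 1;

		range = &get->dma_ranges[0];
		range->offset = 0;			/* page-aligned, per validate_dmabuf_input() */
		range->length = 2 * 1024 * 1024;	/* one 2 MiB slice of the BAR */

		/* On success the ioctl returns the new dmabuf fd directly */
		dmabuf_fd = ioctl(device_fd, VFIO_DEVICE_FEATURE, feat);
		free(feat);
		return dmabuf_fd;
	}

The returned fd can then be handed to any dma-buf importer (an RDMA driver,
for example); a revoke via vfio_pci_dma_buf_move() surfaces to the importer
as a move_notify event, after which its mappings must be re-established.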