From nobody Wed Oct 1 22:33:19 2025
From: Leon Romanovsky
To: Alex Williamson
Cc: Leon Romanovsky, Jason Gunthorpe, Andrew Morton, Bjorn Helgaas,
	Christian König, dri-devel@lists.freedesktop.org,
	iommu@lists.linux.dev, Jens Axboe, Joerg Roedel, kvm@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
	linux-mm@kvack.org, linux-pci@vger.kernel.org, Logan Gunthorpe,
	Marek Szyprowski, Robin Murphy, Sumit Semwal, Vivek Kasireddy,
	Will Deacon
Subject: [PATCH v4 10/10] vfio/pci: Add dma-buf export support for MMIO regions
Date: Sun, 28 Sep 2025 17:50:20 +0300
Message-ID: <53f3ea1947919a5e657b4f83e74ca53aa45814d4.1759070796.git.leon@kernel.org>
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Leon Romanovsky

Add support for exporting PCI device MMIO regions through dma-buf,
enabling safe sharing of non-struct page memory with controlled
lifetime management. This allows RDMA and other subsystems to import
dma-buf FDs and build them into memory regions for PCI P2P operations.

The implementation provides a revocable attachment mechanism using
dma-buf move operations. MMIO regions are normally pinned, as BARs
don't change physical addresses, but access is revoked when the VFIO
device is closed or a PCI reset is issued. This ensures kernel
self-defense against potentially hostile userspace.

Signed-off-by: Jason Gunthorpe
Signed-off-by: Vivek Kasireddy
Signed-off-by: Leon Romanovsky
---
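For reference, a minimal userspace sketch of the intended usage,
assuming the VFIO device fd is already open; the BAR index, range and
helper name below are illustrative only:

	#include <fcntl.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	/* Request a dmabuf fd covering the first 4KiB of BAR 0. */
	static int get_dmabuf_fd(int device_fd)
	{
		size_t sz = sizeof(struct vfio_device_feature) +
			    sizeof(struct vfio_device_feature_dma_buf) +
			    sizeof(struct vfio_region_dma_range);
		struct vfio_device_feature *feature = calloc(1, sz);
		struct vfio_device_feature_dma_buf *get_dma_buf;
		int fd;

		if (!feature)
			return -1;

		feature->argsz = sz;
		feature->flags = VFIO_DEVICE_FEATURE_GET |
				 VFIO_DEVICE_FEATURE_DMA_BUF;

		get_dma_buf = (void *)feature->data;
		get_dma_buf->region_index = 0;		/* BAR 0 */
		get_dma_buf->open_flags = O_RDWR | O_CLOEXEC;
		get_dma_buf->nr_ranges = 1;
		get_dma_buf->dma_ranges[0].offset = 0;	/* must be page aligned */
		get_dma_buf->dma_ranges[0].length = 4096;

		/* On success the ioctl returns the new dmabuf fd. */
		fd = ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
		free(feature);
		return fd;
	}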
 drivers/vfio/pci/Makefile          |   2 +
 drivers/vfio/pci/vfio_pci_config.c |  22 +-
 drivers/vfio/pci/vfio_pci_core.c   |  17 ++
 drivers/vfio/pci/vfio_pci_dmabuf.c | 398 +++++++++++++++++++++++++++++
 drivers/vfio/pci/vfio_pci_priv.h   |  23 ++
 include/linux/vfio_pci_core.h      |   3 +
 include/uapi/linux/vfio.h          |  25 ++
 7 files changed, 486 insertions(+), 4 deletions(-)
 create mode 100644 drivers/vfio/pci/vfio_pci_dmabuf.c

diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index cf00c0a7e55c..f9155e9c5f63 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -2,7 +2,9 @@
 
 vfio-pci-core-y := vfio_pci_core.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
 vfio-pci-core-$(CONFIG_VFIO_PCI_ZDEV_KVM) += vfio_pci_zdev.o
+
 obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o
+vfio-pci-core-$(CONFIG_VFIO_PCI_DMABUF) += vfio_pci_dmabuf.o
 
 vfio-pci-y := vfio_pci.o
 vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o

diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index 8f02f236b5b4..1f6008eabf23 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -589,10 +589,12 @@ static int vfio_basic_config_write(struct vfio_pci_core_device *vdev, int pos,
 	virt_mem = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_MEMORY);
 	new_mem = !!(new_cmd & PCI_COMMAND_MEMORY);
 
-	if (!new_mem)
+	if (!new_mem) {
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
-	else
+		vfio_pci_dma_buf_move(vdev, true);
+	} else {
 		down_write(&vdev->memory_lock);
+	}
 
 	/*
 	 * If the user is writing mem/io enable (new_mem/io) and we
@@ -627,6 +629,8 @@ static int vfio_basic_config_write(struct vfio_pci_core_device *vdev, int pos,
 		*virt_cmd &= cpu_to_le16(~mask);
 		*virt_cmd |= cpu_to_le16(new_cmd & mask);
 
+		if (__vfio_pci_memory_enabled(vdev))
+			vfio_pci_dma_buf_move(vdev, false);
 		up_write(&vdev->memory_lock);
 	}
 
@@ -707,12 +711,16 @@ static int __init init_pci_cap_basic_perm(struct perm_bits *perm)
 static void vfio_lock_and_set_power_state(struct vfio_pci_core_device *vdev,
 					  pci_power_t state)
 {
-	if (state >= PCI_D3hot)
+	if (state >= PCI_D3hot) {
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
-	else
+		vfio_pci_dma_buf_move(vdev, true);
+	} else {
 		down_write(&vdev->memory_lock);
+	}
 
 	vfio_pci_set_power_state(vdev, state);
+	if (__vfio_pci_memory_enabled(vdev))
+		vfio_pci_dma_buf_move(vdev, false);
 	up_write(&vdev->memory_lock);
 }
 
@@ -900,7 +908,10 @@ static int vfio_exp_config_write(struct vfio_pci_core_device *vdev, int pos,
 
 		if (!ret && (cap & PCI_EXP_DEVCAP_FLR)) {
 			vfio_pci_zap_and_down_write_memory_lock(vdev);
+			vfio_pci_dma_buf_move(vdev, true);
 			pci_try_reset_function(vdev->pdev);
+			if (__vfio_pci_memory_enabled(vdev))
+				vfio_pci_dma_buf_move(vdev, false);
 			up_write(&vdev->memory_lock);
 		}
 	}
@@ -982,7 +993,10 @@ static int vfio_af_config_write(struct vfio_pci_core_device *vdev, int pos,
 
 		if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP)) {
 			vfio_pci_zap_and_down_write_memory_lock(vdev);
+			vfio_pci_dma_buf_move(vdev, true);
 			pci_try_reset_function(vdev->pdev);
+			if (__vfio_pci_memory_enabled(vdev))
+				vfio_pci_dma_buf_move(vdev, false);
 			up_write(&vdev->memory_lock);
 		}
 	}
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 0c39368280d7..aa88c42db69b 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -289,6 +289,8 @@ static int vfio_pci_runtime_pm_entry(struct vfio_pci_core_device *vdev,
 	 * semaphore.
 	 */
 	vfio_pci_zap_and_down_write_memory_lock(vdev);
+	vfio_pci_dma_buf_move(vdev, true);
+
 	if (vdev->pm_runtime_engaged) {
 		up_write(&vdev->memory_lock);
 		return -EINVAL;
@@ -372,6 +374,8 @@ static void vfio_pci_runtime_pm_exit(struct vfio_pci_core_device *vdev)
 	 */
 	down_write(&vdev->memory_lock);
 	__vfio_pci_runtime_pm_exit(vdev);
+	if (__vfio_pci_memory_enabled(vdev))
+		vfio_pci_dma_buf_move(vdev, false);
 	up_write(&vdev->memory_lock);
 }
 
@@ -692,6 +696,8 @@ void vfio_pci_core_close_device(struct vfio_device *core_vdev)
 #endif
 	vfio_pci_core_disable(vdev);
 
+	vfio_pci_dma_buf_cleanup(vdev);
+
 	mutex_lock(&vdev->igate);
 	if (vdev->err_trigger) {
 		eventfd_ctx_put(vdev->err_trigger);
@@ -1224,7 +1230,10 @@ static int vfio_pci_ioctl_reset(struct vfio_pci_core_device *vdev,
 	 */
 	vfio_pci_set_power_state(vdev, PCI_D0);
 
+	vfio_pci_dma_buf_move(vdev, true);
 	ret = pci_try_reset_function(vdev->pdev);
+	if (__vfio_pci_memory_enabled(vdev))
+		vfio_pci_dma_buf_move(vdev, false);
 	up_write(&vdev->memory_lock);
 
 	return ret;
@@ -1513,6 +1522,8 @@ int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags,
 		return vfio_pci_core_pm_exit(vdev, flags, arg, argsz);
 	case VFIO_DEVICE_FEATURE_PCI_VF_TOKEN:
 		return vfio_pci_core_feature_token(vdev, flags, arg, argsz);
+	case VFIO_DEVICE_FEATURE_DMA_BUF:
+		return vfio_pci_core_feature_dma_buf(vdev, flags, arg, argsz);
 	default:
 		return -ENOTTY;
 	}
@@ -2098,6 +2109,7 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
 	ret = pcim_p2pdma_init(vdev->pdev);
 	if (ret)
 		return ret;
+	INIT_LIST_HEAD(&vdev->dmabufs);
 #endif
 	init_rwsem(&vdev->memory_lock);
 	xa_init(&vdev->ctx);
@@ -2463,6 +2475,7 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 			break;
 		}
 
+		vfio_pci_dma_buf_move(vdev, true);
 		vfio_pci_zap_bars(vdev);
 	}
 
@@ -2486,6 +2499,10 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 
 	ret = pci_reset_bus(pdev);
 
+	list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list)
+		if (__vfio_pci_memory_enabled(vdev))
+			vfio_pci_dma_buf_move(vdev, false);
+
 	vdev = list_last_entry(&dev_set->device_list,
 			       struct vfio_pci_core_device, vdev.dev_set_list);
 
diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
new file mode 100644
index 000000000000..838619f812aa
--- /dev/null
+++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
@@ -0,0 +1,398 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES.
+ */
+#include <linux/dma-buf.h>
+#include <linux/dma-resv.h>
+#include <linux/pci-p2pdma.h>
+
+#include "vfio_pci_priv.h"
+
+MODULE_IMPORT_NS("DMA_BUF");
+
+struct vfio_pci_dma_buf {
+	struct dma_buf *dmabuf;
+	struct vfio_pci_core_device *vdev;
+	struct list_head dmabufs_elm;
+	size_t size;
+	struct phys_vec *phys_vec;
+	struct p2pdma_provider *provider;
+	u32 nr_ranges;
+	u8 revoked : 1;
+};
+
+static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
+				   struct dma_buf_attachment *attachment)
+{
+	struct vfio_pci_dma_buf *priv = dmabuf->priv;
+
+	if (!attachment->peer2peer)
+		return -EOPNOTSUPP;
+
+	if (priv->revoked)
+		return -ENODEV;
+
+	switch (pci_p2pdma_map_type(priv->provider, attachment->dev)) {
+	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+		break;
+	case PCI_P2PDMA_MAP_BUS_ADDR:
+		/*
+		 * There is no need for an IOVA at all in this flow.
+		 * We rely on attachment->priv == NULL as a marker
+		 * for this mode.
+		 */
+		return 0;
+	default:
+		return -EINVAL;
+	}
+
+	attachment->priv = kzalloc(sizeof(struct dma_iova_state), GFP_KERNEL);
+	if (!attachment->priv)
+		return -ENOMEM;
+
+	dma_iova_try_alloc(attachment->dev, attachment->priv, 0, priv->size);
+	return 0;
+}
+
+static void vfio_pci_dma_buf_detach(struct dma_buf *dmabuf,
+				    struct dma_buf_attachment *attachment)
+{
+	kfree(attachment->priv);
+}
+
+static void fill_sg_entry(struct scatterlist *sgl, unsigned int length,
+			  dma_addr_t addr)
+{
+	/*
+	 * Follow the DMABUF rules for scatterlist: the struct page can be
+	 * NULL'd for MMIO-only memory.
+	 */
+	sg_set_page(sgl, NULL, length, 0);
+	sg_dma_address(sgl) = addr;
+	sg_dma_len(sgl) = length;
+}
+
+static struct sg_table *
+vfio_pci_dma_buf_map(struct dma_buf_attachment *attachment,
+		     enum dma_data_direction dir)
+{
+	struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;
+	struct dma_iova_state *state = attachment->priv;
+	struct phys_vec *phys_vec = priv->phys_vec;
+	unsigned long attrs = DMA_ATTR_MMIO;
+	unsigned int mapped_len = 0;
+	struct scatterlist *sgl;
+	struct sg_table *sgt;
+	dma_addr_t addr;
+	int ret, i;
+
+	dma_resv_assert_held(priv->dmabuf->resv);
+
+	if (priv->revoked)
+		return ERR_PTR(-ENODEV);
+
+	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+	if (!sgt)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(sgt, (state && dma_use_iova(state)) ?
+			     1 : priv->nr_ranges, GFP_KERNEL | __GFP_ZERO);
+	if (ret)
+		goto err_kfree_sgt;
+
+	sgl = sgt->sgl;
+
+	for (i = 0; i < priv->nr_ranges; i++) {
+		if (!state) {
+			addr = pci_p2pdma_bus_addr_map(priv->provider,
+						       phys_vec[i].paddr);
+		} else if (dma_use_iova(state)) {
+			ret = dma_iova_link(attachment->dev, state,
+					    phys_vec[i].paddr, 0,
+					    phys_vec[i].len, dir, attrs);
+			if (ret)
+				goto err_unmap_dma;
+
+			mapped_len += phys_vec[i].len;
+		} else {
+			addr = dma_map_phys(attachment->dev, phys_vec[i].paddr,
+					    phys_vec[i].len, dir, attrs);
+			ret = dma_mapping_error(attachment->dev, addr);
+			if (ret)
+				goto err_unmap_dma;
+		}
+
+		if (!state || !dma_use_iova(state)) {
+			/*
+			 * In the IOVA case there is only one SG entry, which
+			 * spans the whole IOVA address space, so there is no
+			 * need to call sg_next() there.
+			 */
+			fill_sg_entry(sgl, phys_vec[i].len, addr);
+			sgl = sg_next(sgl);
+		}
+	}
+
+	if (state && dma_use_iova(state)) {
+		WARN_ON_ONCE(mapped_len != priv->size);
+		ret = dma_iova_sync(attachment->dev, state, 0, mapped_len);
+		if (ret)
+			goto err_unmap_dma;
+		fill_sg_entry(sgl, mapped_len, state->addr);
+	}
+
+	return sgt;
+
+err_unmap_dma:
+	if (!i || !state)
+		; /* Do nothing */
+	else if (dma_use_iova(state))
+		dma_iova_destroy(attachment->dev, state, mapped_len, dir,
+				 attrs);
+	else
+		for_each_sgtable_dma_sg(sgt, sgl, i)
+			dma_unmap_phys(attachment->dev, sg_dma_address(sgl),
+				       sg_dma_len(sgl), dir, attrs);
+	sg_free_table(sgt);
+err_kfree_sgt:
+	kfree(sgt);
+	return ERR_PTR(ret);
+}
+
+static void vfio_pci_dma_buf_unmap(struct dma_buf_attachment *attachment,
+				   struct sg_table *sgt,
+				   enum dma_data_direction dir)
+{
+	struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;
+	struct dma_iova_state *state = attachment->priv;
+	unsigned long attrs = DMA_ATTR_MMIO;
+	struct scatterlist *sgl;
+	int i;
+
+	if (!state)
+		; /* Do nothing */
+	else if (dma_use_iova(state))
+		dma_iova_destroy(attachment->dev, state, priv->size, dir,
+				 attrs);
+	else
+		for_each_sgtable_dma_sg(sgt, sgl, i)
+			dma_unmap_phys(attachment->dev, sg_dma_address(sgl),
+				       sg_dma_len(sgl), dir, attrs);
+
+	sg_free_table(sgt);
+	kfree(sgt);
+}
+
+static void vfio_pci_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct vfio_pci_dma_buf *priv = dmabuf->priv;
+
+	/*
+	 * Either this or vfio_pci_dma_buf_cleanup() will remove the dmabuf
+	 * from the list. The refcount prevents both.
+	 */
+	if (priv->vdev) {
+		down_write(&priv->vdev->memory_lock);
+		list_del_init(&priv->dmabufs_elm);
+		up_write(&priv->vdev->memory_lock);
+		vfio_device_put_registration(&priv->vdev->vdev);
+	}
+	kfree(priv->phys_vec);
+	kfree(priv);
+}
+
+static const struct dma_buf_ops vfio_pci_dmabuf_ops = {
+	.attach = vfio_pci_dma_buf_attach,
+	.detach = vfio_pci_dma_buf_detach,
+	.map_dma_buf = vfio_pci_dma_buf_map,
+	.release = vfio_pci_dma_buf_release,
+	.unmap_dma_buf = vfio_pci_dma_buf_unmap,
+};
+
+static void dma_ranges_to_p2p_phys(struct vfio_pci_dma_buf *priv,
+				   struct vfio_device_feature_dma_buf *dma_buf,
+				   struct vfio_region_dma_range *dma_ranges,
+				   struct p2pdma_provider *provider)
+{
+	struct pci_dev *pdev = priv->vdev->pdev;
+	phys_addr_t pci_start;
+	int i;
+
+	pci_start = pci_resource_start(pdev, dma_buf->region_index);
+	for (i = 0; i < dma_buf->nr_ranges; i++) {
+		priv->phys_vec[i].len = dma_ranges[i].length;
+		priv->phys_vec[i].paddr = pci_start + dma_ranges[i].offset;
+		priv->size += priv->phys_vec[i].len;
+	}
+	priv->nr_ranges = dma_buf->nr_ranges;
+	priv->provider = provider;
+}
+
+static int validate_dmabuf_input(struct vfio_pci_core_device *vdev,
+				 struct vfio_device_feature_dma_buf *dma_buf,
+				 struct vfio_region_dma_range *dma_ranges,
+				 struct p2pdma_provider **provider)
+{
+	struct pci_dev *pdev = vdev->pdev;
+	u32 bar = dma_buf->region_index;
+	resource_size_t bar_size;
+	u64 sum;
+	int i;
+
+	if (dma_buf->flags)
+		return -EINVAL;
+	/*
+	 * For PCI the region_index is the BAR number like everything else.
+	 */
+	if (bar >= VFIO_PCI_ROM_REGION_INDEX)
+		return -ENODEV;
+
+	*provider = pcim_p2pdma_provider(pdev, bar);
+	if (!*provider)
+		return -EINVAL;
+
+	bar_size = pci_resource_len(pdev, bar);
+	for (i = 0; i < dma_buf->nr_ranges; i++) {
+		u64 offset = dma_ranges[i].offset;
+		u64 len = dma_ranges[i].length;
+
+		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
+			return -EINVAL;
+
+		if (check_add_overflow(offset, len, &sum) || sum > bar_size)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
+				  struct vfio_device_feature_dma_buf __user *arg,
+				  size_t argsz)
+{
+	struct vfio_device_feature_dma_buf get_dma_buf = {};
+	struct vfio_region_dma_range *dma_ranges;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct p2pdma_provider *provider;
+	struct vfio_pci_dma_buf *priv;
+	int ret;
+
+	ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_GET,
+				 sizeof(get_dma_buf));
+	if (ret != 1)
+		return ret;
+
+	if (copy_from_user(&get_dma_buf, arg, sizeof(get_dma_buf)))
+		return -EFAULT;
+
+	if (!get_dma_buf.nr_ranges)
+		return -EINVAL;
+
+	dma_ranges = memdup_array_user(&arg->dma_ranges, get_dma_buf.nr_ranges,
+				       sizeof(*dma_ranges));
+	if (IS_ERR(dma_ranges))
+		return PTR_ERR(dma_ranges);
+
+	ret = validate_dmabuf_input(vdev, &get_dma_buf, dma_ranges, &provider);
+	if (ret)
+		goto err_free_ranges;
+
+	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+	if (!priv) {
+		ret = -ENOMEM;
+		goto err_free_ranges;
+	}
+	priv->phys_vec = kcalloc(get_dma_buf.nr_ranges, sizeof(*priv->phys_vec),
+				 GFP_KERNEL);
+	if (!priv->phys_vec) {
+		ret = -ENOMEM;
+		goto err_free_priv;
+	}
+
+	priv->vdev = vdev;
+	dma_ranges_to_p2p_phys(priv, &get_dma_buf, dma_ranges, provider);
+	kfree(dma_ranges);
+	dma_ranges = NULL;
+
+	if (!vfio_device_try_get_registration(&vdev->vdev)) {
+		ret = -ENODEV;
+		goto err_free_phys;
+	}
+
+	exp_info.ops = &vfio_pci_dmabuf_ops;
+	exp_info.size = priv->size;
+	exp_info.flags = get_dma_buf.open_flags;
+	exp_info.priv = priv;
+
+	priv->dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(priv->dmabuf)) {
+		ret = PTR_ERR(priv->dmabuf);
+		goto err_dev_put;
+	}
+
+	/* dma_buf_put() now frees priv */
+	INIT_LIST_HEAD(&priv->dmabufs_elm);
+	down_write(&vdev->memory_lock);
+	dma_resv_lock(priv->dmabuf->resv, NULL);
+	priv->revoked = !__vfio_pci_memory_enabled(vdev);
+	list_add_tail(&priv->dmabufs_elm, &vdev->dmabufs);
+	dma_resv_unlock(priv->dmabuf->resv);
+	up_write(&vdev->memory_lock);
+
+	/*
+	 * dma_buf_fd() consumes the reference; when the file closes, the
+	 * dmabuf will be released.
+	 */
+	return dma_buf_fd(priv->dmabuf, get_dma_buf.open_flags);
+
+err_dev_put:
+	vfio_device_put_registration(&vdev->vdev);
+err_free_phys:
+	kfree(priv->phys_vec);
+err_free_priv:
+	kfree(priv);
+err_free_ranges:
+	kfree(dma_ranges);
+	return ret;
+}
+
+void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked)
+{
+	struct vfio_pci_dma_buf *priv;
+	struct vfio_pci_dma_buf *tmp;
+
+	lockdep_assert_held_write(&vdev->memory_lock);
+
+	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
+		if (!get_file_active(&priv->dmabuf->file))
+			continue;
+
+		if (priv->revoked != revoked) {
+			dma_resv_lock(priv->dmabuf->resv, NULL);
+			priv->revoked = revoked;
+			dma_buf_move_notify(priv->dmabuf);
+			dma_resv_unlock(priv->dmabuf->resv);
+		}
+		dma_buf_put(priv->dmabuf);
+	}
+}
+
+void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
+{
+	struct vfio_pci_dma_buf *priv;
+	struct vfio_pci_dma_buf *tmp;
+
+	down_write(&vdev->memory_lock);
+	list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
+		if (!get_file_active(&priv->dmabuf->file))
+			continue;
+
+		dma_resv_lock(priv->dmabuf->resv, NULL);
+		list_del_init(&priv->dmabufs_elm);
+		priv->vdev = NULL;
+		priv->revoked = true;
+		dma_buf_move_notify(priv->dmabuf);
+		dma_resv_unlock(priv->dmabuf->resv);
+		vfio_device_put_registration(&vdev->vdev);
+		dma_buf_put(priv->dmabuf);
+	}
+	up_write(&vdev->memory_lock);
+}

diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index a9972eacb293..28a405f8b97c 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -107,4 +107,27 @@ static inline bool vfio_pci_is_vga(struct pci_dev *pdev)
 	return (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
 }
 
+#ifdef CONFIG_VFIO_PCI_DMABUF
+int vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
+				  struct vfio_device_feature_dma_buf __user *arg,
+				  size_t argsz);
+void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev);
+void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev, bool revoked);
+#else
+static inline int
+vfio_pci_core_feature_dma_buf(struct vfio_pci_core_device *vdev, u32 flags,
+			      struct vfio_device_feature_dma_buf __user *arg,
+			      size_t argsz)
+{
+	return -ENOTTY;
+}
+static inline void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
+{
+}
+static inline void vfio_pci_dma_buf_move(struct vfio_pci_core_device *vdev,
+					 bool revoked)
+{
+}
+#endif
+
 #endif

diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index f541044e42a2..68afa18630d4 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -94,6 +94,9 @@ struct vfio_pci_core_device {
 	struct vfio_pci_core_device	*sriov_pf_core_dev;
 	struct notifier_block	nb;
 	struct rw_semaphore	memory_lock;
+#ifdef CONFIG_VFIO_PCI_DMABUF
+	struct list_head	dmabufs;
+#endif
 };
 
 /* Will be exported for vfio pci drivers usage */

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 75100bf009ba..63214467c875 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1478,6 +1478,31 @@ struct vfio_device_feature_bus_master {
 };
 #define VFIO_DEVICE_FEATURE_BUS_MASTER 10
 
+/**
+ * Upon VFIO_DEVICE_FEATURE_GET, create a dma_buf fd for the
+ * selected region ranges.
+ *
+ * open_flags are the typical flags passed to open(2), eg O_RDWR, O_CLOEXEC,
+ * etc. offset/length specify a slice of the region to create the dmabuf from.
+ * nr_ranges is the total number of (P2P DMA) ranges that comprise the dmabuf.
+ *
+ * Return: The fd number on success, -1 and errno is set on failure.
+ */
+#define VFIO_DEVICE_FEATURE_DMA_BUF 11
+
+struct vfio_region_dma_range {
+	__u64 offset;
+	__u64 length;
+};
+
+struct vfio_device_feature_dma_buf {
+	__u32 region_index;
+	__u32 open_flags;
+	__u32 flags;
+	__u32 nr_ranges;
+	struct vfio_region_dma_range dma_ranges[];
+};
+
 /* -------- API for Type1 VFIO IOMMU -------- */
 
 /**
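On the importer side, revocation arrives as a dma-buf move notification:
peers have to attach as dynamic importers with peer2peer enabled and
map/unmap under the dma-buf's reservation lock. A rough sketch of such
an importer, all my_* names hypothetical, could look like:

	#include <linux/dma-buf.h>
	#include <linux/dma-resv.h>

	struct my_importer {
		struct dma_buf_attachment *attach;
		struct sg_table *sgt;
	};

	static void my_move_notify(struct dma_buf_attachment *attach)
	{
		/*
		 * Called with the reservation lock held whenever the
		 * exporter revokes or restores access: quiesce DMA, drop
		 * the stale mapping and remap on next use.
		 */
	}

	static const struct dma_buf_attach_ops my_importer_ops = {
		.allow_peer2peer = true, /* this exporter rejects non-P2P */
		.move_notify = my_move_notify,
	};

	static int my_import(struct device *dev, struct dma_buf *dmabuf,
			     struct my_importer *imp)
	{
		imp->attach = dma_buf_dynamic_attach(dmabuf, dev,
						     &my_importer_ops, imp);
		if (IS_ERR(imp->attach))
			return PTR_ERR(imp->attach);

		dma_resv_lock(dmabuf->resv, NULL);
		imp->sgt = dma_buf_map_attachment(imp->attach,
						  DMA_BIDIRECTIONAL);
		dma_resv_unlock(dmabuf->resv);
		if (IS_ERR(imp->sgt)) {
			dma_buf_detach(dmabuf, imp->attach);
			return PTR_ERR(imp->sgt);
		}
		return 0;
	}

-- 
2.51.0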