From: Leon Romanovsky
To: Sumit Semwal, Christian König, Alex Deucher, David Airlie,
    Simona Vetter, Gerd Hoffmann, Dmitry Osipenko, Gurchetan Singh,
    Chia-I Wu, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    Lucas De Marchi, Thomas Hellström, Rodrigo Vivi, Jason Gunthorpe,
    Leon Romanovsky, Kevin Tian, Joerg Roedel, Will Deacon,
    Robin Murphy, Felix Kuehling, Alex Williamson, Ankit Agrawal,
    Vivek Kasireddy
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org,
    amd-gfx@lists.freedesktop.org, virtualization@lists.linux.dev,
    intel-xe@lists.freedesktop.org, linux-rdma@vger.kernel.org,
    iommu@lists.linux.dev, kvm@vger.kernel.org
Subject: [PATCH v5 6/8] dma-buf: Add dma_buf_attach_revocable()
Date: Sat, 24 Jan 2026 21:14:18 +0200
Message-ID: <20260124-dmabuf-revoke-v5-6-f98fca917e96@nvidia.com>
In-Reply-To: <20260124-dmabuf-revoke-v5-0-f98fca917e96@nvidia.com>
References: <20260124-dmabuf-revoke-v5-0-f98fca917e96@nvidia.com>

From: Leon Romanovsky

Some exporters need a flow to synchronously revoke access to the DMA-buf
by importers.
Once revoke is completed the importer is not permitted to touch the
memory, otherwise it may get IOMMU faults, AERs, or worse.

DMA-buf today defines a revoke flow, for both pinned and dynamic
importers, which is broadly:

	dma_resv_lock(dmabuf->resv, NULL);
	// Prevent new mappings from being established
	priv->revoked = true;
	// Tell all importers to eventually unmap
	dma_buf_invalidate_mappings(dmabuf);
	// Wait for any in-progress fences on the old mapping
	dma_resv_wait_timeout(dmabuf->resv, DMA_RESV_USAGE_BOOKKEEP, false,
			      MAX_SCHEDULE_TIMEOUT);
	dma_resv_unlock(dmabuf->resv);
	// Wait for all importers to complete unmap
	wait_for_completion(&priv->unmapped_comp);

This works well, and an importer that continues to access the DMA-buf
after unmapping it is very buggy. However, the final wait for unmap is
effectively unbounded. Several importers do not support
invalidate_mappings() at all and won't unmap until userspace triggers
it.

This unbounded wait is not suitable for exporters like VFIO and RDMA
that need to issue revoke as part of their normal operations.

Add dma_buf_attach_revocable() to allow exporters to determine the
difference between importers that can complete the above in bounded
time, and those that can't. It can be called inside the exporter's
attach op to reject incompatible importers.

Document these details about how dma_buf_invalidate_mappings() works
and what the required sequence is to achieve a full revocation.

Signed-off-by: Leon Romanovsky
---
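[ Illustration only, not part of the applied patch: a minimal
  exporter-side sketch of how dma_buf_attach_revocable() is intended to
  gate attach so the revoke sequence above stays bounded. The
  my_exporter structure with its revoked flag and unmapped_comp
  completion is a hypothetical stand-in for exporter private state; an
  importer-side counterpart follows after the diff. ]

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/completion.h>
#include <linux/sched.h>
#include <linux/errno.h>

struct my_exporter {
	struct dma_buf *dmabuf;
	bool revoked;
	struct completion unmapped_comp;
};

/* Exporter attach op: refuse importers that cannot unmap in bounded time */
static int my_exporter_attach(struct dma_buf *dmabuf,
			      struct dma_buf_attachment *attach)
{
	if (!dma_buf_attach_revocable(attach))
		return -EOPNOTSUPP;
	return 0;
}

/* Synchronous revoke: once this returns no importer touches the memory */
static void my_exporter_revoke(struct my_exporter *priv)
{
	dma_resv_lock(priv->dmabuf->resv, NULL);
	/* Prevent new mappings from being established */
	priv->revoked = true;
	/* Ask every importer to unmap; bounded because attach was gated */
	dma_buf_invalidate_mappings(priv->dmabuf);
	/* Wait for in-progress fences on the old mapping */
	dma_resv_wait_timeout(priv->dmabuf->resv, DMA_RESV_USAGE_BOOKKEEP,
			      false, MAX_SCHEDULE_TIMEOUT);
	dma_resv_unlock(priv->dmabuf->resv);
	/* Wait for all importers to complete their unmap */
	wait_for_completion(&priv->unmapped_comp);
}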
 drivers/dma-buf/dma-buf.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/dma-buf.h   |  9 +++------
 2 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 1629312d364a..f0e05227bda8 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1242,13 +1242,59 @@ void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
 }
 EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, "DMA_BUF");
 
+/**
+ * dma_buf_attach_revocable - check if a DMA-buf importer implements
+ * revoke semantics.
+ * @attach: the DMA-buf attachment to check
+ *
+ * Returns true if the DMA-buf importer can support the revoke sequence
+ * explained in dma_buf_invalidate_mappings() within bounded time, meaning the
+ * importer implements invalidate_mappings() and ensures that unmap is called
+ * as a result.
+ */
+bool dma_buf_attach_revocable(struct dma_buf_attachment *attach)
+{
+	return attach->importer_ops &&
+	       attach->importer_ops->invalidate_mappings;
+}
+EXPORT_SYMBOL_NS_GPL(dma_buf_attach_revocable, "DMA_BUF");
+
 /**
  * dma_buf_invalidate_mappings - notify attachments that DMA-buf is moving
  *
  * @dmabuf: [in] buffer which is moving
  *
  * Informs all attachments that they need to destroy and recreate all their
- * mappings.
+ * mappings. If the attachment is dynamic then the dynamic importer is expected
+ * to invalidate any caches it has of the mapping result and perform a new
+ * mapping request before allowing HW to do any further DMA.
+ *
+ * If the attachment is pinned then this informs the pinned importer that the
+ * underlying mapping is no longer available. Pinned importers may take this
+ * as a permanent revocation and never establish new mappings, so exporters
+ * should not trigger it lightly.
+ *
+ * Upon return importers may continue to access the DMA-buf memory. The caller
+ * must do two additional waits to ensure that the memory is no longer being
+ * accessed:
+ *  1) Until dma_resv_wait_timeout() retires fences the importer is allowed to
+ *     fully access the memory.
+ *  2) Until the importer calls unmap it is allowed to speculatively
+ *     read-and-discard the memory. It must not write to the memory.
+ *
+ * A caller wishing to use dma_buf_invalidate_mappings() to fully stop access to
+ * the DMA-buf must wait for both. Dynamic callers can often use just the first.
+ *
+ * All importers providing an invalidate_mappings() op must ensure that unmap is
+ * called within bounded time after the op.
+ *
+ * Pinned importers that do not support an invalidate_mappings() op will
+ * eventually perform unmap when they are done with the buffer, which may be an
+ * unbounded time from calling this function. dma_buf_attach_revocable() can be
+ * used to prevent such importers from attaching.
+ *
+ * Importers are free to request a new mapping in parallel as this function
+ * returns.
  */
 void dma_buf_invalidate_mappings(struct dma_buf *dmabuf)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index d5c3ce2b3aa4..84a7ec8f5359 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -468,12 +468,8 @@ struct dma_buf_attach_ops {
	 * called with this lock held as well. This makes sure that no mapping
	 * is created concurrently with an ongoing move operation.
	 *
-	 * Mappings stay valid and are not directly affected by this callback.
-	 * But the DMA-buf can now be in a different physical location, so all
-	 * mappings should be destroyed and re-created as soon as possible.
-	 *
-	 * New mappings can be created after this callback returns, and will
-	 * point to the new location of the DMA-buf.
+	 * See the kdoc for dma_buf_invalidate_mappings() for details on the
+	 * required behavior.
	 */
	void (*invalidate_mappings)(struct dma_buf_attachment *attach);
 };
@@ -601,6 +597,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
 void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *,
 			      enum dma_data_direction);
 void dma_buf_invalidate_mappings(struct dma_buf *dma_buf);
+bool dma_buf_attach_revocable(struct dma_buf_attachment *attach);
 int dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
 			     enum dma_data_direction dir);
 int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
-- 
2.52.0
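[ Illustration only: an importer-side sketch of the contract described
  in the dma_buf_invalidate_mappings() kdoc above. A revocable importer
  provides invalidate_mappings() and guarantees that unmap happens in
  bounded time without waiting for userspace. The my_importer structure
  and its unmap_work are hypothetical; INIT_WORK and the attach/map
  setup are omitted. ]

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/workqueue.h>

struct my_importer {
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	struct work_struct unmap_work;
};

static void my_importer_unmap_work(struct work_struct *work)
{
	struct my_importer *imp = container_of(work, struct my_importer,
					       unmap_work);

	/* The locked unmap variant requires the reservation lock */
	dma_resv_lock(imp->attach->dmabuf->resv, NULL);
	if (imp->sgt) {
		dma_buf_unmap_attachment(imp->attach, imp->sgt,
					 DMA_BIDIRECTIONAL);
		imp->sgt = NULL;
	}
	dma_resv_unlock(imp->attach->dmabuf->resv);
}

/* Called with dmabuf->resv held; stop issuing new DMA, unmap soon after */
static void my_importer_invalidate(struct dma_buf_attachment *attach)
{
	struct my_importer *imp = attach->importer_priv;

	schedule_work(&imp->unmap_work);
}

static const struct dma_buf_attach_ops my_importer_ops = {
	.invalidate_mappings = my_importer_invalidate,
};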